2303.11053
Fair Healthcare Rationing to Maximize Dynamic Utilities
Allocation of scarce healthcare resources under limited logistic and infrastructural facilities is a major issue in modern society. We consider the problem of allocation of healthcare resources like vaccines to people or hospital beds to patients in an online manner. Our model takes into account the arrival of resources on a day-to-day basis, different categories of agents, the possible unavailability of agents on certain days, and the utility associated with each allotment as well as its variation over time. We propose a model where priorities for various categories are modelled in terms of utilities of agents. We give online and offline algorithms to compute an allocation that respects eligibility of agents into different categories, and incentivizes agents not to hide their eligibility for some category. The offline algorithm gives an optimal allocation while the online algorithm gives an approximation to the optimal allocation in terms of total utility. Our algorithms are efficient, and maintain fairness among different categories of agents. Our models have applications in other areas like refugee settlement and visa allocation. We evaluate the performance of our algorithms on real-life and synthetic datasets. The experimental results show that the online algorithm is fast and performs better than the given theoretical bound in terms of total utility. Moreover, the experimental results confirm that our utility-based model correctly captures the priorities of categories.
Aadityan Ganesh, Prajakta Nimbhorkar, Pratik Ghosal, Vishwa Prakash HV
2023-03-20T12:07:21Z
http://arxiv.org/abs/2303.11053v1
# Fair Healthcare Rationing to Maximize Dynamic Utilities

###### Abstract

Allocation of scarce healthcare resources under limited logistic and infrastructural facilities is a major issue in modern society. We consider the problem of allocation of healthcare resources like vaccines to people or hospital beds to patients in an online manner. Our model takes into account the arrival of resources on a day-to-day basis, different categories of agents, the possible unavailability of agents on certain days, and the utility associated with each allotment as well as its variation over time. We propose a model where priorities for various categories are modelled in terms of utilities of agents. We give online and offline algorithms to compute an allocation that respects eligibility of agents into different categories, and incentivizes agents not to hide their eligibility for some category. The offline algorithm gives an optimal allocation while the online algorithm gives an approximation to the optimal allocation in terms of total utility. Our algorithms are efficient, and maintain fairness among different categories of agents. Our models have applications in other areas like refugee settlement and visa allocation. We evaluate the performance of our algorithms on real-life and synthetic datasets. The experimental results show that the online algorithm is fast and performs better than the given theoretical bound in terms of total utility. Moreover, the experimental results confirm that our utility-based model correctly captures the priorities of categories.

## 1 Introduction

Healthcare rationing has become an important issue in the world amidst the COVID-19 pandemic. At certain times, the scarcity of medical resources like vaccines, hospital beds, ventilators, and medicines, especially in developing countries, raised the question of fair and efficient distribution of these resources. One natural approach is to define priority groups. For example, for vaccination, the main priority groups considered include health care workers, workers in other essential services, and people with vulnerable medical conditions [36, 44]. Racial equity has been another concern [9]. Having made the priority groups, it still remains a challenge to allocate resources within the groups in a transparent manner [15, 45]. A New York Times article has mentioned this as one of the hardest decisions for health organizations [16]. In light of this, it is a major problem to decide how to allocate medical resources fairly and efficiently while respecting the priority groups and other ethical concerns.

The healthcare rationing problem has recently been addressed by market designers. In [35], the problem was framed as a two-sided matching problem (see e.g. [39]). Their model has reserve categories, each with its own priority ordering of people. This ordering is based on the policy decisions made according to various ethical guidelines. It is shown in [35] that running the Deferred Acceptance algorithm of Gale and Shapley [18] has desired properties like eligibility compliance, non-wastefulness and respect to priorities. This approach of [35] has been recommended or adopted by organizations like the NASEM (National Academies of Sciences, Engineering, and Medicine) [19]. It has also been recognized in medical literature [36, 43], and is mentioned by the Washington Post [12]. The Smart Reserves algorithm of [35] gives a maximum matching satisfying the desired properties mentioned earlier.
However, it assumes a global priority ordering on people. In a follow-up work, [5] generalize this to the case where categories are allowed to have heterogeneous priorities. Their Reverse Rejecting (REV) rule, and its extension to the Smart Reverse Rejecting (S-REV) rule, are shown to satisfy goals like eligibility compliance, respect to priorities, maximum size, non-wastefulness, and strategyproofness.

However, the allocation of healthcare resources is an ongoing process. On a day-to-day basis, new units arrive in the market and they need to be allocated to people. The variation in the availability of medical resources over a period of time, and the possible unavailability of recipients on certain days, is an important factor in making allocation decisions. For example, while allocating vaccines, the unavailability of people on certain days might lead to wastage of vaccines, especially if the units are reserved for categories a priori. The previous models do not encompass this dynamic nature of resources.

Moreover, the urgency with which a resource needs to be allocated to an individual also changes over time. While priority groups or categories aim to model this by defining a priority order on people, defining a strict ordering over a large population is neither practical nor desirable. For instance, in the category of old people, it is neither clear nor desirable to define a strict order on people of the same age and same vulnerabilities. Even if categories are allowed to have ties in their ordering, the ordering still provides only an ordinal ranking. Our model provides the flexibility to have cardinal rankings by associating a _utility value_ with each individual. Thus, in our work, categories do not define an ordering over people; instead, there is a utility value associated with allocation of the resource to each person. The goal is to find an allocation with maximum total utility while respecting category quotas.

However, utilities can change over time. For instance, allotting a ventilator to a person today might be far more beneficial than allotting it tomorrow. Similarly, vaccinating the vulnerable population as early as possible is much more desirable from a social perspective than delaying it to a later day. We model this through _dynamic utilities_. Thus, we consider utilities that diminish over time. The _discounting factor_ \(0<\delta<1\) is multiplicative. Such exponential discounting is commonly used in the economics literature [38, 40]. Our utility maximization objective can thus be seen as maximization of _social welfare_. Another advantage is that the division of available units into various categories is not static. It is dynamically decided depending on the supply of units and availability of people on each day.

Our algorithms to find a maximum utility allocation are based on network flows. They adhere to the following important ethical principles, which were introduced by Aziz et al. in [5]:

1. compliance with the eligibility requirements,
2. strategyproofness (agents are not incentivized to under-report the categories they qualify for or the days on which they are available),
3. non-wastefulness (no unit is unused if it could be used by some eligible agent).

Additionally, our algorithms give approximate maximum weight matchings, where the weights denote the utility value of a matching.
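To make the discounting concrete, here is a minimal Python sketch (our own illustration, not the authors' code; the agent names and numbers are invented) that computes the total discounted utility, i.e., the social welfare, of a fixed allocation under the \(\alpha_{k}\cdot\delta^{j-1}\) utility rule used throughout the paper.

```python
# Minimal sketch (assumed data layout): an agent with priority factor
# alpha_k vaccinated on day j (1-indexed) contributes alpha_k * delta**(j-1).

def total_utility(allocation, alpha, delta):
    """allocation maps agent -> day vaccinated (1-indexed), or None if unmatched."""
    return sum(
        alpha[agent] * delta ** (day - 1)
        for agent, day in allocation.items()
        if day is not None
    )

# Two agents, delta = 0.95: vaccinating the higher-priority agent first
# yields strictly higher welfare, as the model intends.
alpha = {"a1": 0.99, "a2": 0.96}
print(total_utility({"a1": 1, "a2": 2}, alpha, 0.95))  # 1.902
print(total_utility({"a1": 2, "a2": 1}, alpha, 0.95))  # 1.9005
```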
We note that current state-of-practice algorithms such as first-come first-serve or random ordering do not guarantee non-wastefulness, as the matching returned by them may not be of maximum size. Furthermore, matchings returned by these algorithms could be of arbitrarily low total utility. Using category quotas and utility values, we provide a framework in which more vulnerable populations can be prioritized while maintaining a balance among the people vaccinated through each category on a day-to-day basis.

### Related Work

The topic of constrained matching problems has been an active area of research, and it has been studied in the context of school choice and the hospital residents problem apart from healthcare rationing [30, 5, 27, 28, 8, 21, 41]. The setting with two-sided preferences has been considered in [22, 29, 23, 17]. Fairness and welfare objectives have been covered in a comprehensive manner in [32].

Another application of the constrained matching problem is the refugee resettlement problem. Refugee resettlement is a pressing matter in the twenty-first century, where people have been forced to leave their country in order to escape war, persecution, or natural disaster. In the refugee resettlement process, refugee families are settled from asylum countries to host countries, where the families are given permanent residency. The reallocation is done keeping in mind the necessities of the families as well as the infrastructure capacities of the host countries. Delacretaz et al. [13] formalized refugee allocation as a centralized matching market design problem. Refugee allocation problems have been studied both in terms of matching problems with preferences [3, 14, 6, 25, 26, 34, 42] and without preferences [7, 13]. In the matching problem with preferences, the goal is to match the refugees to localities based on the preference of either one or both sides, while satisfying the multidimensional resettlement constraints. Delacretaz et al. considered the problem both with and without preferences. The problem without preferences can be reduced to multiple multidimensional knapsack problems [13]. The branch-and-bound method can be used to find the exact solution. Bansak et al. [7] used a combination of supervised machine learning and optimal matching to obtain a refugee matching that satisfies the constraints of refugees and localities. The dynamic version of the refugee resettlement problem [4, 1, 11] has also been considered in the literature.

### Our Models

We define our model below and then define its extension. Throughout this paper, we consider vaccines as the medical resource to be allocated. People are referred to as agents. Note that although the discussion assumes perishability of resources, it can easily be extended to non-perishable resources.

Model 1: Our model consists of a set of agents \(A\), a set of categories \(C\), and a set of days \(D\). For day \(d_{j}\in D\), there is a _daily supply_ \(s_{j}\) denoting the number of vaccine shots available for that day. For each category \(c_{i}\in C\), and each day \(d_{j}\in D\), we define a _daily quota_ \(q_{ij}\). Here \(q_{ij}\) denotes the maximum number of vaccines that can be allocated for \(c_{i}\) on day \(d_{j}\). There is a priority factor \(\alpha_{k}\) associated with each agent \(a_{k}\).
Let \(\alpha_{\max}=\max_{i}\{\alpha_{i}\mid\alpha_{i}\) is the priority factor of agent \(a_{i}\}\) and \(\alpha_{\min}=\min_{i}\{\alpha_{i}\mid\alpha_{i}\) is the priority factor of agent \(a_{i}\}\). Utilities have a _discount factor_ \(\delta\in(0,1)\) denoting the multiplicative factor with which the utilities for vaccinating agents reduce with each passing day. Thus if \(a_{k}\) is vaccinated on day \(d_{j}\), the utility obtained is \(\alpha_{k}\cdot\delta^{j-1}\). Each agent \(a_{k}\) has an _availability vector_ \(v_{k}\in\{0,1\}^{|D|}\). The \(j\)th entry of \(v_{k}\) is 1 if and only if \(a_{k}\) is available for vaccination on day \(d_{j}\).

Model 2: This model extends Model 1 in the following way. The sets \(A,C,D\) and the daily supply and daily quotas are the same as those in Model 1. Apart from the daily quota, each category \(c_{i}\) also has an _overall quota_ \(q_{i}\) that denotes the maximum total number of vaccines that can be allocated for category \(c_{i}\) over all the days. Note that the overall quota is also an essential quantity in applications like visa allocation and refugee settlement.

In both models, a matching \(M:A\rightarrow(C\times D)\cup\{\emptyset\}\) is a function denoting the day on which a person is vaccinated and the category through which it is done, such that the category quota(s) and daily supply values are not exceeded on any day. Thus if we define variables \(x_{ijk}\) such that \(x_{ijk}=1\) if \(M(a_{k})=(c_{i},d_{j})\) and \(x_{ijk}=0\) otherwise, then we have \(\sum_{i,j}x_{ijk}\leq 1\) for each \(k\), \(\sum_{k,j}x_{ijk}\leq q_{i}\) for each \(i\) (in Model 2), \(\sum_{k}x_{ijk}\leq q_{ij}\) for each \(i\), \(j\), and \(\sum_{i,k}x_{ijk}\leq s_{j}\) for each \(j\). Here \(1\leq i\leq|C|,1\leq j\leq|D|,1\leq k\leq|A|\). If \(M(a_{k})=\emptyset\) for some \(a_{k}\in A\), it means the person could not be vaccinated through our algorithm within \(|D|\) days. In both models, the utility associated with \(a_{k}\) is \(\alpha_{k}\cdot\delta^{j-1}\) where \(M(a_{k})=(c_{i},d_{j})\). The goal is to find a matching that maximizes the total utility.

### Our Contributions

The utilities \(\alpha_{k}\) and the discounting factor \(\delta\) have some desirable properties. If agent \(a_{k}\) is to be given a higher priority over agent \(a_{\ell}\), then we set \(\alpha_{k}>\alpha_{\ell}\). On any day \(d_{j}\), \(\alpha_{k}\cdot\delta^{j}>\alpha_{\ell}\cdot\delta^{j}\). Moreover, the difference in the utilities of the two agents diminishes over time, i.e., if \(j<j^{\prime}\) then \((\alpha_{k}-\alpha_{\ell})\delta^{j}>(\alpha_{k}-\alpha_{\ell})\delta^{j^{\prime}}\). Thus the utility maximization objective across all days vaccinates \(a_{k}\) earlier than \(a_{\ell}\).

We consider both online and offline settings. The offline setting relies on knowledge of the availability of agents on all days. This works well in a system where agents are required to fill in their availability in advance, e.g., in case of planned surgeries and visa allocations. The online setting involves knowing the availability of all the agents only on the current day, as in a _walk-in_ setting. Thus the availability of an agent on a future day is not known. We give an optimal algorithm for Model 1 in the offline setting.

Theorem 1.1: _There is a polynomial-time algorithm that computes an optimal solution for any instance of Model \(1\) in the offline setting._

We also give algorithms for both Model 1 and Model 2 in the online setting.
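As a concrete companion to these constraints, the following sketch (our own, with an assumed data layout; not the paper's code) checks a proposed matching against the daily supplies, daily quotas, and, for Model 2, the overall quotas.

```python
# Minimal feasibility check for a matching M under Models 1 and 2.
# M maps agent k -> (category i, day j) or None; the constraint
# "each agent matched at most once" holds automatically since M is a function.

from collections import Counter

def is_feasible(M, supply, daily_quota, overall_quota=None):
    """supply[j] = s_j; daily_quota[(i, j)] = q_ij;
    overall_quota[i] = q_i (Model 2), or None for Model 1."""
    per_day, per_cat_day, per_cat = Counter(), Counter(), Counter()
    for agent, slot in M.items():
        if slot is None:
            continue
        cat, day = slot
        per_day[day] += 1             # sum over i,k of x_ijk
        per_cat_day[(cat, day)] += 1  # sum over k of x_ijk
        per_cat[cat] += 1             # sum over k,j of x_ijk
    if any(per_day[j] > supply[j] for j in per_day):
        return False
    if any(per_cat_day[ij] > daily_quota[ij] for ij in per_cat_day):
        return False
    if overall_quota is not None and any(
        per_cat[i] > overall_quota[i] for i in per_cat
    ):
        return False
    return True
```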
We give theoretical guarantees on the performance of the online algorithms in terms of their _competitive ratio_ in comparison with the utility of an offline optimal solution.

Theorem 1.2: _There is an online algorithm (Algorithm 1) that gives a competitive ratio of (i) \(1+\delta\) for Model \(1\) and (ii) \(1+\delta+(\alpha_{\max}/\alpha_{\min})\delta\) for Model \(2\), when \(\delta\) is the common discounting factor for all agents. The algorithm runs in polynomial time._

We prove part \((i)\) of Theorem 1.2 in Section 3.2, whereas part \((ii)\) is proved in the Appendix.

Strategy-proofness: It is a natural question whether agents benefit by hiding their availability on some days. We show that the online algorithm is strategy-proof. In this context, we analyze our online algorithm for Model 1 from a game-theoretic perspective. We exhibit that the offline setting has a _pure Nash equilibrium_ that corresponds to the solution output by the online algorithm. For this, we assume that the tie-breaking among agents is done according to an arbitrary permutation \(\pi\) of agents.

Theorem 1.3: _Let an offline optimal solution that breaks ties according to a permutation \(\pi\) match agent \(a_{i}\) on day \(d_{i}\). Then for each agent \(a_{i}\), reporting availability exactly on day \(d_{i}\) (unmatched agents mark all days as unavailable) is a pure Nash equilibrium. Moreover, the Nash equilibrium corresponds to a solution output by the online algorithm._

Experimental Results: We also give experimental results in Section 6 using real-world datasets. Apart from maximization of utilities, we also consider the number of days taken by the online algorithm for vaccinating high-priority people. Our experiments show that the online algorithm almost matches the offline algorithm in terms of both of these criteria.

Selection of utility values: An important aspect of our model is that the choice of utility values does not affect the outcome as long as the utility values have the same numerical order as the order of priorities among agents. Thus the output of the online as well as the offline algorithm remains the same as long as \(\alpha_{k}>\alpha_{\ell}\) whenever agent \(a_{k}\) has a higher priority over agent \(a_{\ell}\).

## 2 Optimal Offline Algorithm for Model 1

The problem can be modelled as an instance of the minimum cost flow problem, which we define here for completeness.

The minimum cost flow problem: The input is a flow network \(G=(V,E)\) given as a directed graph with node set \(V\), edge set \(E\), a capacity \(c_{e}>0\) and a cost \(u_{e}\in\mathbb{R}\) on each edge \(e\in E\), and a source \(s\) and sink \(t\). A flow \(f:E\rightarrow\mathbb{R}\) is a valid flow in \(G\) if \(f(e)\leq c_{e}\) for every edge \(e\), and the incoming flow at any node except \(s\) and \(t\) equals the outgoing flow. The cost of the flow along an edge \(e\) is \(u_{e}\cdot f(e)\). A minimum cost flow in the network is one that minimizes the sum of the costs of the flow along all edges. Polynomial-time algorithms are known for the minimum cost flow problem. Also, it is known that if all the capacities are integers, then there is an integral optimum flow. We refer the reader to [2] for the details of minimum cost flow.

Reduction: The construction of the flow network is shown in Figure 1. The flow network consists of a source \(s\), a sink \(t\), nodes for each day \(d_{j}\), each agent \(a_{k}\), and nodes \(c_{ij}\) for each \((c_{i},d_{j})\in C\times D\).
Each edge \((s,d_{j})\) has capacity \(s_{j}\) denoting the daily supply for day \(d_{j}\), each edge \((d_{j},c_{ij})\) has capacity equal to \(q_{ij}\), and all other edges have capacity \(1\). All the edges are directed. Additionally, each \((c_{ij},a_{k})\) edge has cost \(-\alpha_{k}\cdot\delta^{j-1}\), whereas all other edges have cost \(0\).

Figure 1: Flow network for finding a maximum utility matching in Model 1

Proof (of Theorem 1.1): We show that a minimum cost flow \(f\) in the flow network corresponds to a maximum utility matching in the given instance. The integrality of the minimum cost flow implies that each edge incident on \(t\) can have a flow of either \(0\) or \(1\). For each \(k\), if \(f(a_{k},t)=1\), then there is exactly one \(c_{ij}\) such that \(f(c_{ij},a_{k})=1\). Set \(M(a_{k})=(c_{i},d_{j})\) in the corresponding matching \(M\). Similarly, for any matching \(M\), a corresponding flow can be constructed as follows. If \(M(a_{k})=(c_{i},d_{j})\), then set \(f(a_{k},t)=f(c_{ij},a_{k})=1\), set \(f(s,d_{j})\) equal to the number of agents vaccinated on day \(d_{j}\), and set \(f(d_{j},c_{ij})\) equal to the number of agents vaccinated on day \(d_{j}\) through category \(c_{i}\). It is clear that this is a valid flow in the network, and the negation of the cost of the flow is the same as the utility of the corresponding matching.

## 3 Algorithms for Model 1

We gave a flow-based polynomial-time optimal offline algorithm for Model 1 in Section 2. Here, we give an online algorithm for the same problem which achieves a competitive ratio of \(1+\delta\), where \(\delta\) is the discounting factor of the agents.

### Online Algorithm for Model 1

We present an online algorithm which greedily maximizes utility on each day. We show that this algorithm indeed achieves a competitive ratio of \(1+\delta\).

_Outline of the Algorithm:_ On each day \(d_{i}\), starting from day \(d_{1}\), we construct a bipartite graph \(H_{i}=(A_{i}\cup C,E_{i},w_{i})\), where \(A_{i}\) is the set of agents who are available on day \(d_{i}\) and were not vaccinated earlier than day \(d_{i}\). Let the weight of the edge \((a_{j},c_{k})\in E_{i}\) be \(w_{i}(a_{j},c_{k})=\alpha_{j}\cdot\delta^{i-1}\). We define the capacity of category \(c_{k}\in C\) as \(b^{\prime}_{i,k}\). In this graph, our algorithm finds a maximum weight b-matching of size not more than the daily supply value \(s_{i}\). The following lemma shows that the maximum weight b-matching computed in Algorithm 1 is also a maximum size b-matching of size at most \(s_{i}\).

Lemma 1: _The maximum weight b-matching in \(H_{i}\) of size at most \(s_{i}\) is also a maximum size b-matching of size at most \(s_{i}\)._

Proof: We prove that applying an augmenting path in \(H_{i}\) increases the weight of the matching. Consider a matching \(M_{i}\) in \(H_{i}\) such that \(M_{i}\) is not of maximum size and \(|M_{i}|<s_{i}\). Let \(\rho=(a_{1},c_{1},a_{2},c_{2},\cdots,a_{k},c_{k})\) be an \(M_{i}\)-augmenting path in \(H_{i}\). We know that every edge incident to an agent has the same weight in \(H_{i}\). If we apply the augmenting path \(\rho\), the weight of the matching increases by the weight of the edge \((a_{1},c_{1})\). This proves that a maximum weight matching in \(H_{i}\) of size at most \(s_{i}\) is also a maximum size b-matching of size at most \(s_{i}\).

### Charging scheme

We compare the solution obtained by Algorithm 1 with the optimal offline solution to get the worst-case competitive ratio for Algorithm 1.
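Before walking through the comparison, it may help to see the per-day greedy step in executable form. The sketch below is our own reconstruction, not the authors' code: it assumes `networkx` is available, scales utilities to integers, and casts the size-capped maximum-weight b-matching as a max-flow/min-cost problem, which is justified here because Lemma 1 guarantees that the maximum-weight solution is also maximum-size.

```python
# Illustrative sketch of one day of the greedy online algorithm:
# a maximum-weight b-matching of size at most s_i in H_i.

import networkx as nx

def daily_matching(agents, alpha, eligible, capacity, s_i, delta, day):
    """agents: ids available and unvaccinated today; eligible[a]: categories of a;
    capacity[c]: category capacity b'_{i,c} today; s_i: daily supply."""
    G = nx.DiGraph()
    G.add_edge("src", "sup", capacity=s_i, weight=0)  # caps total matching size
    for a in agents:
        G.add_edge("sup", ("agent", a), capacity=1, weight=0)
        # cost is the negated utility, scaled to an integer for exactness
        w = -int(round(alpha[a] * delta ** (day - 1) * 10**6))
        for c in eligible[a]:
            G.add_edge(("agent", a), ("cat", c), capacity=1, weight=w)
    for c, cap in capacity.items():
        G.add_edge(("cat", c), "snk", capacity=cap, weight=0)
    flow = nx.max_flow_min_cost(G, "src", "snk")
    return [(a, c) for a in agents for c in eligible[a]
            if flow[("agent", a)].get(("cat", c), 0) == 1]
```

Since all utilities are positive, maximizing the flow value first and then minimizing the (negated) cost yields the day's maximum-weight b-matching, mirroring the behaviour that Lemma 1 describes.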
Let \(M\) be the output of Algorithm 1 and \(N\) be an optimal offline solution. To compare \(M\) and \(N\), we devise a _charging scheme_ by which each agent \(a_{p}\) matched in \(N\) _charges_ a unique agent \(a_{q}\) matched in \(M\). The amount charged, referred to as the _charging factor_ here, is the ratio of the utility obtained by \(a_{p}\) in \(N\) to the utility obtained by \(a_{q}\) in \(M\).

Properties of the charging scheme:

1. Each agent matched in \(N\) charges exactly one agent matched in \(M\).
2. Each agent \(a_{q}\) matched in \(M\) is charged by at most two agents matched in \(N\), with charging factors at most \(1\) and \(\delta\).

This implies that the utility of \(N\) is at most \((1+\delta)\) times the utility of \(M\). We divide the agents matched in \(N\) into two types. Type 1 agents are those which are matched in \(M\) on an earlier day compared to that in \(N\). Thus \(a_{p}\in A\) is a Type 1 agent if \(a_{p}\) is matched on day \(d_{i}\) in \(M\) and on day \(d_{j}\) in \(N\), such that \(i<j\). The remaining agents are called Type 2 agents. Our charging scheme is as follows:

1. Each Type 1 agent \(a_{p}\) charges herself with a charging factor of at most \(\delta\), since the utility associated with her in \(N\) is at most \(\delta\) times that in \(M\).
2. Here onwards, we consider only Type 2 agents and discuss the charging scheme associated with them.

Let \(X_{i}\) be the set of Type 2 agents matched on day \(d_{i}\) in \(N\), and let \(Y_{i}\) be the set of agents matched on day \(d_{i}\) in \(M\). Since Algorithm 1 greedily finds a maximum size b-matching of size at most \(s_{i}\), and as each edge in the b-matching corresponds to a unique agent, we show that the following lemma holds:

Lemma 2: _For each \(d_{i}\in D\), \(|X_{i}|\leq|Y_{i}|\)._

Proof: Since \(X_{i}\) contains only Type 2 agents matched in \(N_{i}\), the agents in \(X_{i}\) are not matched by \(M\) on any day before \(d_{i}\). Therefore \(X_{i}\subseteq A_{i}\), where \(A_{i}\) is defined in Algorithm 1. The daily quota and the daily supply available for the computation of \(N_{i}\) and \(M_{i}\) are the same, i.e., \(q_{i,k}\) and \(s_{i}\) respectively. By construction, \(M_{i}\) matches the maximum number of agents in \(A_{i}\), up to an upper limit of \(s_{i}\); hence \(|X_{i}|\leq|Y_{i}|\).

To obtain the desired competitive ratio, we design an injective mapping according to which each agent \(a_{p}\) in \(X_{i}\) can uniquely charge an agent \(a_{q}\) in \(Y_{i}\) such that \(\alpha_{p}\leq\alpha_{q}\). The following lemma shows that such an injective mapping always exists.

Lemma 3: _There exists an injective mapping \(f:X_{i}\to Y_{i}\) such that if \(f(a_{p})=a_{q}\), then \(\alpha_{p}\leq\alpha_{q}\)._

Proof: Let \(N_{i}\) and \(M_{i}\) respectively be the restrictions of \(N\) and \(M\) to day \(d_{i}\). We construct an auxiliary bipartite graph \(G_{i}\) where \(X_{i}\cup Y_{i}\) forms one side of the bipartition and the categories form the other. The edge set is \(N_{i}\cup M_{i}\). Then we set the capacity of \(c_{k}\) in \(G_{i}\) to be \(b_{i,k}=q_{i,k}\). The charging scheme is as follows. Consider the symmetric difference \(M_{i}\oplus N_{i}\). It is known that \(M_{i}\oplus N_{i}\) can be decomposed into edge-disjoint paths and even cycles [33]. Consider a component \(C\) which is an even cycle, as shown in Fig 2a.
Since each agent \(a_{p}\) in \(C\) has both an \(M_{i}\) edge and an \(N_{i}\) edge incident on her, each agent \(a_{p}\) in \(X_{i}\) charges her own image in \(Y_{i}\) with a charging factor of 1. Now consider a component which is a path \(\rho\). There are two cases.

1. _Case 1: The path \(\rho\) has an even length:_ If \(\rho\) starts and ends at a category node, then each agent along the path is matched in both \(N_{i}\) and \(M_{i}\). Hence, all such agents can charge themselves with a charging factor of 1. Suppose \(\rho\) starts and ends at an agent, as shown in Fig 2b, i.e., \(\rho=(a_{1},c_{1},a_{2},c_{2},\cdots,a_{k-1},c_{k-1},a_{k})\). Let \(a_{1}\) be matched in \(M_{i}\) and \(a_{k}\) be matched in \(N_{i}\). Then \(\alpha_{1}\) must be greater than or equal to \(\alpha_{k}\); otherwise, by Lemma 1, \(M_{i}\oplus\rho\) is a matching of higher weight, which contradicts the fact that \(M_{i}\) is a maximum weight matching. Now, every agent in \(\rho\) except \(a_{1}\) and \(a_{k}\) charges herself with a charging factor of 1, and \(a_{k}\) charges \(a_{1}\) with a charging factor of \(\alpha_{k}/\alpha_{1}\).

2. _Case 2: The path \(\rho\) has an odd length:_ Then \(\rho\) either begins and ends with an \(M_{i}\) edge or with an \(N_{i}\) edge. If \(\rho\) starts and ends with an \(M_{i}\) edge, then every agent along the path who is matched in \(N_{i}\) is also matched in \(M_{i}\). Therefore all the agents on \(\rho\) charge themselves. Consider the case when \(\rho\) starts with an \(N_{i}\) edge. Since \(c_{k}\) is an end-point of \(\rho\) with an \(N_{i}\)-edge, \(c_{k}\) must have more agents matched to it in \(N_{i}\) than in \(M_{i}\). So \(c_{k}\) cannot be saturated in \(M_{i}\). As \(M_{i}\) is a maximum size matching (Lemma 1), we cannot augment \(M_{i}\) to \(M_{i}\oplus\rho\) in \(G_{i}\), even though both endpoints are unsaturated. This can happen only because the daily supply is met, that is, \(|M_{i}|=s_{i}\). As \(a_{1}\) is vaccinated in category \(c_{1}\) in \(N_{i}\), we claim that the weight \(w(a_{1},c_{1})\) is at most the weight of every edge in \(M_{i}\). This is because if there exists an edge \(e\in M_{i}\) such that \(w(e)<w(a_{1},c_{1})\), we can remove the edge \(e\) from \(M_{i}\) and apply the augmenting path \(\rho\) to get a matching with a higher weight, which is a contradiction. Therefore, as \(w(a_{1},c_{1})\) is at most the weight of every edge in \(M_{i}\), agent \(a_{1}\) can safely charge any agent who is matched in \(M_{i}\). Since \(|M_{i}|\geq|N_{i}|\), we are guaranteed to have sufficient agents in \(M_{i}\) to be charged.

Order of charging among Type 2 agents: First, every agent who has both \(M_{i}\) and \(N_{i}\) edges incident on her charges herself. Next, every agent who is an end-point of an even-length path charges the agent represented by the other end-point. The rest of the agents are end-points of odd-length paths matched in \(N_{i}\). We proved that the edges incident on these agents have weight at most that of every edge in \(M_{i}\). They can charge any agent of \(M_{i}\) who has not yet been charged by any agent of \(N_{i}\), as stated above.

Proof (of Theorem 1.2 (i)): Let \(a_{q}\) be an agent who is vaccinated by the online matching \(M\) on day \(i\). Then \(a_{q}\) can be charged by at most two agents matched in \(N\). Suppose \(a_{q}\) is vaccinated by the optimal matching \(N\) on some day \(i^{\prime}>i\), so that \(a_{q}\) charges herself, and suppose a Type 2 agent \(a_{p}\) also charges \(a_{q}\).
If the priority factors of \(a_{q}\) and \(a_{p}\) are \(\alpha_{q}\) and \(\alpha_{p}\) respectively, then

\[\frac{\alpha_{p}\cdot\delta^{i-1}+\alpha_{q}\cdot\delta^{i^{\prime}-1}}{\alpha_{q}\cdot\delta^{i-1}}\ =\ \frac{\alpha_{p}}{\alpha_{q}}+\delta^{i^{\prime}-i}\ \leq\ 1+\delta.\]

The last inequality follows as \(\alpha_{p}\leq\alpha_{q}\), \(0<\delta<1\), and \(i^{\prime}>i\). Therefore the total utility charged to \(a_{q}\) is at most \(1+\delta\) times the utility of \(a_{q}\) in \(M\). Therefore the competitive ratio of Algorithm 1 is at most \(1+\delta\). In Section 3.3, we show a tight example which achieves this competitive ratio.

Figure 2: Charging schemes

### Tight example for the Online Algorithm

The following example shows that the competitive ratio of Algorithm 1 is tight. Let the set of agents be \(A=\{a_{1},a_{2}\}\) and the categories \(C=\{c_{1},c_{2}\}\). Agent \(a_{1}\) is eligible under \(\{c_{1},c_{2}\}\) and agent \(a_{2}\) is eligible only under \(\{c_{2}\}\). The daily supply: \(s_{1}=1\) and \(s_{2}=1\). The daily quota of each category on each day is set to \(1\). The priority factor for both agents is \(\alpha_{1}\). Assume that \(a_{1}\) is available on both days, whereas agent \(a_{2}\) is available only on the first day. Figure 3 depicts this example.

Figure 3: A tight example with competitive ratio \(1+\delta\). Online allocation indicated in red, optimal allocation indicated in green, and arrows indicate charging.

Since the daily supply of day \(d_{1}\) is \(1\), vaccinating \(a_{1}\) maximizes the utility gained on the first day. Hence there exists a run of Algorithm 1 where \(a_{1}\) is vaccinated under category \(c_{1}\) on day \(d_{1}\). In this run, agent \(a_{2}\) cannot be vaccinated on day \(d_{2}\), as she is unavailable on that day. Hence, the total utility gained by the online allocation is \(\alpha_{1}\). In an optimal allocation, in contrast, all the agents can be vaccinated: we vaccinate agent \(a_{2}\) on day \(d_{1}\) under category \(c_{2}\), and agent \(a_{1}\) on day \(d_{2}\) under category \(c_{1}\). This sums to a total utility of \(\alpha_{1}+\alpha_{1}\delta\). Therefore the competitive ratio is \(\frac{\alpha_{1}+\alpha_{1}\delta}{\alpha_{1}}=1+\delta\).

## 4 Online Algorithm for Model 2

We present an online algorithm which greedily maximizes utility on each day. We assume that the discounting factor of the agents is \(\delta\). Moreover, each agent \(a_{k}\) has a priority factor \(\alpha_{k}\). Let \(\alpha_{\max}=\max_{i}\{\alpha_{i}\mid\alpha_{i}\) is the priority factor of agent \(a_{i}\}\) and \(\alpha_{\min}=\min_{i}\{\alpha_{i}\mid\alpha_{i}\) is the priority factor of agent \(a_{i}\}\). We show that this algorithm indeed achieves a competitive ratio of \(1+\delta+\frac{\alpha_{\max}}{\alpha_{\min}}\delta\).

_Outline of the Algorithm:_ On each day \(d_{i}\), starting from day \(d_{1}\), we construct a bipartite graph \(H_{i}=(A_{i}\cup C,E_{i},w_{i})\), where \(A_{i}\) is the set of agents who are available on day \(d_{i}\) and were not vaccinated earlier than day \(d_{i}\). Let the weight of the edge \((a_{j},c_{k})\in E_{i}\) be \(w_{i}(a_{j},c_{k})=\alpha_{j}\cdot\delta^{i-1}\). Let \(b^{\prime}_{i,k}\) represent the capacity of \(c_{k}\in C\) in \(H_{i}\). In this graph, our algorithm finds a maximum weight b-matching of size not more than the daily supply value \(s_{i}\). This can be found in polynomial time [31].
Lemma 1 proves that the maximum weight b-matching is also a maximum cardinality b-matching of \(H_{i}\).

```
Input: An instance I of Model 2
Output: An allocation M : A → (C × D) ∪ {∅}

Let D, A, C be the sets of days, agents, and categories respectively.
M(a_j) ← ∅ for each agent a_j ∈ A
r_k ← q_k for each category c_k ∈ C
for day d_i in D do
    A_i ← {a_j ∈ A | a_j is available on d_i and a_j is not vaccinated}
    E_i ← {(a_j, c_k) ∈ A_i × C | a_j is eligible to be vaccinated under category c_k}
    Construct the bipartite graph H_i = (A_i ∪ C, E_i).
    for c_k in C do
        b'_{i,k} ← min(q_{ik}, r_k)    {Capacity for each c_k in H_i}
    end for
    Find a maximum weight b-matching M_i in H_i of size at most s_i.
    for each edge (a_j, c_k) in M_i do
        M(a_j) ← (c_k, d_i)    {Mark a_j as vaccinated on day d_i under category c_k}
        r_k ← r_k − 1          {Update remaining overall quota}
    end for
end for
return M
```

**Algorithm 2** Online Algorithm for Vaccine Allocation

### Outline of the charging scheme

We compare the solution obtained by Algorithm 2 with the optimal offline solution to get the worst-case competitive ratio for Algorithm 2. Let \(M\) be the output of Algorithm 2 and \(N\) be an optimal offline solution. To compare \(M\) and \(N\), we devise a _charging scheme_ similar to that in Section 3.2, by which each agent \(a\) matched in \(N\) _charges_ a unique agent \(a^{\prime}\) matched in \(M\). The amount charged, referred to as the _charging factor_ here, is the ratio of the utility obtained by \(a\) in \(N\) to the utility obtained by \(a^{\prime}\) in \(M\).

Properties of the charging scheme:

1. Each agent matched in \(N\) charges exactly one agent matched in \(M\).
2. Each agent matched in \(M\) is charged by at most three agents matched in \(N\), with charging factors at most \(1\), \(\delta\), and \(\frac{\alpha_{\max}}{\alpha_{\min}}\delta\).

This implies that the utility of \(N\) is at most \((1+\delta+\frac{\alpha_{\max}}{\alpha_{\min}}\delta)\) times the utility of \(M\). We divide the agents matched in \(N\) into two types. Type 1 agents are those which are matched in \(M\) on an earlier day compared to that in \(N\). Thus \(a\in A\) is a Type 1 agent if \(a\) is matched on day \(d_{i}\) in \(M\) and on day \(d_{j}\) in \(N\), such that \(i<j\). The remaining agents are called Type 2 agents. Our charging scheme is as follows:

1. Type 1 agents charge themselves with a charging factor of at most \(\delta\), since the utility associated with them in \(N\) is at most \(\delta\) times that in \(M\).
2. Here onwards, we consider only Type 2 agents and discuss the charging scheme associated with them. Let \(X_{i}\) be the set of Type 2 agents matched on day \(d_{i}\) in \(N\), and let \(Y_{i}\) be the set of agents matched on day \(d_{i}\) in \(M\).
   1. _Case 1: \(|X_{i}|\leq|Y_{i}|\):_ From Lemma 3, each agent \(a_{p}\in X_{i}\) charges an agent \(a_{q}\in Y_{i}\) with \(\alpha_{p}\leq\alpha_{q}\). Therefore the agents in \(X_{i}\) charge the agents in \(Y_{i}\) with a charging factor of at most 1.
   2. _Case 2: \(|X_{i}|=|Y_{i}|+z,\ z>0\):_ Let \(N_{i}\) and \(M_{i}\) respectively be the restrictions of \(N\) and \(M\) to day \(d_{i}\).
We construct an auxiliary bipartite graph \(G_{i}\) where \(X_{i}\cup Y_{i}\) forms one side of the bipartition and the categories form the other. The edge set is \(N_{i}\cup M_{i}\). For a category \(c_{k}\), let \(n_{j,k}\) and \(m_{j,k}\) be the number of agents matched in \(N\) and \(M\) respectively, under category \(c_{k}\) on day \(d_{j}\). Then we set the quota of \(c_{k}\) in \(G_{i}\) to be \(b_{i,k}=\min\{q_{i,k},\max\{q_{k}-\sum_{j=1}^{i-1}n_{j,k},\,q_{k}-\sum_{j=1}^{i-1}m_{j,k}\}\}\). This is the maximum of the quotas of \(c_{k}\) that were available for the computation of \(N_{i}\) and \(M_{i}\) respectively.

The charging scheme is as follows. Consider the symmetric difference \(M_{i}\oplus N_{i}\). Since \(|N_{i}|=|M_{i}|+z\), there are exactly \(z\) edge-disjoint alternating paths in \(M_{i}\oplus N_{i}\) that start and end with an edge of \(N\) [33]. Let \(\rho=\langle a_{1},c_{1},a_{2},\ldots,a_{k},c_{k}\rangle\) be one such path. Then \(a_{2},\ldots,a_{k-1}\) are matched in both \(M_{i}\) and \(N_{i}\), so they charge themselves with a charging factor of 1. From Lemma 3, the agent \(a_{1}\) charges \(a_{k}\) with a charging factor of at most 1. It remains to decide whom \(a_{k}\) charges. Since \(\rho\) terminates at \(c_{k}\) with an \(N_{i}\)-edge, the number of agents matched to \(c_{k}\) in \(N_{i}\) is more than the number matched to \(c_{k}\) in \(M_{i}\). In Lemma 4, we show that this can happen only because of the exhaustion of \(q_{k}\) in Algorithm 2 on or before day \(d_{i}\). So agent \(a_{k}\) can charge some agent \(a_{l}\) matched to \(c_{k}\) in \(M\) on an earlier day, with charging factor \(\frac{\alpha_{k}}{\alpha_{l}}\delta\leq\frac{\alpha_{\max}}{\alpha_{\min}}\delta\).

Lemma 4: _If node \(c_{k}\) is an end-point of a path \(\rho\) in \(G_{i}\), then \(q_{k}\) is exhausted in Algorithm 2 on or before day \(d_{i}\)._

Proof: Suppose \(c_{k}\) is an endpoint of \(\rho\) in \(G_{i}\). Then the number of agents matched to \(c_{k}\) in \(N_{i}\) is more than the number matched to \(c_{k}\) in \(M_{i}\). We know that the daily supply \(s_{i}\) of day \(d_{i}\) is an upper bound for both \(|M_{i}|\) and \(|N_{i}|\). Since \(|N_{i}|=|M_{i}|+z\), we have \(|M_{i}|<s_{i}\). From Algorithm 2 we know that \(M_{i}\) is a maximum-size b-matching in \(H_{i}\) of size at most \(s_{i}\). If the capacity of \(c_{k}\) is not saturated in \(H_{i}\), then we can augment along the path \(\rho\), contradicting the maximality of \(M_{i}\). Since \(c_{k}\) has more edges of \(N_{i}\) than \(M_{i}\) incident to it, from the definition of \(b_{i,k}\), category \(c_{k}\) must have exhausted the overall quota \(q_{k}\) in Algorithm 2 on or before day \(d_{i}\).

### Tight Example

The following example shows that the competitive ratio of Algorithm 2 is tight. Let the set of agents be \(A=\{a_{1},a_{2},a_{3}\}\) and the categories \(C=\{c_{1},c_{2}\}\). Agent \(a_{1}\) is eligible under \(\{c_{1},c_{2}\}\). Agent \(a_{2}\) is eligible only under \(\{c_{1}\}\) and agent \(a_{3}\) is eligible only under \(\{c_{2}\}\). The daily supply: \(s_{1}=1\) and \(s_{2}=2\). Overall quotas: \(q_{1}=1\) and \(q_{2}=2\). The daily quota of each category on each day is set to \(1\). The utility discounting factor for each agent is \(\delta\). The priority factor of agent \(a_{i}\) is \(\alpha_{i}\) for \(i=1,2,3\). We assume that \(0<\alpha_{1}=\alpha_{3}<\alpha_{2}\leq 1\). Agent \(a_{1}\) is available on both days.
Agent \(a_{3}\) is available only on the first day, whereas agent \(a_{2}\) is available only on the second day. Figure 4 depicts this example. Since the daily supply of day \(d_{1}\) is \(1\), vaccinating \(a_{1}\) maximizes the utility gained on the first day. Hence there exists a run of Algorithm 2 where \(a_{1}\) is vaccinated under category \(c_{1}\) on day \(d_{1}\). In this run, agent \(a_{2}\) cannot be vaccinated on day \(d_{2}\), as she is eligible only under category \(c_{1}\) and the overall quota of category \(c_{1}\) is exhausted. Hence, the total utility gained by the online allocation is \(\alpha_{1}\). In an optimal allocation, in contrast, all the agents can be vaccinated: vaccinate agent \(a_{3}\) on day \(d_{1}\) under category \(c_{2}\), and agents \(a_{1}\) and \(a_{2}\) on day \(d_{2}\) under categories \(c_{2}\) and \(c_{1}\) respectively. This sums to a total utility of \(\alpha_{3}+\alpha_{1}\delta+\alpha_{2}\delta\). Therefore the competitive ratio of the online algorithm is \(\frac{\alpha_{3}+\alpha_{1}\delta+\alpha_{2}\delta}{\alpha_{1}}=\frac{\alpha_{1}+\alpha_{1}\delta+\alpha_{2}\delta}{\alpha_{1}}=1+\delta+\frac{\alpha_{\max}}{\alpha_{\min}}\delta\). The first equality holds as \(\alpha_{1}=\alpha_{3}\). The second equality holds as \(\alpha_{\max}=\alpha_{2}\) and \(\alpha_{\min}=\alpha_{1}\).

Figure 4: A tight example with competitive ratio \(1+\delta+\frac{\alpha_{2}}{\alpha_{1}}\delta\). Online allocation indicated in red, optimal allocation indicated in green, and arrows indicate charging.

## 5 Strategy-proofness of the online algorithm

We give the details of the pure Nash equilibrium here.

### Pure Nash equilibrium

The offline algorithm might choose any arbitrary matching that maximizes the utility. We present a deterministic tie-breaking rule, similar to the one used in [5], to force the algorithm to pick a unique matching. For this, we fix an ordering \(\pi\) on agents. We show the existence of a pure Nash equilibrium under the deterministic tie-breaking. We cast our problem as a linear program, as given in Fig 5. It can be seen that this LP models the network flow formulation of our problem stated in Section 2. It is known ([31]) that the polytope arising from the network flow problem is integral. To impose the deterministic tie-breaking, we modify the objective function as follows:

\[\text{maximize}\sum_{\begin{subarray}{c}i\in A,j\in C,\\ k\in D\end{subarray}}u_{ik}\cdot x_{ijk}+\lambda\times REG,\quad\text{where}\qquad REG=\sum_{i\in A}\frac{\sum_{k\in D,j\in C}x_{ijk}}{2^{\pi(i)}}.\]

For a sufficiently small \(\lambda\) (\(\lambda<\delta^{|D|+1}\)), the difference between the utilities of any two allocations with distinct utilities is greater than \(\lambda\times REG\). Therefore, the modified program maximizes the objective function of the linear program in Fig 5, but breaks ties to maximize \(REG\).

Figure 5: Here \(u_{ik}\) is the utility value of agent \(i\) on day \(k\), and \(s_{k}\) and \(q_{jk}\) are the daily supply and daily quotas respectively.

Let \(A_{d_{i}}\) be defined as the set of agents matched on a day \(d_{i}\in D\), and let \(A_{\infty}\) be the set of unmatched agents at the end of a run of Algorithm 1. Let agent \(a_{p}\) be matched on \(d_{i}\) (WLOG, we assume all unmatched agents are matched on day \(\infty\)). Now, we present a proof of Theorem 1.3.

Proof (of Theorem 1.3): Suppose the agent \(a_{p}\) is matched on day \(d_{i}\), and deviates by reporting a subset of her actual available days.
If agent \(a_{p}\) gets matched on a day \(d_{j}\), \(j<i\), because of misreporting her available days, then some agent \(a_{q}\) on day \(d_{j}\) will remain unmatched. This follows since, on any given day, the matching computed by Algorithm 1 is of maximum size and all agents other than \(a_{p}\) turn up on at most one day. The rest of the matching will remain unchanged. But agent \(a_{q}\) is prioritized by \(\pi\) over agent \(a_{p}\); otherwise, Algorithm 1 would have matched \(a_{p}\) and not \(a_{q}\) on day \(d_{j}\). Hence, agent \(a_{p}\) cannot replace agent \(a_{q}\) on day \(d_{j}\) even after misreporting her availability. Therefore agent \(a_{p}\) has no advantage in deviating from the strategy. Hence, the above matching is a pure Nash equilibrium.

## 6 Experimental Evaluation

In Section 3 we prove worst-case guarantees for the online algorithm. We also give a tight example instance achieving a competitive ratio of \(1+2\delta\). Here, we experimentally evaluate the performance of the online algorithm and compare it with the worst-case guarantees on a real-life dataset. For finding the optimal allocation that maximizes utility, we solve the network flow linear program with the additional constraint for the overall quota, \(\sum_{i\in A,k\in D}x_{ijk}\leq q_{j}\quad\forall c_{j}\in C\). This LP is described in the Appendix. The code and datasets for the experiments can be found at [24].

### Methodology

All experiments run on a 64-bit Ubuntu 20.04 desktop with a 2.10 GHz 4-core Intel Core i3 CPU and 8 GB memory. The proposed online approximation algorithm runs in polynomial time. In contrast, the optimal offline algorithm solves an integer linear program, which might take exponential time depending on the integrality of the polytope. We relax the integrality constraints to achieve an upper bound on the optimal allocation. For comparing the performance of the online Algorithm 1 and the offline algorithm, we use vaccination data of 24 hospitals in Chennai, India for the month of May 2022. We use small datasets with varying instance sizes for evaluating the running times of the algorithms. We use large datasets of smaller instance sizes for evaluating competitive ratios. All the programs used for the simulation are written in Python. For solving the LP, ILP, and LPR, we use the COIN-OR Branch and Cut MILP solver (version 2.10.3) [10] on the PuLP (version 2.6) framework [37]. When measuring the running time, we consider the time taken to solve the LP.

### Datasets

Our dataset can be divided into two parts.

_Supply:_ We consider vaccination data of twenty-four hospitals of Chennai, India for the month of May 2022. This data is obtained from the official COVID portal of India using the APIs provided. The dataset consists of details such as daily vaccination availability, type of vaccines, age limit, hospital ID, hospital zip code, etc. for each hospital.

_Demand:_ Using the Google Maps API [20], we consider the road network for these 24 hospitals in our dataset. From this data we construct a complete graph with hospitals as vertices and edge weights as the shortest distance between any two hospitals. For each hospital \(h\in H\), we consider the cluster \(C(h)\) as the set of hospitals which are at most five kilometers away from \(h\). We consider these clusters as our categories. Now, we consider 10000 agents who are to be vaccinated. For each agent \(a\), we pick a hospital \(h\) uniformly at random.
The agent \(a\) belongs to every hospital in the cluster \(C(h)\). Each agent's availability over 30 days is independently sampled from the uniform distribution. Now, we consider the age-wise population distribution of the city. For each agent we assign an age sampled from this distribution. We then partition the set of agents into agents of age 18-45 years, 45-60 years, and 60+. We assign \(\alpha\)-values \(0.96\), \(0.97\), and \(0.99\) respectively. We also consider the same dataset with \(\alpha\)-values \(0.1\), \(0.5\), and \(0.9\) respectively. We set the discounting factor \(\delta\) to be \(0.95\). For analyzing the running time of our algorithms, we use synthetically generated datasets with varying instance sizes ranging from 100 agents to 20000 agents. Each agent's availability and categories are chosen randomly from a uniform distribution.

### Results and Discussions

We show that the online algorithm runs significantly faster than the offline algorithm while achieving almost similar results. We give a detailed numerical evaluation of the running times in the Appendix. To compare the performance of the online Algorithm 1 against the offline algorithm, we define a notion of the _remaining fraction of un-vaccinated agents_. That is, on a given day \(d_{i}\), we take the set of agents \(P_{d_{i}}\) who satisfy both of the following conditions:

1. Agent \(a\) is available on some day \(d_{j}\) on or before day \(d_{i}\).
2. Agent \(a\) belongs to some hospital \(h\), and \(h\) has non-zero capacity on day \(d_{j}\).

\(P_{d_{i}}\) is the set of agents who could have been vaccinated without violating any constraints. Let \(\gamma_{i}=|P_{d_{i}}|\). Let \(V_{d_{i}}\) be the set of agents who are vaccinated by the algorithm on or before day \(d_{i}\), and let \(\eta_{i}=|V_{d_{i}}|\). Now, \(1-\eta_{i}/\gamma_{i}\) represents the fraction of unvaccinated agents. In Figure 6 we compare the age-wise \(1-\eta_{i}/\gamma_{i}\) of both our online and offline algorithms. We note that the vaccination priority given to vulnerable groups by the online approximation algorithm is very close to that of the offline optimal algorithm. In both algorithms, by the end of day 2, the \(1-\eta_{i}/\gamma_{i}\) value for the 60+ age group had dropped to 50%. By the end of day 8, only 10% of the most vulnerable group remained unvaccinated.

### Running Time Analysis

In Table 1 we compare the performance of the online algorithm and the offline algorithm on the same dataset. We consider \(\alpha\) values \((0.96,0.97,0.99)\) and \((0.1,0.5,0.9)\). In both cases, the online algorithm vaccinates almost the same number of agents as the offline algorithm while achieving similar total utility. The ratio of the online total utility to the offline total utility is 0.99. The online algorithm runs significantly faster than the offline algorithm.

Figure 6: The \(1-\eta_{i}/\gamma_{i}\) value achieved by the online algorithm is very similar to that of the offline algorithm across age groups. Both algorithms vaccinate 90% of the most vulnerable group within 8 days.

### Performance Analysis

In Figure 8, we plot the number of agents of age group 18-45 getting vaccinated by the online Algorithm 1 on each day, for \(\alpha\) values 0.96 and 0.1. It is clear that the vaccination follows an almost identical pattern as long as the order of the \(\alpha\) values remains the same. Figure 9 shows similar results for the optimal offline algorithm.
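The per-day, per-age-group counts behind Figures 8-13 and the \(1-\eta_{i}/\gamma_{i}\) curves of Figure 6 are straightforward to recompute from an allocation. The sketch below is our own illustration with an assumed data layout, not the released experiment code [24].

```python
# Illustrative sketch: per-day vaccination counts by age group, and the
# remaining fraction of un-vaccinated agents, 1 - eta_i / gamma_i.

from collections import Counter

def daily_counts(M, group):
    """M maps agent -> (category, day) or None; group maps agent -> age group."""
    counts = Counter()
    for a, slot in M.items():
        if slot is not None:
            counts[(group[a], slot[1])] += 1  # (age group, day) -> count
    return counts

def unvaccinated_fraction(M, P, days):
    """P[i]: the set P_{d_i} of agents reachable by day i (available on some
    day on or before d_i with non-zero capacity at one of their hospitals)."""
    fractions, vaccinated = {}, set()
    for i in days:  # days in increasing order
        vaccinated |= {a for a, s in M.items() if s is not None and s[1] == i}
        gamma = len(P[i])
        fractions[i] = 1 - len(vaccinated) / gamma if gamma else 0.0
    return fractions
```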
This independence from the exact cardinal values shows that the algorithm is practically useful, as ordering the vulnerable groups is much more feasible than assigning a particular value to each. Similar plots for other age groups are given in the Appendix. In Figure 10, we plot the number of agents of age group 45-60 getting vaccinated by the online Algorithm 1 on each day, for \(\alpha\) values 0.97 and 0.5. It is clear that the vaccination follows an almost identical pattern as long as the order of the \(\alpha\) values remains the same. Figure 11 shows similar results for the optimal offline algorithm. Figure 12 and Figure 13 plot similar results for the 60+ age group population. We note that in both the online and the offline algorithm, allocations of vaccines for the age group 60+ are higher in the initial days and decrease over the days. Most of the agents from this group are vaccinated by the end of the 10th day.

\begin{table}
\begin{tabular}{|c|c|c|c|c|}
\hline
 & \multicolumn{2}{c|}{Online Algorithm} & \multicolumn{2}{c|}{Offline Algorithm} \\
\hline
\(\alpha\) value & \(\mathbf{\alpha_{1}}\) & \(\mathbf{\alpha_{2}}\) & \(\mathbf{\alpha_{1}}\) & \(\mathbf{\alpha_{2}}\) \\
\hline
\(\delta\) & 0.95 & 0.95 & 0.95 & 0.95 \\
\hline
Running time (in sec) & 319.04 & 336.55 & 888.90 & 806.65 \\
\hline
Total no. of agents vaccinated & 7154 & 7145 & 7192 & 7192 \\
\hline
Total Utility & 3567.95 & 1550.23 & 3580.68 & 1573.95 \\
\hline
\end{tabular}
\end{table}

Table 1: The vector \(\mathbf{\alpha_{1}}=(0.96,0.97,0.99)\) and the vector \(\mathbf{\alpha_{2}}=(0.1,0.5,0.9)\) represent the \(\alpha\) values for the three age groups. The average ratio of online to offline total utility is 0.99. The average running times of the online and the offline algorithms are 327.79 seconds and 847.77 seconds respectively.

Figure 7: Time taken by the offline and online algorithms (on synthetic datasets) vs. instance size.

Figure 8: Number of agents in the 18-45 age group vaccinated by the online algorithm for \(\alpha\)-values 0.96 and 0.1 respectively.

Figure 9: Number of agents in the 18-45 age group vaccinated by the offline algorithm for \(\alpha\)-values 0.96 and 0.1 respectively.

Figure 10: Number of agents in the 45-60 age group vaccinated by the online algorithm for \(\alpha\)-values 0.97 and 0.5 respectively.

Figure 11: Number of agents in the 45-60 age group vaccinated by the offline algorithm for \(\alpha\)-values 0.97 and 0.5 respectively.

Figure 12: Number of agents in the 60+ age group vaccinated by the online algorithm for \(\alpha\)-values 0.99 and 0.9 respectively.

Figure 13: Number of agents in the 60+ age group vaccinated by the offline algorithm for \(\alpha\)-values 0.99 and 0.9 respectively.

## 7 Conclusion

We investigate the problem of dynamically allocating perishable healthcare goods to agents arriving over a period of time. We capture various constraints that arise while allocating a scarce resource to a large population, like production constraints on the resource and infrastructure constraints. While we give an offline optimal algorithm for Model 1, obtaining one for Model 2 or showing NP-hardness remains open. We also propose an online algorithm approximating welfare that elicits information every day and makes an immediate decision.
The online algorithm does not require foresight and hence has practical appeal. Our experiments show that the online algorithm generates a utility roughly equal to the utility of the offline algorithm while achieving very little to no wastage.
2301.10534
The Bogomolov multiplier of groups of order $p^7$ and exponent $p$
In this paper, the Bogomolov multiplier of $p$-groups of order $p^7$ ($p>2$) and exponent $p$ is given.
Z. Araghi Rostami, M. Parvizi, P. Niroomand
2023-01-25T11:56:49Z
http://arxiv.org/abs/2301.10534v1
# The Bogomolov multiplier of groups of order \(p^{7}\) and exponent \(p\)

###### Abstract.

In this paper, the Bogomolov multiplier of \(p\)-groups of order \(p^{7}\) (\(p>2\)) and exponent \(p\) is given.

Key words and phrases: Commutativity-preserving exterior product, \(\bar{B_{0}}\)-pairing, curly exterior square, Bogomolov multiplier

The group \(\tilde{B_{0}}(G)\) can be described as a section of the non-abelian exterior square of a group \(G\). Moravec also proved that \(\tilde{B_{0}}(G)=\mathcal{M}(G)/\mathcal{M}_{0}(G)\), in which the Schur multiplier \(\mathcal{M}(G)\), or equivalently \(H^{2}(G,\mathbb{Q}/\mathbb{Z})\), is interpreted as the kernel of the commutator homomorphism \(G\wedge G\to[G,G]\) given by \(x\wedge y\to[x,y]\), and \(\mathcal{M}_{0}(G)\) is a subgroup of \(\mathcal{M}(G)\) defined as \(\mathcal{M}_{0}(G)=<x\wedge y\mid[x,y]=0,\ x,y\in G>\). Therefore, in the class of finite groups, \(\tilde{B_{0}}(G)\) is naturally isomorphic to \(B_{0}(G)\). Similar to the Schur multiplier, the Bogomolov multiplier can be explained as a measure of the extent to which relations among commutators in a group fail to be consequences of universal relations. Furthermore, Moravec's method relates the Bogomolov multiplier to the concept of the commuting probability of a group, and shows that the Bogomolov multiplier plays an important role in commutativity-preserving central extensions of groups, which are well-known cases in \(K\)-theory (see [12, 18] for more information).

There are several papers computing this multiplier for particular groups. For example, Chu and Kang in [7] proved that the Bogomolov multiplier of all groups of order at most \(p^{4}\) is trivial. Bogomolov in [3] claimed that if \(G\) is a group of order \(p^{5}\), then \(\tilde{B_{0}}(G)\) is trivial. This claim was confirmed in [6] by Chu et al. for \(p=2\). Later, Moravec proved that this claim is false for \(p=3\), and he showed that there are precisely three groups of order \(3^{5}\) with non-trivial Bogomolov multipliers. Bogomolov in [3] found some examples of groups of order \(p^{6}\) with non-trivial Bogomolov multiplier. Recently, Yin Chen and Rui Ma in [5] computed the Bogomolov multiplier for some \(p\)-groups of order \(p^{6}\). On the other hand, in [10, 19], Hoshi et al. and Moravec used different methods to prove that if \(p>3\) is prime and \(G\) is a group of order \(p^{5}\), then the Bogomolov multiplier is non-trivial if and only if \(G\) belongs to the isoclinism family \(\Phi_{10}\). Also, Moravec in [20] showed that for two isoclinic \(p\)-groups \(G_{1}\) and \(G_{2}\), \(\tilde{B_{0}}(G_{1})\) is isomorphic to \(\tilde{B_{0}}(G_{2})\). Thus if \(\Phi\) denotes an isoclinism family of finite \(p\)-groups, the Bogomolov multiplier of \(\Phi\) is well defined and is denoted by \(\tilde{B_{0}}(\Phi)\). Furthermore, in [6] Chu et al. classified all non-abelian groups of order \(2^{5}\) and \(2^{6}\) with non-trivial Bogomolov multipliers. Michilov in [16] computed \(\tilde{B_{0}}(G)\) for some \(p\)-groups of nilpotency class \(2\) that do not have the ABC (Abelian-By-Cyclic) property. Also, the Bogomolov multiplier was computed for groups of order \(2^{7}\) by Jezernik and Moravec in [12]. The classification of \(p\)-groups of order \(p^{7}\) (\(p>2\)) and exponent \(p\), up to isomorphism, was conducted by Wilkinson in [28]. In this paper, we determine which of those groups have trivial Bogomolov multiplier.
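As a quick sanity check on these definitions (our own illustration, not from the paper): if \(G\) is abelian, then \([x,y]=1\) for all \(x,y\in G\), so the commutator homomorphism \(G\wedge G\to[G,G]\) is trivial and \(\mathcal{M}(G)=G\wedge G\), while \(\mathcal{M}_{0}(G)\) already contains every generator \(x\wedge y\) of \(G\wedge G\). Therefore

\[\tilde{B_{0}}(G)=\mathcal{M}(G)/\mathcal{M}_{0}(G)=(G\wedge G)/(G\wedge G)=0,\]

so the Bogomolov multiplier of every abelian group vanishes, consistent with the triviality results quoted above.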
For the convenience of the reader, we give some known results and notations that will be used throughout the paper. Let \(G\) be a group. The exterior square of \(G\), \(G\wedge G\), is defined to be the group generated by the symbols \(x\wedge y\), subject to the following relations: * \(xy\wedge z=(x^{y}\wedge z^{y})(y\wedge z)\), * \(x\wedge yz=(x\wedge z)(x^{z}\wedge y^{z})\), * \(x\wedge x=1\), for all \(x,y,z\in G\). Let \(L\) be a group. A function \(\Phi:G\times G\to L\) is called an exterior pairing if * \(\Phi(xy,z)=\Phi(x^{y},z^{y})\Phi(y,z)\), * \(\Phi(x,yz)=\Phi(x,z)\Phi(x^{z},y^{z})\), * \(\Phi(x,x)=1\), for all \(x,y,z\in G\). It is easy to see that an exterior pairing \(\Phi\) determines a unique homomorphism of groups \(\Phi^{*}:G\wedge G\to L\) given by \(\Phi^{*}(x\wedge y)=\Phi(x,y)\) for all \(x,y\in G\). An example of an exterior pairing is the commutator map \(\kappa:G\times G\to[G,G]\) given by \((x,y)\longrightarrow[x,y]\) for all \(x,y\in G\). It induces a homomorphism \(\tilde{\kappa}:G\wedge G\to[G,G]\), given by \(x\wedge y\longmapsto[x,y]\) for all \(x,y\in G\). Miller in [17] showed that the kernel of \(\tilde{\kappa}\) is isomorphic to the Schur multiplier of \(G\), that is, \[\mathcal{M}(G)=\{\prod_{i=0}^{n}(x_{i}\wedge y_{i})^{\varepsilon_{i}}\in G\wedge G\ |\ \varepsilon_{i}=\pm 1,\ \prod_{i=0}^{n}[x_{i},y_{i}]^{\varepsilon_{i}}=1,\ i\in\mathbb{N}\cup\{0\}\}.\] Moreover, \(\mathcal{M}_{0}(G)\) is the subgroup of \(\mathcal{M}(G)\) generated by all elements \(x\wedge y\) such that \(x,y\in G\) commute, that is, \[\mathcal{M}_{0}(G)=\{\prod_{i=0}^{n}(x_{i}\wedge y_{i})^{\varepsilon_{i}}\in G\wedge G\ |\ \varepsilon_{i}=\pm 1,\ [x_{i},y_{i}]=1,\ i\in\mathbb{N}\cup\{0\}\}.\] Moravec in [18] proved that for a finite group \(G\), \(B_{0}(G)\cong\tilde{B_{0}}(G)=\mathcal{M}(G)/\mathcal{M}_{0}(G)\). Blyth and Morse in [2] introduced a convenient way of obtaining the exterior square of \(G\). The advantage of this description is the ability to use the full power of the commutator calculus instead of computing with elements of \(G\wedge G\). **Definition 2.1**.: _[_19_]_ _Let \(G\) be a group and let \(G^{\varphi}\) be an isomorphic copy of \(G\) via the mapping \(\varphi:x\to x^{\varphi}\), for all \(x\in G\). We define the group \(\tau(G)\) as follows:_ \[\tau(G)=<G,G^{\varphi}\ |\ [x,y^{\varphi}]^{z}=[x^{z},(y^{z})^{\varphi}]=[x,y^{\varphi}]^{z^{\varphi}},\ [x,x^{\varphi}]=1,\ x,y,z\in G>.\] Note that \(G\) and \(G^{\varphi}\) may be embedded into \(\tau(G)\), so the groups \(G\) and \(G^{\varphi}\) can be viewed as subgroups of \(\tau(G)\). Let \([G,G^{\varphi}]=<[x,y^{\varphi}]\ |\ x,y\in G>\). Now we give some properties of \(\tau(G)\) and \([G,G^{\varphi}]\) that will be used frequently in our calculations. **Proposition 2.2**.: _[_2_, Proposition 16]_ _Let \(G\) be a group. Then the map \(\Phi:G\wedge G\to[G,G^{\varphi}]\) given by \((x\wedge y)\to[x,y^{\varphi}]\) for all \(x,y\in G\) is an isomorphism._ Let \(\kappa^{*}=\tilde{\kappa}\Phi^{-1}\) be the composite map from \([G,G^{\varphi}]\) to \([G,G]\), \(\mathcal{M}^{*}(G)=\ker\kappa^{*}\) and \(\mathcal{M}_{0}{}^{*}(G)=\Phi(\mathcal{M}_{0}(G))\).
More precisely, we have \[\mathcal{M}^{*}(G)=\{\prod_{i=0}^{n}[x_{i},y_{i}{}^{\varphi}]^{\varepsilon_{i}}\in[G,G^{\varphi}]\ |\ \varepsilon_{i}=\pm 1,\ \prod_{i=0}^{n}[x_{i},y_{i}]^{\varepsilon_{i}}=1,\ i\in\mathbb{N}\cup\{0\}\}\] and \[\mathcal{M}_{0}{}^{*}(G)=\{\prod_{i=0}^{n}[x_{i},y_{i}{}^{\varphi}]^{\varepsilon_{i}}\in[G,G^{\varphi}]\ |\ \varepsilon_{i}=\pm 1,\ [x_{i},y_{i}]=1,\ i\in\mathbb{N}\cup\{0\}\}.\] It is an immediate consequence that in the finite case, \(B_{0}(G)\) is isomorphic to \(\dfrac{\mathcal{M}^{*}(G)}{\mathcal{M}_{0}{}^{*}(G)}=\tilde{B_{0}}(G)\) (see [18]). So in order to show that \(\tilde{B_{0}}(G)\) is trivial, it suffices to show that \(\mathcal{M}^{*}(G)\subseteq\mathcal{M}_{0}{}^{*}(G)\). In the following, we introduce further properties of \(\tau(G)\) and \([G,G^{\varphi}]\) that will be useful. **Lemma 2.3**.: _[_2_, Lemmas 9, 10, 11]_ _Let \(G\) be a group. The following statements, for all \(x,y,z,v,w\in G\) and all \(n,m\in\mathbb{N}\), hold:_ 1. \([x,yz]=[x,z][x,y][x,y,z]\) _and_ \([xy,z]=[x,z][x,z,y][y,z]\)_._ 2. _If_ \(G\) _is nilpotent of class_ \(c\)_, then_ \(\tau(G)\) _is nilpotent of class at most_ \(c+1\)_._ 3. _If_ \(G\) _is nilpotent of class_ \(\leq 2\)_, then_ \([G,G^{\varphi}]\) _is abelian._ 4. \([x,y^{\varphi}]=[x^{\varphi},y]\)_._ 5. \([x,y,z^{\varphi}]=[x,y^{\varphi},z]=[x^{\varphi},y,z]=[x^{\varphi},y^{\varphi},z]=[x^{\varphi},y,z^{\varphi}]=[x,y^{\varphi},z^{\varphi}]\)_._ 6. \([[x,y^{\varphi}],[v,w^{\varphi}]]=[[x,y],[v,w]^{\varphi}]\)_._ 7. \([x^{n},y^{\varphi}]=[x,y^{\varphi}]^{n}=[x,(y^{\varphi})^{n}]\)_, where_ \([x,y]=1\)_._ 8. _If_ \([G,G]\) _is nilpotent of class_ \(c\)_, then_ \([G,G^{\varphi}]\) _is nilpotent of class_ \(c\) _or_ \(c+1\)_._ 9. _If_ \(x\) _and_ \(y\) _are commuting elements of_ \(G\) _of orders_ \(m\) _and_ \(n\)_, respectively, then the order of_ \([x,y^{\varphi}]\) _divides_ \(\gcd(m,n)\)_._ In the proofs of the third section, we need the following lemma for expanding commutators. **Lemma 2.4**.: _Let \(G\) be a nilpotent group of class at most \(6\). Then_ \[[x^{n},y]= [x,y]^{n}[x,y,x]^{\binom{n}{2}}[x,y,x,x]^{\binom{n}{3}}[x,y,x,x,x]^{\binom{n}{4}}[x,y,x,[x,y]]^{a(n)}\] \[[x,y,x,x,x,x]^{\binom{n}{5}}[x,y,x,[x,y],x]^{\binom{n}{3}+2\binom{n}{4}}[x,y,x,x,[x,y]]^{\binom{n}{3}+\binom{n}{4}},\] _for all \(x,y\in\tau(G)\) and every positive integer \(n\), where \(a(n)=n(n-1)(2n-1)/6\)._ Proof.: We proceed by induction on \(n\). The case \(n=1\) is obvious. Using the induction hypothesis, we have: \[[x^{n+1},y] =[x^{n},y][x^{n},y,x][x,y]\] \[=[x^{n},y][x,y][x^{n},y,x][x^{n},y,x,[x,y]]\] \[=[x,y]^{n+1}[x,y,x]^{\binom{n}{2}+n}[x,y,x,x]^{\binom{n}{2}+\binom{n}{3}}\] \[\qquad[x,y,x,x,x]^{\binom{n}{3}+\binom{n}{4}}[x,y,x,[x,y]]^{a(n+1)}[x,y,x,x,x,x]^{\binom{n}{4}+\binom{n}{5}}\] \[\qquad[x,y,x,[x,y],x]^{\binom{n}{2}+3\binom{n}{3}+2\binom{n}{4}}[x,y,x,x,[x,y]]^{\binom{n}{2}+2\binom{n}{3}+\binom{n}{4}}.\] Hence our conclusion follows. **Definition 2.5**.: _[_24_, Definition 5]__\(G\) is polycyclic if it has a descending chain of subgroups \(G=G_{1}\geq G_{2}\geq...\geq G_{n+1}=1\) in which \(G_{i+1}\lhd G_{i}\), and \(G_{i}/G_{i+1}\) is cyclic. Such a chain of subgroups is called a polycyclic series._ Since \(G_{i}/G_{i+1}\) is cyclic, for every \(i\in\{1,...,n\}\) there exists \(x_{i}\) with \(<x_{i}G_{i+1}>=G_{i}/G_{i+1}\).
**Definition 2.6**.: _[_24_, Definition 7]_ _A polycyclic generating sequence of a group \(G\) (PCGS) is a sequence \(x_{1},...,x_{n}\) of elements of \(G\) such that \(<x_{i}G_{i+1}>=G_{i}/G_{i+1}\) for \(1\leq i\leq n\)._ **Definition 2.7**.: _[_24_, Definition 8]_ _Let \(X\) be a PCGS for \(G\). The sequence \(R(X):=(r_{1},...,r_{n})\) defined by \(r_{i}:=|G_{i}:G_{i+1}|\in\mathbb{N}\cup\{\infty\}\) is the sequence of relative orders for \(X\)._ Note that \(G\) is finite if and only if every entry in \(R(X)\) is finite. **Lemma 2.8**.: _[_24_, Lemma 12]_ _Let \(X=x_{1},...,x_{n}\) be a polycyclic sequence for \(G\) with the relative orders \(R(X)=(r_{1},...,r_{n})\). For every \(g\in G\) there exists a sequence \((e_{1},...,e_{n})\), with \(e_{i}\in\mathbb{Z}\) for \(1\leq i\leq n\) and \(0\leq e_{i}<r_{i}\), such that \(g=x_{1}^{e_{1}}\cdots x_{n}^{e_{n}}\)._ **Definition 2.9**.: _[_24_, Definition 14]_ _The expression \(g={x_{1}}^{e_{1}}\ldots{x_{n}}^{e_{n}}\) is the normal form of \(g\in G\) with respect to \(X\)._ **Definition 2.10**.: _[_24_, Definition 16]_ _A presentation \(\{x_{1},...,x_{n}|R\}\) is a polycyclic presentation if there is a sequence \(S=(s_{1},...,s_{n})\) with \(s_{i}\in\mathbb{N}\cup\{\infty\}\) and integers \(a_{i,k},b_{i,j,k},c_{i,j,k}\) such that \(R\) consists of the following relations_ \[{x_{i}}^{s_{i}}={x_{i+1}}^{a_{i,i+1}}...{x_{n}}^{a_{i,n}}\quad\text{for}\quad 1\leq i\leq n\ \ \text{with}\ \ s_{i}<\infty,\] \[{x_{j}}^{-1}{x_{i}}{x_{j}}={x_{i+1}}^{b_{i,j,j+1}}...{x_{n}}^{b_{i,j,n}}\quad\text{for}\quad 1\leq j<i\leq n,\] \[x_{j}x_{i}{x_{j}}^{-1}={x_{i+1}}^{c_{i,j,j+1}}...{x_{n}}^{c_{i,j,n}}\quad\text{for}\quad 1\leq j<i\leq n.\] If \(G\) is defined by such a polycyclic presentation, then \(G\) is called a \(PC\) group. In addition, every polycyclic group can be defined by a polycyclic presentation. **Definition 2.11**.: _[_24_]_ _A polycyclic presentation in which every element is represented by exactly one normal word is called consistent._ In this paper, we will also make use of consistent presentations in Section 4 to calculate the Bogomolov multiplier of some \(p\)-groups of order \(p^{7}\) and exponent \(p\). **Proposition 2.12**.: _[_2_, Proposition 20]_ _Let \(G\) be a finite group with a polycyclic generating sequence \(x_{1},\ldots,x_{n}\). Then the group \([G,G^{\varphi}]\) is generated by_ \[\{[x_{i},{x_{j}}^{\varphi}]\ |\ i,j=1,\ldots,n,\ i>j\}.\] **Proposition 2.13**.: _[_19_, Proposition 3.2]_ _Let \(p>3\) be a prime number, \(G\) be a \(p\)-group of class at most \(3\) and \(x_{1},\ldots,x_{n}\) be a polycyclic generating sequence of \(G\). If all nontrivial commutators \([x_{i},x_{j}]\ (i>j)\) are different elements of the polycyclic generating sequence, then \(B_{0}(G)=0\)._ ## 3. **Groups of order \(p^{7}\) and exponent \(p\ (p>5)\) with trivial Bogomolov multiplier** The groups of exponent \(p\) and order \(p^{7}\) were classified by Wilkinson in [28]. Each group is presented by seven generators, named alphabetically from \(a\) to \(g\). The non-trivial commutator relations between the generators are written in the form \([b,a]=cg\), where the right-hand side is the usual product of elements of the group. It is assumed that all other commutators are the identity, and that the \(p\)th power of every generator is the identity.
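To make the normal-form machinery of Definitions 2.9 and 2.10 concrete, the following minimal Python sketch (our own illustration, not part of [28]) performs collected multiplication in the group \(G_{2}=<a,\ldots,g\ |\ [b,a]=c>\) appearing in Proposition 3.1 below: elements are stored as exponent vectors in normal form, and the single non-trivial relation \([b,a]=c\) supplies the correction term during collection. The value \(p=7\) is an illustrative choice.

```python
from functools import reduce

p = 7  # illustrative prime; any odd p works here

# Elements of G2 = <a,...,g | [b,a]=c> in normal form a^e1 b^e2 c^e3 ... g^e7
# (Definition 2.9), stored as exponent vectors mod p; c,...,g are central.
def mul(x, y):
    z = [(xi + yi) % p for xi, yi in zip(x, y)]
    # Collection: moving b^{x2} past a^{y1} contributes [b,a]^{x2*y1} = c^{x2*y1}.
    z[2] = (z[2] + x[1] * y[0]) % p
    return tuple(z)

def inv(x):
    # (a^i b^j z)^{-1} = a^{-i} b^{-j} c^{ij} z^{-1}, with z central
    z = [(-t) % p for t in x]
    z[2] = (z[2] + x[0] * x[1]) % p
    return tuple(z)

def comm(x, y):  # [x, y] = x^{-1} y^{-1} x y
    return mul(mul(inv(x), inv(y)), mul(x, y))

e = tuple([0] * 7)
a = (1, 0, 0, 0, 0, 0, 0)
b = (0, 1, 0, 0, 0, 0, 0)

assert comm(b, a) == (0, 0, 1, 0, 0, 0, 0)   # [b,a] = c, as in the presentation
assert reduce(mul, [a] * p) == e             # a^p = 1
assert reduce(mul, [mul(a, b)] * p) == e     # exponent p: (ab)^p = 1
```

The assertions confirm that \([b,a]=c\) and that the group has exponent \(p\), exactly as prescribed by Wilkinson's conventions above.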
Using the notation of [28], we have the following propositions. Here, we use various techniques, depending on the structure of the group, to determine which groups have trivial Bogomolov multiplier, so we state the results in separate propositions. **Proposition 3.1**.: _The following groups have trivial Bogomolov multiplier._ \(G_{1}:\) _elementary abelian groups._
\[G_{2}:<a,b,c,d,e,f,g\ |\ [b,a]=c>.\]
\[G_{3}:<a,b,c,d,e,f,g\ |\ [b,a]=c,[c,a]=d>.\]
\[G_{4}:<a,b,c,d,e,f,g\ |\ [b,a]=d,[c,a]=e>.\]
\[G_{5}:<a,b,c,d,e,f,g\ |\ [b,a]=c,[c,a]=d,[c,b]=e>.\]
\[G_{8}:<a,b,c,d,e,f,g\ |\ [b,a]=c,[c,a]=d,[d,a]=e>.\]
\[G_{10}:<a,b,c,d,e,f,g\ |\ [b,a]=d,[c,a]=e,[c,b]=f>.\]
\[G_{11}:<a,b,c,d,e,f,g\ |\ [b,a]=e,[d,c]=f>.\]
\[G_{15}:<a,b,c,d,e,f,g\ |\ [b,a]=c,[c,a]=e,[d,b]=f>.\]
\[G_{17}:<a,b,c,d,e,f,g\ |\ [b,a]=c,[c,a]=e,[d,a]=f>.\]
\[G_{35}:<a,b,c,d,e,f,g\ |\ [b,a]=e,[c,a]=f,[d,c]=g>.\]
\[G_{39}:<a,b,c,d,e,f,g\ |\ [b,a]=e,[c,a]=f,[d,a]=g>.\]
\[G_{40}:<a,b,c,d,e,f,g\ |\ [b,a]=c,[c,a]=e,[d,a]=f,[d,b]=g>.\]
\[G_{41}:<a,b,c,d,e,f,g\ |\ [b,a]=c,[c,a]=e,[c,b]=f,[d,a]=g>.\]
\[G_{45}:<a,b,c,d,e,f,g\ |\ [b,a]=c,[c,a]=f,[e,d]=g>.\]
Proof.: These groups are nilpotent of class at most \(3\) and all nontrivial commutators \([x_{i},x_{j}]\ (i>j)\) are different elements of the polycyclic generating sequence. Thus, by Proposition 2.13, they have trivial Bogomolov multiplier. **Proposition 3.2**.: _The following groups have trivial Bogomolov multiplier._ [MISSING_PAGE_POST] Proof.: We give the proof in detail for \(G_{20}\) and \(G_{190}\); the remaining cases can be proved in a similar way. Let \(G\cong G_{20}\). By Proposition 2.12, \([G_{20},G_{20}^{\varphi}]\) is generated by \([b,a^{\varphi}],[c,a^{\varphi}]\), \([c,b^{\varphi}],[d,a^{\varphi}]\) modulo \({\mathcal{M}_{0}}^{*}(G_{20})\). Using Lemma 2.3 (vi), we have \[[[b,a^{\varphi}],[c,a^{\varphi}]]=[[b,a],[c,a]^{\varphi}]=[c,d^{\varphi}]\in{\mathcal{M}_{0}}^{*}(G_{20}),\] and \[[[b,a^{\varphi}],[c,b^{\varphi}]]=[[b,a],[c,b]^{\varphi}]=[c,e^{\varphi}]\in{\mathcal{M}_{0}}^{*}(G_{20}).\] Similarly, \[[[b,a^{\varphi}],[d,a^{\varphi}]],[[c,a^{\varphi}],[c,b^{\varphi}]],[[c,a^{\varphi}],[d,a^{\varphi}]],[[c,b^{\varphi}],[d,a^{\varphi}]]\in{\mathcal{M}_{0}}^{*}(G_{20}).\] Thus any two elements of the generating set of \([G_{20},G_{20}^{\varphi}]\) commute modulo \({\mathcal{M}_{0}}^{*}(G_{20})\), and each element of \([G_{20},G_{20}^{\varphi}]\) can be expressed as \[[b,a^{\varphi}]^{\alpha_{1}}[c,a^{\varphi}]^{\alpha_{2}}[c,b^{\varphi}]^{\alpha_{3}}[d,a^{\varphi}]^{\alpha_{4}}\tilde{w},\] where \(\tilde{w}\in{\mathcal{M}_{0}}^{*}(G_{20})\) and \(\alpha_{i}\in\mathbb{Z}\) for \(1\leq i\leq 4\). Let \(w=[b,a^{\varphi}]^{\alpha_{1}}[c,a^{\varphi}]^{\alpha_{2}}[c,b^{\varphi}]^{\alpha_{3}}[d,a^{\varphi}]^{\alpha_{4}}\tilde{w}\in{\mathcal{M}}^{*}(G_{20})\); then \(1=\kappa^{*}(w)=c^{\alpha_{1}}d^{\alpha_{2}}e^{\alpha_{3}}f^{\alpha_{4}}\). Since \(c,d,e,f\) are in the polycyclic generating sequence and \(exp(G_{20})=p\), we have \(c^{\alpha_{1}}=d^{\alpha_{2}}=e^{\alpha_{3}}=f^{\alpha_{4}}=1\), and hence \(p\) divides each of \(\alpha_{1},\alpha_{2},\alpha_{3}\) and \(\alpha_{4}\).
Now using Lemmas 2.3 and 2.4, we have \[1=[c^{p},b^{\varphi}]=[c,b^{\varphi}]^{p}[c,b^{\varphi},c]^{\binom{p}{2}}[c,b^{\varphi},c,c]^{\binom{p}{3}}.\] Since \[[c,b^{\varphi},c]^{\binom{p}{2}}=[c,b,c^{\varphi}]^{\binom{p}{2}}=[e,c^{\varphi}]^{\binom{p}{2}}=[e^{\binom{p}{2}},c^{\varphi}]=1\] and \[[c,b^{\varphi},c,c]^{\binom{p}{3}}=[c,b,c,c^{\varphi}]^{\binom{p}{3}}=[e,c,c^{\varphi}]^{\binom{p}{3}}=[1,c^{\varphi}]^{\binom{p}{3}}=1,\] it follows that \([c,b^{\varphi}]^{p}=1\). Similarly, \([b,a^{\varphi}]^{p}=[c,a^{\varphi}]^{p}=[d,a^{\varphi}]^{p}=1\). Thus \(w=\tilde{w}\). Hence \({\mathcal{M}}^{*}(G_{20})\subseteq{\mathcal{M}_{0}}^{*}(G_{20})\) and \(\tilde{B_{0}}(G_{20})=0\). Now let \(G\cong G_{190}\). By Proposition 2.12, the group \([G_{190},G_{190}^{\varphi}]\) is generated by \([b,a^{\varphi}],[c,a^{\varphi}],[d,a^{\varphi}],[e,a^{\varphi}]\), and \([f,a^{\varphi}]\) modulo \({\mathcal{M}_{0}}^{*}(G_{190})\). Lemma 2.3 (vi) implies \[[[b,a^{\varphi}],[c,a^{\varphi}]]=[c,d^{\varphi}]\in{\mathcal{M}_{0}}^{*}(G_{190}).\] Also \[[[b,a^{\varphi}],[d,a^{\varphi}]]=[c,e^{\varphi}]\in{\mathcal{M}_{0}}^{*}(G_{190}).\] Similarly, \[[[b,a^{\varphi}],[e,a^{\varphi}]],[[b,a^{\varphi}],[f,a^{\varphi}]],[[c,a^{\varphi}],[d,a^{\varphi}]],[[c,a^{\varphi}],[e,a^{\varphi}]],[[c,a^{\varphi}],[f,a^{\varphi}]],\] \[[[d,a^{\varphi}],[e,a^{\varphi}]],[[d,a^{\varphi}],[f,a^{\varphi}]],[[e,a^{\varphi}],[f,a^{\varphi}]]\in{\mathcal{M}_{0}}^{*}(G_{190}).\] Thus any two elements of the generating set of \([G_{190},G_{190}^{\varphi}]\) commute modulo \({\mathcal{M}_{0}}^{*}(G_{190})\). So each element of \([G_{190},G_{190}^{\varphi}]\) can be written as \[[b,a^{\varphi}]^{\alpha_{1}}[c,a^{\varphi}]^{\alpha_{2}}[d,a^{\varphi}]^{\alpha_{3}}[e,a^{\varphi}]^{\alpha_{4}}[f,a^{\varphi}]^{\alpha_{5}}\tilde{w},\] where \(\tilde{w}\in{\mathcal{M}_{0}}^{*}(G_{190})\) and \(\alpha_{i}\in\mathbb{Z}\) for \(1\leq i\leq 5\). Let \(w=[b,a^{\varphi}]^{\alpha_{1}}[c,a^{\varphi}]^{\alpha_{2}}[d,a^{\varphi}]^{\alpha_{3}}[e,a^{\varphi}]^{\alpha_{4}}[f,a^{\varphi}]^{\alpha_{5}}\tilde{w}\in{\mathcal{M}}^{*}(G_{190})\). Then \(1=\kappa^{*}(w)=c^{\alpha_{1}}d^{\alpha_{2}}e^{\alpha_{3}}f^{\alpha_{4}}g^{\alpha_{5}}\). Since \(c,d,e,f,g\) belong to the polycyclic generating sequence and \(exp(G_{190})=p\), we obtain \(c^{\alpha_{1}}=d^{\alpha_{2}}=e^{\alpha_{3}}=f^{\alpha_{4}}=g^{\alpha_{5}}=1\), and \(p\) divides each of \(\alpha_{1},\alpha_{2},\alpha_{3},\alpha_{4}\) and \(\alpha_{5}\). Now using Lemmas 2.3 and 2.4 (with the exponents as in Lemma 2.4), we have \[1=[b^{p},a^{\varphi}]= [b,a^{\varphi}]^{p}[b,a^{\varphi},b]^{\binom{p}{2}}[b,a^{\varphi},b,b]^{\binom{p}{3}}[b,a^{\varphi},b,b,b]^{\binom{p}{4}}[b,a^{\varphi},b,[b,a^{\varphi}]]^{a(p)}\] \[[b,a^{\varphi},b,b,b,b]^{\binom{p}{5}}[b,a^{\varphi},b,[b,a^{\varphi}],b]^{\binom{p}{3}+2\binom{p}{4}}[b,a^{\varphi},b,b,[b,a^{\varphi}]]^{\binom{p}{3}+\binom{p}{4}}.\] Since \[[b,a^{\varphi},b]^{\binom{p}{2}}=[b,a,b^{\varphi}]^{\binom{p}{2}}=[c,b^{\varphi}]^{\binom{p}{2}}=[c^{\binom{p}{2}},b^{\varphi}]=1,\] \[[b,a^{\varphi},b,b]^{\binom{p}{3}}=[b,a,b,b^{\varphi}]^{\binom{p}{3}}=[c,b,b^{\varphi}]^{\binom{p}{3}}=[1,b^{\varphi}]^{\binom{p}{3}}=1,\] and \[[b,a^{\varphi},b,b,b]^{\binom{p}{4}}=[b,a^{\varphi},b,[b,a^{\varphi}]]^{a(p)}=[b,a^{\varphi},b,b,b,b]^{\binom{p}{5}}=[b,a^{\varphi},b,b,[b,a^{\varphi}]]^{\binom{p}{3}+\binom{p}{4}}=[b,a^{\varphi},b,[b,a^{\varphi}],b]^{\binom{p}{3}+2\binom{p}{4}}=1,\] we have \([b,a^{\varphi}]^{p}=1\).
Similarly \([c,a^{\varphi}]^{p}=[d,a^{\varphi}]^{p}=[e,a^{\varphi}]^{p}=[f,a^{\varphi}]^{p}=1\). So \(\mathcal{M}^{*}(G_{190})\subseteq\mathcal{M}_{0}{}^{*}(G_{190})\). Hence \(\tilde{B_{0}}(G_{190})=0\). **Proposition 3.3**.: _The following groups have trivial Bogomolov multiplier._
\[G_{6}:<a,b,c,d,e,f,g\ |\ [b,a]=e,[d,c]=e>.\]
\[G_{7}:<a,b,c,d,e,f,g\ |\ [b,a]=c,[c,a]=[d,b]=e>.\]
\[G_{12}:<a,b,c,d,e,f,g\ |\ [b,a]=e,[c,a]=f,[d,c]=e>.\]
\[G_{16}:<a,b,c,d,e,f,g\ |\ [b,a]=c,[c,a]=e,[c,b]=f,[d,b]=f>.\]
\[G_{21}:<a,b,c,d,e,f,g\ |\ [b,a]=c,[c,a]=f,[e,d]=f>.\]
\[G_{22}:<a,b,c,d,e,f,g\ |\ [b,a]=c,[c,a]=d,[d,a]=f,[e,b]=f>.\]
\[G_{24}:<a,b,c,d,e,f,g\ |\ [b,a]=d,[c,a]=e,[d,b]=f,[e,c]=f>.\]
\[G_{25}:<a,b,c,d,e,f,g\ |\ [b,a]=d,[c,a]=e,[d,b]=f,[e,c]=f^{t}>.\]
\[G_{26}:<a,b,c,d,e,f,g\ |\ [b,a]=d,[c,a]=e,[d,b]=f,[e,a]=f>.\]
\[G_{28}:<a,b,c,d,e,f,g\ |\ [b,a]=c,[c,a]=d,[c,b]=e,[d,a]=f,[e,b]=f>.\]
\[G_{32}:<a,b,c,d,e,f,g\ |\ [b,a]=c,[c,a]=d,[d,a]=e,[d,b]=f,[e,b]=f,[d,c]=f^{-1}>.\]
\[G_{36}:<a,b,c,d,e,f,g\ |\ [b,a]=e,[c,a]=f,[d,b]=f,[d,c]=g>.\]
\[G_{37}:<a,b,c,d,e,f,g\ |\ [b,a]=e,[c,a]=f,[c,b]=g,[d,b]=f>.\]
\[G_{43}:<a,b,c,d,e,f,g\ |\ [b,a]=f,[d,c]=g,[e,d]=f>.\]
\[G_{46}:<a,b,c,d,e,f,g\ |\ [b,a]=c,[c,a]=f,[d,b]=f,[e,d]=g>.\]
\[G_{47}:<a,b,c,d,e,f,g\ |\ [b,a]=c,[c,a]=f,[d,b]=g,[e,d]=f>.\]
\[G_{48}:<a,b,c,d,e,f,g\ |\ [b,a]=c,[c,a]=f,[d,b]=g,[e,b]=f>.\]
\[G_{49}:<a,b,c,d,e,f,g\ |\ [b,a]=c,[c,a]=f,[d,a]=g,[e,d]=f>.\]
\[G_{50}:<a,b,c,d,e,f,g\ |\ [b,a]=c,[c,a]=f,[d,a]=g,[e,b]=f>.\]
\[G_{52}:<a,b,c,d,e,f,g\ |\ [b,a]=c,[c,a]=f,[d,a]=g,[e,b]=g>.\]
\[G_{54}:<a,b,c,d,e,f,g\ |\ [b,a]=c,[c,a]=f,[c,b]=g,[e,d]=f>.\]
\[G_{55}:<a,b,c,d,e,f,g\ |\ [b,a]=c,[c,a]=f,[c,b]=g,[d,b]=f,[e,d]=g>.\]
\[G_{57}:<a,b,c,d,e,f,g\ |\ [b,a]=c,[c,a]=f,[c,b]=g,[d,a]=g,[d,b]=f,[e,b]=g>.\]
\[G_{59}:<a,b,c,d,e,f,g\ |\ [b,a]=c,[c,a]=d,[c,b]=g,[d,a]=f,[e,b]=f>.\]
\[G_{62}:<a,b,c,d,e,f,g\ |\ [b,a]=c,[c,a]=d,[c,b]=g,[d,a]=f,[e,b]=g>.\]
\[G_{71}:<a,b,c,d,e,f,g\ |\ [b,a]=d,[c,a]=e,[c,b]=g,[d,b]=f,[e,c]=f>.\]
\[G_{73}:<a,b,c,d,e,f,g\ |\ [b,a]=d,[c,a]=e,[d,a]=f,[d,b]=g,[e,c]=f>.\]
\[G_{74}:<a,b,c,d,e,f,g\ |\ [b,a]=d,[c,a]=e,[c,b]=g,[d,a]=f,[e,c]=f>.\]
\[G_{76}:<a,b,c,d,e,f,g\ |\ [b,a]=d,[c,a]=e,[d,a]=f,[d,b]=g,[e,c]=g>.\]
\[G_{81}:<a,b,c,d,e,f,g\ |\ [b,a]=d,[c,a]=e,[d,a]=f,[e,a]=f,[e,c]=g,[d,b]=g^{-1}>.\]
\[G_{89}:<a,b,c,d,e,f,g\ |\ [b,a]=d,[c,a]=e,[d,c]=f,[e,b]=f,[e,c]=g>.\]
\[G_{91}:<a,b,c,d,e,f,g\ |\ [b,a]=d,[c,a]=e,[d,b]=g,[d,c]=f,[e,a]=f,[e,b]=f>.\]
\[G_{100}:<a,b,c,d,e,f,g\ |\ [b,a]=c,[c,a]=e,[c,b]=g,[d,c]=f,[e,a]=f,[d,b]=eg>.\]
\[G_{105}:<a,b,c,d,e,f,g\ |\ [b,a]=c,[c,a]=d,[d,a]=e,[d,b]=f,[e,a]=g,[e,b]=f,[d,c]=f^{-1}>.\]
\[G_{110}:<a,b,c,d,e,f,g\ |\ [b,a]=g,[d,c]=g,[f,e]=g>.\]
\[G_{111}:<a,b,c,d,e,f,g\ |\ [b,a]=c,[c,a]=g,[d,b]=g,[f,e]=g>.\]
\[G_{112}:<a,b,c,d,e,f,g\ |\ [b,a]=c,[c,a]=d,[d,a]=g,[f,e]=g>.\]
\[G_{114}:<a,b,c,d,e,f,g\ |\ [b,a]=d,[c,a]=e,[d,a]=g,[e,c]=g,[f,b]=g>.\]
\[G_{117}:<a,b,c,d,e,f,g\ |\ [b,a]=d,[c,a]=e,[d,c]=g,[e,b]=g,[f,a]=g>.\]
\[G_{118}:<a,b,c,d,e,f,g\ |\ [b,a]=c,[c,a]=e,[d,b]=e,[d,c]=g,[e,a]=g,[f,d]=g>.\]
\[G_{120}:<a,b,c,d,e,f,g\ |\ [b,a]=c,[c,a]=e,[d,b]=e,[d,c]=g,[e,a]=g,[f,b]=g,[f,d]=g>.\]
\[G_{121}:<a,b,c,d,e,f,g\ |\ [b,a]=c,[c,a]=d,[d,a]=e,[e,a]=g,[f,b]=g>.\]
\[G_{122}:<a,b,c,d,e,f,g\ |\ [b,a]=c,[c,a]=d,[c,b]=g,[d,a]=e,[e,b]=g,[f,b]=g>.\]
\[G_{123}:<a,b,c,d,e,f,g\ |\ [b,a]=c,[c,a]=d,[d,a]=e,[d,b]=g,[e,b]=g,[f,a]=g>.\]
\[G_{131}:<a,b,c,d,e,f,g\ |\ [b,a]=e,[d,c]=f,[e,a]=g,[f,c]=g>.\]
\[G_{132}:<a,b,c,d,e,f,g\ |\ [b,a]=e,[d,b]=g,[d,c]=f,[e,a]=g,[f,c]=g>.\]
\[G_{140}:<a,b,c,d,e,f,g\ |\ [b,a]=c,[c,a]=e,[d,b]=f,[e,a]=g,[f,d]=g>.\]
\[G_{141}:<a,b,c,d,e,f,g\ |\ [b,a]=c,[c,a]=e,[c,b]=g,[d,b]=f,[e,a]=g>.\]
\[G_{142}:<a,b,c,d,e,f,g\ |\ [b,a]=c,[c,a]=e,[d,b]=f,[e,a]=g,[f,b]=g>.\]
\[G_{143}:<a,b,c,d,e,f,g\ |\ [b,a]=c,[c,a]=e,[d,b]=f,[d,c]=g,[e,a]=g,[f,a]=g,[f,b]=g>.\]
\[G_{144}:<a,b,c,d,e,f,g\ |\ [b,a]=c,[c,a]=e,[c,b]=f,[d,b]=f,[e,a]=g,[f,b]=g>.\]
\[G_{149}:<a,b,c,d,e,f,g\ |\ [b,a]=c,[c,a]=e,[d,a]=f,[e,a]=g,[f,d]=g>.\]
\[G_{152}:<a,b,c,d,e,f,g\ |\ [b,a]=c,[c,a]=e,[d,a]=f,[e,a]=g,[f,b]=g,[d,c]=g^{-1}>.\]
\[G_{162}:<a,b,c,d,e,f,g\ |\ [b,a]=c,[c,a]=d,[c,b]=e,[d,a]=f,[e,b]=g,[f,a]=g>.\]
\[G_{163}:<a,b,c,d,e,f,g\ |\ [b,a]=c,[c,a]=d,[c,b]=e,[d,a]=f,[f,b]=g,[d,c]=g^{-1},[e,a]=g^{-1}>.\]
\[G_{164}:<a,b,c,d,e,f,g\ |\ [b,a]=c,[c,a]=d,[c,b]=e,[d,a]=f,[e,b]=g,[f,b]=g,[d,c]=g^{-1},[e,a]=g^{-1}>.\]
Proof.: We give the proof in detail for \(G_{12}\), \(G_{16}\) and \(G_{132}\); the remaining cases can be proved in a similar way. Let \(G\cong G_{12}\). We can see that \([G_{12},G_{12}^{\varphi}]\) is generated by \([b,a^{\varphi}],[c,a^{\varphi}],[d,c^{\varphi}]\) modulo \({\mathcal{M}_{0}}^{*}(G_{12})\). Using Lemma 2.3 (vi), we have \[[[b,a^{\varphi}],[c,a^{\varphi}]]=[[b,a],[c,a]^{\varphi}]=[e,f^{\varphi}]\in{\mathcal{M}_{0}}^{*}(G_{12})\] and \[[[b,a^{\varphi}],[d,c^{\varphi}]]=[[b,a],[d,c]^{\varphi}]=[e,e^{\varphi}]\in{\mathcal{M}_{0}}^{*}(G_{12}).\] Also \[[[c,a^{\varphi}],[d,c^{\varphi}]]=[[c,a],[d,c]^{\varphi}]=[f,e^{\varphi}]\in{\mathcal{M}_{0}}^{*}(G_{12}).\] Hence any two elements of the generating set of \([G_{12},G_{12}^{\varphi}]\) commute modulo \({\mathcal{M}_{0}}^{*}(G_{12})\). By Proposition 2.12, each element of \([G_{12},G_{12}^{\varphi}]\) can be written as \[[b,a^{\varphi}]^{\alpha_{1}}[c,a^{\varphi}]^{\alpha_{2}}[d,c^{\varphi}]^{\alpha_{3}}\tilde{w},\] where \(\tilde{w}\in{\mathcal{M}_{0}}^{*}(G_{12})\) and \(\alpha_{i}\in\mathbb{Z}\) for \(1\leq i\leq 3\). Let \(w=[b,a^{\varphi}]^{\alpha_{1}}[c,a^{\varphi}]^{\alpha_{2}}[d,c^{\varphi}]^{\alpha_{3}}\tilde{w}\in{\mathcal{M}}^{*}(G_{12})\). Then \(1=\kappa^{*}(w)=e^{\alpha_{1}}f^{\alpha_{2}}e^{\alpha_{3}}=e^{\alpha_{1}+\alpha_{3}}f^{\alpha_{2}}\). Since \(e\) and \(f\) are elements of the polycyclic generating sequence and \(exp(G_{12})=p\), we have \(f^{\alpha_{2}}=e^{\alpha_{1}+\alpha_{3}}=1\), so \(p\) divides \(\alpha_{2}\) and \(\alpha_{1}+\alpha_{3}\), respectively. Hence there are two integers \(k\) and \(k^{\prime}\) such that \(\alpha_{2}=kp\) and \(\alpha_{1}+\alpha_{3}=k^{\prime}p\). Thus \[w=[b,a^{\varphi}]^{\alpha_{1}}[c,a^{\varphi}]^{\alpha_{2}}[d,c^{\varphi}]^{k^{\prime}p-\alpha_{1}}\tilde{w}=([b,a^{\varphi}][d,c^{\varphi}]^{-1})^{\alpha_{1}}[d,c^{\varphi}]^{k^{\prime}p}[c,a^{\varphi}]^{kp}\tilde{w}.\] We know that \(cl(G_{12})=2\). So Lemmas 2.3 and 2.4 imply that \[1=[c^{p},a^{\varphi}]=[c,a^{\varphi}]^{p}\quad,\quad 1=[d^{p},c^{\varphi}]=[d,c^{\varphi}]^{p}.\] Thus \(w=([b,a^{\varphi}][d,c^{\varphi}]^{-1})^{\alpha_{1}}\tilde{w}\). We claim that \([b,a^{\varphi}][d,c^{\varphi}]^{-1}\in{\mathcal{M}_{0}}^{*}(G_{12})\), and so the result follows. First, we use Lemma 2.3 (i) to check that \([cab,cad]=1\).
Indeed, \[[cab,cad] =[ca,cad][ca,cad,b][b,cad]\] \[=[ca,d][ca,ca][[ca,ca],d][b,d][b,ca][b,ca,d]\] \[=[c,d][c,d,a][a,d][b,d][b,a][b,c][b,c,a][[b,c][b,c,a],d]=e^{-1}e=1.\] Thus \([cab,(cad)^{\varphi}]=[cab,c^{\varphi}a^{\varphi}d^{\varphi}]\in{\mathcal{M}_{0}}^{*}(G_{12})\). Expanding it, we obtain \[[cab,c^{\varphi}a^{\varphi}d^{\varphi}]=[c,d^{\varphi}][c,d^{\varphi},a][a,d^{\varphi}][b,d^{\varphi}][b,a^{\varphi}][b,c^{\varphi}]\] \(c^{\alpha_{1}}e^{\alpha_{2}}f^{\alpha_{3}+\alpha_{4}}\). Since \(c\), \(e\) and \(f\) are elements of the polycyclic generating sequence and \(exp(G_{16})=p\), we have \(c^{\alpha_{1}}=e^{\alpha_{2}}=f^{\alpha_{3}+\alpha_{4}}=1\), and hence \(p\) divides \(\alpha_{1}\), \(\alpha_{2}\), and \(\alpha_{3}+\alpha_{4}\), respectively. We know \(cl(G_{16})=3\), so by Lemmas 2.3 and 2.4 we have \[1=[c^{p},b^{\varphi}]=[c,b^{\varphi}]^{p}[c,b^{\varphi},c]^{\binom{p}{2}}.\] Since \([c,b^{\varphi},c]^{\binom{p}{2}}=[c,b,c^{\varphi}]^{\binom{p}{2}}=[f,c^{\varphi}]^{\binom{p}{2}}=[f^{\binom{p}{2}},c^{\varphi}]=1\), we get \([c,b^{\varphi}]^{p}=1\). Also \[1=[b^{p},a^{\varphi}]=[b,a^{\varphi}]^{p}[b,a^{\varphi},b]^{\binom{p}{2}},\] and \[[b,a^{\varphi},b]^{\binom{p}{2}}=[b,a,b^{\varphi}]^{\binom{p}{2}}=[c,b^{\varphi}]^{\binom{p}{2}}=[c^{\binom{p}{2}},b^{\varphi}]=1.\] Thus \([b,a^{\varphi}]^{p}=1\). Similarly \([c,a^{\varphi}]^{p}=1\), and so \(w=([d,b^{\varphi}][c,b^{\varphi}]^{-1})^{\alpha_{3}}\tilde{w}\). We claim that \([d,b^{\varphi}][c,b^{\varphi}]^{-1}\in{\mathcal{M}_{0}}^{*}(G_{16})\), and so the result follows. Using Lemma 2.3 (i), we show that \([db,bc]=1\). Indeed, \[[db,bc] =[db,c][db,b][db,b,c]\] \[=[d,c][d,c,b][b,c][d,b][d,b,b][b,b][[d,b][b,b],c]=f^{-1}f=1.\] Thus \([db,(bc)^{\varphi}]\in{\mathcal{M}_{0}}^{*}(G_{16})\). Expanding it, we have \[[db,(bc)^{\varphi}] =[d,c^{\varphi}][d,c^{\varphi},b][b,c^{\varphi}][d,b^{\varphi}][b,b^{\varphi}][d,b^{\varphi},c^{\varphi}]\] \[\qquad[d,b^{\varphi},b,c^{\varphi}][b,b^{\varphi},c^{\varphi}]\] \[=[d,c^{\varphi}][d,c^{\varphi},b][b,c^{\varphi}][d,b^{\varphi}][d,b^{\varphi},b][b,b^{\varphi}][d,b,c^{\varphi}]\] \[\qquad[d,b,b,c^{\varphi}][b,b,c^{\varphi}].\] It is easy to see that all of the above commutators except \([b,c^{\varphi}]\) and \([d,b^{\varphi}]\) belong to \({\mathcal{M}_{0}}^{*}(G_{16})\). So modulo \({\mathcal{M}_{0}}^{*}(G_{16})\), \([b,c^{\varphi}][d,b^{\varphi}]=[d,b^{\varphi}][b,c^{\varphi}]=[d,b^{\varphi}][c,b^{\varphi}]^{-1}\), and \([d,b^{\varphi}][c,b^{\varphi}]^{-1}\in{\mathcal{M}_{0}}^{*}(G_{16})\), as required. Hence \(\tilde{B_{0}}(G_{16})=0\). Now let \(G\cong G_{132}\). By Proposition 2.12, the group \([G_{132},G_{132}^{\varphi}]\) is generated by \([b,a^{\varphi}],[d,b^{\varphi}],[d,c^{\varphi}],[e,a^{\varphi}]\), \([f,c^{\varphi}]\) modulo \({\mathcal{M}_{0}}^{*}(G_{132})\).
By Lemma 2.3 (vi), we have \[[[b,a^{\varphi}],[d,b^{\varphi}]]=[[b,a],[d,b]^{\varphi}]=[e,g^{\varphi}]\in{\mathcal{M}_{0}}^{*}(G_{132}).\] Similarly, \[[[b,a^{\varphi}],[d,c^{\varphi}]],[[b,a^{\varphi}],[e,a^{\varphi}]],[[b,a^{\varphi}],[f,c^{\varphi}]],[[d,b^{\varphi}],[d,c^{\varphi}]],[[d,b^{\varphi}],[e,a^{\varphi}]],\] \[[[d,b^{\varphi}],[f,c^{\varphi}]],[[d,c^{\varphi}],[e,a^{\varphi}]],[[d,c^{\varphi}],[f,c^{\varphi}]],[[e,a^{\varphi}],[f,c^{\varphi}]]\in{\mathcal{M}_{0}}^{*}(G_{132}).\] Thus any two elements of the generating set of \([G_{132},G_{132}^{\varphi}]\) commute modulo \({\mathcal{M}_{0}}^{*}(G_{132})\), and each element of \([G_{132},G_{132}^{\varphi}]\) can be written as \[[b,a^{\varphi}]^{\alpha_{1}}[d,b^{\varphi}]^{\alpha_{2}}[d,c^{\varphi}]^{\alpha_{3}}[e,a^{\varphi}]^{\alpha_{4}}[f,c^{\varphi}]^{\alpha_{5}}\tilde{w},\] where \(\tilde{w}\in{\mathcal{M}_{0}}^{*}(G_{132})\) and \(\alpha_{i}\in\mathbb{Z}\) for \(1\leq i\leq 5\). Let \(w=[b,a^{\varphi}]^{\alpha_{1}}[d,b^{\varphi}]^{\alpha_{2}}[d,c^{\varphi}]^{\alpha_{3}}[e,a^{\varphi}]^{\alpha_{4}}[f,c^{\varphi}]^{\alpha_{5}}\tilde{w}\in\mathcal{M}^{*}(G_{132})\). Then \(1=\kappa^{*}(w)=e^{\alpha_{1}}g^{\alpha_{2}}f^{\alpha_{3}}g^{\alpha_{4}}g^{\alpha_{5}}=e^{\alpha_{1}}f^{\alpha_{3}}g^{\alpha_{2}+\alpha_{4}+\alpha_{5}}\). Since \(e\), \(f\) and \(g\) are in the polycyclic generating sequence and \(\exp(G_{132})=p\), \(e^{\alpha_{1}}=f^{\alpha_{3}}=g^{\alpha_{2}+\alpha_{4}+\alpha_{5}}=1\), so \(p\) divides \(\alpha_{1}\), \(\alpha_{3}\) and \(\alpha_{2}+\alpha_{4}+\alpha_{5}\), respectively. Now Lemmas 2.3 (ii) and 2.4 imply that \[1=[b^{p},a^{\varphi}]=[b,a^{\varphi}]^{p}[b,a^{\varphi},b]^{\binom{p}{2}}.\] Since \[[b,a^{\varphi},b]^{\binom{p}{2}}=[b,a,b^{\varphi}]^{\binom{p}{2}}=[e,b^{\varphi}]^{\binom{p}{2}}=[e^{\binom{p}{2}},b^{\varphi}]=1,\] we have \([b,a^{\varphi}]^{p}=1\). Similarly \([d,c^{\varphi}]^{p}=1\). Thus \(w=([d,b^{\varphi}][f,c^{\varphi}]^{-1})^{\alpha_{2}}([e,a^{\varphi}][f,c^{\varphi}]^{-1})^{\alpha_{4}}\tilde{w}\) for any \(w\in\mathcal{M}^{*}(G_{132})\). If \(([d,b^{\varphi}][f,c^{\varphi}]^{-1})\) and \(([e,a^{\varphi}][f,c^{\varphi}]^{-1})\) belong to \({\mathcal{M}_{0}}^{*}(G_{132})\), then the result follows. We use Lemma 2.3 (i) to prove that \([dc,bf]=[ec,af]=1\). Expanding the commutators, we have \[[dc,bf] =[dc,f][dc,b][dc,b,f]\] \[=[d,f][d,f,c][c,f][d,b][d,b,c][c,b][[d,b][d,b,c][c,b],f]=g^{-1}g=1,\] \[[ec,af] =[ec,f][ec,a][ec,a,f]\] \[=[e,f][e,f,c][c,f][e,a][e,a,c][c,a][[e,a][e,a,c][c,a],f]=g^{-1}g=1.\] Thus \([dc,(bf)^{\varphi}]\) and \([ec,(af)^{\varphi}]\in{\mathcal{M}_{0}}^{*}(G_{132})\). Expanding \([dc,(bf)^{\varphi}]\), we obtain \[[dc,(bf)^{\varphi}] =[d,f^{\varphi}][d,f^{\varphi},c][c,f^{\varphi}][d,b^{\varphi}][d,b^{\varphi},c][c,b^{\varphi}][d,b^{\varphi},f^{\varphi}][d,b^{\varphi},c,f^{\varphi}][c,b^{\varphi},f^{\varphi}]\] \[=[d,f^{\varphi}][d,f,c^{\varphi}][c,f^{\varphi}][d,b^{\varphi}][d,b,c^{\varphi}][c,b^{\varphi}][d,b,f^{\varphi}][d,b,c,f^{\varphi}][c,b,f^{\varphi}].\] We can see that all of the above commutators except \([c,f^{\varphi}]\) and \([d,b^{\varphi}]\) belong to \({\mathcal{M}_{0}}^{*}(G_{132})\). So modulo \({\mathcal{M}_{0}}^{*}(G_{132})\), \([c,f^{\varphi}][d,b^{\varphi}]=[d,b^{\varphi}][c,f^{\varphi}]=[d,b^{\varphi}][f,c^{\varphi}]^{-1}\) and \([d,b^{\varphi}][f,c^{\varphi}]^{-1}\in{\mathcal{M}_{0}}^{*}(G_{132})\).
Also \[[ec,(af)^{\varphi}] =[ec,a^{\varphi}f^{\varphi}]=[ec,f^{\varphi}][ec,a^{\varphi}][ec,a^{\varphi},f^{\varphi}]\] \[=[e,f^{\varphi}][e,f^{\varphi},c][c,f^{\varphi}][e,a^{\varphi}][e,a^{\varphi},c][c,a^{\varphi}]\] \[\qquad[[e,a^{\varphi}][e,a^{\varphi},c][c,a^{\varphi}],f^{\varphi}]\] \[=[e,f^{\varphi}][e,f^{\varphi},c][c,f^{\varphi}][e,a^{\varphi}][e,a^{\varphi},c][c,a^{\varphi}]\] \[\qquad[e,a^{\varphi},f^{\varphi}][e,a^{\varphi},c,f^{\varphi}][c,a^{\varphi},f^{\varphi}]\] \[=[e,f^{\varphi}][e,f,c^{\varphi}][c,f^{\varphi}][e,a^{\varphi}][e,a,c^{\varphi}][c,a^{\varphi}]\] \[\qquad[e,a,f^{\varphi}][e,a,c,f^{\varphi}][c,a,f^{\varphi}].\] We can see that all of the above commutators except \([c,f^{\varphi}]\) and \([e,a^{\varphi}]\) belong to \({\mathcal{M}_{0}}^{*}(G_{132})\). So modulo \({\mathcal{M}_{0}}^{*}(G_{132})\), \([c,f^{\varphi}][e,a^{\varphi}]=[e,a^{\varphi}][f,c^{\varphi}]^{-1}\) and \([e,a^{\varphi}][f,c^{\varphi}]^{-1}\in{\mathcal{M}_{0}}^{*}(G_{132})\), as required. Hence \(\tilde{B_{0}}(G_{132})=0\). ## 4. **Groups of order \(p^{7}\) and exponent \(p\) (\(p>5\)) with non trivial Bogomolov multiplier** In this section, using a technique similar to the one used in [8, 12], we will show that the Bogomolov multiplier of some \(p\)-groups of order \(p^{7}\) and exponent \(p\) is non-trivial. First, in [8], Eick and Nickel introduced a useful algorithm for computing a consistent polycyclic presentation of the Schur multiplier and the non-abelian tensor square of a group, starting from a consistent polycyclic presentation of the group. Later, Jezernik and Moravec in [12] extended this method to the calculation of the Bogomolov multiplier and the curly exterior square of a polycyclic group given by a consistent presentation. Their main tool in this method is the computation of certain central extensions of a group given by a consistent presentation. Let \(G\) be a finite polycyclic group defined by a consistent polycyclic presentation \(F/R\), where \(F\) is the free group on generators \(x_{i}\) (\(1\leq i\leq n\)) for some \(n\), with the following relations \[{x_{i}}^{e_{i}}=\prod_{k=i+1}^{n}{x_{k}}^{z_{i,k}}\quad;\quad 1\leq i\leq n,\] \[[x_{i},x_{j}]=\prod_{k=i+1}^{n}{x_{k}}^{y_{i,j,k}}\quad;\quad 1\leq j<i\leq n.\] Note that in such a presentation we omit the trivial commutator relations. Similarly to [8, 12], we introduce \(l\) new generators \(t_{1},...,t_{l}\) (called tails) and define \(G_{\oslash}^{*}\) as the group generated by \(x_{1},...,x_{n},t_{1},...,t_{l}\) with the following relations \[x_{i}{}^{e_{i}}=\prod_{k=i+1}^{n}x_{k}{}^{z_{i,k}}\cdot t_{l(i)}\quad;\quad 1\leq i\leq n,\] \[[x_{i},x_{j}]=\prod_{k=i+1}^{n}x_{k}{}^{y_{i,j,k}}\cdot t_{l(i,j)}\quad;\quad 1\leq j<i\leq n.\] The next lemma states some facts about \(G_{\oslash}^{*}\). **Lemma 4.1**.: _[_8_, Lemma 1]_ _Let \(G\) be defined by the consistent polycyclic presentation \(F/R\) as above, and let \(T:=<t_{1},...,t_{l}>\). Then:_ 1. \(G_{\oslash}^{*}\cong F/[R,F]\), \(T\cong R/[R,F]\), \(G_{\oslash}^{*}/T\cong F/R\cong G\), 2. \(G_{\oslash}^{*}\) _is defined by a polycyclic presentation._ Therefore \(G_{\oslash}^{*}\) is a central extension of \(T\) by \(G\), but the given relations may determine an inconsistent presentation for \(G_{\oslash}^{*}\).
Now, evaluating the following consistency relations, we obtain relations between the tails: \[x_{k}(x_{j}x_{i})=(x_{k}x_{j})x_{i}\quad;\quad k>j>i,\] \[(x_{j}{}^{e_{j}})x_{i}=x_{j}{}^{e_{j}-1}(x_{j}x_{i})\quad;\quad j>i,\] \[x_{j}(x_{i}{}^{e_{i}})=(x_{j}x_{i})x_{i}{}^{e_{i}-1}\quad;\quad j>i,\] \[(x_{i}{}^{e_{i}})x_{i}=x_{i}(x_{i}{}^{e_{i}})\qquad\text{for all }i.\] In addition to the consistency relations, we evaluate the commutators \([x,y]\) in the extension \(G_{\oslash}^{*}\) for elements \(x,y\) that commute in \(G\); these also give rise to new relations between the tails. In fact, this step amounts to determining the subgroup \(\mathcal{M}_{0}(G)\cong<K(F)\cap R>/[R,F]\) of the Schur multiplier \(\mathcal{M}(G)\cong(R\cap F^{\prime})/[R,F]\), which was mentioned in the first two sections. We then factor \(G_{\oslash}^{*}\) by these extra relations. Using Gaussian elimination, we produce a generating set for all relations between the tails and collect them in the matrix \(T\). Then we introduce a new basis for the tails (say \(t_{l}^{*}\)) by using the Smith normal form \(S=PTQ\) of \(T\), obtained via two invertible matrices \(P\) and \(Q\). The abelian invariants of the group generated by the tails are identified as the elementary divisors of \(T\). Finally, the Bogomolov multiplier of \(G\) is recognized as the torsion subgroup of \(<t_{l}^{*}\ |\ 1\leq l\leq m>\) inside \(G_{\oslash}^{*}\) (for more information see [8, 12]). The theoretical basis of this method is the following proposition, proved by Jezernik and Moravec. **Proposition 4.2**.: _[_12_, Proposition 2.1]_ _Let \(G\) be a finite group presented by \(G=F/R\) with \(F\) free of rank \(n\). Denote by \(K(F)\) the set of commutators in \(F\). Then \(\tilde{B_{0}}(G)\) is isomorphic to the torsion subgroup of \(R/(K(F)\cap R)\) and the torsion-free factor \(R/([F,F]\cap R)\) is free abelian of rank \(n\). Moreover, every complement \(C\) to \(\tilde{B_{0}}(G)\) in \(R/(K(F)\cap R)\) yields a commutativity preserving central extension of \(\tilde{B_{0}}(G)\) by \(G\)._ Now, using this method, we show that the following groups have non-trivial Bogomolov multiplier. **Proposition 4.3**.: _All groups except those in Propositions 3.1, 3.2 and 3.3 have non-trivial Bogomolov multiplier._ Proof.: We state the proof in detail for \(G_{9}\); the remaining cases can be proved in a similar way. The group \(G_{9}\) has the following polycyclic presentation \[G_{9}=<a,b,c,d,e,f,g\ |\ [b,a]=c,[c,a]=d,[c,b]=[d,a]=e>.\] We add 11 tails to the presentation to make a quotient of the universal central extension of the system: \[a^{p}=t_{1},\ b^{p}=t_{2},\ c^{p}=t_{3},\ d^{p}=t_{4},\ e^{p}=t_{5},\ f^{p}=t_{6},\ g^{p}=t_{7},\] \[[b,a]=ct_{8},\ [c,a]=dt_{9},\ [c,b]=et_{10},\ [d,a]=et_{11}.\] So we have a new group generated by \(a,b,c,d,e,f,g\) and \(t_{i}\), where \(1\leq i\leq 11\). On the other hand, the consistency relations give the following relations between the tails: \[t_{8}{}^{p}t_{3}=1,\quad t_{9}{}^{p}t_{4}=1,\quad t_{10}{}^{p}t_{5}=1,\quad t_{11}{}^{p}t_{5}=1.\] Now we collect all the coefficients of these relations in a matrix and, using elementary row (column) operations, turn it into the upper triangular matrix \(T\):
\[T=\begin{pmatrix}0&0&1&0&0&0&0&p&0&0&0\\ 0&0&0&1&0&0&0&0&p&0&0\\ 0&0&0&0&1&0&0&0&0&p&0\\ 0&0&0&0&1&0&0&0&0&0&p\end{pmatrix}.\]
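As an illustrative aside (our own addition, not part of the original computation), the elementary divisors of \(T\) can be double-checked mechanically: the following minimal, self-contained Python sketch recovers them via the classical characterization of the Smith normal form, \(s_{k}=d_{k}/d_{k-1}\) with \(d_{k}\) the gcd of all \(k\times k\) minors. The concrete prime \(p=7\) is an assumption made only for the demo.

```python
from itertools import combinations
from math import gcd
from functools import reduce

def det(m):
    # Laplace expansion along the first row; fine for the small minors used here.
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([r[:j] + r[j + 1:] for r in m[1:]])
               for j in range(len(m)))

def elementary_divisors(T):
    rows, cols = len(T), len(T[0])
    d = [1]  # d_0 = 1 by convention
    for k in range(1, min(rows, cols) + 1):
        minors = [abs(det([[T[i][j] for j in cj] for i in ci]))
                  for ci in combinations(range(rows), k)
                  for cj in combinations(range(cols), k)]
        d.append(reduce(gcd, minors))
        if d[-1] == 0:
            break
    return [d[k] // d[k - 1] for k in range(1, len(d)) if d[k - 1] != 0]

p = 7  # illustrative prime (any p > 5)
T = [[0, 0, 1, 0, 0, 0, 0, p, 0, 0, 0],   # t_8^p  t_3 = 1
     [0, 0, 0, 1, 0, 0, 0, 0, p, 0, 0],   # t_9^p  t_4 = 1
     [0, 0, 0, 0, 1, 0, 0, 0, 0, p, 0],   # t_10^p t_5 = 1
     [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, p]]   # t_11^p t_5 = 1
print(elementary_divisors(T))  # [1, 1, 1, 7]: one divisor equal to p
```

The single divisor greater than \(1\) matches the conclusion \(\tilde{B_{0}}(G_{9})\cong\mathbb{Z}_{p}\) derived next.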
Then we change basis for the tails (say \(t_{l}^{*}\)) by using the Smith normal form \(S=PTQ\), which is obtained via the following two invertible matrices \(P\) and \(Q\):
\[P=\begin{pmatrix}1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&-1&1\end{pmatrix},\]
\[Q=\begin{pmatrix}1&0&0&0&0&0&0&0&0&0&0\\ 0&1&0&0&0&0&0&0&0&0&0\\ 0&0&1&0&0&0&0&-p&0&0&0\\ 0&0&0&1&0&0&0&0&-p&0&0\\ 0&0&0&0&1&0&0&0&0&-p&0\\ 0&0&0&0&0&1&0&0&0&0&0\\ 0&0&0&0&0&0&1&0&0&0&0\\ 0&0&0&0&0&0&0&1&0&0&0\\ 0&0&0&0&0&0&0&0&1&0&0\\ 0&0&0&0&0&0&0&0&0&1&0\\ 0&0&0&0&0&0&0&0&0&1&1\end{pmatrix},\]
\[S=\begin{pmatrix}0&0&1&0&0&0&0&0&0&0&0\\ 0&0&0&1&0&0&0&0&0&0&0\\ 0&0&0&0&1&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&p\end{pmatrix}.\]
Therefore the elementary divisors of the Smith normal form of \(T\) are \(1,1,1,p\). The element \(t_{11}^{*}\) corresponds to the divisor greater than \(1\). Thus, \(\tilde{B_{0}}(G)\cong<t_{11}^{*}\ |\ (t_{11}^{*})^{p}=1>\). We know the Bogomolov multiplier has only non-universal commutator relations as generators. So we must discard the new tails \(t_{i}^{*}\) whose corresponding elementary divisors are trivial or equal to \(1\). Converting back to the original tails \(t_{i}\), we have \(t_{11}^{*}=t_{11}\) and the other tails \(t_{i}\) are trivial. Thus we have a commutativity-preserving central extension (CP extension) of the tails subgroup by \(G_{9}\), with the following presentation \[<a,b,c,d,e,f,g,t_{11}^{*}\ |\ a^{p}=b^{p}=c^{p}=d^{p}=e^{p}=f^{p}=g^{p}=(t_{11}^{*})^{p}=1,\] \[[b,a]=c,[c,a]=d,[c,b]=e,[d,a]=et_{11}^{*}>.\] On the other hand, its derived subgroup is isomorphic to \(G_{9}\curlywedge G_{9}\) (see [12]). Also, we know \(t_{11}^{*}=[c,b][d,a]^{-1}\). Finally, we have \[\tilde{B_{0}}(G_{9})\cong<[c,b][d,a]^{-1}\ |\ ([c,b][d,a]^{-1})^{p}=1>\cong\mathbb{Z}_{p}.\] ## 5. **The Bogomolov multiplier of groups of order \(3^{7}\) and exponent \(3\)** According to Wilkinson's classification in [28], all groups of exponent \(3\) and order \(3^{7}\) are given by the \(16\) groups in the list with the following numbers \[\{1,2,4,6,10,11,12,13,35,36,37,38,39,43,44,110\}.\] The \(17\)th such group is the Burnside group on three generators, with the commutator relations \[G_{17}:\ [b,a]=d,\ [c,a]=e,\ [c,b]=f,\ [d,c]=g,\ [b,e]=g,\ [f,a]=g.\] Similar to the previous sections, \(\tilde{B_{0}}(G_{i})=0\) for \(i\neq 38\). Also, we can show that the Burnside group \(G_{17}\) has trivial Bogomolov multiplier. ## 6. **The Bogomolov multiplier of groups of order \(5^{7}\) and exponent \(5\)** Here also, according to Wilkinson's classification [28], all groups of exponent \(5\) and order \(5^{7}\) are given by the \(153\) groups in a list containing the following numbers \[\{1,...,29,35,...,82,84,...,102,110,...,120,126-n\ (n=0,...,4),130,...,147,\] \[148-n\ (n=2,3,4),149,150,...,152,153-n\ (n=0,...,4),154,155-n\ (n=2,3,4),\] \[156,157,158-n\ (n=1,2),159,160,161-kn\ (k=1,n=4)\}.\] Therefore \(\tilde{B_{0}}(G_{i})=0\), where \(i\in\{1,2,3,4,5,6,7,8,10,11,12,15,16,17,20,21,22,24,25,26,28,35,36,37,39,40,\) \(41,43,45,46,47,48,49,50,52,54,55,57,59,60,62,64,70,71,73,74,75,76,81,85,\) \(89,91,95,100,111,112,114,117,118,131,132,133,140,141,142,143,144,149,152\}.\)
2310.06417
Advective Diffusion Transformers for Topological Generalization in Graph Learning
Graph diffusion equations are intimately related to graph neural networks (GNNs) and have recently attracted attention as a principled framework for analyzing GNN dynamics, formalizing their expressive power, and justifying architectural choices. One key open question in graph learning is the generalization capability of GNNs. A major limitation of current approaches hinges on the assumption that the graph topologies in the training and test sets come from the same distribution. In this paper, we make steps towards understanding the generalization of GNNs by exploring how graph diffusion equations extrapolate and generalize in the presence of varying graph topologies. We first show deficiencies in the generalization capability of existing models built upon local diffusion on graphs, stemming from the exponential sensitivity to topology variation. Our subsequent analysis reveals the promise of non-local diffusion, which advocates for feature propagation over fully-connected latent graphs, under the assumption of a specific data-generating condition. In addition to these findings, we propose a novel graph encoder backbone, Advective Diffusion Transformer (ADiT), inspired by advective graph diffusion equations that have a closed-form solution backed up with theoretical guarantees of desired generalization under topological distribution shifts. The new model, functioning as a versatile graph Transformer, demonstrates superior performance across a wide range of graph learning tasks.
Qitian Wu, Chenxiao Yang, Kaipeng Zeng, Fan Nie, Michael Bronstein, Junchi Yan
2023-10-10T08:40:47Z
http://arxiv.org/abs/2310.06417v1
# Advective Diffusion Transformers for Topological Generalization in Graph Learning ###### Abstract Graph diffusion equations are intimately related to graph neural networks (GNNs) and have recently attracted attention as a principled framework for analyzing GNN dynamics, formalizing their expressive power, and justifying architectural choices. One key open question in graph learning is the generalization capability of GNNs. A major limitation of current approaches hinges on the assumption that the graph topologies in the training and test sets come from the same distribution. In this paper, we make steps towards understanding the generalization of GNNs by exploring how graph diffusion equations extrapolate and generalize in the presence of varying graph topologies. We first show deficiencies in the generalization capability of existing models built upon local diffusion on graphs, stemming from the exponential sensitivity to topology variation. Our subsequent analysis reveals the promise of non-local diffusion, which advocates for feature propagation over fully-connected latent graphs, under the assumption of a specific data-generating condition. In addition to these findings, we propose a novel graph encoder backbone, Advective Diffusion Transformer (ADiT), inspired by advective graph diffusion equations that have a closed-form solution backed up with theoretical guarantees of desired generalization under topological distribution shifts. The new model, functioning as a versatile graph Transformer, demonstrates superior performance across a wide range of graph learning tasks. Source code will be made publicly available. ## 1 Introduction Learning representations for non-Euclidean data is essential for geometric deep learning. Graph-structured data in particular has attracted increasing attention, as graphs are a very popular mathematical abstraction for systems of relations and interactions that can be applied from microscopic scales (e.g. molecules) to macroscopic ones (social networks). The most common framework for learning on graphs is graph neural networks (GNNs), which operate by propagating information between adjacent nodes of the graph (Scarselli et al., 2008; Gilmer et al., 2017; Kipf and Welling, 2017). GNNs are intimately related to graph diffusion equations (Atwood and Towsley, 2016; Klicpera et al., 2019; Chamberlain et al., 2021) and can be seen as discretized versions thereof. Considering GNNs as diffusion equations offers powerful tools from the domain of partial differential equations (PDEs), allowing one to study the expressive power (Bodnar et al., 2022), behaviors such as over-smoothing (Rusch et al., 2023; Di Giovanni et al., 2022) and over-squashing (Topping et al., 2022), the settings of missing features (Rossi et al., 2022), and to guide architectural choices (Di Giovanni et al., 2022). While significant efforts have been devoted to understanding the expressive power of GNNs and similar architectures for graph learning, the generalization capabilities of such methods are largely an open question. In many important real-world settings, the training and testing graph topologies can be generated from different distributions (a phenomenon referred to as _"topological shift"_) (Koh et al., 2021; Hu et al., 2021; Bazhenov et al., 2023; Zhang et al., 2023). Generalization to testing data with new unseen topological patterns can be highly challenging when training observations are insufficient.
One of the established principles in prior works resorts to the invariant underlying mechanism (Rojas-Carulla et al., 2018; Arjovsky et al., 2019; Scholkopf et al., 2021) that governs the shared data-generating process and enables generalization across environments. However, unlike in Euclidean space, in the case of graphs the invariant topological features can be more abstract and complex, making it hard to come up with a single model to resolve the challenge. **Contributions.** We explore how graph diffusion equations (and derived GNN architectures) generalize in the presence of topological shifts. We show that current models relying on local graph diffusion suffer from undesirable sensitivity to variations in graph structure, making it difficult to achieve stable and reliable predictions and potentially hampering generalization. Extending the diffusion operators to latent fully-connected graphs in principle allows ideal generalization if the ground-truth labels are independent of the observed graphs in data generation, which is however often violated in practice. To overcome this problem, we introduce a novel method for learning graph representations based on _advective diffusion_ equations. We connect advective diffusion with a Transformer-like architecture particularly designed for the challenging topological generalization: the non-local diffusion term (instantiated as global attention) aims to capture invariant latent interactions that are insensitive to the observed graphs; the advection term (instantiated as local message passing) accommodates the observed topological patterns specific to environments. We prove that the closed-form solution of this new diffusion system possesses the capability to control the rate of change in node representations w.r.t. topological variations at arbitrary orders. This further produces a guarantee of the desired level of generalization under topological shifts. For efficiently calculating the solution of the diffusion equation, we use a numerical scheme based on the Pade-Chebyshev theory (Golub & Van Loan, 1989). Experiments show that our model, which we call _Advective Diffusion Transformer (ADiT)_, offers superior generalization across a broad spectrum of graph ML tasks in diverse domains, including social and citation networks, molecular screening, and protein interactions. ## 2 Background and Preliminaries As building blocks of our methodology, we first recapitulate diffusion equations on manifolds (Freidlin & Wentzell, 1993; Medvedev, 2014) and their established connection with graph representations. **Diffusion on Riemannian manifolds.** Let \(\Omega\) denote an abstract domain, which we assume here to be a Riemannian manifold (Eells & Sampson, 1964). A key feature distinguishing an \(n\)-dimensional Riemannian manifold from a Euclidean space is the fact that it is only _locally_ Euclidean, in the sense that at every point \(u\in\Omega\) one can construct an \(n\)-dimensional Euclidean _tangent space_\(T_{u}\Omega\cong\mathbb{R}^{n}\) that locally models the structure of \(\Omega\). The collection of such spaces (referred to as the _tangent bundle_ and denoted by \(T\Omega\)) is further equipped with a smoothly-varying inner product (_Riemannian metric_). Now consider some quantity (e.g., temperature) as a function of the form \(q:\Omega\to\mathbb{R}\), which we refer to as a _scalar field_.
Similarly, we can define a _tangent vector field_\(Q:\Omega\to T\Omega\), associating to every point \(u\) on a manifold a tangent vector \(Q(u)\in T_{u}\Omega\), which can be thought of as a local infinitesimal displacement. We use \(\mathcal{Q}(\Omega)\) and \(\mathcal{Q}(T\Omega)\) to denote the functional spaces of scalar and vector fields, respectively. The _gradient_ operator \(\nabla:\mathcal{Q}(\Omega)\to\mathcal{Q}(T\Omega)\) takes scalar fields into vector fields representing the local direction of the steepest change of the field. The _divergence_ operator is the adjoint of the gradient and maps in the opposite direction, \(\nabla^{*}:\mathcal{Q}(T\Omega)\to\mathcal{Q}(\Omega)\). A manifold diffusion process models the evolution of a quantity (e.g., temperature or chemical concentration) due to its difference across spatial locations on \(\Omega\). Denoting by \(q(u,t):\Omega\times[0,\infty)\to\mathbb{R}\) the quantity over time \(t\), the process is described by a PDE (_diffusion equation_) (Romeny, 2013): \[\frac{\partial q(u,t)}{\partial t}=\nabla^{*}\left(S(u,t)\odot\nabla q(u,t)\right),\ \ t\geq 0,u\in\Omega\ \ \ \text{with initial conditions}\ \ q(u,0)=q_{0}(u), \tag{1}\] and possibly additional boundary conditions if \(\Omega\) has a boundary. \(S\) denotes the _diffusivity_ of the domain. It is typical to distinguish between _isotropic_ (location-independent diffusivity), _non-homogeneous_ (location-dependent diffusivity \(S=s(u)\in\mathbb{R}\)), and _anisotropic_ (location- and direction-dependent \(S(u)\in\mathbb{R}^{n\times n}\)) settings. In the cases studied below, we will assume that the diffusivity depends on the location via a function of the quantity itself, i.e., \(S=S(q(u,t))\). **Diffusion on Graphs.** Recent works leverage diffusion equations as a foundational principle for learning graph representations (Chamberlain et al., 2021; Thorpe et al., 2022; Bodnar et al., 2022; Choi et al., 2023; Rusch et al., 2023), employing analogies between calculus on manifolds and graphs. Let \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) be a graph with nodes \(\mathcal{V}\) and edges \(\mathcal{E}\), represented by the \(|\mathcal{V}|\times|\mathcal{V}|\)_adjacency matrix_\(\mathbf{A}\). Let \(\mathbf{X}=[\mathbf{x}_{u}]_{u\in\mathcal{V}}\) denote a \(|\mathcal{V}|\times D\) matrix of node features, analogous to scalar fields on manifolds. The graph gradient \((\nabla\mathbf{X})_{uv}=\mathbf{x}_{v}-\mathbf{x}_{u}\) defines edge features for \((u,v)\in\mathcal{E}\), analogous to a vector field on a manifold. Similarly, the graph divergence of edge features \(\mathbf{E}=[\mathbf{e}_{uv}]_{(u,v)\in\mathcal{E}}\), defined as the adjoint \((\nabla^{*}\mathbf{E})_{u}=\sum_{v:(u,v)\in\mathcal{E}}\mathbf{e}_{uv}\), produces node features.
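To make these discrete operators concrete, here is a minimal NumPy sketch (our own illustration; the toy edge list is an assumption) implementing the graph gradient and divergence exactly as defined above:

```python
import numpy as np

# Toy directed edge list on 4 nodes and random node features X (|V| x D).
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
X = np.random.randn(4, 2)

def graph_gradient(X, edges):
    # (grad X)_{uv} = x_v - x_u : one feature vector per edge (an "edge field").
    return np.stack([X[v] - X[u] for (u, v) in edges])

def graph_divergence(E, edges, num_nodes):
    # (div E)_u = sum of e_{uv} over edges (u, v): maps edge fields back to nodes.
    out = np.zeros((num_nodes, E.shape[1]))
    for k, (u, v) in enumerate(edges):
        out[u] += E[k]
    return out

# Composing the two gives sum_v (x_v - x_u), a Laplacian-type operator on X.
LX = graph_divergence(graph_gradient(X, edges), edges, num_nodes=4)
```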
Diffusion-based approaches replace discrete GNN layers with continuous time-evolving node embeddings \(\mathbf{Z}(t)=[\mathbf{z}_{u}(t)]\), where \(\mathbf{z}_{u}(t):[0,\infty)\rightarrow\mathbb{R}^{D}\) is driven by the graph diffusion equation, \[\frac{\partial\mathbf{Z}(t)}{\partial t}=\nabla^{*}\left(\mathbf{S}(\mathbf{Z}(t),t;\mathbf{A})\odot\nabla\mathbf{Z}(t)\right),\;\;t\geq 0,\;\;\;\text{with initial conditions}\;\;\mathbf{Z}(0)=\phi_{enc}(\mathbf{X}), \tag{2}\] where \(\phi_{enc}\) is a node-wise MLP encoder and, w.l.o.g., the diffusivity \(\mathbf{S}(\mathbf{Z}(t),t;\mathbf{A})\) over the graph can be defined as a \(|\mathcal{V}|\times|\mathcal{V}|\) matrix-valued function dependent on \(\mathbf{A}\), which measures the rate of information flow between node pairs. With the graph gradient and divergence, Eqn. 2 becomes \[\frac{\partial\mathbf{Z}(t)}{\partial t}=(\mathbf{C}(\mathbf{Z}(t),t;\mathbf{A})-\mathbf{I})\mathbf{Z}(t),\;\;0\leq t\leq T,\;\;\;\text{with initial conditions}\;\;\mathbf{Z}(0)=\phi_{enc}(\mathbf{X}), \tag{3}\] where \(\mathbf{C}(\mathbf{Z}(t),t;\mathbf{A})\) is a \(|\mathcal{V}|\times|\mathcal{V}|\) coupling matrix associated with the diffusivity. Eqn. 3 yields a dynamics from \(t=0\) to an arbitrary given stopping time \(T\), where the latter gives node representations for prediction, e.g., \(\hat{\mathbf{Y}}=\phi_{dec}(\mathbf{Z}(T))\). The coupling matrix determines the interactions between different nodes in the graph, and its common instantiations include the normalized adjacency (non-parametric) and learnable attention matrix (parametric), in which case the finite-difference numerical iterations for solving Eqn. 3 correspond to the discrete propagation layers of common GNNs (Chamberlain et al., 2021) and Transformers (Wu et al., 2023) (see Appendix A for details). It is typical to tacitly make a closed-world assumption, i.e., that the graph topologies of training and testing data are generated from the same distribution. The challenge of generalization arises when the testing graph topology is different from the training one. In such an open-world regime, it still remains unexplored how graph diffusion equations extrapolate and generalize to new unseen structures. ## 3 Can Graph Diffusion Generalize? As a prerequisite for analyzing the generalization behaviors of graph diffusion models, we need to characterize how topological shifts happen in nature. In a general sense, extrapolation is impossible without any exposure to the new data or prior knowledge about the data-generating mechanism. In our work, we assume testing data is strictly unknown during training, in which case structural assumptions become necessary to enable generalization. ### Problem Formulation: Graph Data Generation We present the underlying data-generating mechanism of graph data in Fig. 1, inspired by graph limits (Lovasz and Szegedy, 2006; Medvedev, 2014) and random graph models (Snijders and Nowicki, 1997). In graph theory, the topology of a graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) can be assumed to be generated by a _graphon_ (or continuous graph limit), a random symmetric measurable function \(W:[0,1]^{2}\rightarrow[0,1]\), which is an unobserved latent variable. In our work, we generalize this data-generating mechanism to include, alongside the graph adjacency, also node features and labels, as follows: **i)** Each node \(u\in\mathcal{V}\) has a latent i.i.d. variable \(U_{u}\sim U[0,1]\).
The _node features_ are a random variable \(X=[X_{u}]\) generated from each \(U_{u}\) through a certain node-wise function \(X_{u}=g(U_{u};W)\). We denote by matrix \(\mathbf{X}\) a particular realization of the random variable \(X\). **ii)** Similarly, the _graph adjacency_\(A=[A_{uv}]\) is a random variable generated through a pairwise function \(A_{uv}=h(U_{u},U_{v};W,E)\) additionally dependent on the _environment_\(E\). The change of \(E\) happens when transferring from training to testing, resulting in a different distribution of \(A\). We denote by \(\mathbf{A}\) a particular realization of the adjacency matrix. **iii)** The _label_\(Y\) can be specified in certain forms. In graph-level tasks (as we assume below), \(Y\) is generated by a function over sets, \(Y=r(\{U_{v}\}_{v\in\mathcal{V}},A;W)\). Denote by \(\mathbf{Y}\) a realization of \(Y\).

Figure 1: The data-generating mechanism with topological shifts caused by environment \(E\). The solid (resp. dashed) nodes represent observed (resp. latent) random variables.

The above process formalizes the data-generating mechanism behind various data of inter-dependent nature. It boils down to finding parameters \(\theta\) of a parametric function \(\Gamma_{\theta}(\mathbf{A},\mathbf{X})\) that establishes the predictive mapping from observed node features \(\mathbf{X}\) and graph adjacency \(\mathbf{A}\) to the label \(\mathbf{Y}\). \(\Gamma_{\theta}\) is typically implemented as a GNN, which is expected to possess sufficient _expressive power_ (in the sense that \(\exists\theta\) such that \(\Gamma_{\theta}(\mathbf{A},\mathbf{X})\approx\mathbf{Y}\)) as well as _generalization capability_ under topological distribution shift (i.e., when the observed graph topology varies from training to testing, which in our model amounts to a change in \(E\)). While significant attention in the literature has been devoted to the former property (Morris et al., 2019; Xu et al., 2019; Bouritsas et al., 2023; Papp et al., 2021; Balcilar et al., 2021; Bodnar et al., 2022), the latter is largely an open question.
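As a concrete (toy) instantiation of this generative process, the following sketch samples data according to the scheme above; the specific forms of \(g\), \(h\) and \(r\) are our own hypothetical choices, with the environment \(E\) entering only through the edge-probability function \(h\):

```python
import numpy as np

def generate_graph(n, env, rng):
    U = rng.uniform(0.0, 1.0, size=n)                   # latent variables U_u
    X = np.stack([np.sin(2 * np.pi * U), U ** 2], 1)    # features X_u = g(U_u; W)
    # A_uv ~ Bernoulli(h(U_u, U_v; W, E)): the environment rescales edge
    # probabilities, producing a topological shift between train and test.
    P = env * np.exp(-np.abs(U[:, None] - U[None, :]))
    A = (rng.uniform(size=(n, n)) < P).astype(float)
    A = np.triu(A, 1); A = A + A.T                      # undirected, no self-loops
    y = float(U.mean() + 0.1 * A.mean() > 0.6)          # label Y = r({U_u}, A; W)
    return X, A, y

rng = np.random.default_rng(0)
X_tr, A_tr, y_tr = generate_graph(100, env=0.3, rng=rng)  # training environment
X_te, A_te, y_te = generate_graph(100, env=0.8, rng=rng)  # shifted test environment
```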
**Proposition 1**.: _If the coupling matrix \(\mathbf{C}\) is set as the normalized adjacency \(\tilde{\mathbf{A}}=\mathbf{D}^{-1}\mathbf{A}\) or \(\tilde{\mathbf{A}}=\mathbf{D}^{-1/2}\mathbf{A}\mathbf{D}^{-1/2}\), where \(\mathbf{D}\) denotes the diagonal degree matrix of \(\mathbf{A}\), then the change of \(\mathbf{Z}(T;\tilde{\mathbf{A}})\) given by Eqn. 3 w.r.t. a perturbation \(\Delta\tilde{\mathbf{A}}\) is \(\|\mathbf{Z}(T;\tilde{\mathbf{A}}+\Delta\tilde{\mathbf{A}})-\mathbf{Z}(T; \tilde{\mathbf{A}})\|_{2}=\mathcal{O}(\exp\left(\|\Delta\tilde{\mathbf{A}}\|_{ 2}T\right))\)._ The consequence of this result is that the label prediction \(\hat{\mathbf{Y}}=\phi_{dec}(\mathbf{Z}(T;\tilde{\mathbf{A}}))\) can be highly (exponentially) sensitive to the change of the graph topology. Under the assumption of our graph generation model, in which the graph adjacency is a realization of a random variable \(A_{uv}=h(U_{u},U_{v};W,E)\) dependent on a varying environment \(E\), this may result in poor generalization.1 Proposition 1 can be extended to the multi-layer model comprised of multiple piece-wise diffusion dynamics with feature transformations (e.g., neural networks) in-between layers (see Appendix B.2). Footnote 1: The influence of topology variation is inherently associated with \(h\). For example, if one considers \(h\) as the stochastic block model (Snijders and Nowicki, 1997), then the change of \(E\) may lead to generated graph data with different edge probabilities. In the case of real-world data with intricate topological patterns, the functional forms of \(h\) can be more complex, consequently inducing different types of topological shifts. _Non-Linear Diffusion._ In a more general setting, the diffusivity can be time-dependent. The analogy in GNN architectures, e.g., GAT (Velickovic et al., 2018), is layer-wise propagation that can aggregate neighboring nodes' signals with adaptive strengths across edges. Consider the time-dependent case used in (Chamberlain et al., 2021), where \(\mathbf{C}(t)\) depends on \(\mathbf{Z}(t)\) throughout the diffusion process: \[\mathbf{C}(\mathbf{Z}(t);\mathbf{A})=[c_{uv}(t)]_{u,v\in\mathcal{V}},\quad c_{ uv}(t)=\mathbb{I}[(u,v)\in\mathcal{E}]\cdot\frac{\eta(\mathbf{z}_{u}(t), \mathbf{z}_{v}(t))}{\sum_{w:(u,w)\in\mathcal{E}}\eta(\mathbf{z}_{u}(t),\mathbf{z }_{w}(t))}, \tag{4}\] where \(\eta:\mathbb{R}^{d}\times\mathbb{R}^{d}\rightarrow\mathbb{R}\) denotes a pairwise function ("attention"). While such a non-linear diffusion equation no longer has a closed-form solution, we can generalize our previous result as follows: **Proposition 2**.: _For arbitrary time limit \(T\) and bounded function \(\eta\), the change of \(\mathbf{Z}(T)\) by the diffusion model Eqn. 3 with \(\mathbf{C}(\mathbf{Z}(t);\mathbf{A})\) by Eqn. 4 w.r.t. a perturbation \(\Delta\mathbf{A}\) is \(\mathcal{O}\left(\exp\left(\|\Delta\mathbf{A}\|_{2}T\right)\right)\)._ The analysis so far suggests a common limitation of local graph diffusion equations with different instantiations, i.e., the sensitivity of the output states w.r.t. the change of graph topology. This implies the potential failure of this model class under the challenge of generalization where the graph topology varies from training to testing. Moreover, the analysis suggests that the crux of the matter lies in the diffusion operators, which determine the effect of graph structures throughout the diffusion process.
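To make the sensitivity in Proposition 1 tangible, the following is a minimal PyTorch sketch (our illustration, not the paper's code) that solves the closed-form linear diffusion \(\mathbf{Z}(T)=e^{-(\mathbf{I}-\tilde{\mathbf{A}})T}\mathbf{Z}(0)\) for a random graph and an edge-perturbed copy, and measures how much the output changes; the graph size, edge probabilities, and \(T\) are arbitrary placeholder values.

```python
import torch

torch.manual_seed(0)
n, d, T = 50, 8, 5.0

def sym_norm_adj(A):
    # symmetric normalization D^{-1/2} A D^{-1/2}
    deg = A.sum(dim=1).clamp(min=1.0)
    d_inv_sqrt = deg.pow(-0.5)
    return d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

def linear_diffusion(A_norm, Z0, T):
    # closed-form solution Z(T) = exp(-(I - A_norm) T) Z(0)
    L = torch.eye(A_norm.shape[0]) - A_norm
    return torch.linalg.matrix_exp(-L * T) @ Z0

A = (torch.rand(n, n) < 0.1).float()
A = torch.triu(A, 1); A = A + A.T           # undirected, no self-loops
Z0 = torch.randn(n, d)

# perturb the topology: flip a small fraction of edges
mask = (torch.rand(n, n) < 0.02).float()
mask = torch.triu(mask, 1); mask = mask + mask.T
A_pert = torch.abs(A - mask)                # XOR-style edge flips

Z_T = linear_diffusion(sym_norm_adj(A), Z0, T)
Z_T_pert = linear_diffusion(sym_norm_adj(A_pert), Z0, T)
print("||dZ(T)||_2 =", torch.linalg.matrix_norm(Z_T - Z_T_pert, ord=2).item())
```

Re-running with larger \(T\) or larger edge-flip rates makes the output gap grow quickly, consistent with the exponential dependence in the proposition.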
### Non-Local Graph Diffusion and Generalization with Conditions We proceed to extend our discussion to another class of neural diffusion models that resort to non-local diffusion operators allowing instantaneous information flows among arbitrary locations (Chasseigne et al., 2006). In the context of learning on graphs, non-local diffusion can be seen as generalizing feature propagation to a _complete_ or fully-connected (latent) graph (Wu et al., 2023), in contrast with common GNNs that allow message passing only between neighboring nodes. Formally speaking, we can define the gradient and divergence operators on a complete graph: \((\nabla\mathbf{X})_{uv}=\mathbf{x}_{v}-\mathbf{x}_{u}\) (\(u,v\in\mathcal{V}\)) and \((\nabla^{*}\mathbf{E})_{u}=\sum_{v\in\mathcal{V}}\mathbf{e}_{uv}\) (\(u\in\mathcal{V}\)). The corresponding diffusion equation still exhibits the form of Eqn. 3. Nevertheless, unlike the models studied in Sec. 3.2, which assume that \(\mathbf{C}(t)\) has non-zero entries \(c_{uv}(t)\neq 0\) only for neighboring node pairs \((u,v)\in\mathcal{E}\), the non-local diffusion model allows non-zero \(c_{uv}(t)\) for arbitrary \((u,v)\)'s to accommodate the all-pair information flows. For example, the coupling matrix can be instantiated as the global attention \(\mathbf{C}(\mathbf{Z}(t))=[c_{uv}(t)]_{u,v\in\mathcal{V}}\) with \(c_{uv}(t)=\frac{\eta(\mathbf{z}_{u}(t),\mathbf{z}_{v}(t))}{\sum_{w\in\mathcal{V}}\eta(\mathbf{z}_{u}(t),\mathbf{z}_{w}(t))}\), in which case the finite-difference iteration of the non-local diffusion equation corresponds to a Transformer layer (Vaswani et al., 2017) (see details in Appendix A). The non-local diffusion model essentially learns latent interaction graphs among nodes from input data and is agnostic to the observed graph. For the predictive function \(\Gamma_{\theta}\) built by the diffusion equation along with the encoder \(\phi_{enc}\) and decoder \(\phi_{dec}\), we can theoretically guarantee topological generalization when \(Y\) is conditionally independent of \(A\) within the data-generating process in Sec. 3.1. **Proposition 3**.: _Suppose the label \(Y\) is conditionally independent of \(A\) given \(\{U_{u}\}_{u\in\mathcal{V}}\) in the data generation hypothesis of Sec. 3.1,_
_then for the non-local diffusion model \(\Gamma_{\theta}\) minimizing the empirical risk \(\mathcal{R}_{emp}(\Gamma_{\theta};E_{tr})=\frac{1}{N_{tr}}\sum_{i=1}^{N_{tr}}l( \Gamma_{\theta}(\mathbf{X}^{(i)},\mathbf{A}^{(i)}),\mathbf{Y}^{(i)})\) over training data \(\{(\mathbf{X}^{(i)},\mathbf{A}^{(i)},\mathbf{Y}^{(i)})\}\) generated from \(p(X,A,Y|E=E_{tr})\), the generalization error on unseen data \((\mathbf{X}^{\prime},\mathbf{A}^{\prime},\mathbf{Y}^{\prime})\) from a new environment \(E_{te}\neq E_{tr}\) is bounded with confidence \(1-\delta\): \(\mathcal{R}(\Gamma_{\theta};E_{te})\triangleq\)_ \[\mathbb{E}_{(\mathbf{X}^{\prime},\mathbf{A}^{\prime},\mathbf{Y}^{\prime}) \sim p(X,A,Y|E=E_{te})}[l(\Gamma_{\theta}(\mathbf{X}^{\prime},\mathbf{A}^{ \prime}),\mathbf{Y}^{\prime})]\leq\mathcal{R}_{emp}(\Gamma_{\theta};E_{tr})+ \mathcal{D}_{1}(\Gamma,N_{tr}), \tag{5}\] _where \(\mathcal{D}_{1}(\Gamma,N_{tr})=\sqrt{(1/2N_{tr})(\log|\mathcal{H}(\Gamma)|+ \log(2/\delta))}\), \(|\mathcal{H}(\Gamma)|\) denotes the hypothesis space size of \(\Gamma\), \(N_{tr}\) is the size of the training set, and \(l\) denotes the loss function._ The conditional independence between \(Y\) and \(A\), however, can be violated in many situations where labels strongly correlate with observed graph structures. In such cases, non-local diffusion alone, discarding any observed structural information, could be insufficient for generalization. ## 4 Graph Advective Diffusion for Topological Generalization The preceding analysis reveals that the obstacles for graph diffusion models to achieving generalization arise from the non-fulfillment of two critical criteria: i) the diffusion process is capable of learning useful topological patterns; ii) the node representations are insensitive to variation of graph structures. While balancing these two objectives can be challenging due to the inherent trade-off, we present a novel graph diffusion model in this section that offers a provable level of generalization. The new model is inspired by a different class of diffusion equations, _advective diffusion_. ### Model Formulation: Graph Advective Diffusion **Advective Diffusion Equations.** We first introduce the classic advective diffusion commonly used for characterizing physical systems with convoluted quantity transfers, where the term _advection_ (or _convection_) refers to the evolution caused by the movement of the diffused quantity (Chandrasekhar, 1943). Consider the abstract domain \(\Omega\) of our interest defined in Sec. 2, and assume \(V(u,t)\in T_{u}\Omega\) (a vector field in \(\Omega\)) to denote the velocity of the particle at location \(u\) and time \(t\). The advective diffusion of the physical quantity \(q\) on \(\Omega\) is governed by the PDE (Leveque, 1992) \[\frac{\partial q(u,t)}{\partial t}=\underbrace{\nabla^{*}\left(S(u,t)\odot \nabla q(u,t)\right)}_{\text{diffusion}}+\beta\underbrace{\nabla^{*}\left(V(u,t) \cdot q(u,t)\right)}_{\text{advection}},\;\;t\geq 0,u\in\Omega;\;\;q(u,0)=q_{0}(u), \tag{6}\] where \(\beta\geq 0\) is a weight. For example, if we consider \(q(u,t)\) as the water salinity in a river, then Eqn. 6 describes the temporal evolution of salinity at each location, which equals the spatial transfer from both the diffusion process (caused by the concentration difference of salt, where \(S\) reflects the molecular diffusivity in the water) and the advection process (caused by the movement of the water, where \(V\) characterizes the flowing directions).
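For intuition on Eqn. 6, a minimal NumPy sketch (ours; all parameter values are illustrative, not from the paper) simulates the 1D advection-diffusion equation with constant diffusivity \(S\) and velocity \(V\) via explicit finite differences:

```python
import numpy as np

# 1D advection-diffusion: dq/dt = S * d2q/dx2 + beta * V * dq/dx
# Explicit finite differences with periodic boundaries; the step sizes
# satisfy the stability conditions S*dt/dx^2 <= 0.5 and |V|*dt/dx <= 1.
nx, nt = 200, 500
dx, dt = 1.0 / nx, 1e-4
S, V, beta = 0.1, 1.0, 1.0

x = np.linspace(0.0, 1.0, nx)
q = np.exp(-200.0 * (x - 0.3) ** 2)   # initial Gaussian pulse

for _ in range(nt):
    d2q = (np.roll(q, -1) - 2 * q + np.roll(q, 1)) / dx**2  # diffusion term
    dq = (np.roll(q, -1) - np.roll(q, 1)) / (2 * dx)        # advection term
    q = q + dt * (S * d2q + beta * V * dq)

print("pulse peak is now at x =", x[np.argmax(q)])
```

The diffusion term spreads the pulse out while the advection term translates it, mirroring the two transport mechanisms that the graph model below combines.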
Similarly, on a graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), we can define the velocity at each node \(u\) as a \(|\mathcal{V}|\)-dimensional vector \(\mathbf{v}_{u}(t)\), stacked into the matrix-valued function \(\mathbf{V}(t)=[v_{uv}(t)]\). Then, we have \((\nabla^{*}(\mathbf{V}(t)\cdot\mathbf{Z}(t)))_{u}=\sum_{v\in\mathcal{V}}v_{ uv}(t)\mathbf{z}_{v}(t)\), giving rise to the graph advective diffusion equation: \[\frac{\partial\mathbf{Z}(t)}{\partial t}=\left[\mathbf{C}(\mathbf{Z}(t),t)+ \beta\mathbf{V}(t)-\mathbf{I}\right]\mathbf{Z}(t),\;\;\;0\leq t\leq T. \tag{7}\] **Graph Advective Diffusion.** We proceed to discuss how to properly define the coupling matrix \(\mathbf{C}\) and the velocity \(\mathbf{V}\) to ensure that advective diffusion equations are stable under topological shifts. Our inspiration stems from the recent research line in the pursuit of invariance in data generation (Rojas-Carulla et al., 2018; Arjovsky et al., 2019; Scholkopf et al., 2021), where the principle of (out-of-distribution) generalization lies in enforcing a proper inductive bias that guides the model to capture the invariant underlying mechanism shared across environments. Different from natural data in Euclidean space (e.g., images), the invariant topological patterns in graphs can be much more difficult to capture given their abstract and versatile characteristics. We next generalize the invariance principle as an important inductive bias integrated into the advective diffusion for generalization purposes (with illustration in Fig. 2). _Non-local diffusion as global attention_. The diffusion process led by the concentration gradient acts as an internal driving force, where the diffusivity keeps invariant across environments (e.g., the molecular diffusivity stays constant in different rivers). This resonates with the environment-invariant latent interactions among nodes, determined by the underlying data manifold, that induce all-pair information flows over a complete graph. We thus follow Sec. 3.3 and instantiate \(\mathbf{C}\) as a global attention that computes the similarities between arbitrary node pairs. _Advection as local message passing_. The advection process driven by the directional movement acts as an external force, with the velocity depending on contexts (e.g., different rivers). This is analogous to the environment-sensitive graph topology that is informative for prediction in specific environments. We instantiate the velocity as the normalized adjacency \(\mathbf{V}=\tilde{\mathbf{A}}\) that reflects graph structures. With the above definitions, our graph advective diffusion model can be formulated as: \[\frac{\partial\mathbf{Z}(t)}{\partial t}=\left[\mathbf{C}+\beta \tilde{\mathbf{A}}-\mathbf{I}\right]\mathbf{Z}(t),\;\;\;0\leq t\leq T\;\;\; \text{with initial conditions}\;\;\mathbf{Z}(0)=\phi_{enc}(\mathbf{X}), \tag{8}\] \[\text{where}\;\;\;\;\mathbf{C}=[c_{uv}]_{u,v\in\mathcal{V}},\;\;c _{uv}=\frac{\eta(\mathbf{z}_{u}(0),\mathbf{z}_{v}(0))}{\sum_{w\in\mathcal{V}} \eta(\mathbf{z}_{u}(0),\mathbf{z}_{w}(0))}.\] Here \(\beta\in[0,1]\) is a weight hyper-parameter and \(\eta\) is a learnable pairwise similarity function. The two mechanisms of non-local diffusion (implemented through attention akin to Transformers) and advection (implemented like message passing neural networks) give rise to a new architecture, which we call the _Advective Diffusion Transformer_, or ADiT for short. _Remark_.: Eqn.
8 has a closed-form solution \(\mathbf{Z}(t)=e^{-(\mathbf{I}-\mathbf{C}-\beta\tilde{\mathbf{A}})t}\mathbf{Z}(0)\), and as we will show in the next subsection, it allows generalization guarantees under topological distribution shifts. A special case of \(\beta=0\) (no advection) can be used in situations where the graph structure is not useful. Moreover, one can extend Eqn. 8 to a non-linear equation with time-dependent \(\mathbf{C}(\mathbf{Z}(t),t)\), in which situation the equation will have no closed-form solution and requires numerical schemes for solving. Similarly to Di Giovanni et al. (2022), we found in our experiments a simple linear diffusion to be sufficient to yield promising performance. We therefore leave the study of the non-linear variant for the future.

Figure 2: Illustration of the proposed model.

### How Graph Advective Diffusion Handles Topological Shifts We proceed to analyze the behavior of our proposed model w.r.t. topological shifts to demonstrate its capability of generalizing to out-of-distribution (OOD) data. Our first main result is derived based on the universal approximation power of neural networks and the data generation hypothesis in Sec. 3.1. **Theorem 1**.: _For the advective diffusion model Eqn. 7 with \(\mathbf{C}\) pre-computed by global attention over \(\mathbf{Z}(0)\) and fixed velocity \(\mathbf{V}=\tilde{\mathbf{A}}\), the change rate of node representations \(\mathbf{Z}(T;\tilde{\mathbf{A}})\) w.r.t. \(\Delta\tilde{\mathbf{A}}\) can be reduced to \(\mathcal{O}(\psi(\|\Delta\tilde{\mathbf{A}}\|_{2}))\) where \(\psi\) denotes an arbitrary polynomial function._ Theorem 1 suggests that the advective diffusion model with observed structural information incorporated is capable of controlling the impact of topology variation on node representations to arbitrary rates. We can further derive the generalization error, which decomposes into the in-distribution (ID) generalization error \(\mathcal{D}_{1}(\Gamma,N_{tr})\) and the topological distribution gap between ID and OOD data. **Theorem 2**.: _Assume \(l\) and \(\phi_{dec}\) are Lipschitz continuous. Then for data generated with the data generation hypothesis of Sec. 3.1 from arbitrary \(E_{tr}\) and \(E_{te}\), we have the generalization error bound of the model \(\Gamma_{\theta}\) with confidence \(1-\delta\):_ \[\mathcal{R}(\Gamma_{\theta};E_{te})\leq\mathcal{R}_{emp}(\Gamma_{\theta};E_{ tr})+\mathcal{D}_{1}(\Gamma,N_{tr})+\mathcal{D}_{2}(E_{tr},E_{te},W), \tag{9}\] _where \(\mathcal{D}_{2}(E_{tr},E_{te},W)=\mathcal{O}(\mathbb{E}_{\mathbf{A}\sim p(A| E_{tr}),\mathbf{A}^{\prime}\sim p(A|E_{te})}[\psi(\|\Delta\tilde{\mathbf{A}}\|_{2})])\)._ Theorem 2 implies that the generalization error can be controlled with the adaptive change rate yielded by the model. The model thus possesses provable potential for achieving a desired level of generalization under topological shifts. Furthermore, our model instantiation only requires trainable parameters for two shallow MLPs \(\phi_{enc}\) and \(\phi_{dec}\) and the attention network \(\eta\), which is highly parameter-efficient. This reduces the hypothesis space of \(\Gamma\) that impacts \(\mathcal{D}_{1}\) and is beneficial for generalization. ### Numerical Solvers for Graph Advective Diffusion We next delve into the model implementation, with the key question of how to compute the closed-form solution \(e^{-(\mathbf{I}-\mathbf{C}-\beta\tilde{\mathbf{A}})t}\).
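For small graphs this closed form can be evaluated directly; a minimal PyTorch sketch is given below (our illustration: the shapes, the row normalizations, and the use of scaled dot-product scores for \(\eta\) are assumptions, not the released implementation). As the next paragraph notes, this direct route does not scale to large matrices.

```python
import torch

def adit_closed_form(Z0, A, eta_scores, beta=0.5, T=1.0):
    """Closed-form solution of Eqn. 8: Z(T) = exp(-(I - C - beta*A_norm) T) Z(0).

    Z0:         (n, d) encoded node features, Z(0) = phi_enc(X)
    A:          (n, n) binary adjacency
    eta_scores: (n, n) unnormalized pairwise similarities eta(z_u(0), z_v(0))
    """
    n = A.shape[0]
    # row-normalized global attention C over all node pairs
    C = torch.softmax(eta_scores, dim=1)
    # row-normalized adjacency D^{-1} A as the advection velocity
    A_norm = A / A.sum(dim=1, keepdim=True).clamp(min=1.0)
    L = torch.eye(n) - C - beta * A_norm
    return torch.linalg.matrix_exp(-L * T) @ Z0

# toy usage
n, d = 30, 16
Z0 = torch.randn(n, d)
A = (torch.rand(n, n) < 0.2).float()
scores = Z0 @ Z0.T / d**0.5   # assumed: scaled dot-product attention scores
print(adit_closed_form(Z0, A, scores).shape)  # torch.Size([30, 16])
```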
Direct computation of the matrix exponential through eigendecomposition is computationally intractable for large matrices. As an alternative, we explore several numerical approximation techniques based on series expansion. **ADiT-inverse** uses a numerical method based on the extension of Padé–Chebyshev theory to rational fractions (Golub & Van Loan, 1989; Gallopoulos & Saad, 1992), which has shown empirical success in 3D shape analysis (Patane, 2014). The matrix exponential is approximated by solving multiple linear systems (see more details and derivations in Appendix D), and we generalize it as a flexible multi-head network where each head propagates in parallel: \[\mathbf{Z}(T)\approx\sum_{h=1}^{H}\phi_{FC}^{(h)}(\mathbf{Z}_{h}),\ \ \ \mathbf{Z}_{h}=\text{linsolver}(\mathbf{L}_{h},\mathbf{Z}(0)),\ \ \mathbf{L}_{h}=(1+\theta)\mathbf{I}-\mathbf{C}_{h}-\beta\tilde{\mathbf{A}}, \tag{10}\] where the _linsolver_ computes the matrix inverse \(\mathbf{Z}_{h}=(\mathbf{L}_{h})^{-1}\mathbf{Z}(0)\) and can be efficiently implemented via torch.linalg.solve(), which supports automatic differentiation. Each head contributes to propagation with the pre-computed attention \(\mathbf{C}_{h}\) and node-wise transformation \(\phi_{FC}^{(h)}\). **ADiT-series** approximates the matrix inverse via finite geometric series (see Appendix D for detailed derivations) \[\mathbf{Z}(T)\approx\sum_{h=1}^{H}\phi_{FC}^{(h)}(\mathbf{Z}_{h}),\ \ \ \mathbf{Z}_{h}=[\mathbf{Z}(0),\mathbf{P}_{h}\mathbf{Z}(0),\cdots,(\mathbf{P}_{h} )^{K}\mathbf{Z}(0)],\ \ \ \mathbf{P}_{h}=\mathbf{C}_{h}+\beta\tilde{\mathbf{A}}, \tag{11}\] for better scalability. This model resorts to the aggregation of \(K\)-order propagation with the propagation matrix \(\mathbf{P}_{h}\) in each head. The feed-forward pass of the model can be efficiently computed with linear complexity w.r.t. the number of nodes (see how we achieve this acceleration in Appendix E.1.2). The node representations obtained by the approximate solution of the diffusion equation, \(\mathbf{Z}(T)\), are then fed into \(\phi_{dec}\) for prediction and loss computation (e.g., cross-entropy for classification or mean square loss for regression). Due to the space limit, we defer details of model architectures to Appendix E.1. Moreover, in Appendix E.2 we discuss how to extend our model to accommodate edge attributes. ## 5 Experiments We apply our model to synthetic and real-world datasets that involve various topological distribution shifts. We consider a wide variety of graph-based downstream tasks of disparate scales and granularities. More detailed dataset information is provided in Appendix F.1. In each case, we compare with different sets of competitors that are suitable for the tasks. Details on baselines and implementation are deferred to Appendix F.2 and F.3, respectively. ### Synthetic Datasets We create synthetic datasets that simulate the data generation in Sec. 3.1 to validate our model. We instantiate \(h\) as a stochastic block model, which generates edges \(A_{uv}\) according to the block number (\(b\)), intra-block edge probability (\(p_{1}\)), and inter-block edge probability (\(p_{2}\)). Then we study three types of topological distribution shifts: **homophily shift** (changing \(p_{2}\) with fixed \(p_{1}\)); **density shift** (changing \(p_{1}\) and \(p_{2}\)); and **block shift** (varying \(b\)). The predictive task is node regression, and we use RMSE to measure the performance. Details for dataset generation are presented in Appendix F.1.1.
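A minimal sketch (ours; the node counts and probability values are placeholders) of generating such SBM graphs, here with a homophily shift obtained by raising \(p_{2}\) at test time:

```python
import numpy as np

def sample_sbm(n_nodes, b, p1, p2, rng):
    """Sample a symmetric SBM adjacency: b equal-probability blocks,
    intra-block edge prob p1, inter-block edge prob p2."""
    blocks = rng.integers(0, b, size=n_nodes)
    same = blocks[:, None] == blocks[None, :]
    probs = np.where(same, p1, p2)
    upper = rng.random((n_nodes, n_nodes)) < probs
    A = np.triu(upper, 1)
    return (A + A.T).astype(float), blocks

rng = np.random.default_rng(0)
A_train, _ = sample_sbm(100, b=2, p1=0.3, p2=0.05, rng=rng)
# homophily shift: raise p2 at test time while keeping p1 fixed
test_graphs = [sample_sbm(100, b=2, p1=0.3, p2=p2, rng=rng)[0]
               for p2 in np.linspace(0.05, 0.3, 10)]
```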
Fig. 3 plots RMSE on training/validation/testing graphs in the three cases. We compare our models (ADiT-inverse and ADiT-series) with the diffusion-based models analyzed in Sec. 3. The latter include _Diff-Linear_ (graph diffusion with constant \(\mathbf{C}\)), _Diff-MultiLayer_ (the extension of _Diff-Linear_ with intermediate feature transformations), _Diff-Time_ (graph diffusion with time-dependent \(\mathbf{C}(\mathbf{Z}(t))\)) and _Diff-NonLocal_ (non-local diffusion with global attentive diffusivity \(\mathbf{C}(\mathbf{Z}(t))\)). The three local graph diffusion models exhibit clear performance degradation as the topological shift is exacerbated from the #1 to the #10 testing graphs, while our two models yield consistently low RMSE across environments. The non-local diffusion model produces comparably stable performance, yet inferior to our models due to its failure to utilize the observed topological information.

Figure 3: Results of RMSE (\(\downarrow\)) on synthetic datasets that simulate the topological shifts caused by the environment \(E\) in Fig. 1. We consider three types of shifts w.r.t. homophily levels, edge densities, and block numbers, respectively. In each case, the validation and #1\(\sim\)#10 testing sets are generated with different configurations introducing increasing distribution gaps from the training set.

### Real-World Datasets We proceed to evaluate ADiT beyond the synthetic cases and experiment on real-world datasets with more complex shifts in graph topologies encountered in diverse and broad applications. **Information Networks**. We first consider node classification on the citation network Arxiv (Hu et al., 2020) and the social networks Twitch (Rozemberczki et al., 2021), with graph sizes ranging from 2K to 0.2M, where we use the scalable version ADiT-series. To introduce topological shifts, we partition the data according to publication years and geographic information for Arxiv and Twitch, respectively. The predictive task is node classification, and we follow the common practice of comparing Accuracy (resp. ROC-AUC) for Arxiv (resp. Twitch). We compare with three types of state-of-the-art baselines: (i) **classical GNNs** (_GCN_ (Kipf and Welling, 2017), _GAT_ (Velickovic et al., 2018) and _SGC_ (Wu et al., 2019)); (ii) **diffusion-based GNNs** (_GDC_ (Klicpera et al., 2019) and _GRAND_ (Chamberlain et al., 2021a)); and (iii) **graph Transformers** (_GraphTrans_ (Wu et al., 2021), _GraphGPS_ (Rampasek et al., 2022), and the diffusion-based _DIFFormer_ (Wu et al., 2023)). Appendix F.2 presents detailed descriptions of these models. Table 1 reports the results, showing that our model offers significantly superior generalization for node classification. **Molecular Property Prediction**. We next study graph classification for predicting molecular properties on OGB-BACE and OGB-SIDER. We follow the scaffold-based splits by Hu et al. (2020), which guarantee structural diversity across training and test sets and provide a realistic estimate of model generalization in prospective experimental settings (Yang et al., 2019). The performance is measured by ROC-AUC. Table 2 reports the results, showing that our model outperforms classical GNNs and powerful graph Transformers2 that use the same input data and training loss. Footnote 2: Note that our comparison focuses on generic GNN architectures, rather than specialized methods that are tailored for chemical problems and additionally leverage domain knowledge such as structural motifs. **Protein Interactions**.
We then test on protein-protein interactions of yeast cells (Fu and He, 2022). Each node denotes a protein with a time-aware gene expression value, and the edges indicate co-expressed protein pairs at each time. The dataset consists of 12 dynamic networks, each of which is obtained by one protein identification method and records the metabolic cycles of yeast cells. The networks have distinct topological features (e.g., the distribution of cliques) as observed by (Fu and He, 2022), and we use 6/1/5 networks for train/valid/test. To test the generalization of the model across different tasks, we consider: i) node regression for gene expression values (measured by RMSE); ii) edge regression for predicting the co-expression correlation coefficients (measured by RMSE); iii) link prediction for identifying co-expressed protein pairs (measured by ROC-AUC). Table 3 shows that our models yield first-ranking results in all three tasks. Specifically, ADiT-series performs better in the node/edge regression tasks, while ADiT-inverse exhibits better competitiveness for link prediction. The possible reason might be that ADiT-inverse can better exploit high-order structural information, as the matrix inverse can be treated as ADiT-series with \(K\rightarrow\infty\). **Molecular Mapping Operator Generation**. Finally, we investigate the generation of molecular coarse-grained mapping operators, an important step for molecular dynamics simulation, aiming to find a representation of how atoms are grouped in a molecule (Li et al., 2020). The task is a graph segmentation problem, which can be modeled as predicting the edges that indicate where to partition the graph. We use the relative molecular mass to split the data and test the model's extrapolation ability for larger molecules. Fig. 4 compares the testing cases (with more cases in Appendix G.1) generated by different models, which shows the more accurate estimation of our model (we use ADiT-series for experiments) and demonstrates the desired generalization. **Additional Experimental Results**. Due to the space limit, we defer more results such as ablation studies and hyper-parameter analysis (for \(\beta\), \(\theta\) and \(K\)) along with more discussions to Appendix G.2. ## 6 Conclusions and Discussions This paper has systematically studied the generalization capabilities of graph diffusion equations under topological shifts, and shed light on building generalizable GNNs in the open-world regime.
The latter remains a largely under-explored question in the graph ML community. Our new model, inspired by advective diffusion equations, has provable topological generalization capability and is implemented as a Transformer-like architecture. It shows superior performance in various graph learning tasks. Our analysis and proposed methodology open new possibilities of leveraging established PDE techniques for building generalizable GNNs.

\begin{table} \begin{tabular}{l|c c c|c} \hline \hline & \multicolumn{3}{c|}{**Arxiv (Accuracy)**} & **Twitch (ROC-AUC)** \\ \hline **GCN**(Kipf and Welling, 2017) & 50.14 \(\pm\) 0.46 & 48.06 \(\pm\) 1.13 & 46.46 \(\pm\) 0.85 & 59.76 \(\pm\) 0.34 \\ **GAT**(Velickovic et al., 2018) & 56.90 \(\pm\) 0.43 & 48.60 \(\pm\) 0.28 & 46.50 \(\pm\) 0.21 & 59.14 \(\pm\) 0.72 \\ **SGC**(Wu et al., 2019) & 51.40 \(\pm\) 0.10 & **49.15 \(\pm\) 0.16** & 46.94 \(\pm\) 0.29 & 68.06 \(\pm\) 0.13 \\ **GDC**(Klicpera et al., 2019) & 51.53 \(\pm\) 0.42 & 49.02 \(\pm\) 0.51 & 47.33 \(\pm\) 0.40 & 61.36 \(\pm\) 0.10 \\ **GRAND**(Chamberlain et al., 2021) & 52.45 \(\pm\) 0.27 & 50.18 \(\pm\) 0.18 & 48.01 \(\pm\) 0.24 & 61.65 \(\pm\) 0.23 \\ **GraphTrans**(Wu et al., 2021) & OOM & OOM & OOM & 61.65 \(\pm\) 0.23 \\ **GraphGPS**(Rampasek et al., 2022) & 51.11 \(\pm\) 0.19 & 48.91 \(\pm\) 0.34 & 44.66 \(\pm\) 0.95 & 62.13 \(\pm\) 0.34 \\ **DIFFormer**(Wu et al., 2023) & 50.45 \(\pm\) 0.49 & 47.37 \(\pm\) 1.58 & 44.30 \(\pm\) 2.02 & 62.11 \(\pm\) 0.11 \\ \hline **ADiT-series** & **53.41 \(\pm\) 0.48** & **51.23 \(\pm\) 0.60** & **49.64 \(\pm\) 0.54** & **62.51 \(\pm\) 0.07** \\ \hline \hline \end{tabular} \end{table} Table 1: Results on Arxiv and Twitch, where we use time and spatial contexts for data splits, respectively. We report the Accuracy (\(\uparrow\)) for the three testing sets of Arxiv and the average ROC-AUC (\(\uparrow\)) for all testing graphs of Twitch (results for each case are reported in Appendix G.1). Top-performing methods are marked as first/second/third. OOM indicates an out-of-memory error.

**Outlook.** Our generalization analysis focuses on the data-generating mechanism inspired by the random graph model. While this mechanism can in principle reflect the real-world data generation process in various graph-structured data, in the open-world regime there could exist situations involving structural distribution shifts caused by diverse factors or their combination. Future works can extend our framework to such cases where inter-dependent data is generated with different mechanisms. Another future research direction is the instantiation of the diffusion and advection operators in our model. Besides our choice of an MPNN architecture to implement the advection process, other possibilities include structural and positional embeddings. We leave this line of exploration for the future, along with the analysis of the generalization capabilities of more general (e.g., non-linear) versions of the advective diffusion equation and additional architectural choices. **Reproducibility Statement.** We supplement the complete proofs for all the theoretical results and detailed information for model implementations and experiments, with references below: * The proofs for technical results in Sec. 3 are presented in Appendix B. * The proofs for technical results in Sec. 4 are presented in Appendix C. * The detailed derivations for our proposed models in Sec. 4.3 are shown in Appendix D. * The architectures of our models along with pseudo codes are illustrated in Appendix E. * The detailed information for all experimental datasets is presented in Appendix F.1.
* The details for competitors are provided in Appendix F.2. * The implementation details for experiments are provided in Appendix F.3. The source code will be made publicly available.

\begin{table} \begin{tabular}{l|c c c|c c c} \hline \hline & \multicolumn{3}{c|}{**OGB-BACE**} & \multicolumn{3}{c}{**OGB-SIDER**} \\ & **Train** & **Valid** & **Test** & **Train** & **Valid** & **Test** \\ \hline **GCN** & \(93.58\pm 0.43\) & \(67.83\pm 0.39\) & \(80.93\pm 0.59\) & \(76.21\pm 0.10\) & \(61.84\pm 0.18\) & \(59.87\pm 0.14\) \\ **GAT** & \(91.67\pm 1.85\) & \(79.31\pm 1.27\) & \(78.18\pm 1.43\) & \(80.26\pm 0.03\) & \(61.88\pm 0.10\) & \(58.99\pm 0.06\) \\ **GraphTrans** & \(96.96\pm 0.59\) & \(79.16\pm 1.53\) & \(80.12\pm 0.58\) & \(97.67\pm 1.22\) & \(62.46\pm 0.85\) & \(60.73\pm 1.97\) \\ **GraphGPS** & \(68.24\pm 2.18\) & \(66.54\pm 2.44\) & \(73.46\pm 0.30\) & \(74.97\pm 1.06\) & \(60.87\pm 0.07\) & \(67.11\pm 0.07\) \\ **DIFFormer** & \(95.97\pm 0.97\) & \(74.88\pm 1.31\) & \(79.67\pm 0.87\) & \(89.94\pm 3.57\) & \(64.13\pm 0.58\) & \(60.94\pm 2.17\) \\ **ADiT-inverse** & \(97.15\pm 0.97\) & \(73.82\pm 1.45\) & \(80.38\pm 1.40\) & \(83.67\pm 0.09\) & \(60.85\pm 0.22\) & \(62.59\pm 0.16\) \\ **ADiT-series** & \(93.58\pm 0.46\) & \(67.03\pm 0.53\) & \(\mathbf{82.03\pm 0.42}\) & \(80.24\pm 0.23\) & \(59.70\pm 0.35\) & \(62.28\pm 0.36\) \\ \hline \hline \end{tabular} \end{table} Table 2: ROC-AUC (\(\uparrow\)) on two molecule datasets OGB-BACE and OGB-SIDER with scaffold splits for training/validation/testing, where the task is to predict molecular graph properties.

Figure 4: Testing cases for molecular mapping operators generated by different models, with the averaged testing Accuracy (\(\uparrow\)) reported. The task is to generate subgraph-level partitions resembling expert annotations (ground truth) for each molecule instance. See more results in Appendix G.1.

\begin{table} \begin{tabular}{l|c c|c c|c c} \hline \hline & \multicolumn{2}{c|}{**Node Regression (RMSE) (\(\downarrow\))**} & \multicolumn{2}{c|}{**Edge Regression (RMSE) (\(\downarrow\))**} & \multicolumn{2}{c}{**Link Prediction (ROC-AUC) (\(\uparrow\))**} \\ & **Valid** & **Test** & **Valid** & **Test** & **Valid** & **Test** \\ \hline **GCN** & \(3.74\pm 0.01\) & \(3.40\pm 0.01\) & \(0.170\pm 0.004\) & \(0.184\pm 0.004\) & \(67.36\pm 8.80\) & \(0.683\pm 0.062\) \\ **GAT** & \(3.10\pm 0.09\) & \(2.86\pm 0.06\) & \(0.164\pm 0.001\) & \(0.176\pm 0.001\) & \(76.65\pm 2.34\) & \(0.687\pm 0.031\) \\ **SGC** & \(3.66\pm 0.00\) & \(3.40\pm 0.02\) & \(0.177\pm 0.016\) & \(0.190\pm 0.004\) & \(65.87\pm 4.47\) & \(0.775\pm 0.042\) \\ **GraphTrans** & OOM & OOM & OOM & OOM & OOM & OOM \\ **GraphGPS** & \(1.80\pm 0.01\) & \(1.65\pm 0.02\) & \(0.165\pm 0.016\) & \(0.159\pm 0.007\) & \(60.45\pm 2.99\) & \(0.673\pm 0.068\) \\ **DIFFormer** & \(2.06\pm 0.04\) & \(2.04\pm 0.02\) & \(0.173\pm 0.012\) & \(0.155\pm 0.002\) & \(93.56\pm 3.06\) & \(0.902\pm 0.054\) \\ \hline **ADiT-inverse** & \(1.83\pm 0.02\) & \(1.75\pm 0.02\) & \(0.146\pm 0.002\) & \(\mathbf{0.147\pm 0.002}\) & \(94.60\pm 2.72\) & \(0.957\pm 0.018\) \\ **ADiT-series** & \(1.56\pm 0.02\) & \(\mathbf{1.49\pm 0.03}\) & \(0.146\pm 0.002\) & \(\mathbf{0.144\pm 0.001}\) & \(82.85\pm 2.69\) & \(0.866\pm 0.036\) \\ \hline \hline \end{tabular} \end{table} Table 3: Results on the dynamic protein interaction networks DDPIN, with splits by different protein identification methods. The predictive tasks span node regression, edge regression and link prediction.
2310.19537
On consequences of finetuning on data with highly discriminative features
In the era of transfer learning, training neural networks from scratch is becoming obsolete. Transfer learning leverages prior knowledge for new tasks, conserving computational resources. While its advantages are well-documented, we uncover a notable drawback: networks tend to prioritize basic data patterns, forsaking valuable pre-learned features. We term this behavior "feature erosion" and analyze its impact on network performance and internal representations.
Wojciech Masarczyk, Tomasz Trzciński, Mateusz Ostaszewski
2023-10-30T13:43:50Z
http://arxiv.org/abs/2310.19537v2
# On consequences of finetuning on data with highly discriminative features ## 1 Introduction Deep learning has witnessed remarkable advancements in various domains, driven by the ability of neural networks to learn intricate patterns from data. One key aspect contributing to their success is the process of transfer learning, where pre-trained models are fine-tuned on specific tasks, leveraging knowledge acquired from previous training Pratt and Jennings (1996), Yosinski et al. (2014). This technique is especially important with the advent of training ever-growing models such as Large Language Models (LLMs) Devlin et al. (2019), et al. (2023), OpenAI (2023) or massive ViTs Zhai et al. (2022), et al. (2023). However, while transfer learning is a powerful tool, it is not without its nuances. This work presents a thought-provoking experiment exposing a network's tendency to greedily follow the simplest discriminative features present in the data. This phenomenon was observed in multiple works and is commonly called simplicity bias Arpit et al. (2017), Valle-Perez et al. (2019), Nakkiran et al. (2019). Here, we take a step further and investigate the implications of this behavior in the realm of transfer learning. To investigate this effect, we train the network on CIFAR-10 to perfect training accuracy (error-free). Next, we introduce a highly discriminative, class-correlated pattern to the corner of each dataset image and proceed with the training. Surprisingly, as shown in Fig. 1, despite the model's perfect accuracy, finetuning it on the oversimplified task causes an abrupt performance loss (blue curve) and pushes the model to focus solely on the novel pattern. We call this phenomenon _feature erosion_. Our analysis shows that during the fine-tuning phase, the pretrained model greedily abandons salient, generalizing features in favor of the new discriminative ones. In Section 2, we define the details of the experiments and investigate the breadth of the phenomenon (Fig. 2), showing that all tested contemporary neural networks exhibit this behavior. Next, we investigate the detrimental effect of feature erosion on the model's representation formation, transfer learning, and plasticity. Section 3 discusses the phenomenon's implications, its novelty with respect to related works, and its hazards to real-world applications.

Figure 1: **Feature erosion for VGG-19 on CIFAR-10.** Introducing a simpler discriminative feature (green hatched area) solely focuses the model on that feature (red curve) and abruptly erases previous knowledge about the data (blue line).

## 2 Experiments and results In this section, we explore the phenomenon we introduced in the previous section, aiming to understand its severity and impact on network behavior. We initiate our investigation by examining the robustness of feature erosion across different network architectures and datasets (Fig. 2). Next, we assess the detrimental effect of the phenomenon on the network's representations (Fig. 3) and show that pursuing simpler, discriminative patterns collapses the network's rank (Fig. 4). We hypothesize that this effect is linked with the loss of plasticity observed in further training of that network (Fig. 5). **Experimental setup** We will now delve into feature erosion using ResNet-18, ResNet-50, and VGG-19 models trained on either the CIFAR-10 or ImageNet dataset. For CIFAR-10, these networks underwent 160 training epochs, consistently achieving \(100\%\) training accuracy before introducing the oversimplified second task. The same hyperparameters were maintained for an additional 160 epochs during the second task, following the recommendations for optimal model performance by Liu et al. (2018). Regarding the ImageNet models, we utilized PyTorch's pre-trained weights and randomly selected a subset of 10 classes from the ImageNet dataset for the oversimplified task. To create this dataset, we superimposed squares of the same size and placement on the training images, with each square's color representing the image's class. In most of our experiments, we applied these squares to all training images; the exception is presented in Fig. 6, where we investigated the impact of that ratio on the model's performance.
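A minimal sketch of this dataset construction (our illustration; the square size, top-left placement, and class-color palette are assumptions rather than the paper's exact recipe):

```python
import torch

def add_class_square(images, labels, size=4, palette=None):
    """Stamp a class-colored square into the top-left corner of each image.

    images: (B, 3, H, W) float tensor in [0, 1]; labels: (B,) int tensor.
    """
    if palette is None:
        # one distinct (illustrative) RGB color per class
        g = torch.Generator().manual_seed(0)
        palette = torch.rand(int(labels.max()) + 1, 3, generator=g)
    out = images.clone()
    colors = palette[labels]                       # (B, 3)
    out[:, :, :size, :size] = colors[:, :, None, None]
    return out

# usage: a batch of CIFAR-10-like images
x = torch.rand(8, 3, 32, 32)
y = torch.randint(0, 10, (8,))
x_simple = add_class_square(x, y)
```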
**Feature Erosion for different models and datasets** The performance of these models is illustrated in Figure 2, showcasing test accuracy curves during both the pretraining phase (white background) and the subsequent fine-tuning phase on the oversimplified task (green-hatched background). The results clearly indicate that all neural architectures and datasets exhibit feature erosion, resulting in a noticeable decline in test accuracy on the pre-training dataset. Notably, as the sole distinction between the first and second tasks is the presence of these colored squares, the dramatic shift from near-perfect accuracy in the first task to random-chance accuracy in the second task implies that the model exclusively focuses on the colored squares.

Figure 2: Feature erosion effect for ResNet18 trained on CIFAR-10 (left), ResNet50 trained on ImageNet (middle), and VGG-19 trained on ImageNet (right). ImageNet models used pretrained weights provided by PyTorch. ResNet18 reached 100% training accuracy before introducing the discriminative pattern to the dataset.

**Unpacking Feature Erosion: Analyzing Representations** Having observed significant shifts in model performance on established benchmarks, our objective is to investigate the impact of fine-tuning on oversimplified datasets on the model's representations. In our subsequent experiment, we perform a comparative analysis of representations at each layer after training on the first task and subsequent training on the second task. We employ Centered Kernel Alignment (CKA) Kornblith et al. (2019) as a metric to quantify similarity. We extract representations from standard CIFAR-10 datasets (without color squares) for both models to isolate the impact of weight evolution on model representations.

Figure 3: **Feature erosion impacts CKA similarity between representations** from different layers extracted from the model after completing the 1st and 2nd tasks. The experiment was conducted using the VGG-19 model and the CIFAR-10 dataset.

As illustrated in Figure 3, the representations show substantial dissimilarity. Most notably, most diagonal elements are black, indicating minimal similarity between representations within the same layers after undertaking different tasks. The lighter colors in the top left corner suggest that changes during the fine-tuning phase commence early in the network, particularly in the bottom layers. Given the observed dramatic changes in representations across nearly all layers, we delve deeper into understanding the underlying dynamics. In Fig. 4, we investigate feature erosion within each layer using linear probing and the numerical rank of representations.
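As a concrete reference, a per-layer numerical rank can be computed as below (our sketch; the relative-threshold convention is one common choice and not necessarily the paper's):

```python
import torch

def numerical_rank(features, rtol=1e-3):
    """Numerical rank of a (num_samples, dim) feature matrix: the number of
    singular values above rtol * (largest singular value)."""
    feats = features - features.mean(dim=0, keepdim=True)  # center the features
    s = torch.linalg.svdvals(feats)                        # sorted descending
    return int((s > rtol * s[0]).sum())

# usage: activations collected from one layer over a probe set
H = torch.randn(512, 256) @ torch.randn(256, 256)  # placeholder activations
print(numerical_rank(H))
```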
A comparison of the linear probing plots reveals that after fine-tuning (blue dots), the model fully adapts to the new data, with the second layer already achieving accuracy levels comparable to those of the entire model. Furthermore, the numerical rank of representations experiences a substantial decline at each layer following fine-tuning (orange dots), indicating that the model, starting from the initial layers, likely adheres to the simplicity bias Morwani et al. (2023) and projects the data into smaller subspaces.

Figure 4: **Feature erosion collapses the rank of the hidden representations** and impacts the linear probing accuracy of VGG-19 trained on CIFAR-10. Crosses refer to the model after pre-training, dots to the model after finetuning. Blue refers to the test accuracy, orange to the rank of the representations.

### Loss of plasticity To better understand the impact of feature erosion on network performance, we examined the network's ability to relearn information from a previous task. We conducted experiments involving a sequence of three tasks: CIFAR-10 (Task 0), an oversimplified version of CIFAR-10, and the standard CIFAR-10 once again (Task 2). Typically, relearning is expected to be faster and require fewer computational resources than training from scratch. However, our results, as shown in Figure 5, indicate a deviation from this expected behavior. In this setup, the network not only learns more slowly compared to training from scratch but also fails to achieve the same level of performance within the same computational budget as the network trained from scratch. This performance difference is commonly referred to as "loss of plasticity" and is often associated with the degradation of the penultimate layer's rank. While we speculate that this explanation may apply in our case, a thorough investigation of this hypothesis is beyond the scope of our current research.

Figure 5: **Loss of plasticity**. The model is trained on a sequence of 3 tasks. The first (Task 0) and the last (Task 2) are standard CIFAR-10 datasets. The middle task is CIFAR-10 with correlated squares.

## 3 Discussion In this study, we have uncovered a nuanced aspect of catastrophic forgetting, demonstrating that it can occur even when the full dataset is present in the new task data. Our experiments, which include GradCam analysis, reveal that despite achieving 100% training accuracy after the introduction of highly discriminative features, the model begins to focus exclusively on these new features, leaving behind previously developed ones. A deeper investigation using techniques such as Centered Kernel Alignment (CKA) exposes significant changes in representations, along with a deterioration in model performance indicated by a rank collapse of representations in nearly all layers. Our research expands our knowledge of the complex relationship between task similarity (Ramasesh et al., 2020; Braun et al., 2022), forgetting, and transfer dynamics (Chen et al., 2022). On the one hand, recent studies have revealed that intermediate task similarity tends to contribute most to catastrophic forgetting (Ramasesh et al., 2020; Braun et al., 2022). In those studies, task similarity is controlled via a "data-mixing framework," which combines images and labels from two distinct datasets of equal size. However, our experimental setup features identical datasets, differing only in a small image segment transitioning from random to highly correlated with a specific class.
While this does not contradict earlier findings, it certainly introduces a novel perspective on the phenomenon. On the other hand, our observation also has implications in light of recent research (Chen et al., 2022), which suggests that less forgetful representations result in improved performance on new tasks, indicating a robust relationship between retaining previous information and enhanced learning efficiency. In this context, our toy example falsifies the reverse implication, i.e., the model exhibits perfectly transferable features yet forgets them in favor of features with greater predictive power. In our concluding experiment, extending our analysis of feature erosion, we explore the relation between forgetting and the ratio of samples containing oversimplified discriminative patterns. In Fig. 6, we observe a non-linear relationship between the number of oversimplified samples in the training dataset and the extent of forgetting. However, even modest ratios of oversimplified data are enough to induce forgetting in the model, which reinforces the importance of the phenomenon. The situation we present in this study has practical significance, especially in scenarios involving incremental learning in various domains. Real-world applications that continually receive new data with limited human involvement might come across samples containing highly predictive patterns. This can lead to the loss of previously acquired knowledge. For instance, in medical imaging, where data artifacts may correlate with task objectives, the phenomenon of feature erosion could be a frequent concern. Additionally, our findings connect with existing literature on concepts such as simplicity bias (Neyshabur et al., 2014; Shah et al., 2020) and gradient starvation (Pezeshki et al., 2021). Our results suggest that simplicity bias not only affects generalization but can also disrupt previously well-functioning representations. The resilience of simplicity bias to approaches like ensembles or adversarial training raises questions about the effectiveness of common continual learning methods. Finally, there may be a positive aspect to this phenomenon. In the current era of heightened focus on AI ethics, machine unlearning (Bourtoule et al., 2021) and fairness in deep learning (Du et al., 2020) are prominent topics. Our study prompts the question of whether intentionally introducing highly discriminative patterns to unwanted samples can facilitate the intentional forgetting of such samples, a topic that warrants further exploration. In summary, our work reveals a novel facet of catastrophic forgetting, challenging conventional wisdom about its occurrence and implications. These findings have relevance both for the field of machine learning and for practical applications that involve continual learning with evolving data.

Figure 6: The stronger the discriminative pattern (higher ratio), the higher the forgetting of the model. Each dot represents a single model trained on CIFAR-10 and fine-tuned on an oversimplified task with a different ratio of images with colored squares.
2308.03725
Efficient Temporal Sentence Grounding in Videos with Multi-Teacher Knowledge Distillation
Temporal Sentence Grounding in Videos (TSGV) aims to detect the event timestamps described by a natural language query from untrimmed videos. This paper discusses the challenge of achieving efficient computation in TSGV models while maintaining high performance. Most existing approaches exquisitely design complex architectures to improve accuracy with extra layers and losses, suffering from inefficiency and heaviness. Although some works have noticed that, they only address the feature fusion layers, which can hardly enjoy the high-speed merit in the whole clunky network. To tackle this problem, we propose a novel efficient multi-teacher model (EMTM) based on knowledge distillation to transfer diverse knowledge from both heterogeneous and isomorphic networks. Specifically, we first unify the different outputs of the heterogeneous models into one single form. Next, a Knowledge Aggregation Unit (KAU) is built to acquire high-quality integrated soft labels from multiple teachers. After that, the KAU module leverages the multi-scale video and global query information to adaptively determine the weights of different teachers. A Shared Encoder strategy is then proposed to solve the problem that the student's shallow layers hardly benefit from teachers, in which an isomorphic teacher is collaboratively trained with the student to align their hidden states. Extensive experimental results on three popular TSGV benchmarks demonstrate that our method is both effective and efficient without bells and whistles.
Renjie Liang, Yiming Yang, Hui Lu, Li Li
2023-08-07T17:07:48Z
http://arxiv.org/abs/2308.03725v2
# Efficient Temporal Sentence Grounding in Videos with Multi-Teacher Knowledge Distillation ###### Abstract. Temporal Sentence Grounding in Videos (TSGV) aims to detect the event timestamps described by a natural language query from untrimmed videos. This paper discusses the challenge of achieving efficient computation in TSGV models while maintaining high performance. Most existing approaches exquisitely design complex architectures to improve accuracy with extra layers and losses, suffering from inefficiency and heaviness. Although some works have noticed that, they only address the feature fusion layers, which can hardly enjoy the high-speed merit in the whole clunky network. To tackle this problem, we propose a novel efficient multi-teacher model (EMTM) based on knowledge distillation to transfer diverse knowledge from both heterogeneous and isomorphic networks. Specifically, we first unify the different outputs of the heterogeneous models into one single form. Next, a Knowledge Aggregation Unit (KAU) is built to acquire high-quality integrated soft labels from multiple teachers. After that, the KAU module leverages the multi-scale video and global query information to adaptively determine the weights of different teachers. A Shared Encoder strategy is then proposed to solve the problem that the student's shallow layers hardly benefit from teachers, in which an isomorphic teacher is collaboratively trained with the student to align their hidden states. Extensive experimental results on three popular TSGV benchmarks demonstrate that our method is both effective and efficient without bells and whistles. Our code is available at [https://github.com](https://github.com). Temporal Sentence Grounding in Videos, Multi-Teacher Knowledge Distillation, Shared Encoders, Knowledge Aggregation Unit ## 1 Introduction Temporal Sentence Grounding in Videos (TSGV), which aims to ground a temporal segment in an untrimmed video with a natural language query, has drawn widespread attention over the past few years (Sutton et al., 2017). There is a clear trend that top-performing models are becoming larger, with numerous parameters. Additionally, recent work shows that accuracy on TSGV tasks has reached a bottleneck, while the combination of complex networks and multiple structures is becoming more prevalent to further improve the ability of the model, causing model sizes to expand. However, the heavy resource cost required by these approaches restricts their application to platforms and devices with limited computational capability and low memory storage. In order to improve efficiency, FMVR (Kalal et al., 2017) and CCA (Kalal et al., 2018) were proposed to construct fast TSGV models by reducing the fusion time. Although they reduce the inference time significantly, the whole network is still time-consuming, even surpassing conventional methods, as depicted in Figure 1(a). In this paper, the time of the whole network encompasses the duration from inputting the video features (e.g., I3D or C3D) and the query sentence to producing predictions. To be specific, FMVR and CCA require encoding and storing of
the video feature in advance, followed by inference based on the query. However, the encoding process is highly time-consuming. In real-world scenarios, there may not be an opportunity to pre-encode the video. The processing of their methods is more similar to Video Corpus Moment Retrieval (VCMR), i.e., retrieving moments from an existing video corpus given a query. Our objective is to expand the efficiency gains to cover the entire TSGV model.

Figure 1: (a) FLOPs and accuracy plot of state-of-the-art TSGV approaches on Charades-STA and ActivityNet. We report R1@0.7 for the two datasets. Our proposed EMTM achieves the best accuracy-speed balance among all the competitors. (b) Various predictions from different models when given the same input. Ground truth is shown in the gray area.

To tackle this challenge, the natural approach is to reduce the complexity of the network, which can involve decreasing the hidden dimension, reducing the number of layers, and eliminating auxiliary losses. Nevertheless, all of these methods lead to a decrease in performance to some extent. One promising technique for mitigating the decrease in performance and maintaining high accuracy when lightening the network is knowledge distillation [7]. Initially, knowledge distillation employed a single teacher, but as the technique advanced, multiple teachers have been deemed beneficial for imparting greater knowledge [4], as extensively corroborated in other domains [18]. A multi-teacher strategy implies that there is a more diverse range of dark knowledge to be learned, with the optimal knowledge being more likely to be present [22]. Regarding the TSGV task, different models will predict results of varying quality when given the same input, as shown in Figure 1(b). Thus far, multi-teacher knowledge distillation has not been studied and exploited for the TSGV task. An immediate problem is that different models produce heterogeneous outputs, e.g., candidate moments for proposal-based methods, or probability distributions for proposal-free methods. Another question is how to identify the optimal knowledge from multiple teachers. In addition, knowledge can hardly backpropagate to the front layers from the soft label in the last layers [15], meaning that the front part of the student model usually hardly enjoys the benefit of the teachers' knowledge. In summary, there are three issues we need to deal with: i) how to unify knowledge from the heterogeneous models, ii) how to select the optimal knowledge and assign weights among these teachers, and iii) how the front layers of the student can benefit from the teachers. Firstly, we unify the various types of outputs from multiple heterogeneous models into a 1D probability distribution through corresponding processing. This enables us to seamlessly integrate the knowledge when training the model. The 1D probability distribution is the output form of the span-based method from the proposal-free category, which has an inherent speed advantage over the proposal-based methods. Then, a Knowledge Aggregation Unit (KAU) is built that associates the knowledge from the different models. KAU, which consists of multiple parallel transformations with different receptive fields, leverages multi-scale information [9], thus obtaining a target distribution of higher quality instead of simply averaging these probabilities. It adaptively determines the importance weights of different teachers with respect to a specific instance based on both the teacher and student representations, avoiding manually tuning the weights of different teachers, which are sensitive hyperparameters for multi-teacher distillation [11].
Finally, a shared encoder strategy is designed to learn shallow knowledge from the teacher. Specifically, an isomorphic teacher is added and co-trained with our student model while sharing their encoder layers and aligning their hidden states, which guarantees that the student is able to gain global and exhaustive knowledge. Through the above approach, the whole student model is required to learn from both the isomorphic and heterogeneous teachers, which serve as complementary cues providing an enhanced supervisory signal during model training. During inference, we only use the student model, which does not add computational overhead. To sum up, this paper's primary contributions can be distilled into three main points, which are outlined below: * We propose a multi-teacher knowledge distillation framework for the TSGV task. This approach substantially reduces the time consumed and significantly decreases the number of parameters, while still maintaining high levels of accuracy. * To enable the whole student to benefit from various teacher models, we unify the knowledge from different models and use the KAU module to adaptively integrate it into a single soft label. Additionally, a shared encoder strategy is utilized to share knowledge from the isomorphic teacher model in the front layers. * Extensive experimental results on three popular TSGV benchmarks demonstrate that our proposed method is superior to the state-of-the-art methods with the highest speed and minimal parameters and computation. ## 2 Related Work Given an untrimmed video, temporal sentence grounding in videos (TSGV) aims to retrieve a video segment according to a query, which is also known as Video Moment Retrieval (VMR). Existing solutions to video grounding are roughly categorized into proposal-based and proposal-free frameworks. We also introduce some works on fast video temporal grounding as follows. ### Proposal-based Methods The majority of proposal-based approaches rely on a number of carefully thought-out dense sampling strategies, which gather a set of video segments as candidate proposals and rank them in accordance with the similarity scores between the proposals and the query to choose the most compatible pairs. Yuan et al. [24] offer Semantic Conditioned Dynamic Modulation (SCDM) based on [5], which can combine the query with visual representations for correlating the sentence-related video contents and dynamically change the temporal convolution according to the query semantics. In order to produce excellent video temporal candidates, Xiao et al. [20] propose a Boundary Proposal Network (BPN) using a third-party model. Liu et al. [10] create the Motion-Appearance Reasoning Network (MARN), which makes use of retrieved object information and models their relationships for improved localization, to differentiate frame-level features in videos. Rich temporal information is also taken into account in some works. Zhang et al. [29] convert visual features into a 2D temporal map and encode the query as a sentence-level representation, which is the first solution to model proposals with a 2D temporal map (2D-TAN). BAN-APR [3] utilizes a boundary-aware feature enhancement module to enhance the proposal feature with its boundary information by imposing a new temporal difference loss. Currently, most proposal-based methods are time-consuming due to the large number of proposal-query interactions.
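To make the 2D temporal map concrete, the following sketch (ours, not 2D-TAN's code) builds such a map by mean-pooling clip features over every candidate (start, end) span:

```python
import torch

def build_2d_proposal_map(clip_feats):
    """clip_feats: (n, d) features for n video clips.
    Returns (n, n, d): entry (i, j) pools clips i..j for i <= j (else zeros)."""
    n, d = clip_feats.shape
    # prefix sums allow O(1) mean pooling per span
    prefix = torch.cat([torch.zeros(1, d), clip_feats.cumsum(dim=0)], dim=0)
    fmap = torch.zeros(n, n, d)
    for i in range(n):
        for j in range(i, n):
            fmap[i, j] = (prefix[j + 1] - prefix[i]) / (j - i + 1)
    return fmap

feats = torch.randn(16, 32)
print(build_2d_proposal_map(feats).shape)  # torch.Size([16, 16, 32])
```

Scoring each map entry against the query then yields the 2D score map \(S\) consumed by Eqn. 2 in Section 3.2.1 below.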
### Proposal-free Methods

The impressive performance of proposal-based methods depends heavily on the quality of the sampled proposals. To avoid the additional computational cost of producing proposal features, proposal-free approaches directly regress or predict the start and end times of the target moment. Wang et al. [17] propose Contextual Boundary-aware Prediction (CBP), which aggregates contextual information by modeling the relations between the current segment and its neighboring segments. VSLNet [28] exploits context-query attention modified from QANet [23] to perform fine-grained multimodal interaction; a conditioned span predictor then computes the probabilities of the start/end boundaries of the target moment. SeqPAN [26] designs a self-guided parallel attention module to effectively capture self-modal contexts and cross-modal attentive information between video and text, inspired by sequence labeling tasks in natural language processing. Yang and Wu [21] propose Entity-Aware and Motion-Aware Transformers (EAMAT) that progressively localize actions in videos by first coarsely locating clips with entity queries and then finely predicting exact boundaries in a shrunken temporal region with motion queries. In addition, Huang et al. [8] introduce Elastic Moment Bounding (EMB) to accommodate flexible and adaptive activity temporal boundaries, toward modeling universally interpretable video-text correlation with tolerance to the temporal uncertainties underlying pre-fixed annotations. Nevertheless, alongside the improvement in performance, the large and complex architectures inevitably incur higher computational cost during the inference phase.

### Fast Video Temporal Grounding

Recently, fast video temporal grounding has been proposed for more practical applications. The TSGV task usually requires methods to efficiently localize target video segments among thousands of candidate proposals. In fact, several early algorithms, e.g., common-space learning methods and scanning-based methods, contribute to reducing the computational costs. According to [6], the standard TSGV pipeline can be divided into three components. The visual encoder and the text encoder are shown to have little influence on test time, since features are pre-extracted and stored at the beginning of the test, so cross-modal interaction is the key to reducing the test time. Thus, a fine-grained semantic distillation framework is utilized to leverage semantic information for improving performance. Besides, Wu et al. [19] utilize commonsense knowledge to obtain bridged visual and text representations that promote each other in common-space learning. However, based on our previous analysis, the inference time measured in [6] covers only part of the entire prediction process; the processing from inputting video features to predicting timestamps remains time-consuming.

## 3. Methodology

In this section, we first give a brief task definition of TSGV in Section 3.1. In the following, heterogeneous knowledge unification is presented as a prerequisite in Section 3.2.1. Then we introduce the student network (Section 3.2.2), the knowledge aggregation unit (Section 3.2.3), and the shared encoder strategy (Section 3.2.4), as shown in Figure 2. Finally, the training and inference processes, as well as the loss settings, are presented in Section 3.3.
### Problem Formulation

Given an untrimmed video \(V=[f_{t}]_{t=1}^{T}\) and a language query \(Q=[q_{j}]_{j=1}^{m}\), where \(T\) and \(m\) are the numbers of frames and words, respectively, the start and end times of the ground-truth moment are denoted by \(\tau_{s}\) and \(\tau_{e}\), with \(1\leq\tau_{s}<\tau_{e}\leq T\). Mathematically, TSGV is to retrieve the target moment starting at \(\tau_{s}\) and ending at \(\tau_{e}\) given a video \(V\) and a query \(Q\), i.e., \(\mathcal{F}_{TSGV}:(V,Q)\mapsto(\tau_{s},\tau_{e})\).

### General Scheme

#### 3.2.1. Heterogeneous Knowledge Unification

Compared to proposal-based methods, the span-based method does not need to generate redundant proposals, which is an inherent advantage in terms of efficiency. Meanwhile, a 1D distribution carries more knowledge than the single time pair of a regression-based method. Hence we unify the various heterogeneous outputs into 1D probability distributions and develop our network based on the span-based method, as shown in Figure 2. The outputs of the span-based method are the 1D probability distributions of the start and end moments, denoted as \(P_{s},P_{e}\in\mathbb{R}^{n}\). To keep the notation concise, we adopt \(P\in\mathbb{R}^{2n}\) without subscripts to express the stacked probabilities for the start and end moments. We simply apply the softmax function to the outputs of the span-based methods to obtain probability distributions:

\[P_{s}=Softmax(P_{s}^{\prime})\quad P_{e}=Softmax(P_{e}^{\prime}) \tag{1}\]

The 2D-map anchor-based method is a common branch of proposal-based methods, such as [29] and [3]. A 2D map \(S=[s_{i,j}]\in\mathbb{R}^{n\times n}\) is generated to model temporal relations between proposal candidates, in which one dimension indicates the start moment and the other indicates the end moment. We take the maximum scores of \(S\) by row/column as the start/end distributions:

\[\begin{split} P_{s}=& Softmax(\max_{j}s_{i,j})\\ P_{e}=& Softmax(\max_{i}s_{i,j})\end{split} \tag{2}\]

As for the regression-based method, we obtain a time pair \((t_{s},t_{e})\) after computation. A Gaussian distribution is then leveraged to simulate the probability distributions of the start/end moments as follows:

\[\begin{split} P_{s}=& Softmax(N(t_{s},\sigma^{2}))\\ P_{e}=& Softmax(N(t_{e},\sigma^{2}))\end{split} \tag{3}\]

The proposal-generation method produces a list of candidate triples \(S^{\prime}=\{(t_{s}^{i},t_{e}^{i},r^{i})\}\in\mathbb{R}^{3\times k}\), where \(k\) is the number of proposal candidates and \(r^{i}\) is the confidence score of the \(i\)-th candidate. Similarly, we use Gaussian distributions to generate the probability distributions of the start/end moments for each candidate; we then weight the candidates by their scores and accumulate them:

\[\begin{split} P_{s}=& Softmax(\sum_{i}r^{i}N(t_{s}^{i},\sigma^{2}))\\ P_{e}=& Softmax(\sum_{i}r^{i}N(t_{e}^{i},\sigma^{2}))\end{split} \tag{4}\]

where \(\sigma^{2}\) is the variance of the Gaussian distribution \(N\).
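For concreteness, the following NumPy sketch illustrates how the four output types above can be mapped to start/end distributions. It is a minimal illustration of Eqs. (1)-(4), not the authors' code; the function names and the default \(\sigma\) are our own assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gaussian(n, center, sigma):
    # Unnormalized Gaussian bump N(center, sigma^2) sampled on n clip indices.
    idx = np.arange(n)
    return np.exp(-0.5 * ((idx - center) / sigma) ** 2)

def unify_span(start_logits, end_logits):
    # Eq. (1): span-based teachers already emit 1D logits.
    return softmax(start_logits), softmax(end_logits)

def unify_2d_map(score_map):
    # Eq. (2): row/column maxima of the 2D proposal map S[i, j].
    return softmax(score_map.max(axis=1)), softmax(score_map.max(axis=0))

def unify_regression(t_s, t_e, n, sigma=2.0):
    # Eq. (3): simulate distributions around the regressed pair (t_s, t_e).
    return softmax(gaussian(n, t_s, sigma)), softmax(gaussian(n, t_e, sigma))

def unify_proposals(candidates, n, sigma=2.0):
    # Eq. (4): confidence-weighted mixture over candidates [(t_s, t_e, r), ...].
    p_s = sum(r * gaussian(n, ts, sigma) for ts, te, r in candidates)
    p_e = sum(r * gaussian(n, te, sigma) for ts, te, r in candidates)
    return softmax(p_s), softmax(p_e)
```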
#### 3.2.2 Student Network.

For each video, we extract its visual features \(\mathbf{V}\in\mathbb{R}^{n\times d_{v}}\) with a pre-trained convolutional neural network [2], where \(n\) is the length of the extracted feature sequence. For each query \(Q\), we initialize the word features \(\mathbf{Q}\in\mathbb{R}^{m\times d_{q}}\) with GloVe embeddings. We first project \(\mathbf{V}\) and \(\mathbf{Q}\) into the same dimension \(d\) by projection matrices and incorporate a position embedding into every input of both the video and query sequences. Then we feed the results into the VisualEncoder and QueryEncoder, respectively:

\[\begin{split}\mathbf{V}^{\prime}&=\texttt{VisualEncoder}(\texttt{FFN}_{\texttt{v}}(\mathbf{V})+E_{\texttt{p}})\\ \mathbf{Q}^{\prime}&=\texttt{QueryEncoder}(\texttt{FFN}_{\texttt{q}}(\mathbf{Q})+E_{\texttt{p}})\end{split} \tag{5}\]

where \(\texttt{FFN}\) denotes the projection matrices and \(E_{\texttt{p}}\) denotes the positional embeddings. The VisualEncoder and QueryEncoder consist of stacked 1D convolutional blocks that learn representations by carrying knowledge from neighboring tokens. To enhance the cross-modal interactions between visual and textual features, we utilize the context-query attention (CQA) strategy [12] and aggregate text information for each visual element. Specifically, we calculate the similarity scores \(\mathcal{S}\in\mathbb{R}^{n\times m}\) between each visual feature and query feature. Then the attention weights of visual-to-query (\(\mathcal{A}\)) and query-to-visual (\(\mathcal{B}\)) are computed as:

\[\begin{split}\mathcal{A}&=\mathcal{S}_{\texttt{r}}\cdot\mathbf{Q}^{\prime}\in\mathbb{R}^{n\times d}\\ \mathcal{B}&=\mathcal{S}_{\texttt{r}}\cdot\mathcal{S}_{\texttt{c}}^{T}\cdot\mathbf{V}^{\prime}\in\mathbb{R}^{n\times d}\end{split} \tag{6}\]

where \(\mathcal{S}_{\texttt{r}}\) and \(\mathcal{S}_{\texttt{c}}\) are the row-wise and column-wise normalizations of \(\mathcal{S}\) by the softmax operation, respectively. Finally, the output of visual-query attention is written as:

\[\mathbf{V}^{qv}=\texttt{FFN}\big{(}[\mathbf{V}^{\prime};\mathcal{A};\mathbf{V}^{\prime}\odot\mathcal{A};\mathbf{V}^{\prime}\odot\mathcal{B}]\big{)} \tag{7}\]

where \(\mathbf{V}^{qv}\in\mathbb{R}^{n\times d}\), \(\texttt{FFN}\) is a single feed-forward layer, and \(\odot\) denotes element-wise multiplication. \(\mathbf{V}^{qv}\) is the fused multi-modal semantic feature with visual and query attention. Then we follow [26] and calculate \(\mathbf{P_{s}}\) and \(\mathbf{P_{e}}\). Hence, the prediction part of the TSGV model can be defined as:

\[(\mathbf{P_{s}},\mathbf{P_{e}})=\texttt{Predictor}(\mathbf{V}^{qv}),\quad\mathbf{P_{s}},\mathbf{P_{e}}\in\mathbb{R}^{n} \tag{8}\]

#### 3.2.3 Knowledge Aggregation Unit.

Our goal is to combine all the unified predictions from \(b\) branches to establish a strong teacher distribution. Previous image classification work [31] adopted a simple convolution block as the gate module to generate an importance score for each branch. But a simple convolution block cannot effectively capture the contextual representation due to the temporal scale variation problem that widely exists in video tasks; capturing multi-scale information is required to handle this problem. Inspired by previous works [9], we design the Knowledge Aggregation Unit (KAU), which consists of multiple parallel transformations with different receptive fields, leveraging both local and global information to obtain a more accurate target probability. The architecture of the proposed KAU is depicted in Figure 3.

Figure 2. An overview of the proposed framework. EMTM mainly consists of three components: the student model, the shared encoder, and the KAU. The shared encoder with isomorphic structures is utilized to align the hidden states. Heterogeneous model outputs are unified into 1D probability distributions, as shown on the right; the KAU then adaptively determines the importance weights of different teachers with respect to a specific instance based on both the teacher and student representations. During the inference stage, only the student model is adopted for fast TSGV.
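As a concrete reference for Eqs. (6)-(7), the following PyTorch module sketches the CQA fusion step. This is our own minimal rendering, assuming single-sample (unbatched) inputs; the class name and layer choices are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextQueryAttention(nn.Module):
    """Minimal sketch of Eqs. (6)-(7): fuse visual and query features."""

    def __init__(self, dim):
        super().__init__()
        self.out = nn.Linear(4 * dim, dim)  # the FFN over the concatenation in Eq. (7)

    def forward(self, v, q):
        # v: (n, d) visual features V', q: (m, d) query features Q'.
        s = v @ q.t()                 # similarity scores S in R^{n x m}
        s_r = F.softmax(s, dim=1)     # row-wise normalization S_r
        s_c = F.softmax(s, dim=0)     # column-wise normalization S_c
        a = s_r @ q                   # visual-to-query attention A, Eq. (6)
        b = s_r @ s_c.t() @ v         # query-to-visual attention B, Eq. (6)
        fused = torch.cat([v, a, v * a, v * b], dim=-1)
        return self.out(fused)        # V^{qv} in R^{n x d}, Eq. (7)
```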
To preserve more of the original information, we first take the video features \(\mathbf{V}^{\prime}\) in Eq. (5) as input and then add convolution layers. The convolution operation is conducted with a small kernel size of 3 initially, which is then consistently increased to 5 and 7. Further, we incorporate the average pooling of the query features \(\mathbf{Q}^{\prime}\) in Eq. (5) for richer representations. Then we concatenate all the splits and obtain the intermediate vector \(v\), denoted as:

\[v=[q_{\textit{avg}},g([v_{\textit{conv3}},v_{\textit{conv5}},v_{\textit{conv7}}])] \tag{9}\]

where \(q_{\textit{avg}}\) denotes the result of \(\mathbf{Q}^{\prime}\) after average pooling, \(g(\cdot)\) denotes the global pooling function, and \(v_{\textit{conv3}}\), \(v_{\textit{conv5}}\), and \(v_{\textit{conv7}}\) denote the results after the convolution layers with kernel sizes 3, 5, and 7, respectively. After passing through a fully connected layer \(FC\), a channel-wise softmax operator is applied to obtain the soft attention \(a\):

\[a=Softmax(FC(v))\in\mathbb{R}^{2b\times n} \tag{10}\]

where \(b\) denotes the number of teacher branches; the factor \(2b\) arises because there are two probability distributions (i.e., start and end). Finally, we fuse the prediction results from the multiple branches via an element-wise summation to obtain the weighted ensemble probability:

\[\widetilde{P}=\sum_{i=1}^{b}a^{i}\otimes P^{i}\in\mathbb{R}^{2\times n} \tag{11}\]

where \(\widetilde{P}\) denotes the ensemble probability, \(P^{i}\in\mathbb{R}^{2\times n}\) is the stacked start and end distribution from the \(i\)-th teacher branch, and \(\otimes\) refers to channel-wise multiplication. Our experiments (see Section 4.5.1) show that the weights generated by the KAU achieve better distillation performance.
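A compact PyTorch sketch of the KAU described above follows. For brevity it reduces the attention of Eq. (10) to one scalar weight per branch and boundary rather than a full \(2b\times n\) map, and assumes unbatched inputs; the class and argument names are our own.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KnowledgeAggregationUnit(nn.Module):
    """Simplified sketch of Eqs. (9)-(11): instance-wise teacher weighting."""

    def __init__(self, dim, num_teachers):
        super().__init__()
        # Parallel multi-scale transformations with receptive fields 3, 5, 7.
        self.convs = nn.ModuleList(
            [nn.Conv1d(dim, dim, k, padding=k // 2) for k in (3, 5, 7)]
        )
        self.fc = nn.Linear(4 * dim, 2 * num_teachers)

    def forward(self, v_feat, q_feat, teacher_probs):
        # v_feat: (d, n) visual features V', q_feat: (m, d) query features Q',
        # teacher_probs: (b, 2, n) unified start/end distributions per teacher.
        pooled = [c(v_feat.unsqueeze(0)).squeeze(0).mean(dim=-1) for c in self.convs]
        q_avg = q_feat.mean(dim=0)                   # average-pooled query
        v = torch.cat([q_avg] + pooled, dim=-1)      # intermediate vector, Eq. (9)
        a = F.softmax(self.fc(v), dim=-1)            # soft attention, Eq. (10)
        a = a.view(teacher_probs.size(0), 2, 1)      # one weight per branch/boundary
        return (a * teacher_probs).sum(dim=0)        # ensemble P~, Eq. (11)
```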
#### 3.2.4 Shared Encoder Strategy.

When the knowledge in the soft label backpropagates from the back of the network to the front, the shallow layers hardly enjoy the benefit, due to the non-linear activation functions and the dropout design. Inspired by the feature invariance of shallow layers [25], we share several shallow layers of the student with an isomorphic teacher. Through collaborative training with the teacher network, the shallow layers can acquire additional knowledge. Specifically, the student and an isomorphic teacher share their visual and query encoders, as shown in Figure 2. The encoders consist of several Conv1D layers in our network, which are lightweight and fast by their inherent characteristics. The VisualEncoder, QueryEncoder, and FFN layers in Eq. (5) denote the shared layers in our network.

### Training and Inference

#### 3.3.1 TSGV Loss.

The overall training loss of our model is described as follows. For the student and the isomorphic teacher, the hard loss (i.e., label loss) is used to optimize the distributions of the start/end boundaries:

\[\begin{split}& L_{\textit{loc}}^{st}=f_{CE}(P^{st},Y)\\ & L_{\textit{loc}}^{tc}=f_{CE}(P^{tc},Y)\end{split} \tag{12}\]

where \(f_{CE}\) is the cross-entropy function and \(Y\) denotes the one-hot labels for the start and end boundaries of the ground truth. Similarly, we encourage the ensemble probability to get closer to the ground-truth distribution:

\[L_{loc}^{ens}=f_{CE}(\widetilde{P},Y) \tag{13}\]

As discussed previously, the learned ensemble information serves as complementary cues that provide an enhanced supervisory signal to our student model. We therefore introduce multi-teacher distillation learning, which transfers the rich knowledge in the form of softened labels. The formulation is given by:

\[L_{dis}=f_{KL}\big{(}softmax(P^{st},t),softmax(\widetilde{P},t)\big{)} \tag{14}\]

where \(f_{KL}\) represents the KL divergence and \(t\) is the temperature in knowledge distillation, which controls the smoothness of the output distribution. Based on the above design, the overall objective for a training video-query pair is formulated as:

\[L=L_{loc}^{st}+L_{loc}^{tc}+L_{loc}^{ens}+\alpha L_{dis} \tag{15}\]

where \(\alpha\) is a balance term.

\begin{table}
\begin{tabular}{c|c|cccc|cccc|cccc}
\hline \hline
\multirow{2}{*}{Method} & \multirow{2}{*}{Year} & \multicolumn{4}{c|}{Charades-STA} & \multicolumn{4}{c|}{ActivityNet} & \multicolumn{4}{c}{TACoS} \\
\cline{3-14}
 & & FLOPs (B) & Params (M) & Time (ms) & sumACC & FLOPs (B) & Params (M) & Time (ms) & sumACC & FLOPs (B) & Params (M) & Time (ms) & sumACC \\
\hline \hline
SCDM & 2019 & 16.5000 & 12.8800 & - & 87.87 & 260.2300 & 15.6500 & - & 56.61 & 260.2300 & 15.6500 & - & - \\
2D-TAN & 2020 & 52.2616 & 69.0606 & 13.3425 & 66.05 & 1067.9000 & 82.4400 & 77.9903 & 71.43 & 1067.9000 & 82.4400 & 77.9903 & *36.82 \\
VSLNet & 2020 & 0.0300 & 0.7828 & 8.0020 & 77.50 & 0.0521 & 0.8005 & 8.9893 & 69.38 & 0.0630 & 0.8005 & 8.9893 & 44.30 \\
SeqPAN & 2021 & 0.0209 & 1.1863 & 10.5168 & 102.20 & 0.0214 & 1.2143 & 13.7138 & 73.87 & 0.0218 & 1.2359 & 23.3025 & **67.71** \\
EMB & 2022 & 0.0885 & 2.2168 & 22.3900 & 97.58 & 0.2033 & 6.1515 & 25.0871 & 70.88 & 0.2817 & 2.2172 & 23.6349 & 60.36 \\
EAMAT & 2022 & 1.2881 & 94.1215 & 56.1753 & 103.65 & 4.1545 & 93.0637 & 125.7822 & 60.94 & 4.1545 & 93.0637 & 125.7822 & 64.98 \\
BAN-APR & 2022 & 9.4527 & 34.6491 & 19.9767 & **105.96** & 25.4688 & 45.6714 & 44.8587 & **77.79** & 25.4688 & 45.6714 & 44.8587 & *52.10 \\
\hline \hline
CPL & 2022 & 3.4444 & 5.3757 & 26.8451 & 71.63 & 3.8929 & 7.0115 & 26.4423 & 49.14 & - & - & - & - \\
CNM & 2022 & 0.5260 & 5.3711 & 5.4482 & 50.10 & 0.5063 & 7.0074 & 4.8629 & *48.96 & - & - & - & - \\
\hline
FVMR & 2021 & - & - & - & 88.75 & - & - & - & 71.85 & - & - & - & - \\
CCA & 2022 & 137.2984 & 79.7671 & 26.9734 & 89.41 & 151.1023 & 22.5709 & 31.5400 & *75.95 & 151.1023 & 22.5709 & 31.5400 & 50.90 \\
\hline \hline
EMTM (Ours) & & **0.0081** & **0.6569** & **4.7998** & 92.80 & **0.0084** & **0.6848** & **3.5431** & 70.91 & **0.0087** & **0.7065** & **4.5737** & 58.24 \\
\hline \hline
\end{tabular}
\end{table} Table 1. Efficiency analysis on Charades-STA, ActivityNet, and TACoS. sumACC is the sum of "[email protected]" and "[email protected]". All the data are measured with strict adherence to the source code in the same environment. * denotes the accuracy we reproduce.

Figure 3. Illustration of the Knowledge Aggregation Unit, which exploits the multi-scale information from various teachers to generate higher-quality knowledge. The final ensemble probability distribution \(\widetilde{P}\) is obtained by the weighted sum over all individual branches.
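Putting Eqs. (12)-(15) together, the sketch below renders the training objective in PyTorch for a single video-query pair; \(\alpha\) and the temperature follow the values reported later in Section 4.3, while the tensor shapes and helper name are our assumptions rather than the authors' code.

```python
import torch
import torch.nn.functional as F

def emtm_loss(p_student, p_teacher_iso, p_ensemble, y_start, y_end,
              alpha=0.1, t=3.0):
    """Sketch of Eqs. (12)-(15). p_*: (2, n) stacked start/end logits;
    y_start, y_end: scalar long tensors with ground-truth boundary indices."""
    def loc(p):
        # Eqs. (12)-(13): cross-entropy against the one-hot boundary labels Y.
        return (F.cross_entropy(p[0:1], y_start.view(1)) +
                F.cross_entropy(p[1:2], y_end.view(1)))

    l_st, l_tc, l_ens = loc(p_student), loc(p_teacher_iso), loc(p_ensemble)
    # Eq. (14): KL divergence between temperature-softened distributions.
    l_dis = F.kl_div(F.log_softmax(p_student / t, dim=-1),
                     F.softmax(p_ensemble / t, dim=-1),
                     reduction="batchmean")
    return l_st + l_tc + l_ens + alpha * l_dis   # Eq. (15)
```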
#### 3.3.2 Inference.

The teacher and student models are collaboratively trained, while we adopt only the student model for TSGV during testing. The learned rich information serves as complementary cues that provide an enhanced supervisory signal to the TSGV model. Compared with FVMR [6] and CCA [19], we do not pre-compute and store visual features.

## 4 Experiments

### Datasets

To evaluate the performance on TSGV, we conduct experiments on three challenging video moment retrieval datasets; all the queries in these datasets are in English. Details of these datasets are as follows:

**Charades-STA** [5] is composed of videos of daily indoor activities and is built on the Charades dataset [16]. This dataset contains 6,672 videos, 16,128 annotations, and 11,767 moments, with an average video length of 30 seconds. 12,408 and 3,720 moment annotations are labeled for training and testing, respectively.

**ActivityNet Caption** [1] is originally constructed for dense video captioning and contains about 20k YouTube videos with an average length of 120 seconds. As a dual task of dense video captioning, video moment retrieval utilizes the sentence description as a query and outputs the temporal boundary of each description.

**TACoS** [14] is collected from the MPII Cooking dataset [14] and has 127 videos with an average length of 286.59 seconds. TACoS has 18,818 query-moment pairs, all about cooking scenes. We follow the same splits as in [5], where 10,146, 4,589, and 4,083 annotations are used for training, validation, and testing, respectively.

### Evaluation Metrics

Following existing video grounding works, we evaluate the performance on two main metrics:

**mIoU:** "mIoU" is the average Intersection over Union of the predictions over all testing samples. The mIoU metric is particularly challenging for short video moments.

**Recall:** We adopt "R@\(n\), IoU\(=\mu\)" as the evaluation metric, following [5]. "R@\(n\), IoU\(=\mu\)" represents the percentage of language queries having at least one of the top-\(n\) predicted moments whose IoU with the ground truth is larger than \(\mu\). In our experiments, we report the results for \(n=1\) and \(\mu\in\{0.3,0.5,0.7\}\).

**The Metric of Efficiency:** Time, FLOPs, and Params are used to measure the efficiency of a model. Specifically, Time refers to the entire inference time from the input of the video-query pair to the output of the prediction. FLOPs refers to the number of floating-point operations, which measures the complexity of the model. Params refers to the model parameter size excluding the word embedding.
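The two accuracy metrics can be computed as in the short NumPy sketch below, which is a plain restatement of the definitions above rather than an official evaluation script.

```python
import numpy as np

def temporal_iou(pred, gt):
    # pred, gt: (start, end) pairs in seconds.
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = max(pred[1], gt[1]) - min(pred[0], gt[0])
    return inter / union if union > 0 else 0.0

def evaluate(predictions, ground_truths, thresholds=(0.3, 0.5, 0.7)):
    # predictions[i] is the top-1 (start, end) prediction for query i.
    ious = np.array([temporal_iou(p, g)
                     for p, g in zip(predictions, ground_truths)])
    # "R@1, IoU=mu": fraction of queries with IoU strictly larger than mu.
    recall = {mu: float((ious > mu).mean()) * 100 for mu in thresholds}
    miou = float(ious.mean()) * 100
    return recall, miou
```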
### Implementation Details

For the language query \(Q\), we use the 300-D GloVe [13] vectors to initialize each lowercase word; the vectors are fixed during training. Following previous methods, 3D convolutional features (I3D) are extracted to encode the videos. We set the dimension of all hidden layers to 128, the kernel size of the convolutional layers to 7, and the number of heads in multi-head attention to 8 in our model. For all datasets, models are trained for 100 epochs. The batch size is set to 16 and the dropout rate to 0.2. Besides, an early stopping strategy is adopted to prevent overfitting. The whole framework is trained by the Adam optimizer with an initial learning rate of 0.0001. The loss weight \(\alpha\) is set to 0.1 on all datasets. The temperature is set to 1, 3, and 3 on Charades-STA, ActivityNet, and TACoS, respectively. The pre-trained teacher models are selected from SeqPAN, BAN-APR, EAMAT, and CCA, and we use SeqPAN as the isomorphic teacher that shares the encoder. More ablation studies can be found in Section 4.5. All experiments are conducted on an NVIDIA RTX A5000 GPU with 24GB memory. Each experiment is performed three times, and the average performance is reported.

### Comparison with State-of-the-art Methods

We strive to gather the most current approaches and compare our proposed model with the following state-of-the-art baselines on three benchmark datasets:

* Proposal-based Methods: SCDM [24], 2D-TAN [29], BAN-APR [3].
* Proposal-free Methods: VSLNet [28], SeqPAN [26], EMB [8], EAMAT [21].
* Weakly Supervised Methods: CPL [30], CNM [30].
* Fast Methods: FVMR [6], CCA [19].

The best performance is highlighted in **bold** and the second best is underlined in the tables.

**Overall Efficiency-Accuracy Analysis** Considering that the fast TSGV task pays as much attention to efficiency as to accuracy, we evaluate FLOPs, Params, and Time for each model. For a fair comparison, the batch size is set to 1 for all methods during inference. Besides, we also calculate the sum of the accuracies in terms of "[email protected]" and "[email protected]", named sumACC, to evaluate the overall performance of each model. As Table 1 shows, our method surpasses all other methods and achieves the highest speed and the minimal FLOPs and Params on all three datasets. Our proposed method requires at least 2,000 times fewer FLOPs than the state-of-the-art proposal-based models (SCDM and 2D-TAN). According to sumACC, we also notice that our proposed EMTM outperforms these two models by gains of up to 26.75% on Charades-STA and 14.30% on ActivityNet. Although proposal-free approaches such as VSLNet, SeqPAN, and EMB also achieve favorable performance with low computational expense, our proposed method still performs better overall. Compared to the baseline method SeqPAN, although there is a slight accuracy decrease, we achieve 4x fewer FLOPs and 2x faster inference. Although the parameter size of VSLNet is at the same level as that of our method, we outperform it significantly in terms of accuracy; for instance, EMTM achieves a 15.30% absolute improvement in sumACC on Charades-STA. When it comes to CCA, which is proposed for fast TSGV, EMTM requires 16,950x fewer FLOPs and a 121x smaller model parameter size on Charades-STA. The above comparison illustrates that our method has significant efficiency and accuracy advantages.

**Accuracy Analysis** We compare the performance of our proposed method against extensive video temporal grounding models on the three benchmark datasets. As shown in Table 2, our method performs better than the other methods on most metrics. Compared with FVMR and CCA, our model performs better on all metrics. In particular, EMTM achieves an absolute improvement of 4.58% on Charades-STA and 5.34% on TACoS on the metric "R@1, IoU=0.7". Note that "R@1, IoU=0.7" is a crucial criterion for determining whether a TSGV model is accurate, and the comparison on this metric shows that our method can predict results of higher quality. Then, we compare our model with more TSGV methods in detail. First, we compare EMTM with previous proposal-based methods: SCDM and 2D-TAN.
From the results in Table 2, we observe that our EMTM achieves strong performance compared with the aforementioned methods on most of the metrics. We also notice that EMTM surpasses 2D-TAN on Charades-STA and TACoS by (16.55%, 11.76%) in terms of "R@1, IoU=0.7". Moreover, we compare our method with previous proposal-free methods: VSLNet, SeqPAN, EMB, and EAMAT. Compared with them, our proposed EMTM achieves better performance. On ActivityNet, we outperform VSLNet by gains of (1.51%, 2.14%) in terms of "R@1, IoU=0.5" and "mIoU". Besides, it also surpasses the recent work EAMAT with an average 7.22% improvement on the metrics "R@1, IoU=0.3, 0.5, 0.7". ActivityNet is larger in scale than the other two datasets, and these results indicate that our method also performs well in a more complex visual-text environment. In addition, our model offers apparent benefits over weakly supervised methods, indicating that our method can localize moments with higher quality. Our EMTM obtains more accurate results because multiple teachers endow the model with the ability to comprehend complex cross-modal relationships. In fact, the simple but effective use of transferred knowledge replaces large and repetitive cross-modal interaction and reduces the time and computational cost. It validates that EMTM can efficiently and effectively localize the target moment boundary.

### Ablation Studies

In this part, we perform in-depth ablation studies to analyze the effectiveness of EMTM. All experiments are performed three times with different random seeds to eliminate contingency.

\begin{table}
\begin{tabular}{l|c|c|cccc}
\hline \hline
Method & Shared Encoder & Label Distillation & [email protected] & [email protected] & [email protected] & mIoU \\
\hline
EMTM w/o SE-LD & ✗ & ✗ & 62.06\({}^{+0.05}_{-0.05}\) & 43.90\({}^{+0.25}_{-0.25}\) & 25.63\({}^{+0.05}_{-0.05}\) & 44.54\({}^{+0.25}_{-0.05}\) \\
EMTM w/o SE & ✗ & ✓ & 63.19\({}^{+0.05}_{-0.05}\) & 44.11\({}^{+0.12}_{-0.25}\) & 25.74\({}^{+0.12}_{-0.12}\) & 65.15\({}^{+0.05}_{-0.05}\) \\
EMTM w/o LD & ✓ & ✗ & 62.98\({}^{+0.33}_{-0.38}\) & **44.48\({}^{+0.19}_{-0.15}\)** & **26.10\({}^{+0.12}_{-0.12}\)** & 52.24\({}^{+0.11}_{-0.11}\) \\
EMTM & ✓ & ✓ & **63.20\({}^{+0.19}_{-0.19}\)** & 44.73\({}^{+0.26}_{-0.35}\) & – & **45.05\({}^{+0.17}_{-0.33}\)** \\
\hline \hline
\end{tabular}
\end{table} Table 4.
Effects of Main Components on ActivityNet.

\begin{table}
\begin{tabular}{c|cccc|cccc|cccc}
\hline \hline
\multirow{2}{*}{Method} & \multicolumn{4}{c|}{Charades-STA} & \multicolumn{4}{c|}{ActivityNet} & \multicolumn{4}{c}{TACoS} \\
\cline{2-13}
 & [email protected] & [email protected] & [email protected] & mIoU & [email protected] & [email protected] & [email protected] & mIoU & [email protected] & [email protected] & [email protected] & mIoU \\
\hline \hline
SCDM & - & 54.44 & 33.43 & - & 54.80 & 36.75 & 19.86 & - & 26.11 & 21.17 & - & - \\
2D-TAN & - & 42.80 & 23.25 & - & 58.75 & 44.05 & 27.38 & - & 35.17 & 25.17 & 11.65 & 24.16 \\
VSLNet & 64.30 & 47.31 & 30.19 & 45.15 & 63.16 & 43.22 & 26.16 & 43.19 & 29.61 & 24.27 & 20.03 & 24.11 \\
SeqPAN & 73.84 & 60.86 & 41.34 & 53.92 & 61.65 & 45.50 & 28.37 & 45.11 & 48.64 & 39.64 & 28.07 & 37.17 \\
EMB & 72.50 & 58.33 & 39.25 & 53.09 & 64.13 & 44.81 & 26.07 & 45.59 & 50.46 & 37.82 & 22.54 & 35.49 \\
EAMAT & 74.19 & 61.69 & 41.96 & 54.45 & 55.33 & 38.07 & 22.87 & 40.21 & 50.11 & 38.16 & 26.82 & 36.43 \\
BAN-APR & *74.05 & 63.68 & 42.28 & *54.15 & *65.11 & 48.12 & 29.67 & *45.87 & 48.24 & 33.74 & *17.44 & *32.95 \\
\hline \hline
CPL & 66.40 & 49.24 & 22.39 & 43.48 & 55.73 & 31.37 & 12.32 & 36.82 & - & - & - & - \\
CNM & 60.04 & 35.15 & 14.95 & - & 55.68 & 33.33 & *12.81 & *36.15 & - & - & - & - \\
\hline \hline
FVMR & - & 55.01 & 33.74 & - & 60.63 & 45.00 & 26.85 & - & 41.48 & 29.12 & - & - \\
CCA & 70.46 & 54.19 & 35.22 & 50.02 & 61.99 & 46.58 & 29.37 & *45.11 & 45.30 & 32.83 & 18.07 & - \\
\hline \hline
EMTM (Ours) & 72.70 & 57.91 & 39.80 & 53.00 & 63.20 & 44.73 & 26.08 & 45.33 & 45.78 & 34.83 & 23.41 & 34.44 \\
\(\Delta_{SOTA}\) & \(\uparrow\) 2.24 & \(\uparrow\) 2.90 & \(\uparrow\) 4.58 & \(\uparrow\) 2.98 & \(\uparrow\) 1.21 & \(\downarrow\) 1.85 & \(\downarrow\) 3.29 & \(\uparrow\) 0.22 & \(\uparrow\) 0.48 & \(\uparrow\) 2.42 & \(\uparrow\) 5.34 & - \\
\hline \hline
\end{tabular}
\end{table} Table 2. Performance comparison with the state-of-the-art methods.

#### 4.5.1. Effects of Main Components.

In our proposed framework, we design the shared encoder (SE) to learn shallow knowledge from the isomorphic teacher, while the knowledge contained in soft targets is taught via label distillation (LD). To better reflect the effects of these two main components, we measure the performance of different combinations. As Tables 3 and 4 show, each component has a positive effect on the TSGV task. On Charades-STA, the full model outperforms "w/o SE" by a gain of 1.44% on the metric "R@1, IoU=0.7" and exceeds "w/o LD" by (0.08%, 1.40%, 2.26%, 0.61%) on all metrics. Besides, the full model also outperforms "w/o SE-LD" by a large margin on all metrics, achieving a significant 2.26% improvement in terms of "R@1, IoU=0.7". Similarly, our full model makes significant improvements on every metric compared with the variant "EMTM w/o SE-LD" on ActivityNet.
Obviously, the interaction between the shared encoder and multi-teacher label distillation leads to a favorable improvement in performance, which proves that the various teachers inject useful knowledge into our student model. As a result, the student model becomes stronger than every individual teacher.

#### 4.5.2. Effect of Number of Teacher Models.

We investigate the influence of different numbers of teacher models on Charades-STA. As shown in Figure 4, the performance presents a rising tendency as the number of teachers increases. If SeqPAN is removed, the accuracy is reduced by about 1% in terms of "R@1, IoU=0.7". When utilizing only EAMAT, the performance reaches 38.25% and 52.69%, compared to the original student's 32.71% and 52.50%, on "R@1, IoU=0.7" and mIoU, which proves the effectiveness of EAMAT as a teacher. However, there still exists a gap of nearly 2% between our full model and its single-teacher variant, showing that one teacher is not enough. In summary, our improvements come not only from soft targets with one single teacher, but also from the learning of structural knowledge and intermediate-level knowledge with fused multi-teacher teaching. Multiple teachers make knowledge distillation more flexible; the ensemble helps improve the training of the student and transfers related information about the examples to the student.

#### 4.5.3. Effect of Different Degrees of Lightweight Models.

We evaluate the influence of different degrees of lightweighting by adjusting the hidden dimension \(d\) on Charades-STA. As shown in Figure 5, as \(d\) decreases, the FLOPs and model parameter size decline, which also reduces the performance of our model. From \(d=128\) to \(d=64\), both "R@1, IoU=0.7" and mIoU drop by about 5%, while the FLOPs and model parameter size decrease by only a small margin. As a trade-off, we select 128 as the hidden dimension in our full model.

### Qualitative Analysis

Two samples of predictions on Charades-STA are depicted in Figure 6. In general, the moments retrieved by the full EMTM are closer to the ground truth than those retrieved by EMTM without the shared encoder and label distillation strategy. The first sample indicates that our approach can refine the predictions when the basic model has already obtained satisfactory results. The second sample shows that the basic model tends to predict positions near the boundary, possibly due to its limited understanding of the video; as a result, the model relies on biased positional information to make moment predictions. Utilizing the shared encoder and label distillation approach provides additional information that enables the model to predict the moment boundary more precisely.

## 5. Conclusion

In this paper, we focus on the efficiency of models for Temporal Sentence Grounding in Videos and try to expand the efficiency interval to cover the entire TSGV model. A knowledge distillation framework (EMTM) is proposed, which utilizes label distillation from multiple teachers and a shared encoder strategy. We additionally design corresponding processes to unify heterogeneous outputs, enabling smooth knowledge distillation in the subsequent step.

Figure 4. Effect of the number of teacher models on Charades-STA. In detail, we adopt EAMAT, EAMAT & BAN-APR, and EAMAT & BAN-APR & SeqPAN, which correspond to one teacher, two teachers, and three teachers, respectively.

Figure 5. Effect of different degrees of lightweighting obtained by adjusting the hidden dimension \(d\).

Figure 6. Examples of visualization of EMTM w/o SE-LD and EMTM on Charades-STA.
Our model achieves high effectiveness and efficiency at the same time, and the experimental results demonstrate that our method exhibits strong generalization. In the future, we will pay attention to video feature extraction in TSGV, which is also a time-consuming process. In real scenarios such as surveillance video retrieval with raw videos as input, this issue is much more critical. We intend to explore a lightweight end-to-end model that includes the video feature extraction stage, thereby easing the constraints of computational capacity and high-demand storage.
2304.05081
Robust beam splitter with fast quantum state transfer through a topological interface
The Su-Schrieffer-Heeger (SSH) model, commonly used for robust state transfers through topologically protected edge pumping, has been generalized and exploited to engineer diverse functional quantum devices. Here, we propose to realize a fast topological beam splitter based on a generalized SSH model by accelerating the quantum state transfer (QST) process essentially limited by adiabatic requirements. The scheme involves delicate orchestration of the instantaneous energy spectrum through exponential modulation of nearest neighbor coupling strengths and onsite energies, yielding a significantly accelerated beam splitting process. Due to properties of topological pumping and accelerated QST, the beam splitter exhibits strong robustness against parameter disorders and losses of system. In addition, the model demonstrates good scalability and can be extended to two-dimensional crossed-chain structures to realize a topological router with variable numbers of output ports. Our work provides practical prospects for fast and robust topological QST in feasible quantum devices in large-scale quantum information processing.
Jia-Ning Zhang, Jin-Xuan Han, Jin-Lei Wu, Jie Song, Yong-Yuan Jiang
2023-04-11T09:27:23Z
http://arxiv.org/abs/2304.05081v1
# Robust beam splitter with fast quantum state transfer through a topological interface

###### Abstract

The Su-Schrieffer-Heeger (SSH) model, commonly used for robust state transfers through topologically protected edge pumping, has been generalized and exploited to engineer diverse functional quantum devices. Here, we propose to realize a fast topological beam splitter based on a generalized SSH model by accelerating the quantum state transfer (QST) process essentially limited by adiabatic requirements. The scheme involves delicate orchestration of the instantaneous energy spectrum through exponential modulation of nearest-neighbor coupling strengths and onsite energies, yielding a significantly accelerated beam splitting process. Due to the properties of topological pumping and accelerated QST, the beam splitter exhibits strong robustness against parameter disorders and losses of the system. In addition, the model demonstrates good scalability and can be extended to two-dimensional crossed-chain structures to realize a topological router with variable numbers of output ports. Our work provides practical prospects for fast and robust topological QST in feasible quantum devices in large-scale quantum information processing.

**Keywords**: quantum state transfer, beam splitter, topological router.

## I Introduction

In large-scale quantum information processing, information encoded in quantum states needs to be transmitted in a coherent manner between different nodes within a quantum network [1; 2; 3; 4; 5; 6]. In the last few years, great efforts have been devoted to exploring the optimal protocol for achieving efficient state transfer in the simplest and most common one-dimensional spin-\(1/2\) chain, and the results can be further applied to various quantum systems such as quantum dots [7; 8; 9; 10; 11], coupled waveguides [12; 13; 14], superconducting circuits [15; 16; 17], and coupled-cavity arrays [18; 19]. However, due to the existence of inevitable manufacturing imperfections within the devices and the decoherence effect induced by the environment, the reliability of quantum information transmission may be significantly reduced [20; 21; 22; 23; 24]. Therefore, there is an urgent need to improve the fidelity of quantum state transfer (QST) and circumvent the impact of different sources of disorder and decoherence.

Recently, the discovery of topological insulators has opened up new prospects for efficient and robust quantum information processing [25; 26; 27; 28; 29]. Owing to their nontrivial topological energy band structures in momentum space, which are inequivalent to those of traditional insulators, topological insulators simultaneously host insulating bulk states and conducting edge states, which can be characterized by topological invariants rooted in the global geometric properties of the system [30; 31; 32; 33]. These conducting edge states localized at the boundary of the system are inherently protected by the energy gap and, as a consequence, are immune to mild manufacturing defects or environmental perturbations and able to propagate along the boundary unidirectionally without generating back scattering [34; 35; 36].
These prominent features make topological edge states a promising candidate for robust QST [6; 37-62]. However, existing protocols mainly focus on topologically protected state transfer from one single node to the other, and are rarely combined with specific topological quantum devices, which is not conducive to the construction of large-scale quantum networks. In this work, we propose to realize fast and robust QST in a symmetrical topological beam splitter based on an odd-sized SSH model with alternating onsite energies and a topological interface, in which exponential modulation of the nearest-neighbor coupling strengths and onsite energies accounts for accelerating the transfer process. The introduction of alternating onsite energies and the topological interface opens up a topological channel, through which the input state initially prepared at the interface site can be transferred to the two end sites with equal probabilities and phases.
We propose exponential modulation of the nearest-neighbor coupling strengths and onsite energies to accelerate the state transfer process, whose speed is intrinsically limited by adiabatic requirements. The effect of different exponential parameters on the performance of the scheme is examined, and the optimal exponential parameters for chains of different sizes are shown. Furthermore, we investigate the robustness of the topological beam splitter by taking into consideration the impact of diagonal and off-diagonal disorders and losses of the system. In addition, we prove the scalability of the symmetrical beam splitter and generalize the model to two-dimensional crossed-chain structures that can be employed to implement a topological router whose number of output ports can be adjusted conveniently by cross-linking different numbers of identical even-sized SSH chains via one mutual site. Finally, we stress that fast and robust QST in the proposed beam splitter and router can be realized in superconducting circuit devices under current experimental conditions, which has numerous potential applications in efficient quantum information processing and the construction of large-scale quantum networks.

## II Physical model and engineering of topological pumping

### Topologically protected edge states for the generalized SSH model

A schematic illustration of the generalized SSH model is shown in Fig. 1, which describes a one-dimensional dimerized lattice composed of \(N\) unit cells. The Hamiltonian of the system reads (\(\hbar=1\))

\[H=\sum_{n}V_{a}a_{n}^{\dagger}a_{n}+V_{b}b_{n}^{\dagger}b_{n}+\left(J_{1}a_{n}^{\dagger}b_{n}+J_{2}a_{n+1}^{\dagger}b_{n}+\text{H.c.}\right), \tag{1}\]

where the first two terms represent the onsite energies of the two types of sites, while the remaining terms represent the nearest-neighbor coupling between two adjacent sites. Here, \(a_{n}\) (\(a_{n}^{\dagger}\)) and \(b_{n}\) (\(b_{n}^{\dagger}\)) are the annihilation (creation) operators of a particle at the \(n\)th _a_- and _b_-type sites with onsite energies \(V_{a}\) and \(V_{b}\), and \(J_{1}\) and \(J_{2}\) are the respective intracell and intercell coupling coefficients, assumed to be real and positive. For periodic boundary conditions (PBC), we can use the Bloch theorem and rewrite the bulk Hamiltonian as

\[H_{\text{bulk}}=\sum_{n=1}^{N}V_{a}a_{n}^{\dagger}a_{n}+V_{b}b_{n}^{\dagger}b_{n}+(J_{1}a_{n}^{\dagger}b_{n}+J_{2}a_{m+1}^{\dagger}b_{n}+\text{H.c.}), \tag{2}\]

with \(m=n\bmod N\). After performing the Fourier transformations \(a_{n}=\frac{1}{\sqrt{N}}\sum_{k}e^{ikn}a_{k}\) and \(b_{n}=\frac{1}{\sqrt{N}}\sum_{k}e^{ikn}b_{k}\), with the wavenumber \(k\in\left\{\frac{2\pi}{N},\frac{4\pi}{N},\cdots,\frac{2N\pi}{N}\right\}\) taken from the first Brillouin zone, the Hamiltonian can be transformed into momentum space, represented by \(H_{\text{bulk}}=\sum_{k}V_{a}a_{k}^{\dagger}a_{k}+V_{b}b_{k}^{\dagger}b_{k}+\left[\left(J_{1}+J_{2}e^{-ik}\right)a_{k}^{\dagger}b_{k}+\text{H.c.}\right]\). For each wavenumber \(k\), the bulk Hamiltonian in momentum space under the basis \((a_{k},b_{k})^{T}\) can be expressed as

\[H_{k}=\left(\begin{array}{cc}V_{a}&J_{1}+J_{2}e^{-ik}\\ J_{1}+J_{2}e^{ik}&V_{b}\end{array}\right). \tag{3}\]

We first consider the case of the standard SSH model, where there is no onsite energy on the lattice sites.
By diagonalizing \(H_{k}\), the eigenvalues are obtained as \(E_{\pm}(k)=\pm\sqrt{J_{1}^{2}+J_{2}^{2}+2J_{1}J_{2}\cos k}\), corresponding to the eigenstates

\[\left|\psi_{\pm}(k)\right\rangle=\frac{1}{\sqrt{2}}\left[E_{\pm}(k)/\left(J_{1}+J_{2}e^{ik}\right),1\right]^{T}. \tag{4}\]

The eigenenergy spectrum of the system is divided into two bands, with an energy gap of \(2\Delta\) separating the lower, filled band from the upper, empty band, where \(\Delta=\min_{k}E(k)=\left|J_{1}-J_{2}\right|\). We plot the dispersion relation for three choices of the coupling strengths in Figs. 2(a)-(c). As the coupling strengths range from \(J_{1}>J_{2}\) to \(J_{1}<J_{2}\), the band gap first closes and then reopens at the boundaries of the first Brillouin zone. Introducing the Pauli matrices \(\mathbf{\sigma}=(\sigma_{x},\sigma_{y},\sigma_{z})\) as base vectors, the Hamiltonian can be expressed in the form

\[H(k)=\mathbf{d}\cdot\mathbf{\sigma}, \tag{5}\]

with \(\mathbf{d}=(d_{x},d_{y},d_{z})=(J_{1}+J_{2}\cos k,J_{2}\sin k,V_{a})\). As displayed in Figs. 2(d)-(f), \(\mathbf{d}\) does not enclose the origin as the wavenumber runs across the Brillouin zone for \(J_{1}>J_{2}\), while it encloses the origin for \(J_{1}<J_{2}\); these correspond to two topologically distinct phases, respectively. At the phase transition point \(J_{1}=J_{2}\), bulk eigenstates are available with arbitrarily small energy, and the SSH model behaves like a conductor that can transport electrons from one end of the chain to the other. Otherwise, the SSH model behaves like an insulator. The encirclement of the Hamiltonian \(H_{k}\) can be characterized by the winding number \(w\) defined as

\[w_{\pm}=\frac{i}{\pi}\int_{-\pi}^{\pi}\left\langle\psi_{\pm}(k)\right|\partial_{k}\left|\psi_{\pm}(k)\right\rangle dk. \tag{6}\]

\(w=0\) indicates that \(H_{k}\) does not enclose the origin, corresponding to the topologically trivial phase; \(w=1\) indicates that \(H_{k}\) encloses the origin, corresponding to the topologically nontrivial phase, in which, according to the bulk-boundary correspondence [30], the SSH lattice exhibits edge states in the bulk gap under open boundary conditions (OBC).

Figure 1: Diagrammatic sketch of the generalized SSH model composed of \(N\) unit cells. Each unit cell contains a pair of _a_- (blue dot) and _b_-type (purple dot) sites with onsite energies \(V_{a}\) and \(V_{b}\). Double lines and single lines denote the intracell coupling strength \(J_{1}\) and intercell coupling strength \(J_{2}\) between two adjacent sites, respectively.
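The winding number in Eq. (6) can also be evaluated numerically from the trajectory of \(\mathbf{d}(k)\). The following NumPy sketch, written for this presentation rather than taken from the paper, tracks the phase of \(d_{x}+id_{y}\) across the Brillouin zone and also builds the open-chain Hamiltonian of Eq. (1):

```python
import numpy as np

def winding_number(j1, j2, num_k=2001):
    # Eq. (6) via the accumulated angle of d(k) = (j1 + j2*cos k, j2*sin k).
    k = np.linspace(-np.pi, np.pi, num_k)
    theta = np.unwrap(np.angle(j1 + j2 * np.exp(1j * k)))
    return int(round((theta[-1] - theta[0]) / (2 * np.pi)))

def open_chain_hamiltonian(n_sites, j1, j2, va=0.0, vb=0.0):
    # Eq. (1) under OBC: alternating couplings j1, j2 and onsite energies.
    h = np.diag([va if i % 2 == 0 else vb for i in range(n_sites)]).astype(float)
    for i in range(n_sites - 1):
        h[i, i + 1] = h[i + 1, i] = j1 if i % 2 == 0 else j2
    return h

print(winding_number(0.6, 1.0))   # 1: nontrivial phase (J1 < J2)
print(winding_number(1.0, 0.6))   # 0: trivial phase (J1 > J2)
evals = np.linalg.eigvalsh(open_chain_hamiltonian(41, 0.6, 1.0))
print(np.min(np.abs(evals)))      # near-zero edge mode of the odd-sized chain
```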
We examine how the spectrum of an open chain changes as the intracell and intercell coupling strengths are continuously modulated. For an even-sized SSH model composed of \(2N=40\) lattice sites, the topologically nontrivial phase hosts two edge states exponentially localized at the boundaries of the chain, as shown in Figs. 3(a)-(c). By analytically solving the eigenvalue equation, we get the eigenvalues \(E_{\text{even},\pm}=\pm\left|-J_{2}\frac{\left(-J_{1}/J_{2}\right)^{N}\left[\left(-J_{1}/J_{2}\right)^{2}-1\right]}{\left(-J_{1}/J_{2}\right)^{2N}-1}\right|\), corresponding to the eigenstates

\[\left|\Psi_{\pm}\right\rangle=(\left|L\right\rangle\pm\left|R\right\rangle)/\sqrt{2}, \tag{7}\]

with

\[\begin{split}\left|L\right\rangle&=\left|1,0,-J_{1}/J_{2},0,\cdots,0,(-J_{1}/J_{2})^{n-1},0,\cdots\right\rangle,\\ \left|R\right\rangle&=\left|\cdots,0,(-J_{1}/J_{2})^{N-n},0,\cdots,0,-J_{1}/J_{2},0,1\right\rangle,\end{split} \tag{8}\]

denoting the ideal left and right edge states in the thermodynamic limit (see Appendix for more details). The degenerate gap modes of a finite-sized system in the topologically nontrivial phase take a pair of almost-zero-energy eigenvalues opposite to each other due to chiral symmetry, which makes the eigenstates superpositions of the ideal left and right edge states, localized at both ends of the system. For an odd-sized SSH model composed of \(2N+1=41\) lattice sites, there is always a zero-energy edge state with eigenvector

\[\left|\Psi_{0}\right\rangle=\left|1,0,-J_{1}/J_{2},0,(-J_{1}/J_{2})^{2},\cdots,(-J_{1}/J_{2})^{N}\right\rangle, \tag{9}\]

whose localization position depends on the ratio \(J_{1}/J_{2}\), as demonstrated in Figs. 3(d)-(f).

### Symmetrical beam splitter via edge channel in the Rice-Mele model

We now consider the case of the Rice-Mele model [63], which originates from the standard SSH model by adding alternating onsite potentials \(V_{a}=-V_{b}\). For each wavenumber \(k\), the eigenvalues and corresponding eigenstates can be obtained by analytically solving the eigenvalue equation:

\[\begin{split}& E_{\pm}(k)=\pm\sqrt{V_{a}^{2}+J_{1}^{2}+J_{2}^{2}+2J_{1}J_{2}\cos k},\\ &\left|\psi_{\pm}(k)\right\rangle=N_{k}\left[\left(E_{\pm}(k)+V_{a}\right)/\left(J_{1}+J_{2}e^{ik}\right),1\right]^{T},\end{split} \tag{10}\]

with \(N_{k}\) being the normalization factor. In the odd-sized Rice-Mele model, there is always a gap state with eigenenergy \(V_{a}\) and eigenstate \(\left|\psi_{V_{a}}\right\rangle=\left|1,0,-\frac{J_{1}}{J_{2}},0,\left(-\frac{J_{1}}{J_{2}}\right)^{2},0,\cdots,0,\left(-\frac{J_{1}}{J_{2}}\right)^{N}\right\rangle\), whose localization position can also be modulated by tuning \(J_{1}/J_{2}\), and which can thus be exploited as a topologically protected quantum channel. Setting \(J_{1}/J_{2}=0\) initially and \(J_{1}/J_{2}=+\infty\) finally, and continuously modulating the intracell and intercell coupling strengths, a topologically protected state transfer can be realized from the left edge to the right.

Inspired by topological edge pumping in the Rice-Mele model, a symmetrical topological beam splitter with an equal phase can be obtained based on an odd-sized SSH model with alternating onsite energies and a topological interface. We consider a finite chain comprising \(L=2N+1\) sites, with \(N\) being even, i.e., structured by linking two even-sized SSH chains via one mutual _a_-type site, as schematically shown in Fig. 4. The intracell (intercell) coupling strengths and alternating onsite energies are mirror-symmetric with respect to the interface site.

Figure 2: (a)-(c) Energy spectrum of the SSH lattice in momentum space for three settings of the coupling strengths: (a) \(J_{1}=1,J_{2}=0.6\); (b) \(J_{1}=1,J_{2}=1\); (c) \(J_{1}=0.6,J_{2}=1\). (d)-(f) Winding of the bulk momentum-space Hamiltonian for the three settings as the wavenumber runs across the Brillouin zone.
The system can be described by the following interaction-picture Hamiltonian:

\[H=\sum_{n}\left[V_{a}a_{n}^{\dagger}a_{n}+V_{b}b_{n}^{\dagger}b_{n}\right]+\left[\sum_{n=1}^{N/2}(J_{1}a_{n}^{\dagger}b_{n}+J_{2}a_{n+1}^{\dagger}b_{n})+\sum_{n=N/2+1}^{N}(J_{2}a_{n}^{\dagger}b_{n}+J_{1}a_{n+1}^{\dagger}b_{n})+\text{H.c.}\right]. \tag{11}\]

In the energy spectrum of this system there always exists a gap state with eigenenergy \(V_{a}\) and eigenvector

\[\left|\psi_{V_{a}}\right\rangle=\left|1,0,-\frac{J_{1}}{J_{2}},0,\left(-\frac{J_{1}}{J_{2}}\right)^{2},0,\cdots,0,\left(-\frac{J_{1}}{J_{2}}\right)^{N/2},0,\cdots,0,\left(-\frac{J_{1}}{J_{2}}\right)^{2},0,-\frac{J_{1}}{J_{2}},1\right\rangle,\]

which is localized at the topological interface when \(J_{1}/J_{2}=+\infty\) but localized at both ends of the chain when \(J_{1}/J_{2}=0\). Assisted by this topological edge channel, a topologically protected QST from the interface site to the two end sites with equal probabilities can be realized by continuously modulating the intracell and intercell coupling strengths from \(J_{1}/J_{2}=+\infty\) to \(J_{1}/J_{2}=0\). When regarding the interface site as the input port and the two end sites as two output ports, the whole system is equivalent to a symmetrical topological beam splitter, in which a particle injected into the interface site is transferred to the two endpoints of the chain with equal probabilities. It is worth emphasizing that this beam splitting process along the gap state \(\ket{\psi_{V_{a}}}\) is topologically protected by the band gap between the gap state and its adjacent bulk eigenstates and is thus immune to scattering from inherent disorders and local imperfections.

Figure 3: (a) Energy spectrum of the SSH lattice with 40 sites with varying intracell coupling \(J_{1}\) but fixed intercell coupling \(J_{2}=1\). \(J_{1}<1\) (\(J_{1}>1\)) corresponds to the nontrivial (trivial) topological phase. (b) Energy spectrum (upper panel) and distribution of the gap state (lower panel) for \(J_{1}=0.6\). (c) Energy spectrum (upper panel) and density distribution of one bulk state (lower panel) for \(J_{1}=2\). (d)-(f) Energy spectra and distributions of the zero-energy edge state of the SSH lattice with 41 sites with the same coupling strengths as those in (a)-(c), respectively.

Figure 4: Schematic of the SSH model of size \(L=2N+1\) with alternating onsite energies and an interface. Double and single lines denote the intracell and intercell coupling strengths \(J_{1}\) and \(J_{2}\) between two adjacent sites, respectively. The intracell (intercell) coupling strengths and alternating onsite energies are mirror-symmetric with respect to the interface site. \(N\) is an even number, so that the topological interface falls on an _a_-type site.
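To make the structure of Eq. (11) concrete, the following NumPy sketch, our own illustration rather than the authors' code, assembles the single-particle Hamiltonian matrix of the interfaced chain and checks that the \(E=V_{a}\) gap state is localized at both ends when \(J_{1}/J_{2}\) is small:

```python
import numpy as np

def splitter_hamiltonian(n_cells, j1, j2, va, vb):
    # Eq. (11): chain of L = 2N+1 sites whose couplings are mirror-symmetric
    # about the central a-type interface site (N = n_cells, assumed even).
    L = 2 * n_cells + 1
    h = np.diag([va if i % 2 == 0 else vb for i in range(L)]).astype(float)
    for i in range(L - 1):
        left_half = i < L // 2
        intracell = (i % 2 == 0)
        # Intracell bonds carry J1 on the left half and J2 on the right half.
        h[i, i + 1] = h[i + 1, i] = j1 if intracell == left_half else j2
    return h

h = splitter_hamiltonian(20, 0.1, 1.0, -0.5, 0.5)        # L = 41, J1/J2 = 0.1
evals, evecs = np.linalg.eigh(h)
gap_state = evecs[:, np.argmin(np.abs(evals - (-0.5)))]  # energy closest to V_a
print(np.round(gap_state**2, 3))  # weight concentrates on the two end sites
```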
### Choice of the modulating coupling strengths and analysis of the energy spectrum

The realization of the topological beam splitter is essentially based on the adiabatic evolution of the gap state, which requires the system to be driven slowly enough that the initial state always evolves along the gap state \(\left|\psi_{V_{a}}\right\rangle\) during the transfer process. The topological pumping based on the gap state is governed by the time-dependent Schrödinger equation

\[i\hbar\frac{\partial}{\partial t}\ket{\Psi(t)}=H(t)\ket{\Psi(t)}, \tag{12}\]

where \(\ket{\Psi(t)}\) can be expressed as

\[\ket{\Psi(t)}=\sum_{n}a_{n}(t)e^{-iE_{n}(t)t}\ket{\psi_{n}(t)}, \tag{13}\]

with \(\ket{\psi_{n}(t)}\) and \(E_{n}(t)\) obeying the instantaneous eigenequation \(H(t)\ket{\psi_{n}(t)}=E_{n}(t)\ket{\psi_{n}(t)}\), and \(a_{n}(t)\) denoting the probability amplitude on the \(n\)th instantaneous eigenstate. Substituting Eq. (13) into Eq. (12), we get

\[\frac{\partial}{\partial t}a_{n}(t)=\sum_{m\neq n}a_{m}(t)e^{it[E_{n}(t)-E_{m}(t)]}\frac{\bra{\psi_{n}(t)}\frac{\partial H(t)}{\partial t}\ket{\psi_{m}(t)}}{E_{m}(t)-E_{n}(t)}. \tag{14}\]

In order to approach the adiabatic limit, we require

\[\sum_{m\neq n}\frac{\left\langle\psi_{n}(t)\left|\frac{\partial H(t)}{\partial t}\right|\psi_{m}(t)\right\rangle}{\left|E_{m}(t)-E_{n}(t)\right|}\ll 1. \tag{15}\]

To satisfy the adiabatic condition, the instantaneous energy difference between the gap and bulk states should be large enough, and the derivative of the Hamiltonian, which is directly related to the slope of the driving function, should be sufficiently small. In order to enhance the speed and efficiency of the state transfer, we need to adjust the intracell and intercell coupling strengths appropriately so that the system is driven strongly where the energy gap is wide but mildly where the energy gap is narrow. Several protocols have been proposed to realize accelerated QST via topological edge channels based on the SSH model [59; 60; 61; 62]. However, these techniques for accelerated adiabatic edge pumping are mainly based on the standard SSH model, where there is no onsite energy on the lattice sites, and they focus on topologically protected state transfer from one single node to the other. For example, Palaiodimopoulos et al. [62] proposed exponential modulation of the nearest-neighbor couplings in an odd-sized SSH chain and achieved fast topological edge pumping. This approach, unlike shortcuts to adiabaticity [64], where elaborately orchestrated counter-adiabatic terms are introduced in the Hamiltonian to suppress unwanted excitations, only involves engineering of the driving function. In this paper we adopt exponential modulation of not only the nearest-neighbor coupling strengths but also the onsite energies:

\[J_{1}=J_{0}\frac{1-e^{-\alpha(t^{*}-t)/t^{*}}}{1-e^{-\alpha}}, \tag{16a}\]

\[J_{2}=J_{0}\frac{1-e^{-\alpha t/t^{*}}}{1-e^{-\alpha}}, \tag{16b}\]

\[V_{b}=-V_{a}=J_{0}\sqrt{\frac{J_{2}(2t)}{J_{0}}}, \tag{16c}\]

where \(t^{*}\) denotes the total evolution time and \(\alpha\) is a tunable exponential modulation parameter.

Figure 5: Waveforms of the coupling strengths and alternating onsite energy with (a) \(\alpha=2\), (c) \(\alpha=6\), and (e) \(\alpha=10\). (b), (d), and (f) Instantaneous energy spectrum as a function of time for the exponential parameters in (a), (c), and (e), respectively. The total evolution time is set to unity and the size of the chain to \(21\).
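A minimal Python rendering of the drive in Eqs. (16a)-(16c) is given below. Note that Eq. (16c) is printed with the time argument \(J_{2}(2t)\); the sketch simplifies this to \(J_{2}(t)\), so it should be read as an illustration of the drive shape under that assumption rather than an exact reproduction:

```python
import numpy as np

def drive(t, t_star, alpha, j0=1.0):
    # Eqs. (16a)-(16b): exponentially modulated couplings.
    j1 = j0 * (1 - np.exp(-alpha * (t_star - t) / t_star)) / (1 - np.exp(-alpha))
    j2 = j0 * (1 - np.exp(-alpha * t / t_star)) / (1 - np.exp(-alpha))
    # Eq. (16c): onsite energy tied to the instantaneous J2. The source prints
    # the argument as J2(2t); we use J2(t) here for a simple monotone ramp.
    vb = j0 * np.sqrt(j2 / j0)              # V_b = -V_a
    return j1, j2, vb

for t in np.linspace(0.0, 1.0, 5):
    print([round(float(x), 3) for x in drive(t, 1.0, 6.0)])
# j1 sweeps from j0 to 0 while j2 sweeps from 0 to j0, i.e. J1/J2: +inf -> 0.
```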
5(b), (d), and (f), we show how the instantaneous eigenspectrum evolves over time. Referring to the gap states (magenta line) in the eigenspectrum and the corresponding evolution of the onsite energy, we identify that the gap state \(|\psi_{V_{a}}\rangle\) varies along the topological channel in the symmetrical beam splitting process. This can be further verified by the distribution of the gap state during the evolution process as depicted in Figs. 6(a)-(c), in which the state transfers from the interface site to the two end sites with equal probabilities. The evolution of the energy gap between the gap state and the adjacent bulk state (green line) in Figs. 5(b), (d), and (f) is shown in Figs. 5(a), (c), and (e) (green-dotted lines), respectively. In the whole process of evolution, moments of larger (smaller) energy gap clearly correspond to larger (smaller) slopes of the driving function \(J_{2}\). Besides, as illustrated in Fig. 6(d), the minimum energy gap increases with the parameter \(\alpha\), which is positively related to the slope of the driving function. As a result, the exponential modulation of coupling strengths and alternate onsite energies is a qualified candidate to achieve fast and efficient topologically protected state transfer in the symmetrical topological beam splitter.

## III Fast QST with high robustness and scalability

### Fast QST in the topological beam splitter

The initial state of the system \[|\Psi_{i}\rangle = \left|\rho_{a,1}e^{i\phi_{a,1}},\rho_{b,1}e^{i\phi_{b,1}},\cdots, \rho_{a,N}e^{i\phi_{a,N}},\cdots,\right.\] \[\left.\rho_{a,2N+1}e^{i\phi_{a,2N+1}},\rho_{b,2N+1}e^{i\phi_{b,2N+ 1}}\right\rangle\] \[= \left|0,0,0,\cdots,0,1,0,\cdots,0,0,0\right\rangle\] is specified to set the interface site as the input port of the beam splitter. To measure how faithfully the transfer from the interface site to the two end sites has occurred, we introduce the fidelity defined as \(F=\left|\left\langle\Psi_{t}\mid\Psi\left(t^{\ast}\right)\right\rangle\right|^{2}\), where \[\left|\Psi_{t}\right\rangle =\left|\rho_{a,1}^{\prime}e^{i\phi_{a,1}^{\prime}},\rho_{b,1}^{ \prime}e^{i\phi_{b,1}^{\prime}},\cdots,\rho_{a,N}^{\prime}e^{i\phi_{a,N}^{ \prime}},\cdots,\right.\] \[\left.\rho_{a,2N+1}^{\prime}e^{i\phi_{a,2N+1}^{\prime}},\rho_{b,2 N+1}^{\prime}e^{i\phi_{b,2N+1}^{\prime}}\right\rangle\] \[=\frac{1}{\sqrt{2}}\left|1,0,0,\cdots,0,0,0,\cdots,0,0,1\right\rangle\] and \(\Psi\left(t^{\ast}\right)\) denote the target state and the evolved state at final time \(t^{\ast}\), respectively.

Figure 6: Distribution of the gap state [magenta lines in Figs. 5(b), (d), and (f)] with eigenenergy \(V_{a}\) with different values of (a) \(\alpha=2\), (b) \(\alpha=6\), and (c) \(\alpha=10\). (d) Minimum energy gap between the gap state and the nearest-neighbor bulk states versus values of \(\alpha\) with fixed chain size \(L=21\).

Figure 7: (a) Fidelity as a function of the final time of QST for the cosine and exponential protocols. (b)-(e) Distribution of the gap state during the evolution and the phase distribution of the evolved final state for the cosine protocol in (b) and (c), and for the exponential protocol in (d) and (e), respectively. Other parameters take \(L=21\) and \(\alpha=3.2\).

Consider the system with chain size \(L=21\). In order to see the speed of transfer, we plot in Fig. 7(a) the QST fidelity of the beam splitter versus the total transfer time to compare the commonly-used cosine protocol (for example, protocols in Refs.
[49, 50]) and the exponential protocol with exponential parameter \(\alpha=3.2\), where the parameters in the cosine protocol are \(J_{1}=\frac{J_{0}}{2}\left(1+\cos\frac{\pi t}{t^{\ast}}\right)\), \(J_{2}=\frac{J_{0}}{2}\left(1-\cos\frac{\pi t}{t^{\ast}}\right)\), and \(V_{b}=-V_{a}=J_{0}\sin\frac{\pi t}{t^{\ast}}\). For both protocols, as the total evolution time approaches infinity, the fidelity approaches unity, meaning that an excitation imposed initially at the interface site can be perfectly transferred along the chain to the two end sites with equal probabilities; the adiabatic approximation is then satisfied during the transfer process, so that the system state always evolves along the gap state \(\left|\psi_{V_{a}}\right\rangle\) without leaking to others. Here we suppose that the QST is successfully implemented if the fidelity is stabilized above \(0.99\). The implementation of the exponential protocol leads to a significantly accelerated QST process which is about \(10\) times faster than its cosine counterpart, since the fidelity is stabilized above \(0.99\) after \(t^{\ast}=100/J_{0}\) for the exponential protocol as compared to \(t^{\ast}=1080/J_{0}\) for the cosine protocol. The process of QST and the phase distribution of the evolved final state for the cosine and exponential protocols are illustrated in Figs. 7(b)-(e), indicating that both protocols can achieve symmetrical topological beam splitting with equal phase given sufficient transfer time, but the exponential protocol is obviously much faster.

### Effect of different values of \(\alpha\)

The realization of fast QST via the edge channel in the symmetrical topological beam splitter is exemplified above by setting a fine-tuned exponential parameter \(\alpha=3.2\) in a chain of size \(L=21\). We note in Sec. II.3 that for different \(\alpha\) in the exponential modulation, there are evident differences in the slopes of the coupling functions and the corresponding energy gaps between the gap state and its nearest-neighbor bulk state in the instantaneous spectrum, leading to different effects on the QST process. To give some quantitative results, in Fig. 8 we plot the fidelity as a function of the transfer time for the exponential protocol with different \(\alpha\). Taking a closer look at the fidelity curves of the exponential protocol, we notice the existence of mild oscillations, which indicates that resonant processes are at work in the QST process. When a smaller \(\alpha\) is chosen, the driving function is flattened and the minimum energy gap between the gap state and its nearest-neighbor bulk state narrows down, as analytically investigated in Sec. II.3; in this case the resonant processes are effectively suppressed, and a longer total transfer time is required for successful symmetrical beam splitting. Conversely, larger values of the exponential parameter \(\alpha\) lead to a steeper slope of the driving function and better separation between the gap state and its nearest-neighbor bulk state. As a consequence, the resonant processes are intensified and strong oscillations appear in the fidelity curve, so that a longer time is needed for the fidelity to stabilize at a sufficiently large value. Therefore, setting \(\alpha=3.2\) for a chain of size \(L=21\) is a trade-off: the system is driven strongly enough to achieve high-fidelity QST of the beam splitter in a relatively short time, yet mildly enough to avoid strong oscillations of the fidelity curve.
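The following sketch (an illustration we add here, not the authors' code; SciPy assumed, reusing `interface_ssh_hamiltonian` from the earlier sketch) integrates Eq. (12) with a simple stepwise propagator under the exponential protocol of Eqs. (16a)-(16c), reading Eq. (16c) literally as \(J_{2}\) evaluated at time \(2t\), and evaluates the beam-splitting fidelity.

```python
import numpy as np
from scipy.linalg import expm

J0 = 1.0

def couplings(t, T, alpha):
    """Exponential protocol of Eqs. (16a)-(16c)."""
    den = 1 - np.exp(-alpha)
    J1 = J0 * (1 - np.exp(-alpha * (T - t) / T)) / den
    J2 = J0 * (1 - np.exp(-alpha * t / T)) / den
    V = J0 * np.sqrt((1 - np.exp(-alpha * 2 * t / T)) / den)   # V_b = -V_a
    return J1, J2, V

def evolve(N, T, alpha, steps=2000, gamma=0.0):
    """Stepwise propagation of Eq. (12); gamma > 0 adds the uniform
    non-Hermitian loss of Eq. (20) discussed later in the text."""
    L = 2 * N + 1
    psi = np.zeros(L, dtype=complex)
    psi[N] = 1.0                         # excitation on the interface site
    dt = T / steps
    for s in range(steps):
        J1, J2, V = couplings((s + 0.5) * dt, T, alpha)
        H = interface_ssh_hamiltonian(N, J1, J2, -V, V).astype(complex)
        H -= 1j * gamma * np.eye(L)
        psi = expm(-1j * H * dt) @ psi
    return psi

def fidelity(N, psi):
    """F = |<Psi_t | Psi(t*)>|^2 against the equal-amplitude target state."""
    target = np.zeros(2 * N + 1)
    target[0] = target[-1] = 1 / np.sqrt(2)
    return abs(target @ psi) ** 2

print(fidelity(10, evolve(N=10, T=100 / J0, alpha=3.2)))  # ~0.99 per Fig. 7(a)
```

Sweeping \(T\) with this routine should reproduce the qualitative trend of Fig. 7(a); the target state is real and symmetric, so the modulus removes the unobservable global phase.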
Further, we investigate the fidelity of QST by varying \(\alpha\) and the total evolution time with \(L=21,\ 33,\ 45\), and \(57\), respectively, as illustrated in Figs. 9(a)-(d). The \(0.9\) and \(0.99\) fidelity contour lines manifest intense oscillations for larger \(\alpha\), yet similar to Fig. 8, for smaller \(\alpha\) the oscillations are suppressed substantially. We can always find the optimal exponential parameter so as to reach a balance between accelerating the symmetrical beam splitting process and avoiding excessive oscillations. For instance, the optimal exponential parameters can be set as \(\alpha=3.2,\ 4.5,\ 5.5\), and \(6.1\) for chain sizes \(L=21,\ 33,\ 45\), and \(57\), respectively. As shown in Figs. 9(e)-(f), by selecting different chain sizes and optimizing the best exponential parameters as well as the corresponding total evolution times for a final fidelity equal to \(0.99\) as numerical samples, \(\alpha_{\text{optimal}}\) versus \(L\) and \(t_{\text{optimal}}^{0.99}\) versus \(L\) can be fitted by the cubic functions \(\alpha_{\text{optimal}}=1.2\times 10^{-5}L^{3}-0.0026L^{2}+0.22L-0.33\) and \(J_{0}t_{\text{optimal}}^{0.99}=0.00052L^{3}+0.059L^{2}-0.34L+68\), respectively. The exponential parameter and the corresponding evolution time should be large enough to satisfy the adiabatic condition for longer chains.

Figure 8: Fidelity as a function of the transfer time for the exponential protocol with different values of \(\alpha\) with \(L=21\).

### Robustness against disorders and loss of the system

Due to the existence of manufacturing defects of system elements in practice, perfect modulation of the coupling strengths and onsite energies is almost unattainable. In this section, we examine the robustness of QST in the beam splitting process by introducing disorders both in the coupling strengths and in the onsite energies to discuss their effect on the performance of the beam splitter. The disorder in coupling strengths is generally addressed as off-diagonal disorder, while the disorder in onsite energies as diagonal disorder, depending on its effect on the matrix representation of the Hamiltonian. We first consider the case of symmetric distortion, in which the way each disorder realization is imposed on the system parameters can be assumed as \[J^{i}_{1(2),n} \to J^{i}_{1(2),n}\left(1+\delta J^{i}_{1(2)}\right),\] \[V^{i}_{1(2),n} \to V^{i}_{1(2),n}\left(1+\delta V^{i}_{1(2)}\right), \tag{17}\] where \(\delta J^{i}_{1(2)}\) and \(\delta V^{i}_{1(2)}\) remain constant during the QST process but, in each realization, acquire random real values sampled from the interval \([-\omega_{s},\ \omega_{s}]\), where \(\omega_{s}\) is termed the disorder strength. We plot the mean fidelity of the splitting QST versus total transfer time for both kinds of disorders with moderate strength \(\omega_{s}=0.4\) for the cosine and exponential protocols with \(L=21\) and \(\alpha=3.2\) in Figs. 10(a) and (b), respectively. Each point corresponds to the mean value of fidelity \(\bar{F}=\frac{1}{M}\sum_{i=1}^{M}F_{i}\) averaged over \(M=100\) disorder realizations for the sake of universality, while the error bars correspond to the standard deviation. What we can immediately notice is that both protocols are highly robust to the diagonal disorder, because the \(F\)-\(t^{*}\) curves almost coincide with the unperturbed curve. Besides, the transfer process is only mildly degraded by off-diagonal disorder for both protocols.
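A hedged sketch of the disorder average (again assuming the helpers defined above; the paper averages over \(M=100\) realizations, kept smaller here for speed): each realization draws one static rescaling factor per parameter, as in Eq. (17).

```python
import numpy as np
from scipy.linalg import expm

def evolve_offdiag(N, T, alpha, dJ1, dJ2, steps=2000):
    """evolve() with symmetric off-diagonal disorder, Eq. (17): all J1 bonds
    rescaled by (1 + dJ1) and all J2 bonds by (1 + dJ2), fixed during QST."""
    L = 2 * N + 1
    psi = np.zeros(L, dtype=complex)
    psi[N] = 1.0
    dt = T / steps
    for s in range(steps):
        J1, J2, V = couplings((s + 0.5) * dt, T, alpha)
        H = interface_ssh_hamiltonian(N, J1 * (1 + dJ1), J2 * (1 + dJ2), -V, V)
        psi = expm(-1j * H * dt) @ psi
    return psi

def mean_fidelity(N, T, alpha, ws=0.4, M=20, seed=0):
    """Mean and standard deviation of the fidelity over M disorder samples."""
    rng = np.random.default_rng(seed)
    F = [fidelity(N, evolve_offdiag(N, T, alpha, *rng.uniform(-ws, ws, 2)))
         for _ in range(M)]
    return np.mean(F), np.std(F)

print(mean_fidelity(10, 100 / J0, 3.2))
```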
The main impact of off-diagonal disorder is instead that a longer total transfer time \(t^{*}\) is required to achieve high-fidelity QST of the beam splitter for both protocols. In Figs. 10(c) and (d), for different types of disorder for the cosine and exponential protocols with \(\alpha=3.2\) and \(L=21\), we plot the average fidelity (\(M=100\)) as a function of disorder strength. For the sake of comparison, we set the total transfer time \(t^{*}=1100/J_{0}\) so that the QST of the beam splitter can be implemented via both protocols when the disorder strength equals zero. Numerical results reveal strong robustness against diagonal disorders for both protocols, because the average fidelities in Fig. 10(c) always approach unity. However, we find that the exponential protocol manifests stronger robustness than the cosine protocol against off-diagonal disorder, as shown in Fig. 10(d), where the average fidelity of the former decreases significantly only when \(\omega_{s}=0.7\), while the average fidelity of the cosine protocol starts to decline significantly already at \(\omega_{s}=0.4\). To sum up, the exponential protocol apparently outperforms the cosine protocol in terms of robustness to off-diagonal disorder, while both protocols are quite robust to diagonal disorder.

Figure 9: Fidelity of QST versus varying \(\alpha\) and \(t^{*}\) for the exponential protocol with (a) \(L=21\), (b) \(L=33\), (c) \(L=45\), and (d) \(L=57\). The green and red solid lines represent 0.9 and 0.99 fidelity contour lines, respectively. (e) Optimal exponential parameters and (f) the corresponding total evolution time needed for 0.99-fidelity as a function of the size of the chain. The scattering dots and the lines represent the numerical and cubic polynomial fitting results, respectively.

Figure 10: Impact of diagonal and off-diagonal disorders with \(\omega_{s}=0.4\) on the fidelity for the (a) cosine and (b) exponential protocols. Each point corresponds to the mean value of fidelity averaged over 100 disorder realizations. Average fidelity as a function of the disorder strength for (c) diagonal and (d) off-diagonal disorders for different protocols. Other parameters take \(L=21\) and \(\alpha=3.2\).

The diagonal and off-diagonal effects as depicted in Eq. (17) are mirror-symmetric with respect to the topological interface. In the following, we try to reveal the effects of asymmetric disorders. For asymmetric distortions of the coupling strengths and onsite energies, their effects are described by \[\begin{cases}J_{1,n}^{i}\to J_{1,n}^{i},J_{2,n}^{i}\to J_{2,n}^{i}\left(1+ \delta J_{1}^{i}\right),&n=1,\cdots,\frac{N}{2},\\ J_{1,n}^{i}\to J_{1,n}^{i},J_{2,n}^{i}\to J_{2,n}^{i}\left(1+\delta J_{2}^{i} \right),&n=\frac{N}{2}+1,\cdots,N,\end{cases} \tag{18}\] \[\begin{cases}V_{1,n}^{i}\to V_{1,n}^{i},V_{2,n}^{i}\to V_{2,n}^{i}\left(1+ \delta V_{1}^{i}\right),&n=1,\cdots,\frac{N}{2},\\ V_{1,n}^{i}\to V_{1,n}^{i},V_{2,n}^{i}\to V_{2,n}^{i}\left(1+\delta V_{2}^{i} \right),&n=\frac{N}{2}+1,\cdots,N,\end{cases} \tag{19}\] where \(\delta J_{1(2)}^{i}\) and \(\delta V_{1(2)}^{i}\) acquire random real values in the interval \([-\omega_{s},\ \omega_{s}]\). Taking the system of chain size \(L=21\) as an example, we plot in Fig. 11 the mean fidelity of the topological beam splitter and the phase difference of the evolved final state at the two end sites, averaged over \(M=100\) samples, versus disorder strength \(\omega_{s}\) for the cosine and exponential protocols.
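For the asymmetric case of Eq. (18), only the intercell bonds are rescaled, with independent factors on the two halves; a sketch (same assumptions and helpers as above) that also reads off the phase difference at the two output ports:

```python
import numpy as np
from scipy.linalg import expm

def evolve_asym(N, T, alpha, d1, d2, steps=2000):
    """Asymmetric off-diagonal disorder, Eq. (18): J2 bonds on the left half
    are rescaled by (1 + d1) and those on the right half by (1 + d2)."""
    L = 2 * N + 1
    psi = np.zeros(L, dtype=complex)
    psi[N] = 1.0
    dt = T / steps
    for s in range(steps):
        J1, J2, V = couplings((s + 0.5) * dt, T, alpha)
        H = interface_ssh_hamiltonian(N, J1, J2, -V, V)
        for i in range(L - 1):          # J2 bonds: odd i on the left half,
            if i < N and i % 2 == 1:    # even i on the mirrored right half
                H[i, i + 1] = H[i + 1, i] = J2 * (1 + d1)
            elif i >= N and i % 2 == 0:
                H[i, i + 1] = H[i + 1, i] = J2 * (1 + d2)
        psi = expm(-1j * H * dt) @ psi
    return psi

psi = evolve_asym(10, 100 / J0, 3.2, d1=0.3, d2=-0.3)
print("port phase difference:", np.angle(psi[0]) - np.angle(psi[-1]))
```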
We find that for both protocols, when asymmetric diagonal disorders are imposed on the system, it can still function as a robust symmetrical beam splitter with equal phase. Under asymmetric off-diagonal disorders, however, the evolved final state at the two ends may not have the same phase and amplitude, and the deviations are amplified as \(\omega_{s}\) increases. On the other hand, for high-efficiency QST in quantum networks, the systemic loss is an important source of infidelity. The effect of losses during the QST of the beam splitter can be considered by changing the Hamiltonian to a non-Hermitian form \[H^{\prime}=H-i\sum_{n}\left[\gamma_{n}^{a}a_{n}^{\dagger}a_{n}+\gamma_{n}^{b} b_{n}^{\dagger}b_{n}\right], \tag{20}\] where \(H\) is the lossless Hamiltonian, and \(\gamma_{n}^{a,b}\) denotes the loss rate of each type of site. We first consider the impact of symmetrical loss. For convenience, we assume \(\gamma_{n}^{a}=\gamma_{n}^{b}=\gamma\). The dynamics of the system is governed by the non-Hermitian Liouville equation \(\dot{\rho}=-i\left(H^{\prime}\rho-\rho H^{\prime\dagger}\right)\). We plot in Fig. 12(a) the effects of symmetrical loss on the final fidelity of QST for a chain of size \(L=21\) for the cosine and exponential protocols with \(\alpha=3.2\) and transfer time fixed at \(t^{*}=1080/J_{0}\) and \(t^{*}=100/J_{0}\), respectively, so that the QST can be successfully implemented via both protocols when no loss exists. Evidently, the exponential protocol manifests a notable improvement of the final fidelity compared to the cosine protocol. The weaker damage of the loss effect to the QST process can be attributed to the shorter time over which decoherence accumulates during the QST in the exponential protocol. We also demonstrate in Fig. 12(b) the effects of asymmetrical loss on the final fidelity and phase difference of the evolved final state at the two end sites. Here, we assume \[\gamma_{n}^{a} =\gamma,\] \[\gamma_{n}^{b} =\gamma(1+\delta\gamma_{1}),\quad n=1,\cdots,\frac{N}{2},\] \[\gamma_{n}^{b} =\gamma(1+\delta\gamma_{2}),\quad n=\frac{N}{2}+1,\cdots,N,\] where \(\delta\gamma_{1(2)}\) acquire random real values in the interval \([-0.1,\ 0.1]\). Apparently, the numerical results for the effect of asymmetrical loss differ little from the symmetrical counterpart.

Figure 11: Average fidelity and phase difference of the evolved final state at two-end sites versus disorder strength for asymmetric (a) diagonal and (b) off-diagonal disorders for different protocols. Other parameters take \(L=21\) and \(\alpha=3.2\).

Figure 12: (a) Final fidelity as a function of the loss rate \(\gamma\) for the cosine protocol with \(t^{*}=1080/J_{0}\) and the exponential protocol with \(t^{*}=100/J_{0}\) and \(\alpha=3.2\) under symmetrical losses. (b) Final fidelity and phase difference of the evolved final state at two-end sites versus loss rate \(\gamma\) for both protocols under asymmetrical losses.

### Scalability

#### iv.2.1 Size of chains

As noted above, the exponential protocol is a compelling alternative for fast and robust QST in the symmetrical topological beam splitter, exhibiting a remarkable improvement in the speed of beam splitting and in the robustness against both types of disorders compared to the commonly-used cosine protocol. To verify more extensively the effect of exponential coupling strengths and onsite energies, we now turn to another crucial aspect determining the efficiency of QST: its scalability.
In the following, we focus on how the exponential protocol behaves when the system size is altered. We show the phase diagram of the QST in the parameter space \((t^{*},L)\) of the cosine and exponential protocols in Figs. 13(a) and (b), respectively, with the exponential parameter fixed at \(\alpha=3.2\). The yellow (purple) areas indicate that the QST of the beam splitter can be implemented with fidelity over (below) \(0.99\). Evidently, the parameter space can be divided into three regions according to different phase boundaries, as demonstrated in Fig. 13(c). In region I, the symmetrical beam splitter can be faithfully realized via both protocols. In region II, beam splitting is faithfully implemented via only the exponential protocol, while in region III neither protocol works well. Consequently, we can choose feasible modulation protocols according to the parameter design of the system. We plot in Fig. 13(d) the total transfer time \(t_{0.99}^{*}\) each protocol takes to achieve beam splitting as a function of the size of the system, where \(t_{\rm{cos}}^{0.99}\) versus \(L\) for the cosine protocol can be fitted by the cubic function \(J_{0}t_{\rm{cos}}^{0.99}=0.1L^{3}-0.46L^{2}+28L-260\). Obviously, the symmetrical topological beam splitter modulated by both protocols needs a longer total evolution time as the chain size \(L\) grows. Nevertheless, it is evident that the exponential protocol outperforms the cosine protocol in terms of transfer speed and manifests good scalability within the range of lengths considered here. In the following, we take into consideration the inevitable loss of the system. As shown in Fig. 14, we plot for both protocols the final fidelity as a function of the chain size with fixed loss parameter \(\gamma=2.5\times 10^{-5}J_{0}\). The parameter selection is made in accordance with current technological capabilities in superconducting circuit devices, where the initial (end) coupling strengths between resonators and the decoherence rate of a photon in a superconducting resonator are set to be \(J_{0}/2\pi=100\) MHz and \(\gamma/2\pi=2.5\) kHz, respectively [65; 66; 67]. The exponential parameters are chosen as \(\alpha=1.2\times 10^{-5}L^{3}-0.0026L^{2}+0.22L-0.33\) for chains of different sizes, and the total evolution times are set to be \(J_{0}t=0.1L^{3}-0.46L^{2}+28L-260\) for the cosine protocol and \(J_{0}t=0.00052L^{3}+0.059L^{2}-0.34L+68\) for the exponential protocol, which ensures the 0.99-fidelity QST for both protocols in the absence of losses.

Figure 13: Phase diagram of the QST in the parameter space \((t^{*},L)\) of the (a) cosine and (b) exponential protocols. (c) The total phase diagram derived from (a) and (b). (d) The transfer time \(t_{0.99}^{*}\) that each protocol takes to reach \(0.99\) fidelity as a function of the size of the system. \(\alpha=3.2\) is used here.

Figure 14: Fidelity as a function of the chain size with fixed loss parameter \(\gamma=2.5\times 10^{-5}J_{0}\) for both protocols. \(\alpha=3.2\) is used here.

The results indicate that the exponential protocol not only exhibits stronger robustness against environment-induced decoherence, but also manifests good scalability, whereas the fidelity of the cosine protocol plummets with increasing size.
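The fitted formulas and the loss model of Eq. (20) combine into a short scan of the setting of Fig. 14 (a sketch under the same assumptions as the earlier code; `evolve` and `fidelity` are the helpers defined above):

```python
def optimal_params(L):
    """Cubic fits quoted in the text for the exponential protocol."""
    alpha = 1.2e-5 * L**3 - 0.0026 * L**2 + 0.22 * L - 0.33
    J0t = 0.00052 * L**3 + 0.059 * L**2 - 0.34 * L + 68
    return alpha, J0t

gamma = 2.5e-5 * J0                      # loss rate used in Fig. 14
for L in (21, 33, 45, 57):
    alpha, J0t = optimal_params(L)
    N = (L - 1) // 2
    F = fidelity(N, evolve(N, J0t / J0, alpha, gamma=gamma))
    print(f"L={L}: alpha={alpha:.2f}, J0*t={J0t:.0f}, F={F:.3f}")
```

Since the loss here is uniform, it commutes with the lossless dynamics and simply multiplies the fidelity by \(e^{-2\gamma t^{*}}\); the exponential protocol wins mainly because its \(t^{*}\) grows much more slowly with \(L\).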
#### iv.2.2 Number of chains The symmetrical topological beam splitter based on an odd-sized SSH model with alternating onsite energies and a topological interface can be regarded structurally as a system composed of two even-sized SSH chains connected to a mutual additional \(a\)-type site. Another vital direction for scalability is how the exponential protocol behaves when the number of constituent chains in the crossed-chain structure is altered. To this end, in Fig. 15 without loss of generality we consider a crossed-chain structure comprised of \(K=4\) even-sized SSH chains connected to a mutual additional \(a\)-type site, which will be taken as a typical example and analyzed extensively in the following. The interaction of the crossed-chain structure formed by \(L=4N+1\) sites can be described by the following interaction-picture Hamiltonian \[H^{\prime} = \sum_{\sigma}\sum_{n}\left[V_{a}a_{n}^{\sigma\dagger}a_{n}^{ \sigma}+V_{b}b_{n}^{\sigma\dagger}b_{n}^{\sigma}\right]+\sum_{\sigma}\sum_{n= 1}^{N/2}\left[J_{1}a_{n}^{\sigma\dagger}b_{n}^{\sigma}\right. \tag{21}\] \[\left.+J_{2}a_{n+1}^{\sigma\dagger}b_{n}^{\sigma}+\text{ H.c. }\right],\] where \(a_{N/2+1}^{\sigma}=a_{N/2+1}=a_{0}\), \(a_{n}^{\sigma}\) and \(b_{n}^{\sigma}\) are the amplitudes at the \(n\)th \(a\)- and \(b\)-type sites in a single SSH chain indexed by \(\sigma\). If we regard the connecting site as the input port and the \(K\) end sites as \(K\) output ports, the crossed-chain structure is equivalent to a _topological router_, in which a particle injected into the connecting site can be transferred to \(K\) end sites with equal probabilities. To verify the fast QST for the exponential protocol in topological router with four output ports, we plot the fidelity versus the total transfer time for the cosine and exponential protocols with \(\alpha=3.2\) in Fig. 16(a). The QST process for the exponential protocol is still about 10 times faster than its cosine counterpart, since fidelity is stabilized above 0.99 after \(t^{*}=91/J_{0}\) for the exponential protocol as compared to \(t^{*}=935/J_{0}\) for the cosine protocol. The process of QST and the amplitude distribution of the evolved final state under the basis of \[C = \left(a_{1}^{1},b_{1}^{1},\ldots,a_{N/2}^{1},b_{N/2}^{1},a_{1}^{2 },\ldots,b_{N/2}^{2},a_{1}^{3},\ldots,b_{N/2}^{3},a_{1}^{4},\right.\] \[\left.\ldots,b_{N/2}^{4},a_{N/2+1}\right)\] for the exponential and cosine protocols are illustrated in Figs. 16(b)-(e), implying that both protocols can achieve successful topological routing under sufficient transfer time, but the former protocol is obviously faster. We plot in Fig. 17(a) the transfer time \(t_{0.99}^{*}\) as a function of the size \(N\) of each constituent chain. Total transfer time increases with the augmentation of the size of each constituent chain for the four-chain structure, which is consistent with the results in the two-chain beam splitter. The transfer time \(t_{0.99}^{*}\) as a function of the number of constituent chains is illustrated in Fig. 17(b), where exponential parameter is set as \(\alpha=3.2\) and the size of each constituent chain is chosen to be \(N=10\). We can see that in general, total transfer time increases with the augmentation of the number of constituent chains connected in the crossed-chain structure, with mild fluctuations which can be attributed to inevitable oscillation in the \(F\)-\(t^{*}\) curves. 
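The crossed-chain Hamiltonian of Eq. (21) can be assembled in the same spirit (a sketch we add, NumPy assumed); the hub plays the role of the interface site, and for \(J_{1}/J_{2}<1\) the gap state at \(E=V_{a}\) spreads over the \(K\) end sites.

```python
import numpy as np

def router_hamiltonian(K, N, J1, J2, Va, Vb):
    """Crossed-chain structure of Eq. (21): K even-sized SSH chains of N
    sites each, sharing one a-type hub (last index); L = K*N + 1 sites."""
    L = K * N + 1
    H = np.zeros((L, L))
    hub = L - 1
    H[hub, hub] = Va
    for sigma in range(K):
        base = sigma * N            # sites a_1, b_1, ..., a_{N/2}, b_{N/2}
        for i in range(N):
            H[base + i, base + i] = Va if i % 2 == 0 else Vb
        for i in range(N - 1):      # intra-chain bonds J1, J2, J1, ...
            J = J1 if i % 2 == 0 else J2
            H[base + i, base + i + 1] = H[base + i + 1, base + i] = J
        # the last b-site of each chain couples to the shared hub via J2
        H[base + N - 1, hub] = H[hub, base + N - 1] = J2
    return H

H = router_hamiltonian(K=4, N=10, J1=0.2, J2=1.0, Va=-0.5, Vb=0.5)
evals, evecs = np.linalg.eigh(H)
k = np.argmin(np.abs(evals - (-0.5)))
print(evals[k])                     # gap state pinned at E = Va
print(evecs[::10, k] ** 2)          # sites 0,10,20,30 (chain ends) and 40 (hub)
```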
Therefore, by modulating the number of crossed linked chains, the number of output ports can be adjusted conveniently. Such good flexibility and scalability give the topological beam splitter and the topological router broad application prospects in quantum information distribution and large-scale quantum information network construction.

Figure 15: Schematic illustration of the crossed-chain structure comprised of four even-sized SSH chains connected to a mutual additional \(a\)-type site.

Figure 16: (a) Final fidelity of the four-output router as a function of the transfer time for the cosine and exponential protocols. (b)-(e) Distribution of the gap state with energy eigenvalue \(V_{a}\) during the evolution and amplitude distribution of the evolved final state for the exponential protocol in (b) and (c), and the cosine protocol in (d) and (e), respectively. Other parameters take \(L=4N+1=41\) and \(\alpha=3.2\).

## IV Experimental consideration and anticipated improvement

### Experimental consideration for superconducting circuit devices

This protocol for realizing fast and robust QST in a symmetrical topological beam splitter is applicable to superconducting circuit devices, benefiting from existing circuit-QED technologies. We can construct a superconducting resonator chain in which the resonators \(A_{n}\) and \(B_{n}\) are arranged alternately in one-dimensional space; the equivalent circuit of one unit cell is shown in Fig. 18. The resonator \(A_{n}\) (\(B_{n}\)) is composed of a spiral inductor \(L_{a}\) (\(L_{b}\)) and a capacitor \(C_{a}\) (\(C_{b}\)), in analogy with a harmonic oscillator with a single mode. In terms of the capacitor charge \(Q_{a}\) (\(Q_{b}\)) and the inductor flux \(\Phi_{a}\) (\(\Phi_{b}\)), the Hamiltonian of the oscillator is written as \[\hat{H}_{LC}=\frac{Q_{j}^{2}}{2C_{j}}+\frac{\Phi_{j}^{2}}{2L_{j}}, \tag{22}\] where \(\Phi_{j}\) is the flux through the inductor \(L_{j}\), and \(Q_{j}\) is the charge on the capacitor \(C_{j}\) (\(j=a,b\)). Based on the standard quantization process of an \(LC\) circuit [70], the Hamiltonian of the resonator \(A_{n}\) (\(B_{n}\)) can be further written as \(H_{\rm LC}=\hbar V_{j}j^{\dagger}j\) in terms of the creation and annihilation operators defined by \(j^{\dagger}=1/\sqrt{2\hbar V_{j}}(Q_{j}/\sqrt{C_{j}}-i\Phi_{j}/\sqrt{L_{j}})\) and \(j=1/\sqrt{2\hbar V_{j}}(Q_{j}/\sqrt{C_{j}}+i\Phi_{j}/\sqrt{L_{j}})\), where \(V_{j}=1/\sqrt{L_{j}C_{j}}\) is the oscillator frequency. Thus, the onsite energies of the two types of sites \(A_{n}\) (\(B_{n}\)) can be engineered over a large range of possible values by adjusting the parameters \(L_{j}\) and \(C_{j}\). In experiment, the onsite energies of the resonator can be selectively controlled by a DC bias voltage supply connected with the variable capacitor via a low-pass filter. The dependence of its resonant frequency on the DC bias is observed with no hysteresis, which is of great value for tunability [71]. Furthermore, a direct tunable coupler is realized by a tunable circuit element between the resonators, e.g., a flux-biased direct-current superconducting quantum interference device (SQUID), to generate strong resonant and nonresonant tunable interactions between any two lumped-element resonators. In this work, we adopt a direct tunable coupler realized by a SQUID between resonators \(A_{n}\) and \(B_{n}\), which contains an additional Josephson junction \(E_{J}\) [72; 73].
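As a quick sanity check on the scale of the resonator frequency \(V_{j}=1/\sqrt{L_{j}C_{j}}\) (a small sketch with illustrative element values we choose here, not taken from the text):

```python
import numpy as np

def lc_frequency(Lj, Cj):
    """Bare resonator (angular) frequency V_j = 1/sqrt(L_j C_j), Eq. (22)."""
    return 1.0 / np.sqrt(Lj * Cj)

# hypothetical element values: a 1 nH spiral inductor and a 0.25 pF capacitor
print(lc_frequency(1e-9, 0.25e-12) / (2 * np.pi) / 1e9, "GHz")  # ~10 GHz
```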
The flux \(\phi\) threading the SQUID loop gives rise to a circulating current \(I_{s}(\phi)=-I_{c}\sin(2\pi\phi/\phi_{0})\), where \(\phi_{0}\) is the flux quantum. Here, \(\phi\) is the sum of the externally applied flux \(\phi_{\text{ext}}\) and the flux generated by \(I_{s}\), which can be represented by \(\phi=\phi_{\text{ext}}+L_{s}I_{s}(\phi)\). Thus, the flux-dependent coupling between resonators is written as \(J_{1,2}=-\sqrt{\frac{V_{a}}{L_{a}}}\sqrt{\frac{V_{b}}{L_{b}}}\frac{L_{0}^{2}}{L(\phi)}\), where \(L_{0}\) is the inductance of the segment shared between the resonator and the SQUID, and \(L(\phi)=\partial\phi_{\text{ext}}/\partial I_{s}\) is the effective SQUID inductance with respect to the external flux [74]. Therefore, the simplest way to tune the coupling strength \(J_{1,2}\) is to apply a control magnetic flux to this loop dynamically with \(\phi_{\text{ext}}(\vec{x})=\int_{s}\mathbf{B}(\vec{x},t)\cdot d\mathbf{S}\), by adding an external flux-bias line (FBL) connected to an arbitrary waveform generator (AWG) that applies controlled voltage pulses [68; 69], as shown in Fig. 18.

Figure 17: Using the exponential protocol with \(\alpha=3.2\) for the four-output router, transfer time \(t_{0.99}^{*}\) as a function of (a) the size of each constituent chain with \(K=4\) and (b) the number of constituent chains with \(N=10\).

Figure 18: Equivalent circuit of the coupled superconducting resonator system. Circuit elements are used to model the microwave resonator \(A_{n}\) (\(B_{n}\)) and the coupler with the additional Josephson junction in a dilution refrigerator (with a temperature \(T\sim\) mK), which is placed in a magnetic shield. The microwave resonator \(A_{n}\) (\(B_{n}\)) is an \(LC\) circuit composed of a spiral inductor \(L_{a}\) (\(L_{b}\)) and a capacitor \(C_{a}\) (\(C_{b}\)). The external classical field can be attained independently by changing the magnetic flux \(\phi\) threading the loop of the coupler, which can be connected via the FBL to an AWG applying controlled voltage pulses [68; 69].

Thus, superconducting circuits, possessing the advantages of flexibility, scalability and tunability [75; 76; 77], provide an excellent platform for realizing fast and robust QST in a symmetrical topological beam splitter with high fidelity.

### Possibility of further accelerating the QST process in the symmetrical beam splitter

As noted above, we have realized fast QST in the symmetrical beam splitter through exponential modulation of the nearest-neighbor coupling strengths and the onsite energies. The scheme accelerates the beam splitting process through subtle control of the driving functions according to the instantaneous eigenspectrum and is still limited by the adiabatic requirements. To further accelerate the QST process, we can consider incorporating moderately nonadiabatic resonant processes between eigenstates into the adiabatic process. For example, a fast topological edge pumping protocol, in which a quantum state is transferred rapidly from the left edge to the right, was recently presented for an SSH chain [61].
The intracell and intercell nearest-neighbor coupling strengths are governed by the following 3-step modulation functions \[J_{1}=\begin{cases}J_{1}(0),&t\leq t^{*}-t_{op}\\ \frac{(J_{1}(0)-J_{2}(0))t^{*}}{t_{op}}\left(1-\frac{t}{t^{*}}\right),&t>t^{*}- t_{op}\end{cases} \tag{23a}\] \[J_{2}=\begin{cases}J_{2}(0)+\frac{(J_{1}(0)-J_{2}(0))t}{t_{op}},&t\leq t_{op} \\ J_{1}(0),&t>t_{op}\end{cases} \tag{23b}\] where \(J_{1}(0)\) and \(J_{2}(0)\) denote the intracell and intercell coupling coefficients at the initial moment, \(t^{*}\) is the total evolution time, and \(t_{op}\) is the time interval for the coupling strengths to increase (decrease) from the initial to the terminal value (and vice versa). \(J_{1}\) and \(J_{2}\) are of mirror symmetry, so \(t_{op}\) is the only free parameter. It is possible to find the best \(t_{op}\) value through parameter optimization so as to produce the fastest topological edge pumping. Different from the case of the commonly-used trigonometric protocol, in which the eigenenergy of the edge mode for an odd-sized SSH model remains constant during the transfer process, for the 3-step protocol the instantaneous eigenenergy is bent and its mean value is significantly increased. Therefore, the transfer timescale, which may be considered inversely proportional to the time average of the eigenenergy of the edge state, effectively decreases. The key to this scheme is incorporating the nonadiabatic resonance transition, whose dynamical evolution is closely related to the pulse area. Analogously, such a 3-step modulation protocol has great potential to further speed up the QST in the symmetrical beam splitter proposed here.

## V Conclusion

To sum up, we have proposed a protocol of fast and robust topological pumping via an edge channel through exponential modulation of the driving functions for generating a symmetrical topological beam splitter and further for deriving a topological router. We show both analytically and numerically that by continuously modulating the intracell and intercell coupling strengths and onsite energies, we can achieve topologically protected quantum state transfer (QST) from the interface site towards the end sites with equal probabilities. Based on numerical analysis of the instantaneous energy spectrum of the system, we confirm that the value of the instantaneous energy gap suitably adapts to the slope of the driving functions, and then present numerical evidence of accelerated adiabatic edge pumping in the symmetrical beam splitter. Furthermore, we investigate how the selection of the exponential parameter impacts the QST process. The robustness of the topological beam splitter is extensively discussed by taking into consideration the impact of diagonal and off-diagonal disorders and systematic losses. In addition, we demonstrate the scalability in the size and number of chains for the symmetrical beam splitter and the topological router, respectively. Last but not least, we propose superconducting circuit devices as a feasible platform to implement fast and robust QST in the symmetrical beam splitter discussed in this article. The scheme provides a detailed proposal for a topological beam splitter and a topological router assisted by fast and robust topological edge pumping, which is expected to make a substantial contribution to efficient quantum information processing and the construction of large-scale quantum networks.

## Acknowledgements

The authors acknowledge the financial support by the National Natural Science Foundation of China (Grant No.
62075048) and the Natural Science Foundation of Shandong Province of China (Grant No. ZR2020MF129).

## Appendix A Hybridized Edge States for the Even-sized SSH Model

For the even-sized SSH model, the matrix representation of the Hamiltonian on a real-space basis reads \[H_{M}=\left(\begin{array}{cccccccc}0&J_{1}&0&0&\cdots&0&0&0\\ J_{1}&0&J_{2}&0&\cdots&0&0&0\\ 0&J_{2}&0&J_{1}&\cdots&0&0&0\\ \vdots&\vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\vdots\\ 0&0&0&0&\cdots&J_{2}&0&J_{1}\\ 0&0&0&0&\cdots&0&J_{1}&0\end{array}\right). \tag{30}\] In the following we prove that in the thermodynamic limit \(N\rightarrow\infty\), the topologically nontrivial phase hosts two edge states localized on the boundaries of the chain. We first consider a semi-infinite SSH lattice with a left boundary. In the topologically nontrivial phase, we assume that the SSH lattice exhibits a zero-energy eigenstate \[|L\rangle=|\psi_{a_{1}},\psi_{b_{1}},\psi_{a_{2}},\psi_{b_{2}},\cdots,\psi_{a_ {n}},\psi_{b_{n}},\cdots\rangle, \tag{31}\] where \(\psi_{a_{n}}\) (\(\psi_{b_{n}}\)) are the amplitudes on lattice site \(a_{n}\) (\(b_{n}\)). Solving the eigenvalue equation \(H_{M}|L\rangle=0\) (with \(H_{M}\) under the semi-infinite boundary condition), we get \[J_{1}\psi_{b_{1}}=0, \tag{32a}\] \[J_{1}\psi_{a_{n}}+J_{2}\psi_{a_{n+1}}=0,\quad(n=1,2,\cdots),\] (32b) \[J_{2}\psi_{b_{n}}+J_{1}\psi_{b_{n+1}}=0,\quad(n=1,2,\cdots). \tag{32c}\] The zero-energy edge state is derived as \[|L\rangle=|\psi_{a_{1}},0,\xi\psi_{a_{1}},0,\cdots,\xi^{n-1}\psi_{a_{1}},0, \cdots\rangle, \tag{33}\] where \(\xi=-J_{1}/J_{2}\) denotes the localization factor. Obviously, the edge state is exponentially localized on the left side of the lattice in the topologically nontrivial phase and only occupies the \(a\)-type sites. Similarly, for a semi-infinite SSH lattice with a right boundary, the zero-energy edge state can be derived as \[|R\rangle=|\cdots,0,\xi^{N-n}\psi_{b_{N}},0,\cdots,0,\xi\psi_{b_{N}},0,\psi_{b _{N}}\rangle, \tag{34}\] which is exponentially localized on the right side of the lattice in the topologically nontrivial phase and only occupies the \(b\)-type sites. For an even-sized SSH lattice composed of finitely many sites, we can get its eigenvalues and corresponding eigenstates by diagonalizing the real-space Hamiltonian in the basis of \(|L\rangle\) and \(|R\rangle\), \[H_{M}^{\prime}=\left(\begin{array}{cc}O_{L,L}&O_{L,R}\\ O_{R,L}&O_{R,R}\end{array}\right), \tag{35}\] where \(O_{L,L}=\langle L\mid H_{M}\mid L\rangle=0\), \(O_{L,R}=\langle L\mid H_{M}\mid R\rangle=\frac{-J_{2}\xi^{N}(\xi^{2}-1)}{\xi^{ 2N}-1}\), \(O_{R,L}=\langle R\mid H_{M}\mid L\rangle=O_{L,R}^{*}\), and \(O_{R,R}=\langle R\mid H_{M}\mid R\rangle=0\). The eigenvalues and corresponding eigenstates are \[E_{0,\pm}=\pm\left|O_{L,R}\right|, \tag{36a}\] \[\left|\Psi_{0,\pm}\right\rangle=(\left|L\right\rangle\pm\left|R\right\rangle)/ \sqrt{2}. \tag{36b}\] Obviously, for an even-sized SSH lattice composed of finitely many sites, the energies of the hybridized edge states in the topologically nontrivial phase do not equal zero, but form a pair of numbers opposite to each other due to chiral symmetry (\(E_{0,\pm}\to 0\) in the thermodynamic limit \(N\rightarrow\infty\)). The wavefunctions of the almost-zero-energy eigenstates are odd and even superpositions of states localized exponentially on the left and right edges.
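The hybridization formula can be checked directly against exact diagonalization (a sketch we append; note that \(|L\rangle\) and \(|R\rangle\) are only approximate zero modes of the finite chain, so the agreement is up to exponentially small corrections):

```python
import numpy as np

def ssh_even(N, J1, J2):
    """Even-sized SSH chain of Eq. (30): 2N sites, bonds J1, J2, J1, ..."""
    H = np.zeros((2 * N, 2 * N))
    for i in range(2 * N - 1):
        H[i, i + 1] = H[i + 1, i] = J1 if i % 2 == 0 else J2
    return H

N, J1, J2 = 10, 0.6, 1.0
xi = -J1 / J2
E_analytic = abs(-J2 * xi**N * (xi**2 - 1) / (xi**(2 * N) - 1))  # |O_{L,R}|
E_numeric = np.min(np.abs(np.linalg.eigvalsh(ssh_even(N, J1, J2))))
print(E_analytic, E_numeric)     # nearly equal in the nontrivial phase
```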
2308.14259
Partially Constrained GRAND of Linear Block Codes
This paper is concerned with a search-number-reduced guessing random additive noise decoding (GRAND) algorithm for linear block codes, called partially constrained GRAND (PC-GRAND). In contrast to the original GRAND, which guesses error patterns without constraints, the PC-GRAND guesses only those error patterns satisfying partial constraints of the codes. In particular, the PC-GRAND takes partial rows of the parity-check matrix as constraints for generating candidate error patterns and the remaining rows as checks for validating the candidates. The number of searches can be reduced when the serial list Viterbi algorithm (SLVA) is implemented for searching over a trellis specified by the partial parity-check matrix. This is confirmed by numerical results. Numerical simulations are also provided for comparison with other decoding algorithms.
Yixin Wang, Jifan Liang, Xiao Ma
2023-08-28T02:30:13Z
http://arxiv.org/abs/2308.14259v1
# Partially Constrained GRAND of Linear Block Codes ###### Abstract This paper is concerned with a search-number-reduced guessing random additive noise decoding (GRAND) algorithm for linear block codes, called partially constrained GRAND (PC-GRAND). In contrast to the original GRAND, which guesses error patterns without constraints, the PC-GRAND guesses only those error patterns satisfying partial constraints of the codes. In particular, the PC-GRAND takes partial rows of the parity-check matrix as constraints for generating candidate error patterns and the remaining rows as checks for validating the candidates. The number of searches can be reduced when the serial list Viterbi algorithm (SLVA) is implemented for searching over a trellis specified by the partial parity-check matrix. This is confirmed by numerical results. Numerical simulations are also provided for comparison with other decoding algorithms. Locally constrained ordered statistic decoding (LC-OSD), partially constrained guessing random additive noise (PC-GRAND), serial list Viterbi algorithm (SLVA). ## I Introduction The maximum likelihood (ML) decoding is optimal in terms of minimizing frame-error rate (FER) when the prior distribution of the codewords is unknown or uniform. However, the high complexity of ML decoding makes it impractical for decoding a general code [1]. Hence, the researchers mainly focus on practical near-ML decoders. The ordered statistic decoding (OSD) algorithm is a near-ML decoding algorithm [2]. For a binary linear block code of dimension \(k\) and minimum Hamming distance \(d_{\text{min}}\), OSD can approach ML if order-\(t\) reprocessing is implemented with \(t=\lceil d_{\text{min}}/4-1\rceil\) but has a time complexity of \(O(k^{t})\). So the OSD is more promising for short block codes, and many efforts have been paid to reduce the complexity [3, 4, 5, 6, 7, 8]. Recently, a new variant of the OSD algorithm called locally constrained OSD (LC-OSD) with much lower time complexity is investigated in [9, 10]. Instead of order-\(t\) reprocessing on the most reliable independent bits, the LC-OSD searches for test error patterns using the serial list Viterbi algorithm (SLVA) [11] over a trellis specified by a locally constrained parity-check matrix. Motivated by the success of LC-OSD, we present a search-number-reduced guessing random additive noise decoding (GRAND), called partially constrained GRAND (PC-GRAND). The GRAND algorithm searches for the error patterns from the most likely to the least likely [12], which is ML. This idea was also mentioned in the introductory paragraph of [13]. The order of generating error patterns for GRAND can be specified in soft-GRAND (SGRAND) [14] and ordered reliability bits GRAND (ORBGRAND) [15]. Compared with the OSD algorithm, the GRAND algorithm does not need any matrix manipulations. If the decoding is successful, the resulting codeword for the GRAND algorithm is definitely an ML codeword, which cannot be guaranteed by the OSD algorithm. The complexity of GRAND can be roughly measured by that of generating and checking a candidate error pattern multiplied by the number of searches. The conventional GRAND can generate one candidate in a simple way but has many unnecessary searches. One way to reduce the complexity is to reduce the number of searches at the expense of generating the candidates by imposing some constraints on the error patterns. In particular, several rows with disjoint non-zero positions are selected in [13] to limit the search space for ORBGRAND. 
In this paper, we present a more general algorithm, called PC-GRAND, which guesses only those error patterns satisfying partial constraints of the linear block codes. Precisely, we partition the parity check matrix into two sub-matrices. One sub-matrix is used to generate candidate error patterns, while the other sub-matrix is used to check whether the searched pattern is valid. Distinguished from [16], the choice of the constraints is arbitrary, and the search is implemented by SLVA [11] over the associated partially constrained trellis. Such an implementation over a trellis has at least two advantages. First, it provides a convenient way (for those engineers familiar with trellis codes) to trade off the complexity, the throughput and the performance. Second, it provides a direct way to generalize the GRAND to the memory systems such as Markov noise channels [17], intersymbol interference (ISI) channels [18] and trellis coded modulations. Numerical results show that the PC-GRAND has less number of searches on average and can achieve the same performance as the LC-OSD algorithm for high-rate linear block codes. ## II PC-GRAND ### _System Model_ Let \(\mathbb{F}_{2}\triangleq\{0,1\}\) be the binary field and \(\mathscr{C}[n,k]\) be a binary linear block code of length \(n\) and dimension \(k\). Let \(\mathbf{G}\) of size \(k\times n\) be a generator matrix of \(\mathscr{C}\) and \(\mathbf{H}\) of size \((n-k)\times n\) be a parity-check matrix of \(\mathscr{C}\). We have \(\mathbf{GH}^{T}=\mathbf{O}\), where \(\mathbf{O}\) is the all-zero matrix. Let \(\mathbf{u}=\{u_{0},u_{1},\cdots,u_{k-1}\}\in\mathbb{F}_{2}^{k}\) be an information vector to be transmitted. The information vector is first encoded into \(\mathbf{c}=\mathbf{u}\mathbf{G}\in\mathbb{F}_{2}^{n}\) and then modulated by the binary phase shift keying (BPSK) into a bipolar signal vector \(\mathbf{x}\in\mathbb{R}^{n}\) as \(x_{i}=1-2c_{i},\ 0\leq i<n\). Then the signal vector \(\mathbf{x}\) is transmitted over an additive white Gaussian noise (AWGN) channel, resulting in a received vector \(\mathbf{y}\in\mathbb{R}^{n}\) given by \(\mathbf{y}=\mathbf{x}+\mathbf{n}\), where \(\mathbf{n}\sim\mathcal{N}(\mathbf{0},\sigma^{2}\mathbf{I}_{n})\) is a sample vector of white Gaussian noise. At the receiver, the hard-decision vector \(\mathbf{z}\in\mathbb{F}_{2}^{n}\) is first delivered from the received vector \(\mathbf{y}\) with \[z_{i}=\begin{cases}0,&\text{if }y_{i}\geq 0\\ 1,&\text{if }y_{i}<0\end{cases},\ 0\leq i<n. \tag{1}\] The log-likelihood ratio (LLR) vector, denoted by \(\mathbf{r}\in\mathbb{R}^{n}\), is defined as \[r_{i}=\log\frac{p\{y_{i}\mid c_{i}=0\}}{p\{y_{i}\mid c_{i}=1\}}=\frac{2y_{i}} {\sigma^{2}},\ 0\leq i<n, \tag{2}\] where \(p\{\cdot\}\) is the (conditional) probability density function (PDF). We also refer \(|r_{i}|\) to as the reliability of \(z_{i}\). Also, the hypothetical error pattern \(\mathbf{e}\in\mathbb{F}_{2}^{n}\) for a test codeword \(\mathbf{v}\in\mathbb{F}_{2}^{n}\) is given by \(\mathbf{e}=\mathbf{z}-\mathbf{v}\). The ML decoding consists of finding a codeword \(\mathbf{v}^{*}\) such that \[\mathbf{v}^{*}=\operatorname*{arg\,max}_{\mathbf{v}\in\mathscr{C}}p\{\mathbf{y}\mid\mathbf{c} =\mathbf{v}\}, \tag{3}\] which is equivalent to \[\mathbf{v}^{*}=\operatorname*{arg\,min}_{\mathbf{v}\in\mathscr{C}}\,\log\frac{p\{\mathbf{y }\mid\mathbf{c}=\mathbf{z}\}}{p\{\mathbf{y}\mid\mathbf{c}=\mathbf{v}\}}. 
\tag{4}\] If we define the soft-weight of a test error pattern \(\mathbf{e}\) by \[\Gamma(\mathbf{e}) =\log\frac{p\{\mathbf{y}\mid\mathbf{c}=\mathbf{z}\}}{p\{\mathbf{y}\mid\mathbf{c}=\mathbf{ v}\}}=\log\frac{p\{\mathbf{y}\mid\mathbf{c}=\mathbf{z}\}}{p\{\mathbf{y}\mid\mathbf{c}=\mathbf{z}-\mathbf{e}\}} \tag{5}\] \[=\sum_{i=0}^{n-1}\log\frac{p\{y_{i}\mid c_{i}=z_{i}\}}{p\{y_{i} \mid c_{i}=z_{i}-e_{i}\}}=\sum_{i=0}^{n-1}e_{i}|r_{i}|,\] we see that the ML decoding is equivalent to the lightest-soft-weight (LSW) decoding.

### _Sgrand_

The SGRAND is a soft detection ML decoder [14]. The decoder first sorts the bits in the hard-decision vector \(\mathbf{z}\) in ascending order according to their reliabilities, resulting in \(\widetilde{\mathbf{z}}\). The error patterns are searched according to the reliabilities of the bits, with a maximum search number \(\ell_{\text{max}}\). At each search step, the pattern \(\mathbf{e}\) with the lightest soft weight is chosen and removed from a priority queue \(S\). The searched pattern \(\mathbf{e}\) is then used to check whether \(\mathbf{z}-\mathbf{e}\) is a valid codeword with the parity-check matrix \(\mathbf{H}\). In the case when \(\mathbf{e}\) does not satisfy the checks, the successors of \(\mathbf{e}\) are inserted into \(S\). The details of the SGRAND can be found in [14]. Notice that the procedure specified in [14, Algorithm 2] to generate the ordered error patterns can be described with a flipping pattern tree (FPT) [19]1. Footnote 1: The description with FPT in [19] is more general and applicable to nonbinary codes.

### _Pc-Grand_

In this subsection, we present the PC-GRAND algorithm.

#### II-C1 Preprocessing

Let \(\delta\) be an integer such that \(0\leq\delta\leq n-k\). Divide the parity check matrix \(\mathbf{H}\) into two sub-matrices denoted as \[\mathbf{H}=\begin{bmatrix}\mathbf{H}_{1}\\ \mathbf{H}_{2}\end{bmatrix}, \tag{6}\] where the sub-matrix \(\mathbf{H}_{1}\) is of size \(\delta\times n\) and the sub-matrix \(\mathbf{H}_{2}\) is of size \((n-k-\delta)\times n\). If a test vector \(\mathbf{v}\) is a valid codeword, we have \[\mathbf{H}\mathbf{v}^{T}=\mathbf{H}(\mathbf{z}^{T}-\mathbf{e}^{T})=\mathbf{0}. \tag{7}\] Equivalently, we have both \[\mathbf{H}_{1}\mathbf{e}^{T}=\mathbf{H}_{1}\mathbf{z}^{T} \tag{8}\] and \[\mathbf{H}_{2}\mathbf{e}^{T}=\mathbf{H}_{2}\mathbf{z}^{T}. \tag{9}\] Upon receiving \(\mathbf{y}\), the hard-decision vector \(\mathbf{z}\) is determined, and the right-hand sides (RHS) of (8) and (9) are calculated and stored for future use.

#### II-C2 Search Scheme

We can use (8) as constraints to search for candidate error patterns with soft weights in non-decreasing order, i.e., \(\Gamma(\mathbf{e}^{(i)})\leq\Gamma(\mathbf{e}^{(i+1)})\), which can be achieved by the SLVA [11] over a trellis specified by the sub-matrix \(\mathbf{H}_{1}\).
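To make the two-stage use of \(\mathbf{H}_{1}\) and \(\mathbf{H}_{2}\) in (8)-(9) concrete, here is a minimal Python sketch of the search scheme (our illustration, not the authors' implementation): a brute-force enumeration of the coset \(\{\mathbf{e}:\mathbf{H}_{1}\mathbf{e}^{T}=\mathbf{H}_{1}\mathbf{z}^{T}\}\) sorted by soft weight stands in for the SLVA, so it is only practical for very short codes. All numerical inputs below are made-up example values.

```python
import numpy as np
from itertools import product

def pc_grand(z, r, H1, H2, l_max=10**5):
    """Sketch of the PC-GRAND search: candidates satisfying (8) are visited
    in non-decreasing soft weight Gamma(e) = sum_i e_i |r_i|; (9) validates.
    Returns the decoded codeword and the number of searches performed."""
    n = len(z)
    s1, s2 = H1 @ z % 2, H2 @ z % 2
    cands = [np.array(e) for e in product((0, 1), repeat=n)
             if np.array_equal(H1 @ np.array(e) % 2, s1)]
    cands.sort(key=lambda e: e @ np.abs(r))     # lightest soft weight first
    for i, e in enumerate(cands[:l_max], start=1):
        if np.array_equal(H2 @ e % 2, s2):
            return (z - e) % 2, i
    return z, l_max                             # abandon

# toy example: [7,4] Hamming code, delta = 1 partial constraint
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
H1, H2 = H[:1], H[1:]
# made-up channel outputs: hard decisions z and LLRs r with consistent signs
z = np.array([1, 0, 1, 1, 0, 0, 1])
r = np.array([-2.1, 1.5, -0.3, -1.8, 0.9, 2.2, -1.1])
print(pc_grand(z, r, H1, H2))
```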
For every searched candidate pattern \(\mathbf{e}^{(i)}\), the equation (9) is used to check whether the pattern \(\mathbf{e}^{(i)}\) is valid. The search scheme is described in Algorithm 1 by setting the maximum number of searches, \(\ell_{\text{max}}\).

```
Input:\(\mathbf{z}\), \(\delta\), \(\ell_{\text{max}}\).
Output: the optimal searched codeword \(\mathbf{v}\).
1: Perform preprocessing.
2:\(\mathbf{e}_{\text{opt}}\leftarrow\mathbf{0}\)
3:for\(i=1,2,3,\ldots,\ell_{\text{max}}\)do
4: Find the \(i\)-th lightest-soft-weight test error pattern \(\mathbf{e}^{(i)}\) such that \(\mathbf{H}_{1}(\mathbf{e}^{(i)})^{T}=\mathbf{H}_{1}\mathbf{z}^{T}\).
5:if\(\mathbf{H}_{2}(\mathbf{e}^{(i)})^{T}=\mathbf{H}_{2}\mathbf{z}^{T}\)then
6:\(\mathbf{e}_{\text{opt}}\leftarrow\mathbf{e}^{(i)}\)
7:break
8:endif
9:endfor
10:\(\mathbf{v}\leftarrow(\mathbf{z}-\mathbf{e}_{\text{opt}})\).
11:return\(\mathbf{v}\)
```
**Algorithm 1** Search Scheme for PC-GRAND

## III Complexity Analysis

We analyze the computational complexity of the decoding algorithms by evaluating the number of floating point operations (FLOPs) and binary operations (BOPs) of each step. Denote by \(\ell_{\text{avg}}\) the average number of test patterns per received vector.

### _Computational Complexity_

We first analyze the complexity of SGRAND [14]. The sorting requires \(\mathcal{O}(n\log n)\) BOPs and FLOPs. In the case when a candidate error pattern \(\mathbf{e}\) does not pass the parity checks, two new error patterns generally need to be inserted into the queue \(S\). One new pattern \(\mathbf{e}_{1}\) can be immediately generated from \(\mathbf{e}\) by flipping one bit with \(\mathcal{O}(1)\) BOPs. The other new error pattern is then generated by copying \(\mathbf{e}_{1}\) and flipping one bit with \(\mathcal{O}(n)\) BOPs. The storage of the error patterns can be implemented with a min-heap, and hence the insertion and deletion complexity of new patterns can be upper bounded by \(\ell_{\text{avg}}\log\ell_{\text{avg}}\) FLOPs. The checking is a matrix-vector multiplication, which requires \(\mathcal{O}(n(n-k))\) BOPs. Thus, the computational complexity of SGRAND can be evaluated as \[T_{\text{avg}}=\underbrace{\mathcal{O}(n\log n)}_{\text{sorting}}+\underbrace {\mathcal{O}(\ell_{\text{avg}}(n+\log\ell_{\text{avg}}))}_{\text{searching}}+ \underbrace{\mathcal{O}(\ell_{\text{avg}}n(n-k))}_{\text{checking}}. \tag{10}\] For the proposed PC-GRAND, the trellis specified by the local parity-check matrix \(\mathbf{H}_{1}\) has \(n\) sections, and each section has at most \(2^{\delta}\) states. To find the best path \(\mathbf{e}^{(1)}\), the SLVA needs to calculate and store the best paths associated with all allowable states, each requiring \(\mathcal{O}(1)\) BOPs and FLOPs. Thus, for finding the best path, the complexity is \(\mathcal{O}(2^{\delta}\cdot n)\). With the previous \(i-1\) paths found, searching for a candidate \(\mathbf{e}^{(i)}\left(i>1\right)\) by the SLVA requires \(\mathcal{O}(n)\) FLOPs. Upon generating a candidate error pattern, checking with \(\mathbf{H}_{2}\) requires \(\mathcal{O}(n(n-k))\) BOPs. Thus, the complexity of PC-GRAND can be evaluated as \[T_{\text{avg}}=\underbrace{\mathcal{O}(2^{\delta}\cdot n+\ell_{\text{avg}}n) }_{\text{searching over trellis}}+\underbrace{\mathcal{O}(\ell_{\text{avg}}n(n-k ))}_{\text{checking}}. \tag{11}\] From the analysis above, we see that the time complexity is dominated by the average search number \(\ell_{\text{avg}}\).
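For comparison, the heap bookkeeping behind SGRAND's searching term in (10) can be sketched as follows (our reconstruction of the ordered generation described in Sec. II-B, with hypothetical helper names): each popped pattern that fails the check spawns two successors, so every set of flip positions is generated exactly once, in non-decreasing soft weight.

```python
import heapq
import numpy as np

def sgrand(z, r, H, l_max=10**5):
    """Heap-based sketch of SGRAND: flip patterns are tuples of positions in
    the reliability-sorted order; successors either shift the last flipped
    position up by one or additionally flip the next position."""
    n = len(z)
    order = np.argsort(np.abs(r))       # least reliable bits first
    w = np.abs(r)[order]
    s = H @ z % 2
    def check(idx):
        e = np.zeros(n, dtype=int)
        e[order[list(idx)]] = 1
        return np.array_equal(H @ e % 2, s), e
    good, e = check(())                 # the all-zero pattern goes first
    queries = 1
    heap = [(w[0], (0,))]
    while not good and heap and queries < l_max:
        wt, idx = heapq.heappop(heap)
        good, e = check(idx)
        queries += 1
        j = idx[-1]
        if j + 1 < n:                   # the two successors of idx
            heapq.heappush(heap, (wt - w[j] + w[j + 1], idx[:-1] + (j + 1,)))
            heapq.heappush(heap, (wt + w[j + 1], idx + (j + 1,)))
    return ((z - e) % 2 if good else z), queries
```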
In the case when the \(\ell_{\text{avg}}\) of PC-GRAND is far less than that of SGRAND, the time complexity of PC-GRAND can be lower than that of SGRAND.

### _Space Complexity_

We analyze the space complexity of the searching in PC-GRAND and SGRAND by calculating the number of bytes used. In the searching for SGRAND, upon selecting a candidate error pattern \(\mathbf{e}\) in \(S\), at most two new patterns generated from \(\mathbf{e}\) are inserted into \(S\). Thus, the main storage space for SGRAND is that used to store the set \(S\), of size \(\mathcal{O}(\ell_{\text{max}}n)\). The main storage space of the SLVA includes the space occupied by the initialization, \(\mathcal{O}(2^{\delta}\cdot n)\), and the space occupied by searching and storing paths, \(\mathcal{O}(\ell_{\text{max}}\cdot n)\). In summary, the space complexity of PC-GRAND is given by \(\mathcal{O}((2^{\delta}+\ell_{\text{max}})n)\). From the analysis above, if \(2^{\delta}\) is far less than \(\ell_{\text{max}}\), the space complexity of PC-GRAND and SGRAND is almost the same. Actually, to achieve the same performance, the \(\ell_{\text{max}}\) for PC-GRAND can be less than that for SGRAND.

## IV Simulation Results

In this section, we present simulation results to demonstrate the performance of the PC-GRAND algorithm. The SGRAND algorithm [14] and the LC-OSD algorithm [10] are also implemented for comparison. We denote by \(\ell_{\text{avg}}\) and \(\ell_{\text{max}}\), respectively, the average search number and the maximum search number for PC-GRAND, LC-OSD and SGRAND. We denote by \(\delta\) the number of partial constraints for PC-GRAND and of local constraints for LC-OSD [10].

_Example 1_.: In this example, we consider the cyclic redundancy check (CRC)-aided polar (CA-polar) code \(\mathscr{C}[128,105]\) in 5G new radio (NR) for uplink communication. The simulation results for FER with different \(\delta\) for PC-GRAND with \(\ell_{\text{max}}=10^{6}\) are shown in Fig. 1(a). The performance of the CRC-aided successive cancellation list decoding (CA-SCL) [20, 21] with list size \(32\) is also shown in this figure. From Fig. 1(a), we can see that, for all values of \(\delta\), the PC-GRAND outperforms the SCL (with list size 32), achieving a gain of 0.3 dB at FER \(\approx 10^{-4}\). Generally, the FER can be improved as \(\delta\) grows. Also notice that the FER can hardly be improved as \(\delta\) grows beyond \(\delta=6\).

Fig. 1: The FER and the average search number \(\ell_{\text{avg}}\) for CA-polar \(\mathscr{C}[128,105]\) with different \(\delta\) in PC-GRAND. The maximum search number is \(\ell_{\text{max}}=10^{6}\).

We also investigate the average search number \(\ell_{\text{avg}}\) for different \(\delta\), and the simulation results are shown in Fig. 1(b). We can observe from Fig. 1(b) that \(\ell_{\text{avg}}\) can be reduced as the partial constraints of the SLVA increase. To make a trade-off between performance and complexity, we choose \(\delta=6\) for PC-GRAND in the following comparison.

_Example 2_.: We compare the performance of PC-GRAND (\(\delta=6,\ell_{\text{max}}=10^{6}\)), LC-OSD [10] (\(\delta=8\), \(\ell_{\text{max}}=16384\)) and SGRAND [14] (\(\ell_{\text{max}}=10^{6}\)) for the CA-polar code \(\mathscr{C}[128,105]\) and the Bose-Chaudhuri-Hocquenghem (BCH) code \(\mathscr{C}[127,113]\). The simulation results are shown in Fig. 2. From Fig. 2, we see that all three algorithms can approach the ML performance.
For the CA-polar code, PC-GRAND performs slightly better than SGRAND. This is because SGRAND with the given \(\ell_{\text{max}}\) is forced to abandon before it identifies the ML codeword, while PC-GRAND can identify the ML codeword in fewer queries, since the constraints allow it to skip invalid error patterns.

_Example 3_.: In this example, we show the computational and space complexity of the decoding algorithms. Since the computational complexity is dominated by the average search number \(\ell_{\text{avg}}\), we first compare the average search numbers of PC-GRAND (\(\delta=6,\ell_{\text{max}}=10^{6}\)), LC-OSD [10] (\(\delta=8\), \(\ell_{\text{max}}=16384\)) and SGRAND [14] (\(\ell_{\text{max}}=10^{6}\)) for the CA-polar code \(\mathscr{C}[128,105]\) and the BCH code \(\mathscr{C}[127,113]\). The simulation results are shown in Fig. 3. We also count the BOPs and FLOPs needed on average to decode a codeword in a software implementation of the three algorithms, and the results are shown in Fig. 4. With the results in Figs. 3 and 4, we see that the LC-OSD has the smallest \(\ell_{\text{avg}}\) but requires Gaussian elimination for preprocessing, with a computational complexity of almost \(\mathcal{O}(n^{3})\) for decoding every noisy codeword. Thus, LC-OSD can have high decoding complexity in the high SNR region, compared with SGRAND and PC-GRAND. PC-GRAND can significantly reduce the average search number compared with SGRAND, and hence can have a lower average computational complexity for decoding some codes. For the simulations, we set \(\ell_{\text{max}}\) for both SGRAND and PC-GRAND to \(10^{6}\). With this setting, \(2^{\delta}\) is far less than \(\ell_{\text{max}}\), and the space complexity of SGRAND and PC-GRAND is almost the same.

## V Conclusion

In this paper, we have proposed the PC-GRAND to reduce the average search number of SGRAND and hence the decoding complexity. More specifically, a small number of rows from the parity-check matrix are used to constrain the candidate error pattern search by the SLVA over an associated partially constrained trellis. The remaining rows are used as checks for validating the candidates. The computational complexity analysis and numerical results show that introducing partial constraints can reduce the decoding complexity compared with SGRAND. The comparison results show that the PC-GRAND performs the same as the LC-OSD. Since the PC-GRAND is implemented over a trellis, it can be easily generalized to memory channels. Although it is presented as a serial algorithm over a trellis in this paper, the PC-GRAND can be implemented in parallel by partitioning the trellis into multiple sub-trellises and performing the SLVA separately over each sub-trellis simultaneously.
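For readers who want to reproduce the flavour of these experiments at toy scale, the following Monte-Carlo harness (our addition; it reuses the `pc_grand` sketch and the Hamming matrices defined above, and assumes an \(E_{b}/N_{0}\) SNR convention) wires up the BPSK-AWGN model of Eqs. (1)-(2):

```python
import numpy as np

def simulate_fer(G, H1, H2, snr_db, n_frames=200, seed=1):
    """FER and average search count for the pc_grand() sketch over AWGN."""
    rng = np.random.default_rng(seed)
    k, n = G.shape
    sigma = np.sqrt(1 / (2 * (k / n) * 10 ** (snr_db / 10)))  # Eb/N0 convention
    errors, searches = 0, 0
    for _ in range(n_frames):
        u = rng.integers(0, 2, k)
        c = u @ G % 2                                 # encode
        y = 1 - 2 * c + sigma * rng.normal(size=n)    # BPSK + noise
        z = (y < 0).astype(int)                       # hard decisions, Eq. (1)
        r = 2 * y / sigma**2                          # LLRs, Eq. (2)
        v, l = pc_grand(z, r, H1, H2)
        searches += l
        errors += int(not np.array_equal(v, c))
    return errors / n_frames, searches / n_frames

# generator matrix matching the [7,4] Hamming parity checks used earlier
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
print(simulate_fer(G, H[:1], H[1:], snr_db=4.0))
```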
2308.15147
T-Dualities and Courant Algebroid Relations
We develop a new approach to T-duality based on Courant algebroid relations which subsumes the usual T-duality as well as its various generalisations. Starting from a relational description for the reduction of exact Courant algebroids over foliated manifolds, we introduce a weakened notion of generalised isometries that captures the generalised geometry counterpart of Riemannian submersions when applied to transverse generalised metrics. This is used to construct T-dual backgrounds as generalised metrics on reduced Courant algebroids which are related by a generalised isometry. We prove an existence and uniqueness result for generalised isometric exact Courant algebroids coming from reductions. We demonstrate that our construction reproduces standard T-duality relations based on correspondence spaces. We also describe how it applies to generalised T-duality transformations of almost para-Hermitian manifolds.
Thomas C. De Fraja, Vincenzo Emilio Marotta, Richard J. Szabo
2023-08-29T09:28:15Z
http://arxiv.org/abs/2308.15147v3
# T-dualities and Courant algebroid relations ###### Abstract. We develop a new approach to T-duality based on Courant algebroid relations which subsumes the usual T-duality as well as its various generalisations. Starting from a relational description for the reduction of exact Courant algebroids over foliated manifolds, we introduce a weakened notion of generalised isometries that captures the generalised geometry counterpart of Riemannian submersions when applied to transverse generalised metrics. This is used to construct T-dual backgrounds as generalised metrics on reduced Courant algebroids which are related by a generalised isometry. We prove an existence and uniqueness result for generalised isometric exact Courant algebroids coming from reductions. We demonstrate that our construction reproduces standard T-duality relations based on correspondence spaces. We also describe how it applies to generalised T-duality transformations of almost para-Hermitian manifolds. ###### Contents * 1 Introduction * 1.1 Exact Courant Algebroids from Sigma-Models * 1.2 The "Category" of Courant Algebroids * 1.3 T-duality and the Fourier-Mukai Transform * 1.4 Generalised T-duality and Para-Hermitian Geometry * 1.5 Summary of Results and Outline * 1.6 Acknowledgements * 2 Exact Courant Algebroids: Isomorphism and Reduction * 2.1 Courant Algebroids * 2.2 Isomorphisms of Courant Algebroids * 2.3 Reduction of Exact Courant Algebroids * 2.3.1 Reduction of Subbundles of Exact Courant Algebroids * 2.3.2 Adapted Splittings of Exact Courant Algebroids * 3 Courant Algebroid Relations: Reduction and Composition * 3.1 Courant Algebroid Relations * 3.2 Relational Approach to Reduction * 3.3 Composition of Courant Algebroid Relations * 4 Courant Algebroid Relations as Isometries * 4.1 Generalised Metrics * 4.2 Generalised Isometries * 4.3 Transverse Generalised Isometries * 4.3.1 Transverse Generalised Metrics * 4.3.2 Relations for Transverse Generalised Metrics * 4.4 Composition of Transverse Generalised Isometries * 5 T-duality as a Courant Algebroid Relation * 5.1 Topological T-duality * 5.1.1 Topological T-duality for Standard Courant Algebroids * 5.2 Geometric T-duality * 5.2.1 Geometric T-duality for Standard Courant Algebroids * 6 T-duality Relations and Doubled Geometry * 6.1 T-duality for Correspondence Spaces * 6.2 Generalised T-duality for Para-Hermitian Manifolds * 6.2.1 Para-Hermitian Manifolds * 6.2.2 The Reduced Courant Algebroid * 6.2.3 Generalised T-duality * 6.3 Doubled Nilmanifolds * 6.3.1 The Drinfel'd Double of the Heisenberg Group * 6.3.2 The Doubled Heisenberg Nilmanifold * 6.3.3 Generalised T-duality * A Change of Splitting for Compositions ## 1. Introduction Courant algebroids, as introduced in [1, 2, 3, 4], have proved to be an indispensable geometric tool for understanding many aspects of supergravity and string theory, see e.g. [5, 6] and references therein. In this paper we are interested in their role in capturing the geometric, topological, and physical properties of spaces related by T-duality symmetries of string theory, see e.g. [3, 7, 8, 9, 10, 11, 12, 13]. Finalising a fully geometric description of T-duality and its various generalisations, such as non-abelian T-duality, Poisson-Lie T-duality, and non-isometric T-duality, has proven to be an elusive task. In this paper we will build on the geometric picture of Poisson-Lie T-duality proposed by Vysoky in [13] to give a generalised and unified framework for T-dualities in terms of Courant algebroid relations. 
Let us start by explaining why T-duality is generally expected to be subsumed into the geometry of Courant algebroids from the perspective of the worldsheet formulation of string theory, which is the point of view taken throughout this paper. Following [14, Section 2.2], we describe the intimate relationship between two-dimensional sigma-models and Courant algebroids, originally stated in [3]. ### Exact Courant Algebroids from Sigma-Models The background data for the bosonic part of the worldsheet theory of closed oriented strings consists of a closed oriented Riemann surface with metric \((\Sigma,h)\), a Riemannian manifold \((M,g)\) and a closed three-form \(H\in\Omega^{3}_{\mathrm{cl}}(M)\), called an \(H\)_-flux_, which represents an integer cohomology class \([H]\in\mathsf{H}^{3}(M,\mathbb{Z})\). The string sigma-model is a field theory of smooth maps \(\mathbb{X}\in C^{\infty}(\Sigma,M)\), whose action functional is a sum of two terms. The kinetic term is given by a Dirichlet-type functional, called the _Polyakov functional_. It is defined by endowing the space of maps \(\mathrm{d}\mathbb{X}:T\Sigma\to\mathbb{X}^{*}TM\) with a metric induced by \(g\), regarded as a metric on the vector space of sections of the pullback \(\mathbb{X}^{*}TM\) of the tangent bundle \(TM\) to \(\Sigma\) by \(\mathbb{X}\), and the cometric \(h^{-1}\) on \(T^{*}\Sigma.\) This gives a well-defined metric \(h^{-1}\otimes\mathbb{X}^{*}g\) on the vector bundle \(T^{*}\Sigma\otimes\mathbb{X}^{*}TM\) over \(\Sigma\) which allows one to write the Polyakov functional as \[S_{0}[\mathbb{X}]=\frac{1}{2}\,\int_{\Sigma}\,h^{-1}\big{(}\mathbb{X}^{*}g( \mathrm{d}\mathbb{X},\mathrm{d}\mathbb{X})\big{)}\ \mathrm{d}\mu(h)\, \tag{1.1}\] where \(\mathrm{d}\mathbb{X}\in\mathsf{\Gamma}(T^{*}\Sigma\otimes\mathbb{X}^{*}TM)\) and \(\mu(h)\) is the area measure on \(\Sigma\) induced by \(h\). The topological term, called the _Wess-Zumino functional_, is the functional \[S_{H}\colon C^{\infty}(\Sigma,M)\longrightarrow\mathbb{R}/\mathbb{Z}\] defined by \[S_{H}[\mathbb{X}]\coloneqq\int_{V}\,\mathbb{X}_{V}^{*}H\, \tag{1.2}\] where \(V\) is any three-manifold with boundary \(\Sigma\), and \(\mathbb{X}_{V}\colon V\to M\) is any smooth extension of \(\mathbb{X}\in C^{\infty}(\Sigma,M)\) to \(V.\) The space of Lagrangian densities of the Wess-Zumino functional \(S_{H}\) is \(\Omega^{3}_{\mathrm{cl}}(M)\). Consider the variational problem for the topological term. A variation of the Wess-Zumino functional (1.2) is generated by a vector field \(X\in\mathsf{\Gamma}(TM)\) acting via the Lie derivative. Since \(H\) is closed, via Stokes' Theorem this is given by \[\delta_{X}S_{H}[\mathbb{X}]=\int_{\Sigma}\,\mathbb{X}^{*}\big{(} \iota_{X}H\big{)}\.\] The solutions of the variational problem \(\delta_{X}S_{H}[\mathbb{X}]=0\) (see e.g. [3, 10, 14]) are given by the maps \(\mathbb{X}\in C^{\infty}(\Sigma,M)\) such that, for all \(X\in\mathsf{\Gamma}(TM)\), there exists \(\bar{\alpha}\in\mathsf{\Gamma}(T^{*}\Sigma)\) satisfying \[\mathbb{X}^{*}\big{(}\iota_{X}H\big{)}=\mathrm{d}\bar{\alpha}. \tag{1.3}\] Let us introduce the vector bundle \[\mathbb{T}M\coloneqq TM\oplus T^{*}M\,\] called the double tangent bundle of \(M\). 
Given a section \(X+\alpha\in\mathsf{\Gamma}(\mathbb{T}M)\), Equation (1.3) is then satisfied if \[\iota_{X}H=\mathrm{d}\alpha\, \tag{1.4}\] where Equation (1.3) is obtained by pulling back Equation (1.4) by \(\mathbb{X}.\) Equation (1.4) can be interpreted as giving the tangent directions on \(C^{\infty}(\Sigma,M)\) along which \(S_{H}\) is constant. We shall now show that these flat directions are related to symmetries of the Wess-Zumino functional (1.2). The Lie group \[\mathsf{G}:=\mathsf{Diff}(M)\ltimes\Omega^{2}(M)\] acts on the space of Lagrangian densities \(\Omega^{3}_{\mathrm{cl}}(M)\) by \[(\varphi,B)\cdot H=(\varphi^{-1})^{*}(H-\mathrm{d}B)=:H^{\prime}\, \tag{1.5}\] for all \((\varphi,B)\in\mathsf{G}\) and \(H\in\Omega^{3}_{\mathrm{cl}}(M).\) Under the group action (1.5), the Wess-Zumino functional \(S_{H}\) transforms as \[S_{H^{\prime}}[\mathbb{X}]=S_{H}[\varphi^{-1}\circ\mathbb{X}]- \int_{\Sigma}\,\mathbb{X}^{*}\big{(}(\varphi^{-1})^{*}B\big{)}\,\] where \(H^{\prime}=(\varphi^{-1})^{*}(H-\mathrm{d}B).\) This is invariant if and only if \(B\) is closed, and \((\varphi^{-1})^{*}H=H.\) Such pairs \((\varphi,B)\) define the isotropy subgroup \(\mathsf{G}_{H}\subset\mathsf{G}\) of the \(H\)-flux. Let \(\mathsf{Lie}(\mathsf{G}_{H})\subset\mathsf{\Gamma}\big{(}TM\oplus\bigwedge^{2}T^{*} M\big{)}\) be the Lie algebra of the isotropy group \(\mathsf{G}_{H}\). Infinitesimal symmetries of \(S_{H}\) are characterised by the condition (1.4), and are generated by the normal Lie subalgebra of \(\mathsf{Lie}(\mathsf{G}_{H})\) given by \[\mathsf{Lie}(\mathsf{G}_{H})^{\mathfrak{n}}=\{\,X+\mathrm{d}\alpha-\iota_{X}H \,\mid\,X\in\mathsf{\Gamma}(TM)\,\,,\,\,\alpha\in\mathsf{\Gamma}(T^{*}M)\,\}\,\,.\] See [14] for the proof. The Lie group \(\mathsf{G}\) acts on \(\mathsf{\Gamma}(\mathbb{T}M)\) by \[(\varphi,B)\cdot(X+\alpha)=\varphi_{*}X+(\varphi^{-1})^{*}(\alpha+\iota_{X}B) \,\,,\] for all \((\varphi,B)\in\mathsf{G}\) and \(X+\alpha\in\mathsf{\Gamma}(\mathbb{T}M)\). Let \(\Psi\colon\mathsf{\Gamma}(\mathbb{T}M)\to\mathsf{Lie}(\mathsf{G}_{H})^{ \mathfrak{n}}\) be the \(\mathsf{G}_{H}\)-equivariant map defined by \[\Psi(X+\alpha)=X+\mathrm{d}\alpha-\iota_{X}H\,\,, \tag{1.6}\] for all \(X+\alpha\in\mathsf{\Gamma}(\mathbb{T}M).\) The Dorfman bracket \([\![\,\cdot\,,\,\cdot\,]\!]_{H}\) is obtained from the Lie algebra action induced by the restriction of the representation of \(\mathsf{G}\) on \(\mathsf{\Gamma}(\mathbb{T}M)\) to \(\mathrm{im}(\Psi)\subset\mathsf{Lie}(\mathsf{G})\): \[\Psi(X+\alpha)\cdot(Y+\beta)=[X,Y]+\pounds_{X}\beta-\iota_{Y}\,\mathrm{d} \alpha+\iota_{Y}\,\iota_{X}H=:[\![X+\alpha,Y+\beta]\!]_{H}\,\,, \tag{1.7}\] for all \(X+\alpha,Y+\beta\in\mathsf{\Gamma}(\mathbb{T}M)\).1 The Jacobi identity for \([\![\,\cdot\,,\,\cdot\,]\!]_{H}\) is a consequence of \(\mathsf{G}_{H}\)-equivariance of \(\Psi\). Footnote 1: Notation: \([X,Y]\) denotes the Lie bracket of vector fields \(X,Y\in\mathsf{\Gamma}(TM)\), \(\pounds_{X}\) denotes the Lie derivative along \(X\), and \(\iota_{(\cdot\,)}\) generally denotes the contraction of a section of a vector bundle with a tensor of the dual vector bundle. The vector bundle \(\mathbb{T}M\) endowed with the bracket \([\![\,\cdot\,,\,\cdot\,]\!]_{H}\) (and a suitable pairing) on its sections is our first example of a (exact) Courant algebroid. We will see later on that its automorphism group is the same as the isotropy group \(\mathsf{G}_{H}\subset\mathsf{G}\) of \(H\) for the action (1.5) on \(\Omega^{3}_{\mathrm{cl}}(M)\), i.e. 
the symmetries leaving \(S_{H}\) invariant (see Corollary 2.16). Incorporating the kinetic term (1.1) amounts to introducing a generalised metric on \(\mathbb{T}M\). We conclude that, from the worldsheet perspective, the data of a string background is encoded in an exact Courant algebroid endowed with a generalised metric. Courant algebroids may be regarded as a generalisation of quadratic Lie algebras from vector spaces to vector bundles, i.e. as vector bundles endowed with a bracket operation on their module of sections and a non-degenerate symmetric bilinear pairing satisfying a kind of invariance property with respect to the bracket operation. The crux of understanding T-dualities in this framework resides in the complexity of the description of morphisms of these structures. As we have seen above, isomorphisms of Courant algebroids provide a symmetry of the corresponding sigma-model, and they prove to be remarkably well-behaved. Meanwhile, the notion of T-duality for sigma-models goes beyond the concept of symmetry, and more powerful tools are needed to explore the relation between T-dual sigma-models. In this paper we advocate the use of Courant algebroid relations which, while less constraining than isomorphisms, present subtle and more complex features that before the work of Vysoky [13] were not fully understood. This conundrum motivates a discussion of what a category-like notion for Courant algebroids might be. ### The "Category" of Courant Algebroids As argued in [13], an extended notion of the category of Courant algebroids can be introduced by allowing morphisms of Courant algebroids to be involutive isotropic subbundles of a product Courant algebroid supported on a submanifold of the base; these are called _Courant algebroid relations_. By introducing this notion one avoids the problem of not having enough arrows. On the other hand, extending the notion of arrow between Courant algebroids in this way means that not all arrows are composable: the support of the composition of subbundles might fail to be a submanifold itself. Nonetheless, the notion of Courant algebroid relation provides a complete characterisation of what Courant algebroid morphisms are and how their composition behaves. In [13] a well-defined notion of relations between generalised metrics on Courant algebroids is further presented, i.e. a _generalised isometry_ which is also well-behaved under composition of relations. Based on this, in the present paper we will extend the notion of generalised isometry to structures similar to generalised metrics: the _transverse generalised metrics_ which include in their definition a controlled degeneracy that, in turn, helps in defining their invariance conditions. This extension also arises from a more concrete problem. As shown in [15], an exact \(\mathsf{G}\)-equivariant Courant algebroid, over a principal \(\mathsf{G}\)-bundle, can be reduced to a Courant algebroid on the base of the principal bundle. For isotropic \(\mathsf{G}\)-actions, the reduced Courant algebroid is exact as well. As shown in [13], there exists a Courant algebroid relation between the original Courant algebroid and the reduced Courant algebroid. However, if both are endowed with generalised metrics, this Courant algebroid relation always fails to be a generalised isometry if the \(\mathsf{G}\)-action is isotropic. Our goal in this paper is to find a notion of isometry between a suitable structure on the original Courant algebroid, e.g. 
a transverse generalised metric, and a generalised metric on the reduced Courant algebroid. Courant algebroid relations pave the way for a complete formulation of T-duality as a generalised isometry. This idea was first proposed by Severa2 in order to describe Poisson-Lie T-duality as an "almost isomorphism" of Hamiltonian systems [16]. The almost isomorphism arises from a Courant algebroid relation that is required to be both Lagrangian - i.e. a Dirac structure for the product Courant algebroid - and a generalised isometry. This is explored in detail in [13], where Poisson-Lie T-duality is shown to be a generalised isometry between reduced Courant algebroids coming from different group actions which are not isotropic. However, there is no general geometric framework encompassing T-duality in all of its flavours. In this paper we will provide such a framework for T-dualities arising from generic isotropic Courant algebroid reductions. A particularly relevant instance that we are interested in is the T-duality for torus bundles with \(H\)-flux discussed in [17, 18, 8, 19]. This has also recently been discussed in [20] from a more general geometric perspective which is closer in spirit to our treatment; however, our framework also covers more general cases of affine torus bundles [21]. Footnote 2: See [http://thphys.irb.hr/dualities2017/files/Jun06Severa.pdf](http://thphys.irb.hr/dualities2017/files/Jun06Severa.pdf). ### T-duality and the Fourier-Mukai Transform In the picture proposed by [17] and later expanded in [8], T-duality is accompanied by a module isomorphism between invariant sections of Courant algebroids over torus bundles with a common base. It arises from a (smooth version of the) Fourier-Mukai transform through a correspondence between these bundles. One of our applications in this paper is the description of this form of T-duality in terms of Courant algebroid relations. Let us therefore discuss this approach to T-duality in more depth. The setting of [8, 17] involves two principal \(\mathsf{T}^{k}\)-bundles \(\mathcal{Q}_{1}\) and \(\mathcal{Q}_{2}\) over a common base manifold \(\mathcal{B}\). Let \(M=\mathcal{Q}_{1}\times_{\mathcal{B}}\mathcal{Q}_{2}\) be the fibred product, with respective projections \(\varpi_{1}\) and \(\varpi_{2}\) to \(\mathcal{Q}_{1}\) and \(\mathcal{Q}_{2}\). This is a principal \(\mathsf{T}^{2k}\)-bundle over \(\mathcal{B}\), called a correspondence space or a doubled torus bundle, which sits in the commutative diagram of bundle projections \(\varpi_{1}\), \(\varpi_{2}\) and the maps to \(\mathcal{B}\). On \(M\) the fibrewise T-duality group acts geometrically via diffeomorphisms [22, 23]. Suppose that both \(\mathcal{Q}_{1}\) and \(\mathcal{Q}_{2}\) are endowed with closed \(\mathsf{T}^{k}\)-invariant three-forms \(\underline{H}_{1}\) and \(\underline{H}_{2}\), respectively. Then \((\mathcal{Q}_{1},\underline{H}_{1})\) and \((\mathcal{Q}_{2},\underline{H}_{2})\) are said to be T-dual if \[\varpi_{1}^{*}\,\underline{H}_{1}-\varpi_{2}^{*}\,\underline{H}_{2}=\mathrm{d}B\] on the correspondence space \(M\), for some \(\mathsf{T}^{2k}\)-invariant two-form \(B\in\Omega^{2}_{\mathsf{T}^{2k}}(\mathcal{Q}_{1}\times_{\mathcal{B}}\mathcal{ Q}_{2})\) whose restriction to \(\ker(\varpi_{1*})\otimes\ker(\varpi_{2*})\) is non-degenerate. Gauging the string sigma-model for the doubled torus bundle \(M\) then relates the sigma-models for the quotient spaces \(\mathcal{Q}_{1}\) and \(\mathcal{Q}_{2}\) via reduction along the projections \(\varpi_{1}\) and \(\varpi_{2}\), respectively [24]. 
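As a minimal illustration of this condition (a standard example from the literature cited above, with normalisations and the names \(\pi_{i}\), \(\Theta_{i}\), \(F_{i}\) chosen here purely for exposition), consider circle bundles \(\pi_{i}\colon\mathcal{Q}_{i}\to\mathcal{B}\) over a surface \(\mathcal{B}\), with connection one-forms \(\Theta_{i}\) and curvatures \(\mathrm{d}\Theta_{i}=\pi_{i}^{*}F_{i}\). Taking the invariant fluxes \[\underline{H}_{1}=\Theta_{1}\wedge\pi_{1}^{*}F_{2}\qquad\text{and}\qquad\underline{H}_{2}=\Theta_{2}\wedge\pi_{2}^{*}F_{1}\,\] a direct computation on the correspondence space (suppressing pullbacks by \(\varpi_{1}\) and \(\varpi_{2}\)) gives \[\mathrm{d}(\Theta_{2}\wedge\Theta_{1})=F_{2}\wedge\Theta_{1}-\Theta_{2}\wedge F_{1}=\varpi_{1}^{*}\,\underline{H}_{1}-\varpi_{2}^{*}\,\underline{H}_{2}\,\] so \(B=\Theta_{2}\wedge\Theta_{1}\) satisfies the T-duality condition, and \(B(\partial_{\theta_{2}},\partial_{\theta_{1}})=1\) on the fibre directions exhibits the required non-degeneracy on \(\ker(\varpi_{1*})\otimes\ker(\varpi_{2*})\). For the Hopf fibration \(\mathcal{Q}_{1}=S^{3}\to S^{2}\) with \(\underline{H}_{1}=0\) (so \(F_{2}=0\)), this recovers the familiar statement that \(S^{3}\) with trivial flux is T-dual to the trivial bundle \(\mathcal{Q}_{2}=S^{2}\times S^{1}\) carrying one unit of \(H\)-flux.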
In this case, since the fibres are compact, the Fourier-Mukai transform is well-defined and given by \[\varrho(\alpha)\coloneqq\int_{\mathsf{T}^{k}}\,\mathrm{e}^{\,B}\,\wedge\, \varpi_{1}^{*}\,\alpha\] for any \(\alpha\in\Omega^{\bullet}_{\mathsf{T}^{k}}(\mathcal{Q}_{1})\), where the fibrewise integration is the pushforward of forms by the projection \(\varpi_{2}:M\to\mathcal{Q}_{2}\). It is shown in [17] that \(\varrho\) defines a degree-shifting isomorphism between the twisted differential complexes \(\big{(}\Omega^{\bullet}_{\mathsf{T}^{k}}(\mathcal{Q}_{1}),\mathrm{d} \underline{H}_{1}\big{)}\) and \(\big{(}\Omega^{\bullet}_{\mathsf{T}^{k}}(\mathcal{Q}_{2}),\mathrm{d} \underline{H}_{2}\big{)}\), where \(\mathrm{d}\underline{H}_{i}:=\mathrm{d}+\underline{H}_{i}\wedge\,\). It describes the transformation of Ramond-Ramond fields in type II string theory under T-duality. This becomes an isomorphism of irreducible Clifford modules by choosing an isomorphism [8] \[\mathscr{R}\colon\mathsf{\Gamma}_{\mathsf{T}^{k}}(T\mathcal{Q}_{1}\oplus T^{ *}\mathcal{Q}_{1})\longrightarrow\mathsf{\Gamma}_{\mathsf{T}^{k}}(T\mathcal{Q }_{2}\oplus T^{*}\mathcal{Q}_{2})\] of \(C^{\infty}(\mathcal{B})\)-modules such that \[\varrho(e\cdot\alpha)=\mathscr{R}(e)\cdot\varrho(\alpha)\,\] for any \(e\in\mathsf{\Gamma}_{\mathsf{T}^{k}}(T\mathcal{Q}_{1}\oplus T^{*}\mathcal{Q}_ {1})\) and \(\alpha\in\Omega^{\bullet}_{\mathsf{T}^{k}}(\mathcal{Q}_{1})\), where \(e\cdot\alpha\) is the natural representation on \(\Omega^{\bullet}(\mathcal{Q}_{1})\) of the Clifford algebra induced by the pairing \(\langle\,\cdot\,,\,\cdot\,\rangle\) on the fibres of \(T\mathcal{Q}_{1}\oplus T^{*}\mathcal{Q}_{1}\), and similarly on \(\Omega^{\bullet}(\mathcal{Q}_{2})\). Thus T-duality, as an isomorphism of \(\mathsf{T}^{k}\)-invariant sections of the Courant algebroids \(\mathbb{T}\mathcal{Q}_{1}\) and \(\mathbb{T}\mathcal{Q}_{2}\), is given by the unique choice of \(\mathscr{R}\) in terms of horizontal lifts of vector fields on \(\mathcal{Q}_{1}\) determined by the non-degeneracy of the two-form \(B\). Our general framework for T-duality, seen as a relation between Courant algebroids, will encapsulate this construction, but it reproduces the isomorphism \(\mathscr{R}\) for any doubled fibration \(M\) without any conditions on the topology of the fibres. Thus while the above construction is restricted to the possibility of integrating along the fibres, in this paper we propose an alternative construction without introducing the Fourier-Mukai transform \(\varrho\), based on the reduction of Courant algebroids over foliated manifolds. Our approach is thus powerful enough to cover T-dualities based on correspondence spaces which involve non-compact fibres, such as those which arise in Poisson-Lie T-duality [25, 26]. In our framework, the metric and Kalb-Ramond field of a string background, given by specifying a generalised metric on \(\mathcal{Q}_{1}\), play a crucial role in determining a natural definition of T-duality, providing the necessary restrictions for the definition of \(\mathscr{R}\) in a unique way. Remarkably, the crucial non-degeneracy condition for \(B\) arises naturally in our construction. This extends the work of [8], where only isotropic reduction of a Courant algebroid by a product Lie group action is considered. At the same time it preserves all of its features, such as the description of the Buscher rules for T-dual backgrounds in terms of the module isomorphism \(\mathscr{R}\) as a generalised isometry. 
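For orientation, we record a coordinate sketch of the classical Buscher rules which this module isomorphism globalises; this is standard background material, written here for a single T-duality circle (\(k=1\)) in adapted coordinates \((x^{i},\theta)\), and the sign conventions may differ from those fixed later in the paper. Writing \(g\) and \(B\) for the metric and Kalb-Ramond field on \(\mathcal{Q}_{1}\), the T-dual background \((\widetilde{g},\widetilde{B})\) on \(\mathcal{Q}_{2}\) is given by \[\widetilde{g}_{\theta\theta}=\frac{1}{g_{\theta\theta}}\,\qquad\widetilde{g}_{\theta i}=\frac{B_{\theta i}}{g_{\theta\theta}}\,\qquad\widetilde{B}_{\theta i}=\frac{g_{\theta i}}{g_{\theta\theta}}\,\] together with \[\widetilde{g}_{ij}=g_{ij}-\frac{g_{\theta i}\,g_{\theta j}-B_{\theta i}\,B_{\theta j}}{g_{\theta\theta}}\qquad\text{and}\qquad\widetilde{B}_{ij}=B_{ij}-\frac{g_{\theta i}\,B_{\theta j}-B_{\theta i}\,g_{\theta j}}{g_{\theta\theta}}\.\] Remark 6.15 below explains how global versions of these rules emerge from the generalised isometry determined by \(\mathscr{R}\).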
### Generalised T-duality and Para-Hermitian Geometry The correspondence space picture of T-duality was extended to doubled twisted tori in [27, 28], which further double the base \(\mathcal{B}\) and provide instances of almost para-Hermitian manifolds. This incorporates a geometric description of a space together with all of its T-duals, including those that may be 'non-geometric'. This perspective was extended by [29, 30] to give a geometric notion of generalised T-duality transformations of arbitrary foliated almost para-Hermitian manifolds, endowed with generalised para-Hermitian metrics, as diffeomorphisms which preserve the split-signature metric \(\eta\) and naturally incorporate non-isometric T-duality as well as non-abelian T-duality. The string sigma-model for the doubled twisted torus considered in [31, 32], and its generalisation to the Born sigma-model for almost para-Hermitian manifolds in [30], allow for a Lie algebroid gauging along the leaves of a foliation. This relates T-dual sigma-models via reductions to the respective leaf spaces, though the explicit nature of the relation between T-dual backgrounds has so far not been identified. In this paper we will make this notion of generalised T-duality more precise by viewing it in terms of our Courant algebroid relations, which provide a 'map-like' device preserving all Courant algebroid structures, and in turn the dynamics of the corresponding sigma-models. ### Summary of Results and Outline The outline of this paper is as follows: Sections 2-4 give the necessary background material for our main results which are presented in Section 5, with each section being relatively self-contained, while Section 6 considers some concrete applications and examples. **Section 2** begins with a review of Courant algebroids. After discussing some important properties of Courant algebroids and their isomorphisms, in Section 2.3 we introduce the first of the three major building blocks required for Section 5, that of Courant algebroid reduction, following closely the work of Bursztyn-Cavalcanti-Gualtieri [15], and in particular of Zambon [33]. The reduction begins with an exact Courant algebroid \(E\) over a fibred manifold, whose fibres are defined by an involutive isotropic subbundle \(K\subset E\), and produces an exact Courant algebroid on the leaf space (here assumed to be smooth). We highlight the relevance of the _basic sections_ of \(K^{\perp}\) (Definition 2.24), i.e. sections of \(K^{\perp}\) that admit a form of \(K\)-invariance, since it suffices that there are enough basic sections to make the reduction possible [33]. Equivalently, it suffices that there exists an adapted splitting (Definition 2.36), and we lean heavily on such objects throughout Section 5. We further explore their crucial role in this construction in order to prove some properties of reducible exact Courant algebroids. **Section 3** provides an introduction to the second, and perhaps most important, building block of Section 5, that of Courant algebroid relations. Courant algebroid relations generalise Courant algebroid isomorphisms to cases where there is no map between the base manifolds. T-duality gives a duality between sigma-models over manifolds which are not necessarily diffeomorphic. This makes Courant algebroid relations a powerful framework for the formalisation of T-duality. Section 3.1 repeats arguments of [13, Section 3], defining Courant algebroid relations, as well as how and when they can be composed. 
Section 3.2 then reformulates the reduction of Section 2.3 by an isotropic bundle \(K\) as a Courant algebroid relation, denoted \(Q(K)\), following [13, Section 4.2] closely. **Section 4** presents the final building block by introducing the notion of (transverse) generalised isometries for Courant algebroid relations. Section 4.1 reviews generalised metrics while Section 4.2 reviews generalised isometries, following [13, Section 5]. We also show that the relation \(Q(K)\) defined by isotropic reduction of an exact Courant algebroid cannot be a generalised isometry, as similarly discussed by Vysoky in [13] for the case of reduction of \(\mathsf{G}\)-equivariant Courant algebroids with an isotropic action of a Lie group \(\mathsf{G}\). In order to make this relation a generalised isometry, Vysoky relinquishes the assumption that the action is isotropic and shows that the reduction relation can be a generalised isometry only if \(K\cap K^{\perp}=\{\,0\,\}\,.\) In this paper we choose another approach and try to find a structure such that the relation may be seen as a different type of isometry whilst preserving the isotropicity of \(K.\) We defer the exploration of non-isotropic foliated reduction to future work. Thus in Section 4.3 we extend isometries of generalised metrics to isometries of transverse generalised metrics. Transverse generalised metrics were first introduced by Severa-Strobl in [34] as degenerate generalised metrics which admit an invariance condition with respect to an involutive isotropic subbundle \(K\) of an exact Courant algebroid. The Courant algebroid relation \(Q(K)\) discussed in Section 3.2 is our main example of what we call a transverse generalised isometry, as seen in **Theorem 4.30**.: If the reduced exact Courant algebroid admits a generalised metric, then there exists a unique transverse generalised metric on the original exact Courant algebroid, invariant with respect to the subbundle inducing the fibration for the reduction, which makes the reduction relation \(Q(K)\) a transverse generalised isometry. The converse is also true. We then discuss the composition of transverse generalised isometries in Section 4.4; this is a technical undertaking, and most of the technicalities are not necessary for our purposes, so this discussion is kept brief with some details delegated to Appendix A. **Section 5** develops in detail the main idea of this paper: rephrasing T-duality as a Courant algebroid relation. Such a Courant algebroid relation \(R(\Phi)\) fits into a commutative diagram together with \(Q(K_{1})\), \(Q(K_{2})\) and \(\Phi\), where \(Q(K_{i})\) is the Courant algebroid relation characterising the reduction of \(E_{i}\) over \(M_{i}\) by \(K_{i}\) to \(\underline{E}_{i}\) over \(\mathcal{Q}_{i}\), as in Section 3.2, and \(\Phi\) is a Courant algebroid isomorphism over \(\varphi\in\mathsf{Diff}(M_{1},M_{2})\). Section 5.1 deals with the construction of the Courant algebroid relation \(R(\Phi)\), thought of as the reduction of the isomorphism \(\Phi\), and gives a first notion of T-duality: we say that \(\underline{E}_{1}\) and \(\underline{E}_{2}\) are _T-duality related_ (Definition 5.3). This may also be stated as **Theorem 5.8**.: \(\underline{E}_{1}\) and \(\underline{E}_{2}\) are T-duality related if and only if \(\Phi^{-1}(K_{2})\cap K_{1}\) has constant rank. We also see that the T-duality relation \(R(\Phi)\) is maximally isotropic, i.e. it is a Dirac structure. This definition alone provides a notion of T-duality only for the topological (Wess-Zumino) term of sigma-models, disregarding the dynamics, i.e. 
the Polyakov functional (1.1). Section 5.1 can be seen as a prelude to Section 5.2, passing from purely topological data to include dynamical data, by introducing geometric structure in the form of generalised metrics into the picture. In this vein, we give our second and principal definition of T-duality: \(\underline{E}_{1}\) and \(\underline{E}_{2}\) are _geometrically T-dual_ when the Courant algebroid relation \(R(\Phi)\) is additionally a generalised isometry (Definition 5.17). The remainder of Section 5.2 is devoted to stating an equivalence condition akin to Theorem 5.8 in the geometric case. We introduce the notion of T-duality directions (Definition 5.18), and discuss some necessary invariance-like conditions for generalised metrics and adapted splittings in the T-duality directions. Our main result is then **Theorem 5.27**.: Starting with a generalised metric on \(\underline{E}_{1}\), invariant in the T-duality directions, the following are equivalent: 1. \(K_{2}^{\perp}\cap\Phi(K_{1})\subseteq K_{2}\). 2. \(\Phi^{-1}(K_{2})\cap K_{1}^{\perp}\subseteq K_{1}\). 3. \(\underline{E}_{1}\) and \(\underline{E}_{2}\) are geometrically T-dual. Indeed, starting with a generalised metric on \(\underline{E}_{1}\), we are able to construct a generalised metric on \(\underline{E}_{2}\), making \(R(\Phi)\) into a generalised isometry. The generalised metric on \(\underline{E}_{2}\) is the correct one: for instance, Remark 6.15 shows that it satisfies generalised Buscher rules. In both Sections 5.1 and 5.2, we also describe the relation \(R(\Phi)\) in the special case of split exact Courant algebroids, see Proposition 5.15 and Theorem 5.44. **Section 6** puts the Courant algebroid relation \(R(\Phi)\) to the test. Section 6.1 is concerned with T-duality in the correspondence space picture for principal \(k\)-torus bundles, as described by Cavalcanti-Gualtieri [8], and we show that this is a subclass of our definition of T-duality through **Proposition 6.6**.: For two principal \(\mathsf{T}^{k}\)-bundles over the same base manifold, Definition 5.17 and the definition of T-duality in [8] are equivalent. Furthermore, we show that, without using the Fourier-Mukai transform, we can reproduce the isomorphism between \(\mathsf{T}^{k}\)-invariant sections of the T-dual Courant algebroids as arising from the generalised isometry obtained from our reduction, see Proposition 6.13. Lastly, we show how global Buscher rules naturally arise from our definition of geometric T-duality, see Remark 6.15, and illustrate our approach on the explicit example of T-dualities between three-dimensional lens spaces in Example 6.17. Another class of examples falling under our definition is presented in Section 6.2: the case of para-Hermitian manifolds and generalised T-duality as described in [30]. Here almost para-Hermitian structures are taken on a pair of manifolds related by an \(\eta\)-isometry \(\varphi:M_{1}\to M_{2}\), and where they differ (after pulling back by \(\varphi\)) tells us in which direction T-duality should be taken. We give conditions on the Lie algebra of a local diagonalising frame for the almost para-complex structure under which, firstly, the T-duality relation exists (Proposition 6.32) and, secondly, it is a generalised isometry (Lemma 6.40). 
We also give an alternative description of the T-dual backgrounds in terms of generalised para-Hermitian metrics \(\mathcal{H}_{1}\) and \(\mathcal{H}_{2}\), respectively, resulting in an extension of the Buscher rules given in concise form by **Proposition 6.43**.: The T-dual backgrounds give rise to generalised para-Hermitian metrics satisfying \(\varphi^{*}\mathcal{H}_{2}=\mathcal{H}_{1}\). An explicit illustration of generalised T-duality is presented in Section 6.3 wherein we consider the doubled Heisenberg nilmanifold in six dimensions, and demonstrate the well-known T-duality between the three-dimensional Heisenberg nilmanifold and the three-torus with \(H\)-flux in our approach. ### Acknowledgements We thank Alex Arvanitakis and Daniel Thompson for helpful discussions, and Pavol Severa for insightful comments on our manuscript. This article is based upon work from COST Action CaLISTA CA21109 supported by COST (European Cooperation in Science and Technology). The work of T.C.D. was supported by an EPSRC Doctoral Training Partnership Award. The work of V.E.M. was supported in part by a Maxwell Institute Research Fellowship and by the GACR Grant EXPRO 19-28268X. The work of R.J.S. was supported in part by the STFC Consolidated Grant ST/P000363/1. ## 2. Exact Courant Algebroids: Isomorphism and Reduction We briefly recall the main notions and results concerning Courant algebroids. We refer to [6, 14, 35, 36] and references therein for more extensive treatments. ### Courant Algebroids The structure of the vector bundle \(\mathbb{T}M=TM\oplus T^{*}M\) endowed with the Dorfman bracket \(\llbracket\,\cdot\,,\,\cdot\,\rrbracket_{H}\) introduced in Section 1.1 suggests that the fundamental notion to introduce is that of a Courant algebroid. In the following we provide a brief account of Courant algebroids. **Definition 2.1**.: A _Courant algebroid_ is a quadruple \((E,\llbracket\,\cdot\,,\,\cdot\,\rrbracket,\langle\,\cdot\,,\,\cdot\,\rangle,\rho)\), where \(E\) is a vector bundle over a manifold \(M\) with a fibrewise non-degenerate pairing \(\langle\,\cdot\,,\,\cdot\,\rangle\in\mathsf{\Gamma}(\bigodot^{2}E^{*})\), a vector bundle morphism \(\rho\colon E\to TM\) called the _anchor_, and a bracket operation \[\llbracket\,\cdot\,,\,\cdot\,\rrbracket\colon\mathsf{\Gamma}(E)\times\mathsf{ \Gamma}(E)\longrightarrow\mathsf{\Gamma}(E)\] called the _Dorfman bracket_, which together satisfy 1. \(\rho(e)\cdot\langle e_{1},e_{2}\rangle=\langle\llbracket e,e_{1}\rrbracket,e_ {2}\rangle+\langle e_{1},\llbracket e,e_{2}\rrbracket\rangle\), 2. \(\langle\llbracket e,e\rrbracket,e_{1}\rangle=\frac{1}{2}\,\rho(e_{1})\cdot \langle e,e\rangle\), 3. \(\llbracket e,\llbracket e_{1},e_{2}\rrbracket\rrbracket=\llbracket\llbracket e,e_{1}\rrbracket,e_{2}\rrbracket+\llbracket e_{1},\llbracket e,e_{2} \rrbracket\rrbracket\), for all \(e,e_{1},e_{2}\in\mathsf{\Gamma}(E)\). **Remark 2.2**.: The anchored Leibniz rule for the Dorfman bracket \[\llbracket e_{1},f\,e_{2}\rrbracket=f\,\llbracket e_{1},e_{2}\rrbracket+ \left(\rho(e_{1})\cdot f\right)e_{2}\, \tag{2.3}\] for all \(e_{1},e_{2}\in\mathsf{\Gamma}(E)\) and \(f\in C^{\infty}(M),\) follows from item (i). The Jacobi identity (iii) and the anchored Leibniz rule (2.3) imply that the anchor \(\rho\) is a bracket homomorphism: \[\rho(\llbracket e_{1},e_{2}\rrbracket)=[\rho(e_{1}),\rho(e_{2})]\,\] for all \(e_{1},e_{2}\in\mathsf{\Gamma}(E)\). We refer to [36, Section 2] for a complete account of all the main properties of Courant algebroids. 
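To make Definition 2.1 concrete, here is a quick check of axiom (ii) for the \(H\)-twisted Dorfman bracket (1.7) on \(\mathbb{T}M\), under the convention \(\langle X+\alpha,Y+\beta\rangle=\alpha(Y)+\beta(X)\) for the duality pairing (chosen so as to match Equation (2.5) below; other references rescale this by \(\frac{1}{2}\)). For \(e=X+\alpha\), the Cartan formula \(\pounds_{X}=\mathrm{d}\,\iota_{X}+\iota_{X}\,\mathrm{d}\) gives \[\llbracket X+\alpha,X+\alpha\rrbracket_{H}=[X,X]+\pounds_{X}\alpha-\iota_{X}\,\mathrm{d}\alpha+\iota_{X}\,\iota_{X}H=\mathrm{d}\,\iota_{X}\alpha\,\] and hence, for any \(e_{1}=Y+\beta\), \[\langle\llbracket e,e\rrbracket_{H},e_{1}\rangle=\iota_{Y}\,\mathrm{d}\,\iota_{X}\alpha=\tfrac{1}{2}\,\rho(e_{1})\cdot\langle e,e\rangle\,\] since \(\rho=\mathrm{pr}_{1}\) and \(\langle e,e\rangle=2\,\iota_{X}\alpha\). Axioms (i) and (iii) follow from similarly routine computations with the Cartan calculus.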
Throughout this paper we will denote the Courant algebroid \((E,\llbracket\,\cdot\,,\,\cdot\rrbracket,\langle\,\cdot\,,\,\cdot\,\rangle,\rho)\) simply by \(E\) when there is no ambiguity in the structure maps. If there is more than one Courant algebroid \(E\) involved in the discussion, we label its operations with a subscript \({}_{E}\,\). A Courant algebroid \(E\) is called _regular_ if its anchor map \(\rho\colon E\to TM\) has constant rank, and _transitive_ if \(\rho\) is surjective. A _split Courant algebroid_ is a Courant algebroid whose underlying vector bundle \(E\to M\) is the Whitney sum \(E=A\oplus A^{*}\) of a vector bundle \(A\to M\) and its dual \(A^{*}\to M\). **Remark 2.4**.: For any Courant algebroid \(E\) over \(M\) there is a map \(\rho^{*}\colon T^{*}M\to E\) given by \[\langle\rho^{*}(\alpha),e\rangle\coloneqq\iota_{e}\,\rho^{\mathrm{t}}(\alpha )\, \tag{2.5}\] for all \(\alpha\in\Omega^{1}(M)\) and \(e\in\mathsf{\Gamma}(E)\), where \(\rho^{\mathrm{t}}\colon T^{*}M\to E^{*}\) is the transpose of \(\rho.\) The map \(\rho^{*}\) induces a map \(\mathcal{D}\colon C^{\infty}(M)\to\mathsf{\Gamma}(E)\) defined by \[\mathcal{D}f=\rho^{*}\,\mathrm{d}f\,\] for all \(f\in C^{\infty}(M)\), which obeys a derivation-like rule and is the natural generalisation of the exterior derivative in the algebroid \(E\). Then item (ii) of Definition 2.1 is equivalent to \[\llbracket e,e\rrbracket=\tfrac{1}{2}\,\mathcal{D}\langle e,e\rangle\,\] which together with Equation (2.3) imply the additional Leibniz rule \[\llbracket f\,e_{1},e_{2}\rrbracket=f\,\llbracket e_{1},e_{2} \rrbracket-\left(\rho(e_{2})\cdot f\right)e_{1}+\langle e_{1},e_{2}\rangle\, \mathcal{D}f. \tag{2.6}\] Since \[\rho\circ\rho^{*}=0\,\] see e.g. [36] for a short proof, this motivates **Definition 2.7**.: A Courant algebroid \(E\) over \(M\) is _exact_ if the chain complex \[0\longrightarrow T^{*}M\xrightarrow{\rho^{*}}E\xrightarrow{ \rho}TM\longrightarrow 0 \tag{2.8}\] is a short exact sequence of vector bundles. ### Isomorphisms of Courant Algebroids Throughout this paper we will make extensive use of isomorphisms of exact Courant algebroids. Here we recall their main properties, together with some of the standard examples, closely following [35, 14]. **Definition 2.9**.: Let \(E_{1}\) and \(E_{2}\) be Courant algebroids over manifolds \(M_{1}\) and \(M_{2}\), respectively. A vector bundle morphism \(\Phi\colon E_{1}\to E_{2}\) covering a diffeomorphism \(\varphi\colon M_{1}\to M_{2}\) is a _Courant algebroid morphism_ if it satisfies 1. \(\langle\Phi(\,\cdot\,)\,,\,\Phi(\,\cdot\,)\rangle_{E_{2}}\circ\varphi= \langle\,\cdot\,,\,\cdot\,\rangle_{E_{1}}\,\ \text{(isometry)}\) 2. \(\llbracket\Phi(\,\cdot\,)\,,\,\Phi(\,\cdot\,)\rrbracket_{E_{2}}=\Phi(\, \llbracket\,\cdot\,,\,\cdot\,\rrbracket_{E_{1}}\,)\.\ \text{(bracket homomorphism)}\) Here and in the following the word _isomorphism_ in the context of Courant algebroids refers to vector bundle isomorphisms covering diffeomorphisms between different Courant algebroid structures, even if defined on the same vector bundle. **Remark 2.10**.: Definition 2.9 together with the anchored Leibniz rule (2.3) implies a compatibility condition with the anchor maps: \[\rho_{E_{2}}\circ\Phi=\varphi_{*}\circ\rho_{E_{1}}. 
\tag{2.11}\] **Example 2.12**.: Let \((\mathbb{T}M_{1},[\![\,\cdot\,,\,\cdot\,]\!]_{H_{1}},\langle\,\cdot\,,\,\cdot\, \rangle_{\mathbb{T}M_{1}},\mathrm{pr}_{1})\) be the \((H_{1}\)_-twisted_) _standard Courant algebroid_ over \(M_{1}\) with \(H_{1}\in\Omega^{3}_{\mathrm{cl}}(M_{1}),\) the Dorfman bracket \([\![\,\cdot\,,\,\cdot\,]\!]_{H_{1}}\) given by Equation (1.7), the pairing \(\langle\,\cdot\,,\,\cdot\,\rangle\) induced by the usual duality pairing between \(TM_{1}\) and \(T^{*}M_{1},\) and anchor \(\mathrm{pr}_{1}\) the projection to the first summand of \(\mathbb{T}M_{1}=TM_{1}\oplus T^{*}M_{1}\). From now on, we will denote this split exact Courant algebroid by \((\mathbb{T}M_{1},H_{1}).\) For any \(\varphi\in\mathsf{Diff}(M_{1},M_{2}),\) consider the induced vector bundle isomorphism \[\overline{\varphi}:=\varphi_{*}+(\varphi^{-1})^{*}:\mathbb{T}M_{1}\longrightarrow \mathbb{T}M_{2}\] which covers \(\varphi\). It is straightforward to see that the Dorfman bracket on \(\mathbb{T}M_{1}\) transforms to \[\overline{\varphi}\big{(}[\![\overline{\varphi}^{-1}(X+\alpha),\overline{ \varphi}^{-1}(Y+\beta)]\!]_{H_{1}}\big{)}=[\![X+\alpha,Y+\beta]\!]_{(\varphi^{ -1})^{*}H_{1}}\,\] for all \(X+\alpha,Y+\beta\in\mathsf{\Gamma}(\mathbb{T}M_{2}).\) Thus \(\overline{\varphi}\) is an isomorphism between the \(H_{1}\)-twisted standard Courant algebroid \(\mathbb{T}M_{1}\) and the \((\varphi^{-1})^{*}H_{1}\)-twisted standard Courant algebroid \(\mathbb{T}M_{2}.\) More generally, given an \(H_{2}\)-twisted standard Courant algebroid over \(M_{2}\) and a diffeomorphism \(\varphi\in\mathsf{Diff}(M_{1},M_{2})\) such that \(\varphi^{*}H_{2}=H_{1},\) then \(\overline{\varphi}\) preserves the Dorfman brackets, and hence \(\overline{\varphi}\) is an isomorphism of the twisted standard Courant algebroids \((\mathbb{T}M_{1},\,H_{1})\) and \((\mathbb{T}M_{2},\,H_{2}).\) Among the main examples of isomorphisms of exact Courant algebroids are the \(B\)-field transformations for a split exact Courant algebroid. **Definition 2.13**.: Let \((\mathbb{T}M,H)\) be the \(H\)-twisted standard Courant algebroid over \(M,\) where \(H\) is the three-form on \(M\) characterising its Dorfman bracket. Let \(B\in\Omega^{2}(M)\) be any two-form on \(M\); it induces the vector bundle morphism \(B^{\flat}:TM\to T^{*}M\) given by \(B^{\flat}(X)=\iota_{X}B,\) for all \(X\in\mathsf{\Gamma}(TM).\) The vector bundle isomorphism \(\,\mathrm{e}\,^{B}:\mathbb{T}M\to\mathbb{T}M\) given by \[\mathrm{e}\,^{B}\left(X+\alpha\right)=X+B^{\flat}(X)+\alpha\,\] for all \(X\in\mathsf{\Gamma}(TM)\) and \(\alpha\in\mathsf{\Gamma}(T^{*}M),\) is a \(B\)_-field transformation_. The graph \(\mathrm{gr}(B)=\,\mathrm{e}\,^{B}(TM)\subset\mathbb{T}M\) is a maximally isotropic subbundle, since \[\langle\,\mathrm{e}\,^{B}\,\left(X\right),\,\mathrm{e}\,^{B}\,\left(Y\right) \rangle=\iota_{X}B^{\flat}(Y)+\iota_{Y}B^{\flat}(X)=0\,\] for all \(X,Y\in\mathsf{\Gamma}(TM)\). Any \(B\)-field transformation is compatible with the anchor \(\mathrm{pr}_{1},\) since \[\mathrm{pr}_{1}\big{(}\,\mathrm{e}\,^{B}\left(X+\alpha\right)\big{)}=X\,\] for all \(X\in\mathsf{\Gamma}(TM)\) and \(\alpha\in\mathsf{\Gamma}(T^{*}M).\) It further transforms the Dorfman bracket to \[[\![\,\mathrm{e}\,^{B}\left(X+\alpha\right),\,\mathrm{e}\,^{B}\left(Y+\beta \right)]\!]_{H}=\,\mathrm{e}\,^{B}\left([\![X+\alpha,Y+\beta]\!]_{H}\right)+ \iota_{Y}\,\iota_{X}\,\mathrm{d}B\,\] for all \(X+\alpha,Y+\beta\in\mathsf{\Gamma}(\mathbb{T}M),\) see e.g. [35]. 
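Two immediate consequences, recorded here as a short consistency check, are worth noting. First, \(\mathrm{e}\,^{B}\circ\,\mathrm{e}\,^{B^{\prime}}=\,\mathrm{e}\,^{B+B^{\prime}}\) and \((\,\mathrm{e}\,^{B}\,)^{-1}=\,\mathrm{e}\,^{-B}\), so \(B\)-field transformations form an abelian group. Second, since \(\mathrm{e}\,^{B}\) acts as the identity on one-forms, the transformation of the Dorfman bracket above can be rewritten as \[[\![\,\mathrm{e}\,^{B}\,(e_{1}),\,\mathrm{e}\,^{B}\,(e_{2})]\!]_{H}=\,\mathrm{e}\,^{B}\,\big{(}[\![e_{1},e_{2}]\!]_{H+\mathrm{d}B}\big{)}\,\] for all \(e_{1},e_{2}\in\mathsf{\Gamma}(\mathbb{T}M)\), exhibiting \(\mathrm{e}\,^{B}\) as an isomorphism of twisted standard Courant algebroids \((\mathbb{T}M,H+\mathrm{d}B)\to(\mathbb{T}M,H)\). In particular, \(\mathrm{e}\,^{B}\) is an automorphism of \((\mathbb{T}M,H)\) precisely when \(B\) is closed, in agreement with the action (1.5) and anticipating Corollary 2.16.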
We adapt the arguments used in [35, Section 3.2] to characterise isomorphisms of split exact Courant algebroids as **Proposition 2.14**.: Let \(\Phi\colon\mathbb{T}M_{1}\to\mathbb{T}M_{2}\) be an isomorphism of the twisted standard Courant algebroids \(\mathbb{T}M_{1}\) and \(\mathbb{T}M_{2}\), covering \(\varphi\in\operatorname{\mathsf{Diff}}(M_{1},M_{2})\), with twisting three-forms \(H_{1}\in\Omega^{3}_{\operatorname{cl}}(M_{1})\) and \(H_{2}\in\Omega^{3}_{\operatorname{cl}}(M_{2})\). Then \(\varphi^{*}H_{2}=H_{1}\) and \(\Phi\) can be expressed as the composition \(\Phi=\overline{\varphi}\,\circ\,\mathrm{e}\,^{B}\,\) of a \(B\)-field transformation with closed two-form \(B\in\Omega^{2}_{\operatorname{cl}}(M_{1})\) and the isomorphism \(\overline{\varphi}\) induced by the diffeomorphism \(\varphi\). Proof.: Set \(F\coloneqq\overline{\varphi}^{-1}\circ\Phi\), which covers the identity \(\mathds{1}_{M_{1}}.\) By Remark 2.10 it follows that \[\rho_{E_{1}}\circ F=\rho_{E_{1}}. \tag{2.15}\] Equation (2.15) gives \[\langle e,F^{-1}(\mathrm{d}f)\rangle_{E_{1}}=\langle F(e),\mathrm{d}f\rangle _{E_{1}}=\iota_{\rho_{E_{1}}(F(e))}\,\mathrm{d}f=\iota_{\rho_{E_{1}}(e)}\, \mathrm{d}f=\langle e,\mathrm{d}f\rangle_{E_{1}}\] for all \(e\in\mathsf{\Gamma}(\mathbb{T}M_{1})\) and \(f\in C^{\infty}(M_{1}).\) This shows that \(F^{-1}\) acts as the identity on \(T^{*}M_{1}.\) Therefore \(F=\,\mathrm{e}\,^{B}\,\) for some \(B\in\Omega^{2}(M_{1})\), because \(F\) is an automorphism of \(\mathbb{T}M_{1}.\) Since \(F\) preserves the Dorfman bracket, \(B\) must be a closed two-form and \(\varphi\in\operatorname{\mathsf{Diff}}(M_{1},M_{2})\) must preserve the twisting three-forms, i.e. \(\varphi^{*}H_{2}=H_{1}.\) When \(M_{1}=M_{2}=M\), the group action defined by Proposition 2.14 is precisely the group action (1.5) transforming the Wess-Zumino functional \(S_{H}\) of the sigma-model discussed in Section 1.1. **Corollary 2.16**.: The automorphism group of the \(H\)-twisted standard Courant algebroid \((\mathbb{T}M,H)\) consists of those \(\Phi=\overline{\varphi}\circ\,\mathrm{e}\,^{B}\,\) such that \(\varphi^{*}H=H\) and \(B\) is closed. **Remark 2.17** (**Severa's Classification**).: The classification of exact Courant algebroids \(E\), due to Severa [3], can be given when there exists an exact Courant algebroid isomorphism \(\Phi\colon E\to\mathbb{T}M\) covering a diffeomorphism \(\varphi\) which is in the identity component of \(\operatorname{\mathsf{Diff}}(M)\). Such isomorphisms always exist, since any choice of isotropic splitting \(\sigma\) of the short exact sequence (2.8) yields an exact Courant algebroid isomorphism covering the identity, by using \(\sigma+\rho^{*}\) to identify \(E\) with \(\mathbb{T}M\). If \(M\) is connected, the equivalence classes of such isomorphisms are in one-to-one correspondence with cohomology classes in \(\mathsf{H}^{3}(M,\mathbb{R})\): an isotropic splitting \(\sigma\colon TM\to E\) of the sequence (2.8) induces a closed three-form \(H_{\sigma}\in\Omega^{3}_{\operatorname{cl}}(M)\) given by \[H_{\sigma}(X,Y,Z)=\langle[\![\sigma(X),\sigma(Y)]\!],\sigma(Z)\rangle \tag{2.18}\] for \(X,Y,Z\in\mathsf{\Gamma}(TM)\). Since the difference between two splittings \(\sigma-\sigma^{\prime}\) defines a two-form \(B\in\Omega^{2}(M)\) via \((\sigma-\sigma^{\prime})(X)=\iota_{X}B\), \(H_{\sigma}\) is shifted by \(\mathrm{d}B\) under a change of splitting. 
Hence there is a well-defined cohomology class \([H_{\sigma}]\in\mathsf{H}^{3}(M,\mathbb{R})\) associated to the exact sequence (2.8) which completely determines the Courant algebroid structure. This is called the Severa class of the exact Courant algebroid. A general discussion can be found in [14, Section 2.2]. To aid in understanding how the transformation of standard Courant algebroids is related to the isomorphism of exact Courant algebroids, we establish **Proposition 2.19**.: Let \(E_{1}\) and \(E_{2}\) be exact Courant algebroids over \(M_{1}\) and \(M_{2}\), respectively, and \(\Phi\colon E_{1}\to E_{2}\) an isomorphism of Courant algebroids covering \(\varphi\in\operatorname{\mathsf{Diff}}(M_{1},M_{2})\). Let \(H_{1}\in\Omega^{3}_{\operatorname{cl}}(M_{1})\) and \(H_{2}\in\Omega^{3}_{\operatorname{cl}}(M_{2})\) be closed three-forms defining the Dorfman brackets of the twisted standard Courant algebroids \(\mathbb{T}M_{1}\) and \(\mathbb{T}M_{2}\) isomorphic to \(E_{1}\) and \(E_{2}\), respectively, and denote again by \(\Phi\) the map between the standard Courant algebroids induced by \(\Phi\colon E_{1}\to E_{2}\). Then \(\Phi\) maps the Dorfman bracket on \(\mathbb{T}M_{1}\) twisted by \(H_{1}\) to the Dorfman bracket on \(\mathbb{T}M_{2}\) twisted by \(H_{2}\) with \[\varphi^{*}H_{2}=H_{1}-\mathrm{d}B\, \tag{2.20}\] for some \(B\in\Omega^{2}(M_{1})\). Proof.: By choosing splittings for \(E_{1}\) and \(E_{2}\), it follows that they are isomorphic to \(\mathbb{T}M_{1}\) endowed with the Dorfman bracket \([\![\,\cdot\,,\,\cdot\,]\!]_{H_{1}}\) and \(\mathbb{T}M_{2}\) with \([\![\,\cdot\,,\,\cdot\,]\!]_{H_{2}}\), respectively. Then, with a slight abuse of notation, \[\Phi([\![e_{1},e_{2}]\!]_{H_{1}})=[\![\Phi(e_{1}),\Phi(e_{2})]\!]_{H_{2}}\, \tag{2.21}\] for all \(e_{1},e_{2}\in\mathsf{\Gamma}(E_{1}).\) As discussed in the proof of Proposition 2.14, \(\Phi\) can be written as \(\Phi=\overline{\varphi}\circ\,\mathrm{e}\,^{B}\,,\) for some \(B\in\Omega^{2}(M_{1})\) and \(\overline{\varphi}=\varphi_{*}+(\varphi^{-1})^{*}.\) Therefore Equation (2.21) gives \[\overline{\varphi}\big{(}\,\mathrm{e}\,^{B}\,([\![e_{1},e_{2}]\!]_{H_{1}})\big{)}=[\![\overline{\varphi}(\,\mathrm{e}\,^{B}\,(e_{1})),\overline{\varphi}(\,\mathrm{e}\,^{B}\,(e_{2}))]\!]_{H_{2}}. \tag{2.22}\] Recall from Example 2.12 that \[\overline{\varphi}^{-1}([\![\overline{\varphi}(e_{1}),\overline{\varphi}(e_{2 })]\!]_{H_{2}})=[\![e_{1},e_{2}]\!]_{\varphi^{*}H_{2}}. \tag{2.23}\] Hence Equations (2.22) and (2.23) yield \[\mathrm{e}\,^{B}\left([\![e_{1},e_{2}]\!]_{H_{1}}\right)=[\![\,\mathrm{e}\,^{B }\left(e_{1}\right),\,\mathrm{e}\,^{B}\left(e_{2}\right)]\!]_{\varphi^{*}H_{ 2}}=\,\mathrm{e}\,^{B}\left([\![e_{1},e_{2}]\!]_{\varphi^{*}H_{2}}\right)+ \iota_{\rho_{E_{1}}(e_{1})}\,\iota_{\rho_{E_{1}}(e_{2})}\,\mathrm{d}B\,\] which gives Equation (2.20). ### Reduction of Exact Courant Algebroids We now summarise the reduction of exact Courant algebroids over foliated manifolds, as shown by Zambon [33] generalising work of Bursztyn-Cavalcanti-Gualtieri [15] to foliations that are not necessarily generated by a group action. We specialise Zambon's discussion to subbundles of a Courant algebroid over \(M\) which are supported on all of \(M\), rather than on some submanifold of \(M\). As discussed in [3, 10], this case naturally arises in the description of sigma-models related to equivariant exact Courant algebroids. In particular, the infinitesimal symmetries of the Wess-Zumino functional (1.2) generate directions in which \(S_{H}\) is constant. 
This motivates a "gauging-like" reduction of the string background defining the sigma-model: the condition (1.4) naturally arises from the definition of the map \(\Psi\) in Equation (1.6) by looking at the sections \(X+\alpha\in\mathsf{\Gamma}(\mathbb{T}M)\) for which \(\Psi(X+\alpha)=X,\) i.e. \(\iota_{X}H=\mathrm{d}\alpha.\) Therefore the infinitesimal action associated with the variation generated by \(X\in\mathsf{\Gamma}(TM)\) is given by the differential operator \([\![X+\alpha,\,\cdot\,]\!]_{H}\), for any \(X+\alpha\in\mathsf{\Gamma}(\mathbb{T}M)\) for which \(\Psi(X+\alpha)=X,\) which are the elements associated with the flat directions for \(S_{H}\). This shows how reduction of exact Courant algebroids naturally arises from the symmetries of the topological term of a string sigma-model. The construction which follows can be motivated by the observation [37] that the constraints for gauging the full sigma-model by isometries of \(M\) are equivalent to requiring that the sections \(X+\alpha\) span an involutive isotropic subbundle \(K\subset\mathbb{T}M\). Let \(E\) be an exact Courant algebroid over \(M\), and \(K\) an isotropic subbundle of \(E\) supported on \(M\) such that \(\rho(K^{\perp})=TM\). **Definition 2.24**.: The space of sections of \(K^{\perp}\) which are _basic with respect to \(K\)_ is \[\mathsf{\Gamma}_{\mathrm{bas}}(K^{\perp})\coloneqq\left\{\,e\in\mathsf{\Gamma} (K^{\perp})\,\mid\,[\![\mathsf{\Gamma}(K),e]\!]\subset\mathsf{\Gamma}(K)\, \right\}\.\] When there are enough basic sections, i.e. \(\mathsf{\Gamma}_{\mathrm{bas}}(K^{\perp})\) spans \(K^{\perp}\) pointwise, then \(K\) and \(K^{\perp}\) satisfy the main properties [33, Lemma 3.5] \[[\![\mathsf{\Gamma}(K),\mathsf{\Gamma}(K^{\perp})]\!]\subset\mathsf{\Gamma}(K^{\perp})\, \tag{2.25}\] \[[\![\mathsf{\Gamma}(K),\mathsf{\Gamma}(K)]\!]\subset\mathsf{\Gamma}(K). \tag{2.26}\] In particular, the distribution \(\rho(K)\subset TM\) induces a smooth integrating foliation \(\mathcal{F}\) of \(M\). These properties imply the following central result for reduction of exact Courant algebroids over foliated manifolds, as stated by Zambon in [33, Theorem 3.7], generalising the analogous statement of Bursztyn-Cavalcanti-Gualtieri in [15, Theorem 3.3] for the case of reduction by group actions. **Theorem 2.27**.: Let \(E\) be an exact Courant algebroid over \(M\), and \(K\) an isotropic subbundle of \(E\) supported on \(M\) such that \(\rho(K^{\perp})=TM\). Assume that the space of basic sections \(\mathsf{\Gamma}_{\mathrm{bas}}(K^{\perp})\) spans \(K^{\perp}\) pointwise, and that the quotient \(\mathcal{Q}\) of \(M\) by the foliation \(\mathcal{F}\) integrating \(\rho(K)\) is a smooth manifold, so that the quotient map \(\varpi\colon M\to\mathcal{Q}\) is a surjective submersion. Then there is an exact Courant algebroid \(\underline{E}\) over \(\mathcal{Q}\) which fits in a pullback diagram of vector bundles covering the quotient map \(\varpi\colon M\to\mathcal{Q}\). Sketch of Proof.: We sketch the first part of the proof given in [33] in order to discuss how elements in \(K^{\perp}/K\) at two points in a leaf of \(\mathcal{F}\) are identified and to introduce our notation. Let \(\mathscr{P}\colon K^{\perp}\to K^{\perp}/K\) be the quotient map and denote by \(N_{q}=\varpi^{-1}(q)\), for any \(q\in\mathcal{Q}\), a leaf of \(\mathcal{F}\). 
Then for any \(m,m^{\prime}\in N_{q}\), we say that \(e\in(K^{\perp}/K)_{m}\) and \(e^{\prime}\in(K^{\perp}/K)_{m^{\prime}}\) are identified if and only if there exists a basic section \(\hat{e}\in\mathsf{\Gamma}_{\mathrm{bas}}(K^{\perp})\) such that \[\mathscr{P}(\hat{e}(m))=e\qquad\text{and}\qquad\mathscr{P}(\hat{e}(m^{\prime }))=e^{\prime}. \tag{2.28}\] This means that there exists a trivialisation of \(K^{\perp}/K|_{N_{q}}\), obtained by projecting onto it basic sections which give a frame for \(K^{\perp}.\) By assumption there are enough basic sections, so that they induce a frame for \(K^{\perp}/K|_{N_{q}}\). It is shown in [33] that this identification is well-defined, i.e. for any \(\hat{e},\check{e}\in\mathsf{\Gamma}_{\mathrm{bas}}(K^{\perp})\) such that \(\mathscr{P}(\hat{e}(m))=\mathscr{P}(\check{e}(m)),\) then \(\mathscr{P}(\hat{e}(m^{\prime}))=\mathscr{P}(\check{e}(m^{\prime})).\) This is achieved by considering a finite sequence \(\{\,k_{i}\,\}_{i=1,\dots,n}\) of (local) sections \(k_{i}\in\mathsf{\Gamma}(K)\) such that the integral curves of their corresponding vector fields \(\rho(k_{i})\) can be linked so as to join \(m\) and \(m^{\prime}.\) Let us denote by \(t_{i}\) the parameter for each integral curve. Denote by \[\exp(\mathsf{ad}_{k_{i}})\ \in\ \mathsf{Aut}(E)\] the automorphism of \(E\) obtained by integrating the \(K\)-action \(\mathsf{ad}_{k_{i}}=[\![k_{i},\,\cdot\,]\!]\). Then we can define the automorphism \[\mathsf{Ad}_{k}\coloneqq\exp(t_{1}\,\mathsf{ad}_{k_{1}})\circ\dots\circ\exp(t _{n}\,\mathsf{ad}_{k_{n}}). \tag{2.29}\] Since \(\hat{e}\) and \(\check{e}\) are basic and the property (2.25) holds, it follows that \(\mathsf{Ad}_{k}(\hat{e}(m))-\hat{e}(m^{\prime})\in K_{m^{\prime}}\) and \(\mathsf{Ad}_{k}(\check{e}(m))-\check{e}(m^{\prime})\in K_{m^{\prime}}.\) We assumed that \(\hat{e}(m)\) and \(\check{e}(m)\) reduce to the same element in \((K^{\perp}/K)_{m}\), hence \(\hat{e}(m)-\check{e}(m)\in K_{m}.\) Thus \(\mathsf{Ad}_{k}(\hat{e}(m)-\check{e}(m))\in K_{m^{\prime}}\) by Equation (2.26). Therefore \(\hat{e}(m^{\prime})-\check{e}(m^{\prime})\in K_{m^{\prime}}\), which means that \(\hat{e}(m^{\prime})\) and \(\check{e}(m^{\prime})\) project to the same element in \((K^{\perp}/K)_{m^{\prime}}\). The identification (2.28) yields a pointwise isomorphism \[\mathscr{J}_{q,m}\colon\underline{E}_{q}\longrightarrow(K^{\perp}/K)_{m} \tag{2.30}\] for any \(m\in N_{q}\), which together with the surjective submersion \(\varpi\) show that \(\underline{E}\) is a vector bundle. The map \(\mathscr{P}\), together with the pointwise isomorphisms \(\mathscr{J}_{q,m}\), imply that \[\mathsf{\Gamma}(\underline{E})\simeq\mathsf{\Gamma}_{\mathrm{bas}}(K^{\perp} )\,\big{/}\,\mathsf{\Gamma}(K)\] as \(C^{\infty}(\mathcal{Q})\)-modules, where the \(C^{\infty}(\mathcal{Q})\)-module structure on \(\mathsf{\Gamma}_{\mathrm{bas}}(K^{\perp})/\mathsf{\Gamma}(K)\) is given by pulling back functions to \(M\) with \(\varpi\).4 Footnote 4: This is the same construction given in [38] for the quotient of \(\mathsf{G}\)-equivariant vector bundles by the action of a group \(\mathsf{G}\). In that case the invariance condition is dictated by the involutive vector subbundle \(K\) inducing the foliation of the base manifold, which in turn corresponds to the orbits of the \(\mathsf{G}\)-action. The completion of the proof can be found in [15, 33], wherein \(\underline{E}\) is shown to be an exact Courant algebroid with the bracket, anchor and pairing inherited from \(E\). 
In the notation of [15, 33], we write \[\underline{E}=\frac{K^{\perp}}{K}\Big{/}\mathcal{F}\] for the above construction of \(\underline{E}\), and we denote by \[\natural\colon K^{\perp}\longrightarrow\underline{E}\] the vector bundle morphism covering \(\varpi\) given by the two quotients by \(K\) and \(\mathcal{F}\). When Theorem 2.27 holds, we can also establish that basic sections have the property of **Lemma 2.31**.: Let \(E\) be an exact Courant algebroid endowed with an involutive isotropic subbundle \(K\) satisfying the assumptions of Theorem 2.27. Consider the short exact sequence of vector bundles \[0\longrightarrow K\longrightarrow K^{\perp}\longrightarrow K^{\perp}/K \longrightarrow 0\.\] Then any splitting \(s\colon K^{\perp}/K\to K^{\perp}\) of this sequence satisfies \(s([e])\in\mathsf{\Gamma}_{\mathrm{bas}}(K^{\perp})\), for any \([e]\in\mathsf{\Gamma}_{\mathrm{bas}}(K^{\perp})/\mathsf{\Gamma}(K)\) corresponding to \(\underline{e}\in\mathsf{\Gamma}(\underline{E})\). Proof.: By the isomorphism \(\mathsf{\Gamma}_{\mathrm{bas}}(K^{\perp})/\mathsf{\Gamma}(K)\simeq\mathsf{ \Gamma}(\underline{E})\) of \(C^{\infty}(\mathcal{Q})\)-modules, to any section \(\underline{e}\) of \(\underline{E}\) there corresponds an element \([e]\in\mathsf{\Gamma}_{\mathrm{bas}}(K^{\perp})/\mathsf{\Gamma}(K)\). Thus for any splitting \(s\) of the sequence we get \[s([e])=e+k_{s}\,\] for some \(k_{s}\in\mathsf{\Gamma}(K)\), where \(e\in\mathsf{\Gamma}_{\mathrm{bas}}(K^{\perp})\). Hence \[[\![k,s([e])]\!]=[\![k,e]\!]+[\![k,k_{s}]\!]\ \in\ \mathsf{\Gamma}(K)\,\] for any section \(k\) of \(K\), by Definition 2.24 and involutivity of \(K\). **Example 2.32** (**Reduction of Standard Courant Algebroids)**.: Of importance to us is the case of a foliated manifold \((M,\mathcal{F})\) such that \(\mathcal{Q}=M/\mathcal{F}\) is smooth. Take \[K=T\mathcal{F}\oplus\{\,0\,\}\] as a subbundle of the standard Courant algebroid \((\mathbb{T}M,0)\). Then \(K^{\perp}=TM\oplus\mathrm{Ann}(T\mathcal{F})\), and basic sections are given by projectable vector fields and one-forms of the type \(\mathrm{d}(\varpi^{*}f)\), where \(f\in C^{\infty}(\mathcal{Q})\), which respectively span \(TM\) and \(\operatorname{Ann}(T\mathcal{F})\) pointwise. Hence the conditions of Theorem 2.27 are satisfied, and our reduced Courant algebroid becomes \[\underline{E}=\big{(}TM/T\mathcal{F}\oplus\operatorname{Ann}(T\mathcal{F}) \big{)}\,\big{/}\,\mathcal{F}=T\mathcal{Q}\oplus T^{*}\mathcal{Q}=\mathbb{T} \mathcal{Q}\.\] Remark 2.42 below gives the reduction of the twisted standard Courant algebroid \((\mathbb{T}M,H)\), after introducing adapted splittings. #### 2.3.1. Reduction of Subbundles of Exact Courant Algebroids Following [33, Proposition 4.1] which deals with the reduction of Dirac structures, a useful result concerning the reduction of subbundles of \(E\) is **Proposition 2.33**.: Let \(E\) be an exact Courant algebroid over \(M\) endowed with an isotropic subbundle \(K\) satisfying the assumptions of Theorem 2.27, and let \(W\) be a subbundle of \(E\) such that \(W\cap K^{\perp}\) has constant rank. If \[[\![\Gamma(K),\Gamma(W\cap K^{\perp})]\!]\subset\Gamma(W+K)\, \tag{2.34}\] then \(W\) descends to a subbundle \(\underline{W}\) of the reduced Courant algebroid \(\underline{E}\) over \(\mathcal{Q}\). 
Proof.: To show that elements of \(W\cap K^{\perp}\) have a well-defined notion of identification under the map \(\mathscr{P}\colon K^{\perp}\to K^{\perp}/K\) when restricted to a leaf \(N_{q}\) of \(\mathcal{F}\), for some \(q\in\mathcal{Q}\), recall the definition of the automorphism \(\mathsf{Ad}_{k}\) from Equation (2.29), together with the identification of elements in \(K^{\perp}\) given in the sketch of the proof of Theorem 2.27. If the condition (2.34) holds, then \[\mathsf{Ad}_{k}(W\cap K^{\perp})\subseteq(W+K)\cap K^{\perp}=(W\cap K^{\perp})+K\,\] where we also use Equation (2.25). Therefore \(\underline{W}\coloneqq\natural(W\cap K^{\perp})\) is a subbundle of \(\underline{E}\), since \(W\cap K^{\perp}\) has constant rank.

**Remark 2.35**.: If \(K\subset W\subset K^{\perp}\), then Equation (2.34) becomes \[[\![\Gamma(K),\Gamma(W)]\!]\subset\Gamma(W)\.\] We will be particularly interested in this case in the following sections. Any subbundle \(W\) satisfying these properties is pointwise the span of \(\mathsf{\Gamma}_{\mathrm{bas}}(W)\): there are \(w_{i}\in\mathsf{\Gamma}(W)\) such that \([\![\Gamma(K),w_{i}]\!]\subset\Gamma(K)\), and \(W=\operatorname{Span}\left\{\,w_{i}\,\right\}\) pointwise. The proof can be found in [39, Theorem 4.1].5 Footnote 5: In their notation, \(S=K\), \(D=D^{S}=W\), and "canonical" means existence of basic sections. Note that we do not assume that \(W\) is a Dirac structure, and hence \(W\) is not necessarily involutive, thus we only have the implication \(\mathrm{c})\implies\mathrm{a})\) of [39, Theorem 4.1], rather than the full equivalence.

#### 2.3.2. Adapted Splittings of Exact Courant Algebroids

We give a brief summary of Zambon's notion [33, Section 5] of adapted splittings and the reduced Severa class.

**Definition 2.36**.: Let \(E\) be an exact Courant algebroid over \(M\). A splitting \(\sigma\colon TM\to E\) is _adapted to_ \(K\) if

(i) the image of \(\sigma\) is isotropic,

(ii) \(\sigma(TM)\subset K^{\perp}\),

(iii) \(\sigma(X)\in\mathsf{\Gamma}_{\mathrm{bas}}(K^{\perp})\), for any vector field \(X\) on \(M\) which is projectable to \(\mathcal{Q}\).

**Remark 2.37** (**Properties of Adapted Splittings**).: Adapted splittings \(\sigma\) are in one-to-one correspondence with maximally isotropic subbundles \(L_{\sigma}\subset E\) satisfying

(i) \(\rho(L_{\sigma}\cap K^{\perp})=TM\),

(ii) \([\![\,\mathsf{\Gamma}(K),\mathsf{\Gamma}(L_{\sigma}\cap K^{\perp})]\!]\subset\mathsf{\Gamma}(L_{\sigma}+K)\),

via the prescription \(L_{\sigma}=\sigma(TM)\). If \(\sigma\) is an adapted splitting, then \[\sigma(\rho(K))\subseteq K\,\] see [33, Remark 5.2]. Using this construction, we can establish properties of the anchor through

**Lemma 2.38**.: Let \(E\) be an exact Courant algebroid with an involutive isotropic subbundle \(K\) such that \(\rho(K^{\perp})=TM\), endowed with a splitting \(\sigma\) adapted to \(K\). Then \(\rho|_{K}\) is injective and \(\sigma\circ\rho|_{K}=\mathds{1}_{K}\).

Proof.: Since \(K^{\perp}\cap\rho^{*}(T^{*}M)=\rho^{*}\big(\mathrm{Ann}(\rho(K))\big)\), it follows that \[K^{\perp}=\sigma(TM)\oplus\rho^{*}\big(\mathrm{Ann}(\rho(K))\big)\, \tag{2.39}\] where we used \(\sigma(TM)\cap\rho^{*}(T^{*}M)=\{\,0\,\}\). Equation (2.39) and the Rank-Nullity Theorem imply \[\mathrm{rk}(K^{\perp})=2\,\mathrm{rk}(TM)-\mathrm{rk}(K)+\mathrm{rk}(\ker(\rho|_{K}))\.\] Since \(\mathrm{rk}(K^{\perp})=\mathrm{rk}(E)-\mathrm{rk}(K)\), we get \(\mathrm{rk}(\ker(\rho|_{K}))=0\), because \(E\) is exact (note that \(\rho^{*}\) is injective). Thus \(\rho|_{K}\) is injective.
Since \(\rho|_{K}\) is injective, it also follows that \(\sigma\circ\rho|_{K}=\mathds{1}_{K}\): for all \(k\in K\) we have \(\sigma(\rho(k))-k\in K\) by Remark 2.37, while \[\rho\big(\sigma(\rho(k))\big)-\rho(k)=\rho(k)-\rho(k)=0\,\] so injectivity of \(\rho|_{K}\) gives \(\sigma(\rho(k))=k\).

**Remark 2.40** (**Existence of Adapted Splittings**).: The condition of having enough basic sections is closely related to the existence of adapted splittings: by [33, Proposition 5.5], splittings adapted to \(K\) exist if and only if \(\mathsf{\Gamma}_{\mathrm{bas}}(K^{\perp})\) spans \(K^{\perp}\) pointwise. This result will be very useful in the rest of the paper.

**Remark 2.41** (**Reduction of Adapted Splittings**).: By [33, Proposition 5.7], a splitting \(\sigma\) of an exact Courant algebroid \(E\) adapted to \(K\), where \(K\) satisfies the assumptions of Theorem 2.27, induces a splitting \(\underline{\sigma}\) of the reduced Courant algebroid \(\underline{E}\) over \(\mathcal{Q}\). This follows from property (ii) of Remark 2.37 applied to the maximally isotropic subbundle \(L_{\sigma}=\sigma(TM)\), then applying Proposition 2.33. Thus \(L_{\sigma}\) reduces to a maximally isotropic subbundle \(\underline{L}_{\underline{\sigma}}\) of \(\underline{E}\) which is the image of the splitting \(\underline{\sigma}\) of \(\underline{E}\). This shows how the three-form \(H_{\sigma}\) from Equation (2.18) descends to a closed three-form \(\underline{H}_{\underline{\sigma}}\) representing the Severa class of \(\underline{E}\): by applying Lemma 2.31 to any \(\underline{X}_{i}\in\mathsf{\Gamma}(T\mathcal{Q})\) for \(i=1,2,3\) that are the images of projectable vector fields \(X_{i}\in\mathsf{\Gamma}(TM)\), we can lift \(\underline{\sigma}(\underline{X}_{i})\in\mathsf{\Gamma}(\underline{E})\) to \(\sigma(X_{i})\in\mathsf{\Gamma}_{\mathrm{bas}}(K^{\perp})\). Therefore \[H_{\sigma}(X_{1},X_{2},X_{3})=\left\langle[\![\sigma(X_{1}),\sigma(X_{2})]\!]_{E},\sigma(X_{3})\right\rangle_{E}=2\,\left\langle[\![\underline{\sigma}(\underline{X}_{1}),\underline{\sigma}(\underline{X}_{2})]\!]_{\underline{E}},\underline{\sigma}(\underline{X}_{3})\right\rangle_{\underline{E}}=:\underline{H}_{\underline{\sigma}}(\underline{X}_{1},\underline{X}_{2},\underline{X}_{3})\,\] showing that \(H_{\sigma}\) descends to the three-form \(\underline{H}_{\underline{\sigma}}\) associated with the reduced splitting \(\underline{\sigma}\).

**Example 2.42** (**Reduction of Twisted Standard Courant Algebroids**).: Let \((M,\mathcal{F})\) be a foliated manifold with smooth leaf space \(\mathcal{Q}=M/\mathcal{F}\). Consider the twisted standard Courant algebroid \((\mathbb{T}M,H)\) for some \(H\in\Omega^{3}_{\mathrm{cl}}(M)\). We take as our maximally isotropic subbundle \(L_{\sigma}\) in Remark 2.41 to be the tangent bundle \[L_{\sigma}=TM\oplus\left\{\,0\,\right\}\.\] Then \(K=T\mathcal{F}\oplus\left\{\,0\,\right\}\), and \([\![\mathsf{\Gamma}(K),\mathsf{\Gamma}(L_{\sigma})]\!]\subset\mathsf{\Gamma}(L_{\sigma})\) if and only if \(\iota_{X}H=0\) for every \(X\in\mathsf{\Gamma}(T\mathcal{F})\). This suffices to show how \(H\) reduces, since \(K\subset L_{\sigma}\subset K^{\perp}\) in this case, hence Remark 2.35 applies. Thus \(\pounds_{X}H=0\), for all \(X\in\mathsf{\Gamma}(T\mathcal{F})\), hence \(H\) is the pullback of a three-form \(\underline{H}\in\Omega^{3}_{\mathrm{cl}}(\mathcal{Q})\) by the quotient map \(\varpi\colon M\to\mathcal{Q}\), and the reduced Courant algebroid is \(\underline{E}=(\mathbb{T}\mathcal{Q},\underline{H})\).
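To illustrate how the condition \(\iota_{X}H=0\) can fail (a sketch of our own, with hypothetical data): take \(M=S^{1}\times\mathcal{Q}\), let \(\mathcal{F}\) be the foliation by the \(S^{1}\)-fibres with angle coordinate \(\theta\), and set \[H=\mathrm{d}\theta\wedge\varpi^{*}F\] for a closed two-form \(F\in\Omega^{2}(\mathcal{Q})\). Then \(\mathrm{d}H=0\), but \(\iota_{\partial_{\theta}}H=\varpi^{*}F\), which is non-zero whenever \(F\neq 0\), so such an \(H\) obstructs the reduction of \((\mathbb{T}M,H)\) along \(\mathcal{F}\) with \(L_{\sigma}=TM\oplus\{\,0\,\}\); the three-forms satisfying \(\iota_{X}H=0\) for all \(X\in\mathsf{\Gamma}(T\mathcal{F})\) are precisely the pullbacks \(H=\varpi^{*}\underline{H}\) with \(\underline{H}\in\Omega^{3}_{\mathrm{cl}}(\mathcal{Q})\).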
## 3. Courant Algebroid Relations: Reduction and Composition

In this section we introduce the notion of Courant algebroid relations, following the work of Vysoky [13]. They will be used to give a new look at the reduction processes from Section 2.3, and used extensively throughout Section 5.

### Courant Algebroid Relations

To define Courant algebroid relations, we first need the notion of involutive structures on a Courant algebroid [40].

**Definition 3.1**.: Let \(E\) be a Courant algebroid over \(M\). A subbundle \(L\) of \(E\) supported on a submanifold \(C\subset M\) is an _almost involutive structure supported on \(C\)_ if

(i) \(L\) is isotropic,

(ii) \(L^{\perp}\) is compatible with the anchor: \(\rho(L^{\perp})\subset TC\).

If \(L\) is moreover involutive, then \(L\) is an _involutive structure supported on \(C\)_. If in addition \(L=L^{\perp}\), then \(L\) is a _Dirac structure supported on \(C\)_.

For two Courant algebroids \(E_{1}\) and \(E_{2}\) over \(M_{1}\) and \(M_{2}\), respectively, the product \((E_{1}\times E_{2},[\![\,\cdot\,,\,\cdot\,]\!],\langle\,\cdot\,,\,\cdot\,\rangle,\rho)\) is a Courant algebroid over \(M_{1}\times M_{2}\), with the Courant algebroid structures defined by \[\rho(e_{1},e_{2})\coloneqq(\rho_{E_{1}}(e_{1}),\rho_{E_{2}}(e_{2}))\,\] \[\left\langle(e_{1},e_{2}),(e_{1}^{\prime},e_{2}^{\prime})\right\rangle\coloneqq\left\langle e_{1},e_{1}^{\prime}\right\rangle_{E_{1}}\circ\mathrm{pr}_{1}+\left\langle e_{2},e_{2}^{\prime}\right\rangle_{E_{2}}\circ\mathrm{pr}_{2}\,\] \[[\![(e_{1},e_{2}),(e_{1}^{\prime},e_{2}^{\prime})]\!]\coloneqq\big([\![e_{1},e_{1}^{\prime}]\!]_{E_{1}},[\![e_{2},e_{2}^{\prime}]\!]_{E_{2}}\big)\,\] where \(\mathrm{pr}_{i}\colon M_{1}\times M_{2}\to M_{i}\) are the Cartesian projections. We write \(\overline{E}_{2}\) for the Courant algebroid \(E_{2}\) with the opposite pairing \(-\left\langle\,\cdot\,,\,\cdot\,\right\rangle_{E_{2}}\).

**Definition 3.2**.: Let \(E_{1}\) and \(E_{2}\) be Courant algebroids over \(M_{1}\) and \(M_{2}\), respectively. A _Courant algebroid relation_ \(R\colon E_{1}\dashrightarrow E_{2}\) is an involutive structure \(R\subset E_{1}\times\overline{E}_{2}\) supported on a submanifold \(C\subseteq M_{1}\times M_{2}\). When \(C=\operatorname{gr}(\varphi)\) for a smooth map \(\varphi\colon M_{1}\to M_{2}\), we call \(R\) a _Courant algebroid morphism_; if in addition \(R=\operatorname{gr}(\Phi)\) for a vector bundle map \(\Phi\colon E_{1}\to E_{2}\) covering \(\varphi\), we call \(\Phi\) a _classical Courant algebroid morphism_.

By viewing a Courant algebroid relation \(R\) as a subset of \(E_{2}\times\overline{E}_{1}\), we obtain the _transpose relation_ \(R^{\top}\colon E_{2}\dashrightarrow E_{1}\), whose support is denoted \(C^{\top}\subseteq M_{2}\times M_{1}\), and similarly the transpose morphism \(R^{\top}\colon E_{2}\to E_{1}\) when \(\varphi\) is a diffeomorphism.

**Example 3.3**.: Let \(\Phi\colon E_{1}\to E_{2}\) be a vector bundle map which covers a diffeomorphism \(\varphi\colon M_{1}\to M_{2}\). To see how this notion extends the classical one, suppose that its graph \(\operatorname{gr}(\Phi)\subset E_{1}\times\overline{E}_{2}\) defines a classical Courant algebroid morphism. Then for any \(e,e^{\prime}\in\mathsf{\Gamma}(E_{1})\), isotropy of \(\operatorname{gr}(\Phi)\) gives \[0=\left\langle(e,\Phi(e)),(e^{\prime},\Phi(e^{\prime}))\right\rangle=\left\langle e,e^{\prime}\right\rangle_{E_{1}}-\left\langle\Phi(e),\Phi(e^{\prime})\right\rangle_{E_{2}}\circ\varphi\,\] so \(\Phi\) is an isometry. Note the minus sign, which is why we consider subbundles of \(E_{1}\times\overline{E}_{2}\), rather than of \(E_{1}\times E_{2}\). Since \(\operatorname{gr}(\Phi)\) is involutive, \(\Phi\) is a bracket morphism, and hence a Courant algebroid morphism.
That \(\operatorname{gr}(\Phi)^{\perp}\) is compatible with the anchor (item (ii) of Definition 3.1) gives Equation (2.11). Conversely, if \(\Phi\) is a Courant algebroid isomorphism covering a diffeomorphism \(\varphi\), then \(\operatorname{gr}(\Phi)\) is an isotropic and involutive subbundle of \(E_{1}\times\overline{E}_{2}\). If moreover the diagram formed by \(\Phi\) and the maps \(\rho^{*}_{E_{i}}\) defined by Equation (2.5) commutes, then \(\operatorname{gr}(\Phi)^{\perp}\) is compatible with the anchor \(\rho\), and hence \(\Phi\) defines a classical Courant algebroid morphism. This last condition is satisfied when \(E_{i}=\mathbb{T}M_{i}\) and \(\rho_{E_{i}}\) is the projection to \(TM_{i}\).

We will also need a notion of sections related by a Courant algebroid relation. If \(E\) is a Courant algebroid over \(M\) and \(L\subset E\) is a subbundle supported on a submanifold \(C\subset M\), denote by \(\mathsf{\Gamma}(E;L)\) the \(C^{\infty}(M)\)-submodule of sections of \(E\) which take values in \(L\) when restricted to \(C\).

**Definition 3.4**.: Let \(R\colon E_{1}\dashrightarrow E_{2}\) be a Courant algebroid relation. Two sections \(e_{1}\in\mathsf{\Gamma}(E_{1})\) and \(e_{2}\in\mathsf{\Gamma}(E_{2})\) are \(R\)-_related_, denoted \(e_{1}\sim_{R}e_{2}\), if \((e_{1},e_{2})\in\mathsf{\Gamma}(E_{1}\times\overline{E}_{2};R)\).

### Relational Approach to Reduction

In the reduction procedure of Theorem 2.27, there is in general no classical vector bundle morphism between the exact Courant algebroids \(E\) and \(\underline{E}\), since the quotient map \(\natural\) is defined only on the subbundle \(K^{\perp}\). Using the language of Courant algebroid relations, we can describe the reduction of exact Courant algebroids over foliated manifolds, analogously to the approach given in [13, Section 4.3] for equivariant exact Courant algebroids. Let \(E\) be an exact Courant algebroid as in Theorem 2.27, i.e. \(E\) is endowed with an isotropic subbundle \(K\) such that \(\rho(K^{\perp})=TM\) and \(\mathsf{\Gamma}_{\mathrm{bas}}(K^{\perp})\) spans \(K^{\perp}\) pointwise, inducing a foliation \(\mathcal{F}\) of the base manifold \(M\) with smooth leaf space \(\mathcal{Q}\) such that the quotient map \(\varpi\colon M\to\mathcal{Q}\) is a surjective submersion. Thus \(\natural\colon K^{\perp}\to\underline{E}\) is a vector bundle map covering \(\varpi\). Consider the vector subbundle, supported on \(\operatorname{gr}(\varpi)\), defined fibrewise as \[Q(K)_{(m,\varpi(m))}=\{\,(e,\natural(e))\,\mid\,e\in K^{\perp}_{m}\,\}\ \subset\ E_{m}\times\overline{\underline{E}}_{\varpi(m)}\,\] for any \(m\in M\). We show that \(Q(K)\) defines an involutive structure on \(E\times\overline{\underline{E}}\), in the sense of Definition 3.1, following the approach used in [13].

**Lemma 3.5**.: \(Q(K)\) is an almost involutive structure.

Proof.: Since the pairing on \(\underline{E}\) is induced by the pairing on \(E\), it follows that \(Q(K)\) is isotropic, proving item (i) of Definition 3.1. To show item (ii) of Definition 3.1, since \(Q(K)^{\perp}\) is supported on \(\operatorname{gr}(\varpi)\), we are required to show that \((\rho_{E}(e),\underline{\rho_{E}}(\underline{e}))\in T_{(m,\varpi(m))}\mathrm{gr}(\varpi)=\mathrm{gr}(\varpi_{*})\) for each \((e,\underline{e})\in Q(K)^{\perp}_{(m,\varpi(m))}\), where \(e\in K^{\perp}_{m}\) with \(m\in M\). We first notice that one may write \[Q(K)^{\perp}_{(m,\varpi(m))}=K_{m}\times\{\,0^{\underline{E}}_{\varpi(m)}\,\}+Q(K)_{(m,\varpi(m))}\, \tag{3.6}\] where \(0^{\underline{E}}\colon\mathcal{Q}\to\underline{E}\) denotes the zero section.
To see this, let \(\tilde{k}\in Q(K)^{\perp}_{(m,\varpi(m))}\). Then \(\tilde{k}=(k,\natural(k^{\prime}))\) for some \(k\in E_{m}\) and \(k^{\prime}\in K^{\perp}_{m}\). Then, for each \(\tilde{e}=(e,\natural(e))\in Q(K)_{(m,\varpi(m))}\), it follows that \[0=\big\langle\tilde{k},\tilde{e}\big\rangle=\langle k,e\rangle_{E}-\big\langle\natural(k^{\prime}),\natural(e)\big\rangle_{\underline{E}}=\langle k-k^{\prime},e\rangle_{E}\,\] since the pairing on \(\underline{E}\) is induced by the pairing on \(E\). Hence \(k-k^{\prime}\in(K^{\perp}_{m})^{\perp}=K_{m}\), i.e. \(k=\underline{k}+k^{\prime}\) for some \(\underline{k}\in K_{m}\). We can therefore write \(\tilde{k}=(\underline{k},0)+(k^{\prime},\natural(k^{\prime}))\). The opposite inclusion also holds, giving the decomposition (3.6). Since the anchor on \(\underline{E}\) descends from the anchor on \(E\), it follows that \(Q(K)\) is compatible with the anchor \(\rho\). Thus we are left to show \[\rho_{E}(K_{m})\times\underline{\rho_{E}}\big(\,\{\,0^{\underline{E}}_{\varpi(m)}\,\}\,\big)\subseteq\mathrm{gr}(\varpi_{*})\.\] If \(k\in K_{m}\), then \(\varpi_{*}(\rho_{E}(k))=0^{T\mathcal{Q}}_{\varpi(m)}\). But \(\underline{\rho_{E}}\big(0^{\underline{E}}_{\varpi(m)}\big)=0^{T\mathcal{Q}}_{\varpi(m)}\), hence \[\big(\rho_{E}\times\underline{\rho_{E}}\big)\big(k,0^{\underline{E}}_{\varpi(m)}\big)=\big(\rho_{E}(k),\varpi_{*}(\rho_{E}(k))\big)\ \in\ \mathrm{gr}(\varpi_{*})\] as required.

**Proposition 3.7**.: \(Q(K)\) is an involutive structure, hence it defines a Courant algebroid morphism \(Q(K)\colon E\dashrightarrow\underline{E}\).

Proof.: Suppose \(e\) and \(e^{\prime}\) are basic sections for \(K^{\perp}\), and consider the corresponding elements \((e,\natural(e))\) and \((e^{\prime},\natural(e^{\prime}))\) of \(\mathsf{\Gamma}(Q(K))\). Since such sections span \(Q(K)\) pointwise, by [13, Proposition 2.23] it is enough to show that \([\![(e,\natural(e)),(e^{\prime},\natural(e^{\prime}))]\!]\in\mathsf{\Gamma}(Q(K))\). For every \(k\in\mathsf{\Gamma}(K)\), using property (i) from Definition 2.1 we compute \[\big\langle[\![e,e^{\prime}]\!]_{E},k\big\rangle_{E}=\rho(e)\cdot\big\langle e^{\prime},k\big\rangle_{E}-\big\langle e^{\prime},[\![e,k]\!]_{E}\big\rangle_{E}=0\,\] since \(\langle e^{\prime},k\rangle_{E}=0\) and \([\![e,k]\!]_{E}\in\mathsf{\Gamma}(K)\) for the basic section \(e\); hence \([\![e,e^{\prime}]\!]_{E}\in\mathsf{\Gamma}(K^{\perp})\). In particular, because of the Jacobi identity (iii) and property (ii) of the Dorfman bracket from Definition 2.1, it follows that \([\![e,e^{\prime}]\!]_{E}\in\mathsf{\Gamma}_{\mathrm{bas}}(K^{\perp})\). Thus, by the way the bracket on \(\underline{E}\) is constructed, \[[\![(e,\natural(e)),(e^{\prime},\natural(e^{\prime}))]\!]=\big([\![e,e^{\prime}]\!]_{E},\natural([\![e,e^{\prime}]\!]_{E})\big)\ \in\ \mathsf{\Gamma}(Q(K))\,\] hence involutivity follows.

Following on from Example 2.42, we then have

**Corollary 3.8**.: If \(E=(\mathbb{T}M,H)\) is a split exact Courant algebroid, let \(K=T\mathcal{F}\oplus\{\,0\,\}\) where \(\mathcal{F}\) is a foliation of \(M\) with smooth leaf space \(\mathcal{Q}=M/\mathcal{F}\). Then \(Q(K)\) is a Courant algebroid relation if and only if \(\iota_{X}H=0\) for every \(X\in\mathsf{\Gamma}(T\mathcal{F})\).

In the setting of Corollary 3.8, we denote \(Q(K)\) by \(Q(\mathcal{F})\).

### Composition of Courant Algebroid Relations

It will be particularly relevant in the rest of this paper to discuss the circumstances under which the composition of Courant algebroid relations gives another Courant algebroid relation. In this brief summary we will follow [40, 13].

**Definition 3.9**.: Let \(R\colon E_{1}\dashrightarrow E_{2}\) and \(R^{\prime}\colon E_{2}\dashrightarrow E_{3}\) be Courant algebroid relations. The _composition_ \(R^{\prime}\circ R\) is the subset of \(E_{1}\times\overline{E}_{3}\) given by \[R^{\prime}\circ R=\left\{\,(e_{1},e_{3})\in E_{1}\times\overline{E}_{3}\,\mid\,(e_{1},e_{2})\in R\,\ (e_{2},e_{3})\in R^{\prime}\ \text{ for some }e_{2}\in E_{2}\,\right\}. \tag{3.10}\]
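As a simple sanity check of Definition 3.9 (our own remark): if both relations are graphs of vector bundle maps \(\Phi\colon E_{1}\to E_{2}\) and \(\Phi^{\prime}\colon E_{2}\to E_{3}\) covering \(\varphi\) and \(\varphi^{\prime}\), then Equation (3.10) reduces to the usual composition of maps, \[\operatorname{gr}(\Phi^{\prime})\circ\operatorname{gr}(\Phi)=\left\{\,\big(e_{1},\Phi^{\prime}(\Phi(e_{1}))\big)\,\mid\,e_{1}\in E_{1}\,\right\}=\operatorname{gr}(\Phi^{\prime}\circ\Phi)\,\] supported on \(\operatorname{gr}(\varphi^{\prime}\circ\varphi)\). The subtleties discussed below arise only for genuinely set-valued relations.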
The requirements for the set (3.10) to be a Courant algebroid relation are two-fold, given in Propositions 3.12 and 3.13 below, and can be equivalently formulated at the level of either the Courant algebroid or the base manifold, but both boil down to checking whether the set defined by Equation (3.10) is a smooth subbundle of \(E_{1}\times\overline{E}_{3}\). We recall the important definitions and results from [13]. Two manifolds are important here: \(R\times R^{\prime}\) and \(E_{1}\times\Delta(E_{2})\times\overline{E}_{3}\), where \(\Delta(E)\) is the diagonal embedding of \(E\) into \(\overline{E}\times E\). Their intersection, which we denote by \[R^{\prime}\diamond R=(R\times R^{\prime})\cap\left(E_{1}\times\Delta(E_{2})\times\overline{E}_{3}\right)\,\] projects to \(R^{\prime}\circ R\). Clearly \(R^{\prime}\diamond R\) is an \(\mathbb{R}\)-module, but in general it fails to be a submanifold of \(E_{1}\times\overline{E}_{2}\times E_{2}\times\overline{E}_{3}\). If the latter holds as well, it follows immediately that \(R^{\prime}\diamond R\) is a subbundle of \(E_{1}\times\overline{E}_{2}\times E_{2}\times\overline{E}_{3}\) over \[C^{\prime}\diamond C\coloneqq(C\times C^{\prime})\cap\left(M_{1}\times\Delta(M_{2})\times M_{3}\right)\] by the Grabowski-Rotkiewicz Theorem [41, Theorem 2.3]. Thus let us briefly discuss the conditions under which \(R^{\prime}\diamond R\) is a submanifold.

**Definition 3.11**.: Two submanifolds \(C\) and \(C^{\prime}\) of a manifold \(M\) _intersect cleanly in \(M\)_ if \(C\cap C^{\prime}\) is a submanifold of \(M\), and \[T_{c}(C\cap C^{\prime})=T_{c}C\cap T_{c}C^{\prime}\,\] for each \(c\in C\cap C^{\prime}\). The inclusion \(\subseteq\) is always true.

The importance of this condition is that if \(C\) and \(C^{\prime}\) intersect cleanly, then \(C\) and \(C^{\prime}\) look locally like a pair of intersecting vector subspaces; see e.g. [42, Proposition C.3.1]. Following Vysoky [13], we state two propositions that are essential for the characterisation of relations and will be needed in the rest of this paper.

**Proposition 3.12**.: Let \(R\colon E_{1}\dashrightarrow E_{2}\) and \(R^{\prime}\colon E_{2}\dashrightarrow E_{3}\) be Courant algebroid relations over \(C\subseteq M_{1}\times M_{2}\) and \(C^{\prime}\subseteq M_{2}\times M_{3}\), respectively. The following conditions are equivalent:

(i) \(R\times R^{\prime}\) and \(E_{1}\times\Delta(E_{2})\times\overline{E}_{3}\) intersect cleanly.

(ii) \(C\times C^{\prime}\) and \(M_{1}\times\Delta(M_{2})\times M_{3}\) intersect cleanly, and the dimension of \((R^{\prime}\diamond R)_{c}\) is independent of \(c\in C^{\prime}\diamond C\).

Both these conditions ensure that \(R^{\prime}\diamond R\) is a subbundle of \(E_{1}\times\overline{E}_{2}\times E_{2}\times\overline{E}_{3}\) over \(C^{\prime}\diamond C\). Similar conditions can be stated for \(R^{\prime}\circ R\) in order to make it a subbundle of \(E_{1}\times\overline{E}_{3}\) over \[C^{\prime}\circ C\coloneqq\left\{\,(m_{1},m_{3})\in M_{1}\times M_{3}\,\mid\,(m_{1},m_{2})\in C\,\ (m_{2},m_{3})\in C^{\prime}\ \text{ for some }m_{2}\in M_{2}\,\right\}\,\] since \(R^{\prime}\circ R\) is an \(\mathbb{R}\)-module.

**Proposition 3.13**.: Let \(R\colon E_{1}\dashrightarrow E_{2}\) and \(R^{\prime}\colon E_{2}\dashrightarrow E_{3}\) be Courant algebroid relations over \(C\subseteq M_{1}\times M_{2}\) and \(C^{\prime}\subseteq M_{2}\times M_{3}\), respectively. Suppose at least one of the conditions of Proposition 3.12 is satisfied. The following conditions are equivalent:
(i) \(R^{\prime}\circ R\) is a submanifold of \(E_{1}\times\overline{E}_{3}\) such that the induced map6 \(p\colon R^{\prime}\diamond R\to R^{\prime}\circ R\) is a smooth surjective submersion. Footnote 6: Here \(p\colon E_{1}\times\overline{E}_{2}\times E_{2}\times\overline{E}_{3}\to E_{1}\times\overline{E}_{3}\) is the projection to the first and the fourth factor of the Cartesian product, and similarly for \(\pi\colon M_{1}\times M_{2}\times M_{3}\to M_{1}\times M_{3}\) below.

(ii) \(C^{\prime}\circ C\) is a submanifold of \(M_{1}\times M_{3}\) such that the induced map \(\pi\colon C^{\prime}\diamond C\to C^{\prime}\circ C\) is a smooth surjective submersion and the rank of the linear map \(p\colon(R^{\prime}\diamond R)_{c}\to(R^{\prime}\circ R)_{\pi(c)}\) is independent of \(c\in C^{\prime}\diamond C\).

Both these conditions ensure that \(R^{\prime}\circ R\) is a subbundle of \(E_{1}\times\overline{E}_{3}\) supported on \(C^{\prime}\circ C\) and \(p\colon R^{\prime}\diamond R\to R^{\prime}\circ R\) is a fibrewise surjective vector bundle map over \(\pi\colon C^{\prime}\diamond C\to C^{\prime}\circ C\). If either of the two equivalent conditions (i) or (ii) holds, we say that \(R\) and \(R^{\prime}\) _compose cleanly_. Vysoky shows in [13] that Propositions 3.12 and 3.13 together give

**Theorem 3.14**.: Let \(R\colon E_{1}\dashrightarrow E_{2}\) and \(R^{\prime}\colon E_{2}\dashrightarrow E_{3}\) be Courant algebroid relations over \(C\) and \(C^{\prime}\), respectively, which compose cleanly. Then \(R^{\prime}\circ R\) is an involutive structure supported on \(C^{\prime}\circ C\), hence it defines a Courant algebroid relation \(R^{\prime}\circ R\colon E_{1}\dashrightarrow E_{3}\).

**Example 3.15** (**Relations between Reduced Courant Algebroids**).: Let \(K\) be an isotropic subbundle of an exact Courant algebroid \(E\) over \(M\) such that \(\rho(K^{\perp})=TM\). Suppose there exists a subbundle \(K_{1}\subset K\) such that \(K_{1}^{\perp}\) has enough basic sections which are also basic with respect to \(K\). Then since \(K^{\perp}\subset K_{1}^{\perp}\), both \(K\) and \(K_{1}\) are involutive subbundles, hence they induce foliations \(\mathcal{F}\) and \(\mathcal{F}_{1}\) of \(M\), respectively, such that \(\mathcal{F}_{1}\subset\mathcal{F}\). We also assume that they have smooth leaf spaces \(\mathcal{Q}_{1}=M/\mathcal{F}_{1}\) and \(\mathcal{Q}=M/\mathcal{F}\), so that the quotient maps \(\varpi_{1}\colon M\to\mathcal{Q}_{1}\) and \(\varpi\colon M\to\mathcal{Q}\) complete to a commutative triangle of surjective submersions with a map \(\pi\colon\mathcal{Q}_{1}\to\mathcal{Q}\) satisfying \(\varpi=\pi\circ\varpi_{1}\). Since \(K^{\perp}\) and \(K_{1}^{\perp}\) have enough basic sections, we may form the Courant algebroid relations \(Q(K)\colon E\dashrightarrow\underline{E}\) and \(Q(K_{1})\colon E\dashrightarrow\underline{E}_{1}\), covering \(\varpi\) and \(\varpi_{1}\), respectively. We consider the composition of \(Q(K)\) with \(Q(K_{1})^{\top}\), and check the conditions in Propositions 3.12 and 3.13. It can be shown that the submanifolds \(\operatorname{gr}(\varpi_{1})^{\top}\times\operatorname{gr}(\varpi)\) and \(\mathcal{Q}_{1}\times\Delta(M)\times\mathcal{Q}\) intersect cleanly, and the projection \(\operatorname{gr}(\varpi)\diamond\operatorname{gr}(\varpi_{1})^{\top}\to\operatorname{gr}(\pi)\) is a smooth surjective submersion. For \((q_{1},\pi(q_{1}))\in\operatorname{gr}(\pi)\), the space \[\big(Q(K)\diamond Q(K_{1})^{\top}\big)_{(q_{1},\pi(q_{1}))}=\{\,\big(\natural_{E_{1}}(e),e,e,\natural(e)\big)\ \big|\ e\in K_{m}^{\perp}\,\}\] is isomorphic to \(K_{m}^{\perp}\), for any \(m\in M\) such that \(q_{1}=\varpi_{1}(m)\), and hence has constant dimension. Finally, the projection \(Q(K)\diamond Q(K_{1})^{\top}\to Q(K)\circ Q(K_{1})^{\top}\) has kernel given by \(K\cap K_{1}=K_{1}\), and hence has constant rank.
Thus the conditions (i) of both Propositions 3.12 and 3.13 are satisfied, and the composition is clean, giving the relation \[R(\mathcal{F}_{1})\coloneqq Q(K)\circ Q(K_{1})^{\top}=\{\,(\natural_{E_{1}}(e),\natural(e))\ \big|\ e\in K^{\perp}\,\}\ \subset\ \underline{E}_{1}\times\overline{\underline{E}}\,\] supported on the graph \(\operatorname{gr}(\pi)\). Example 3.15 may be seen as the foliated reduction counterpart of the well-known case of reduction of a \(\mathsf{G}\)-equivariant exact Courant algebroid by a closed Lie subgroup7 \(\mathsf{H}\subset\mathsf{G}\) that represents one of the building blocks of Poisson-Lie T-duality (see e.g. [12, Section 5.1]). We will explore this case in detail along the lines of [12, 13] in future work, including how to properly deal with reduction of generalised metrics. Footnote 7: This is usually chosen to be Lagrangian with respect to the split-signature pairing.

## 4. Courant Algebroid Relations as Isometries

In this section we review important ideas of [13, Section 5]. We then generalise these to cases of interest to us, proceeding to define generalised isometries for transverse generalised metrics.

### Generalised Metrics

We wish to know when a given Courant algebroid relation is able to carry geometric structure between Courant algebroids. In particular, we are interested in relations between generalised metrics as a building block for our approach to T-duality.

**Definition 4.1**.: A _generalised metric_ on a Courant algebroid \(E\) is an automorphism \(\tau\colon E\to E\) covering the identity with \(\tau^{2}=\mathds{1}_{E}\) which, together with the pairing \(\left\langle\,\cdot\,,\,\cdot\,\right\rangle\), defines a positive-definite fibrewise metric \[\mathcal{G}(e,e^{\prime})=\left\langle e,\tau(e^{\prime})\right\rangle \tag{4.2}\] on \(E\), for \(e,e^{\prime}\in E\). We denote the Courant algebroid \(E\) endowed with a generalised metric \(\tau\) by \((E,\tau)\).

Generalised metrics on exact Courant algebroids are characterised as follows, see e.g. [6, Proposition 3.5] for details.

**Proposition 4.3**.: Let \(E\) be an exact Courant algebroid over \(M\). A generalised metric \(\mathcal{G}\), as in Equation (4.2), uniquely determines a pair \((g,b)\in\mathsf{\Gamma}(\bigodot^{2}T^{*}M)\times\Omega^{2}(M)\) where \(g\) is positive-definite. Conversely, a Riemannian metric \(g\) and a two-form \(b\) on \(M\) define a generalised metric \(\mathcal{G}\) given by \[\mathcal{G}=\begin{pmatrix}g-b\,g^{-1}\,b&b\,g^{-1}\\ -g^{-1}\,b&g^{-1}\end{pmatrix} \tag{4.4}\] on the isomorphic twisted standard Courant algebroid \(\mathbb{T}M=TM\oplus T^{*}M\).

An alternative characterisation of generalised metrics is

**Definition 4.5**.: A _generalised metric_ is a subbundle \(V^{+}\subseteq E\) which is a maximal positive-definite subbundle of \(E\) with respect to \(\left\langle\,\cdot\,,\,\cdot\,\right\rangle\). We denote the Courant algebroid \(E\) endowed with a generalised metric \(V^{+}\) by \((E,V^{+})\).

**Remark 4.6** (**Generalised Metrics as Subbundles**).: Definitions 4.1 and 4.5 are equivalent, see e.g. [6, Proposition 3.3]: the maximal positive-definite subbundle \(V^{+}\) is the \(+1\)-eigenbundle for the generalised metric \(\tau\). Indeed, if \(\tau\) is given by \(g\) and \(b\) as in Proposition 4.3, then one can show \[V^{+}=\operatorname{gr}(g+b)=\{\,v+\iota_{v}(g+b)\in\mathbb{T}M\mid v\in TM\,\}\enspace. \tag{4.7}\] The \(-1\)-eigenbundle of \(\tau\) is given by \(V^{-}=\operatorname{gr}(-g+b)=(V^{+})^{\perp}\).
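As a quick check that \(V^{+}\) in Equation (4.7) is positive-definite (a sketch in the normalisation \(\left\langle X+\xi,Y+\eta\right\rangle=\iota_{X}\eta+\iota_{Y}\xi\); other conventions differ by an overall factor of \(\frac{1}{2}\)): for any non-zero \(v\in TM\) one computes \[\big\langle v+\iota_{v}(g+b),v+\iota_{v}(g+b)\big\rangle=2\,(g+b)(v,v)=2\,g(v,v)>0\,\] since \(b\) is antisymmetric; the same computation on \(V^{-}=\operatorname{gr}(-g+b)\) gives \(-2\,g(v,v)<0\), so \(V^{\pm}\) are indeed maximally positive-definite and negative-definite, respectively.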
Since Definitions 4.1 and 4.5 are equivalent, we can switch between the notations \((E,\tau)\) and \((E,V^{+})\). ### Generalised Isometries We now have enough machinery to talk about a notion of isometries of generalised metrics for Courant algebroid relations, following [13]. **Definition 4.8**.: Suppose \(R\colon E_{1}\dasharrow E_{2}\) is a Courant algebroid relation supported on a submanifold \(C\subseteq M_{1}\times M_{2}\). Let \(\tau_{1}\) and \(\tau_{2}\) be generalised metrics for \(E_{1}\) and \(E_{2}\) respectively, and set \(\tau\coloneqq\tau_{1}\times\tau_{2}\). Then \(R\) is a _generalised isometry between \(\tau_{1}\) and \(\tau_{2}\)_ if \(\tau(R)=R\). It follows from the definitions that \(\tau=\tau_{1}\times\tau_{2}\) is a generalised metric for \(E_{1}\times E_{2}\). It is not, however, a generalised metric for \(E_{1}\times\overline{E}_{2}\): the \(\pm 1\)-eigenbundles are not necessarily maximally positive-definite with respect to the pairing on \(E_{1}\times\overline{E}_{2}\). See [13, Remark 5.2] for details. **Remark 4.9** (**Classical Generalised Isometries**).: If \(R\) is a classical Courant algebroid morphism, then \(R=\operatorname{gr}(\Phi)\) for some vector bundle map \(\Phi:E_{1}\to E_{2}\) covering a smooth map \(\varphi\colon M_{1}\to M_{2}\). For \(\operatorname{gr}(\Phi)\) to be a generalised isometry, the equation \[\tau_{2}\circ\Phi=\Phi\circ\tau_{1}\] must hold, or equivalently \[\mathcal{G}_{2}\big{(}\Phi(e_{1}),\Phi(e_{1}^{\prime})\big{)}_{\varphi(m_{1}) }=\mathcal{G}_{1}(e_{1},e_{1}^{\prime})_{m_{1}}\enspace,\] for each \(e_{1},e_{1}^{\prime}\in(E_{1})_{m_{1}}\) with \(m_{1}\in M_{1}\), for the metrics \(\mathcal{G}_{1}\) and \(\mathcal{G}_{2}\) induced by \(\tau_{1}\) and \(\tau_{2}\) respectively. This justifies the terminology 'generalised isometry' [13]. For later purposes, let us stress that a further equivalent condition for a classical Courant algebroid morphism \(\Phi\) to be a generalised isometry is \[\Phi(V_{1}^{+})=V_{2}^{+}\enspace, \tag{4.10}\] where \(V_{i}^{+}=\ker(\mathds{1}_{E_{i}}-\tau_{i})\) is the \(+1\)-eigenbundle of \(\tau_{i}\) for \(i=1,2\). When \(\Phi\) is a Courant algebroid isomorphism, generalised isometries are characterised by **Proposition 4.11**.: Let \(E_{1}\) and \(E_{2}\) be exact Courant algebroids over \(M_{1}\) and \(M_{2}\), respectively, and suppose they are endowed with generalised metrics \(\tau_{1}\) and \(\tau_{2}\), respectively. A Courant algebroid isomorphism \(\Phi\colon E_{1}\to E_{2}\), covering \(\varphi\in\operatorname{\mathsf{Diff}}(M_{1},M_{2})\), is a generalised isometry if and only if \[\varphi^{*}g_{2}=g_{1}\qquad\text{and}\qquad\varphi^{*}b_{2}=b_{1}+B\enspace,\] where, by using the canonical isomorphism \(E_{i}\simeq TM_{i}\oplus TM_{i}^{*}\) given by the splitting induced by the generalised metric, \(\tau_{i}\) is determined by the pair \((g_{i},b_{i})\) for \(i=1,2\), and (with a slight abuse of notation) \(\Phi=\overline{\varphi}\circ\,\mathrm{e}^{\,B}\) for some \(B\in\Omega^{2}(M_{1})\). 
Proof.: By choosing a splitting for \(E_{1}\), its generalised metric is \(V_{1}^{+}\simeq\operatorname{gr}(g_{1}+b_{1}).\) It then follows that \[\Phi\big{(}v_{1}+\iota_{v_{1}}(g_{1}+b_{1})\big{)}=\varphi_{*}(v_{1})+(\varphi^{- 1})^{*}\big{(}\iota_{v_{1}}(g_{1}+b_{1}+B)\big{)}\,\] for all \(v_{1}\in TM_{1}\), is an element in \(V_{2}^{+}\) if and only if \(\varphi^{*}g_{2}=g_{1}\) and \(\varphi^{*}b_{2}=b_{1}+B\), since \(V_{2}^{+}\simeq\operatorname{gr}(g_{2}+b_{2})\) in the chosen splitting for \(E_{2}\). **Remark 4.12**.: Proposition 4.11 reduces to [14, Proposition 2.41] when \(M_{1}=M_{2}=M\) and \(\tau_{1}=\tau_{2}=\tau\), i.e. in this case \(\Phi\) is a generalised isometry if and only if \(\varphi^{*}g=g\) and \(B=0\). Following the equivalence between Definitions 4.1 and 4.5, we can formulate an alternative definition of generalised isometry in terms of the eigenbundles of the automorphisms \(\tau_{1}\) and \(\tau_{2}\). **Definition 4.13**.: Suppose \(R\colon E_{1}\dashrightarrow E_{2}\) is a Courant algebroid relation supported on a submanifold \(C\subseteq M_{1}\times M_{2}\). Let \(V_{1}^{+}\) and \(V_{2}^{+}\) be generalised metrics for \(E_{1}\) and \(E_{2}\) respectively, and set \(V_{i}^{-}\coloneqq(V_{i}^{+})^{\perp}\). Let \(\mathcal{V}^{\pm}=V_{1}^{\pm}\times V_{2}^{\pm}\). Then \(R\) is a _generalised isometry between \(V_{1}^{+}\) and \(V_{2}^{+}\)_ if \[R_{c}=(\mathcal{V}_{c}^{+}\cap R_{c})\oplus(\mathcal{V}_{c}^{-}\cap R_{c})\, \tag{4.14}\] for each \(c\in C\). We briefly comment on the pointwise nature of this definition, which will be a theme throughout this section. Due to the nature of relations, we only consider their pointwise description, and do not demand that our definition holds globally over the submanifold \(C\) that \(R\) is supported on. In particular, in Equation (4.14) we do not demand that \(\mathcal{V}^{\pm}\big{|}_{C}\cap R\) are both subbundles supported on \(C\), since the intersection might not have constant rank. We could distinguish from cases where global descriptions (on \(C\)) are possible, however this is not necessary for the description of generalised isometries, as made clear through **Proposition 4.15**.: Definition 4.8 and Definition 4.13 are equivalent. Proof.: Suppose \(\tau(R)=R\). The bundles \(\mathcal{V}^{\pm}\) are the \(\pm 1\)-eigenbundles for \(\tau=\tau_{1}\times\tau_{2}\). Take \(c\in C\) and let \(r\in R_{c}\). Since \(E_{1}\times\overline{E}_{2}=\mathcal{V}^{+}\oplus\mathcal{V}^{-}\), we can write \(r=r^{+}+r^{-}\), for \(r^{\pm}\in\mathcal{V}_{c}^{\pm}\). Thus \[\tau(r^{+}+r^{-})=r^{+}-r^{-}\ \in\ R_{c}\.\] Hence both \(r^{+}+r^{-}\) and \(r^{+}-r^{-}\) are in \(R_{c}\), so that \(r^{+},r^{-}\in R_{c}\), and it follows that \(R_{c}\subseteq(\mathcal{V}_{c}^{+}\cap R_{c})\oplus(\mathcal{V}_{c}^{-}\cap R _{c})\), hence Equation (4.14) holds. Conversely, if \(R_{c}\) decomposes as in Equation (4.14) for every \(c\in C\), then each \(r\in R_{c}\) can be written as \(r=r^{+}+r^{-}\) where \(r^{\pm}\in\mathcal{V}_{c}^{\pm}\cap R_{c}\). Hence \(\tau(r)=r^{+}-r^{-}\in R_{c}\), so \(\tau(R_{c})=R_{c}\). Since this holds for every \(c\in C\), it follows that \(\tau(R)=R\). Composition of generalised isometries works well: the composition \(R^{\prime}\circ R\) of two Courant algebroid relations that are generalised isometries and that compose cleanly is also a generalised isometry, see [13, Proposition 5.7]. 
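The simplest instance of Proposition 4.11 is worth recording (a sketch of our own, for the untwisted case with \(b_{i}=0\) and \(B=0\)): if \(\varphi\colon(M_{1},g_{1})\to(M_{2},g_{2})\) is an isometry of Riemannian manifolds, then the induced map \(\overline{\varphi}\colon\mathbb{T}M_{1}\to\mathbb{T}M_{2}\), \[\overline{\varphi}(v+\xi)=\varphi_{*}(v)+(\varphi^{-1})^{*}\xi\,\] is a generalised isometry between the generalised metrics determined by the pairs \((g_{1},0)\) and \((g_{2},0)\), since \(\varphi^{*}g_{2}=g_{1}\); any non-trivial \(B\)-field factor in \(\Phi=\overline{\varphi}\circ\mathrm{e}^{\,B}\) must instead be compensated by \(\varphi^{*}b_{2}=b_{1}+B\).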
**Example 4.16** (\(\boldsymbol{Q(K)}\) is not a Generalised Isometry).: Let us look at the relation \(Q(K)\) of Section 3.2, where \(K\) is an isotropic subbundle8 inducing a regular foliation \(\mathcal{F}\) of \(M\) given by the integral manifolds of \(\rho(K)\) with smooth leaf space \(\mathcal{Q}=M/\mathcal{F}\) and projection map \(\varpi\colon M\to\mathcal{Q}\). Take generalised metrics \(\mathscr{V}^{+}\subset E\) and \(V^{+}\subset\underline{E}\): a generalised metric \(V^{+}\) may be constructed from \(\mathscr{V}^{+}\) whenever \(W=\mathscr{V}^{+}\) satisfies Equation (2.34) of Proposition 2.33. This is similar to [13, Example 5.6], where Equation (2.34) is tantamount to \(\mathsf{G}\)-equivariance; therein Vysoky concludes that \(Q(K)\) is not a generalised isometry. In our case we argue as follows. If \(Q(K)\) is a generalised isometry between \(\mathscr{V}^{+}\) and \(V^{+}\), at a point \((m,\varpi(m))\in\operatorname{gr}(\varpi)\) we could write \[Q(K)_{(m,\varpi(m))}=\big(\mathcal{V}^{+}_{(m,\varpi(m))}\cap Q(K)_{(m,\varpi(m))}\big)\oplus\big(\mathcal{V}^{-}_{(m,\varpi(m))}\cap Q(K)_{(m,\varpi(m))}\big)\,\] where \(\mathcal{V}^{\pm}=\mathscr{V}^{\pm}\times V^{\pm}\). Since \(\big(k,0^{\underline{E}}_{\varpi(m)}\big)\in Q(K)_{(m,\varpi(m))}\), for \(k\in K_{m}\), this implies \[\big(k,0^{\underline{E}}_{\varpi(m)}\big)=\big(k^{+},0^{\underline{E}}_{\varpi(m)}\big)+\big(k^{-},0^{\underline{E}}_{\varpi(m)}\big)\,\] where \(\big(k^{\pm},0^{\underline{E}}_{\varpi(m)}\big)\in\mathcal{V}^{\pm}_{(m,\varpi(m))}\cap Q(K)_{(m,\varpi(m))}\). Since these live inside \(Q(K)\), and the kernel of \(\natural\) is \(K\), it follows that \(k^{\pm}\in K_{m}\). Thus \(k^{\pm}\in K_{m}\cap\mathscr{V}^{\pm}_{m}\). Choosing \(k\neq 0\), at least one of \(k^{\pm}\) is non-zero; if \(k^{+}\neq 0\), then \[0<\big\langle k^{+},k^{+}\big\rangle_{E}=0\,\] which is a contradiction (for \(k^{-}\neq 0\) one obtains \(0>\langle k^{-},k^{-}\rangle_{E}=0\) instead). Hence \(Q(K)\) cannot be a generalised isometry. Notice that the isotropy of \(K\) here plays a crucial role. It is shown in [13, Example 5.6] that \(Q(K)\) may be a generalised isometry for non-isotropic \(\mathsf{G}\)-actions only if \(K\cap K^{\perp}=\{\,0\,\}\). We will explore non-isotropic reductions by foliations in future work.

Example 4.16 motivates a characterising property of generalised isometries given by

**Proposition 4.17**.: Let \(R\colon E_{1}\dashrightarrow E_{2}\) be a generalised isometry between \(\tau_{1}\in\mathsf{Aut}(E_{1})\) and \(\tau_{2}\in\mathsf{Aut}(E_{2})\) supported on \(C\subseteq M_{1}\times M_{2}\). For each \(c=(m_{1},m_{2})\in C\), define subsets \(K_{1}\subset E_{1}\) and \(K_{2}\subset E_{2}\) by \[(K_{1})_{m_{1}}\times\{\,0^{E_{2}}_{m_{2}}\,\}=R_{c}\cap\big((E_{1})_{m_{1}}\times\{\,0^{E_{2}}_{m_{2}}\,\}\,\big)\, \tag{4.18}\] \[\{\,0^{E_{1}}_{m_{1}}\,\}\times(K_{2})_{m_{2}}=R_{c}\cap\big(\,\{\,0^{E_{1}}_{m_{1}}\,\}\times(E_{2})_{m_{2}}\big). \tag{4.19}\] Then \((K_{i})_{m_{i}}=\{\,0^{E_{i}}_{m_{i}}\,\}\) for \(i=1,2\).

Proof.: Let \(c=(m_{1},m_{2})\in C\) and \(k_{1}\in(K_{1})_{m_{1}}\). Then \[(\tau_{1}\times\tau_{2})(k_{1},0)=\big(\tau_{1}(k_{1}),0\big)\ \in\ R_{c}\,\] since \(\tau(R)=R\), and therefore \(\big(\tau_{1}(k_{1}),0\big)\in(K_{1})_{m_{1}}\times\{\,0^{E_{2}}_{m_{2}}\,\}\) by Equation (4.18); hence \((K_{1})_{m_{1}}\) is invariant under \(\tau_{1}\).
Since \(R\) is isotropic, it follows that \((K_{1})_{m_{1}}\) is also isotropic, as \(0=\langle(k_{1},0),(k^{\prime}_{1},0)\rangle=\langle k_{1},k^{\prime}_{1} \rangle_{E_{1}}-\langle 0,0\rangle_{E_{2}}\) for all \(k_{1},k^{\prime}_{1}\in(K_{1})_{m_{1}}\), hence \[\mathcal{G}_{1}(k_{1},k_{1})=\langle k_{1},\tau_{1}(k_{1})\rangle_{E_{1}}=0\.\] Therefore \((K_{1})_{m_{1}}=\{\,0^{E_{1}}_{m_{1}}\,\}\) for every \(m_{1}\in\operatorname{pr}_{1}(C)\). Similarly \((K_{2})_{m_{2}}=\{\,0^{E_{2}}_{m_{2}}\,\}\) for every \(m_{2}\in\operatorname{pr}_{2}(C)\). A useful uniqueness result for generalised metrics now follows from **Corollary 4.20**.: Let \(R\colon E_{1}\dashrightarrow E_{2}\) be a generalised isometry between \(\tau_{1}\in\mathsf{Aut}(E_{1})\) and \(\tau_{2}\in\mathsf{Aut}(E_{2})\) which is also a generalised isometry between \(\tau_{1}\in\mathsf{Aut}(E_{1})\) and \(\tau^{\prime}_{2}\in\mathsf{Aut}(E_{2})\). Then \(\tau_{2}|_{\operatorname{pr}_{2}(R)}=\tau^{\prime}_{2}|_{\operatorname{pr}_{2} (R)}\), where \(\operatorname{pr}_{2}:E_{1}\times\overline{E}_{2}\to\overline{E}_{2}\) is the projection. Proof.: Let \((e_{1},e_{2})\in R\). Then \((\tau_{1}(e_{1}),\tau_{2}(e_{2}))\in R\) and \((\tau_{1}(e_{1}),\tau^{\prime}_{2}(e_{2}))\in R\), therefore \((0,\tau_{2}(e_{2})-\tau^{\prime}_{2}(e_{2}))\in R\). Hence \(\tau_{2}(e_{2})=\tau^{\prime}_{2}(e_{2})\) for all \(e_{2}\in\operatorname{pr}_{2}(R)\) by Proposition 4.17. ### Transverse Generalised Isometries Example 4.16 and Proposition 4.17 highlight the need to go beyond the notion of generalised isometry. In particular, we wish to extend the definition of generalised isometry to the case when the subsets \(K_{i}\subset E_{i}\) defined by Equations (4.18) and (4.19) are non-trivial. For this, we assume from now on that the subsets \(K_{i}\) are subbundles supported on \(M_{i}.\) Hence \(\operatorname{pr}_{i}(C)=M_{i},\) where \(\operatorname{pr}_{i}\colon M_{1}\times M_{2}\to M_{i}\) are the Cartesian projections for \(i=1,2.\) #### 4.3.1. Transverse Generalised Metrics We recall the notion of transverse generalised metrics, as first introduced in [34], which will aid in understanding the properties of our extension of generalised isometry. **Definition 4.21**.: Let \(E\) be an exact Courant algebroid over \(M\) and \(K\) an involutive isotropic subbundle of \(E\). A subbundle \(W\subset E\) of rank \(\operatorname{rk}(W)=\dim(M)\) is a _pre-\(K\)-transverse generalised metric_ if \(K\subset W\subset K^{\perp}\) and \[\langle w,w\rangle_{E}>0\,\] for all \(w\in W\) with \(w\notin K\). A pre-\(K\)-transverse generalised metric \(W\) is a \(K\)-_transverse generalised metric_ if it is invariant with respect to \(K\), in the sense that \[[\![\Gamma(K),\Gamma(W)]\!]_{E}\subseteq\Gamma(W)\.\] An exact Courant algebroid \(E\) endowed with a transverse generalised metric \(W\) is denoted by \((E,W)\). Note that this is the same condition appearing in the context of reducible subbundles of exact Courant algebroids, see Remark 2.35. **Remark 4.22** (**Transverse Generalised Metrics as Graphs**).: Similarly to Remark 4.6 (see Equation (4.7)), suppose that the restriction of the anchor \(\rho_{E}|_{K}\colon K\to TM\) to \(K\) is injective, and that \(W\) is a pre-\(K\)-transverse generalised metric. 
Then in a given splitting, there is a (degenerate) symmetric bilinear pairing \(g\) and a (degenerate) two-form \(b\), with \(\ker(b)\supset\ker(g)\), such that \[W\simeq\operatorname{gr}(g+b)=\left\{\,v+\iota_{v}(g+b)\in\mathbb{T}M\ |\ v\in TM\,\right\}\.\] In this splitting, \(K\simeq\ker(g)\) is a subbundle of \(TM\subset E\). If \(W\) is a \(K\)-transverse generalised metric, then \[\pounds_{X}g=\pounds_{X}b=0\qquad\text{and}\qquad\iota_{X}H=0\,\] for every \(X\in\Gamma(\rho_{E}(K))\), where \(H\in\Omega^{3}_{\operatorname{cl}}(M)\) represents the Severa class of \(E\). The converse is also true. The result of Remark 4.22 allows an interpretation in terms of Riemannian submersions and Riemannian foliations, as shown in [34].

**Example 4.23** (**Riemannian Foliations**).: This construction extends to any manifold \((M,g)\) endowed with a foliation \(\mathcal{F}\) such that the degenerate symmetric bilinear tensor \(g\) satisfies \(\ker(g)=T\mathcal{F}\) and is leaf-invariant: \[\pounds_{X}g=0\,\] for all \(X\in\Gamma(T\mathcal{F})\). In other words, \((M,g,\mathcal{F})\) is a Riemannian foliation. It is easy to see that \[W=\operatorname{gr}(g)\] is a \(K\)-transverse generalised metric, where \(K\simeq T\mathcal{F}\) is an involutive isotropic subbundle of \(\mathbb{T}M\) and \(K^{\perp}=TM\oplus\operatorname{Ann}(T\mathcal{F})\).

#### 4.3.2. Relations for Transverse Generalised Metrics

The main goal of this section may be rephrased as giving a definition of isometry of transverse generalised metrics. In order to provide more insight into what our definition should look like, let us first explore the properties of the most natural way to extend generalised isometries: by adapting Equation (4.10) to (pre-)transverse generalised metrics.

**Remark 4.24** (**Isomorphisms and Transverse Generalised Metrics**).: Suppose \(E_{1}\) and \(E_{2}\) are exact Courant algebroids over \(M_{1}\) and \(M_{2}\), respectively, which are isomorphic via a map \(\Phi\colon E_{1}\to E_{2}\) covering a diffeomorphism \(\varphi\colon M_{1}\to M_{2}\). Suppose \(K_{1}\) and \(K_{2}\) are involutive isotropic subbundles of \(E_{1}\) and \(E_{2}\), respectively, and that \(W_{i}\) are pre-\(K_{i}\)-transverse generalised metrics for \(i=1,2\). The natural definition would be to call \(\Phi\) an isometry of these metrics if \(\Phi(W_{1})=W_{2}\). This turns out to be too restrictive. Let us discuss the conditions arising from this choice. Upon choosing splittings for \(E_{i}\), we have \(\Phi=\overline{\varphi}\circ\operatorname{e}^{B}\) for some \(B\in\Omega^{2}(M_{1})\), and \(W_{i}\simeq\operatorname{gr}(g_{i}+b_{i})\) for some \(g_{i}\in\mathsf{\Gamma}\big(\bigodot^{2}T^{*}M_{i}\big)\) and \(b_{i}\in\Omega^{2}(M_{i})\). As in Proposition 4.11, \(\Phi(W_{1})=W_{2}\) if and only if \(\varphi^{*}g_{2}=g_{1}\) and \(\varphi^{*}b_{2}=b_{1}+B\). However, the condition that \(\varphi\) should map the degenerate symmetric tensors \(g_{i}\) into each other implies that \[\varphi_{*}\big(\ker(g_{1})\big)=\ker(g_{2})\,\] which is equivalent to requiring \(\Phi(K_{1})=K_{2}\). In particular, if \(M_{i}\) are endowed with Riemannian foliations, by Example 4.23 this is tantamount to requiring that \(\varphi\) is a foliation-preserving diffeomorphism.
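For concreteness, Example 4.23 can be realised in the coordinate sketch used in Section 2.3 (our own illustration, in the normalisation \(\left\langle X+\xi,Y+\eta\right\rangle=\iota_{X}\eta+\iota_{Y}\xi\)): on \(M=\mathbb{R}^{2}\) with \(\mathcal{F}\) the foliation by the lines \(x=\mathrm{const}\), take \(g=f(x)\,\mathrm{d}x\otimes\mathrm{d}x\) with \(f>0\). Then \(\ker(g)=T\mathcal{F}\) and \(\pounds_{\partial_{y}}g=0\), so \((M,g,\mathcal{F})\) is a Riemannian foliation, and \[W=\operatorname{gr}(g)=\left\{\,a\,\partial_{x}+b\,\partial_{y}+a\,f\,\mathrm{d}x\,\right\}\] is a \(K\)-transverse generalised metric for \(K\simeq\operatorname{Span}\{\,\partial_{y}\,\}\): indeed \(\left\langle w,w\right\rangle=2\,f\,a^{2}>0\) unless \(a=0\), i.e. unless \(w\in K\).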
We would like to introduce a notion of generalised isometry for transverse generalised metrics that is not constrained to being foliation-preserving. This choice is motivated by the possibility of formalising the description of T-duality where T-dual manifolds may not be diffeomorphic, and hence the foliations defining the reductions to these backgrounds cannot be bijectively mapped into each other (see Section 5 for more details). Instead we consider \(W_{i}^{+}\coloneqq W_{i}/K_{i}\). We then require \(\Phi(\widetilde{W}_{1}^{+})=\widetilde{W}_{2}^{+}\) for some lifts \(\widetilde{W}_{i}^{+}\) of \(W_{i}^{+}\) to \(E_{i}\). In order to extend this condition to a general Courant algebroid relation, we mimic Definition 4.13 using this idea.

**Definition 4.25**.: Let \(E_{1}\) and \(E_{2}\) be Courant algebroids over \(M_{1}\) and \(M_{2}\), respectively, and let \(R\colon E_{1}\dashrightarrow E_{2}\) be a Courant algebroid relation supported on \(C\subset M_{1}\times M_{2}\). Suppose \(K_{1}\) and \(K_{2}\) are involutive isotropic subbundles of \(E_{1}\) and \(E_{2}\), respectively, and that \(W_{i}\) are pre-\(K_{i}\)-transverse generalised metrics for \(i=1,2\). Set \(W_{i}^{+}\coloneqq W_{i}/K_{i}\) and \(W_{i}^{-}\coloneqq W_{i}^{\perp}/K_{i}\). Then \(R\) is a _transverse generalised isometry9 between \(W_{1}\) and \(W_{2}\)_ if there are pointwise lifts \(\widetilde{W}_{i}^{+}\subseteq W_{i}\) and \(\widetilde{W}_{i}^{-}\subseteq W_{i}^{\perp}\) of \(W_{i}^{+}\) and \(W_{i}^{-}\), respectively, to \(E_{i}\) such that \[R_{c}\cap(\mathcal{W}_{c}^{+}\oplus\mathcal{W}_{c}^{-})=(R_{c}\cap\mathcal{W}_{c}^{+})\oplus(R_{c}\cap\mathcal{W}_{c}^{-})\, \tag{4.26}\] for all \(c\in C\), where \(\mathcal{W}^{\pm}=\widetilde{W}_{1}^{\pm}\times\widetilde{W}_{2}^{\pm}\). A transverse generalised isometry \(R\) is _regular_ if the lifts \(\widetilde{W}_{i}^{\pm}\) are subbundles of \(E_{i}\) for \(i=1,2\). Note that the inclusion \(\supseteq\) in Equation (4.26) always holds. Footnote 9: We are aware that our terminology 'transverse generalised isometry' might be a misnomer, since these relations do not necessarily preserve the subbundles \(K_{i}\) defining the transversality conditions. We chose this name for the sake of simplicity, since the \(K_{i}\)-preserving case is included in our definition.

**Remark 4.27**.: This definition is independent of the choices of lifts, since the transverse generalised metrics are given by \(W_{i}=\widetilde{W}_{i}^{+}\oplus K_{i}\) for any lift \(\widetilde{W}_{i}^{+}\) of \(W_{i}^{+}\). We will see how lifting \(W_{1}^{+}\) plays a crucial role in the description of T-duality, generalising the picture presented in [8].

Definition 4.25 may be better understood by looking at the case of a classical Courant algebroid morphism.

**Remark 4.28** (**Lifts for Transverse Generalised Isometries**).: For a classical Courant algebroid morphism \(\Phi\colon E_{1}\to E_{2}\) that is a generalised isometry between some (not transverse) generalised metrics \(V_{i}^{+}\) on \(E_{i}\), the respective \(\pm 1\)-eigenbundles of \(\tau_{i}\) are mapped onto each other. For a transverse generalised metric \(W\), it would carry not only the information of the \(+1\)-eigenbundle of some (degenerate) endomorphism,10 but also the subspace \(K\) of the \(0\)-eigenbundle. We want to avoid mapping this degenerate part, so we consider \(W_{i}^{+}\), which is isomorphic to the \(+1\)-eigenbundle, though it is actually given by the quotient space \(W_{i}/K_{i}\). Footnote 10: In this paper we adopt only the subbundle approach for the sake of simplicity.
The lift \(\widetilde{W}_{i}^{+}\) is a "choice" of \(+1\)-eigenbundle: \[\left\langle w_{i},w_{i}\right\rangle_{E_{i}}>0\,\] for every non-zero \(w_{i}\in\widetilde{W}_{i}^{+}\). In other words, it is a choice of splitting \(s_{m_{i}}^{+}\colon(W_{i}^{+})_{m_{i}}\to(W_{i})_{m_{i}}\), at each point \(m_{i}\in M_{i}\), of the short exact sequence \[0\longrightarrow(K_{i})_{m_{i}}\longrightarrow(W_{i})_{m_{i}}\longrightarrow(W_{i}^{+})_{m_{i}}\longrightarrow 0\, \tag{4.29}\] with \((\widetilde{W}_{i}^{+})_{m_{i}}=s_{m_{i}}^{+}\big((W_{i}^{+})_{m_{i}}\big)\). Similarly, the lift \(\widetilde{W}_{i}^{-}\) is a "choice" of \(-1\)-eigenbundle. This shows that we recover Definition 4.13 for generalised metrics when \(K_{i}\) is the zero subbundle. That we may have a choice of splitting at every point implies the splittings can be wildly different from point to point, but the definition is not affected since Equation (4.26) is given pointwise. If the splitting \(s^{+}\) varies smoothly, then we may consider the short exact sequence (4.29) to be a short exact sequence of vector bundles and the splitting is a bundle map, thus \(R\) is a regular transverse generalised isometry. This will be the case in Section 5.

The main case in which regular transverse generalised isometries exist is given by

**Theorem 4.30**.: Consider the reduction relation \(Q(K)\colon E\dashrightarrow\underline{E}\) over \(\varpi\colon M\to\mathcal{Q}\) of Section 3.2 and let \(V^{+}\subset\underline{E}\) be a generalised metric on \(\underline{E}\). There exists a unique \(K\)-transverse generalised metric \(W\) on \(E\) such that \(Q(K)\) is a regular transverse generalised isometry between \(W\) and \(V^{+}\). Conversely, if there is a \(K\)-transverse generalised metric \(W\) on \(E\), then \(V^{+}\coloneqq\natural(W)\) is a generalised metric on \(\underline{E}\) and \(Q(K)\) is a regular transverse generalised isometry between \(W\) and \(V^{+}\).

Proof.: Take a generalised metric \(V^{+}\) on \(\underline{E}\). Recall that for every \(q\in\mathcal{Q}\) and \(m\in N_{q}\) there is an isomorphism \(\mathscr{J}_{q,m}\colon\underline{E}_{q}\to K_{m}^{\perp}/K_{m}\) given by Equation (2.30). Define \(W_{m}^{+}=\mathscr{J}_{q,m}(V_{q}^{+})\). For two points \(m,m^{\prime}\in N_{q}\), take \(w^{+}\in W_{m}^{+}\) and \(w^{\prime+}\in W_{m^{\prime}}^{+}\) such that \(\mathscr{J}_{q,m}^{-1}(w^{+})=\mathscr{J}_{q,m^{\prime}}^{-1}(w^{\prime+})\). By the construction in the sketched proof of Theorem 2.27, there is a basic section \(w\) such that, under the projection \(K^{\perp}\to K^{\perp}/K\), \(w\) maps to \(w^{+}\) at \(m\) and \(w^{\prime+}\) at \(m^{\prime}\). Take \(W\) to be the span of such basic sections \(w\). Then \(K\subset W\subset K^{\perp}\) and \(\left\langle w,w\right\rangle_{E}>0\) for every \(w\in W\) such that \(w\notin K\). Hence \(W\) is a pre-\(K\)-transverse generalised metric on \(E\). By Remark 2.35, for \(w\in\mathsf{\Gamma}(W)\) we can write \(w=\sum_{i}\,f_{i}\,w_{i}\), for \(f_{i}\in C^{\infty}(M)\) (so that the sum is locally finite) and \(w_{i}\in\mathsf{\Gamma}_{\mathrm{bas}}(W)\). Then \[[\![k,w]\!]_{E}=\sum_{i}\,f_{i}\,[\![k,w_{i}]\!]_{E}+\sum_{i}\,\left(\rho_{E}(k)\cdot f_{i}\right)w_{i}\,\] for each \(k\in\mathsf{\Gamma}(K)\). Since \(w_{i}\) are basic, it follows that \([\![\mathsf{\Gamma}(K),\mathsf{\Gamma}(W)]\!]_{E}\subseteq\mathsf{\Gamma}(W)\), hence \(W\) is a \(K\)-transverse generalised metric and \(\natural(W)=V^{+}\).
Uniqueness is seen as follows: Suppose there exists another \(K\)-transverse generalised metric \(W^{\prime}\) such that \(\natural(W^{\prime})=V^{+}\). By Remark 2.35, it follows that \(W^{\prime}\) is spanned pointwise by \(\{\,w_{i}^{\prime}\,\}\subset\mathsf{\Gamma}_{\mathrm{bas}}(W^{\prime})\). Thus \(v_{i}=\natural(w_{i}^{\prime})\) give a set of sections spanning \(V^{+}\) pointwise. Now we apply the previous construction to \(v_{i}\) to construct \(W\). Hence we get \(W\) spanned by \(w_{i}\) with \(\natural(w_{i})=\natural(w_{i}^{\prime})\). Therefore \(w_{i}\) and \(w_{i}^{\prime}\) differ by sections of \(K\), so \(W=W^{\prime}\).

Conversely, starting with a \(K\)-transverse generalised metric \(W\) on \(E\), then \(V^{+}\coloneqq\natural(W)\) defines a generalised metric on \(\underline{E}\), since the \(K\)-transversality condition makes \(V^{+}\) a well-defined subbundle of \(\underline{E}\) over \(\mathcal{Q}\) by Proposition 2.33 and \[\left\langle v,v\right\rangle_{\underline{E}}=\left\langle w,w\right\rangle_{E}\geq 0\,\] for every \(v=\natural(w)\in V^{+}\) with \(w\in W\), with equality if and only if \(w\in K\), i.e. \(v=0\). It also follows that \(\mathrm{rk}(V^{+})=\dim(\mathcal{Q})\).

We now show, in both cases, that \(Q(K)\) is a regular transverse generalised isometry between \(W\) and \(V^{+}\). Fix \(p=(m,\varpi(m))\in\mathrm{gr}(\varpi)\subset M\times\mathcal{Q}\). Take arbitrary smooth splittings \(\widetilde{W}^{\pm}\) of the sequences of the form (4.29) for \(W\) and \(W^{\perp}\), i.e. splittings of the short exact sequences of vector bundles \[0\longrightarrow K\longrightarrow W\longrightarrow W^{+}\longrightarrow 0\qquad\text{and}\qquad 0\longrightarrow K\longrightarrow W^{\perp}\longrightarrow W^{-}\longrightarrow 0\] over \(M\), and set \(\mathcal{W}^{\pm}=\widetilde{W}^{\pm}\times V^{\pm}\). Then \[Q(K)_{p}\cap(\mathcal{W}^{+}\oplus\mathcal{W}^{-})_{p}=\{\,(e,\natural(e))\mid e\in K_{m}^{\perp}\cap(\widetilde{W}_{m}^{+}\oplus\widetilde{W}_{m}^{-})\,\}\.\] Thus if \((e,\natural(e))\in Q(K)_{p}\cap(\mathcal{W}^{+}\oplus\mathcal{W}^{-})_{p}\), then \[\left(e,\natural(e)\right)=\left(e_{+}+e_{-},\natural(e_{+}+e_{-})\right)\,,\] with \(e_{\pm}\in\widetilde{W}_{m}^{\pm}\). By construction \(\natural(\widetilde{W}^{\pm})=V^{\pm}\). Hence \[\left(e,\natural(e)\right)=(e_{+}+e_{-},\underline{e}_{+}+\underline{e}_{-})=(e_{+},\underline{e}_{+})+(e_{-},\underline{e}_{-})\ \in\ \left(Q(K)_{p}\cap\mathcal{W}_{p}^{+}\right)\oplus\left(Q(K)_{p}\cap\mathcal{W}_{p}^{-}\right)\,,\] where \(\underline{e}_{\pm}=\natural(e_{\pm})=\natural(e)_{\pm}\in V^{\pm}\). Thus Equation (4.26) follows, and \(Q(K)\) is a regular transverse generalised isometry between \(W\) and \(V^{+}\).

**Example 4.31**.: In the split case \(E=(\mathbb{T}M,H)\), a generalised metric on \(\underline{E}=(\mathbb{T}\mathcal{Q},\underline{H})\) is given by a metric \(\underline{g}\in\Gamma(\bigodot^{2}T^{*}\mathcal{Q})\) and a two-form \(\underline{b}\in\Omega^{2}(\mathcal{Q})\). Then the pullbacks \(\varpi^{*}\underline{g}\) and \(\varpi^{*}\underline{b}\) define a \(K\)-transverse generalised metric \(W\) such that \(\natural(W)=V^{+}\), and \(Q(\mathcal{F})\) is a regular transverse generalised isometry between \(W\) and \(V^{+}\).

**Remark 4.32**.: We can take arbitrary lifts \(\widetilde{W}^{+}\) of \(W/K\) and \(\widetilde{W}^{-}\) of \(W^{\perp}/K\) such that \(Q(K)\) can still be regarded as an isometry. We will exploit this in Section 5 for our description of T-duality where we will see that, when composing transverse generalised isometries, there are restrictions on the lifts that we can take.
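In the split setting of Example 4.31, the freedom in Remark 4.32 can be made explicit (a sketch of our own; the horizontal distribution \(\mathcal{H}\) is an auxiliary choice): pick any complement \(\mathcal{H}\subset TM\) with \(TM=T\mathcal{F}\oplus\mathcal{H}\), and set \[\widetilde{W}^{\pm}=\left\{\,X\pm\iota_{X}\,\varpi^{*}\underline{g}+\iota_{X}\,\varpi^{*}\underline{b}\,\mid\,X\in\mathcal{H}\,\right\}\.\] Then \(\widetilde{W}^{\pm}\cap K=\{\,0\,\}\), \(\left\langle w,w\right\rangle=\pm 2\,\varpi^{*}\underline{g}(X,X)\) for \(w\in\widetilde{W}^{\pm}\) with \(X\neq 0\), and \(\natural(\widetilde{W}^{\pm})=V^{\pm}\), so these lifts give smooth splittings of the sequences of the form (4.29), exhibiting \(Q(\mathcal{F})\) as a regular transverse generalised isometry as in Theorem 4.30.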
### Composition of Transverse Generalised Isometries

Composition of transverse generalised isometries presents a multi-faceted problem, owing to its less restrictive construction compared to generalised isometries. To elucidate why this is so, suppose \(R\colon E_{1}\dashrightarrow E_{2}\) and \(R^{\prime}\colon E_{2}\dashrightarrow E_{3}\) are Courant algebroid relations composing cleanly, and let \(W_{i}\) be transverse generalised metrics on \(E_{i}\) for \(i=1,2,3\). Suppose further that \(R\) is a transverse generalised isometry between \(W_{1}\) and \(W_{2}\), and \(R^{\prime}\) is a transverse generalised isometry between \(W_{2}\) and \(W_{3}\). The problems encountered are two-fold. Firstly, since the lifts \(\widetilde{W}_{i}^{\pm}\) of \(W_{i}^{\pm}\) are not unique (as they would be for a generalised metric), it may not be the case that the lift \(\widetilde{W}_{2}^{\pm}\) making \(R\) a transverse generalised isometry is the same as the lift \(\widetilde{W}_{2}^{\prime\pm}\) making \(R^{\prime}\) a transverse generalised isometry. Secondly, even if we had \(\widetilde{W}_{2}^{\pm}=\widetilde{W}_{2}^{\prime\pm}\), this may still not be enough. For instance, suppose \((e_{1},e_{3})\in(R^{\prime}\circ R)\cap(\widetilde{W}_{1}^{+}\times\widetilde{W}_{3}^{+})\). By definition, there exists \(e_{2}\) such that \((e_{1},e_{2})\in R\) and \((e_{2},e_{3})\in R^{\prime}\). But it is in no way guaranteed that \(e_{2}\in\widetilde{W}_{2}^{+}\). The first problem is discussed in Appendix A. For now we assume that \(\widetilde{W}_{2}^{\pm}=\widetilde{W}_{2}^{\prime\pm}\) and address the second problem through

**Proposition 4.33**.: Suppose that \(R_{1}\colon E_{1}\dashrightarrow E_{2}\) and \(R_{2}\colon E_{2}\dashrightarrow E_{3}\) are Courant algebroid relations supported on \(C_{1}\) and \(C_{2}\), respectively, that compose cleanly. For \(i=1,2,3\) let \(K_{i}\) be isotropic involutive subbundles of \(E_{i}\), and let \(W_{i}\) be pre-\(K_{i}\)-transverse generalised metrics such that \(R_{i}\) is a transverse generalised isometry between \(W_{i}\) and \(W_{i+1}\) for \(i=1,2\). Suppose there are splittings \(\widetilde{W}_{1}^{\pm}\), \(\widetilde{W}_{2}^{\pm}\) and \(\widetilde{W}_{3}^{\pm}\) giving the decomposition (4.26) for \(R_{1}\) and \(R_{2}\). Let \(\mathcal{A}^{\pm}=\widetilde{W}_{1}^{\pm}\times\widetilde{W}_{2}^{\pm}\times\widetilde{W}_{2}^{\pm}\times\widetilde{W}_{3}^{\pm}\). If \[\operatorname{rk}\bigl((R_{2}\diamond R_{1})_{(c_{1},c_{2})}\cap(\mathcal{A}^{+}\oplus\mathcal{A}^{-})_{(c_{1},c_{2})}\bigr)>0\, \tag{4.34}\] for every \((c_{1},c_{2})\in C_{2}\diamond C_{1}\), then \(R_{2}\circ R_{1}\) is a transverse generalised isometry between \(W_{1}\) and \(W_{3}\).

Proof.: Take \((e_{1},e_{3})\in(R_{2}\circ R_{1})\cap(\mathcal{W}^{+}\oplus\mathcal{W}^{-})\), where \(\mathcal{W}^{\pm}=\widetilde{W}_{1}^{\pm}\times\widetilde{W}_{3}^{\pm}\). Then there exists \(e_{2}\in E_{2}\) such that \((e_{i},e_{i+1})\in R_{i}\) for \(i=1,2\). That is, \((e_{1},e_{2},e_{2},e_{3})\in R_{2}\diamond R_{1}\). By Equation (4.34) we may assume \(e_{2}=e_{2}^{+}+e_{2}^{-}\) where \(e_{2}^{\pm}\in\widetilde{W}_{2}^{\pm}\). Then \[(e_{i},e_{i+1})=(e_{i}^{+}+e_{i}^{-},e_{i+1}^{+}+e_{i+1}^{-})=(e_{i}^{+},e_{i+1}^{+})+(e_{i}^{-},e_{i+1}^{-})\ \in\ R_{i}\cap\bigl((\widetilde{W}_{i}^{+}\times\widetilde{W}_{i+1}^{+})\oplus(\widetilde{W}_{i}^{-}\times\widetilde{W}_{i+1}^{-})\bigr)\,\] for \(i=1,2\).
Since \(R_{i}\) is a transverse generalised isometry between \(W_{i}\) and \(W_{i+1}\), the decomposition (4.26) holds for \(R_{1}\) and \(R_{2}\), hence \((e_{i}^{\pm},e_{i+1}^{\pm})\in R_{i}\). It then follows by definition that \((e_{1}^{\pm},e_{3}^{\pm})\in R_{2}\circ R_{1}\), and so the decomposition (4.26) also holds for \(R_{2}\circ R_{1}\). **Corollary 4.35**.: If either \(R_{1}\) or \(R_{2}\) is the graph of a classical Courant algebroid isomorphism, then \(R_{2}\circ R_{1}\) is always a transverse generalised isometry between \(W_{1}\) and \(W_{3}\). Proof.: Suppose \(R_{1}=\operatorname{gr}(\Phi)\), where \(\Phi\colon E_{1}\to E_{2}\) is a Courant algebroid isomorphism over a diffeomorphism \(\varphi\colon M_{1}\to M_{2}\) (hence \(C_{1}=\operatorname{gr}(\varphi)\)). Since \(\Phi\) is an isomorphism, we have already seen that \(\widetilde{W}_{2}^{\pm}=\Phi(\widetilde{W}_{1}^{\pm})\). To show that Equation (4.34) holds, fix \((c_{1},c_{2})\in C_{2}\diamond C_{1}\). Then at this point \[R_{2}\diamond\operatorname{gr}(\Phi)=\left\{\,(e_{1},\Phi(e_{1}),\Phi(e_{1}),e_{3})\,\mid\,e_{1}\in E_{1}\,\ (\Phi(e_{1}),e_{3})\in R_{2}\,\right\}\.\] But if \(e_{1}\in\widetilde{W}_{1}^{+}\oplus\widetilde{W}_{1}^{-}\) then \(\Phi(e_{1})\in\widetilde{W}_{2}^{+}\oplus\widetilde{W}_{2}^{-}\), and hence Equation (4.34) holds. If \(R_{2}\) is the graph of a classical Courant algebroid isomorphism \(\Phi\), then this argument can be applied to \((\mathrm{gr}(\Phi)\circ R_{1})^{\top}=R_{1}^{\top}\circ\mathrm{gr}(\Phi^{-1})\), as the transpose of a transverse generalised isometry is a transverse generalised isometry.
## 5. T-duality as a Courant Algebroid Relation
Recall that infinitesimal symmetries of the Wess-Zumino functional \(S_{H_{1}}\) of a string sigma-model are characterised by the Dorfman bracket of the standard Courant algebroid \((\mathbb{T}M_{1},H_{1})\). A Courant algebroid isomorphism thus maps symmetries of \(S_{H_{1}}\) to symmetries of \(S_{H_{2}}\), with \(H_{1}\) and \(H_{2}\) related by Equation (1.5), hence giving the same equations of motion. This however requires the string backgrounds to be diffeomorphic. T-duality is a correspondence between dynamics arising from sigma-models whose backgrounds are not necessarily diffeomorphic, but whose symmetries are related in a certain sense. Courant algebroid relations preserve the Dorfman bracket even when the backgrounds are not diffeomorphic. This motivates an attempt to reformulate T-duality in terms of Courant algebroid relations. In this section we formalise this idea, where the Courant algebroid relation may be viewed as the reduction of a Courant algebroid isomorphism. The Courant algebroid isomorphism is interpreted as a symmetry of the gauged sigma-model. That is, we consider isomorphic Courant algebroids \(E_{1}\) and \(E_{2}\) over manifolds \(M_{1}\) and \(M_{2}\), respectively. We then take isotropic subbundles \(K_{1}\subset E_{1}\) and \(K_{2}\subset E_{2}\), inducing regular foliations by \(\rho_{E_{1}}(K_{1})\) of \(M_{1}\) and by \(\rho_{E_{2}}(K_{2})\) of \(M_{2}\), and apply the reduction procedure of Theorem 2.27 to both Courant algebroids. Under the right conditions, the Courant algebroid isomorphism induces a Courant algebroid relation between the reduced Courant algebroids. In other words, we will explore circumstances under which the Wess-Zumino functionals of our sigma-models are related.
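For later reference, we record the twisted Dorfman bracket on the standard Courant algebroid \(\mathbb{T}M=TM\oplus T^{*}M\) that underlies these statements; this is the standard formula, written with the sign conventions implicit in the computations of Remarks 5.22 and 5.26 below: \[[\![X+\xi,Y+\eta]\!]_{H}=[X,Y]+\pounds_{X}\eta-\iota_{Y}\,\mathrm{d}\xi+\iota_{Y}\iota_{X}H\.\] In particular \(\mathrm{pr}_{2}([\![X,Y]\!]_{H})=\iota_{Y}\iota_{X}H\), which is precisely the term subtracted in the invariance condition of Definition 5.19 below when specialised to the split case.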
### Topological T-duality
Let \(E_{1}\) and \(E_{2}\) be exact Courant algebroids over \(M_{1}\) and \(M_{2}\), respectively, which are isomorphic via \(\Phi\) covering \(\varphi\in\mathsf{Diff}(M_{1},M_{2})\), and let them be endowed with isotropic subbundles \(K_{1}\) and \(K_{2}\), respectively. Here and throughout the rest of the paper we will assume that \[\mathrm{rk}(K_{1})=\mathrm{rk}(K_{2}). \tag{5.1}\] Assume as well that \(K_{1}^{\perp}\) and \(K_{2}^{\perp}\) have enough basic sections and denote by \(\mathcal{F}_{i}\) the foliation of \(M_{i}\) given by the integral manifolds of the distribution \(\rho_{E_{i}}(K_{i})\). Suppose that the leaf spaces \(\mathcal{Q}_{i}=M_{i}/\mathcal{F}_{i}\) have a smooth structure, hence there are unique surjective submersions \(\varpi_{i}\colon M_{i}\to\mathcal{Q}_{i}\) for \(i=1,2\). From Section 3.2 we can then form the Courant algebroid morphisms \(Q(K_{i})\colon E_{i}\to\underline{E}_{i}\) over \(\varpi_{i}\). This gives the diagram (5.2), which prepares for our first definition of T-duality. **Definition 5.3**.: Let \(E_{1}\) and \(E_{2}\) be exact Courant algebroids fitting into the diagram (5.2). Then \(\underline{E}_{1}\) and \(\underline{E}_{2}\) are _T-duality related_ over \(\mathcal{Q}_{1}\) and \(\mathcal{Q}_{2}\) if the composition \[R(\Phi)=Q(K_{2})\circ\mathrm{gr}(\Phi)\circ Q(K_{1})^{\top}\colon\underline{E}_{1}\dashrightarrow\underline{E}_{2} \tag{5.4}\] is a Courant algebroid relation supported on \[\underline{C}=\{\,(\varpi_{1}(m_{1}),\varpi_{2}(\varphi(m_{1})))\,\mid\,m_{1}\in M_{1}\,\}\ \subset\ \mathcal{Q}_{1}\times\mathcal{Q}_{2}\.\] The Courant algebroid relation \(R(\Phi)\) is the _T-duality relation_ between \(\underline{E}_{1}\) and \(\underline{E}_{2}\). **Remark 5.5**.: We first discuss the smoothness of the supporting submanifold \(\underline{C}\), as in general it may not be smooth. The most interesting case is when \(\mathcal{Q}_{1}\) and \(\mathcal{Q}_{2}\) are fibred manifolds over a common base manifold \(\mathcal{B}\). In this case there is the commutative diagram (5.6). Since the fibred product \(\mathcal{Q}_{1}\times_{\mathcal{B}}\mathcal{Q}_{2}\) also fits into the diagram (5.6), there is a smooth map \(M_{1}\to\mathcal{Q}_{1}\times_{\mathcal{B}}\mathcal{Q}_{2}\) given by the quotient of \(M_{1}\) by the foliation induced by the distribution \(T\mathcal{F}_{1}\cap(\varphi^{-1})_{*}T\mathcal{F}_{2}\). Thus in this case the rank of \(T\mathcal{F}_{1}\cap(\varphi^{-1})_{*}T\mathcal{F}_{2}\) must be constant.11 Footnote 11: We will see below that this must also be true in the general case without a common base \(\mathcal{B}\). **Lemma 5.7**.: If \(M_{1}\) fits into the commutative diagram (5.6), then \(\underline{C}=\mathcal{Q}_{1}\times_{\mathcal{B}}\mathcal{Q}_{2}\) is smooth. Proof.: Note that \(\underline{C}\subseteq\mathcal{Q}_{1}\times_{\mathcal{B}}\mathcal{Q}_{2}\). For the opposite inclusion, consider a point \((q_{1},q_{2})\) in \(\mathcal{Q}_{1}\times_{\mathcal{B}}\mathcal{Q}_{2}\). Then there exist \(m,m^{\prime}\in M_{1}\) such that \(q_{1}=\varpi_{1}(m)\) and \(q_{2}=\varpi_{2}(\varphi(m^{\prime}))\), and \(b\coloneqq\pi_{1}(\varpi_{1}(m))=\pi_{2}(\varpi_{2}(\varphi(m^{\prime})))\in\mathcal{B}\). Since also \(\pi_{2}(\varpi_{2}(\varphi(m)))=b\), with \(q_{0}\coloneqq\varpi_{2}(\varphi(m))\) the points \(q_{2}\) and \(q_{0}\) belong to the same fibre of \(\pi_{2}\). Hence, assuming the fibres of \(\pi_{2}\) are path-connected, there is a path \(\gamma\colon[0,1]\to\pi_{2}^{-1}(b)\subset\mathcal{Q}_{2}\) such that \(\gamma(0)=q_{0}\) and \(\gamma(1)=q_{2}\).
By the homotopy lifting property we get a path \(\tilde{\gamma}\) in \((\mathcal{F}_{1})_{q_{1}}\) such that \(\tilde{\gamma}(0)=m\), where \((\mathcal{F}_{1})_{q_{1}}\) is the leaf of the foliation \(\mathcal{F}_{1}\) given by \(\varpi_{1}^{-1}(q_{1})\). We then define \(m^{\prime\prime}\coloneqq\tilde{\gamma}(1).\) It follows that \(\varpi_{1}(m^{\prime\prime})=q_{1}\) and \(\varpi_{2}(\varphi(m^{\prime\prime}))=q_{2}\), hence \(\mathcal{Q}_{1}\times_{\mathcal{B}}\mathcal{Q}_{2}\subseteq\underline{C}\). Assuming \(\underline{C}\) is smooth, we can now state the conditions under which the T-duality relation \(R(\Phi)\) can be formed as **Theorem 5.8**.: Let \(E_{1}\) and \(E_{2}\) be exact Courant algebroids over \(M_{1}\) and \(M_{2}\), respectively, isomorphic via \(\Phi\), and endowed with isotropic subbundles \(K_{1}\) and \(K_{2}\) satisfying the conditions for the diagram (5.2) discussed above. Let us also assume that \(T\mathcal{F}_{1}\cap(\varphi^{-1})_{*}T\mathcal{F}_{2}\) has constant rank and that \(\underline{C}\) is a smooth manifold. Then \(\underline{E}_{1}\) and \(\underline{E}_{2}\) are T-duality related if and only if \(\Phi^{-1}(K_{2})\cap K_{1}\) has constant rank. Proof.: Forming \(R(\Phi)\) involves two compositions of relations, which are both required to be clean compositions. For each composition, we check conditions (ii) of Propositions 3.12 and 3.13. First,12 since \(Q(K_{2})\) and \(\mathrm{gr}(\Phi)\) are Courant algebroid morphisms, [13, Example 3.17] shows that the conditions on the base manifolds \(\mathrm{gr}(\varpi_{2})\) and \(\mathrm{gr}(\varphi)\) in items (ii) of Propositions 3.12 and 3.13 are met. Hence we need only check the constant rank requirements in both cases. For \(\tilde{m}_{1}\coloneqq(m_{1},\varphi(m_{1}),\varphi(m_{1}),\varpi_{2}(\varphi(m_{1})))\in\operatorname{gr}(\varpi_{2})\diamond\operatorname{gr}(\varphi)\), \[\big{(}Q(K_{2})\diamond\operatorname{gr}(\Phi)\big{)}_{\tilde{m}_{1}}=\{\,(e_{1},\Phi(e_{1}),\Phi(e_{1}),\natural_{E_{2}}(\Phi(e_{1})))\,\mid\,e_{1}\in\Phi^{-1}(K_{2}^{\perp})_{m_{1}}\,\}\enspace,\] thus \(Q(K_{2})\diamond\operatorname{gr}(\Phi)\) is diffeomorphic to \(K_{2}^{\perp}\) and hence has constant rank. Moreover, the projection \(p\colon Q(K_{2})\diamond\operatorname{gr}(\Phi)\to Q(K_{2})\circ\operatorname{gr}(\Phi)\) is a diffeomorphism. By item (i) of Proposition 3.13, the composition is clean and hence \(Q(K_{2})\circ\operatorname{gr}(\Phi)\) is a Courant algebroid relation supported on \(\operatorname{gr}(\varpi_{2})\circ\operatorname{gr}(\varphi)\). To compose the relation \(Q(K_{2})\circ\operatorname{gr}(\Phi)\) with \(Q(K_{1})^{\top}\), we first check the conditions on the base manifolds. Setting \(C=\operatorname{gr}(\varpi_{1})\) and \(C^{\prime}=\{\,(m_{1},\varpi_{2}(\varphi(m_{1})))\,\mid\,m_{1}\in M_{1}\,\}\), the intersection of \(C^{\top}\times C^{\prime}\) with \(\mathcal{Q}_{1}\times\Delta(M_{1})\times\mathcal{Q}_{2}\) is clean: one can show that they intersect transversally13 in \(\mathcal{Q}_{1}\times M_{1}\times M_{1}\times\mathcal{Q}_{2}\). For if \(v\in T_{\hat{m}}(\mathcal{Q}_{1}\times M_{1}\times M_{1}\times\mathcal{Q}_{2})\), for any \(\hat{m}\in(C^{\top}\times C^{\prime})\cap(\mathcal{Q}_{1}\times\Delta(M_{1})\times\mathcal{Q}_{2})\), then Footnote 13: We say that submanifolds \(S\) and \(S^{\prime}\) of \(M\) _intersect transversally_ if \(T_{s}S+T_{s}S^{\prime}=T_{s}M\) for each \(s\in S\cap S^{\prime}\). Submanifolds that intersect transversally also intersect cleanly [42, Appendix C.3].
\[v=(\underline{v}_{1},v_{1},v_{1}^{\prime},\underline{v}_{2})=\big{(}(\varpi_{1})_{*}(v_{1}),v_{1},v_{1}^{\prime},(\varpi_{2}\circ\varphi)_{*}(v_{1}^{\prime})\big{)}+\big{(}\underline{v}_{1}-(\varpi_{1})_{*}(v_{1}),0,0,\underline{v}_{2}-(\varpi_{2}\circ\varphi)_{*}(v_{1}^{\prime})\big{)}\enspace,\] where the first term is an element of \(T_{\hat{m}}(C^{\top}\times C^{\prime})\) and the second term is an element of \(T_{\hat{m}}(\mathcal{Q}_{1}\times\Delta(M_{1})\times\mathcal{Q}_{2})\). This implies the condition (ii) on the base manifolds of Proposition 3.12. Next since \(\underline{C}=C^{\prime}\circ C^{\top}\) is a smooth manifold and \(T\mathcal{F}_{1}\cap\varphi^{*}T\mathcal{F}_{2}\) has constant rank, the map \(C^{\prime}\diamond C^{\top}\to C^{\prime}\circ C^{\top}\) is a smooth surjective submersion, thereby satisfying the condition (ii) on the base manifolds of Proposition 3.13. To check the constant rank criteria of conditions (ii) of Propositions 3.12 and 3.13, we note that since \(Q(K_{1})=\{\,(e,\natural_{E_{1}}(e))\,\mid\,e\in K_{1}^{\perp}\,\}\), \[\big{(}Q(K_{2})\circ\operatorname{gr}(\Phi)\big{)}\diamond Q(K_{1})^{\top}=\{\,(\natural_{E_{1}}(e),e,e,\natural_{E_{2}}(\Phi(e)))\,\mid\,e\in\Phi^{-1}(K_{2}^{\perp})\cap K_{1}^{\perp}\,\}\enspace. \tag{5.9}\] This is pointwise isomorphic to \(\Phi^{-1}(K_{2}^{\perp})\cap K_{1}^{\perp}\), and for \(m\in M_{1}\) we find \[\dim\big{(}\Phi^{-1}(K_{2}^{\perp})_{m}\cap(K_{1}^{\perp})_{m}\big{)}=\operatorname{rk}\bigl{(}\Phi(K_{1}^{\perp})\bigr{)}+\operatorname{rk}(K_{2}^{\perp})-\dim\big{(}\Phi(K_{1}^{\perp})_{\varphi(m)}+(K_{2}^{\perp})_{\varphi(m)}\big{)}=\operatorname{rk}\bigl{(}\Phi(K_{1}^{\perp})\bigr{)}+\operatorname{rk}(K_{2}^{\perp})-\operatorname{rk}(E_{2})+\dim\big{(}\Phi(K_{1})_{\varphi(m)}\cap(K_{2})_{\varphi(m)}\big{)}\enspace,\] where we used \(\dim(A+B)=\operatorname{rk}(E_{2})-\dim(A^{\perp}\cap B^{\perp})\) together with \(\Phi(K_{1}^{\perp})^{\perp}=\Phi(K_{1})\) and \((K_{2}^{\perp})^{\perp}=K_{2}\). It follows that condition (ii) of Proposition 3.12 is satisfied if and only if the dimension of \(\Phi^{-1}(K_{2})_{m}\cap(K_{1})_{m}\) is independent of \(m\in M_{1}\). Finally, the kernel of the projection of Equation (5.9) to \((Q(K_{2})\circ\operatorname{gr}(\Phi))\circ Q(K_{1})^{\top}\) is \(\Phi^{-1}(K_{2})\cap K_{1}\). Hence by item (ii) of Proposition 3.13, the composition is clean if and only if \(\Phi^{-1}(K_{2})\cap K_{1}\) has constant rank. Thus by Theorem 3.14, we get the Courant algebroid relation (5.4), and the diagram (5.2) is completed by the relation \(R(\Phi)\colon\underline{E}_{1}\dashrightarrow\underline{E}_{2}\). Explicitly, if \(\natural_{E_{i}}\colon K_{i}^{\perp}\to\bigl{(}K_{i}^{\perp}/K_{i}\bigr{)}/\mathcal{F}_{i}=\underline{E}_{i}\) denotes the quotient map for \(i=1\), \(2\), then \[R(\Phi)=\left\{\,(\natural_{E_{1}}(e),\natural_{E_{2}}(\Phi(e)))\,\mid\,e\in\Phi^{-1}(K_{2}^{\perp})\cap K_{1}^{\perp}\,\right\}\.\] Note that the vertical arrow on the left of the completed diagram points upwards, as the composition of the relations does not commute. As mentioned in Section 1, there is a notion that the T-duality relation should be Lagrangian. Our relation is consistent with this, through **Proposition 5.10**.: \(R(\Phi)\) is maximally isotropic, hence a Dirac structure in \(\underline{E}_{1}\times\overline{\underline{E}}_{2}\).
Proof.: Since \(R(\Phi)\) is involutive by construction, we only need to show that it is maximally isotropic in \(\underline{E}_{1}\times\overline{\underline{E}}_{2}\). Using \(\operatorname{rk}(E_{1})=\operatorname{rk}(E_{2})\) and Equation (5.1), we have \[\operatorname{rk}(\underline{E}_{1}\times\overline{\underline{E}}_{2})=2\left(\operatorname{rk}(E_{1})-2\operatorname{rk}(K_{1})\right)\,.\] We have already seen that \(\bigl{(}Q(K_{2})\circ\operatorname{gr}(\Phi)\bigr{)}\diamond Q(K_{1})^{\top}\) is isomorphic to \(\Phi^{-1}(K_{2}^{\perp})\cap K_{1}^{\perp}\) pointwise, and that its projection to \(R(\Phi)\) has kernel \(\Phi^{-1}(K_{2})\cap K_{1}\). Thus \[\operatorname{rk}\bigl{(}R(\Phi)\bigr{)}=\operatorname{rk}\bigl{(}\Phi^{-1}(K_{2}^{\perp})\cap K_{1}^{\perp}\bigr{)}-\operatorname{rk}\bigl{(}\Phi^{-1}(K_{2})\cap K_{1}\bigr{)}=\operatorname{rk}(K_{1}^{\perp})+\operatorname{rk}(K_{2}^{\perp})-\operatorname{rk}(E_{1})=\operatorname{rk}(E_{1})-2\operatorname{rk}(K_{1})\] as required. A useful property of the anchor maps in this construction is given by **Lemma 5.11**.: Let \(E_{1}\) and \(E_{2}\) be exact Courant algebroids satisfying the assumptions of Theorem 5.8. Then \(\rho_{E_{1}}|_{\Phi^{-1}(K_{2})}\) is injective. Proof.: This follows from the compatibility of the anchors with Courant algebroid isomorphisms (2.11), written as \[\rho_{E_{1}}\circ\Phi^{-1}=(\varphi^{-1})_{*}\circ\rho_{E_{2}}\,\] together with Lemma 2.38 which shows that \(\rho_{E_{2}}|_{K_{2}}\) is injective. **Remark 5.12** (**Bisubmersions**).: In the picture of T-duality related Courant algebroids of Definition 5.3, at the level of base manifolds there are surjective submersions where \(\varphi^{*}\varpi_{2}=\varpi_{2}\circ\varphi\), with the fibres of \(\varpi_{1}\) given by the leaves of the foliation \(\mathcal{F}_{1}\) and the fibres of \(\varphi^{*}\varpi_{2}\) given by the leaves of the pullback foliation \(\varphi^{*}\mathcal{F}_{2}\). If both surjective submersions have connected fibres and the Lie bracket of vector fields satisfies \[\big{[}\mathsf{\Gamma}\big{(}\ker(\varpi_{1*})\big{)},\mathsf{\Gamma}\big{(}\ker((\varphi^{*}\varpi_{2})_{*})\big{)}\big{]}\subset\mathsf{\Gamma}\big{(}\ker(\varpi_{1*})\big{)}+\mathsf{\Gamma}\big{(}\ker((\varphi^{*}\varpi_{2})_{*})\big{)}\, \tag{5.13}\] then there exist unique (possibly singular) foliations \(\underline{\mathcal{F}}_{1}\) on \(\mathcal{Q}_{1}\) and \(\underline{\mathcal{F}}_{2}\) on \(\mathcal{Q}_{2}\) such that [43, Corollary 2.16] \[\varpi_{1*}^{-1}(T\underline{\mathcal{F}}_{1})=(\varphi^{*}\varpi_{2})_{*}^{-1}(T\underline{\mathcal{F}}_{2})=\ker(\varpi_{1*})+\ker((\varphi^{*}\varpi_{2})_{*})\.\] This means that there is a _bisubmersion_ between \((\mathcal{Q}_{1},\underline{\mathcal{F}}_{1})\) and \((\mathcal{Q}_{2},\underline{\mathcal{F}}_{2})\); see e.g. [43] and references therein for the general definition of bisubmersion. This construction is an example of _Hausdorff Morita equivalent foliations_ in the sense of [43, Definition 2.1]. Equation (5.13) is an involutivity condition for the subbundle \[\ker(\varpi_{1*})+\ker((\varphi^{*}\varpi_{2})_{*})=T\mathcal{F}_{1}+\varphi^{*}T\mathcal{F}_{2}\,\] hence \(M_{1}\) is endowed with a (possibly singular) foliation \(\mathcal{F}\) that induces the Hausdorff Morita equivalent foliations. This is a regular foliation if \(\ker(\varpi_{1*})\cap\ker((\varphi^{*}\varpi_{2})_{*})\) has constant rank.
This notion will be crucial in the description of the symmetries of the background fields defined on \(\mathcal{Q}_{1}\) and \(\mathcal{Q}_{2}\), and we will see how it naturally arises in the construction of T-duality relations. **Remark 5.14**.: Courant algebroid reduction is reformulated and extended in the language of graded symplectic reduction by [44]. It would be interesting to fit our perspective on T-duality, and more generally Courant algebroid relations, into this framework, particularly in light of the differential graded symplectic geometry approach to generalised T-duality taken in [25].
#### 5.1.1. Topological T-duality for Standard Courant Algebroids
Our main focus will be the case where \(E_{1}=(\mathbb{T}M_{1},H_{1})\) and \(E_{2}=(\mathbb{T}M_{2},H_{2})\) are twisted standard Courant algebroids, where \(M_{1}\) and \(M_{2}\) are diffeomorphic via a map \(\varphi\colon M_{1}\to M_{2}\), and \(H_{1}\) and \(H_{2}\) are closed three-forms on \(M_{1}\) and \(M_{2}\) respectively. Then an exact Courant algebroid isomorphism \(\Phi\colon E_{1}\to E_{2}\) covering \(\varphi\) can be decomposed as \(\Phi=\overline{\varphi}\circ\,\mathrm{e}\,^{B}\), where \(\overline{\varphi}=\varphi_{*}+(\varphi^{-1})^{*}\) and \(B\in\Omega^{2}(M_{1})\) with \(\mathrm{d}B=H_{1}-\varphi^{*}H_{2}\), as described in Proposition 2.19. Suppose that \(M_{1}\) and \(M_{2}\) are foliated by \(\mathcal{F}_{1}\) and \(\mathcal{F}_{2}\), respectively, such that \(\mathcal{Q}_{1}=M_{1}/\mathcal{F}_{1}\) and \(\mathcal{Q}_{2}=M_{2}/\mathcal{F}_{2}\) are smooth manifolds.14 An involutive isotropic subbundle can always be taken as \(K_{1}=T\mathcal{F}_{1}\oplus\{\,0\,\}\subset\mathbb{T}M_{1}.\) By Example 2.42, if \(\iota_{X_{1}}H_{1}=0\) for every \(X_{1}\in\mathsf{\Gamma}(T\mathcal{F}_{1})\), then the Courant algebroid \(E_{1}\) reduces to \(\underline{E}_{1}=(\mathbb{T}\mathcal{Q}_{1},\,\underline{H}_{1})\). Similarly, with \(K_{2}=T\mathcal{F}_{2}\oplus\{\,0\,\}\) and \(\iota_{X_{2}}H_{2}=0\) for any \(X_{2}\in\mathsf{\Gamma}(T\mathcal{F}_{2})\), the Courant algebroid \(E_{2}\) reduces to \(\underline{E}_{2}=(\mathbb{T}\mathcal{Q}_{2},\,\underline{H}_{2})\). Footnote 14: In general we do not require that \(\varphi_{*}T\mathcal{F}_{1}=T\mathcal{F}_{2}\). As we will see in Example 5.48 below, this can lead to the case of T-duality with no topology change. **Proposition 5.15**.: Let \((\mathbb{T}M_{1},H_{1})\) and \((\mathbb{T}M_{2},H_{2})\) be twisted standard Courant algebroids over foliated manifolds which are isomorphic by \(\Phi\) as described above. Suppose that * a) \(\iota_{X_{1}}H_{1}=0\), for all \(X_{1}\in\mathsf{\Gamma}(T\mathcal{F}_{1})\); * b) \(\iota_{(\varphi^{-1})_{*}X_{2}}H_{1}=\iota_{(\varphi^{-1})_{*}X_{2}}\,\mathrm{d}B\), for all \(X_{2}\in\mathsf{\Gamma}(T\mathcal{F}_{2})\); * c) \(\ker\big{(}B|_{T\mathcal{F}_{1}\cap(\varphi^{-1})_{*}T\mathcal{F}_{2}}\big{)}\) has constant rank. Then \((\mathbb{T}\mathcal{Q}_{1},\,\underline{H}_{1})\) and \((\mathbb{T}\mathcal{Q}_{2},\,\underline{H}_{2})\) are T-duality related. Proof.: As discussed above, \(H_{2}=(\varphi^{-1})^{*}(H_{1}-\mathrm{d}B)\) and thus \(\iota_{X_{2}}H_{2}=0\) by item b). Hence by items a) and b) together with Corollary 3.8, there are Courant algebroid relations \(Q(\mathcal{F}_{1})\colon(\mathbb{T}M_{1},H_{1})\dashrightarrow(\mathbb{T}\mathcal{Q}_{1},\underline{H}_{1})\) and \(Q(\mathcal{F}_{2})\colon(\mathbb{T}M_{2},H_{2})\dashrightarrow(\mathbb{T}\mathcal{Q}_{2},\underline{H}_{2})\).
Since \(\Phi^{-1}(K_{2})=\{\,(\varphi^{-1})_{*}v_{2}-\iota_{(\varphi^{-1})_{*}v_{2}}B\,\mid\,v_{2}\in T\mathcal{F}_{2}\,\}\), in this case \[\Phi^{-1}(K_{2})\cap K_{1}=\{\,v_{1}\in T\mathcal{F}_{1}\cap(\varphi^{-1})_{*}T\mathcal{F}_{2}\,\mid\,\iota_{v_{1}}B=0\,\}=\ker\left(B|_{T\mathcal{F}_{1}\cap(\varphi^{-1})_{*}T\mathcal{F}_{2}}\right)\,.\] By item c) this has constant rank, so \(R(\Phi)=Q(\mathcal{F}_{2})\circ\operatorname{gr}(\Phi)\circ Q(\mathcal{F}_{1})^{\top}\) is a Courant algebroid relation by Theorem 5.8. **Remark 5.16**.: In general \(R(\Phi)\) is not a classical Courant algebroid morphism, i.e. the isomorphism \(\Phi\colon E_{1}\to E_{2}\) does not reduce to an isomorphism \(\underline{\Phi}\colon\underline{E}_{1}\to\underline{E}_{2}\). If \(\varphi_{*}T\mathcal{F}_{1}\neq T\mathcal{F}_{2}\), then \(\varphi\) does not descend to a smooth map between \(\mathcal{Q}_{1}\) and \(\mathcal{Q}_{2}\), so that \(R(\Phi)\) is not even a Courant algebroid morphism. This exemplifies the need for a relational approach to T-duality.
### Geometric T-duality
We now introduce geometric data into our picture of T-duality, which amounts to incorporating the Polyakov functionals (1.1) into our string sigma-models; the symmetries of \(S_{0}\) are generated by Killing vectors of the metric \(g\). We will make the following assumptions. Let \(E_{1}\) and \(E_{2}\) be exact Courant algebroids over \(M_{1}\) and \(M_{2}\), respectively, isomorphic via \(\Phi\), and let them be endowed with involutive isotropic subbundles \(K_{1}\) and \(K_{2}\), respectively, that have the same rank. Suppose they satisfy the conditions to fit the diagram (5.2) and suppose that the reduced Courant algebroids \(\underline{E}_{1}\) and \(\underline{E}_{2}\) are T-duality related, i.e. the constant rank assumption of Theorem 5.8 holds. Denote the resulting T-duality relation by \(R(\Phi)\). We will work under these assumptions for the remainder of this section. We begin by endowing the reduced Courant algebroids with generalised metrics. **Definition 5.17**.: The reduced Courant algebroids \((\underline{E}_{1},V_{1}^{+})\) and \((\underline{E}_{2},V_{2}^{+})\) are _geometrically T-dual_ if \(R(\Phi)\) is a generalised isometry between the generalised metrics \(V_{1}^{+}\) and \(V_{2}^{+}\). The starting point of any T-duality transformation is the identification of the symmetries of the original background. **Definition 5.18**.: Let \(\underline{E}_{1}\) and \(\underline{E}_{2}\) be T-duality related Courant algebroids in the sense of Definition 5.3 and suppose that the unique surjective submersions \(\varpi_{i}\colon M_{i}\to\mathcal{Q}_{i}\) satisfy the involutivity condition (5.13) discussed in Remark 5.12, for \(i=1,\,2\). The subbundle \(D_{1}\coloneqq T\underline{\mathcal{F}}_{1}=\varpi_{1*}((\varphi^{-1})_{*}T\mathcal{F}_{2})\) of \(T\mathcal{Q}_{1}\) is the _distribution of T-duality directions_.15 Footnote 15: This definition can be adapted to the case where \(\underline{\mathcal{F}}_{1}\) is a singular foliation. Then we consider the T-duality directions to be defined by the locally finitely generated \(C_{c}^{\infty}(\mathcal{Q}_{1})\)-module \(\Gamma_{c}(D_{1})\) of compactly supported vector fields integrating to \(\underline{\mathcal{F}}_{1}\). We do not explicitly discuss this in the following in order to avoid further technicalities. Invariance with respect to a singular foliation relates to the work of Kotov-Strobl [45].
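As an illustration of Definition 5.18, consider the distribution of T-duality directions in a sketch anticipating the correspondence space setting of Section 6, where \(M_{1}=M_{2}=\mathcal{Q}_{1}\times_{\mathcal{B}}\mathcal{Q}_{2}\), \(\varphi=\mathrm{id}\), and \(\mathcal{F}_{i}\) is the foliation by the fibres of \(\varpi_{i}\): \[D_{1}=\varpi_{1*}\big{(}(\varphi^{-1})_{*}T\mathcal{F}_{2}\big{)}=\varpi_{1*}\big{(}\ker(\varpi_{2*})\big{)}=\ker(\pi_{1*})\ \subset\ T\mathcal{Q}_{1}\,\] i.e. the fibre directions of \(\pi_{1}\colon\mathcal{Q}_{1}\to\mathcal{B}\), in agreement with the discussion following Remark 6.4.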
Since we may form the relations \(Q(K_{i})\) for \(i=1,2\), by Remark 2.40 there exist splittings \(\sigma_{i}\colon TM_{i}\to E_{i}\) of the short exact sequences \[0\longrightarrow T^{*}M_{i}\longrightarrow E_{i}\longrightarrow TM_{i}\longrightarrow 0\] which are adapted to \(K_{i}\), where \(\sigma_{i}^{*}\coloneqq\sigma_{i}^{\mathrm{t}}\circ\flat_{E_{i}}\) is the induced left splitting and \(\flat_{E_{i}}\colon E_{i}\to E_{i}^{*}\) is the isomorphism induced by the pairing. By Remark 2.37, \(\sigma_{i}\) induces a splitting \(\underline{\sigma}_{i}\) of \(\underline{E}_{i}\), giving also a left splitting \(\underline{\sigma}_{i}^{*}\). We are now ready to relate the distribution \(D_{1}\) to the symmetries of our initial background and begin to justify our terminology 'T-duality directions'. **Definition 5.19**.: Let \(\underline{\sigma}_{1}\colon T\mathcal{Q}_{1}\to\underline{E}_{1}\) be the splitting of the exact Courant algebroid \(\underline{E}_{1}\) over \(\mathcal{Q}_{1}\) induced by an adapted splitting \(\sigma_{1}\colon TM_{1}\to E_{1}\) of \(E_{1}\). The generalised metric \(V_{1}^{+}\) on \(\underline{E}_{1}\) is _invariant with respect to \(D_{1}\)_, or _\(D_{1}\)-invariant_, if \[[\![\underline{\sigma}_{1}(\underline{X}),\underline{w}_{1}]\!]_{\underline{E}_{1}}-\underline{\rho}_{\underline{E}_{1}}^{*}\big{(}\underline{\sigma}_{1}^{*}([\![\underline{\sigma}_{1}(\underline{X}),\underline{\sigma}_{1}(\underline{\rho}_{\underline{E}_{1}}(\underline{w}_{1}))]\!]_{\underline{E}_{1}})\big{)}\ \in\ \mathsf{\Gamma}(V_{1}^{+}) \tag{5.20}\] for every \(\underline{X}\in\mathsf{\Gamma}(D_{1})\) and \(\underline{w}_{1}\in\mathsf{\Gamma}(V_{1}^{+})\). **Remark 5.21**.: On an exact Courant algebroid \(E\) over \(M\) with Severa class \([H]\) and an isotropic splitting \(\sigma\colon TM\to E\), for \(X,Y,Z\in\mathsf{\Gamma}(TM)\) one has \[H(X,Y,Z)=\left\langle[\![\sigma(X),\sigma(Y)]\!]_{E},\sigma(Z)\right\rangle_{E}=\iota_{Z}\,\sigma^{*}([\![\sigma(X),\sigma(Y)]\!]_{E})\.\] Hence the condition (5.20) has the interpretation of an invariance condition which itself imposes no restrictions on the Severa class; Remark 5.22 below makes this point clearer. In the reduction processes, we demand that a subbundle \(W\) satisfies \([\![\mathsf{\Gamma}(K),\mathsf{\Gamma}(W)]\!]_{E}\subset\mathsf{\Gamma}(W)\), which implies that the Severa class vanishes along \(K\). Since no reduction along the distribution of T-duality directions \(D_{1}\) occurs, we need not impose \(\iota_{\underline{X}}\underline{H}=0\) for each \(\underline{X}\in\mathsf{\Gamma}(D_{1})\). **Remark 5.22**.: Let us discuss the condition (5.20) in the case of an \(\underline{H}_{1}\)-twisted standard Courant algebroid.
In this case the induced splitting \(\underline{\sigma}_{1}\colon T\mathcal{Q}_{1}\to\mathbb{T}\mathcal{Q}_{1}\) is the inclusion, \(V_{1}^{+}\) corresponds to a pair \((\underline{g}_{1},\underline{b}_{1})\), and any element \(\underline{w}_{1}\in\mathsf{\Gamma}(V_{1}^{+})\) can be written as \[\underline{w}_{1}=\underline{Y}+\iota_{\underline{Y}}(\underline{g}_{1}+\underline{b}_{1})\,\] for some \(\underline{Y}\in\mathsf{\Gamma}(T\mathcal{Q}_{1})\). Then \[[\![\underline{X},\underline{Y}+\iota_{\underline{Y}}(\underline{g}_{1}+\underline{b}_{1})]\!]_{\underline{H}_{1}}-\mathrm{pr}_{2}([\![\underline{X},\underline{Y}]\!]_{\underline{H}_{1}})=[\underline{X},\underline{Y}]+\iota_{\underline{Y}}(\pounds_{\underline{X}}(\underline{g}_{1}+\underline{b}_{1}))+\iota_{[\underline{X},\underline{Y}]}(\underline{g}_{1}+\underline{b}_{1})\, \tag{5.23}\] for any \(\underline{X}\in\mathsf{\Gamma}(D_{1})\), where \(\underline{\sigma}_{1}^{*}=\mathrm{pr}_{2}\) and \(\underline{\rho}_{1}^{*}\) is the inclusion of \(T^{*}\mathcal{Q}_{1}\) in \(\mathbb{T}\mathcal{Q}_{1}\). Hence by imposing the condition (5.20), the second term on the right-hand side of Equation (5.23) must vanish: \[\pounds_{\underline{X}}(\underline{g}_{1}+\underline{b}_{1})=0\,\] for all \(\underline{X}\in\mathsf{\Gamma}(D_{1})\). This implies that the metric and Kalb-Ramond field \((\underline{g}_{1},\underline{b}_{1})\) defining \(V_{1}^{+}\) have to be invariant with respect to sections of \(D_{1}\). Next we introduce a notion of when the adapted splittings are compatible. There is an induced splitting \(\sigma_{1}^{\prime}\colon TM_{1}\to E_{1}\) coming from the diagram (5.24). **Definition 5.25**.: The adapted splittings \(\sigma_{1}\) and \(\sigma_{2}\) are _compatible_ if the induced splitting \(\sigma_{1}^{\prime}\) satisfies \[[\![(\sigma_{1}-\sigma_{1}^{\prime})(X),e]\!]_{E_{1}}=\rho_{E_{1}}^{*}\big{(}\sigma_{1}^{*}([\![\sigma_{1}(X),\sigma_{1}(\rho_{E_{1}}(e))]\!]_{E_{1}})\big{)}\] for every \(X\in\mathsf{\Gamma}(\rho_{E_{1}}(\Phi^{-1}(K_{2})))=\mathsf{\Gamma}((\varphi^{-1})_{*}T\mathcal{F}_{2})\) which is projectable with respect to \(\mathcal{F}_{1}\) and \(e\in\mathsf{\Gamma}(E_{1})\). **Remark 5.26**.: Let us discuss Definition 5.25 for a twisted standard Courant algebroid \(E_{1}=(\mathbb{T}M_{1},H_{1})\) and consider the diagram (5.24) with \(\Phi=\overline{\varphi}\circ\,\mathrm{e}\,^{B}\,\). Recall that \(\sigma_{i}\) is the inclusion of \(TM_{i}\) in \(\mathbb{T}M_{i}\), for \(i=1,2\). Then \[(\sigma_{1}-\sigma_{1}^{\prime})(X)=\iota_{X}B\] for all \(X\in\mathsf{\Gamma}(TM_{1})\), and \[\llbracket\iota_{X}B,Y+\xi\rrbracket_{H_{1}}-\operatorname{pr}_{2}(\llbracket X,Y\rrbracket_{H_{1}})=-\iota_{Y}\,\mathrm{d}\iota_{X}B-\iota_{Y}\iota_{X}H_{1}=-\iota_{Y}\,\mathrm{d}\iota_{X}B-\iota_{Y}\iota_{X}\,\mathrm{d}B=-\iota_{Y}\,\pounds_{X}B\,\] for all \(Y+\xi\in\mathsf{\Gamma}(\mathbb{T}M_{1})\), where we invoked item b) of Proposition 5.15 for the second equality. Thus the splitting \(\sigma_{1}\) given by the inclusion and the splitting \(\sigma_{1}^{\prime}\) given by \(\operatorname{im}(\sigma_{1}^{\prime})=\mathrm{e}\,^{-B}\,(TM_{1})\) are compatible if and only if \(\pounds_{X}B=0\) for every \(X\in\mathsf{\Gamma}((\varphi^{-1})_{*}T\mathcal{F}_{2})\).
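For reference, a short computation (standard for split exact Courant algebroids, and consistent with the description of \(\Phi^{-1}(K_{2})\) used in the proof of Proposition 5.15) spells out the action of such an isomorphism on sections: \[\mathrm{e}\,^{B}(X+\xi)=X+\xi+\iota_{X}B\,\qquad\Phi(X+\xi)=\varphi_{*}X+(\varphi^{-1})^{*}(\xi+\iota_{X}B)\,\] so that for \(v_{2}\in T\mathcal{F}_{2}\), regarded as a section of \(K_{2}\), one recovers \[\Phi^{-1}(v_{2})=(\varphi^{-1})_{*}v_{2}-\iota_{(\varphi^{-1})_{*}v_{2}}B\.\]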
We are now ready to state our main result about the existence and uniqueness of geometrically T-dual backgrounds in the sense of Definition 5.17 through **Theorem 5.27**.: Let \(E_{1}\) and \(E_{2}\) be exact Courant algebroids over \(M_{1}\) and \(M_{2}\), respectively, which are isomorphic via \(\Phi\). Suppose they are endowed with involutive isotropic subbundles \(K_{1}\) and \(K_{2}\), respectively, of the same rank giving T-duality related Courant algebroids \(\underline{E}_{1}\) and \(\underline{E}_{2}\) with T-duality relation \(R(\Phi)\). Assume further that the bisubmersion involutivity condition (5.13) holds, and that the adapted splittings \(\sigma_{1}\) and \(\sigma_{2}\) of \(E_{1}\) and \(E_{2}\) respectively are compatible. Finally, assume that \(\underline{E}_{1}\) is endowed with a \(D_{1}\)-invariant generalised metric \(V_{1}^{+}\). Then the following are equivalent: * (i) \(K_{2}^{\perp}\cap\Phi(K_{1})\subseteq K_{2}\). * (ii) \(\Phi^{-1}(K_{2})\cap K_{1}^{\perp}\subseteq K_{1}\). * (iii) There exists a unique generalised metric \(V_{2}^{+}\) on \(\underline{E}_{2}\) such that \(R(\Phi)\) is a generalised isometry between \(V_{1}^{+}\) and \(V_{2}^{+}\), i.e. \((\underline{E}_{1},V_{1}^{+})\) and \((\underline{E}_{2},V_{2}^{+})\) are geometrically T-dual. **Remark 5.28**.: We split the proof into three parts, beginning with Lemma 5.30 below which establishes the implications \[\text{(i)}\Longleftrightarrow\text{(ii)}\Longleftarrow\text{(iii)}. \tag{5.29}\] The final implication requires the construction of the generalised metric \(V_{2}^{+}\) on \(\underline{E}_{2}\). The first step in this construction is to show that one can lift \(V_{1}^{+}\) to \(\widetilde{W}_{1}^{+}\subset K_{1}^{\perp}\cap\Phi^{-1}(K_{2}^{\perp})\). Proposition 5.31 below establishes this result. The remainder of the proof is devoted to showing that the induced pre-\(K_{2}\)-transverse generalised metric \(W_{2}\coloneqq K_{2}\oplus\Phi(\widetilde{W}_{1}^{+})\) is a \(K_{2}\)-transverse generalised metric, and hence reduces to a generalised metric \(V_{2}^{+}\). **Lemma 5.30**.: Suppose that \(E_{1}\) and \(E_{2}\) are exact Courant algebroids over \(M_{1}\) and \(M_{2}\), respectively, isomorphic via \(\Phi\), such that \(\underline{E}_{1}\) and \(\underline{E}_{2}\) are T-duality related with T-duality relation \(R(\Phi)\). Assume that \(\underline{E}_{1}\) and \(\underline{E}_{2}\) are endowed with generalised metrics \(V_{1}^{+}\) and \(V_{2}^{+}\), respectively, such that \((\underline{E}_{1},V_{1}^{+})\) and \((\underline{E}_{2},V_{2}^{+})\) are geometrically T-dual. Then the implications (5.29) hold. Proof.: To show that (i) implies (ii), we note that \[K_{2}^{\perp}\cap\Phi(K_{1})\subseteq K_{2}\implies\Phi^{-1}(K_{2}^{\perp})\cap K_{1}\subseteq\Phi^{-1}(K_{2})\cap K_{1}\.\] Since \(K_{2}^{\perp}\supseteq K_{2}\), it follows that \[\Phi^{-1}(K_{2}^{\perp})\cap K_{1}=\Phi^{-1}(K_{2})\cap K_{1}\.\] By Theorem 5.8, \(\Phi^{-1}(K_{2})\cap K_{1}\) has constant rank.
Using \(\operatorname{rk}(K_{1})=\operatorname{rk}(K_{2})\) (condition (5.1)), we then obtain \[\operatorname{rk}\bigl{(}\Phi(K_{1})\cap K_{2}^{\perp}\bigr{)}=\operatorname{rk}\bigl{(}\Phi(K_{1})\bigr{)}+\operatorname{rk}(K_{2}^{\perp})-\operatorname{rk}\bigl{(}\Phi(K_{1})+K_{2}^{\perp}\bigr{)}=\operatorname{rk}\bigl{(}\Phi(K_{1})\bigr{)}+\operatorname{rk}(K_{2}^{\perp})-\bigl{(}\operatorname{rk}(E_{2})-\operatorname{rk}(\Phi(K_{1}^{\perp})\cap K_{2})\bigr{)}=\operatorname{rk}(E_{2})-\operatorname{rk}(E_{2})+\operatorname{rk}\bigl{(}\Phi(K_{1}^{\perp})\cap K_{2}\bigr{)}=\operatorname{rk}\bigl{(}\Phi(K_{1}^{\perp})\cap K_{2}\bigr{)}\.\] Hence \[\operatorname{rk}\bigl{(}\Phi(K_{1})\cap K_{2}\bigr{)}=\operatorname{rk}\bigl{(}\Phi(K_{1})\cap K_{2}^{\perp}\bigr{)}=\operatorname{rk}\bigl{(}\Phi(K_{1}^{\perp})\cap K_{2}\bigr{)}\,\] where the first equality uses condition (i), since \(\Phi(K_{1})\cap K_{2}^{\perp}\subseteq\Phi(K_{1})\cap K_{2}\subseteq\Phi(K_{1})\cap K_{2}^{\perp}\). One always has \(\Phi(K_{1}^{\perp})\cap K_{2}\supseteq\Phi(K_{1})\cap K_{2}\). Since their ranks are the same, this becomes an equality. In particular, we obtain condition (ii). The converse implication (ii) \(\Longrightarrow\) (i) follows from the same argument. Finally, suppose that (i) does not hold. Then there is an element \(k_{1}\in K_{1}\cap\Phi^{-1}(K_{2}^{\perp})\) such that \(\Phi(k_{1})\notin K_{2}\). Then by Proposition 4.17, \(R(\Phi)\) cannot be a generalised isometry. **Proposition 5.31**.: Suppose that \(E_{1}\) and \(E_{2}\) are exact Courant algebroids over \(M_{1}\) and \(M_{2}\), respectively, isomorphic via \(\Phi\), which are endowed with involutive isotropic subbundles \(K_{1}\) and \(K_{2}\), respectively, of the same rank, such that \(\underline{E}_{1}\) and \(\underline{E}_{2}\) are T-duality related. If either of the equivalent conditions (i) or (ii) of Theorem 5.27 holds, then there exists a splitting \(s_{0}\colon K_{1}^{\perp}/K_{1}\to K_{1}^{\perp}\) of the short exact sequence \[0\longrightarrow K_{1}\longrightarrow K_{1}^{\perp}\longrightarrow K_{1}^{\perp}/K_{1}\longrightarrow 0 \tag{5.32}\] such that \(\operatorname{im}(\Phi\circ s_{0})\subseteq K_{2}^{\perp}\), which is unique up to elements of \(\Phi^{-1}(K_{2})\cap K_{1}\). Proof.: Take an arbitrary splitting \(s\colon K_{1}^{\perp}/K_{1}\to K_{1}^{\perp}\) of the short exact sequence (5.32). If \(\operatorname{im}(\Phi\circ s)\nsubseteq K_{2}^{\perp}\), then there exists \([e]\in K_{1}^{\perp}/K_{1}\) and \(k_{2}\in K_{2}\) such that \(\left\langle\Phi(s([e])),k_{2}\right\rangle_{E_{2}}\neq 0\). We define the vector bundle map \(\beta_{s}\) over the diffeomorphism \(\varphi\) by \[\beta_{s}=\flat_{E_{2}}\circ\Phi\circ s\colon K_{1}^{\perp}/K_{1}\longrightarrow K_{2}^{*}\,\] where \(\flat_{E_{2}}\colon E_{2}\to E_{2}^{*}\) is the isomorphism induced by the pairing \(\left\langle\,\cdot\,,\,\cdot\,\right\rangle_{E_{2}}\) on \(E_{2}\): \[\iota_{k_{2}}\,\beta_{s}([e_{1}])\coloneqq\left\langle\Phi(s([e_{1}])),k_{2}\right\rangle_{E_{2}}\,\] for \([e_{1}]\in K_{1}^{\perp}/K_{1}\) and \(k_{2}\in K_{2}\). If \(k_{2}\in K_{2}\cap\Phi(K_{1})\), then \[\left\langle\Phi(s([e_{1}])),k_{2}\right\rangle_{E_{2}}=0\] since \(s([e_{1}])\in K_{1}^{\perp}\). Thus we can further define a map \(\bar{\beta}_{0}\) by quotienting out this subspace: \[\bar{\beta}_{0}\colon K_{1}^{\perp}/K_{1}\longrightarrow\left(K_{2}/(K_{2}\cap\Phi(K_{1}))\right)^{*}\,\qquad\iota_{[k_{2}]}\bar{\beta}_{0}([e_{1}])\coloneqq\left\langle\Phi(s([e_{1}])),k_{2}\right\rangle_{E_{2}}\,\] for \([e_{1}]\in K_{1}^{\perp}/K_{1}\) and \([k_{2}]\in K_{2}/(K_{2}\cap\Phi(K_{1}))\).
Similarly we define \[\bar{\beta}\colon K_{1}\longrightarrow\left(K_{2}/(K_{2}\cap\Phi(K_{1})) \right)^{*}\,\qquad\iota_{[k_{2}]}\bar{\beta}(k_{1})\coloneqq\left\langle\Phi(k_{1}),k_{2 }\right\rangle_{E_{2}}\,\] for \(k_{1}\in K_{1}\) and \([k_{2}]\in K_{2}/(K_{2}\cap\Phi(K_{1}))\), which is again well-defined. We show that \(\bar{\beta}\) is surjective: the kernel of \(\bar{\beta}\) is given by \[\ker(\bar{\beta})=\{\,k_{1}\in K_{1}\,\mid\,\langle\Phi(k_{1}),k_{2}\rangle_{E_{ 2}}=0\text{ for all }k_{2}\in K_{2}\,\}=\Phi^{-1}(K_{2}^{\perp})\cap K_{1}. \tag{5.33}\] By assumption, \(\Phi^{-1}(K_{2}^{\perp})\cap K_{1}\subseteq\Phi^{-1}(K_{2})\), hence16 Footnote 16: See the start of the proof of Lemma 5.30. \[\ker(\bar{\beta})=\Phi^{-1}(K_{2}^{\perp})\cap K_{1}=\Phi^{-1}(K_{2})\cap K_{1 }\.\] Recalling that \(\Phi^{-1}(K_{2})\cap K_{1}\) has constant rank and that \(\operatorname{rk}(K_{1})=\operatorname{rk}(K_{2})\), it thus follows that \[\operatorname{rk}(\bar{\beta}) =\operatorname{rk}(K_{1})-\operatorname{rk}\bigl{(}\Phi^{-1}(K_{2 })\cap K_{1}\bigr{)}\] \[=\operatorname{rk}(K_{2})-\operatorname{rk}\bigl{(}\Phi(K_{1}) \cap K_{2}\bigr{)}=\operatorname{rk}\bigl{(}(K_{2}/(K_{2}\cap\Phi(K_{1})))^{*} \bigr{)}\.\] Hence \(\bar{\beta}\) is onto, and we can find a map \(\alpha\colon K_{1}^{\perp}/K_{1}\to K_{1}\) which fits into the diagram (5.34) We can now define a new splitting \(s_{0}\colon K_{1}^{\perp}/K_{1}\to K_{1}^{\perp}\) of the short exact sequence (5.32) by \[s_{0}\coloneqq s-\alpha. \tag{5.35}\] It follows that \(\operatorname{im}(\Phi\circ s_{0})\subseteq K_{2}^{\perp}\), since for every \(k_{2}\in K_{2}\) and \([e_{1}]\in K_{1}^{\perp}/K_{1}\) we can compute \[\langle\Phi(s_{0}([e_{1}])),k_{2}\rangle_{E_{2}} =\langle\Phi(s([e_{1}])),k_{2}\rangle_{E_{2}}-\langle\Phi(\alpha( [e_{1}])),k_{2}\rangle_{E_{2}}\] \[=\iota_{[k_{2}]}\bar{\beta}_{0}([e_{1}])-\iota_{[k_{2}]}\bar{ \beta}(\alpha([e_{1}]))\] \[=\iota_{[k_{2}]}\bar{\beta}_{0}([e_{1}])-\iota_{[k_{2}]}\bar{ \beta}_{0}([e_{1}])=0\.\] Hence the new splitting is a map \(s_{0}\colon K_{1}^{\perp}/K_{1}\to\Phi^{-1}(K_{2}^{\perp})\cap K_{1}^{\perp}\). To show uniqueness up to elements of \(\Phi^{-1}(K_{2})\cap K_{1}\), note that there are two sources of non-uniqueness for \(s_{0}\): one from the choice of the initial splitting \(s\colon K_{1}^{\perp}/K_{1}\to K_{1}^{\perp}\), and one from the choice of \(\alpha\) fitting the diagram (5.34). Since the map \(\alpha\) is given by the right inverse of \(\bar{\beta}\), it is unique up to elements of the kernel of \(\bar{\beta}\), which by Equation (5.33) is \(\Phi^{-1}(K_{2})\cap K_{1}\). That is, if \(\alpha\) and \(\alpha^{\prime}\) close the diagram (5.34), then there are induced splittings \(s_{0}\) and \(s_{0}^{\prime}\) respectively defined by Equation (5.35), and \[s_{0}-s_{0}^{\prime}=\alpha-\alpha^{\prime}\colon K_{1}^{\perp}/K_{1}\longrightarrow \Phi^{-1}(K_{2})\cap K_{1}\.\] Finally, suppose we started with different splittings \(s,s^{\prime}\colon K_{1}^{\perp}/K_{1}\to K_{1}^{\perp}\). Using the above construction, we obtain splittings \(s_{0},s_{0}^{\prime}\colon K_{1}^{\perp}/K_{1}\to\Phi^{-1}(K_{2}^{\perp})\cap K _{1}^{\perp}\). These splittings differ by elements of \(K_{1}\), hence \[s_{0}-s_{0}^{\prime}\colon K_{1}^{\perp}/K_{1}\longrightarrow\Phi^{-1}(K_{2}^ {\perp})\cap K_{1}=\Phi^{-1}(K_{2})\cap K_{1}\,\] giving the required uniqueness. Proof of Theorem 5.27.: Assume that condition (i) of Theorem 5.27 holds. 
By Theorem 4.30, there is a unique \(K_{1}\)-transverse generalised metric \(W_{1}\) on \(E_{1}\) such that \(Q(K_{1})\) is a regular transverse generalised isometry between \(W_{1}\) and \(V_{1}^{+}\), using an arbitrary lift \(\widetilde{W}_{1}^{+}\) of \(W_{1}^{+}=W_{1}/K_{1}\) to \(E_{1}\). From Proposition 5.31 it follows that we can choose \(\widetilde{W}_{1}^{+}\) such that \(\widetilde{W}_{1}^{+}\subseteq\Phi^{-1}(K_{2}^{\perp})\cap K_{1}^{\perp}\). Set \(\widetilde{W}_{2}^{+}\coloneqq\Phi(\widetilde{W}_{1}^{+})\subseteq K_{2}^{\perp}\). By the uniqueness of the splitting \(s_{0}\colon K_{1}^{\perp}/K_{1}\to K_{1}^{\perp}\) up to elements in \(\Phi^{-1}(K_{2})\cap K_{1}\), \(\widetilde{W}_{2}^{+}\) is unique up to elements of \(K_{2}\cap\Phi(K_{1})\). Thus we may uniquely define the bundle \[W_{2}=\widetilde{W}_{2}^{+}\oplus K_{2}\.\] The sum is direct: if \(k_{2}\in\widetilde{W}_{2}^{+}\cap K_{2}\), then \(\Phi^{-1}(k_{2})\in\widetilde{W}_{1}^{+}\). Since \(\widetilde{W}_{1}^{+}\cap K_{1}=\{\,0\,\}\), it follows that \(\Phi^{-1}(k_{2})\notin K_{1}\) whenever \(k_{2}\neq 0\). But this contradicts condition (ii) unless \(k_{2}=0\). It follows that \(W_{2}\) is a pre-\(K_{2}\)-transverse generalised metric on \(E_{2}\). This can be seen by noting firstly that \(K_{2}\subset W_{2}\subset K_{2}^{\perp}\), since \(\widetilde{W}_{1}^{+}\subset\Phi^{-1}(K_{2}^{\perp})\). Moreover, for every \(w_{2}\in W_{2}\) such that \(w_{2}\notin K_{2}\), it follows that \(w_{2}\in\widetilde{W}_{2}^{+}\), hence there is an element \(w_{1}\in\widetilde{W}_{1}^{+}\) such that \(w_{2}=\Phi(w_{1})\). It then follows that17 Footnote 17: Here we abuse notation by omitting mention of the diffeomorphism \(\varphi\). \[\left\langle w_{2},w_{2}\right\rangle_{E_{2}}=\left\langle w_{1},w_{1}\right\rangle_{E_{1}}>0\.\] Thus \(\operatorname{gr}(\Phi)\) is a regular transverse generalised isometry between \(W_{1}\) and \(W_{2}\). From Corollary 4.35 it follows that \(\operatorname{gr}(\Phi)\circ Q(K_{1})^{\top}\) is a transverse generalised isometry between \(V_{1}^{+}\) and \(W_{2}\). We shall now show that \(W_{2}\) is a \(K_{2}\)-transverse generalised metric which descends to the quotient. Since \(V_{1}^{+}\) is \(D_{1}\)-invariant, for every \(\underline{w}_{1}\in\mathsf{\Gamma}(V_{1}^{+})\) and \(\underline{X}\in\mathsf{\Gamma}(D_{1})\) it follows by definition that \[[\![\underline{\sigma}_{1}(\underline{X}),\underline{w}_{1}]\!]_{\underline{E}_{1}}-\underline{\rho}_{E_{1}}^{*}\big{(}\underline{\sigma}_{1}^{*}([\![\underline{\sigma}_{1}(\underline{X}),\underline{\sigma}_{1}(\underline{\rho}_{E_{1}}(\underline{w}_{1}))]\!]_{\underline{E}_{1}})\big{)}\ \in\ \mathsf{\Gamma}(V_{1}^{+}). \tag{5.36}\] Take a subset of sections \(\{\,\underline{w}_{j}\,\}\) spanning \(V_{1}^{+}\) pointwise. These lift to basic sections \(\{\,w_{j}\,\}\) spanning \(\widetilde{W}_{1}^{+}\) pointwise. Because of the bisubmersion condition (5.13), we may take a set of \(K_{1}\)-projectable vector fields \(\{\,X_{i}\,\}\) spanning \(\rho_{E_{1}}(\Phi^{-1}(K_{2}))\) pointwise. Since \(\sigma_{1}\) and \(\sigma_{2}\) are compatible, it follows that \[[\![\sigma_{1}^{\prime}(X_{i}),w_{j}]\!]_{E_{1}}=[\![\sigma_{1}(X_{i}),w_{j}]\!]_{E_{1}}-\rho_{E_{1}}^{*}\big{(}\sigma_{1}^{*}([\![\sigma_{1}(X_{i}),\sigma_{1}(\rho_{E_{1}}(w_{j}))]\!]_{E_{1}})\big{)}. \tag{5.37}\] If \(X\in\mathsf{\Gamma}(TM_{1})\) is projectable, then \(\sigma_{1}(X)\) is a basic section, since \(\sigma_{1}\) is an adapted splitting.
Thus since the sections \(\sigma_{1}(X_{i})\) and \(w_{j}\) are basic, the right-hand side of Equation (5.37) is basic18 and descends to the quotient; by Equation (5.36), it moreover descends to a section of \(V_{1}^{+}\). It follows that the right-hand side of Equation (5.37) is a section of \(W_{1}\), hence \([\![\sigma_{1}^{\prime}(X_{i}),w_{j}]\!]_{E_{1}}\in\mathsf{\Gamma}(\widetilde{W}_{1}^{+})\oplus\mathsf{\Gamma}(K_{1})\). Footnote 18: Recall that \([\![\sigma_{1}^{\prime}(X_{i}),w_{j}]\!]_{E_{1}}\in\mathsf{\Gamma}(\widetilde{W}_{1}^{+})\oplus\mathsf{\Gamma}(\Phi^{-1}(K_{2})\cap K_{1})\). Suppose that \([\![\sigma_{1}^{\prime}(X_{i}),w_{j}]\!]_{E_{1}}\in\mathsf{\Gamma}(K_{1})\). Recall that \(X_{i}\in\mathsf{\Gamma}(\rho_{E_{1}}(\Phi^{-1}(K_{2})))\) and \(\sigma_{1}^{\prime}\) is the splitting induced by \(\sigma_{1}\). Therefore \(\sigma_{1}^{\prime}(X_{i})\in\mathsf{\Gamma}(\Phi^{-1}(K_{2}))\), and since \(w_{j}\in\mathsf{\Gamma}(\Phi^{-1}(K_{2}^{\perp})\cap K_{1}^{\perp})\) it follows from Equation (2.25) that \[[\![\sigma_{1}^{\prime}(X_{i}),w_{j}]\!]_{E_{1}}\ \in\ \mathsf{\Gamma}\big{(}\Phi^{-1}(K_{2}^{\perp})\cap K_{1}\big{)}=\mathsf{\Gamma}\big{(}\Phi^{-1}(K_{2})\cap K_{1}\big{)}\,\] with the last equality following from condition (i). We must therefore have \[\llbracket\sigma^{\prime}_{1}(X_{i}),w_{j}\rrbracket_{E_{1}}\ \in\ \mathsf{\Gamma}(\widetilde{W}_{1}^{+})\oplus\mathsf{\Gamma}\big{(}\Phi^{-1}(K_{2})\cap K_{1}\big{)}. \tag{5.38}\] The final piece of the puzzle is found by noting that, since \(\rho_{E_{1}}\) is injective on \(\Phi^{-1}(K_{2})\) as shown in Lemma 5.11, it follows that \(\{\,\sigma^{\prime}_{1}(X_{i})\,\}\) is a set of sections spanning \(\Phi^{-1}(K_{2})\) pointwise. Let us denote these sections by \(k_{i}\coloneqq\sigma^{\prime}_{1}(X_{i})\). Thus for any \(k^{\prime}\in\mathsf{\Gamma}(\Phi^{-1}(K_{2}))\) and \(w\in\mathsf{\Gamma}(\widetilde{W}_{1}^{+})\) there are expansions \(k^{\prime}=\sum_{i}\,f_{i}\,k_{i}\) and \(w=\sum_{j}\,h_{j}\,w_{j}\) for some functions \(f_{i},h_{j}\in C^{\infty}(M_{1})\) such that the sums are locally finite. Using the anchored Leibniz rule (2.3) and Equation (2.6) we may then write \[\llbracket k^{\prime},w\rrbracket_{E_{1}}=\sum_{i,j}\,\Big{(}f_{i}\,h_{j}\,\llbracket k_{i},w_{j}\rrbracket_{E_{1}}-h_{j}\,\big{(}\rho_{E_{1}}(w_{j})\cdot f_{i}\big{)}\,k_{i}+\big{(}\rho_{E_{1}}(f_{i}\,k_{i})\cdot h_{j}\big{)}\,w_{j}+h_{j}\,\left\langle k_{i},w_{j}\right\rangle_{E_{1}}\,\mathcal{D}_{E_{1}}f_{i}\Big{)}\.\] The first term lives in \(\mathsf{\Gamma}(\widetilde{W}_{1}^{+})\oplus\mathsf{\Gamma}(\Phi^{-1}(K_{2})\cap K_{1})\) by Equation (5.38), the second term in \(\mathsf{\Gamma}(\Phi^{-1}(K_{2}))\), the third term in \(\mathsf{\Gamma}(\widetilde{W}_{1}^{+})\), and the final term is zero since \(\widetilde{W}_{1}^{+}\subset\Phi^{-1}(K_{2}^{\perp})\).
Hence \[\llbracket\mathsf{\Gamma}(\Phi^{-1}(K_{2})),\mathsf{\Gamma}(\widetilde{W}_{1}^{+})\rrbracket_{E_{1}}\ \subset\ \mathsf{\Gamma}(\widetilde{W}_{1}^{+})+\mathsf{\Gamma}\big{(}\Phi^{-1}(K_{2})\big{)}\.\] Thus \[\llbracket\mathsf{\Gamma}(K_{2}),\mathsf{\Gamma}(W_{2})\rrbracket_{E_{2}}=\llbracket\mathsf{\Gamma}(K_{2}),\mathsf{\Gamma}(\widetilde{W}_{2}^{+})\rrbracket_{E_{2}}+\llbracket\mathsf{\Gamma}(K_{2}),\mathsf{\Gamma}(K_{2})\rrbracket_{E_{2}}\ \subset\ \mathsf{\Gamma}(\widetilde{W}_{2}^{+})+\mathsf{\Gamma}(K_{2})=\mathsf{\Gamma}(W_{2})\, \tag{5.39}\] where we have also used the property that \(K_{2}\) is involutive. Equation (5.39) shows that \(W_{2}\) is \(K_{2}\)-invariant, so by Proposition 2.33 it descends to a subbundle \(V_{2}^{+}\coloneqq\natural_{E_{2}}(W_{2})\) of \(\underline{E}_{2}\). We know from Theorem 4.30 that \(V_{2}^{+}\) defines a generalised metric on \(\underline{E}_{2}\) such that \(Q(K_{2})\) is a regular transverse generalised isometry between \(W_{2}\) and \(V_{2}^{+}\). Finally, since \[Q(K_{2})\diamond\big{(}\mathrm{gr}(\Phi)\circ Q(K_{1})^{\top}\big{)}=\{\,(\natural_{E_{1}}(e),\Phi(e),\Phi(e),\natural_{E_{2}}(\Phi(e)))\,\mid\,e\in\Phi^{-1}(K_{2}^{\perp})\cap K_{1}^{\perp}\,\}\,\] and since \(\widetilde{W}_{1}^{\pm}\subseteq\Phi^{-1}(K_{2}^{\perp})\cap K_{1}^{\perp}\), the rank condition (4.34) is satisfied. It follows that \(Q(K_{2})\circ\mathrm{gr}(\Phi)\circ Q(K_{1})^{\top}=R(\Phi)\) is a generalised isometry between \(V_{1}^{+}\) and \(V_{2}^{+}\). Thus \((\underline{E}_{1},V_{1}^{+})\) and \((\underline{E}_{2},V_{2}^{+})\) are geometrically T-dual. Uniqueness of \(V_{2}^{+}\) follows by Corollary 4.20. **Remark 5.40** (**Classification of T-duality**).: In Theorem 5.27 we are able to construct geometrically T-dual backgrounds because of the bisubmersion condition (5.13). This leads us to speculate that geometrically T-dual Courant algebroids can only be considered over Hausdorff Morita equivalent foliated manifolds, i.e. they might represent a subset of this equivalence class. This may lead to a classification of T-dual backgrounds. **Remark 5.41**.: So far we have not discussed the dilaton field of the string background. In order to include the dilaton in our picture, similarly to [12], we should discuss how divergence operators on Courant algebroids behave under transverse generalised isometries. We defer this to future work.
#### 5.2.1. Geometric T-duality for Standard Courant Algebroids
We shall now specialise Theorem 5.27 to the split case where \((\mathbb{T}M_{i},H_{i})\) are twisted standard Courant algebroids with Dorfman bracket characterised by \(H_{i}\in\Omega^{3}_{\mathrm{cl}}(M_{i})\), for \(i=1,2\). Thus, as in Section 5.1.1, the isomorphism \(\Phi\) over \(\varphi\in\mathsf{Diff}(M_{1},M_{2})\) decomposes as \(\Phi=\overline{\varphi}\circ\,\mathrm{e}\,^{B}\,,\) where \(B\in\Omega^{2}(M_{1})\). Suppose that \(H_{1}\), \(H_{2}\) and \(B\) satisfy the conditions of Proposition 5.15. Given foliations \(\mathcal{F}_{1}\) and \(\mathcal{F}_{2}\) of \(M_{1}\) and \(M_{2}\) respectively, the \(B\)-field is completely characterised by **Proposition 5.42**.: Let \((\mathbb{T}M_{1},H_{1})\) and \((\mathbb{T}M_{2},H_{2})\) be twisted standard Courant algebroids, isomorphic via \(\Phi\) as above, and suppose that the assumptions of Proposition 5.31 hold.
Then \(B\in{\Omega}^{2}(M_{1})\) takes the form \[B=B_{\rm ver}+B_{\rm hor}+B_{\rm mix}\, \tag{5.43}\] where \[B_{\rm mix}\ \in\ {\sf\Gamma}\big{(}(T^{*}{\mathcal{F}}_{1}\cap\varphi^{*}{ \rm Ann}(T{\mathcal{F}}_{2}))\wedge({\rm Ann}(T{\mathcal{F}}_{1})\cap\varphi^ {*}T^{*}{\mathcal{F}}_{2})\big{)}\] gives an isomorphism between \(T{\mathcal{F}}_{1}\cap(\varphi^{-1})_{*}{\rm Ann}^{*}(T{\mathcal{F}}_{2})\) and \({\rm Ann}(T{\mathcal{F}}_{1})\cap\varphi^{*}T^{*}{\mathcal{F}}_{2}\), whereas \[B_{\rm ver}\in{\sf\Gamma}\left(\bigwedge^{2}T^{*}{\mathcal{F}}_{1}\right) \qquad\text{and}\qquad B_{\rm hor}\in{\sf\Gamma}\left(\bigwedge^{2}{\rm Ann}( T{\mathcal{F}}_{1})\right)\] vanish on \[{\sf\Gamma}\big{(}(T^{*}{\mathcal{F}}_{1}\cap\varphi^{*}T^{*}{\mathcal{F}}_{2 })\wedge(T^{*}{\mathcal{F}}_{1}\cap\varphi^{*}{\rm Ann}(T{\mathcal{F}}_{2})) \big{)}\] and \[{\sf\Gamma}\big{(}({\rm Ann}(T{\mathcal{F}}_{1})\cap\varphi^{*}{\rm Ann}(T{ \mathcal{F}}_{2}))\wedge({\rm Ann}(T{\mathcal{F}}_{1})\cap\varphi^{*}T^{*}{ \mathcal{F}}_{2})\big{)}\,\] respectively. Proof.: We find the conditions \(B\) must meet to satisfy the conditions (i) or (ii) of Theorem 5.27. Write \(N^{*}{\mathcal{F}}={\rm Ann}(T{\mathcal{F}})\). We can decompose \[B=B_{\rm ver}+B_{\rm hor}+B_{\rm mix}\] where \[B_{\rm ver}\in{\sf\Gamma}(\bigwedge^{2}T^{*}{\mathcal{F}}_{1})\,\quad B_{\rm hor }\in{\sf\Gamma}(\bigwedge^{2}N^{*}{\mathcal{F}}_{1})\qquad\text{and}\qquad B_{ \rm mix}\in{\sf\Gamma}(N^{*}{\mathcal{F}}_{1}\wedge T^{*}{\mathcal{F}}_{1})\.\] With \(\Phi=\overline{\varphi}\circ\,{\rm e}\,^{B}\,\) we have \[\Phi^{-1}(K_{2})=\left\{\,(\varphi^{-1})_{*}v_{2}-\iota_{(\varphi^{-1})_{*}v_ {2}}B\,\mid\,v_{2}\in T{\mathcal{F}}_{2}\,\right\}\.\] Thus \[\Phi^{-1}(K_{2})\cap K_{1}^{\perp}=\left\{\,(\varphi^{-1})_{*}v_{2}-\iota_{( \varphi^{-1})_{*}v_{2}}B\,\mid\,\,v_{2}\in T{\mathcal{F}}_{2}\,\ \iota_{(\varphi^{-1})_{*}v_{2}}B\in{\rm Ann}(T{ \mathcal{F}}_{1})\,\right\}\.\] Similarly \[K_{2}^{\perp}\cap\Phi(K_{1})=\left\{\,\varphi_{*}v_{1}-(\varphi^{-1})^{*}( \iota_{v_{1}}B)\,\mid\,\,v_{1}\in T{\mathcal{F}}_{1}\,\ (\varphi^{-1})^{*}(\iota_{v_{1}}B)\in{\rm Ann}(T{ \mathcal{F}}_{2})\,\right\}\.\] Conditions (i) and (ii) of Theorem 5.27 state that these must be respectively contained in \(K_{1}\) and \(K_{2}\). The conditions given in the statement of the proposition ensure that this happens. For example, if \(v_{1}\in T{\mathcal{F}}_{1}\cap(\varphi^{-1})_{*}N{\mathcal{F}}_{2}\), then \(v_{1}\in K_{1}\) but \(\varphi_{*}v_{1}\notin K_{2}\). Since \(\Phi=\overline{\varphi}\circ\,{\rm e}\,^{B}\,,\) it follows that \(\Phi(v_{1})\) cannot be in \(K_{2}^{\perp}\). The only way this can happen is if \(\iota_{v_{1}}B_{\rm mix}\in\varphi^{*}T^{*}{\mathcal{F}}_{2}\) and is non-zero. Since this happens for all \(v_{1}\in T{\mathcal{F}}_{1}\cap(\varphi^{-1})_{*}N{\mathcal{F}}_{2}\), \(B_{\rm mix}\) gives an isomorphism between \(T\mathcal{F}_{1}\cap(\varphi^{-1})_{*}N\mathcal{F}_{2}\) and \(N^{*}\mathcal{F}_{1}\cap\varphi^{*}T^{*}\mathcal{F}_{2}\). The other conditions follow from similar considerations. **Theorem 5.44**.: Let \((\mathbb{T}M_{1},H_{1})\) and \((\mathbb{T}M_{2},H_{2})\) be twisted standard Courant algebroids as in Proposition 5.15, isomorphic via \(\Phi=\overline{\varphi}\circ\mathrm{e}\,^{B}\), and suppose that \(B\) decomposes as in Equation (5.43). 
Take a generalised metric \(V_{1}^{+}\) on \(\mathbb{T}\mathcal{Q}_{1}\) defined by \(\underline{g}_{1}\in\mathsf{\Gamma}(\bigodot^{2}T^{*}\mathcal{Q}_{1})\) and \(\underline{b}_{1}\in\Omega^{2}(\mathcal{Q}_{1})\), and suppose that \[\pounds_{X}B=0\qquad\text{and}\qquad\pounds_{\underline{X}}\underline{g}_{1}=\pounds_{\underline{X}}\underline{b}_{1}=0 \tag{5.45}\] for all \(X\in\mathsf{\Gamma}\big{(}(\varphi^{-1})_{*}T\mathcal{F}_{2}\big{)}\) and \(\underline{X}\in\mathsf{\Gamma}(D_{1})\). Then there exists a unique generalised metric \(V_{2}^{+}\) on \(\mathbb{T}\mathcal{Q}_{2}\) such that \((\mathbb{T}\mathcal{Q}_{1},\underline{H}_{1},V_{1}^{+})\) and \((\mathbb{T}\mathcal{Q}_{2},\underline{H}_{2},V_{2}^{+})\) are geometrically T-dual, i.e. \(R(\Phi)\) is a generalised isometry between \(V_{1}^{+}\) and \(V_{2}^{+}\). Proof.: By Remarks 5.22 and 5.26, the conditions (5.45) ensure that the generalised metric \((\underline{g}_{1},\underline{b}_{1})\) is \(D_{1}\)-invariant and the adapted splittings are compatible. The decomposition (5.43) ensures that conditions (i) and (ii) of Theorem 5.27 are satisfied. Hence the conclusion follows by Theorem 5.27. **Remark 5.46**.: The condition \(\pounds_{X}B=0\) for \(X\in\mathsf{\Gamma}((\varphi^{-1})_{*}T\mathcal{F}_{2})\) together with item b) of Proposition 5.15 tells us that \[\iota_{X}H_{1}=\iota_{X}\,\mathrm{d}B=-\mathrm{d}(\iota_{X}B)\,\] similarly to Equation (1.4), i.e. it gives an additional symmetry of the sigma-model Wess-Zumino functional for \(M_{1}\), as discussed in Section 1.1. **Remark 5.47**.: Let us justify here how the distribution of T-duality directions is used to mimic a T-duality transformation. Notice that \(D_{1}\) is an involutive distribution of \(T\mathcal{Q}_{1}\) and hence, assuming Equation (5.45), it gives a Lie subalgebra of the isometries of \(\underline{g}_{1}\) and \(\underline{b}_{1}\). This follows by lifting vector fields in \(D_{1}\) to projectable vector fields in \((\varphi^{-1})_{*}T\mathcal{F}_{2}\) and using the Jacobi identity for the Lie bracket on \(TM_{1}\). To see why we named \(D_{1}\) the distribution of T-duality directions, consider a point \((q_{1},q_{2})\in\underline{C}\subset\mathcal{Q}_{1}\times\mathcal{Q}_{2}\), and take \(\underline{v}\in(D_{1})_{q_{1}}\). Let \(v\in\big{(}(\varphi^{-1})_{*}T\mathcal{F}_{2}\big{)}_{m_{1}}\) be such that \(\varpi_{1*}(v)=\underline{v}\), where \(\varpi_{1}(m_{1})=q_{1}\). Denote \(\varphi(m_{1})=m_{2}\), so that \(\varpi_{2}(m_{2})=q_{2}\). It follows that \(v\in(\varphi^{-1})_{*}T_{m_{2}}\mathcal{F}_{2}\cap\mathrm{Ann}^{*}(T_{m_{1}}\mathcal{F}_{1})\). By Proposition 5.42 we have \[\iota_{v}B=\iota_{v}B_{\mathrm{mix}}\ \in\ T_{m_{1}}^{*}\mathcal{F}_{1}\cap\varphi^{*}\mathrm{Ann}(T_{m_{2}}\mathcal{F}_{2})\.\] Thus the corresponding element in \(R(\Phi)_{(q_{1},q_{2})}\) is \[\big{(}\natural_{E_{1}}(v),\natural_{E_{2}}(\Phi(v))\big{)}=\big{(}\underline{v},\natural_{E_{2}}((\varphi^{-1})^{*}(\iota_{v}B))\big{)}\.\] This is the usual exchange of tangent and cotangent (momentum and winding) directions seen in T-duality. **Example 5.48** (**T-duality without Topology Change**).: Suppose that \(\varphi_{*}T\mathcal{F}_{1}=T\mathcal{F}_{2}\), and that \(R(\Phi)\) is a generalised isometry, i.e. \(\mathbb{T}\mathcal{Q}_{1}\) and \(\mathbb{T}\mathcal{Q}_{2}\) are geometrically T-dual. Since \(\varphi\) is foliation preserving, it descends to a diffeomorphism \(\underline{\varphi}\colon\mathcal{Q}_{1}\to\mathcal{Q}_{2}\).
Then \(R(\Phi)\) is supported on \(\mathrm{gr}(\underline{\varphi})\), and hence defines a Courant algebroid morphism. By [13, Proposition 5.4] there is a fibrewise injective map \(\underline{\Phi}\colon\mathrm{pr}_{1}(R(\Phi))\to\mathbb{T}\mathcal{Q}_{2}\) such that \[R(\Phi)=\left\{\,(\underline{e}_{1},\underline{\Phi}(\underline{e}_{1}))\,\mid\,\underline{e}_{1}\in\mathrm{pr}_{1}(R(\Phi))\,\right\}\,\] where \(\operatorname{pr}_{1}\colon R(\Phi)\to\mathbb{T}\mathcal{Q}_{1}\) is the projection to the first component. We now show that \(\operatorname{pr}_{1}(R(\Phi))=\mathbb{T}\mathcal{Q}_{1}\). For every \(\underline{e}_{1}\in\mathbb{T}\mathcal{Q}_{1}\), we know that using the splitting (5.35) we can find \(e_{1}\in\Phi^{-1}(K_{2}^{\perp})\cap K_{1}^{\perp}\) which reduces to \(\underline{e}_{1}\). Hence \(\operatorname{pr}_{1}\) is onto, \(R(\Phi)\) is a classical Courant algebroid morphism, and \(\underline{\Phi}\) is a Courant algebroid isomorphism. This can be viewed as a trivial T-duality, i.e. the Severa class is unchanged. Notice also that the T-duality direction distributions \(D_{1}\) and \(D_{2}\) are zero.

## 6. T-duality Relations and Doubled Geometry

In this Section we will show how the construction of Section 5 describes the geometry of various kinds of T-duality such as T-duality with a correspondence space, where the Buscher rules for torus bundles are perfectly reproduced by the generalised isometry of exact Courant algebroids, and generalised T-duality of doubled sigma-models whose target space is endowed with an almost para-Hermitian structure. In particular, we discuss explicitly the Buscher rules for arbitrary rank torus bundles, as well as the examples of T-duality for lens spaces in three dimensions and generalised T-duality for the doubled Heisenberg nilmanifold in six dimensions.

### T-duality for Correspondence Spaces

We briefly recall the definition of T-duality from [8] for principal torus bundles, and show that this nicely fits into our definition.

**Definition 6.1**.: Let \(\mathcal{Q}_{1}\) and \(\mathcal{Q}_{2}\) be principal \(\mathsf{T}^{k}\)-bundles over a common base manifold \(\mathcal{B}\), and let \(\underline{H}_{1}\in\Omega^{3}_{\mathsf{T}^{k}}(\mathcal{Q}_{1})\) and \(\underline{H}_{2}\in\Omega^{3}_{\mathsf{T}^{k}}(\mathcal{Q}_{2})\) be \(\mathsf{T}^{k}\)-invariant closed three-forms. Consider the fibred product \(M:=\mathcal{Q}_{1}\times_{\mathcal{B}}\mathcal{Q}_{2}\) giving the correspondence space diagram \[\begin{array}{ccccc} & & M & & \\ & {}^{\varpi_{1}}\!\!\swarrow & & \searrow\!{}^{\varpi_{2}} & \\ \mathcal{Q}_{1} & & & & \mathcal{Q}_{2} \\ & {}_{\pi_{1}}\!\!\searrow & & \swarrow\!{}_{\pi_{2}} & \\ & & \mathcal{B} & & \end{array} \tag{6.2}\] with \(M\) endowed with the closed three-form \(\varpi_{1}^{*}\,\underline{H}_{1}-\varpi_{2}^{*}\,\underline{H}_{2}\). Then \(\mathcal{Q}_{1}\) and \(\mathcal{Q}_{2}\) are _T-dual_ if there is a \(\mathsf{T}^{2k}\)-invariant two-form \(B\in\Omega^{2}_{\mathsf{T}^{2k}}(M)\) such that \[\varpi_{1}^{*}\,\underline{H}_{1}-\varpi_{2}^{*}\,\underline{H}_{2}=\mathrm{d}B\, \tag{6.3}\] and the smooth skew-symmetric map \[B\colon\mathfrak{t}_{1}^{k}\otimes\mathfrak{t}_{2}^{k}\longrightarrow\mathbb{R}\] is non-degenerate, where \(\mathfrak{t}_{i}^{k}=\ker(\varpi_{i*})\) for \(i=1,2\).

**Remark 6.4** (**Correspondence Spaces as Bisubmersions**).: The commutative diagram (6.2) represents a particular case of a bisubmersion, as discussed in Remark 5.12. The condition (5.13) is clearly satisfied, since the generators of the \(\mathfrak{t}_{1}^{k}\)-action commute with the generators of the \(\mathfrak{t}_{2}^{k}\)-action.
Here the \(\mathsf{T}^{k}\)-action whose orbits are the integral manifolds of \(\ker(\varpi_{2*})\) induces a \(\mathsf{T}^{k}\)-action on \(\mathcal{Q}_{1}\) which, for Definition 6.1 to apply, is assumed to be free and proper. The distribution induced by this \(\mathsf{T}^{k}\)-action on \(\mathcal{Q}_{1}\) defines the distribution of T-duality directions \(D_{1}\), whose integral manifolds are the fibres of \(\pi_{1}\colon\mathcal{Q}_{1}\to\mathcal{B}\). Similarly, \(\mathcal{Q}_{2}\) inherits a \(\mathsf{T}^{k}\)-action, and hence a distribution \(D_{2}\), whose foliation, given by the fibres of \(\pi_{2}\colon\mathcal{Q}_{2}\to\mathcal{B}\), is Hausdorff Morita equivalent to the foliation induced by \(D_{1}\).

Let us reinterpret this picture in the setting of Section 5. We start with

**Lemma 6.5**.: Let \(\mathcal{Q}_{1}\) and \(\mathcal{Q}_{2}\) be T-dual in the sense of Definition 6.1. Then there is a T-duality relation in the sense of Definition 5.3.

Proof.: It is enough to find Courant algebroids \(E_{1}\) and \(E_{2}\) over manifolds \(M_{1}\) and \(M_{2}\) respectively such that Definition 5.3 is satisfied. Take \(M_{1}=M_{2}=M=\mathcal{Q}_{1}\times_{\mathcal{B}}\mathcal{Q}_{2}\) with \(E_{1}=(\mathbb{T}M,\,\varpi_{1}^{*}\,\underline{H}_{1})\) and \(E_{2}=(\mathbb{T}M,\,\varpi_{2}^{*}\,\underline{H}_{2})\), along with \(K_{i}=\mathfrak{t}_{i}^{k}\) for \(i=1,2\). Since \(\mathcal{Q}_{1}\) and \(\mathcal{Q}_{2}\) are T-dual in the sense of Definition 6.1, there exists \(B\in\Omega^{2}_{\mathsf{T}^{2k}}(M)\) satisfying Equation (6.3) which is non-degenerate on \(\mathfrak{t}_{1}^{k}\otimes\mathfrak{t}_{2}^{k}\). Then \(E_{1}\) and \(E_{2}\) are isomorphic via \(\Phi=\,\mathrm{e}\,^{B}\) because of Equation (6.3). Moreover, \(\,\mathrm{e}\,^{-B}\left(\mathfrak{t}_{2}^{k}\right)\cap\mathfrak{t}_{1}^{k}=\{\,0\,\}\) has constant rank. Hence we may form the Courant algebroid relation \(R(\,\mathrm{e}\,^{B}\,)\colon E_{1}\dashrightarrow E_{2}\), which is supported on \(M=\mathcal{Q}_{1}\times_{\mathcal{B}}\mathcal{Q}_{2}\) by Lemma 5.7.

Note that the converse is not true, since our notion of T-duality related Courant algebroids imposes no condition on \(B\) other than the constant rank condition c) from Proposition 5.15, which is trivially satisfied for any two-form \(B\in\Omega^{2}(M)\) since \(\mathfrak{t}_{1}^{k}\cap\mathfrak{t}_{2}^{k}=\{\,0\,\}\). We will see that, upon introducing geometric data, the possible choices for \(B\) coincide with the \(B\)-field choice in Definition 6.1. This is exemplified through

**Proposition 6.6**.: Let \((\mathbb{T}\mathcal{Q}_{1},\underline{H}_{1})\) and \((\mathbb{T}\mathcal{Q}_{2},\underline{H}_{2})\) be twisted standard Courant algebroids with \(\underline{H}_{1}\in\Omega^{3}_{\mathsf{T}^{k}}(\mathcal{Q}_{1})\) and \(\underline{H}_{2}\in\Omega^{3}_{\mathsf{T}^{k}}(\mathcal{Q}_{2})\). Take a \(\mathfrak{t}_{1}^{k}\)-invariant19 generalised metric \(V_{1}^{+}\) on \((\mathbb{T}\mathcal{Q}_{1},\underline{H}_{1})\) and a \(\mathfrak{t}_{2}^{k}\)-invariant generalised metric \(V_{2}^{+}\) on \((\mathbb{T}\mathcal{Q}_{2},\underline{H}_{2})\), such that \((\mathbb{T}\mathcal{Q}_{1},\underline{H}_{1},V_{1}^{+})\) and \((\mathbb{T}\mathcal{Q}_{2},\underline{H}_{2},V_{2}^{+})\) are geometrically T-dual via \(R(\,\mathrm{e}\,^{B}\,)\), where \(B\) is \(\mathsf{T}^{2k}\)-invariant.20 Then \(\mathcal{Q}_{1}\) and \(\mathcal{Q}_{2}\) are T-dual in the sense of Definition 6.1.

Footnote 19: Here invariance is in the sense of Definition 5.19.
Footnote 20: Note that this is more symmetry than we require for our definition of T-duality.

Conversely, if \(\mathcal{Q}_{1}\) and \(\mathcal{Q}_{2}\) are T-dual in the sense of Definition 6.1, and there is a \(\mathfrak{t}_{1}^{k}\)-invariant generalised metric \(V_{1}^{+}\) on \((\mathbb{T}\mathcal{Q}_{1},\underline{H}_{1})\), then there exists a unique generalised metric \(V_{2}^{+}\) on \((\mathbb{T}\mathcal{Q}_{2},\underline{H}_{2})\) such that \((\mathbb{T}\mathcal{Q}_{1},\underline{H}_{1},V_{1}^{+})\) and \((\mathbb{T}\mathcal{Q}_{2},\underline{H}_{2},V_{2}^{+})\) are geometrically T-dual.

Proof.: Since \(R(\,\mathrm{e}\,^{B}\,)\) is a T-duality relation, it follows that Equation (6.3) is satisfied. Moreover, we have \((\mathfrak{t}_{1}^{k})^{*}\cap\mathrm{Ann}(\mathfrak{t}_{2}^{k})=(\mathfrak{t}_{1}^{k})^{*}\) and \((\mathfrak{t}_{2}^{k})^{*}\cap\mathrm{Ann}(\mathfrak{t}_{1}^{k})=(\mathfrak{t}_{2}^{k})^{*}\). Hence by Proposition 5.42, \(B_{\mathrm{mix}}\in\mathsf{\Gamma}\big((\mathfrak{t}_{1}^{k})^{*}\wedge(\mathfrak{t}_{2}^{k})^{*}\big)\) is non-degenerate. Therefore \(B\) is non-degenerate on \(\mathfrak{t}_{1}^{k}\otimes\mathfrak{t}_{2}^{k}\).

Conversely, if \(\mathcal{Q}_{1}\) and \(\mathcal{Q}_{2}\) are T-dual in the sense of Definition 6.1, then \(B\) decomposes as in Equation (5.43), and since \(B\) is \(\mathsf{T}^{2k}\)-invariant it follows that \(\pounds_{\mathfrak{t}_{2}^{k}}B=0\). The conclusion then follows by Theorem 5.44.

In this picture, we can also provide a further characterisation of the components of the three-form \(\underline{H}_{1}\in\Omega^{3}_{\mathrm{cl}}(\mathcal{Q}_{1})\) through

**Lemma 6.7**.: Let \((\mathbb{T}\mathcal{Q}_{1},\underline{H}_{1})\) and \((\mathbb{T}\mathcal{Q}_{2},\underline{H}_{2})\) be twisted standard Courant algebroids with \(\underline{H}_{1}\in\Omega^{3}_{\mathsf{T}^{k}}(\mathcal{Q}_{1})\) and \(\underline{H}_{2}\in\Omega^{3}_{\mathsf{T}^{k}}(\mathcal{Q}_{2})\), and let them be endowed with generalised metrics \(V_{1}^{+}\) and \(V_{2}^{+}\) such that they are geometrically T-dual. Then \(\varpi_{1}^{*}\,\underline{H}_{1}\) can be written as the sum of a doubly pulled back three-form \(H_{\mathcal{B}}\in\Omega^{3}(\mathcal{B})\) and a component determined by \[\iota_{X_{\mathsf{v}_{2}}}\varpi_{1}^{*}\,\underline{H}_{1}=\iota_{X_{\mathsf{v}_{2}}}\,\mathrm{d}B\, \tag{6.8}\] for any \(X_{\mathsf{v}_{2}}\in\mathsf{\Gamma}(\mathfrak{t}_{2}^{k})\).

Proof.: Definition 6.1 implies that \(\iota_{Y_{\mathsf{v}_{1}}}\,\iota_{X_{\mathsf{v}_{1}}}\varpi_{1}^{*}\,\underline{H}_{1}=0\) for all \(X_{\mathsf{v}_{1}},Y_{\mathsf{v}_{1}}\in\mathsf{\Gamma}(\mathfrak{t}_{1}^{k})\), since \(B\) is \(\mathsf{T}^{2k}\)-invariant and \(\iota_{X_{\mathsf{v}_{1}}}\,\varpi_{2}^{*}\,\underline{H}_{2}=0\) for all \(X_{\mathsf{v}_{1}}\in\mathsf{\Gamma}(\mathfrak{t}_{1}^{k})\). Hence only two components of \(\varpi_{1}^{*}\,\underline{H}_{1}\) are non-vanishing and, because of the \(\mathsf{T}^{k}\)-invariance of \(\underline{H}_{1}\), one of them must be the double pullback of a three-form on \(\mathcal{B}\). The other component is characterised as follows. Since any \(\underline{X}\in\mathsf{\Gamma}(D_{1})\) lifts to a section \(X_{\mathsf{v}_{2}}\in\mathsf{\Gamma}(\mathfrak{t}_{2}^{k})\), and \(\varpi_{2}^{*}\,\underline{H}_{2}\) is basic with respect to the orbits of \(\mathsf{T}^{k}\) whose tangent distribution is \(\mathfrak{t}_{2}^{k}\), that is it comes from the base \(\mathcal{Q}_{2}\), Equation (6.8) follows.
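The content of Lemma 6.7, together with Definition 6.1, can be checked symbolically in the simplest rank \(k=1\) case. The sketch below is our illustration, not part of the original construction: it implements a minimal exterior calculus on hypothetical local coordinates \((x,y,z,\tilde{z})\) of \(M=\mathcal{Q}_{1}\times_{\mathcal{B}}\mathcal{Q}_{2}\), with connection one-forms \(\theta_{1}=\mathrm{d}z+m\,x\,\mathrm{d}y\), \(\theta_{2}=\mathrm{d}\tilde{z}+n\,x\,\mathrm{d}y\) and \(B=-\theta_{1}\wedge\theta_{2}\), and it verifies Equations (6.3) and (6.8) together with the non-degeneracy of \(B\) on \(\mathfrak{t}_{1}^{k}\otimes\mathfrak{t}_{2}^{k}\); the flux numbers are chosen to exhibit the Chern class exchange discussed in Example 6.9 below.

```python
# Minimal sketch (our assumptions): coordinates indexed 0=x, 1=y, 2=z, 3=zt,
# with forms stored as {sorted index tuple: coefficient} dictionaries.
import sympy as sp

x, y, z, zt = sp.symbols('x y z zt')
m, n = sp.symbols('m n', integer=True)
VARS = (x, y, z, zt)

def wedge(a, b):
    """Wedge product of two forms."""
    out = {}
    for ia, ca in a.items():
        for ib, cb in b.items():
            idx = ia + ib
            if len(set(idx)) < len(idx):
                continue                      # repeated one-form direction
            perm = tuple(sorted(idx))
            sign = sp.Matrix([[1 if idx[r] == perm[c] else 0
                               for c in range(len(idx))]
                              for r in range(len(idx))]).det()
            out[perm] = out.get(perm, 0) + sign * ca * cb
    return {key: v for key, v in out.items() if sp.simplify(v) != 0}

def d(a):
    """Exterior derivative."""
    out = {}
    for idx, c in a.items():
        for i in range(len(VARS)):
            dc = sp.diff(c, VARS[i])
            if dc == 0 or i in idx:
                continue
            perm = tuple(sorted((i,) + idx))
            out[perm] = out.get(perm, 0) + (-1)**perm.index(i) * dc
    return {key: v for key, v in out.items() if sp.simplify(v) != 0}

def ins(i, a):
    """Interior product with the coordinate vector field along index i."""
    out = {}
    for idx, c in a.items():
        if i in idx:
            rest = tuple(j for j in idx if j != i)
            out[rest] = out.get(rest, 0) + (-1)**idx.index(i) * c
    return {key: v for key, v in out.items() if sp.simplify(v) != 0}

def same(a, b):
    return all(sp.simplify(a.get(key, 0) - b.get(key, 0)) == 0
               for key in set(a) | set(b))

theta1 = {(2,): 1, (1,): m * x}          # theta1 = dz  + m x dy on Q1
theta2 = {(3,): 1, (1,): n * x}          # theta2 = dzt + n x dy on Q2
B = {key: -v for key, v in wedge(theta1, theta2).items()}   # B = -theta1 ^ theta2

H1 = {(0, 1, 2): n}                      # varpi1* H1 = n dx^dy^dz
H2 = {(0, 1, 3): m}                      # varpi2* H2 = m dx^dy^dzt

# Equation (6.3): varpi1*H1 - varpi2*H2 = dB
assert same(d(B), {(0, 1, 2): n, (0, 1, 3): -m})
# Equation (6.8): contracting with d/dz, the generator of t_2 = ker(varpi_2*),
# recovers the pullback of the Chern class c_1(Q2) = d(theta2)
assert same(ins(2, H1), ins(2, d(B))) and same(ins(2, d(B)), d(theta2))
# non-degeneracy of B on t_1 (x) t_2: B(d/dz, d/dzt) = -1 != 0
assert same(ins(3, ins(2, B)), {(): -1})
print("correspondence-space relations (6.3) and (6.8) verified")
```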
**Example 6.9** (\(\boldsymbol{B}\)**-fields from Connections**).: Let \(\mathcal{Q}_{1}\) and \(\mathcal{Q}_{2}\) be principal \(\mathsf{T}^{k}\)-bundles over a common base manifold \(\mathcal{B}\), and let them be endowed with closed three-forms \(\underline{H}_{1}\in\Omega^{3}_{\mathsf{T}^{k}}(\mathcal{Q}_{1})\) and \(\underline{H}_{2}\in\Omega^{3}_{\mathsf{T}^{k}}(\mathcal{Q}_{2})\). Choose connections \(\theta_{1}\in\Omega^{1}(\mathcal{Q}_{1},\mathfrak{t}_{1}^{k})\) and \(\theta_{2}\in\Omega^{1}(\mathcal{Q}_{2},\mathfrak{t}_{2}^{k})\), respectively, and suppose that \[B=-\varpi_{1}^{*}\theta_{1}\wedge\varpi_{2}^{*}\theta_{2}\.\] When Lemma 6.7 holds, we can write \[\varpi_{1}^{*}\,\underline{H}_{1}=-\varpi_{1}^{*}\theta_{1}\wedge\varpi_{2}^{*}\big(c_{1}(\mathcal{Q}_{2})\big)+\varpi_{1}^{*}(\pi_{1}^{*}H_{\mathcal{B}})\, \tag{6.10}\] where \(c_{1}(\mathcal{Q}_{2})=\mathrm{d}\theta_{2}\) represents the Chern class of \(\mathcal{Q}_{2}\). This can be shown by letting \(X_{\mathsf{v}_{2}}\in\mathsf{\Gamma}(\mathfrak{t}_{2}^{k})\) be a generator of the \(\mathfrak{t}_{2}^{k}\)-action. Then \[\iota_{X_{\mathsf{v}_{2}}}\varpi_{1}^{*}\,\underline{H}_{1}=\mathrm{d}(\varpi_{2}^{*}\theta_{2})=\varpi_{2}^{*}\big(c_{1}(\mathcal{Q}_{2})\big)\,\] and Equation (6.10) follows. A symmetric argument holds for \(\varpi_{2}^{*}\,\underline{H}_{2}\). This is the key case considered in [8, 18] which provides criteria for the existence of T-dual pairs based on the components of the \(H\)-flux. It shows how T-duality in this case is characterised by an interchange of the Chern classes of the torus bundle with topological data associated to the \(H\)-flux.

The analogy with the correspondence space picture extends further when we consider the construction in [8, Section 3], which uses the Fourier-Mukai transform, of the isomorphism21 of \(\mathsf{T}^{k}\)-invariant sections \(\mathscr{R}\colon\mathsf{\Gamma}_{\mathsf{T}^{k}}(\mathbb{T}\mathcal{Q}_{1})\to\mathsf{\Gamma}_{\mathsf{T}^{k}}(\mathbb{T}\mathcal{Q}_{2})\). In that construction a \(\mathsf{T}^{k}\)-invariant section \(X_{1}+\xi_{1}\in\mathsf{\Gamma}_{\mathsf{T}^{k}}(\mathbb{T}\mathcal{Q}_{1})\) is lifted to a \(\mathsf{T}^{2k}\)-invariant section \(\hat{X}_{1}+\varpi_{1}^{*}\xi_{1}\in\mathsf{\Gamma}_{\mathsf{T}^{2k}}(\mathbb{T}M)\) whose image under \(B\)-field transformation is basic: \[(\varpi_{1}^{*}\xi_{1})(Y_{2})+B(\hat{X}_{1},Y_{2})=0\, \tag{6.11}\] for all \(Y_{2}\in\mathsf{\Gamma}(\mathfrak{t}_{2}^{k})\).

Footnote 21: In [8] the isomorphism is denoted by \(\varphi\), but here we use the symbol \(\mathscr{R}\) to distinguish it from the already used \(\varphi\) in the present paper.

The non-degeneracy of \(B\) on \(\mathfrak{t}_{1}^{k}\otimes\mathfrak{t}_{2}^{k}\) ensures that the lift \(\hat{X}_{1}\) satisfying Equation (6.11) is unique. The isomorphism \(\mathscr{R}\) is then constructed as the pushforward by \(\varpi_{2}\) of \(\hat{X}_{1}+\varpi_{1}^{*}\xi_{1}+\iota_{\hat{X}_{1}}B\): \[\mathscr{R}(X_{1}+\xi_{1})\coloneqq\varpi_{2*}\hat{X}_{1}+\varpi_{1}^{*}\xi_{1}+\iota_{\hat{X}_{1}}B\. \tag{6.12}\]

In our language, since the Courant algebroid relation \(R(\,\mathrm{e}\,^{B}\,)\) is a generalised isometry, it follows by Lemma 5.30 that conditions (i) and (ii) of Theorem 5.27 are satisfied.
Hence by Proposition 5.31 there is a splitting \(s\colon(\mathfrak{t}_{1}^{k})^{\perp}/\mathfrak{t}_{1}^{k}\to(\mathfrak{t}_{1} ^{k})^{\perp}\) such that \(\mathrm{im}(s)\subset\,\mathrm{e}^{\,-B}\left((\mathfrak{t}_{2}^{k})^{\perp}\right)\). This splitting is unique up to elements in \(\,\mathrm{e}^{\,-B}\left(\mathfrak{t}_{2}^{k}\right)\cap\mathfrak{t}_{1}^{k}= \{\,0\,\}\). It follows that, at a point \((m,q_{1})\in\mathrm{gr}(\varpi_{1})\subset M\times\mathcal{Q}_{1}\), an element \(v_{1}+\nu_{1}\in\mathbb{T}_{q_{1}}\mathcal{Q}_{1}\) lifts uniquely to an element \(\hat{v}_{1}+\varpi_{1}^{*}\nu_{1}\in\mathbb{T}_{m}M\) given by \[\hat{v}_{1}+\varpi_{1}^{*}\nu_{1}\coloneqq s_{m}\big{(}\mathscr{J}_{q_{1},m} (v_{1}+\nu_{1})\big{)}\,\] such that \[(\varpi_{1}^{*}\nu_{1})(v_{2})+B(\hat{v}_{1},v_{2})=0\,\] for all \(v_{2}\in(\mathfrak{t}_{2}^{k})_{m}\). When we restrict this construction to \(\mathsf{T}^{k}\)-invariant sections, the analogy becomes exact through **Proposition 6.13**.: The Courant algebroid relation \(R(\,\mathrm{e}^{\,B}\,)\) gives rise to an isomorphism \(\underline{\Phi}:\mathsf{\Gamma}_{\mathsf{T}^{k}}(\mathbb{T}\mathcal{Q}_{1}) \to\mathsf{\Gamma}_{\mathsf{T}^{k}}(\mathbb{T}\mathcal{Q}_{2})\) of \(C^{\infty}(\mathcal{B})\)-modules which coincides with the isomorphism \(\mathscr{R}\) defined by Equation (6.12). Proof.: Recall that in this case, a basic section \(\hat{\psi}_{1}=X_{1}+\xi_{1}\in\mathsf{\Gamma}_{\mathrm{bas}}\big{(}( \mathfrak{t}_{1}^{k})^{\perp}\big{)}\) satisfies \[[\![Y_{1},X_{1}+\xi_{1}]\!]_{\underline{H}_{1}}=[Y_{1},X_{1}]+\mathscr{L}_{Y_{ 1}}\xi_{1}\,\] for every \(Y_{1}\in\mathsf{\Gamma}(\mathfrak{t}_{1}^{k})\), where \(X_{1}\in\mathsf{\Gamma}_{\mathsf{T}^{k}}(TM)\), i.e. \(X_{1}\) is projectable with respect to \(\mathfrak{t}_{1}^{k}\), and \(\xi_{1}\in\mathsf{\Gamma}(\mathrm{Ann}(\mathfrak{t}_{1}^{k})).\) Suppose that \(\psi_{1}\in\mathsf{\Gamma}_{\mathsf{T}^{k}}(\mathbb{T}\mathcal{Q}_{1})\). We know that \(\mathscr{J}\) extends to an isomorphism of \(C^{\infty}(\mathcal{Q}_{1})\)-modules \(\mathscr{J}\colon\mathsf{\Gamma}(\mathbb{T}\mathcal{Q}_{1})\to\mathsf{\Gamma}_ {\mathrm{bas}}\big{(}(\mathfrak{t}_{1}^{k})^{\perp}\big{)}/\mathsf{\Gamma}( \mathfrak{t}_{1}^{k})\). By Proposition 5.31, the unique splitting \(s\colon(\mathfrak{t}_{1}^{k})^{\perp}/\mathfrak{t}_{1}^{k}\to(\mathfrak{t}_{1 }^{k})^{\perp}\) extends to a map between sections as \[s:\mathsf{\Gamma}\big{(}(\mathfrak{t}_{1}^{k})^{\perp}\big{)}\,\big{/}\, \mathsf{\Gamma}(\mathfrak{t}_{1}^{k})\longrightarrow\mathsf{\Gamma}\big{(}( \mathfrak{t}_{1}^{k})^{\perp}\big{)}\,\] since it covers the identity, and so we may uniquely define \[\hat{\psi}_{1}\coloneqq s\big{(}\mathscr{J}(\psi_{1})\big{)}\ \in\ \mathsf{\Gamma}_{\mathsf{T}^{2k}}\big{(}\,\mathrm{e}^{\,-B}\left(( \mathfrak{t}_{2}^{k})^{\perp}\right)\big{)}\.\] Since \(B\) is \(\mathsf{T}^{2k}\)-invariant, it follows that \(e^{B}(\hat{\psi}_{1})\in\mathsf{\Gamma}_{\mathrm{bas}}\big{(}(\mathfrak{t}_{2 }^{k})^{\perp}\big{)}\). Define \(\psi_{2}\in\mathsf{\Gamma}_{\mathsf{T}^{k}}(\mathbb{T}\mathcal{Q}_{2})\) by \[\psi_{2}\coloneqq\natural_{E_{2}}\big{(}\,\mathrm{e}^{\,B}\,(\hat{\psi}_{1}) \big{)}\.\] It follows that \(\psi_{1}\sim_{R(\,\mathrm{e}^{\,B}\,)}\psi_{2}\). By construction, for every \(\psi_{1}\in\mathsf{\Gamma}_{\mathsf{T}^{k}}(\mathbb{T}\mathcal{Q}_{1})\) there is a unique section \(\psi_{2}\in\mathsf{\Gamma}_{\mathsf{T}^{k}}(\mathbb{T}\mathcal{Q}_{2})\) such that \(\psi_{1}\sim_{R(\,\mathrm{e}^{\,B}\,)}\psi_{2}\). 
Similarly, for every \(\psi_{2}\in\mathsf{\Gamma}_{\mathsf{T}^{k}}(\mathbb{T}\mathcal{Q}_{2})\) there is a unique section \(\psi_{1}\in\mathsf{\Gamma}_{\mathsf{T}^{k}}(\mathbb{T}\mathcal{Q}_{1})\) such that \(\psi_{2}\sim_{R(\,\mathrm{e}\,^{B}\,)^{\top}}\psi_{1}\), where \(R(\,\mathrm{e}\,^{B}\,)^{\top}\) is the transpose Courant algebroid relation. Thus the map \[\underline{\Phi}\colon\mathsf{\Gamma}_{\mathsf{T}^{k}}(\mathbb{T}\mathcal{Q}_{1})\longrightarrow\mathsf{\Gamma}_{\mathsf{T}^{k}}(\mathbb{T}\mathcal{Q}_{2})\] defined by \[\underline{\Phi}(\psi_{1})=\psi_{2}\qquad\text{with}\quad\psi_{1}\sim_{R(\,\mathrm{e}\,^{B}\,)}\psi_{2}\] is well defined, injective and surjective, hence bijective. It follows that \(\underline{\Phi}\) is a \(C^{\infty}(\mathcal{B})\)-module isomorphism, where the \(C^{\infty}(\mathcal{B})\)-module structures on \(\mathsf{\Gamma}_{\mathsf{T}^{k}}(\mathbb{T}\mathcal{Q}_{1})\) and \(\mathsf{\Gamma}_{\mathsf{T}^{k}}(\mathbb{T}\mathcal{Q}_{2})\) are given by pulling back smooth functions on \(\mathcal{B}\) by \(\pi_{1}\) and \(\pi_{2}\), respectively. It is clear by construction that \(\psi_{1}\sim_{R(\,\mathrm{e}\,^{B}\,)}\psi_{2}\) if and only if \(\mathscr{R}(\psi_{1})=\psi_{2}\), hence \(\underline{\Phi}=\mathscr{R}\).

**Remark 6.14**.: In the construction of \(\underline{\Phi}\) in Proposition 6.13 there is no mention of a Fourier-Mukai integral transform as in the analogous construction of \(\mathscr{R}\) in [8]. Thus it may be possible to extend a correspondence space type picture for T-duality to cases of non-compact manifolds, where the Fourier-Mukai transform is not defined, such as Drinfel'd doubles. We defer this to future work.

**Remark 6.15** (**Buscher Rules**).: We pick connections \(\theta_{1}\in\Omega^{1}(\mathcal{Q}_{1},\mathfrak{t}_{1}^{k})\) and \(\theta_{2}\in\Omega^{1}(\mathcal{Q}_{2},\mathfrak{t}_{2}^{k})\) for the principal \(\mathsf{T}^{k}\)-bundles \(\mathcal{Q}_{1}\) and \(\mathcal{Q}_{2}\). Since \(B\) is \(\mathsf{T}^{k}\times\mathsf{T}^{k}\)-invariant, \(B_{\mathrm{hor}}\) is basic with respect to the fibration given by \(\varpi_{1}\) and \(B_{\mathrm{ver}}\) is basic with respect to \(\varpi_{2}\). Thus we may twist \(H_{1}=\varpi_{1}^{*}\,\underline{H}_{1}\) by \(-\mathrm{d}B_{\mathrm{hor}}\) and \(H_{2}=\varpi_{2}^{*}\,\underline{H}_{2}\) by \(-\mathrm{d}B_{\mathrm{ver}}\) to safely ignore these components of \(B\), since the respective Severa classes are unchanged. To fulfil the conditions of Theorem 5.44, we use Remark 6.4 to consider a \(\mathsf{T}^{k}\)-invariant generalised metric \((\underline{g}_{1},\underline{b}_{1})\) on \(\mathbb{T}\mathcal{Q}_{1}\), since \(D_{1}=\ker(\pi_{1*})\). The connection \(\theta_{i}\) gives a splitting \[T\mathcal{Q}_{i}\simeq\mathsf{Hor}(\mathcal{Q}_{i})\oplus\mathsf{Ver}(\mathcal{Q}_{i})\] into horizontal and vertical subbundles, for \(i=1,2\), whose sections we denote by \(\underline{X}^{\mathrm{h}}+\underline{X}^{\mathrm{v}_{i}}\). This also gives a decomposition of \[\bigotimes^{2}T^{*}\mathcal{Q}_{i}=\big(\bigotimes^{2}\mathsf{Hor}^{*}(\mathcal{Q}_{i})\big)\ \oplus\ \big(\bigotimes^{2}\mathsf{Ver}^{*}(\mathcal{Q}_{i})\big)\ \oplus\ \big(\mathsf{Hor}^{*}(\mathcal{Q}_{i})\otimes\mathsf{Ver}^{*}(\mathcal{Q}_{i})\big)\.\] We denote the corresponding components of sections as \(\underline{\alpha}^{\mathrm{h}}+\underline{\alpha}^{\mathrm{v}_{i}}+\underline{\alpha}^{\mathrm{m}_{i}}\).
Thus, denoting \[\underline{h}_{1}=\underline{g}_{1}+\underline{b}_{1}\,\] a section \(\underline{w}_{1}\in\mathsf{\Gamma}_{\mathsf{T}^{k}}(V_{1}^{+})\) can be written as \[\underline{w}_{1}=\underline{X}^{\mathrm{v}_{1}}+\underline{X}^{\mathrm{h}}+\underline{A}_{1}\big(\underline{X}^{\mathrm{v}_{1}}+\underline{X}^{\mathrm{h}}\big)\,\] where \[\underline{A}_{1}=\begin{pmatrix}\underline{h}_{1}^{\mathrm{v}_{1}}&\underline{h}_{1}^{\mathrm{m}_{1}}\\ \tilde{\underline{h}}_{1}^{\mathrm{m}_{1}}&\underline{h}_{1}^{\mathrm{h}}\end{pmatrix}\qquad\text{with}\quad\tilde{\underline{h}}_{1}^{\mathrm{m}_{1}}=(\underline{h}_{1}^{\mathrm{m}_{1}})^{\mathrm{t}}\.\]

The pullbacks of the connections \(\theta_{1}\) and \(\theta_{2}\) give a splitting \[TM\simeq\mathsf{Hor}(M)\oplus\mathsf{Ver}_{1}(M)\oplus\mathsf{Ver}_{2}(M)\,\] and we identify \(\mathsf{\Gamma}_{\mathsf{T}^{k}}(\mathsf{Ver}_{i}(M))\simeq\mathsf{\Gamma}(\mathsf{Ver}(\mathcal{Q}_{i}))\) since they are isomorphic as \(C^{\infty}(\mathcal{Q}_{i})\)-modules.22 Thus \(\underline{w}_{1}\in\mathsf{\Gamma}_{\mathsf{T}^{k}}(V_{1}^{+})\) lifts to \(w_{1}\in\mathsf{\Gamma}_{\mathsf{T}^{k}}(W_{1})\) given by \[w_{1}=X^{\mathrm{v}_{1}}+X^{\mathrm{h}}+X^{\mathrm{v}_{2}}+A_{1}\big(X^{\mathrm{v}_{1}}+X^{\mathrm{h}}\big)\,\] where by using the non-degenerate map \(B\colon\mathfrak{t}_{1}^{k}\to(\mathfrak{t}_{2}^{k})^{*}\) we write \[X^{\mathrm{v}_{2}}=-(B^{\mathrm{t}})^{-1}\big(\big(A_{1}(X^{\mathrm{v}_{1}}+X^{\mathrm{h}})\big)^{\mathrm{v}_{1}}\big)=-(B^{\mathrm{t}})^{-1}\big(h_{1}^{\mathrm{v}_{1}}(X^{\mathrm{v}_{1}})+h_{1}^{\mathrm{m}_{1}}(X^{\mathrm{h}})\big)\, \tag{6.16}\] and \(h_{1}\) is the pullback of \(\underline{h}_{1}\) determining the \(\mathfrak{t}_{1}^{k}\)-transverse generalised metric \(W_{1}\).

Footnote 22: This follows from the fact that there exists a fibrewise bijective map covering \(\varpi_{i}\) between them, see [38].

We can now compute \[w_{2}:=\,\mathrm{e}\,^{B}\left(w_{1}\right) =X^{\mathrm{v}_{1}}+X^{\mathrm{h}}+X^{\mathrm{v}_{2}}+\iota_{X^{\mathrm{v}_{1}}+X^{\mathrm{h}}+X^{\mathrm{v}_{2}}}B+A_{1}\big(X^{\mathrm{v}_{1}}+X^{\mathrm{h}}\big)\] \[=X^{\mathrm{v}_{1}}+X^{\mathrm{h}}+X^{\mathrm{v}_{2}}+B(X^{\mathrm{v}_{1}})^{\mathrm{v}_{2}}+B(X^{\mathrm{v}_{2}})^{\mathrm{v}_{1}}+A_{1}\big(X^{\mathrm{v}_{1}}+X^{\mathrm{h}}\big)\] \[=X^{\mathrm{v}_{1}}+X^{\mathrm{h}}+X^{\mathrm{v}_{2}}+B(X^{\mathrm{v}_{1}})^{\mathrm{v}_{2}}+\big(A_{1}(X^{\mathrm{v}_{1}}+X^{\mathrm{h}})\big)^{\mathrm{h}}\,\] using \(B(X^{\mathrm{h}})=0\), \(B(X^{\mathrm{v_{i}}})^{\mathrm{v_{i}}}=0\), and Equation (6.16) for the last equality.
It follows that \(w_{2}\in\mathsf{\Gamma}_{\mathsf{T}^{k}}(W_{2})\) for some \(\mathfrak{t}_{2}^{k}\)-transverse generalised metric \(W_{2}\), with \[A_{2}=\begin{pmatrix}h_{2}^{\mathrm{v_{2}}}&h_{2}^{\mathrm{m_{2}}}\\ \tilde{h}_{2}^{\mathrm{m_{2}}}&h_{2}^{\mathrm{h}}\end{pmatrix}\.\] This means that \[B(X^{\mathrm{v_{1}}})^{\mathrm{v_{2}}}+\big(A_{1}\,(X^{\mathrm{v_{1}}}+X^{\mathrm{h}})\big)^{\mathrm{h}}=A_{2}(X^{\mathrm{v_{2}}}+X^{\mathrm{h}})\,\] or equivalently \[B(X^{\mathrm{v_{1}}})=h_{2}^{\mathrm{v_{2}}}(X^{\mathrm{v_{2}}})+h_{2}^{\mathrm{m_{2}}}(X^{\mathrm{h}})\,\] and \[\tilde{h}_{1}^{\mathrm{m_{1}}}(X^{\mathrm{v_{1}}})+h_{1}^{\mathrm{h}}(X^{\mathrm{h}})=\tilde{h}_{2}^{\mathrm{m_{2}}}(X^{\mathrm{v_{2}}})+h_{2}^{\mathrm{h}}(X^{\mathrm{h}})\.\] Substituting Equation (6.16), we get \[B(X^{\mathrm{v_{1}}}) =-h_{2}^{\mathrm{v_{2}}}\big((B^{\mathrm{t}})^{-1}(h_{1}^{\mathrm{v_{1}}}(X^{\mathrm{v_{1}}}))\big)\,\] \[0 =-h_{2}^{\mathrm{v_{2}}}\big((B^{\mathrm{t}})^{-1}(h_{1}^{\mathrm{m_{1}}}(X^{\mathrm{h}}))\big)+h_{2}^{\mathrm{m_{2}}}(X^{\mathrm{h}})\,\] \[\tilde{h}_{1}^{\mathrm{m_{1}}}(X^{\mathrm{v_{1}}}) =-\tilde{h}_{2}^{\mathrm{m_{2}}}\big((B^{\mathrm{t}})^{-1}(h_{1}^{\mathrm{v_{1}}}(X^{\mathrm{v_{1}}}))\big)\,\] \[h_{1}^{\mathrm{h}}(X^{\mathrm{h}}) =-\tilde{h}_{2}^{\mathrm{m_{2}}}\big((B^{\mathrm{t}})^{-1}(h_{1}^{\mathrm{m_{1}}}(X^{\mathrm{h}}))\big)+h_{2}^{\mathrm{h}}(X^{\mathrm{h}})\,\] yielding \[h_{2}^{\mathrm{v_{2}}} =-B\,(h_{1}^{\mathrm{v_{1}}})^{-1}\,B^{\mathrm{t}}\,\] \[h_{2}^{\mathrm{m_{2}}} =B\,(h_{1}^{\mathrm{v_{1}}})^{-1}\,h_{1}^{\mathrm{m_{1}}}\,\] \[h_{2}^{\mathrm{h}} =h_{1}^{\mathrm{h}}-\tilde{h}_{1}^{\mathrm{m_{1}}}\,(h_{1}^{\mathrm{v_{1}}})^{-1}\,h_{1}^{\mathrm{m_{1}}}\.\] By splitting \(h_{2}=g_{2}+b_{2}\) into its symmetric and antisymmetric parts, one can now unravel these formulas to read off the structure tensors \(\underline{g}_{2}\in\mathsf{\Gamma}(\odot^{2}T^{*}\mathcal{Q}_{2})\) and \(\underline{b}_{2}\in\mathsf{\Gamma}(\wedge^{2}T^{*}\mathcal{Q}_{2})\) of the corresponding \(\mathsf{T}^{k}\)-invariant generalised metric on \(\mathbb{T}\mathcal{Q}_{2}\). This results in global Buscher rules for the transformation of the metric and Kalb-Ramond field under T-duality.

The case of an \(\mathsf{S}^{1}\)-fibration discussed in [8, Section 4] is a simplified case of the construction of Remark 6.15, which here comes from the defining conditions for the generalised isometry \(R(\,\mathrm{e}\,^{B}\,)\).

**Example 6.17** (**Lens Spaces**).: We apply the construction above to an explicit example. Consider a three-dimensional \(\mathsf{S}^{1}\)-bundle \(\pi_{1}\colon\mathcal{Q}_{1}\to\mathsf{S}^{2}\). Choose a chart with coordinates \((x,y,z)\) such that \(z\) is adapted to the circle fibres and \((x,y)\) are pullback coordinates from the two-sphere \(\mathsf{S}^{2}\). Pick a connection \[\theta_{1}=\mathrm{d}z+m\,x\,\mathrm{d}y\] on \(\mathcal{Q}_{1}\) and an integral volume form \[\underline{H}_{1}=k\,\mathrm{d}x\wedge\mathrm{d}y\wedge\mathrm{d}z\,\] with \(m,k\in\mathbb{Z}\), such that \[\int_{\mathcal{Q}_{1}}\,\underline{H}_{1}=k\qquad\text{and}\qquad\int_{\mathsf{S}^{2}}\,c_{1}(\mathcal{Q}_{1})=m\,\] where \(c_{1}(\mathcal{Q}_{1})=\mathrm{d}\theta_{1}=m\,\mathrm{d}x\wedge\mathrm{d}y\) represents the Chern class of \(\mathcal{Q}_{1}\). On \(\mathcal{Q}_{1}\) we take as metric \[\underline{g}_{1}=\pi_{1}^{*}\,g_{\mathsf{S}^{2}}+\theta_{1}\otimes\theta_{1}\,\] where \(g_{\mathsf{S}^{2}}\) is the standard round metric on \(\mathsf{S}^{2}\).
Then \(\frac{\partial}{\partial z}\), a local expression of the generator of the \(\mathfrak{u}(1)\)-action, is a Killing vector field for \(\underline{g}_{1}\). Consider another three-dimensional \(\mathsf{S}^{1}\)-bundle \(\pi_{2}\colon\mathcal{Q}_{2}\to\mathsf{S}^{2}\) endowed with connection \[\theta_{2}=\mathrm{d}\tilde{z}+n\,x\,\mathrm{d}y\] and Chern number \(n\in\mathbb{Z}\), where \(\tilde{z}\) is the coordinate adapted to the fibres. Let \(M=\mathcal{Q}_{1}\times_{\mathsf{S}^{2}}\mathcal{Q}_{2}\) with projections \(\varpi_{i}\colon M\to\mathcal{Q}_{i}\), for \(i=1,2\), and coordinates23 \((x,y,z,\tilde{z})\) pulled back from \(\mathcal{Q}_{1}\) and \(\mathcal{Q}_{2}\). In order to satisfy the conditions of Theorem 5.44, we consider the two-form \[B=-\varpi_{1}^{*}\theta_{1}\wedge\varpi_{2}^{*}\theta_{2}=x\,\mathrm{d}y\wedge(n\,\mathrm{d}z-m\,\mathrm{d}\tilde{z})-\mathrm{d}z\wedge\mathrm{d}\tilde{z}\] for our \(B\)-field transformation.

Footnote 23: We omit the pullback notation for the coordinates.

Applying the automorphism \(\Phi=\,\mathrm{e}\,^{B}\,\), we then obtain \[H_{2}=\varpi_{1}^{*}\,\underline{H}_{1}-\mathrm{d}B =k\,\mathrm{d}x\wedge\mathrm{d}y\wedge\mathrm{d}z+m\,\mathrm{d}x\wedge\mathrm{d}y\wedge\mathrm{d}\tilde{z}-n\,\mathrm{d}z\wedge\mathrm{d}x\wedge\mathrm{d}y=(k-n)\,\mathrm{d}x\wedge\mathrm{d}y\wedge\mathrm{d}z+m\,\mathrm{d}x\wedge\mathrm{d}y\wedge\mathrm{d}\tilde{z}\.\] It follows that \(H_{2}\) is basic with respect to the fibres of \(\varpi_{2}\) if and only if \(n=k\). Thus \(\mathcal{Q}_{2}\) has three-form \(\underline{H}_{2}\) and Chern form \(c_{1}(\mathcal{Q}_{2})\) satisfying \[\int_{\mathcal{Q}_{2}}\underline{H}_{2}=m\qquad\text{and}\qquad\int_{\mathsf{S}^{2}}c_{1}(\mathcal{Q}_{2})=k\,\] as well as metric obtained from Remark 6.15 as \[\underline{g}_{2}=\pi_{2}^{*}\,g_{\mathsf{S}^{2}}+\theta_{2}\otimes\theta_{2}\,\] which is again \(\mathsf{S}^{1}\)-invariant. Hence the lens space \(\mathsf{L}(m,k)\) is T-dual to the lens space \(\mathsf{L}(k,m)\), realising a topology change between T-dual spaces, see e.g. [17]. In particular, the Hopf fibration \(\mathsf{L}(1,0)\simeq\mathsf{S}^{3}\) without \(H\)-flux is T-dual to the trivial circle bundle \(\mathsf{L}(0,1)\simeq\mathsf{S}^{2}\times\mathsf{S}^{1}\) with \(H\)-flux. Moreover, the Hopf fibration with \(H\)-flux \(\mathsf{L}(1,1)\) is self-T-dual.

### Generalised T-duality for Para-Hermitian Manifolds

We shall now study the conditions under which the relation \(R(\Phi)\) becomes a generalised isometry when the manifolds \(M_{1}\) and \(M_{2}\) admit almost para-Hermitian structures, such that the diffeomorphism \(\varphi\) preserves their split-signature metrics.

#### 6.2.1. Para-Hermitian Manifolds

Let us start by reviewing the main properties of para-Hermitian manifolds, see [46, 47, 48, 30] for more details.

**Definition 6.18**.: An _almost para-complex structure_ on an even-dimensional manifold \(M\) is an automorphism \(\mathscr{K}\in\mathsf{Aut}(TM)\) of the tangent bundle \(TM\) such that \(\mathscr{K}^{2}=\mathds{1}_{TM}\) and whose \(\pm 1\)-eigenbundles \(L_{\pm}=\ker(\mathds{1}_{TM}\mp\mathscr{K})\) have the same rank.
An _almost para-Hermitian structure_ on \(M\) is a pair \((\eta,\mathscr{K})\) of a split-signature metric \(\eta\) and an almost para-complex structure \(\mathscr{K}\in\mathsf{Aut}(TM)\) which is compatible with \(\eta\) in the sense that \[\eta\big(\mathscr{K}(X),\mathscr{K}(Y)\big)=-\eta(X,Y) \tag{6.19}\] for all \(X,Y\in\mathsf{\Gamma}(TM)\). The triple \((M,\eta,\mathscr{K})\) is an _almost para-Hermitian manifold_. An almost para-Hermitian structure \((\eta,\mathscr{K})\) is a _para-Hermitian structure_ if \(L_{\pm}\) are integrable distributions, and in this case \((M,\eta,\mathscr{K})\) is a _para-Hermitian manifold_. We will often omit the adjective 'almost' for brevity, when no confusion can arise.

Equation (6.19) implies that the eigenbundles \(L_{\pm}\) are maximally isotropic with respect to \(\eta\). The splitting \(TM=L_{+}\oplus L_{-}\) defines a splitting \[\bigwedge^{p}T^{*}M=\bigoplus_{m+n=p}\,\bigwedge^{+m,-n}T^{*}M\qquad\text{with}\quad\bigwedge^{+m,-n}T^{*}M\coloneqq\bigwedge^{m}L_{+}^{*}\wedge\bigwedge^{n}L_{-}^{*}\,\] and we write \(\alpha^{+m,-n}\) for the corresponding component of a \(p\)-form \(\alpha\). The pair \((\eta,\mathscr{K})\) moreover defines the _fundamental two-form_ \(\omega\in\Omega^{2}(M)\) by \(\omega(X,Y)=\eta\big(\mathscr{K}(X),Y\big)\) for \(X,Y\in\mathsf{\Gamma}(TM)\), together with the _canonical three-form_ \[H^{\mathsf{can}}=\tfrac{1}{2}\,\mathrm{d}\omega\. \tag{6.20}\]

We require a result describing generalised metrics on the tangent bundle \(TM\) of a para-Hermitian manifold \((M,\eta,\mathscr{K})\), rather than on the usual double tangent bundle \(\mathbb{T}M\).26 To distinguish between these two notions of generalised metrics, we rename the former as

**Definition 6.22**.: Let \((M,\eta,\mathscr{K})\) be an almost para-Hermitian manifold. A _generalised para-Hermitian metric_ is an automorphism \(I\in\mathsf{Aut}(TM)\) covering \(\mathds{1}_{M}\) such that \(I^{2}=\mathds{1}_{TM}\) and which, together with \(\eta\), defines a Riemannian metric \(\mathcal{H}\) via \[\mathcal{H}(X,Y)=\eta\big(I(X),Y\big)\,\] for \(X,Y\in\mathsf{\Gamma}(TM)\).

Footnote 26: The latter is called the 'large Courant algebroid' in [55, 56], where the reductions to T-dual backgrounds are carried out in the language of three-dimensional (membrane) sigma-models.

This is the counterpart of Definition 4.1 for tangent bundles of para-Hermitian manifolds. In this case we also have an equivalent formulation in terms of a subbundle \(\mathscr{V}^{+}\) of \(TM\) which is maximally positive-definite with respect to \(\eta\). This is the \(+1\)-eigenbundle of \(I\) with its orthogonal complement with respect to \(\eta\) being the \(-1\)-eigenbundle. The similarity extends even further, with the analogue of Proposition 4.3 given by the following result from [30].

**Proposition 6.23**.: Let \((M,\eta,\mathscr{K})\) be an almost para-Hermitian manifold, and \(L_{\pm}\subset TM\) the \(\pm 1\)-eigenbundles of \(\mathscr{K}\).
A generalised para-Hermitian metric \(I\in\mathsf{Aut}(TM)\) defines a unique pair \((g_{+},b_{+})\) given by a fibrewise Riemannian metric \(g_{+}\in\mathsf{\Gamma}(\bigodot^{2}L_{+}^{*})\) and a two-form \(b_{+}\in\mathsf{\Gamma}(\bigwedge^{2}L_{+}^{*})\). Conversely, any such pair \((g_{+},b_{+})\) defines a generalised para-Hermitian metric. **Remark 6.24** (**Characterisation of Generalised Para-Hermitian Metrics**).: Denoting by \(\eta^{\flat}\in\mathsf{Hom}(TM,T^{*}M)\) the musical isomorphism induced by \(\eta,\) one can define a metric \(g_{-}\in\mathsf{\Gamma}(\bigodot^{2}L_{-}^{*})\) by \[g_{-}(X_{-},Y_{-})=g_{+}^{-1}\big{(}\eta^{\flat}(X_{-}),\eta^{\flat}(Y_{-}) \big{)}\,\] for \(X_{-},Y_{-}\in\mathsf{\Gamma}(L_{-}).\) The two-form \(b_{+}\) defines a map \(\gamma_{b}\colon L_{+}\to L_{-}\) given by \[b_{+}(X_{+},Y_{+})=\eta\big{(}\gamma_{b}(X_{+}),Y_{+}\big{)}\,\] for \(X_{+},Y_{+}\in\mathsf{\Gamma}(L_{+})\). The resulting Riemannian metric on \(M\) in the splitting \(TM=L_{+}\oplus L_{-}\) defined by \(\mathscr{K}\) is then \[\mathcal{H}=\begin{pmatrix}g_{+}+\gamma_{b}^{*}\,g_{-}\,\gamma_{b}&-\gamma_{b }^{*}\,g_{-}\\ -g_{-}\,\gamma_{b}&g_{-}\end{pmatrix}\, \tag{6.25}\] where \(\gamma_{b}^{*}\colon L_{-}^{*}\to L_{+}^{*}\) is the transpose map. It follows that a generalised para-Hermitian metric is completely determined by the triple \((g_{+},b_{+},\eta).\) Let us stress that the pair \((g_{+},b_{+})\) is unique for the given splitting determined by the almost para-complex structure \(\mathscr{K}.\) A different pair would result in another splitting. Equation (6.25) resembles Equation (4.4), though the background data in the latter are a Riemannian metric \(g\) and a two-form \(b\) on the tangent bundle \(TM\), rather than on the subbundle \(L_{+}\). **Remark 6.26** (**Bundle-like Generalised Para-Hermitian Metrics**).: Assume that \(L_{-}\) is involutive, thereby inducing a foliation \(\mathcal{F}_{-}\) with smooth leaf space \(\mathcal{Q}=M/\mathcal{F}_{-}.\) Then the condition that \(g_{+}\in\mathsf{\Gamma}(\bigodot^{2}L_{+}^{*})\) and \(b_{+}\in\mathsf{\Gamma}(\bigwedge^{2}L_{+}^{*})\) are pullbacks of a background metric and Kalb-Ramond field on \(\mathcal{Q}\) requires them to be basic: \[\pounds_{X_{-}}g_{+}=0\qquad\text{and}\qquad\pounds_{X_{-}}b_{+}=0\,\] for every \(X_{-}\in\mathsf{\Gamma}(L_{-})\). Notice that in order for \(X_{-}\in\mathsf{\Gamma}(L_{-})\) to be a Killing vector field for \(\mathcal{H}\), it must be an infinitesimal symmetry of \(\eta\) as well: \[\pounds_{X_{-}}\eta=0\.\] Conversely, starting with a Riemannian metric \(\underline{g}\in\mathsf{\Gamma}(\bigodot^{2}T^{*}\mathcal{Q})\) and a two-form \(\underline{b}\in\Omega^{2}(\mathcal{Q})\), we can pull these back via the surjective submersion \(\varpi\colon M\to\mathcal{Q}\), so that \(\varpi^{*}\underline{g}\in\mathsf{\Gamma}(\bigodot^{2}L_{+}^{*})\) and \(\varpi^{*}\underline{b}\in\mathsf{\Gamma}(\bigwedge^{2}L_{+}^{*})\), and hence define a generalised para-Hermitian metric \(\mathcal{H}\) via Equation (6.25). The circumstances under which a diffeomorphism represents an isometry between generalised para-Hermitian metrics is provided by **Proposition 6.27**.: Let \((M_{1},\eta_{1},\mathscr{K}_{1})\) and \((M_{2},\eta_{2},\mathscr{K}_{2})\) be almost para-Hermitian manifolds endowed with generalised para-Hermitian metrics \(\mathcal{H}_{1}\) and \(\mathcal{H}_{2}\), respectively. 
A diffeomorphism \(\varphi\in\mathsf{Diff}(M_{1},M_{2})\) is an isometry between \(\mathcal{H}_{1}\) and \(\mathcal{H}_{2}\) if and only if \(\varphi\) is an isometry between \(\eta_{1}\) and \(\eta_{2}\) which intertwines \(I_{1}\in\mathsf{Aut}(TM_{1})\) and \(I_{2}\in\mathsf{Aut}(TM_{2})\): \[\varphi_{*}\circ I_{1}=I_{2}\circ\varphi_{*}. \tag{6.28}\] Proof.: Assume that \(\varphi^{*}\eta_{2}=\eta_{1}\) and Equation (6.28) hold. Then \[\varphi^{*}\mathcal{H}_{2}(X,Y)=\eta_{2}\big{(}\varphi_{*}(X),I_{2}(\varphi_{ *}(Y))\big{)}=\eta_{2}\big{(}\varphi_{*}(X),\varphi_{*}(I_{1}(Y))\big{)}= \mathcal{H}_{1}(X,Y)\] for all \(X,Y\in\mathsf{\Gamma}(TM_{1})\), where we use Equation (6.28) for the second equality and \(\varphi^{*}\eta_{2}=\eta_{1}\) for the third equality. Conversely, assuming \(\varphi^{*}\mathcal{H}_{2}=\mathcal{H}_{1}\), a similar argument establishes Equation (6.28) as well as that \(\varphi\) is an isometry between \(\eta_{1}\) and \(\eta_{2}\). #### 6.2.2. The Reduced Courant Algebroid Let \((M,\eta,\mathscr{K})\) be an almost para-Hermitian manifold such that the \(-1\)-eigenbundle \(L_{-}\) is involutive, and the induced foliation \(\mathcal{F}_{-}\) has smooth leaf space \(\mathcal{Q}=M/\mathcal{F}_{-}\) with smooth surjective submersion \(\varpi\colon M\to\mathcal{Q}\). As discussed in Section 6.2.1, there is a canonical Courant algebroid \((\mathbb{T}M,H)\), where \(H=(H^{\mathsf{can}})^{+3,-0}\) is defined as in Equation (6.20) and assumed to be closed. Since \(H\) is basic with respect to \(\varpi\), the reduction of Theorem 2.27 can be applied to give the Courant algebroid \((\mathbb{T}\mathcal{Q},\underline{H})\) over \(\mathcal{Q}\). The double tangent bundle \(\mathbb{T}\mathcal{Q}=T\mathcal{Q}\oplus T^{*}\mathcal{Q}\) is pointwise isomorphic to \(L_{+}\oplus L_{+}^{*}\). #### 6.2.3. Generalised T-duality We now describe _generalised T-duality_ for para-Hermitian manifolds [30] in the language of Section 5. This is done by first choosing diffeomorphic \(2n\)-dimensional manifolds \(M_{1}\) and \(M_{2}\), and a diffeomorphism \(\varphi\in\mathsf{Diff}(M_{1},M_{2})\). Let \((\eta_{1},\mathscr{K}_{1})\) be an almost para-Hermitian structure on \(M_{1}\) such that the \(-1\)-eigenbundle \(L_{1-}\) is involutive, and the induced foliation \(\mathcal{F}_{1-}\) has smooth leaf space \(\mathcal{Q}_{1}=M_{1}/\mathcal{F}_{1-}\) with smooth surjective submersion \(\varpi_{1}\colon M_{1}\to\mathcal{Q}_{1}\). Let \((M_{2},\eta_{2})\) be a smooth manifold endowed with a split-signature metric \(\eta_{2}\) such that \(\varphi^{*}\eta_{2}=\eta_{1}\). Choosing a T-duality direction in the quotient tangent space \(T\mathcal{Q}_{1}\), an almost para-Hermitian structure \((\eta_{2},\mathscr{K}_{2})\) can be constructed on \(M_{2}\) which, provided its \(-1\)-eigenbundle \(L_{2-}\) is involutive and the induced foliation \(\mathcal{F}_{2-}\) has smooth leaf space, gives the T-dual space \(\mathcal{Q}_{2}=M_{2}/\mathcal{F}_{2-}\) to \(\mathcal{Q}_{1}\). We discuss how this construction is obtained, starting with a local characterisation of the almost para-Hermitian structure \((\eta_{1},\mathscr{K}_{1})\) on \(M_{1}.\) Let \(U_{1}\subset M_{1}\) be an open subset associated with a coordinate chart, and choose a local frame \(\{\,Z_{I}\,\}_{I=1,\dots,2n}=\{\,Z_{i},\tilde{Z}^{i}\,\}_{i=1,\dots,n}\) which diagonalises \(\mathscr{K}_{1},\) i.e. 
such that \(Z_{i}\in\mathsf{\Gamma}_{U_{1}}(L_{1+})\) are \(+1\)-eigenvectors and \(\tilde{Z}^{i}\in\mathsf{\Gamma}_{U_{1}}(L_{1-})\) are \(-1\)-eigenvectors of \(\mathscr{K}_{1}\) at every point in \(U_{1}\). Denote the dual coframe by \(\{\,\Theta^{I}\,\}_{I=1,\dots,2n}=\{\,\Theta^{i},\tilde{\Theta}_{i}\,\}_{i=1,\dots,n}\). In this frame, the tensors \(\mathscr{K}_{1}|_{U_{1}}\), \(\eta_{1}|_{U_{1}}\) and \(\omega_{1}|_{U_{1}}\) can be written as27 \[\mathscr{K}_{1}|_{U_{1}}=\Theta^{i}\otimes Z_{i}-\tilde{\Theta}_{i}\otimes\tilde{Z}^{i}\quad,\quad\eta_{1}|_{U_{1}}=\Theta^{i}\otimes\tilde{\Theta}_{i}+\tilde{\Theta}_{i}\otimes\Theta^{i}\quad,\quad\omega_{1}|_{U_{1}}=\Theta^{i}\wedge\tilde{\Theta}_{i}\.\]

Footnote 27: We use the Einstein convention for summation over repeated upper and lower indices throughout.

In the chart \(U_{1}\), the frame fields \(\{\,Z_{I}\,\}_{I=1,\dots,2n}\) close a Lie algebra \[[Z_{I},Z_{J}]=C_{IJ}{}^{K}\,Z_{K}\.\] In the splitting \(TM_{1}=L_{1+}\oplus L_{1-}\) this becomes the Lie algebra \[\begin{split}&[Z_{i},Z_{j}]=f_{ij}{}^{k}\,Z_{k}+H_{ijk}\,\tilde{Z}^{k}\,\\ &[Z_{i},\tilde{Z}^{j}]=f_{ki}{}^{j}\,\tilde{Z}^{k}+Q_{i}{}^{jk}\,Z_{k}\,\\ &[\tilde{Z}^{i},\tilde{Z}^{j}]=Q_{k}{}^{ij}\,\tilde{Z}^{k}+R^{ijk}\,Z_{k}\.\end{split} \tag{6.29}\] That \(L_{1-}\) is integrable is equivalent to \(R^{ijk}=0\) for all \(i,j,k=1,\dots,n\). Lemma 6.34 below tells us that \(Z_{i}\) are \(\varpi_{1}\)-projectable if and only if \(Q_{i}{}^{jk}=0\).

**Remark 6.30**.: In string theory the local structure functions in Equation (6.29) are called _generalised fluxes_. In particular, \(f_{ij}{}^{k}\) and \(H_{ijk}\) are known as _geometric fluxes_, while \(Q_{i}{}^{jk}\) and \(R^{ijk}\) are _non-geometric fluxes_. Only when \(R^{ijk}\) vanishes is a reduction to a smooth quotient space \(\mathcal{Q}_{1}=M_{1}/\mathcal{F}_{1-}\) possible. The condition that \(Q_{i}{}^{jk}\) vanishes tells us that \(L_{1-}\) is locally abelian. Note that the chart diagonalising \(\mathscr{K}_{1}\) need not be a chart adapted to the foliation \(\mathcal{F}_{1-}\), in which we could write \(\tilde{Z}^{i}=\frac{\partial}{\partial\tilde{x}_{i}}\) where \(\tilde{x}_{i}\) are coordinates adapted to the leaves. Conversely, if the diagonalising chart is adapted to the foliation \(\mathcal{F}_{1-}\), then \(Q_{i}{}^{jk}\) vanish. That is, even though \(L_{1-}\) is always locally abelian (by looking in an adapted chart), we require that the frame diagonalising \(\mathscr{K}_{1}\), in a possibly non-adapted chart, is also locally abelian. Note also that for the procedure to work, we do not require that the local diagonalising frame fields \(Z_{i}\) and dual one-forms \(\Theta^{i}\) are basic sections, since the only requirement for reduction in the split case is that \(H_{1}\) be basic, see Example 2.42. Thus the condition that \(Q_{i}{}^{jk}\) vanish could be relaxed, though in that case the resulting relations \(Q(L_{i-})\) and \(R(\Phi)\) would have a more complex and less insightful description. Hence, for the sake of simplicity, we keep this assumption.

Assuming \(Q_{i}{}^{jk}=0\), the vector fields \(\underline{Z}_{i}\coloneqq\varpi_{1*}(Z_{i})\) give a local frame over the chart \(\underline{U}_{1}\coloneqq\varpi_{1}(U_{1})\subset\mathcal{Q}_{1}\). Fixing \(d\in\{\,1,\dots,n\,\}\), define \(D_{1}|_{\underline{U}_{1}}=\operatorname{Span}\{\,\underline{Z}_{1},\dots,\underline{Z}_{d}\,\}\).
This can be done in a neighbourhood of every point in \(M_{1}\), and so gives a subbundle \(D_{1}\subset T\mathcal{Q}_{1}\) which will become the T-duality directions, as in Definition 5.18. On \(M_{1}\) this gives the local frame for \(U_{1}\) as28 \[\{\,Z_{\mathpzc{u}},Z_{\mu},\tilde{Z}^{\mathpzc{v}},\tilde{Z}^{\nu}\,\}\,\qquad\mathpzc{u},\mathpzc{v}\in\{\,1,\dots,d\,\}\,\ \ \mu,\nu\in\{\,d+1,\dots,n\,\}\.\]

Footnote 28: From now on, upper case Latin indices run from \(1\) to \(2n\), lower case Latin indices run from \(1\) to \(n\), lower case Greek letters run from \(d+1\) to \(n\), and script lower case Latin letters such as \(\mathpzc{u},\mathpzc{v}\) run from \(1\) to \(d\).

To obtain an almost para-Hermitian structure on \(M_{2}\), notice that \((\varphi^{-1})^{*}\eta_{1}\) defines a split-signature metric on \(M_{2}\). In the coordinate chart \(U_{2}\coloneqq\varphi(U_{1})\), define the vectors \[Z^{\prime}_{\mathpzc{u}}\coloneqq\varphi_{*}(\tilde{Z}^{\mathpzc{u}})\quad,\quad\tilde{Z}^{\prime\mathpzc{u}}\coloneqq\varphi_{*}(Z_{\mathpzc{u}})\quad,\quad Z^{\prime}_{\mu}\coloneqq\varphi_{*}(Z_{\mu})\quad,\quad\tilde{Z}^{\prime\nu}\coloneqq\varphi_{*}(\tilde{Z}^{\nu})\, \tag{6.31}\] which give the respective dual one-forms \(\Theta^{\prime\mathpzc{u}}\), \(\tilde{\Theta}^{\prime}_{\mathpzc{u}}\), \(\Theta^{\prime\mu}\) and \(\tilde{\Theta}^{\prime}_{\nu}\) whose pullbacks satisfy similar equations to those of Equation (6.31). The almost para-Hermitian structure on \(U_{2}\) is then given by \[\mathscr{K}_{2}|_{U_{2}}=\Theta^{\prime i}\otimes Z^{\prime}_{i}-\tilde{\Theta}^{\prime}_{i}\otimes\tilde{Z}^{\prime i}\quad,\quad\eta_{2}|_{U_{2}}=\Theta^{\prime i}\otimes\tilde{\Theta}^{\prime}_{i}+\tilde{\Theta}^{\prime}_{i}\otimes\Theta^{\prime i}\quad,\quad\omega_{2}|_{U_{2}}=\Theta^{\prime i}\wedge\tilde{\Theta}^{\prime}_{i}\.\] Since we can make this construction in the neighbourhood of every point, we get an almost para-Hermitian structure \((\eta_{2},\mathscr{K}_{2})\) on \(M_{2}\) for which \(\{\,Z^{\prime}_{I}\,\}_{I=1,\ldots,2n}=\{\,Z^{\prime}_{i},\tilde{Z}^{\prime i}\,\}_{i=1,\ldots,n}\) is a diagonalising frame for \(\mathscr{K}_{2}\) in the coordinate chart \(U_{2}\).

We thus see that \(\varphi^{*}\eta_{2}=\eta_{1}\), while \(\varphi^{*}\mathscr{K}_{2}\neq\mathscr{K}_{1}\). The pullback \(\varphi^{*}\mathscr{K}_{2}\) defines an almost para-complex structure \(\mathscr{K}^{\prime}_{1}\) on \(M_{1}\) compatible with \(\eta_{1}\), and hence gives a new splitting \(TM_{1}=L^{\prime}_{1+}\oplus L^{\prime}_{1-}\). On the coordinate chart \(U_{1}\), define \[L^{++}_{1}|_{U_{1}}=L_{1+}|_{U_{1}}\cap L^{\prime}_{1+}|_{U_{1}}\quad,\quad L^{+-}_{1}|_{U_{1}}=L_{1+}|_{U_{1}}\cap L^{\prime}_{1-}|_{U_{1}}\,\] \[L^{-+}_{1}|_{U_{1}}=L_{1-}|_{U_{1}}\cap L^{\prime}_{1+}|_{U_{1}}\quad,\quad L^{--}_{1}|_{U_{1}}=L_{1-}|_{U_{1}}\cap L^{\prime}_{1-}|_{U_{1}}\.\] Notice that \(L^{++}_{1}|_{U_{1}}=\operatorname{Span}\left\{\,Z_{\mu}\,\right\}_{\mu=d+1,\ldots,n}\), and hence it patches into a subbundle \(L^{++}_{1}\). Similarly \(L^{+-}_{1}\), \(L^{-+}_{1}\) and \(L^{--}_{1}\) are subbundles of \(TM_{1}\). In particular, \(L^{--}_{1}\) has constant rank. Lemma 6.34 below gives the conditions on the local structure functions for the chosen frame such that \(L_{2-}\), the \(-1\)-eigenbundle of \(\mathscr{K}_{2}\), is integrable and hence defines a regular foliation \(\mathcal{F}_{2-}\).
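Before imposing these integrability conditions, the pointwise linear algebra of the frame exchange (6.31) can be checked directly. The sketch below is our illustrative numerics, with a constant frame so that all structure functions are switched off: it builds \(\eta_{1}\), \(\mathscr{K}_{1}\) and \(\mathscr{K}^{\prime}_{1}=\varphi^{*}\mathscr{K}_{2}\) in the basis \((Z_{i},\tilde{Z}^{i})\), confirms the compatibility (6.19), and checks the local expression \(B=\tfrac{1}{2}\,(\omega_{1}-\omega^{\prime}_{1})=\Theta^{\mathpzc{u}}\wedge\tilde{\Theta}_{\mathpzc{u}}\) used below.

```python
# Pointwise sketch of the frame exchange (6.31): n = 3 frame pairs, d = 1
# T-duality direction; basis ordered as (Z_1, Z_2, Z_3, tZ^1, tZ^2, tZ^3).
import numpy as np

n, d = 3, 1
I3, Z3 = np.eye(n), np.zeros((n, n))
eta = np.block([[Z3, I3], [I3, Z3]])        # eta = Theta^i (x) tTheta_i + tTheta_i (x) Theta^i
K1 = np.block([[I3, Z3], [Z3, -I3]])        # K1 with eigenbundles L_{1,+-}

# the exchange (6.31): swap Z_u <-> tZ^u for the first d directions
P = np.eye(2 * n)
for u in range(d):
    P[[u, n + u]] = P[[n + u, u]]
K1p = P @ K1 @ P                             # K1' = phi^* K2 (P is an involution)

for K in (K1, K1p):
    assert np.allclose(K @ K, np.eye(2 * n))     # almost para-complex: K^2 = 1
    assert np.allclose(K.T @ eta @ K, -eta)      # compatibility (6.19)

omega1, omega1p = K1.T @ eta, K1p.T @ eta        # omega(X, Y) = eta(K X, Y)
B = (omega1 - omega1p) / 2
expected = np.zeros((2 * n, 2 * n))
for u in range(d):
    expected[u, n + u], expected[n + u, u] = 1.0, -1.0   # Theta^u ^ tTheta_u
assert np.allclose(B, expected)
print("B = (omega1 - omega1')/2 is supported on the dualised directions")
```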
Assuming the integrability conditions of Lemma 6.34 hold, and assuming the quotient \(\mathcal{Q}_{2}\coloneqq M_{2}/\mathcal{F}_{2-}\) is smooth with smooth surjective submersion \(\varpi_{2}\colon M_{2}\to\mathcal{Q}_{2}\), we obtain the candidate T-dual manifold \(\mathcal{Q}_{2}\). We set \(\underline{C}\coloneqq\left\{\,(\varpi_{1}(m_{1}),\varpi_{2}(\varphi(m_{1})))\,\mid\,m_{1}\in M_{1}\,\right\}\). As discussed in Section 5.1, this will be the support of our T-duality relation. Theorem 5.8 requires that \(\underline{C}\) is smooth, which we assume, and that \(L^{--}_{1}\) has constant rank, which we have already established.

To obtain a Courant algebroid relation \(R(\Phi)\colon\mathbb{T}\mathcal{Q}_{1}\dashrightarrow\mathbb{T}\mathcal{Q}_{2}\) supported on \(\underline{C}\), we construct the Courant algebroid isomorphism \(\Phi\colon\mathbb{T}M_{1}\to\mathbb{T}M_{2}\) covering \(\varphi\). Thus we need an appropriate \(B\)-field. In para-Hermitian geometry, a \(B\)-field transformation from \(L_{1+}\) to \(L_{1-}\) is induced by a two-form \(B_{+}\in\mathsf{\Gamma}(\bigwedge^{2}L^{*}_{1+})\). The fundamental two-form \(\omega_{1}\) and the canonical three-form \(H^{\mathsf{can}}_{1}\) map to \[\omega_{B_{+}}=\omega_{1}-2\,B_{+}\qquad\text{and}\qquad H^{\mathsf{can}}_{B_{+}}=H^{\mathsf{can}}_{1}-\mathrm{d}B_{+}\] by Equation (6.20) and [53, Proposition 5.9], while the split-signature metric \(\eta_{1}\) is preserved. Following this, we use the \(\eta_{1}\)-compatible para-complex structures \(\mathscr{K}_{1}\) and \(\mathscr{K}^{\prime}_{1}\coloneqq\varphi^{*}\mathscr{K}_{2}\) on \(M_{1}\), with respective fundamental two-forms \(\omega_{1}\) and \(\omega^{\prime}_{1}\coloneqq\varphi^{*}\omega_{2}\), to define \[B=\tfrac{1}{2}\,(\omega_{1}-\omega^{\prime}_{1})\,.\] Locally this is given by \[B|_{U_{1}}=\Theta^{\mathpzc{u}}\wedge\tilde{\Theta}_{\mathpzc{u}}\.\]

We now construct the T-duality relation \(R(\Phi)\colon\mathbb{T}\mathcal{Q}_{1}\dashrightarrow\mathbb{T}\mathcal{Q}_{2}\) coming from the reduction of the twisted standard Courant algebroids over \(\eta\)-isometric almost para-Hermitian manifolds through

**Proposition 6.32**.: Suppose that the structure functions for the local Lie algebra (6.29) satisfy \[R^{ijk}=0\,\ Q_{i}{}^{jk}=0\,\ f_{\mathpzc{u}\mathpzc{v}}{}^{i}=0\,\ f_{\mathpzc{u}\mu}{}^{i}=0\,\ H_{\mathpzc{u}\mathpzc{v}\mathpzc{o}}=0\,\ H_{\mathpzc{u}\mathpzc{v}\alpha}=0\,\ H_{\mathpzc{u}\mu\nu}=H_{\mathpzc{u}\nu\mu} \tag{6.33}\] for every \(i,j,k=1,\ldots,n\), \(\mathpzc{u},\mathpzc{v},\mathpzc{o}=1,\ldots,d\) and \(\mu,\nu,\alpha=d+1,\ldots,n\). Suppose moreover that \(H_{1}\) is closed. Then the T-duality relation \[R(\Phi)\colon(\mathbb{T}\mathcal{Q}_{1},\underline{H}_{1})\dashrightarrow(\mathbb{T}\mathcal{Q}_{2},\underline{H}_{2})\] is supported on \(\underline{C}\).

The proof of Proposition 6.32 rests on Lemma 6.34 below, whose verification uses the Maurer-Cartan structure equations \[\mathrm{d}\Theta^{I}=-\tfrac{1}{2}\,C_{JK}{}^{I}\,\Theta^{J}\wedge\Theta^{K}\] for the coframe \(\{\,\Theta^{I}\,\}_{I=1,\dots,2n}\). In the splitting \(TM_{1}=L_{1+}\oplus L_{1-}\) this reads \[\begin{array}{l}\mathrm{d}\Theta^{i}=-\frac{1}{2}\left(f_{jk}{}^{i}\,\Theta^{j}\wedge\Theta^{k}+R^{ijk}\,\tilde{\Theta}_{j}\wedge\tilde{\Theta}_{k}\right)-Q_{k}{}^{ji}\,\Theta^{k}\wedge\tilde{\Theta}_{j}\,\\ \\ \mathrm{d}\tilde{\Theta}_{i}=f_{ij}{}^{k}\,\tilde{\Theta}_{k}\wedge\Theta^{j}-\frac{1}{2}\left(Q_{i}{}^{jk}\,\tilde{\Theta}_{j}\wedge\tilde{\Theta}_{k}+H_{ijk}\,\Theta^{j}\wedge\Theta^{k}\right)\,.\end{array} \tag{6.39}\] The one-form \(\Theta^{i}\) is basic with respect to \(\varpi_{1}\), for \(i=1,\dots,n\), if and only if \(\iota_{\tilde{Z}^{j}}\Theta^{i}=0\) and \(\mathcal{L}_{\tilde{Z}^{j}}\Theta^{i}=0\) for each \(j=1,\dots,n\).
The first condition holds by definition, thus assuming Equations (6.35) and (6.36) we get \[\mathcal{L}_{\tilde{Z}^{j}}\Theta^{i}=\iota_{\tilde{Z}^{j}}\,\mathrm{d}\Theta^{i}+\mathrm{d}\,\iota_{\tilde{Z}^{j}}\Theta^{i}=-\frac{1}{2}\,f_{kl}{}^{i}\,\iota_{\tilde{Z}^{j}}\left(\Theta^{k}\wedge\Theta^{l}\right)=0\.\] To show item 4), recall that \(B=\Theta^{\mathpzc{u}}\wedge\tilde{\Theta}_{\mathpzc{u}}\), and since \(H_{1}=(H^{\mathsf{can}}_{1})^{+3,-0}\) one must find the constraints such that \((\varphi^{-1})^{*}(H_{1}-\mathrm{d}B)\) is basic, i.e. \((\varphi^{-1})^{*}(H_{1}-\mathrm{d}B)\in\mathsf{\Gamma}\big(\bigwedge^{+3,-0}T^{*}M_{2}\big)\). After a lengthy calculation, one arrives at the conditions (6.38).

Proof of Proposition 6.32.: Let \(K_{1}=L_{1-}\oplus\{\,0\,\}\) and \(K_{2}=L_{2-}\oplus\{\,0\,\}\). Since the three-form \(H_{1}=(H^{\mathsf{can}}_{1})^{+3,-0}\) is closed, item a) of Proposition 5.15 is satisfied. Item b) of Proposition 5.15 is satisfied under the assumption of Equation (6.33), and since \(H_{1}\) is closed it follows that \(H_{2}\coloneqq(\varphi^{-1})^{*}(H_{1}-\mathrm{d}B)\) is closed. Hence the Courant algebroids \((\mathbb{T}M_{1},H_{1})\) and \((\mathbb{T}M_{2},H_{2})\) are isomorphic via \(\Phi\). Finally, \(\ker(B|_{T\mathcal{F}_{1-}\cap(\varphi^{-1})_{*}T\mathcal{F}_{2-}})=T\mathcal{F}_{1-}\cap(\varphi^{-1})_{*}T\mathcal{F}_{2-}=L_{1}^{--}\) has constant rank \(n-d\), and hence item c) of Proposition 5.15 is satisfied. Applying Proposition 5.15 we thus obtain a Courant algebroid relation supported on \(\underline{C}=\{\,(q_{1},q_{2})\in\mathcal{Q}_{1}\times\mathcal{Q}_{2}\,\mid\,q_{1}=\varpi_{1}(m_{1})\,\ q_{2}=\varpi_{2}(\varphi(m_{1}))\,\ m_{1}\in M_{1}\,\}\) which, at a point \((q_{1},q_{2})\in\underline{C}\), is given by \[R(\Phi)_{(q_{1},q_{2})}=\mathrm{Span}\,\{\,(\underline{Z}_{\mathpzc{u}},\underline{\Theta}^{\prime\mathpzc{u}})\,,\,(\underline{Z}_{\mu},\underline{Z}^{\prime}_{\mu})\,,\,(\underline{\Theta}^{\mathpzc{v}},\underline{Z}^{\prime}_{\mathpzc{v}})\,,\,(\underline{\Theta}^{\nu},\underline{\Theta}^{\prime\nu})\,\}\ \subset\ \mathbb{T}_{q_{1}}\mathcal{Q}_{1}\times\overline{\mathbb{T}_{q_{2}}\mathcal{Q}_{2}}\,\] as required.

From now on, we assume that the conditions of Proposition 6.32 are satisfied. Having constructed the T-duality relation \(R(\Phi)\), we show that it gives a generalised isometry when appropriate background fields are taken on \(\mathbb{T}\mathcal{Q}_{1}\) and \(\mathbb{T}\mathcal{Q}_{2}\). The first step is to provide conditions under which the \(B\)-field has the required symmetry, which we establish through

**Lemma 6.40**.: \(\mathcal{L}_{X}B=0\) for all \(X\in\mathsf{\Gamma}\big((\varphi^{-1})_{*}T\mathcal{F}_{2-}\big)\) if and only if \(H_{\mathpzc{u}\mathpzc{v}\mu}=H_{\mathpzc{u}\mu\mathpzc{v}}\) for each \(\mathpzc{u},\mathpzc{v}=1,\dots,d\) and \(\mu=d+1,\dots,n\).

Proof.: The proof again follows from the Maurer-Cartan equations (6.39) and Cartan calculus. One checks the conditions in a local coordinate chart \(U_{1}\), wherein \(\mathcal{L}_{X}B=0\) for all \(X\in\mathrm{Span}\,\{\,Z_{\mathpzc{u}},\tilde{Z}^{\mu}\,\}=\mathsf{\Gamma}_{U_{1}}\big((\varphi^{-1})_{*}L_{2-}\big)\).

Assuming the conditions of Lemma 6.40 from now on, we then find

**Corollary 6.41**.: In a coordinate chart \(U_{1}\), the three-form \(H_{1}\) is given by \[H_{1}|_{U_{1}}=\tfrac{1}{2}\,H_{i\mu\nu}\,\Theta^{i}\wedge\Theta^{\mu}\wedge\Theta^{\nu}\. \tag{6.42}\]

Equation (6.42) is the analog of [8, Equation (2.5)].
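For a single dualised direction, the block relations obtained at the end of Remark 6.15 package into the familiar fibre-block inversion of \(E=g+b\), up to the sign conventions entering \(B^{\mathrm{t}}\). The following numerical sketch is our illustration, with a hypothetical random background: it checks that this inversion reproduces the component Buscher rules which Equation (6.45) below generalises.

```python
# Sketch: textbook Buscher rules from fibre-block inversion of E = g + b,
# for one dualised direction z (index 0) and q horizontal directions.
import numpy as np

rng = np.random.default_rng(7)
q = 3
A = rng.normal(size=(1 + q, 1 + q))
g = A @ A.T + (1 + q) * np.eye(1 + q)                    # Riemannian metric
b = rng.normal(size=(1 + q, 1 + q)); b = (b - b.T) / 2   # Kalb-Ramond two-form
E = g + b

# T-duality inverts the fibre block of E (cf. the formulas for h_2 above)
Et = np.empty_like(E)
Et[0, 0]   = 1 / E[0, 0]
Et[0, 1:]  = E[0, 1:] / E[0, 0]
Et[1:, 0]  = -E[1:, 0] / E[0, 0]
Et[1:, 1:] = E[1:, 1:] - np.outer(E[1:, 0], E[0, 1:]) / E[0, 0]

g2, b2 = (Et + Et.T) / 2, (Et - Et.T) / 2

# component form of the Buscher rules: the metric and B-field mix
gz = g[0, 0]
assert np.isclose(g2[0, 0], 1 / gz)
assert np.allclose(g2[0, 1:], b[0, 1:] / gz)             # g and b are exchanged
assert np.allclose(b2[0, 1:], g[0, 1:] / gz)             # along the fibre row
assert np.allclose(g2[1:, 1:],
    g[1:, 1:] - (np.outer(g[0, 1:], g[0, 1:]) - np.outer(b[0, 1:], b[0, 1:])) / gz)
assert np.allclose(b2[1:, 1:],
    b[1:, 1:] - (np.outer(g[0, 1:], b[0, 1:]) - np.outer(b[0, 1:], g[0, 1:])) / gz)
print("Buscher rules recovered componentwise")
```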
Given the Courant algebroid relation \(R(\Phi)\), with \(\mathcal{L}_{X}B=0\) for all \(X\in\mathsf{\Gamma}\big((\varphi^{-1})_{*}T\mathcal{F}_{2-}\big)\), we take a generalised metric \(V_{1}^{+}\) on \(\mathbb{T}\mathcal{Q}_{1}\) specified by \((\underline{g}_{1},\underline{b}_{1})\) such that \(\mathcal{L}_{\underline{X}}\underline{g}_{1}=\mathcal{L}_{\underline{X}}\underline{b}_{1}=0\) for all \(\underline{X}\in\mathsf{\Gamma}(D_{1})\). The two-form \(B\) decomposes as in Equation (5.43) (with \(B_{\mathrm{hor}}=B_{\mathrm{ver}}=0\)), and hence the conditions of Theorem 5.44 are satisfied. Thus there is a unique background \(V_{2}^{+}\) on \(\mathbb{T}\mathcal{Q}_{2}\) given by \((\underline{g}_{2},\underline{b}_{2})\) such that \(R(\Phi)\) is a generalised isometry between \(V_{1}^{+}\) and \(V_{2}^{+}\).

While there is always a way to construct the unique generalised metric \(V_{2}^{+}\), with the extra structure that a para-Hermitian manifold carries there is a cleaner way to find it. Remarks 6.24 and 6.26 show that we get a generalised para-Hermitian metric \(\mathcal{H}_{1}\) on \(TM_{1}\). Similarly, the T-dual background \((\underline{g}_{2},\underline{b}_{2})\) on \(\mathbb{T}\mathcal{Q}_{2}\) gives a generalised para-Hermitian metric \(\mathcal{H}_{2}\) on \(TM_{2}\). Since \(\varphi\) is an isometry between the split-signature metrics \(\eta_{1}\) and \(\eta_{2}\), we can show that it is also an isometry between \(\mathcal{H}_{1}\) and \(\mathcal{H}_{2}\) through

**Proposition 6.43**.: Let \(R(\Phi)\colon(\mathbb{T}\mathcal{Q}_{1},\underline{H}_{1},(\underline{g}_{1},\underline{b}_{1}))\dashrightarrow(\mathbb{T}\mathcal{Q}_{2},\underline{H}_{2},(\underline{g}_{2},\underline{b}_{2}))\) be the generalised isometry coming from Theorem 5.27 applied to the \(\eta\)-isometric almost para-Hermitian manifolds \((M_{1},\eta_{1},\mathscr{K}_{1})\) and \((M_{2},\eta_{2},\mathscr{K}_{2})\). Then \(\varphi^{*}\mathcal{H}_{2}=\mathcal{H}_{1}\).

Proof.: Assume, by absorbing the two-form \(\underline{b}_{1}\) into \(\underline{H}_{1}\) if necessary, that the background on \(\mathbb{T}\mathcal{Q}_{1}\) is given solely by the Riemannian metric \(\underline{g}_{1}\). Using Equation (6.25), the generalised para-Hermitian metric \(\mathcal{H}_{1}\) thus takes the form \[\mathcal{H}_{1}=\begin{pmatrix}g_{1+}&0\\ 0&g_{1-}\end{pmatrix}\. \tag{6.44}\]

The four-fold splitting \[TM_{1}=L_{1}^{+-}\oplus L_{1}^{++}\oplus L_{1}^{--}\oplus L_{1}^{-+}\] gives a decomposition of \(g_{1+}\) into four parts: the diagonal blocks \[g_{1}^{+-}\in\mathsf{\Gamma}\big(\bigodot^{2}(L_{1}^{+-})^{*}\big)\qquad\text{and}\qquad g_{1}^{++}\in\mathsf{\Gamma}\big(\bigodot^{2}(L_{1}^{++})^{*}\big)\,\] together with the mixed blocks \[g_{1}^{\mathrm{m}}\in\mathsf{\Gamma}\big((L_{1}^{+-})^{*}\otimes(L_{1}^{++})^{*}\big)\qquad\text{and}\qquad(g_{1}^{\mathrm{m}})^{\mathrm{t}}\in\mathsf{\Gamma}\big((L_{1}^{++})^{*}\otimes(L_{1}^{+-})^{*}\big)\.\] Locally this decomposition is given explicitly by \[g_{1+}|_{U_{1}}=(g_{1+})_{\mathpzc{u}\mathpzc{v}}\,\Theta^{\mathpzc{u}}\otimes\Theta^{\mathpzc{v}}+(g_{1+})_{\mathpzc{u}\mu}\,\Theta^{\mathpzc{u}}\otimes\Theta^{\mu}+(g_{1+})_{\mu\nu}\,\Theta^{\mu}\otimes\Theta^{\nu}\,\] with \((g_{1+})_{\mathpzc{u}\mathpzc{v}}=(g_{1}^{+-})_{\mathpzc{u}\mathpzc{v}}\), \((g_{1+})_{\mathpzc{u}\mu}=(g_{1}^{\mathrm{m}})_{\mathpzc{u}\mu}\) and \((g_{1+})_{\mu\nu}=(g_{1}^{++})_{\mu\nu}\).
The T-dual generalised metric \(V_{2}^{+}\) is determined by the reduction of the basic tensors \(g_{2+}\in\mathsf{\Gamma}\big(\bigotimes^{2}(L_{2+})^{*}\big)\), with components \(g_{2}^{--}\), \(g_{2}^{-+}\) and \(g_{2}^{++}\), and \(b_{2+}\in\mathsf{\Gamma}\big(\bigwedge^{2}(L_{2+})^{*}\big)\), with components \(b_{2}^{--}\), \(b_{2}^{-+}\) and \(b_{2}^{++}\), where \(b_{2}^{+-}=-\big(b_{2}^{-+}\big)^{\mathrm{t}}\). Since \(B=\Theta^{\mu}\wedge\tilde{\Theta}_{\mu}\), by a calculation similar to that of Remark 6.15 this can be written locally as \[g_{2+}|_{U_{1}}=(g_{2+})_{\mu\nu}\,\Theta^{\prime\mu}\otimes\Theta^{\prime\nu}+(g_{2+})_{i\mu}\,\Theta^{\prime i}\otimes\Theta^{\prime\mu}+(g_{2+})_{ij}\,\Theta^{\prime i}\otimes\Theta^{\prime j}\ ,\] \[b_{2+}|_{U_{1}}=(b_{2+})_{\mu\nu}\,\Theta^{\prime\mu}\wedge\Theta^{\prime\nu}+(b_{2+})_{i\mu}\,\Theta^{\prime i}\wedge\Theta^{\prime\mu}+(b_{2+})_{ij}\,\Theta^{\prime i}\wedge\Theta^{\prime j}\ ,\] where \[(g_{2+})_{\mu\nu}=\varphi_{*}\big((g_{1}^{--})^{-1}\big)_{\mu\nu}\quad,\quad(g_{2+})_{i\mu}=0\ , \tag{6.45}\] and \(\big((g_{1}^{--})^{-1}\big)^{\mu\nu}\) denotes the inverse of the dualised block \(\big(g_{1}^{--}\big)_{\mu\nu}\). Conversely, the generalised para-Hermitian metric (6.44) is given by \[\mathcal{H}_{1}=\begin{pmatrix}g_{1}^{--}&g_{1}^{-+}&0&0\\ g_{1}^{+-}&g_{1}^{++}&0&0\\ 0&0&\bar{g}_{1}^{--}&\bar{g}_{1}^{-+}\\ 0&0&\bar{g}_{1}^{+-}&\bar{g}_{1}^{++}\end{pmatrix}\ , \tag{6.46}\] where the blocks of \(g_{1-}\) are those of the inverse of \(g_{1+}\), \[\bar{g}_{1}^{++}=\big(g_{1}^{++}-g_{1}^{+-}\,\big(g_{1}^{--}\big)^{-1}\,g_{1}^{-+}\big)^{-1}\ ,\] \[\bar{g}_{1}^{-+}=-\big(g_{1}^{--}\big)^{-1}\,g_{1}^{-+}\,\bar{g}_{1}^{++}\ ,\] \[\bar{g}_{1}^{--}=\big(g_{1}^{--}\big)^{-1}+\big(g_{1}^{--}\big)^{-1}\,g_{1}^{-+}\,\bar{g}_{1}^{++}\,g_{1}^{+-}\,\big(g_{1}^{--}\big)^{-1}\ ,\] and \(\bar{g}_{1}^{+-}=\big(\bar{g}_{1}^{-+}\big)^{\mathrm{t}}\). This comes from the formula for the inverse of a \(2\times 2\) block matrix. Applying \((\varphi^{-1})^{*}\) to Equation (6.46) yields \[(\varphi^{-1})^{*}\mathcal{H}_{1}=\begin{pmatrix}(\varphi^{-1})^{*}\bar{g}_{1}^{--}&0&0&(\varphi^{-1})^{*}\bar{g}_{1}^{-+}\\ 0&(\varphi^{-1})^{*}g_{1}^{++}&(\varphi^{-1})^{*}g_{1}^{+-}&0\\ 0&(\varphi^{-1})^{*}g_{1}^{-+}&(\varphi^{-1})^{*}g_{1}^{--}&0\\ (\varphi^{-1})^{*}\bar{g}_{1}^{+-}&0&0&(\varphi^{-1})^{*}\bar{g}_{1}^{++}\end{pmatrix}\ .\] This is in the form of Equation (6.25), and hence we can read off the metric \(g_{2+}\) and two-form \(b_{2+}\) on \(L_{2+}\) as \[g_{2+}=\begin{pmatrix}(\varphi^{-1})^{*}\big(g_{1}^{--}\big)^{-1}&0\\ 0&(\varphi^{-1})^{*}\big(\bar{g}_{1}^{++}\big)^{-1}\end{pmatrix}\ ,\] \[b_{2+}=\begin{pmatrix}0&-(\varphi^{-1})^{*}\big(g_{1}^{--}\big)^{-1}\,g_{1}^{-+}\\ (\varphi^{-1})^{*}\,g_{1}^{+-}\,\big(g_{1}^{--}\big)^{-1}&0\end{pmatrix}\ .\] Looking in a coordinate chart \(\varphi(U_{1})=U_{2}\subset M_{2}\), we find that this agrees with Equation (6.45). Equation (6.45) gives the component form of the Buscher rules for generalised T-duality in para-Hermitian geometry.

### Doubled Nilmanifolds

In this final section we consider the class of string theory compactifications provided by nilmanifolds, which are quotients of nilpotent Lie groups by a discrete cocompact subgroup.
We focus on the particular example of the three-dimensional Heisenberg nilmanifold \(\mathsf{N}(m,k)\), which is a quotient of the Heisenberg group \(\mathsf{H}\) of upper triangular \(3\times 3\) matrices whose diagonal entries are all equal to \(1\); geometrically it defines a circle bundle of degree \(m\in\mathbb{Z}\) over a two-torus \(\mathsf{T}^{2}\) with \(H\)-flux representing the Ševera class labelled by \(k\in\mathbb{Z}\). This is analogous to the lens space \(\mathsf{L}(m,k)\) from Example 6.17, which is a quotient of \(\mathsf{S}^{3}\simeq\mathsf{SU}(2)\) by a cyclic subgroup \(\mathbb{Z}_{m}\) of the isometry group \(\mathsf{SU}(2)\) of the round three-sphere. In particular, it can be similarly treated in the correspondence space picture of Section 6.1, giving a topology changing T-duality between \(\mathsf{N}(m,k)\) and \(\mathsf{N}(k,m)\). Here we consider this T-duality relation instead in the framework of Section 6.2 by compactifying the Drinfel'd double \(T^{*}\mathsf{H}=\mathsf{H}\ltimes\mathbb{R}^{3}\), the cotangent bundle of the Heisenberg group \(\mathsf{H}\), by the action of a discrete cocompact subgroup which defines a doubled nilmanifold [31, 32, 57]. We illustrate this explicitly for the simplest case with \(k=0\), where \(\mathsf{N}(m,0)\) is the Heisenberg nilmanifold \(\mathsf{T}^{\mathsf{H}}\) of degree \(m\) without \(H\)-flux and \(\mathsf{N}(0,m)\) is the three-torus \(\mathsf{T}^{3}\) with Ševera class \(m\). We shall study two foliations of the doubled Heisenberg nilmanifold, and obtain a T-duality between the respective quotients in the sense of Section 6.2.

#### 6.3.1. The Drinfel'd Double of the Heisenberg Group

The doubled Heisenberg nilmanifold is constructed by considering a quotient of the Drinfel'd double \(\mathsf{D}^{\mathsf{H}}\) of the three-dimensional Heisenberg group \(\mathsf{H}\) given by \(\mathsf{D}^{\mathsf{H}}=T^{*}\mathsf{H}=\mathsf{H}\ltimes\mathbb{R}^{3}\), with Lie algebra \(\mathfrak{d}=\mathfrak{h}\ltimes\mathbb{R}^{3}\), by a discrete cocompact subgroup \(\mathsf{D}^{\mathsf{H}}(\mathbb{Z})\). Here the Heisenberg algebra \(\mathfrak{h}=\mathsf{Lie}(\mathsf{H})\) and the abelian Lie algebra \(\mathbb{R}^{3}\) together with \(\mathfrak{d}=\mathfrak{h}\ltimes\mathbb{R}^{3}\) form a Manin triple. Despite the fact that \(\mathsf{H}\) is not semi-simple, we can still give a matrix representation for the Lie algebra of the Drinfel'd double \(T^{*}\mathsf{H}\). This is useful for explicitly writing down the coordinate identifications defining the global structure of the doubled nilmanifold.
In local coordinates \((x,y,z,\tilde{x},\tilde{y},\tilde{z})\in\mathsf{H}\times\mathbb{R}^{3}\), an element \(\gamma\) in \(\mathsf{D}^{\mathsf{H}}\) may be written as \[\gamma=\left(\begin{array}{cccccc}1&m\,x&y&0&0&\tilde{z}\\ 0&1&z&0&0&-\tilde{y}\\ 0&0&1&0&0&0\\ 0&-m\,\tilde{y}&\tilde{x}-m\,z\,\tilde{y}&1&m\,x&y+\frac{1}{2}\,m\,\tilde{y}^{2}\\ 0&0&0&0&1&z\\ 0&0&0&0&0&1\end{array}\right)\ .\] Then the left-invariant one-forms on \(\mathsf{D}^{\mathsf{H}}\) are the Lie algebra components of the corresponding Maurer-Cartan one-form \(\Theta=\gamma^{-1}\,\mathrm{d}\gamma\) given by \[\begin{array}{c}\Theta^{x}=\mathrm{d}x\quad,\quad\Theta^{y}=\mathrm{d}y-m\,x\,\mathrm{d}z\quad,\quad\Theta^{z}=\mathrm{d}z\ ,\\ \\ \tilde{\Theta}_{x}=\mathrm{d}\tilde{x}-m\,z\,\mathrm{d}\tilde{y}\quad,\quad\tilde{\Theta}_{y}=\mathrm{d}\tilde{y}\quad,\quad\tilde{\Theta}_{z}=\mathrm{d}\tilde{z}+m\,x\,\mathrm{d}\tilde{y}\ .\end{array} \tag{6.47}\] The dual left-invariant vector fields are therefore \[\begin{array}{c}Z_{x}=\frac{\partial}{\partial x}\quad,\quad Z_{y}=\frac{\partial}{\partial y}\quad,\quad Z_{z}=\frac{\partial}{\partial z}+m\,x\,\frac{\partial}{\partial y}\ ,\\ \\ \tilde{Z}^{x}=\frac{\partial}{\partial\tilde{x}}\quad,\quad\tilde{Z}^{y}=\frac{\partial}{\partial\tilde{y}}+m\,z\,\frac{\partial}{\partial\tilde{x}}-m\,x\,\frac{\partial}{\partial\tilde{z}}\quad,\quad\tilde{Z}^{z}=\frac{\partial}{\partial\tilde{z}}\ .\end{array} \tag{6.48}\] The nilpotent Lie algebra \(\mathfrak{d}\) of \(T^{*}\mathsf{H}=\mathsf{H}\ltimes\mathbb{R}^{3}\) thus has non-vanishing brackets \[[Z_{x},Z_{z}]=m\,Z_{y}\ ,\quad[Z_{x},\tilde{Z}^{y}]=-m\,\tilde{Z}^{z}\qquad\text{and}\qquad[Z_{z},\tilde{Z}^{y}]=m\,\tilde{Z}^{x}\ ,\] and in particular the only non-vanishing structure function is \(f_{xz}{}^{y}=-f_{zx}{}^{y}=m\). The Lie algebra \(\mathfrak{d}\) is naturally endowed with a para-Hermitian structure which induces a left-invariant para-Hermitian structure \((\eta^{\mathfrak{d}}_{1},\mathscr{K}^{\mathfrak{d}}_{1})\) on \(\mathsf{D}^{\mathsf{H}}\) given by \[\mathscr{K}^{\mathfrak{d}}_{1}=Z_{i}\otimes\Theta^{i}-\tilde{Z}^{i}\otimes\tilde{\Theta}_{i}\quad,\quad\eta^{\mathfrak{d}}_{1}=\Theta^{i}\otimes\tilde{\Theta}_{i}+\tilde{\Theta}_{i}\otimes\Theta^{i}\quad,\quad\omega^{\mathfrak{d}}_{1}=\Theta^{i}\wedge\tilde{\Theta}_{i}\ . \tag{6.49}\] This makes \((\mathsf{D}^{\mathsf{H}},\eta^{\mathfrak{d}}_{1},\mathscr{K}^{\mathfrak{d}}_{1})\) a para-Hermitian manifold. The eigenbundles of \(\mathscr{K}^{\mathfrak{d}}_{1}\) are then the integrable distributions given by \(L^{\mathfrak{d}}_{+}\), spanned by the left-invariant vector fields \(\{\,Z_{x},Z_{y},Z_{z}\,\}\) induced by the Lie subalgebra \(\mathfrak{h}\), whose leaves are all given by the Heisenberg group \(\mathsf{H}\), and \(L^{\mathfrak{d}}_{-}\), spanned by \(\{\,\tilde{Z}^{x},\tilde{Z}^{y},\tilde{Z}^{z}\,\}\) coming from the generators of \(\mathbb{R}^{3}\), whose leaves are just \(\mathbb{R}^{3}\). The global frame \(\{\,Z_{i},\tilde{Z}^{j}\,\}\) for \(i,j\in\{\,x,y,z\,\}\) defines a diagonalising frame for the para-Hermitian structure \((\eta^{\mathfrak{d}}_{1},\mathscr{K}^{\mathfrak{d}}_{1})\).
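As a quick consistency check, the Maurer-Cartan equation \(\mathrm{d}\Theta^{i}=-\frac{1}{2}\,f_{kl}{}^{i}\,\Theta^{k}\wedge\Theta^{l}\) can be verified directly for the one-form \(\Theta^{y}\): \[\mathrm{d}\Theta^{y}=\mathrm{d}\big(\mathrm{d}y-m\,x\,\mathrm{d}z\big)=-m\,\mathrm{d}x\wedge\mathrm{d}z=-m\,\Theta^{x}\wedge\Theta^{z}=-\tfrac{1}{2}\,\big(f_{xz}{}^{y}\,\Theta^{x}\wedge\Theta^{z}+f_{zx}{}^{y}\,\Theta^{z}\wedge\Theta^{x}\big)\ ,\] in agreement with \(f_{xz}{}^{y}=-f_{zx}{}^{y}=m\).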
#### 6.3.2. The Doubled Heisenberg Nilmanifold

To compactify \(\mathsf{D}^{\mathsf{H}}\), we consider the left action by a discrete cocompact subgroup \(\mathsf{D}^{\mathsf{H}}(\mathbb{Z})\) whose generic element \(\xi\) is given by \[\xi=\left(\begin{array}{cccccc}1&m\,\alpha&\beta&0&0&\tilde{\delta}\\ 0&1&\delta&0&0&-\tilde{\beta}\\ 0&0&1&0&0&0\\ 0&-m\,\tilde{\beta}&\tilde{\alpha}-m\,\delta\,\tilde{\beta}&1&m\,\alpha&\beta+\frac{1}{2}\,m\,\tilde{\beta}^{2}\\ 0&0&0&0&1&\delta\\ 0&0&0&0&0&1\end{array}\right)\ ,\] where \(\alpha,\beta,\delta,\tilde{\alpha},\tilde{\beta},\tilde{\delta}\in\mathbb{Z}\). Under the identification \(\gamma\sim\xi\,\gamma\), we get the simultaneous identifications of coordinates as \[\begin{split} x\sim x+\alpha\quad,\quad y\sim y+m\,\alpha\,z+\beta\quad,\quad z\sim z+\delta\ ,\\ \tilde{x}\sim\tilde{x}+m\,\delta\,\tilde{y}+\tilde{\alpha}\quad,\quad\tilde{y}\sim\tilde{y}+\tilde{\beta}\quad,\quad\tilde{z}\sim\tilde{z}-m\,\alpha\,\tilde{y}+\tilde{\delta}\ ,\end{split} \tag{6.50}\] which defines the doubled Heisenberg nilmanifold \(M^{\mathsf{H}}=\mathsf{D}^{\mathsf{H}}(\mathbb{Z})\setminus\mathsf{D}^{\mathsf{H}}\). The left-invariant one-forms (6.47) and left-invariant vector fields (6.48) are invariant under the identifications (6.50), and hence descend globally through the quotient. Thus the distributions \(L^{\mathfrak{d}}_{+}\) and \(L^{\mathfrak{d}}_{-}\) descend respectively to integrable distributions \(L_{1\pm}\subset TM^{\mathsf{H}}\). Their leaves are, respectively, the Heisenberg nilmanifold \(\mathsf{T}^{\mathsf{H}}=\mathsf{H}(\mathbb{Z})\setminus\mathsf{H}\) and the three-torus \(\mathsf{T}^{3}=\mathbb{R}^{3}/\mathbb{Z}^{3}\). The para-Hermitian structure (6.49) is left-invariant, hence invariant under the left discrete action. Thus it descends to a para-Hermitian structure \((\eta_{1},\mathscr{K}_{1})\) on \(M^{\mathsf{H}}\), with integrable \(\pm 1\)-eigenbundles \(L_{1\pm}\) and globally defined diagonalising frame \(\{\,Z_{i},\tilde{Z}^{j}\,\}\) for \(i,j\in\{\,x,y,z\,\}\). Since \(L_{1\pm}\) are involutive, Remark 6.21 tells us that \(H_{1}=0\).
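The claimed invariance can be checked explicitly. For instance, under \(x\mapsto x+\alpha\), \(y\mapsto y+m\,\alpha\,z+\beta\) and \(z\mapsto z+\delta\) one finds \[\Theta^{y}=\mathrm{d}y-m\,x\,\mathrm{d}z\ \longmapsto\ \mathrm{d}y+m\,\alpha\,\mathrm{d}z-m\,(x+\alpha)\,\mathrm{d}z=\mathrm{d}y-m\,x\,\mathrm{d}z\ ,\] and similarly \[\tilde{\Theta}_{x}=\mathrm{d}\tilde{x}-m\,z\,\mathrm{d}\tilde{y}\ \longmapsto\ \mathrm{d}\tilde{x}+m\,\delta\,\mathrm{d}\tilde{y}-m\,(z+\delta)\,\mathrm{d}\tilde{y}=\tilde{\Theta}_{x}\ .\]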
#### 6.3.3. Generalised T-duality

Consider the diffeomorphism of \(M_{1}\coloneqq M^{\mathsf{H}}\) given in coordinates by \[\varphi(x,y,z,\tilde{x},\tilde{y},\tilde{z})=(x,\tilde{y},z,\tilde{x}-m\,z\,\tilde{y},y,\tilde{z})=:(x^{\prime},y^{\prime},z^{\prime},\tilde{x}^{\prime},\tilde{y}^{\prime},\tilde{z}^{\prime})\ ,\] with the coordinate identifications \[\begin{split} x^{\prime}\sim x^{\prime}+\alpha^{\prime}\quad,\quad y^{\prime}\sim y^{\prime}+\tilde{\beta}^{\prime}\quad,\quad z^{\prime}\sim z^{\prime}+\delta^{\prime}\ ,\\ \tilde{x}^{\prime}\sim\tilde{x}^{\prime}+m\,\delta^{\prime}\,\tilde{y}^{\prime}+m\,z^{\prime}\,(m\,\alpha^{\prime}\,z^{\prime}+\beta^{\prime})+\tilde{\alpha}^{\prime}\quad,\quad\tilde{y}^{\prime}\sim\tilde{y}^{\prime}+m\,\alpha^{\prime}\,z^{\prime}+\beta^{\prime}\ ,\\ \tilde{z}^{\prime}\sim\tilde{z}^{\prime}-m\,\alpha^{\prime}\,y^{\prime}+\tilde{\delta}^{\prime}\ ,\end{split} \tag{6.51}\] where \(\alpha^{\prime},\beta^{\prime},\delta^{\prime},\tilde{\alpha}^{\prime},\tilde{\beta}^{\prime},\tilde{\delta}^{\prime}\in\mathbb{Z}\). Since the only non-vanishing structure function is \(f_{xz}{}^{y}\), by Proposition 6.32 it follows that the only T-duality direction we can consider is the \(y\)-direction, that is, \(D_{1}=\mathrm{Span}\,\{\,\underline{Z}_{y}\,\}\).

The global frame fields on \(M_{2}\coloneqq\varphi(M^{\mathsf{H}})\), where the T-duality direction is swapped as in Equation (6.31), are given by \[Z^{\prime}_{x}=\frac{\partial}{\partial x^{\prime}}\quad,\quad Z^{\prime}_{y}=\frac{\partial}{\partial y^{\prime}}-m\,x^{\prime}\,\frac{\partial}{\partial\tilde{z}^{\prime}}\quad,\quad Z^{\prime}_{z}=\frac{\partial}{\partial z^{\prime}}-m\,y^{\prime}\,\frac{\partial}{\partial\tilde{x}^{\prime}}+m\,x^{\prime}\,\frac{\partial}{\partial\tilde{y}^{\prime}}\ ,\] \[\tilde{Z}^{\prime x}=\frac{\partial}{\partial\tilde{x}^{\prime}}\quad,\quad\tilde{Z}^{\prime y}=\frac{\partial}{\partial\tilde{y}^{\prime}}\quad,\quad\tilde{Z}^{\prime z}=\frac{\partial}{\partial\tilde{z}^{\prime}}\ ,\] with the non-vanishing Lie brackets \[[Z^{\prime}_{x},Z^{\prime}_{z}]=m\,\tilde{Z}^{\prime y}\ ,\quad[Z^{\prime}_{x},Z^{\prime}_{y}]=-m\,\tilde{Z}^{\prime z}\qquad\text{and}\qquad[Z^{\prime}_{z},Z^{\prime}_{y}]=m\,\tilde{Z}^{\prime x}\ ,\] and dual one-forms \[\Theta^{\prime x}=\mathrm{d}x^{\prime}\quad,\quad\Theta^{\prime y}=\mathrm{d}y^{\prime}\quad,\quad\Theta^{\prime z}=\mathrm{d}z^{\prime}\ ,\] \[\tilde{\Theta}^{\prime}_{x}=\mathrm{d}\tilde{x}^{\prime}+m\,y^{\prime}\,\mathrm{d}z^{\prime}\quad,\quad\tilde{\Theta}^{\prime}_{y}=\mathrm{d}\tilde{y}^{\prime}-m\,x^{\prime}\,\mathrm{d}z^{\prime}\quad,\quad\tilde{\Theta}^{\prime}_{z}=\mathrm{d}\tilde{z}^{\prime}+m\,x^{\prime}\,\mathrm{d}y^{\prime}\ .\] This is the global diagonalising frame for the almost para-Hermitian structure \((\eta_{2},\mathscr{K}_{2})\) on the doubled nilmanifold \(M^{\mathsf{H}}\), with \(\varphi^{*}\eta_{2}=\eta_{1}\) as well as \[\mathscr{K}_{2}=Z^{\prime}_{i}\otimes\Theta^{\prime i}-\tilde{Z}^{\prime i}\otimes\tilde{\Theta}^{\prime}_{i}\qquad\text{and}\qquad\omega_{2}=\Theta^{\prime i}\wedge\tilde{\Theta}^{\prime}_{i}\ .\] Note that the \(+1\)-eigenbundle \(L_{2+}\) of \(\mathscr{K}_{2}\) is not an integrable distribution.
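This can be seen directly from the brackets above: for instance \[[Z^{\prime}_{x},Z^{\prime}_{z}]=\Big[\frac{\partial}{\partial x^{\prime}}\,,\,\frac{\partial}{\partial z^{\prime}}-m\,y^{\prime}\,\frac{\partial}{\partial\tilde{x}^{\prime}}+m\,x^{\prime}\,\frac{\partial}{\partial\tilde{y}^{\prime}}\Big]=m\,\frac{\partial}{\partial\tilde{y}^{\prime}}=m\,\tilde{Z}^{\prime y}\ ,\] so the bracket of two sections of \(L_{2+}=\mathrm{Span}\,\{\,Z^{\prime}_{x},Z^{\prime}_{y},Z^{\prime}_{z}\,\}\) lands in \(L_{2-}\).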
Taking the quotient by the foliation \(\mathcal{F}_{1-}\) with \(L_{1-}=T\mathcal{F}_{1-}\) leaves the coordinates \((x,y,z)\), which by Equation (6.50) parametrise the Heisenberg nilmanifold \(\mathsf{T}^{\mathsf{H}}\), while the quotient by the foliation \(\mathcal{F}_{2-}\) with \(L_{2-}=T\mathcal{F}_{2-}\) leaves the coordinates \((x^{\prime},y^{\prime},z^{\prime})\), which by Equation (6.51) parametrise the three-torus \(\mathsf{T}^{3}\). Thus \(M^{\mathsf{H}}\) is the doubled space for \(\mathsf{T}^{\mathsf{H}}\) and \(\mathsf{T}^{3}\) obtained in [30, Section 7.3], and hence the respective quotients \(M^{\mathsf{H}}/\mathcal{F}_{1-}=\mathsf{T}^{\mathsf{H}}\) and \(M^{\mathsf{H}}/\mathcal{F}_{2-}=\mathsf{T}^{3}\) of \(M^{\mathsf{H}}\) by the foliations \(\mathcal{F}_{1-}\) and \(\mathcal{F}_{2-}\) integrating the \(-1\)-eigenbundles \(L_{1-}\) of \(\mathscr{K}_{1}\) and \(L_{2-}\) of \(\mathscr{K}_{2}\) are smooth manifolds. Since \(\mathsf{T}^{\mathsf{H}}\) and \(\mathsf{T}^{3}\) can be viewed as circle bundles over \(\mathsf{T}^{2}\), Lemma 5.7 implies that \[\underline{C}\coloneqq\left\{\,(\varpi_{1}(m),\varpi_{2}(\varphi(m)))\ \mid\ m\in M^{\mathsf{H}}\,\right\}=\mathsf{T}^{\mathsf{H}}\times_{\mathsf{T}^{2}}\mathsf{T}^{3}\] is smooth. We are now ready to apply the results of Section 6.2.

Since \(H_{1}=0\) is trivially closed, we may apply Proposition 6.32 to obtain the T-duality relation \(R(\Phi)\colon(\mathbb{T}\mathsf{T}^{\mathsf{H}},0)\dashrightarrow(\mathbb{T}\mathsf{T}^{3},\underline{H}_{\mathsf{T}^{3}})\), where \(\Phi=\overline{\varphi}\circ\mathrm{e}^{\,B}\) and \[\underline{H}_{\mathsf{T}^{3}}=m\,\mathrm{d}x^{\prime}\wedge\mathrm{d}y^{\prime}\wedge\mathrm{d}z^{\prime}\ .\] Since Lemma 6.40 says that the two-form \[B=\Theta^{y}\wedge\tilde{\Theta}_{y}=(\mathrm{d}y-m\,x\,\mathrm{d}z)\wedge\mathrm{d}\tilde{y}\] has the appropriate invariance,29 the relation \[R(\Phi)=\mathrm{Span}\left\{\,(Z_{x},Z^{\prime}_{x})\,,\,(Z_{y},\Theta^{\prime y})\,,\,(Z_{z},Z^{\prime}_{z})\,,\,(\Theta^{x},\Theta^{\prime x})\,,\,(\Theta^{y},Z^{\prime}_{y})\,,\,(\Theta^{z},\Theta^{\prime z})\,\right\}\] is a generalised isometry in the following way.

Footnote 29: Note that the \(H\)-flux \(H_{2}=(\varphi^{-1})^{*}\mathrm{d}B=\varpi_{2}^{*}\,\underline{H}_{\mathsf{T}^{3}}\) is exact on \(M^{\mathsf{H}}\). However, this \(B\)-field is not basic and so does not descend to a two-form on \(\mathsf{T}^{3}\). On the other hand, \(H_{2}\) is basic and descends to a cohomologically non-trivial three-form \(\underline{H}_{\mathsf{T}^{3}}\) on \(\mathsf{T}^{3}\). That \(B\) is not basic is the origin of ‘topology change’ in T-duality.

The standard left-invariant Riemannian metric \(\underline{g}_{\mathsf{T}^{\mathsf{H}}}\) on the Heisenberg nilmanifold \(\mathsf{T}^{\mathsf{H}}\) is given by \[\underline{g}_{\mathsf{T}^{\mathsf{H}}}=\delta_{ij}\,\underline{\Theta}^{i}\otimes\underline{\Theta}^{j}=\mathrm{d}x\otimes\mathrm{d}x+(\mathrm{d}y-m\,x\,\mathrm{d}z)\otimes(\mathrm{d}y-m\,x\,\mathrm{d}z)+\mathrm{d}z\otimes\mathrm{d}z\ .\] This pulls back to a Riemannian metric \(g_{1+}=\varpi_{1}^{*}\,\underline{g}_{\mathsf{T}^{\mathsf{H}}}=\delta_{ij}\,\Theta^{i}\otimes\Theta^{j}\) and hence defines a generalised para-Hermitian metric \(\mathcal{H}_{1}=g_{1+}+g_{1-}\) on \(M^{\mathsf{H}}\) given by \[\mathcal{H}_{1}=\delta_{ij}\,\Theta^{i}\otimes\Theta^{j}+\delta^{ij}\,\tilde{\Theta}_{i}\otimes\tilde{\Theta}_{j}\ .\] The pullback by \(\varphi^{-1}\) gives \(\mathcal{H}_{2}=\delta_{ij}\,\Theta^{\prime i}\otimes\Theta^{\prime j}+\delta^{ij}\,\tilde{\Theta}^{\prime}_{i}\otimes\tilde{\Theta}^{\prime}_{j}\), and hence we get a basic tensor \(g_{2+}=\delta_{ij}\,\Theta^{\prime i}\otimes\Theta^{\prime j}\) which is the pullback by \(\varpi_{2}\) of the standard flat Riemannian metric \[\underline{g}_{\mathsf{T}^{3}}=\delta_{ij}\,\underline{\Theta}^{\prime i}\otimes\underline{\Theta}^{\prime j}=\mathrm{d}x^{\prime}\otimes\mathrm{d}x^{\prime}+\mathrm{d}y^{\prime}\otimes\mathrm{d}y^{\prime}+\mathrm{d}z^{\prime}\otimes\mathrm{d}z^{\prime}\] on the three-torus \(\mathsf{T}^{3}\).
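As a consistency check of footnote 29, a direct computation gives \[\mathrm{d}B=\mathrm{d}\big((\mathrm{d}y-m\,x\,\mathrm{d}z)\wedge\mathrm{d}\tilde{y}\big)=-m\,\mathrm{d}x\wedge\mathrm{d}z\wedge\mathrm{d}\tilde{y}=m\,\mathrm{d}x\wedge\mathrm{d}\tilde{y}\wedge\mathrm{d}z\ ,\] so that, using \(x=x^{\prime}\), \(\tilde{y}=y^{\prime}\) and \(z=z^{\prime}\), \[(\varphi^{-1})^{*}\,\mathrm{d}B=m\,\mathrm{d}x^{\prime}\wedge\mathrm{d}y^{\prime}\wedge\mathrm{d}z^{\prime}=\varpi_{2}^{*}\,\underline{H}_{\mathsf{T}^{3}}\ ,\] in agreement with the expression for \(\underline{H}_{\mathsf{T}^{3}}\) above.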
Thus the Heisenberg nilmanifold \(\mathsf{T}^{\mathsf{H}}\) with zero \(H\)-flux and background metric \(\underline{g}_{\mathsf{T}^{\mathsf{H}}}\) is T-dual to the three-torus \(\mathsf{T}^{3}\) with \(H\)-flux \(\underline{H}_{\mathsf{T}^{3}}\) and background metric \(\underline{g}_{\mathsf{T}^{3}}\) via the generalised isometry \[R(\Phi)\colon\big(\mathbb{T}\mathsf{T}^{\mathsf{H}}\,,\,0\,,\,(\underline{g}_{\mathsf{T}^{\mathsf{H}}},0)\big)\dashrightarrow\big(\mathbb{T}\mathsf{T}^{3}\,,\,\underline{H}_{\mathsf{T}^{3}}\,,\,(\underline{g}_{\mathsf{T}^{3}},0)\big)\ .\]

## Appendix A Change of Splitting for Compositions

The composition of transverse generalised isometries \(R_{1}\colon(E_{1},W_{1})\dashrightarrow(E_{2},W_{2})\) and \(R_{2}\colon(E_{2},W_{2})\dashrightarrow(E_{3},W_{3})\) may fail due to the fact that the lifts of \(W_{2}^{\pm}\) for the decomposition (4.26) may differ for \(R_{1}\) and \(R_{2}\). To see how this is resolved, let us first investigate the case of relations \(R_{i}\) which are graphs of classical Courant algebroid morphisms given by isomorphisms of exact Courant algebroids. Suppose that \(\Phi_{1}\colon E_{1}\to E_{2}\) and \(\Phi_{2}\colon E_{2}\to E_{3}\) are Courant algebroid isomorphisms, where \(E_{i}\) are exact Courant algebroids over \(M_{i}\). Let \(K_{i}\) be isotropic involutive subbundles of \(E_{i}\), let \(W_{i}\) be pre-\(K_{i}\)-transverse generalised metrics, and assume that \(\Phi_{i}\) is a regular transverse generalised isometry between \(W_{i}\) and \(W_{i+1}\). Then there exist the bundles \(\widetilde{W}_{1}^{\pm}\), \(\widetilde{W}_{2}^{\pm}\), \(\widetilde{W}_{2}^{\prime\pm}\) and \(\widetilde{W}_{3}^{\prime\pm}\) such that \[\Phi_{1}\big(\widetilde{W}_{1}^{\pm}\big)=\widetilde{W}_{2}^{\pm}\qquad\text{and}\qquad\Phi_{2}\big(\widetilde{W}_{2}^{\prime\pm}\big)=\widetilde{W}_{3}^{\prime\pm}\ .\] We can define the subbundles \(\widetilde{W}_{1}^{\prime\pm}\coloneqq\Phi_{1}^{-1}\big(\widetilde{W}_{2}^{\prime\pm}\big)\). If \(\widetilde{W}_{1}^{\prime\pm}\) are also lifts of \(W_{1}^{\pm}\) to \(W_{1}\), then \[\Phi_{2}\circ\Phi_{1}\big(\widetilde{W}_{1}^{\prime\pm}\big)=\widetilde{W}_{3}^{\prime\pm}\ ,\] and \(\Phi_{2}\circ\Phi_{1}\) is a regular transverse generalised isometry between \(W_{1}\) and \(W_{3}\). If \(s_{2}^{\pm}\colon W_{2}^{\pm}\to W_{2}\) and \(s_{2}^{\prime\pm}\colon W_{2}^{\pm}\to W_{2}\) are the splittings with images \(\widetilde{W}_{2}^{\pm}\) and \(\widetilde{W}_{2}^{\prime\pm}\) respectively, the requirement that \(\widetilde{W}_{1}^{\prime\pm}\) are also lifts of \(W_{1}^{\pm}\) to \(W_{1}\) is that the differences \(s_{2}^{\pm}-s_{2}^{\prime\pm}\) map into \(\Phi_{1}(K_{1})\). Similarly one could define \(\widetilde{W}_{3}^{\pm}\coloneqq\Phi_{2}\big(\widetilde{W}_{2}^{\pm}\big)\) which, provided that \(s_{2}^{\pm}-s_{2}^{\prime\pm}\) map into \(\Phi_{2}^{-1}(K_{3})\), define new splittings such that \(\Phi_{2}\circ\Phi_{1}\big(\widetilde{W}_{1}^{\pm}\big)=\widetilde{W}_{3}^{\pm}\). When \(R_{i}\) are general Courant algebroid relations, the condition on the splittings \(s_{2}^{\pm}\) and \(s_{2}^{\prime\pm}\) may be written as \[\operatorname{pr}_{1}\bigl(R_{1}\cap(E_{1}\times\operatorname{im}(s_{2}^{\pm}-s_{2}^{\prime\pm}))\bigr)\subset K_{1}\qquad\text{or}\qquad\operatorname{pr}_{2}\bigl(R_{2}\cap(\operatorname{im}(s_{2}^{\pm}-s_{2}^{\prime\pm})\times E_{3})\bigr)\subset K_{3}\ ,\] which should be viewed as pointwise inclusions.
These are similar to the conditions given in Theorem 5.27, though here the splittings \(s_{2}^{\pm}\) and \(s_{2}^{\prime\pm}\) are given _a priori_, whereas in the proof of Theorem 5.27 we had to construct these splittings; that is, a portion of that proof consists of showing that \(\Phi\) is a transverse generalised isometry. The pointwise nature of the present condition is also in contrast with the construction in Section 5.2, which provides a smooth splitting and hence shows that \(\Phi\) is a regular transverse generalised isometry.
2303.17422
Robust Multi-Agent Pickup and Delivery with Delays
Multi-Agent Pickup and Delivery (MAPD) is the problem of computing collision-free paths for a group of agents such that they can safely reach delivery locations from pickup ones. These locations are provided at runtime, making MAPD a combination between classical Multi-Agent Path Finding (MAPF) and online task assignment. Current algorithms for MAPD do not consider many of the practical issues encountered in real applications: real agents often do not follow the planned paths perfectly, and may be subject to delays and failures. In this paper, we study the problem of MAPD with delays, and we present two solution approaches that provide robustness guarantees by planning paths that limit the effects of imperfect execution. In particular, we introduce two algorithms, k-TP and p-TP, both based on a decentralized algorithm typically used to solve MAPD, Token Passing (TP), which offer deterministic and probabilistic guarantees, respectively. Experimentally, we compare our algorithms against a version of TP enriched with online replanning. k-TP and p-TP provide robust solutions, significantly reducing the number of replans caused by delays, with little or no increase in solution cost and running time.
Giacomo Lodigiani, Nicola Basilico, Francesco Amigoni
2023-03-30T14:42:41Z
http://arxiv.org/abs/2303.17422v1
# Robust Multi-Agent Pickup and Delivery with Delays

###### Abstract.

Multi-Agent Pickup and Delivery (MAPD) is the problem of computing collision-free paths for a group of agents such that they can safely reach delivery locations from pickup ones. These locations are provided at runtime, making MAPD a combination between classical Multi-Agent Path Finding (MAPF) and online task assignment. Current algorithms for MAPD do not consider many of the practical issues encountered in real applications: real agents often do not follow the planned paths perfectly, and may be subject to delays and failures. In this paper, we study the problem of MAPD with _delays_, and we present two solution approaches that provide robustness guarantees by planning paths that limit the effects of imperfect execution. In particular, we introduce two algorithms, \(k\)-TP and \(p\)-TP, both based on a decentralized algorithm typically used to solve MAPD, Token Passing (TP), which offer deterministic and probabilistic guarantees, respectively. Experimentally, we compare our algorithms against a version of TP enriched with online replanning. \(k\)-TP and \(p\)-TP provide robust solutions, significantly reducing the number of replans caused by delays, with little or no increase in solution cost and running time.

## 1. Introduction

In Multi-Agent Pickup and Delivery (MAPD) (Beck et al., 2013), a set of agents must jointly plan collision-free paths to serve pickup-delivery tasks that are submitted at runtime. MAPD combines a task-assignment problem, where agents must be assigned to pickup-delivery pairs of locations, with Multi-Agent Path Finding (MAPF) (Lodigiani et al., 2014), where collision-free paths for completing the assigned tasks must be computed. A particularly challenging feature of MAPD problems is that they are meant to be cast into dynamic environments for long operational times. In such settings, tasks can be submitted at any time in an online fashion. Despite being studied only recently, MAPD is of great relevance for a number of real-world application domains. Automated warehouses, where robots continuously fulfill new orders, arguably represent the most significant industrial deployments (Marcus et al., 2015). Beyond logistics, MAPD applications also include the coordination of teams of service robots (Marcus et al., 2016) or fleets of autonomous cars, and the automated control of non-player characters in video games (Marcus et al., 2017). Recently, the MAPF community has focused on resolution approaches that can deal with real-world-induced relaxations of some idealistic assumptions usually made when defining the problem. A typical example is represented by the assumption that planned paths are executed without errors. In reality, the execution of paths might be affected by delays and other issues that can hinder some of their expected properties (e.g., the absence of collisions). One approach is to add online adaptation to offline planning, in order to cope with situations where the path execution incurs errors (Marcus et al., 2015). Despite being reasonable, this approach is not always desirable in real robotic applications. Indeed, replanning can be costly in those situations where additional activities in the environment are conditioned on the plans the agents initially committed to. In other situations, replanning may not even be possible: consider, as an example, a centralized setting where robots are no longer connected to the base station while they follow their computed paths.
This background motivated the study of _robustness_ (Beck et al., 2013; Beck et al., 2013; Beck et al., 2013), generally understood as the capacity, guaranteed at planning time, of agents' paths to withstand unexpected runtime events. In our work, we focus on robustness in the long-term setting of MAPD, where it has not yet been consistently studied. Specifically, in this paper, we study the robustness of MAPD to the occurrence of _delays_. To do so, we introduce a variant of the problem that we call _MAPD with delays_ (_MAPD-d_ for short). In this variant, like in standard MAPD, agents must be assigned to tasks (pickup-delivery location pairs), which may continuously appear at any time step, and collision-free paths to accomplish those tasks must be planned. However, during path execution, delays can occur at arbitrary times, causing one or more agents to halt at some time steps, thus slowing down the execution of their planned paths. We devise a set of algorithms to compute robust solutions for MAPD-d. The first one is a baseline built from a decentralized MAPD algorithm, Token Passing (TP), to which we added a mechanism that replans in case collisions caused by delays are detected when following planned paths. TP is able to solve well-formed MAPD problem instances (Marcus et al., 2016), and we show that, under some assumptions, the introduction of delays in MAPD-d does not affect well-formedness. We then propose two new algorithms, \(k\)-TP and \(p\)-TP, which adopt the approach of robust planning, computing paths that limit the risk of collisions caused by potential delays. \(k\)-TP returns solutions with deterministic guarantees about robustness in the face of delays (\(k\)-robustness), while solutions returned by \(p\)-TP have probabilistic robustness guarantees (\(p\)-robustness). We compare the proposed algorithms by running experiments in simulated environments and we evaluate the trade-offs offered by different levels and types of robustness. In summary, the main contributions of this paper are: the introduction of the MAPD-d problem and the study of some of its properties (Section 3), the definition of two algorithms (\(k\)-TP and \(p\)-TP) for solving MAPD-d problems with robustness guarantees (Section 4), and their experimental evaluation that provides insights about how robustness and solution cost can be balanced (Section 5).

## 2. Preliminaries and Related Work

In this section, we discuss the relevant literature related to our work and we introduce the formal concepts we will build upon in the following sections. A basic MAPF problem assigns a start-goal pair of vertices on a graph \(G=(V,E)\) to each agent from a set \(A=\{a_{1},a_{2},\ldots,a_{\ell}\}\) and is solved by a minimum-cost discrete-time set of paths allowing each agent to reach its goal without collisions (Gillet and Barabasi, 2015). In this work, we shall define agent \(a_{i}\)'s _path_ as \(\pi_{i}=\langle\pi_{i,t},\pi_{i,t+1},\ldots,\pi_{i,t+n}\rangle\), namely a finite sequence of vertices \(\pi_{i,t}\in V\) starting at some time \(t\) and ending at \(t+n\). Following \(\pi_{i}\), the agent must either move to an adjacent vertex (\((\pi_{i,t},\pi_{i,t+1})\in E\)) or not move (\(\pi_{i,t+1}=\pi_{i,t}\)). MAPD extends the above one-shot setting to a time-extended setting by introducing tasks \(\tau_{j}\in\mathcal{T}\), each specifying a pickup and a delivery vertex denoted as \(s_{j}\) and \(g_{j}\), respectively.
A task has to be assigned to an agent that must execute it following a collision-free path from its initial location to \(s_{j}\) and then from \(s_{j}\) to \(g_{j}\). A peculiar characteristic of this problem is that the set \(\mathcal{T}\) is filled at runtime: a task can be added to the system at any (finite) time and from the moment it is added it becomes assignable to any agent. An agent is _free_ when it is currently not executing any task and _occupied_ when it is assigned to a task. If an agent is free, it can be assigned to any task \(\tau_{j}\in\mathcal{T}\), with the constraint that a task can be assigned to only one agent. When this happens, the task is removed from \(\mathcal{T}\) and, when the agent completes its task, eventually arriving at \(g_{j}\), it returns free. A _plan_ is a set of paths, which are required to be _collision-free_, namely any two agents cannot be in the same vertex or traverse the same edge at the same time. Each action (movement to an adjacent vertex or wait) lasts one time step. Solving MAPD means finding a minimum-cost plan to complete all the tasks in \(\mathcal{T}\). Cost usually takes one of two possible definitions. The _service time_ is the average number of time steps needed to complete each task \(\tau_{j}\), measured as the time elapsed from \(\tau_{j}\)'s arrival to the time an agent reaches \(g_{j}\). The _makespan_, instead, is the earliest time step at which all the tasks are completed. Since MAPD is a generalization of MAPF, it is NP-hard to solve optimally under either of the previous cost functions (Gillet and Barabasi, 2015; Barabasi, 2015). Recent research has focused on how to compute solutions of the above problems which are robust to delays, namely to runtime events blocking agents at their current vertices for one or more time steps, thus slowing down path execution. The MAPF literature provides two notions of robustness, which we will exploit in this paper. The first one is that of \(k\)-robustness (Ball and Barabasi, 2015; Barabasi, 2015). A plan is \(k\)-robust iff it is collision-free and remains so when at most \(k\) delays for each agent occur. To create \(k\)-robust plans, an algorithm should ensure that, when an agent leaves a vertex, that vertex is not occupied by another agent for at least \(k\) time steps. In this way, even if the first agent delays \(k\) times, no collision can occur. The second one is called \(p\)-robustness (Ball and Barabasi, 2015). Assume that a fixed probability \(p_{d}\) of any agent being delayed at any time step is given and that delays are independent of each other. Then, a plan is \(p\)-robust iff the probability that it will be executed without a collision is at least \(p\). Differently from \(k\)-robustness, this notion provides a probabilistic guarantee. Robustness for MAPD problems has been less studied. One notion proposed in (Gillet and Barabasi, 2015) and called _long-term robustness_ is actually a _feasibility_ property that guarantees that a finite number of tasks will be completed in a finite time. The authors show that a sufficient condition for long-term robustness is to ensure that a MAPD instance is _well-formed_. This amounts to requiring that (i) the number of tasks is finite; (ii) there are as many endpoints as agents, where endpoints are vertices designated as rest locations at which agents might not interfere with any other moving agent; (iii) for any two endpoints, there exists a path between them that traverses no other endpoints.
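The \(k\)-robustness notion can be made operational directly. The following is a minimal sketch in Python (the language of our implementation) of a \(k\)-robustness check for a plan; the `plan` dictionary, the vertex names, and the convention that agents rest at the last vertex of their paths are illustrative assumptions, not part of any algorithm in the paper.

```python
from itertools import combinations

def is_k_robust(plan, k):
    """Check k-robustness of a plan (a dict mapping each agent to its path,
    given as a list of vertices, all paths starting at the same time step):
    after an agent leaves a vertex, no other agent may occupy that vertex
    within a window of k time steps."""
    for (a1, p1), (a2, p2) in combinations(plan.items(), 2):
        horizon = max(len(p1), len(p2))
        # Agents rest at the last vertex of their paths once they arrive.
        q1 = p1 + [p1[-1]] * (horizon - len(p1))
        q2 = p2 + [p2[-1]] * (horizon - len(p2))
        for t in range(horizon):
            # Vertex conflicts, widened by a window of k time steps.
            for s in range(max(0, t - k), min(horizon, t + k + 1)):
                if q1[t] == q2[s]:
                    return False
            # Swap (edge) conflicts, relevant when k = 0.
            if t + 1 < horizon and q1[t] == q2[t + 1] and q1[t + 1] == q2[t]:
                return False
    return True

# This plan is collision-free (0-robust) but not 1-robust: a2 enters v2
# one time step after a1 has left it.
plan = {'a1': ['v1', 'v2', 'v3'], 'a2': ['v4', 'v5', 'v2']}
print(is_k_robust(plan, 0), is_k_robust(plan, 1))   # True False
```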
In this work, we leverage the above concepts to extend \(k\)- and \(p\)-robustness to long-term MAPD settings. To do so, we will focus on a current state-of-the-art algorithm for MAPD, Token Passing (TP) (Gillet and Barabasi, 2015). This algorithm follows an online and decentralized approach that, with respect to the centralized counterparts, trades off optimality to achieve an affordable computational cost in real-time long-term settings. We report it in Algorithm 1. The _token_ is a shared block of memory containing the current agents' paths \(\pi_{i}\)s, the current task set \(\mathcal{T}\), and the current assignment of tasks to the agents. The token is initialized with paths in which each agent \(a_{i}\) rests at its initial location \(loc(a_{i})\) (line 1). At each time step, new tasks might be added to \(\mathcal{T}\) (line 3). When an agent has reached the end of its path in the token, it becomes free and requests the token (at most once per time step). The token is sent in turn to each requesting agent (line 5) and the agent with the token assigns itself (line 9) to the task \(\tau\) in \(\mathcal{T}\) whose pickup vertex is closest to its current location (line 8), provided that no other path already planned (and stored in the token) ends at the pickup or delivery vertex of such task (line 6). The distance between the current location \(loc(a_{i})\) of agent \(a_{i}\) and the pickup location \(s_{j}\) of a task is calculated using a (possibly approximated) function \(h\) (for the grid environments of our experiments we use the Manhattan distance). The agent then computes a collision-free path from its current position to the pickup vertex, then from there to the delivery vertex, and finally rests at the delivery vertex (line 11). Finally, the agent releases the token (line 17) and every agent moves one step along its path (line 19). If \(a_{i}\) cannot find a feasible path it stays where it is (line 13) or it calls the function _Idle_ to compute a path to an endpoint in order to ensure long-term robustness (line 15).
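The task-selection step of TP just described can be sketched as follows; the data structures (`task_set` as a list of pickup-delivery pairs of grid cells, `token_paths` as a dictionary of planned paths) are illustrative assumptions, not the actual code of Algorithm 1.

```python
def manhattan(u, v):
    # h(u, v): Manhattan distance between grid cells u = (row, col) and v.
    return abs(u[0] - v[0]) + abs(u[1] - v[1])

def choose_task(agent_loc, task_set, token_paths):
    """Among the tasks whose pickup and delivery vertices are not the
    endpoint of any path already stored in the token, pick the one whose
    pickup vertex is closest to the agent according to the heuristic h;
    return None if no task is assignable."""
    path_ends = {tuple(path[-1]) for path in token_paths.values() if path}
    candidates = [(s, g) for (s, g) in task_set
                  if tuple(s) not in path_ends and tuple(g) not in path_ends]
    if not candidates:
        return None  # the agent stays put or moves to an endpoint (Idle)
    return min(candidates, key=lambda task: manhattan(agent_loc, task[0]))

# Example: the token holds a path ending at (0, 3), which blocks the task
# whose delivery vertex is (0, 3).
tasks = [((0, 2), (0, 3)), ((2, 2), (4, 4))]
token = {'a2': [(1, 1), (0, 1), (0, 2), (0, 3)]}
print(choose_task((0, 0), tasks, token))   # ((2, 2), (4, 4))
```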
Note that other dynamic and online settings, different from ours, have been considered for MAPF and MAPD. For example, (Mampel and Rafter, 2002) introduces a setting in which the set of agents is not fixed, but agents can enter and leave the system, (Brock et al., 2009) proposes an insightful comparison of online algorithms that can be applied to the aforementioned setting, and (Kraus et al., 2010) studies a related problem where the actions have uncertain costs.

## 3. MAPD with Delays

Delays are typical problems in real applications of MAPF and MAPD and may have multiple causes. For example, robots can slow down due to some errors occurring in the sensors used for localization and coordination (Brock et al., 2009). Moreover, real robots are subject to physical constraints, like minimum turning radius, maximum velocity, and maximum acceleration, and, although algorithms exist to convert time-discrete MAPD plans into plans executable by real robots (Brock et al., 2009), small differences between models and actual agents may still cause delays. Another source of delays is represented by anomalies happening during path execution and caused, for example, by partial or temporary failures of some agent (Brock et al., 2009).

We define the problem of _MAPD with delays_ (_MAPD-d_) as a MAPD problem (see Section 2) where the execution of the computed paths \(\pi_{i}\) can be affected, at any time step \(t\), by delays represented by a time-varying set \(\mathcal{D}(t)\subseteq A\). Given a time step \(t\), \(\mathcal{D}(t)\) specifies the subset of agents that will delay the execution of their paths, lingering at their currently occupied vertices at time step \(t\). An agent can be delayed for several consecutive time steps, but not indefinitely, in order to preserve well-formedness (see the next section). The temporal realization of \(\mathcal{D}(t)\) is unknown when planning paths, so a MAPD-d instance is formulated as a MAPD one: no other information is available at planning time. The difference lies in how the solution is built: in MAPD-d we compute solutions accounting for robustness to delays that might happen at runtime. More formally, delays affect each agent's execution trace. Agent \(a_{i}\)'s _execution trace_ \(e_{i}=\langle e_{i,0},e_{i,1},...,e_{i,m}\rangle\)1 for a given path \(\pi_{i}=\langle\pi_{i,0},\pi_{i,1},...,\pi_{i,n}\rangle\) corresponds to the actual sequence of vertices traversed by \(a_{i}\) while following \(\pi_{i}\), accounting for possible delays (with \(m\geq n\)). Let us call \(idx(e_{i,t})\) the index of \(e_{i,t}\) (the vertex occupied by \(a_{i}\) at time step \(t\)) in \(\pi_{i}\). Given that \(e_{i,0}=\pi_{i,0}\), the execution trace is defined, for \(t>0\), as: Footnote 1: For simplicity and w.l.o.g., we consider a path and a corresponding execution trace starting from time step \(0\). \[e_{i,t}=\begin{cases}e_{i,t-1}&\text{if }a_{i}\in\mathcal{D}(t)\\ \pi_{i,h}\ \text{ with }h=idx(e_{i,t-1})+1&\text{otherwise}\end{cases}\] An execution trace terminates when \(e_{i,m}=\pi_{i,n}\) for some \(m\). Notice that, if no delays are present (that is, \(\mathcal{D}(t)=\{\}\) for all \(t\)) then the execution trace \(e_{i}\) exactly mirrors the path \(\pi_{i}\) and, in case this is guaranteed in advance, the MAPD-d problem becomes _de facto_ a regular MAPD problem. In general, such a guarantee is not given, and solving a MAPD-d problem opens the issue of computing collision-free task-fulfilling MAPD paths (optimizing service time or makespan) characterized by some level of robustness to delays. The MAPD-d problem reduces to the MAPD problem as a special case, so the MAPD-d problem is NP-hard.
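The definition of an execution trace translates directly into a simulation routine. The following sketch is our own encoding: `delays` maps each time step \(t\) to the set \(\mathcal{D}(t)\) of delayed agents, and the loop terminates because, by assumption, no agent is delayed indefinitely.

```python
def execution_trace(agent, path, delays):
    """Execution trace e_i of a path pi_i: at each time step the agent
    either lingers at its current vertex (if it belongs to D(t)) or
    advances to the next vertex of its path; the trace terminates when
    the last vertex of the path is reached."""
    trace = [path[0]]
    idx = 0                 # idx(e_{i,t}): position along the planned path
    t = 0
    while idx < len(path) - 1:
        t += 1
        if agent in delays.get(t, set()):
            trace.append(trace[-1])     # delayed: stay at the current vertex
        else:
            idx += 1
            trace.append(path[idx])     # advance along the planned path
    return trace

# Example: pi_i = <v1, v2, v3> with agent 'a1' delayed at t = 1 and t = 2
# gives the trace <v1, v1, v1, v2, v3>.
print(execution_trace('a1', ['v1', 'v2', 'v3'], {1: {'a1'}, 2: {'a1'}}))
```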
### Well-formedness of MAPD-d

In principle, if a problem instance is well-formed, delays will not affect its feasibility (this property is also called long-term robustness, namely the guarantee that a finite number of tasks will be completed in a finite time, see Section 2). Indeed, well-formedness is given by specific topological properties of the environment, and delays, by their definition, are not such a type of feature. There is, however, an exception to this argument, corresponding to a case where a delay does cause a modification of the environment, eventually resulting in the loss of well-formedness and, in turn, of feasibility. This is the case where an agent is delayed indefinitely and cannot move anymore (namely, when the agent is in \(\mathcal{D}(t)\) for all \(t\geq T\) for a given time step \(T\)). In such a situation, the agent becomes a new obstacle, potentially blocking a path critical for preserving the well-formedness. The assumption that an agent cannot be delayed indefinitely, made in the previous section, ensures the well-formedness of MAPD-d instances. More precisely, a MAPD-d instance is well-formed when, in addition to requirements (i)-(iii) from Section 2, it also satisfies: (iv) no agent can be in \(\mathcal{D}(t)\) forever (i.e., for all \(t\geq T\) for a given \(T\)). In a real context, condition (iv) amounts to removing or repairing the blocked agents. For instance, if an agent experiences a permanent failure, it will be removed (in this case its incomplete task returns to the task set, and at least one agent must survive in the system) or repaired after a finite number of time steps. This guarantees that the well-formedness of a problem instance is preserved (or, more precisely, that it is restored after a finite time).

### A MAPD-d baseline: TP with replanning

Algorithms able to solve well-formed MAPD problems, like TP, are in principle able to solve well-formed MAPD-d problems as well. The only issue is that these algorithms would return paths that do not consider possible delays occurring during execution. Delays may cause paths to collide during execution, even though they did not collide at planning time. (Note that, according to our assumptions, when an agent is delayed at time step \(t\), there is no way to know for how long it will be delayed.) In order to have a baseline to compare against the algorithms we propose in the next section, we introduce an adaptation of TP allowing it to work also in the presence of delays. Specifically, we add to TP a replanning mechanism that works as follows: when a collision is detected between agents following their paths, the token is assigned to one of the colliding agents to allow replanning of a new collision-free path. This is a modification of the original TP mechanism, where the token can be assigned only to free agents that have reached the end of their paths (see Algorithm 1). To do this, we require the token to also include the current execution traces of the agents.

```
Algorithm 2: TP with replanning
```

Algorithm 2 maintains in the token the set \(\mathcal{R}\) of non-delayed colliding agents that will try to plan new collision-free paths (line 7). The _PathPlanner_ function considers a set of constraints to avoid conflicts with the current paths of other agents in the token. A problem may arise when multiple delays occur at the same time; in particular situations, two or more agents may prevent each other from following the only paths available to complete their tasks. In this case, the algorithm recognizes the situation and implements a deadlock recovery behavior. In particular, although under our assumptions agents cannot be delayed forever, we plan short collision-free random walks for the involved agents in order to speed up the deadlock resolution (line 11). An example of execution of TP with replanning is depicted in Figure 1.

Figure 1. An example of TP with replanning. The figure shows a grid environment with two agents and two tasks at different time steps. Initially (top), the agents plan their paths without collisions. At time steps \(6\) and \(7\) (middle) \(a_{2}\) is delayed and at time step \(7\) a collision is detected in the token. Then, \(a_{1}\) regains the token and replans (bottom).
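The collision-detection step that triggers replanning might be sketched as follows; the representation of the remaining planned paths and current positions is an illustrative assumption and not the actual listing of Algorithm 2.

```python
def replanning_candidates(plan, pos, delayed):
    """Given each agent's remaining planned path (plan), its current
    vertex (pos) and the set of currently delayed agents, detect imminent
    vertex/swap conflicts and return the non-delayed agents involved,
    which will request the token and replan (the set R in the text)."""
    def next_vertex(a):
        # A delayed agent lingers; otherwise it moves to its next planned vertex.
        if a in delayed or not plan[a]:
            return pos[a]
        return plan[a][0]

    candidates = set()
    agents = list(plan)
    for i, a in enumerate(agents):
        for b in agents[i + 1:]:
            vertex_conflict = next_vertex(a) == next_vertex(b)
            swap_conflict = next_vertex(a) == pos[b] and next_vertex(b) == pos[a]
            if vertex_conflict or swap_conflict:
                candidates |= {x for x in (a, b) if x not in delayed}
    return candidates

# a1 plans to enter v2 while a2 is delayed there: a1 must replan.
print(replanning_candidates({'a1': ['v2'], 'a2': ['v3']},
                            {'a1': 'v1', 'a2': 'v2'}, {'a2'}))   # {'a1'}
```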
## 4. Algorithms for MAPD with Delays

In this section we present two algorithms, \(k\)-TP and \(p\)-TP, able to plan paths that solve MAPD-d problem instances with some guaranteed degree of robustness in the face of delays. In particular, \(k\)-TP provides a deterministic degree of robustness, while \(p\)-TP provides a probabilistic degree of robustness. For developing these two algorithms, we took inspiration from the corresponding concepts of \(k\)- and \(p\)-robustness for MAPF that we outlined in Section 2.

### \(k\)-TP Algorithm

A \(k\)_-robust_ solution for MAPD-d is a plan which is guaranteed to avoid collisions due to at most \(k\) consecutive delays for each agent, considering not only the paths already planned but also those planned in the future. (This is one of the main differences between our approach and \(k\)-robustness for MAPF.) As discussed in Section 3, TP with replanning (Algorithm 2) can only react to the occurrence of delays once they have been detected. The \(k\)-TP algorithm we propose, instead, plans in advance considering that delays may occur, in an attempt to avoid replanning at runtime. The algorithm is defined as an extension of TP with replanning, so it is able to solve all well-formed MAPD-d problem instances. A core difference is an additional set of constraints enforced during path planning. The formal steps are reported in Algorithm 3. A new path \(\pi_{i}\), before being added to the token, is used to generate the constraints (the \(k\)-extension of the path, also added to the token, lines 17 and 23) representing that, at any time step \(t\), any vertex in \[\{\pi_{i,t-k},\ldots,\pi_{i,t-1},\pi_{i,t},\pi_{i,t+1},\ldots,\pi_{i,t+k}\}\] should be considered as an obstacle (at time step \(t\)) by agents planning later. In this way, even if agent \(a_{i}\) or an agent \(a_{j}\) planning later is delayed up to \(k\) times, no collision will occur. For example, if \(\pi_{i}=\langle v_{1},v_{2},v_{3}\rangle\), the 1-extension constraints will forbid any other agent from being in \(\{v_{1},v_{2}\}\) at the first time step, in \(\{v_{1},v_{2},v_{3}\}\) at the second time step, in \(\{v_{2},v_{3}\}\) at the third time step, and in \(\{v_{3}\}\) at the fourth time step. The path of an agent added to the token ends at the delivery vertex of the assigned task, so the space requested in the token to store the path and the corresponding \(k\)-extension constraints is finite, for finite \(k\). Note that, especially for large values of \(k\), it may happen that a sufficiently robust path for an agent \(a_{i}\) cannot be found at some time step; in this case, \(a_{i}\) simply returns the token and tries to replan at the next time step. The idea is that, as other agents advance along their paths, the setting becomes less constrained and a path can be found more easily. Clearly, since delays that affect the execution are not known beforehand, replanning is still necessary in those cases where an agent gets delayed for more than \(k\) consecutive time steps.
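The \(k\)-extension constraints can be generated in a few lines; the following sketch is ours and only illustrates the construction, reproducing the worked example above (time steps are indexed from 0, so step 0 is the "first time step" of the text).

```python
def k_extension_constraints(path, k):
    """k-extension of a path: maps each time step t to the set of vertices
    that agents planning later must treat as obstacles at t, namely
    path[t - k], ..., path[t + k] clipped to the path's extent."""
    constraints = {}
    horizon = len(path) + k     # the final vertex stays blocked k extra steps
    for t in range(horizon):
        lo = max(0, t - k)
        hi = min(len(path), t + k + 1)
        constraints[t] = set(path[lo:hi])
    return constraints

# pi_i = <v1, v2, v3> with k = 1: forbids {v1, v2} at step 0,
# {v1, v2, v3} at step 1, {v2, v3} at step 2 and {v3} at step 3.
print(k_extension_constraints(['v1', 'v2', 'v3'], 1))
```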
### \(p\)-TP Algorithm

The idea of \(k\)-robustness considers a fixed value \(k\) for the guarantee, which could be hard to set: if \(k\) is too low, plans may not be robust enough and the number of (possibly costly) replans could be high, while if \(k\) is too high, it will increase the total cost of the solution with no extra benefit (see Section 5 for numerical data supporting these claims). An alternative approach is to resort to the concept of \(p\)-robustness. A \(p\)_-robust_ plan guarantees to keep the collision probability below a certain threshold \(p\) (\(0\leq p\leq 1\)). In a MAPD setting, where tasks are not known in advance, a plan could quickly reach the threshold with just a few paths planned, so that no other path could be added to it until the current paths have been executed. Our solution to avoid this problem is to impose that only the collision probability of _individual_ paths should remain below the threshold \(p\), not that of the whole plan. As discussed in (Garf et al., 2017), this might also be a method to ensure a notion of fairness among agents. We thus need a way to calculate the collision probability for a given path. We adopt a model based on Markov chains (Brock et al., 2010). Assuming that the probability that any agent is delayed at any time step is fixed and equal to \(p_{d}\), we model agent \(a_{i}\)'s execution trace \(e_{i}\) (corresponding to a path \(\pi_{i}\)) with a Markov chain, where the transition matrix \(P\) is such that with probability \(p_{d}\) the agent remains at the current vertex and with probability \(1-p_{d}\) advances along \(\pi_{i}\). We also assume that transitions along chains of different agents are independent. (This simplification avoids modeling the propagation of delays from one agent to other agents, which could be problematic for the model (Garf et al., 2017), while still providing a useful proxy for robustness.) This model is leveraged by our \(p\)-TP algorithm, reported as Algorithm 4. The approach is again an extension of TP with replanning, so also in this case we are able to solve any well-formed MAPD-d instance. Here, one difference with the basic algorithms is that before inserting a new path \(\pi_{i}\) in the token, the Markov chain model is used to calculate the collision probability \(\mathit{cprob}_{\pi_{i}}\) between \(\pi_{i}\) and the paths already in the token (lines 18 and 30). Specifically, the probability distribution for the vertex occupied by an agent \(a_{i}\) at the beginning of a path \(\pi_{i}=\langle\pi_{i,t},\pi_{i,t+1},\ldots,\pi_{i,t+n}\rangle\) is given by a (row) vector \(s_{0}\), with one entry for each vertex the agent can occupy, that has every element set to \(0\) except the one corresponding to the starting vertex \(\pi_{i,t}\), which is \(1\). The probability distribution for the location of the agent at time step \(t+j\) is given by \(s_{0}P^{j}\) (where \(P\) is the transition matrix defined above). For example, in a situation with 3 agents and 4 vertices \((v_{1},v_{2},v_{3},v_{4})\), the probability distributions at a given time step \(t\) for the locations of agents \(a_{1}\), \(a_{2}\), and \(a_{3}\) could be \(\langle 0.6,0.2,0.1,0.1\rangle\), \(\langle 0.3,0.2,0.2,0.3\rangle\), and \(\langle 0.5,0.1,0.3,0.1\rangle\), respectively. Then, for any vertex traversed by the path \(\pi_{i}\), we calculate its collision probability as \(1\) minus the probability that all the other agents are not at that vertex at that time step, multiplied by the probability that the agent is actually at that vertex at the given time step. Following the above example, the collision probability in \(v_{1}\) for agent \(a_{1}\) at \(t\) (i.e., the probability that at least one of the other agents is at \(v_{1}\) at \(t\)) is calculated as \([1-(1-0.3)\cdot(1-0.5)]\cdot 0.6=0.39\). The collision probabilities of all the vertices along the path are summed to obtain the collision probability \(\mathit{cprob}_{\pi_{i}}\) for the path \(\pi_{i}\).
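This computation can be sketched as follows; here, as a simplifying assumption on the encoding, the Markov chain is defined over the positions along a path (with the last position absorbing), and the second function reproduces the worked example above.

```python
import numpy as np

def delay_chain(n_positions, p_d):
    """Transition matrix of the delay Markov chain over the positions of a
    path: with probability p_d the agent lingers at its current position,
    with probability 1 - p_d it advances; the final position is absorbing."""
    P = np.zeros((n_positions, n_positions))
    for i in range(n_positions - 1):
        P[i, i] = p_d
        P[i, i + 1] = 1 - p_d
    P[-1, -1] = 1.0
    return P

def position_distribution(P, j):
    # s_0 P^j: the distribution over positions after j time steps,
    # starting from the first position of the path.
    s0 = np.zeros(P.shape[0])
    s0[0] = 1.0
    return s0 @ np.linalg.matrix_power(P, j)

def vertex_collision_prob(p_self, p_others):
    """Collision probability at one vertex and time step: the probability
    that the agent is there, times the probability that at least one of
    the other agents is there as well."""
    p_no_other = 1.0
    for q in p_others:
        p_no_other *= 1.0 - q
    return (1.0 - p_no_other) * p_self

# Reproduces the worked example: [1 - (1 - 0.3)(1 - 0.5)] * 0.6 = 0.39.
print(round(vertex_collision_prob(0.6, [0.3, 0.5]), 2))
```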
If this probability is above the threshold \(p\) (lines 19 and 31), the path is rejected and a new one is calculated. If a sufficiently robust path is not found after a fixed number of rejections _itermax_, the token is returned to the system and the agent will try to replan at the next time step (as other agents advance along their paths, the chances of collisions could decrease). Also for \(p\)-TP, since the delays are not known beforehand, replanning is still necessary. Moreover, we need to set the value of \(p_{d}\), on which the probabilistic guarantee is built, according to the specific application setting. We deal with this in the next section.

## 5. Experimental Results

### Setting

Our experiments are conducted on a 3.2 GHz Intel Core i7 8700H laptop with 16 GB of RAM. We tested our algorithms in two warehouse 4-connected grid environments where the effects of delays can be significant: a small one, \(15\times 13\) units, with 4 and 8 agents, and a large one, \(25\times 17\), with 12 and 24 agents (Figure 2). (Environments of similar size have been used in (Garf et al., 2017).) At the beginning, the agents are located at the endpoints. We create a sequence of 50 tasks choosing the pickup and delivery vertices uniformly at random among a set of predefined vertices. The arrival time of each task is determined according to a Poisson distribution (Garf et al., 2017). We test 3 different arrival frequencies \(\lambda\) for the tasks: 0.5, 1, and 3 (since, as discussed later, the impact of \(\lambda\) on robustness is not relevant, we do not show results for all values of \(\lambda\)). During each run, 10 delays per agent are randomly inserted and the simulation ends when all the tasks have been completed. We evaluate \(k\)-TP and \(p\)-TP against the baseline TP with replanning (to the best of our knowledge, no other algorithm for finding robust solutions to MAPD-d is available). For \(p\)-TP we use two different values for the parameter \(p_{d}\), \(0.02\) and \(0.1\), modeling a low and a higher probability of delay, respectively. (Note that this is the expected delay probability used to calculate the robustness of a path and may not match the delays actually observed.) For planning the paths of individual agents (_PathPlanner_ in the algorithms), we use an A* path planner with the Manhattan distance as heuristic. Solutions are evaluated according to the makespan (i.e., the earliest time step at which all tasks are completed, see Section 2). (Results for the service time are qualitatively similar and are not reported here.) We also consider the number of replans performed during execution and the total time required by each simulation (including time for both planning and execution). The reported results are averages over 100 randomly restarted runs.
All algorithms are implemented in Python and the code is publicly available at an online repository2.

Footnote 2: Link hidden to keep anonymity.

### Results

Results for the small warehouse are shown in Tables 1 and 2 and those for the large warehouse are shown in Tables 3 and 4. For the sake of readability, we do not report standard deviations in the tables; their values do not present any evident anomaly and support the conclusions about the trends reported below. The baseline algorithm, TP with replanning, appears twice in each table: as \(k\)-TP with \(k=0\) (that is, the basic implementation as in Algorithm 2) and as \(p\)-TP with \(p_{d}=0.1\) and \(p=1\) (which accepts all paths). The two versions of the baseline return the same results in terms of makespan and number of replans (we use the same random seed initialization for runs with different algorithms), but the total runtime is larger in the case of \(p\)-TP, due to the overhead of calculating the Markov chains and the collision probability for each path.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline & & \multicolumn{3}{c|}{\(\ell=4\)} & \multicolumn{3}{c|}{\(\ell=8\)} \\ \cline{3-8} & \(k\) or \(p\) & makespan & \# replans & runtime [s] & makespan & \# replans & runtime [s] \\ \hline \multirow{5}{*}{\(k\)-TP} & \(0\) & **364.88** & 7.26 & **0.85** & **234.59** & 16.04 & **2.11** \\ \cline{2-8} & \(1\) & 374.48 & 1.4 & 0.91 & 240.69 & 3.85 & 2.27 \\ \cline{2-8} & \(2\) & 390.82 & 0.1 & 1.16 & 241.14 & 0.73 & 2.15 \\ \cline{2-8} & \(3\) & 411.09 & 0.01 & 1.59 & 259.38 & 0.09 & 3.12 \\ \cline{2-8} & \(4\) & 436.12 & **0.0** & 2.0 & 278.33 & **0.04** & 4.49 \\ \hline \multirow{5}{*}{\(p\)-TP, \(p_{d}=0.1\)} & \(1\) & **364.88** & 7.26 & 1.14 & **234.59** & 16.04 & 2.63 \\ \cline{2-8} & \(0.5\) & 369.5 & 6.29 & 1.81 & 237.27 & 12.59 & 5.0 \\ \cline{2-8} & \(0.25\) & 395.07 & 4.29 & 2.88 & 255.21 & 5.63 & 6.11 \\ \cline{2-8} & \(0.1\) & 409.17 & 2.9 & 3.16 & 268.99 & 3.23 & 6.32 \\ \cline{2-8} & \(0.05\) & 428.64 & 2.93 & 3.42 & 279.26 & 2.76 & 6.48 \\ \hline \multirow{4}{*}{\(p\)-TP, \(p_{d}=0.02\)} & \(0.5\) & 366.72 & 7.34 & 1.29 & 238.83 & 12.81 & 3.87 \\ \cline{2-8} & \(0.25\) & 378.42 & 6.8 & 1.57 & 236.21 & 10.21 & 4.38 \\ \cline{2-8} & \(0.1\) & 391.63 & 4.53 & 2.37 & 250.39 & 6.73 & 5.57 \\ \cline{2-8} & \(0.05\) & 405.53 & 3.51 & 2.66 & 256.24 & 4.25 & 5.34 \\ \hline \end{tabular} \end{table} Table 1. Small warehouse, \(\lambda=0.5\), and \(10\) delays per agent

Figure 2. Large warehouse with 24 agents, obstacles (black), pickup (colored squares) and delivery (triangles) vertices, and endpoints (green circles).

Looking at robustness, which is the goal of our algorithms, we can see that, in all settings, both \(k\)-TP and \(p\)-TP significantly reduce the number of replans with respect to the baseline. For \(k\)-TP, increasing \(k\) leads to increasingly more robust solutions with fewer replans, and the same happens for \(p\)-TP when the threshold probability \(p\) is reduced. However, increasing \(k\) shows a more evident effect on the number of replans than reducing \(p\). More robust solutions, as expected, tend to have a larger makespan, but the first levels of robustness (\(k=1\), \(p=0.5\)) manage to significantly reduce the number of replans with a small or no increase in makespan. For instance, in Table 4, \(k\)-TP with \(k=1\) decreases the number of replans by more than 75% with an increase in makespan of less than 2%, with respect to the baseline.
Pushing towards higher degrees of robustness (i.e., increasing \(k\) or decreasing \(p\)) tends to increase the makespan significantly, with diminishing returns in terms of the number of replans, especially for \(k\)-TP. Comparing \(k\)-TP and \(p\)-TP, it is clear that solutions produced by \(k\)-TP tend to be more robust at similar makespan (e.g., see \(k\)-TP with \(k=1\) and \(p\)-TP with \(p_{d}=0.1\) and \(p=0.5\) in Table 1), and decreasing \(p\) may sometimes lead to significant increases in makespan. This suggests that our implementation of \(p\)-TP has room for improvement: if the computed path exceeds the threshold \(p\), we wait for the next time step to replan, without storing any collision information extracted from the Markov chains; finding ways to exploit this information may lead to an enhanced version of \(p\)-TP (this investigation is left as future work). It is also interesting to notice the effect of \(p_{d}\) in \(p\)-TP: a higher \(p_{d}\) (which, in our experiments, amounts to overestimating the actual delay probability that, considering that runs last on average about 300 time steps and there are 10 delays per agent, is roughly \(\frac{10}{300}\approx 0.03\)) leads to solutions requiring fewer replans, but with a noticeable increase in makespan. These results reinforce the importance of addressing possible delays during planning and not only during execution, especially when the delays can dramatically affect the operations of the agents, as in this case. The \(k\)-TP algorithm performs better than the \(p\)-TP algorithm, with trends similar to those discussed above. Note that, especially in the more constrained small warehouse (Table 5), the large reduction in the number of replans produces a shorter runtime for \(k\)-TP with small values of \(k\) with respect to the baseline TP. Finally, we ran simulations in an even larger warehouse 4-connected grid environment of size \(25\times 37\), with \(50\) agents, \(\lambda=1\), \(100\) tasks, and \(10\) delays per agent. The same qualitative trends discussed above are observed also in this case. For example, \(k\)-TP with \(k=2\) reduces the number of replans by \(93\%\) with an increase in makespan of \(5\%\) with respect to the baseline. The runtime of \(p\)-TP grows to hundreds of seconds, even with large values of \(p\), suggesting that some improvements are needed. Full results are not reported here due to space constraints.

## 6. Conclusion

In this paper, we introduced a variation of the Multi-Agent Pickup and Delivery (MAPD) problem, called MAPD with delays (MAPD-d), which considers an important practical issue encountered in real applications: delays in execution. In a MAPD-d problem, agents must complete a set of incoming tasks (by moving to the pickup vertex of each task and then to the corresponding delivery vertex) even if they are affected by an unknown but finite number of delays during execution. We proposed two algorithms to solve MAPD-d, \(k\)-TP and \(p\)-TP, that are able to solve well-formed MAPD-d problem instances and provide deterministic and probabilistic robustness guarantees, respectively. Experimentally, we compared them against a baseline algorithm that reactively deals with delays during execution. Both \(k\)-TP and \(p\)-TP plan robust solutions, greatly reducing the number of replans needed with a small increase in solution makespan. \(k\)-TP showed the best results in terms of the robustness-cost trade-off, but \(p\)-TP still offers great opportunities for future improvements.
Future work will address the enhancement of \(p\)-TP according to what we outlined in Section 5.2 and the experimental testing of our algorithms in real-world settings.
2309.02712
Unveiling the frontiers of deep learning: innovations shaping diverse domains
Deep learning (DL) enables the development of computer models that are capable of learning, visualizing, optimizing, refining, and predicting data. In recent years, DL has been applied in a range of fields, including audio-visual data processing, agriculture, transportation prediction, natural language, biomedicine, disaster management, bioinformatics, drug design, genomics, face recognition, and ecology. To explore the current state of deep learning, it is necessary to investigate the latest developments and applications of deep learning in these disciplines. However, the literature is lacking in exploring the applications of deep learning in all potential sectors. This paper thus extensively investigates the potential applications of deep learning across all major fields of study, as well as the associated benefits and challenges. As evidenced in the literature, DL exhibits accuracy in prediction and analysis, which makes it a powerful computational tool, and it has the ability to articulate itself and optimize, making it effective in processing data with no prior training. At the same time, deep learning necessitates massive amounts of data for effective analysis and processing. To handle the challenge of compiling huge amounts of medical, scientific, healthcare, and environmental data for use in deep learning, gated architectures like LSTMs and GRUs can be utilized. For multimodal learning, shared neurons in the neural network for all activities and specialized neurons for particular tasks are necessary.
Shams Forruque Ahmed, Md. Sakib Bin Alam, Maliha Kabir, Shaila Afrin, Sabiha Jannat Rafa, Aanushka Mehjabin, Amir H. Gandomi
2023-09-06T04:50:39Z
http://arxiv.org/abs/2309.02712v1
# Unveiling the frontiers of deep learning: innovations shaping diverse domains

###### Abstract

Deep learning (DL) enables the development of computer models that are capable of learning, visualizing, optimizing, refining, and predicting data. In recent years, DL has been applied in a range of fields, including audio-visual data processing, agriculture, transportation prediction, natural language, biomedicine, disaster management, bioinformatics, drug design, genomics, face recognition, and ecology. To explore the current state of deep learning, it is necessary to investigate the latest developments and applications of deep learning in these disciplines. However, the literature is lacking in exploring the applications of deep learning in all potential sectors. This paper thus extensively investigates the potential applications of deep learning across all major fields of study, as well as the associated benefits and challenges. As evidenced in the literature, DL exhibits accuracy in prediction and analysis, which makes it a powerful computational tool, and it has the ability to articulate itself and optimize, making it effective in processing data with no prior training. At the same time, deep learning necessitates massive amounts of data for effective analysis and processing. To handle the challenge of compiling huge amounts of medical, scientific, healthcare, and environmental data for use in deep learning, gated architectures like LSTMs and GRUs can be utilized. For multimodal learning, shared neurons in the neural network for all activities and specialized neurons for particular tasks are necessary.

**Keywords:** Deep learning; Deep learning architecture; Natural language processing; Computer vision; Deep neural network

## 1 Introduction

Deep learning is a method for constructing computational models composed of numerous processing layers in order to investigate and learn representations of data at multiple levels of abstraction. Utilizing these methods, improvements have been made in various sectors, including audio-visual technology, recognition and detection, genomics, proteomics, biomedicine, drug discovery, the environment, and security. Deep learning unravels and identifies the underlying structures within massive datasets, utilizing algorithms such as back-propagation to learn and apply changes to given conditions as a machine would [1]. Deep learning exhibits advantages over earlier machine learning and artificial intelligence algorithms that lack the ability to analyze natural raw data. By utilizing methods like representation learning, a system is able to take raw data as input and determine the patterns necessary for analysis. Deep learning takes into account many layers of representation data, each of which has an effect on the others. It is thus essential to examine the recent advancements and applications of deep learning in the potential fields, including audio-visual data processing [2][3][4], agriculture [5][6][7][8], transportation prediction [9][10][11], natural language [12][13], biomedicine [14][15], disaster management [16][17], bioinformatics [18][19][20], healthcare [21][22], drug design [23][24], genomics [25], face recognition [26][27], and ecology [28][29], in order to understand the present state of deep learning. Numerous surveys have been conducted on the analysis of medical image segmentation, reviewing different analysis techniques.
While many types of analysis were discussed, the technical details surrounding them were not [30]. For instance, Litjens et al. [31] adopted a broad approach to medical image segmentation and reviewed many different subfields, which made it difficult to narrow down the topic. Hesamian et al. [32] shed light on the machine learning and artificial intelligence techniques utilized in recent biomedical image research, focusing on their structure and methodology and evaluating their benefits and limitations. However, they did not discuss the difficulties of black boxes and the inaccessibility of complex neural networks to human cognition. Several studies [33, 34, 35, 36] evaluated features of deep learning-based audio-visual data processing, such as text analysis, activity recognition, and face recognition, without exploring the difficulties of black boxes or data privacy issues. In particular, in the sampling of text and picture data, the processing of personally identifiable information is subject to numerous legal constraints that are not adequately clarified. Sorting and analyzing biomedical data are crucial challenges in the health industry, since such data are complicated, varied, and dispersed. Health records, image records, genomics, proteomics, transcriptomics, sensory data, and texts are just a few examples of the many types of data produced by the biomedical sector. To accurately forecast, represent, analyze, and sort this data, deep learning-based data mining approaches have proven to be both effective and fast. The usage of spiking neural networks (SNNs), which are constructed to emulate the information processing methods of biological systems, was reviewed by Pfeiffer et al. [37]. However, the study failed to address the issues posed by these systems, such as the fact that massive volumes of data input cause delays and fail to incorporate uncertainties. Many other reviews concentrating on deep learning techniques for health data overlook the challenges posed by poor data quality and excessive data volume [38]. Privacy problems of health data and genomic data are commonly disregarded in data mining approaches in biomedicine and bioinformatics [39, 40]. Yuan et al. [41] reviewed conventional neural network and deep learning approaches pertaining to the development of environmental remote sensing procedures, as well as their applications in environmental monitoring. Nevertheless, the authors did not explain how to deal with issues like incomplete or inaccurate data, complex domains, inaccurate models, or the difficulty of using a multidisciplinary approach to merge disciplines. Kamilaris et al. [42] surveyed numerous research areas that applied deep learning approaches to address issues in agricultural and food production. The researchers found that deep learning can outperform conventional methods in terms of providing accurate results. Another study [43] on plant diseases reported that deep neural network (DNN) models are effective in simulating a warning mechanism for specific diseases. Unfortunately, data collection, data volume, and data structure, as well as domain complexity and modelling, were not addressed in either of these studies. Moreover, these reviews barely explored the issue of data representation, despite the fact that reliable data is essential for productive research.
Based on a survey of the current literature, there has not been a comprehensive assessment of the applications of deep learning that discusses the most prevalent obstacles associated with deploying deep learning across a wide variety of industries. Therefore, this review aims to combine a wide range of topics, including information technology, biomedical research, health, the environment, and agriculture. In addition, a review of common obstacles encountered in most disciplines when employing deep learning methods is conducted. This report can be used as a single resource for studying and learning about the applications of deep learning in a range of sectors, as well as for gaining an understanding of potential implementation issues with deep learning. This study will facilitate the work of academics by collecting several evaluations of applications from various fields and the common deep learning challenges in these domains.

## 2 Deep learning frameworks

Deep learning techniques can be viewed as representation learning methods with many layers of representation. These representation components are created by composing non-linear elements that successively transform one level of representation into the next, more abstract level. Deep learning can perform very complicated functions by combining these transformations. Many deep learning frameworks are used to develop solutions for complex problems, such as Caffe, Caffe2, MXNet, the Computational Network Tool Kit (CNTK), Torch, Theano, and TensorFlow, which are discussed in this section [44]. The advantages and disadvantages of the DL frameworks are summarized in Table 1.

### Caffe

Caffe stands for convolutional architecture for fast feature embedding. It is a free and open-source DL framework that allows its users to explore complex structures. This library was created in C++ by the Berkeley Vision and Learning Center (BVLC) and is often used from Matlab and Python [45]. One study found that Caffe can process more than forty million images per day on a single Titan or K40 GPU [46]. In Caffe, data are received by data layers. These layers accept standard image formats (.gif, .tiff, .jpg, .jpeg, .png, .pdf), as well as the Hierarchical Data Format (HDF5) and efficient databases. Moreover, other research found that integrating Caffe with cuDNN enhances performance by 36% without using much memory [47].

### Caffe2

Caffe2 is an advanced version of Caffe created by Yangqing Jia, the same person who created Caffe. After Yangqing Jia began working for Facebook, they collaborated with Facebook and NVIDIA to create the Caffe2 framework, which is based on Caffe [48]. Caffe2 addresses several of Caffe's shortcomings, including large-scale distributed training, deployment on mobile systems, and additional hardware support for quantized processing. Caffe2 is well supported by NVIDIA and includes Python and C++ APIs [49]. This enables fast prototyping and refinement of projects.

### MXNet

MXNet is a multilingual deep learning framework designed to achieve both speed and flexibility [50]. Pedro Domingos and a team of scientists created MXNet, which is also an extension of DMLC (Distributed (Deep) Machine Learning Community). This framework is highly scalable with a small memory footprint.
It can operate on a vast range of platforms, including GPU systems and mobile devices, and can perform numerical computation with short Python and R code for distributed networks and GPUs. It supports both imperative and symbolic programming, through the NDArray API and the Symbol API, respectively [51].

### Computational Network Tool Kit (CNTK)

CNTK is a deep learning framework that features a Python API and is built on top of C++ code; it was developed by Microsoft [52]. This framework portrays neural networks as a sequential computational process. CNTK can readily combine some of the most prominent architectures, like convolutional neural networks (CNNs), feed-forward DNNs, and recurrent neural networks (RNNs/LSTMs). C# and BrainScript are supported by CNTK, both of which offer high-level and low-level APIs for simplicity of usage and versatility. In one study, CNTK was evaluated using a system with several GPUs on a fully connected four-layer neural network [53] and was shown to outperform Caffe, TensorFlow, Theano, and Torch [54].

### Torch/PyTorch

Torch is a DL framework written in the Lua programming language. Many of the functions in Torch operate on tensors (arrays), including memory exchange, indexing, slicing, and resizing [55]. PyTorch, the Python version of Torch, was released by Facebook in 2017 and employs dynamic computation graphs to manage variable-length inputs [56]. It is written in Python, C, and CUDA and features acceleration libraries from Intel and NVIDIA. These features have aided PyTorch's rapid growth and adoption among academic communities.

### Theano

Theano is a Python deep learning framework that acts as a compiler for numerical expressions and allows developers and users to evaluate mathematical expressions via NumPy-like syntax [57]. This package was created at a lab of the University of Montreal. Several packages have been designed to extend the capabilities of Theano, such as Pylearn2 [58], Blocks [59], Lasagne [60], and Keras [61]. Although Theano's development team actively developed and expanded its infrastructure for years, its development ended in 2017.

### TensorFlow

TensorFlow is a deep learning framework that employs a single dataflow graph to define all mathematical operations, providing exceptional throughput [62]. It is an open-source framework developed by the Google Brain team. TensorFlow creates huge computational graphs, wherein every node in the graph refers to a mathematical function and the edges indicate node interaction [63]. This dataflow graph explicitly expresses communication between parts of the computational system, allowing for concurrent execution of separate calculations or the deployment of numerous devices to run partitioned operations [64]. TensorFlow constructs and executes these graphs using APIs for multiple programming languages, including Python, C++, and Java.

Table 1: Advantages and disadvantages of the DL frameworks, with brief descriptions of each

| DL framework | Brief description | Advantages | Disadvantages |
|---|---|---|---|
| Caffe | Convolutional architecture for fast feature embedding; a freely available DL framework that may be used to analyze complicated networks. | Flexible code that is ideally suited for research; allows adjusting training models without writing much code. | Poor documentation; poor and cumbersome performance on big neural networks and RNN architectures; its development has become slow. |
| Caffe2 | An improved variant of Caffe that fixes a number of issues with Caffe, including scalability, portability, and the lack of hardware support for quantized processing. | A cross-platform deep learning framework; can be used in mobile networking systems or edge computing systems. | Can be challenging for new users; does not support dynamic graph computations. |
| MXNet | A multilingual DL framework made to achieve speed and flexibility and to analyze complex networks. | Supports high-level programming languages such as C++, R, Python, Scala, JavaScript, Perl, Go, Julia, Matlab, and Wolfram; supports dynamic graph computations; the Open Neural Network Exchange (ONNX) format also works in MXNet, which makes it simple to switch between other DL frameworks and libraries. | Not compatible with some APIs; hard for beginners. |
| CNTK | A deep learning framework written in C++ with a Python API; makes it simple to combine popular network architectures, including CNNs, feed-forward DNNs, and RNNs/LSTMs. | Compatible with Azure Cloud, which is supported by Microsoft; the control and use of resources are both efficient. | As it is significantly new, it does not have vast community support; not compatible with some APIs. |
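As an illustration of the imperative, define-by-run style that PyTorch (and MXNet's NDArray API) exposes, consider the following minimal sketch of a tiny network and one training step; the architecture and data are toy placeholders rather than a recommended setup:

```python
import torch
import torch.nn as nn

# A tiny two-layer network; the computation graph is built
# dynamically as the operations below execute (define-by-run).
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, 10)           # toy batch of 8 examples
y = torch.randint(0, 2, (8,))    # toy class labels

loss = loss_fn(model(x), y)
loss.backward()                  # gradients computed via autograd
optimizer.step()
```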
## 3 Recent advancements and applications of deep learning

Deep learning has addressed challenges that were considered impossible a few years ago. Specifically, it has been successful in handling issues that conventional machine learning algorithms failed to solve; for example, deep learning has revolutionized image and video analysis, where traditional methods often struggle with variations in lighting, angles, and backgrounds. Because of its exceptional data handling capabilities, it has piqued the interest of professionals who are inundated with all kinds of data. The advancement and application of deep learning (**Fig. 1**) have increased over the years.

Figure 1: Applications of deep learning in potential sectors

Natural language processing is one of the fields that has benefited significantly from deep learning, which is used in various domains of natural language processing, like the translation of audio and machine translation [65]. In 2015, Google debuted the Word Lens identification engine based on deep learning. The best feature of Word Lens is that it can read text instantaneously and convert it into the target language [66]. Another field where deep learning is being immensely used is transportation. As the world population increases, smart transportation systems are in demand.
In the transportation sector, deep learning is being used in Destination Prediction [67], Demand Prediction [68, 69], Traffic Flow Prediction [70, 71], Travel Time Estimation [73, 74], Predicting Traffic Accident Severity [75, 76], Predicting the Mode of Transportation [77], Trajectory Clustering [78, 79], Navigation [80, 81], Demand Serving [82, 83], and Traffic Signal Control [84, 85]. Implementation of DL in the bioinformatics domain has increased substantially over the years. In this area, DL has mostly been used to predict the structure of proteins [86, 87], gene regulation and expression [88, 89], precision medicine [90, 91], biomedical imaging [92, 93], localization of cells [94, 95], clustering [96, 97], classification of proteins [98, 99], and so on. Techniques of DL have become one of the key elements in many multimedia systems [100]. Architectures like convolutional neural networks (CNNs) have presented noteworthy outcomes in various real-world tasks, such as the detection of objects and the processing of visual data, including images and videos. There have been numerous advancements, including the development of event detection from sports videos [101], new techniques like recurrent convolution networks (RCNs) for video processing [102], the use of intermediate CNN layers [97], and Gated Recurrent Units for improving sparsity and locality in modules.

### Speech and audio processing

Speech and audio have proven to be two important modes of communication in human history. Deep learning technologies have made significant progress in the field of speech processing. Unlike traditional speech enhancement techniques that rely on a statistical model, deep learning models are data-driven. Deep learning is among the most important technologies nowadays and, thus, merits its own study. Mapping-based and masking-based approaches are the two types of DL approaches used in speech enhancement [103]. Moreover, DL methods for speech emotion recognition have several advantages over conventional machine learning methods, such as the ability to identify complex systems and no requirement for manual extraction of features [104]. Also, deep learning has the ability to handle large amounts of unlabeled data and a proclivity for obtaining low-level characteristics from provided raw data. Furthermore, many academics have abandoned traditional signal processing approaches for sound production due to the emergence of deep learning algorithms. Deep learning techniques have accomplished eloquent voice generation, audio textures, and melodies from simulated instruments [105]. Deep neural networks have shown excellent progress in audio processing.

#### 3.1.1 Speech enhancement

Speech enhancement is utilized in video conferencing, hearing aids, microphones, audiovisual screening, and so on. In order to build a good speech enhancement system, pattern mining is needed [106]. Suppressing background noise is considered a good speech augmentation method. Different types of conventional machine learning approaches are available to filter and eliminate additional noise from voice signals. In recent years, DL methods have proven beneficial for speech augmentation. The deep neural network (DNN) is the most general technique that has been used to remove noise from large datasets [107]. However, research shows that DNNs struggle to adapt to new voice corpora in low signal-to-noise ratio (SNR) settings [108].
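To make the masking-based DNN formulation mentioned above concrete, here is a minimal sketch in which a small network predicts a time-frequency mask applied to a noisy magnitude spectrogram; the shapes, data, and architecture are toy assumptions, not a reference implementation:

```python
import torch
import torch.nn as nn

n_freq = 257  # e.g., frequency bins of a 512-point STFT (assumed)

# Small network that predicts a [0, 1] mask per time-frequency bin.
mask_net = nn.Sequential(
    nn.Linear(n_freq, 512), nn.ReLU(),
    nn.Linear(512, n_freq), nn.Sigmoid(),
)

noisy_mag = torch.rand(100, n_freq)   # 100 frames of a toy noisy spectrogram
clean_mag = torch.rand(100, n_freq)   # aligned clean reference (toy)

enhanced = mask_net(noisy_mag) * noisy_mag   # masking-based enhancement
loss = nn.functional.mse_loss(enhanced, clean_mag)
loss.backward()   # train the mask network toward the clean target
```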
One alternative solution to this cross-corpus problem is to utilize a lower frameshift in short-time speech processing, which can substantially improve cross-corpus generalization. Noise suppression is a well-developed field of signal processing [109], yet it still relies heavily on the precise tuning of estimator techniques and variables. One study proposed hybrid deep learning technologies to eliminate background noise [110]. In this technique, great emphasis is placed on minimizing complexity while obtaining highly improved speech. The method proves more viable than a standard minimum mean squared error spectrum estimator. In order to achieve significant speech enhancement performance, a large DNN is needed, which is compute- and memory-intensive [111]. Such speech enhancement systems are therefore challenging to deploy because of timing constraints and limited hardware resources. One study introduced two compression pipelines for DNN-based speech enhancement, which mainly include clustering-based quantization to minimize model size, sparse regularization, and iterative pruning [112]. Moreover, experimental results indicate that this method decreases the size of four distinct models by a substantial amount without affecting their enhancement performance.

#### Speech emotion recognition (SER)

Speech emotion recognition (SER) refers to identifying the underlying emotion in a text or speech regardless of the semantic content. The implementation of deep learning in SER has made it possible to detect emotion in speech in real-time situations [113]. Several studies have aimed to enhance SER techniques using deep learning methods. In one study, a dynamic SER identification system was developed based on aural and visual data processing [114]. In this system, a total of 88 characteristics (Mel Frequency Cepstral Coefficients (MFCCs) and filter bank energies (FBEs)) are employed, and the extracted features are reduced using principal component analysis (PCA) [115]. A comparison of the deep learning flow versus conventional machine learning flow techniques for SER is shown in Fig. 2. Surekha [117] utilized GMM classifiers in an SER system to detect five emotions, applying the Berlin Emotional Speech database. An acoustic feature, the MFCC with the Teager Energy Operator (TEO), was used as a prosodic feature. This approach was employed in another study, where 13 MFCC features derived from the audio data were utilized to identify seven emotions [118]. In this method, the Logistic Model Tree (LMT) method was used, which has a 70% accuracy rate. Nevertheless, there are some challenges related to SER that need to be addressed. For instance, an issue with emotional speech datasets is annotation ambiguity [119]. In a specific task like picture classification, a car will always be labeled as a car, yet in emotional discourse, asserting something strongly may be categorized as anger. This bias in categorization complicates the work while also limiting the ability to combine datasets and generate emotional supersets.

### Transportation prediction

Recently, researchers have been using deep learning techniques in intelligent transportation systems, where analytical or statistical methods were previously used to solve problems.
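As a flavor of the sequence models discussed in this subsection, the following minimal sketch shows an LSTM-based short-term traffic flow forecaster; the data, shapes, and prediction horizon are toy assumptions:

```python
import torch
import torch.nn as nn

# Toy forecaster: map the last 12 flow readings of a road segment
# to a prediction of the next reading.
lstm = nn.LSTM(input_size=1, hidden_size=32, batch_first=True)
head = nn.Linear(32, 1)

history = torch.randn(16, 12, 1)   # 16 segments, 12 past time steps
_, (h, _) = lstm(history)          # h[-1]: final hidden state per segment
next_flow = head(h[-1])            # (16, 1) predicted next flow values
```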
Deep learning-based enhancements have aided traffic management and planning, raised road transit safety and security, reduced maintenance costs, improved the performance of public transportation and ride-sharing firms, and propelled driverless car development to a new level.

Figure 2: Traditional machine learning vs deep learning flow process (modified from [116])

Zheng et al. [120] presented a DL model that automatically extracts fundamental properties of traffic flow data using hybrid, multi-layer architectures. They designed an attention-based ConvLSTM module that extracts spatial and short-term temporal features using a CNN and an LSTM network. By automatically applying various weights to flow sequences at different periods, the attention method was constructed to discern the relevance of the sequences at several intervals. In order to investigate long-term temporal features, the researchers proposed a bidirectional LSTM (Bi-LSTM) architecture that extracts regular (daily and weekly) periodic features in order to capture the variation trend of traffic flow in the forward and backward directions. However, only relatively small and simple road networks were considered in the study, whereas real road networks are more complicated and large-scale. As a result, the CNN and Bi-LSTM methods might be unable to fully exploit the complex and dynamic properties of traffic flow. A multistep prediction model built on an attention-based Convolutional Neural Network-LSTM was proposed by Vijayalakshmi et al. [121]. To increase model accuracy, the suggested technique employs the spatial and time-based features of the data, which are extracted by the CNN and LSTM. This method facilitates the detection of short-term traffic characteristics (e.g., speed), which are essential for calculating future flow values. Weather and other conditions, like accident and road closure data, could be taken into account to improve the reliability of the suggested technique. Abdollahi et al. [122] proposed a method for forecasting travel time using a multi-step approach, which begins with the removal of both temporal and spatial outliers. To increase prediction accuracy, reduce overfitting risks, and achieve a more robust learner, the researchers employed a deep stacked autoencoder to obtain lower-dimensional features. A multi-layered perceptron was then trained to forecast travel times. While the proposed technique showed itself capable of capturing traffic dynamics in general, it was not successful in the presence of heavy snow or other uncommon occurrences that substantially influence travel times. To overcome this problem, several deep architectures and representation learning algorithms could be implemented, followed by a performance comparison to find an effective method.

### Agriculture

The successful application of deep learning has spread to many fields, including agriculture. Some specific problems in this field that can be explored with deep learning methods include land cover categorization, plant classification, fruit counting, and crop type classification. In a survey, Kamilaris and Prenafeta-Boldú [123] found that deep learning outperforms frequently used image processing approaches in terms of accuracy in the agricultural domain. Many methods based on deep learning have been developed recently for identifying leaf stress in plants; most follow the fine-tuning recipe sketched below.
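A minimal sketch of this common recipe, fine-tuning a pretrained CNN to classify leaf images, follows; the backbone (ResNet-18 rather than the Inception V3 used in the work discussed next), the number of classes, and the data are illustrative assumptions, and a recent torchvision is assumed:

```python
import torch
import torch.nn as nn
from torchvision import models

num_classes = 5  # assumed number of leaf disease categories

# Load an ImageNet-pretrained backbone and replace its classifier head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, num_classes)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(4, 3, 224, 224)            # toy batch of leaf images
labels = torch.randint(0, num_classes, (4,))    # toy disease labels

loss = loss_fn(model(images), labels)
loss.backward()       # only the new head is being optimized here
optimizer.step()
```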
Ramcharan et al. [124] offered a transfer learning approach for cassava disease detection. An Inception V3 model was used on a sample of 15,000 photos, and the approach outperformed more common machine learning models, such as KNN and the support vector machine (SVM), with an accuracy of 93%. For certain diseases (CBSD, BLS, GMD), using the leaflet rather than the full leaf increased diagnostic accuracy; for other diseases, like CMD and RMD, using whole-leaf photos enhanced accuracy. Instead of evaluating the entire leaf, Arnal Barbedo [125] employed a CNN to categorize specific diseases and spots on plant leaves. This revealed a number of illnesses that all damage the same leaf. In addition, the author used deep learning to recognize specific lesions and spots in 14 plant species. The models in this study were trained using a pre-trained GoogLeNet CNN. How many pictures of disease symptoms would be needed for the neural network to learn their characteristics could not be determined by the study. In all situations, a few hundred photos appeared to be sufficient to produce credible findings, but this quantity must be treated with caution. Lin et al. [126] applied an RGB sensor along with a U-Net CNN to classify and identify cucumber leaves affected by powdery mildew. Because the loss value of the affected pixels in this study was lower than that of the non-affected pixels, the experiment used a loss function (binary cross-entropy) that scales the loss values tenfold. The semantic segmentation CNN model segmented the diseased powdery mildew on cucumber leaf pictures with a mean pixel accuracy of 96.08%. However, it is necessary to take images in a controlled setting rather than in an open field. Furthermore, a lack of an appropriate amount and diversity of data, where symptoms produced by other conditions were not included, may hinder the effectiveness of DL methods. DL approaches have recently been implemented in smart fish farms. A deep learning method was demonstrated to better differentiate variations in traits, classes, and environments, allowing it to extract target fish attributes from pictures captured in an uncontrolled underwater environment [127]. On the well-known LifeCLEF-14 and LifeCLEF-15 fish datasets, CNNs outperformed conventional techniques with classification accuracies of over 90% [128]. General deep structures in the experiment should be fine-tuned to increase the efficacy of identifying vital information in the feature space of interest, thus reducing the requirement for vast volumes of annotated data.

### Natural language processing (NLP)

Deep learning algorithms are rapidly being implemented in NLP research. Popular ML algorithms, such as SVMs and logistic regression, trained on sparse, high-dimensional features, have been the foundation of ML approaches to NLP problems for decades. On a range of NLP tasks, neural networks based on dense vector representations have recently surpassed classical models. This trend has been fueled by significant advancements in word embeddings and DL approaches [65].

#### 3.4.1 Paraphrase identification

The task of identifying whether two statements written in natural language have comparable semantic meanings is known as paraphrase identification. When two sentences have the same meaning, they are referred to as paraphrases. This methodology is a fundamental component of a variety of data mining approaches and has tremendous potential in a variety of domains, including plagiarism detection, machine translation, and others. Various techniques for paraphrase identification have been presented, which fall into two categories: similarity-based and classification-based methods [129]; a toy similarity-based check is sketched below.
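To illustrate the similarity-based family, here is a toy check that averages word embeddings and compares sentences by cosine similarity; the vocabulary, embeddings (random here, pretrained in practice), and threshold are all illustrative assumptions:

```python
import torch
import torch.nn.functional as F

# Toy vocabulary and (untrained) embedding table; real systems would
# use pretrained embeddings such as word2vec or GloVe.
vocab = {"the": 0, "cat": 1, "sat": 2, "feline": 3, "rested": 4}
emb = torch.nn.Embedding(len(vocab), 16)

def sentence_vector(words):
    ids = torch.tensor([vocab[w] for w in words])
    return emb(ids).mean(dim=0)   # average of word vectors

s1 = ["the", "cat", "sat"]
s2 = ["the", "feline", "rested"]
sim = F.cosine_similarity(sentence_vector(s1), sentence_vector(s2), dim=0)
is_paraphrase = sim.item() > 0.8  # hypothetical decision threshold
```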
Alzubi et al. [130] applied a collaborative adversarial network (CAN) to the paraphrase identification problem. A common feature extractor is included in CAN to help increase the association between phrases in the recurrent neural network model; within a word pair, the extractor looks for related features. The integration of adversarial networks with collaborative learning mainly enhances how the generator and discriminator work together. The model outperforms the baseline MaLSTM model as well as several other baseline approaches. Hunt et al. [131] investigated a wide range of machine learning approaches for modeling the paraphrase identification problem, along with alternative input encoding schemes. Since RNNs outperformed the other models, the researchers developed a method based on the LSTM, an RNN variant. An RNN uses the output of past time steps to create the output for subsequent time steps. This fits NLP well, because text can be represented as a sequence of tokens, and the appearance of the next token is generally influenced by the preceding tokens.

#### Machine translation

NLP's most well-known application is machine translation, which entails translating text from one language to another using mathematical and computational procedures. Translation is challenging even for humans, since it requires knowledge of morphology, syntax, and semantics, as well as a full understanding and evaluation of the cultural sensitivities of the target language and its associated communities. Ahmed et al. [132] advocated that attention methods can be used to encode a language from input to output instead of employing a huge number of recurrent and convolutional layers. The following three principles motivate the use of "self-attention" methods over traditional layers: decreasing the complexity of the calculations needed for each layer; limiting sequential training steps; and, lastly, reducing the length of the path between input and output and its influence on the learning of long-range relationships, which is important in several sequencing tasks. Johnson et al. [132] presented that a single, basic (but huge) NN could be applied to translating a number (at least 12) of languages into each other. This NN automatically recognizes the source languages and uses just one input token to determine the output language. When multiple language tokens are supplied, the method showed itself capable of interpreting multilingual inputs and producing mixed outputs, at least in part, sometimes even in languages similar to but not identical to those chosen. The performance of such zero-shot translation is frequently insufficient to be practical, as the basic pivoting strategy quickly outperforms it. A deep-attention technique was introduced by Zhang et al. [133] for a neural machine translation (NMT) system. This model incorporates many stacked attention layers, each of which attends to a corresponding encoder layer to advise what should be transmitted or suppressed from the encoder layer, resulting in learned distributed representations that are suitable for high-level translation tasks. English-French, NIST English-Chinese, and WMT14 English-German translation schemes were used in this experiment. The technique could be applied to other tasks, like summarization, and could be adapted to more complex attention models.
Aside from the work listed above, other academics have suggested a variety of high-performance designs. For English-German and Chinese-English translation systems, Zhang et al. [134] introduced the Variational NMT approach, which models translation problems in a novel way. The results demonstrated that it performed better than the baseline NMT technique. Fast-forward connections for RNNs (LSTMs) were developed by Zhou et al. [135], allowing for a deeper network in implementation and, hence, greater performance.

#### Sentiment analysis

Users today create massive volumes of data in a large and dynamic fashion as internet technologies grow, and the number of people who use social media on a regular basis is continuously expanding. In this context, sentiment analysis appears to be a useful technique for automating the extraction of insights from user-generated data. Deep learning algorithms have recently been presented for several sentiment analysis applications, with promising results [136]. Al-Smadi et al. [137] suggested combining a BiLSTM with a CRF for extracting aspect opinion target expressions (OTEs) and for classifying aspect-sentiment polarity. An aspect-based LSTM was used, where the aspect opinion target expressions were applied as attention expressions. A two-stage attention structure was created by Ma et al. [138] that involves paying attention both to the words that make up the target expression and to the complete phrase. The authors also used an extended LSTM to construct a common-sense model for target aspect-based emotion detection, which can use external knowledge. However, these methods were unable to represent several aspects of sentences, and the explicit location contexts of words were not investigated. Yuan et al. [139] suggested a Domain Attention Model (DAM), utilizing an attention mechanism to model feature-level tasks for multi-domain sentiment categorization. Domain and sentiment modules are the two components that make up DAM. The domain module uses a BiLSTM to predict the domain to which texts belong, and the sentiment module uses another BiLSTM with an attention method to choose the key aspects connected to the domain. To forecast the polarity of the sentences, the vector derived from the sentiment module is input to a softmax classifier. In contrast to earlier multi-domain sentiment classification algorithms, the proposed methodology can pull out the most distinctive features from the hidden layers, reducing the required number of labeled samples. To predict multimodal attitudes in tweets, F. Chen et al. [140] suggested a weakly-supervised multimodal DL (WS-MDL) method. To produce multimodal prediction scores and sentiment consistency scores, the method employs a CNN and a dynamic CNN. The classic text-based sentiment/emotion analysis technique has evolved into compound multimodal sentiment analysis models due to the huge quantity of data available on social media in multiple formats, such as videos, audio, and photographs for expressing emotion on these sites. In future work, the order of the emotion icon levels might be further investigated, and this may be included as a constraint in the proposed WS-MDL technique. For sentiment categorization, researchers are also combining several DL algorithms; a toy sketch of such a hybrid model follows.
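A minimal sketch of a hybrid CNN-LSTM text classifier in this spirit; the vocabulary size, dimensions, and data are toy assumptions, not the configuration of any cited work:

```python
import torch
import torch.nn as nn

class CNNLSTMSentiment(nn.Module):
    """Toy hybrid classifier: a CNN extracts local n-gram features,
    an LSTM models their sequence, a linear layer predicts polarity."""
    def __init__(self, vocab_size=10000, emb=128, channels=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.conv = nn.Conv1d(emb, channels, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(channels, 64, num_layers=2, batch_first=True)
        self.out = nn.Linear(64, 2)   # positive / negative

    def forward(self, tokens):                   # tokens: (batch, seq)
        x = self.embed(tokens).transpose(1, 2)   # (batch, emb, seq)
        x = torch.relu(self.conv(x)).transpose(1, 2)
        _, (h, _) = self.lstm(x)                 # final hidden state
        return self.out(h[-1])                   # (batch, 2) logits

logits = CNNLSTMSentiment()(torch.randint(0, 10000, (4, 20)))
```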
One such architecture, suggested by Huang et al. [141], combines LSTM with CNN, with one CNN layer and two LSTM layers placed on top of the CNN. Specifically, the CNN is used to collect important local text properties, which are then input into the two-layered LSTM model. The researchers utilized a pre-trained word2vec model. The suggested model can generate a sentence representation for sentence categorization by extracting context-dependent features. Hassan and Mahmood [142] developed the ConvLstm architecture, which merges LSTM and CNN on top of word2vec for categorizing brief texts. With the suggested design, the loss of local information can be reduced and long-term dependencies can be captured.

#### 3.4.4 Question answering

Question answering (QA) is the process of extracting relevant words, phrases, or sentences from documents. In response to a request, QA delivers this information in a logical manner. The approaches are similar to the summarization methods that are presented in the next section. Wang et al. [143] utilized an attention-based LSTM to associate questions with answer-containing passages. A self-matching attention process was employed to develop the machine representations by mapping the full text. The positions and boundaries of answers were predicted using pointer networks. The networks utilized attention-pooled vector representations of the passages, and the words were examined to model the crucial tokens or phrases. Modeling more than one aspect simultaneously with the attention mechanism would be an interesting addition to the experiment. Using Wikipedia as the knowledge source, Mozannar et al. [144] investigated the topic of open-domain factual Arabic QA and proposed an approach composed of a BERT document reader and a hierarchical TF-IDF document retriever. They also introduced a dataset (ARCD). However, ARCD's questions were created with certain paragraphs in mind; without that context, they might seem ambiguous. Convolutional neural networks were compressed together with LSTMs in [145] in an attempt to speed up processing. For the decomposition of the fully connected layers in the LSTM and CNN, the researchers suggested applying different decomposition algorithms and regression techniques. Tensor regression layers replace the flattening and fully connected layers in the final section of the method, and a tensor contraction layer compresses the flow of features between the layers to further compress the parameters. Determining the rank is an NP-hard problem in low-rank decomposition, and their method is still constrained in this area by inserted hyper-parameters. Yu et al. [146] used a tree-LSTM method to capture a language's linguistic structures. The study created a semantic tree for each question in the dataset, with every node referring to a single LSTM unit and the root node representing the sequence. To boost reasoning capacity, this approach can divide problems into several logical phrases. The representational ability of the network could be improved in future work. Narasimhan and Schwing [147] suggested an end-to-end approach that combines image and question characteristics acquired from a CNN and an LSTM using a multi-layered perceptron (MLP). They gathered this information from other sources in order to better respond to the question. The fact embedding and MLP output are then put into a scoring function. The function calculates the cosine similarity between the inputs, in this case, to guarantee that the fact is useful in answering the image-question pair. The approach might be used on unstructured information sources, such as online text corpora, in addition to structured knowledge bases.
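Several of the QA models above rely on attention pooling over passage representations; a minimal sketch of (scaled) dot-product attention, with toy dimensions and random vectors standing in for learned representations:

```python
import torch

def attention_pool(query, keys, values):
    """Soft attention: weight each value vector by the (scaled)
    dot-product similarity between the query and its key."""
    scores = keys @ query / keys.shape[-1] ** 0.5   # (seq_len,)
    weights = torch.softmax(scores, dim=0)          # attention weights
    return weights @ values                         # weighted sum

keys = values = torch.randn(12, 32)   # toy passage: 12 token vectors
question_vec = torch.randn(32)        # toy question representation
pooled = attention_pool(question_vec, keys, values)   # (32,) summary
```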
#### Summarization

Automatic Text Summarization (ATS) is a significant topic due to the vast volume of textual data that is rapidly growing nowadays on the internet and other sources. Three ATS approaches are available: abstractive, extractive, and hybrid [148]. To create a summary, the extractive technique chooses and incorporates the most significant sentences from the source text (a toy example is sketched at the end of this subsection). In contrast, the abstractive approach transforms the input content into an intermediate representation before generating a summary using unique terms. The hybrid approach incorporates both the extractive and abstractive processes. L. Chen and Nguyen [149] proposed an ATS technique for summarizing single documents based on a Reinforcement Learning (RL) technique and an RNN model with an encoder-extractor network architecture. A sentence-level encoding approach is used to choose the key features, and summary sentences are then retrieved. However, the method risks suffering from large variances, since it uses an approximation in the training objective function of the RL method. Clustering, unsupervised NNs, and topic modeling were used to create an Arabic summarization approach in [150]. The researchers also presented ensemble learning models and neural network models to combine the information produced by the topic space. Specifically, the ELM-AE method and the k-means approach were used to accomplish document clustering on a large sample of Arabic documents. The LDA technique was implemented to determine the topic space associated with each cluster, and the discovered topic space was then used to generate a numerical document representation. This representation was utilized as the input for various NN and ensemble methods for learning unsupervised features from texts. To vary the information contained in the final summary, the learned characteristics were utilized to rank phrases following a graph-based model, and key phrases were picked by applying redundancy-removal components. A few more unsupervised NN methods, for instance stacked auto-encoders and RBMs, could be included in future studies to improve the robustness of the suggested technique. Another study combined a text summarization embedding space (TSS), a sentiment analysis embedding space (SAS), and an opinion summarizer module (OSM) [151]. SAS employs an LSTM-based RNN to reap the benefits of sequential processing. In order to improve word-level embedding, TSS applies a wide range of statistical and linguistic knowledge variables to extract a relevant set of sentences from a large number of texts; TSS additionally uses an RBM. OSM is divided into two steps, namely sentence categorization and sentence identification, which work together to provide a relevant summary. This was the first experiment in which an RBM and an RNN-LSTM were paired for predicting sentiment polarity. One of the limitations of the proposed method is that it fails to capture the difference between active and passive sentences.
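As a toy illustration of the extractive approach (deliberately much simpler than the neural and RL-based systems discussed above), the following sketch scores sentences by average word rarity, a crude TF-IDF stand-in, and keeps the top-k:

```python
import math
from collections import Counter

def summarize(sentences, k=2):
    """Toy extractive summarizer: rank sentences by the average
    rarity (IDF) of their words and return the top k in order."""
    docs = [s.lower().split() for s in sentences]
    df = Counter(w for d in docs for w in set(d))  # document frequency
    def score(d):
        return sum(math.log(len(docs) / df[w]) for w in d) / len(d)
    ranked = sorted(range(len(docs)), key=lambda i: score(docs[i]),
                    reverse=True)
    return [sentences[i] for i in sorted(ranked[:k])]

text = ["Deep learning improves summarization quality.",
        "The weather is nice today.",
        "Encoder-extractor networks select key sentences."]
print(summarize(text, k=2))
```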
### Biomedicine

A plethora of biological and medical data, including medical imaging, biological sequences, and protein structures, has been accumulated in recent decades due to advancements in high-throughput technology. Consequently, deep learning has been extensively applied to biomedical data. For instance, CNNs are widely employed in the field of biomedical image processing due to their extraordinary capacity to assess spatial features. CNNs also have a lot of potential in omics analysis [40] and the study of biological signals [152], even though analyzing sequencing data with CNNs is not very common. RNN-based architectures, on the other hand, are designed for sequential data and are more frequently utilized for transcriptome analysis [153] and for dynamic biomedical signals [154]. Hence, the focus on deep learning is increasing in the field of biomedical information, and it is possible that each paradigm may soon find new implementations.

#### Prediction of protein structure

Protein structural analysis now relies heavily on cryo-electron microscopy (cryo-EM) [155], which has made atomic resolution possible. However, on all but the purest density maps, with a resolution of less than 2.5 angstroms, estimating the structural trace of a protein remains difficult [156]. A DL model proposed by Si et al. [157] predicted the alpha-carbon atoms throughout the core structure of proteins using a series of cascaded convolutional neural networks (C-CNNs). Specifically, the C-CNN is a deep learning architecture consisting of several CNNs, each of which predicts a particular feature of a protein's structure. In order to create a comprehensive prediction map, this model combines the predictions of alpha-carbon atoms, core structure, and secondary structure elements. Using a semantic image classifier, the cascaded CNN was trained on a large number of generated density maps. With only a suggested threshold value needed per protein density map, this procedure was automatic and significant. To create the primary core trace with alpha-carbon placements, a customized tabu-search path-walking algorithm was applied. The alpha-helix secondary structural elements were further enhanced by a helix-refinement technique. Finally, to create full protein structures, a novel quality-assessment-based approach was employed to successfully align protein sequences into alpha-carbon traces, using 50 trial maps with resolutions between 2.6 and 4.4 angstroms. In terms of the proportion of connected alpha-carbon atoms, the proposed model produced core traces that were more comprehensive (88.9%), thus outperforming the Phenix-based structure construction method, whose accuracy was around 66.8%. By including additional protein structural details in the C-CNN or training the networks with experimental data, further study may enhance this research area. One of the most important issues in computational biology is the identification of microRNAs (miRNAs), which are critical in post-transcriptional regulation of expression. The typical length of miRNAs is between 20 and 23 base pairs [158]. Because it can be challenging to differentiate miRNA-encoding regions from other non-coding and pseudo sequences of identical length, the majority of earlier research suggested employing candidate miRNAs for reliable identification. There have been several proposals for traditional machine-learning-based categorization techniques, but they suffer from constrained performance and require constant feature engineering. In order to learn sequence patterns and folding structures, Park et al. [159] introduced deepMiRGene, which utilizes RNNs, particularly LSTM networks. The most significant contribution of this approach is that it does not involve any rigorous manual feature development. By utilizing end-to-end DL, this approach eliminates the need for extensive domain expertise and instead utilizes simple preprocessing; a toy sketch of an LSTM sequence classifier in this spirit follows.
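A toy sketch of such an LSTM sequence classifier over one-hot-encoded RNA; the sequence, dimensions, and two-class head are illustrative assumptions, not deepMiRGene's actual architecture:

```python
import torch
import torch.nn as nn

BASES = "ACGU"

def one_hot(seq):
    """One-hot encode an RNA sequence into a (len, 4) tensor."""
    t = torch.zeros(len(seq), len(BASES))
    for i, base in enumerate(seq):
        t[i, BASES.index(base)] = 1.0
    return t

lstm = nn.LSTM(input_size=4, hidden_size=32, batch_first=True)
head = nn.Linear(32, 2)   # pre-miRNA vs. non-miRNA

x = one_hot("AUGCUAGUACGGUA").unsqueeze(0)   # (1, seq_len, 4)
_, (h, _) = lstm(x)                          # final hidden state
logits = head(h[-1])                         # (1, 2) class scores
```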
Applying an LSTM network directly is challenging, however, due to the palindromic secondary structure of microRNA. To solve this problem, deepMiRGene employs a novel learning method in which the secondary structure of the input sequence is segmented into front and back streams. deepMiRGene performed better than all alternatives in terms of sensitivity and specificity on the test datasets, with sensitivities of 89%, 91%, and 88% on the method's three datasets. Even though there were significant disparities between the features of the various species, deepMiRGene also performed best when using cross-species data, thus demonstrating its potential for identifying inherent traits.

#### Genomic sequencing and gene expression analysis

Large-scale gene expression profiling has been frequently utilized in the characterization of cellular states in response to various illness circumstances, genetic mutations, and so on [160]. Creating a compilation consisting of thousands of gene expression profiles is still quite expensive, despite the fact that the cost of whole gene expression profiling has been gradually declining. The computational strategy used by the LINCS program, however, currently relies on linear regression, which restricts its accuracy because it cannot detect complex nonlinear correlations among gene expressions. Chen et al. [161] suggested using a DL technique called DGEX to predict the expression of target genes from the expression profiles of landmark genes. The model was trained using the microarray-based GEO dataset, which contains around 111,100 expression profiles, and its performance was evaluated against other approaches. With a 15.33% relative improvement, deep learning greatly exceeded LR in terms of mean absolute error averaged over all genes. Deep learning outperformed LR for 99.97% of the target genes in the gene-wise comparison. A separate RNA-Seq-based GTEx dataset with around 20,000 expression profiles was also used to test the performance of the trained model.

#### Medical image classification and segmentation

Deep learning strategies have been utilized in studies segmenting cerebral tumors as a result of their success in general image analysis disciplines, including image classification [162] and semantic segmentation [163]. In particular, CNNs were deployed in the Multimodal Brain Tumor Image Segmentation Challenge to identify scans of cerebral tumors [164]. Conditional random fields (CRFs) and fully convolutional neural networks (FCNNs) were combined in a single framework by Zhao et al. [165] to develop a unique deep learning-based identification approach for cerebral malignancy. The model was designed to produce tumor segmentations with both appearance and spatial consistency. Instead of employing CRFs as a post-processing step after the FCNNs, the authors utilized a CRF-RNN to implement the CRFs, making it simple to train both frameworks as a single deep network. The combined deep learning model was trained in three steps, employing image slices and pixel patches. The FCNNs were trained in the first step using image patches, while image slices were used to train the following CRF-RNN in the second stage. In the third stage, the entire network was fine-tuned using image slices.
According to the experimental data, the unification of FCNNs and CRF-RNN, which achieved an accuracy of 88%, could increase segmentation resilience to model training factors, such as image patch size and training patch count. Therefore, using a 3D CRF as a post-processing step could further enhance malignancy identification performance. Gliomas are the most prevalent and deadly type of cerebral tumor, with an extremely low survival probability at the highest grade [166]. Planning the treatment timeline is consequently essential for improving the quality of life of cancer patients. The assessment of these tumors is frequently done using magnetic resonance imaging (MRI), but the volume of data generated by MRI makes it impossible to segment the images manually in a timely manner. Accordingly, this restricts the use of thorough quantitative assessments in clinical practice. The enormous spatial and functional heterogeneity among cerebral malignancies makes automatic segmentation a difficult task; hence, dependable and automated segmentation approaches are needed. An automated segmentation technique based on CNNs was suggested by Pereira et al. [167] to explore small kernels for glioma segmentation in images derived from MRI. As a result of the network's smaller number of weights, using small kernels enabled the design of more intricate architectures while also helping to prevent overfitting. Intensity normalization was also utilized as an essential pre-processing step. While this is considered uncommon in CNN-based segmentation approaches, it proved quite successful when combined with data augmentation for segmenting brain tumors in images collected from MRI. By simultaneously taking the top spot for the complete, core, and enhancing regions in the DSC measure (88%, 83%, and 77%), the proposed methodology proved its validity on the brain tumor segmentation challenge database.

### Bioinformatics

The natural innate immune components known as antimicrobial peptides (AMPs) are commonly the focus of novel therapeutic developments due to the increase in antibiotic-resistant bacteria. Nowadays, wet-lab researchers frequently use machine learning techniques to find promising candidates. For instance, Veltri et al. [168] proposed a DL model to identify antibacterial activity by generating a neural network model that uses the core sequence composition and contains CNN and RNN layers. A comprehensive training and testing dataset incorporating the most recent antibacterial peptide information was utilized. In contrast to existing approaches, the proposed deep neural network (DNN) classifier performed better at AMP identification. Through the implementation of CNN and RNN layers, the dependency on prior feature generation was decreased. Besides, a reduced-alphabet representation demonstrated that adequate AMP identification can be preserved using 9 different kinds of amino acids based on the embedded parameters. The DNN model performed the best in terms of MCC, auROC, and ACC, and achieved a superior accuracy of 91.01% compared to other models, such as AntiBP2 (89.37%) and the CAMP database RF model (87.57%). Thus, the suggested model eliminates the dependency on domain experts for feature creation by employing a deep network model that automatically extracts expert-free characteristics. Nowadays, it is possible to measure DNA methylation at the single-cell level due to recent technological advancements.
To facilitate genome-wide analysis, strategies to predict unknown methylation patterns are necessary because existing procedures are constrained by insufficient CpG coverage. Angermueller et al. [169] developed DeepCpG, a computational method based on DNNs to predict the methylation states of single cells and model the sources of DNA methylation variability. DeepCpG utilizes relationships between DNA sequence patterns, methylation states, and nearby CpG sites, both within and between cells. The extraction of useful characteristics and the model training are not separated in the proposed method; rather, DeepCpG is built on a modular architecture and learns predictive DNA sequence and methylation patterns from the data. Notably, DeepCpG performed better than an RF trained on DNA and CpG characteristics. After training both the RF and DeepCpG models using only DNA sequence information, the highest relative improvements in accuracy were 80% and 83%, respectively. The detection of enhancer-promoter interactions (EPIs) is vital for human development. However, the majority of computational techniques currently in use depend on a range of genomic data that are unfortunately often not available, particularly for a specific cell line [170]. Sequence-based computational approaches, as an alternative, are the only ones likely to be applicable at the genome scale. A novel deep learning technique named EPIVAN was introduced by Hong et al. [171] that allows long-range EPIs to be predicted only from genomic sequences. Pre-trained DNA vectors were utilized to encode enhancers and promoters in order to capture the essential sequential properties. Subsequently, a one-dimensional convolutional model and deep neural units were used to identify both local and global characteristics for predicting EPIs in different cell lines. The effectiveness of EPIVAN was compared to that of other neural network models, such as SPEID, EPIANN, and SIMCNN. Each model applied identical test and training sets for every cell line. On six cell lines, the results of the proposed model and the four predictors were reported in terms of AUPR and AUROC. EPIVAN demonstrated the strongest AUROC of any model, ranging from 96.5% to 98.5%. According to test findings on six cell lines, EPIVAN outperformed the existing models, showed the capability of being utilized as a pre-trained model for further transfer learning, and demonstrated strong transfer ability. Prediction of enzyme function is an important step in constructing new enzymes and diagnosing enzyme-related disorders, which is a major task in bioinformatics [172]. Several studies have generally concentrated on predicting the mechanism of monofunctional enzymes. However, the number of multi-functional enzymes is continuously increasing, necessitating the development of novel computational techniques [173]. Zou et al. [174] proposed mlDEEPre, a deep learning network specifically designed to predict the functionality of multi-functional enzymes. Using an automatic label-assignment threshold and a novel transfer function linked to the interaction between labels, mlDEEPre can predict multi-functional enzymes reliably and quickly. The proposed multi-label model surpassed all other approaches, correctly predicting 97.6% of all observed primary categories in the test dataset with an SD of 0.27. The performance of SVM-NN, with an accuracy of 84.7%, seemed somewhat better than that of mlDEEPre (82.6%) and GA (80.8%) when predicting the infrequent class labels generated by unbalanced training samples. Comprehensive tests further demonstrated that mlDEEPre outperformed the other techniques in determining the kind of functional enzyme, as well as in primary class prediction across many parameters. As mlDEEPre and DEEPre are flexible, mlDEEPre could be effortlessly merged into DEEPre, allowing the enhanced DEEPre to handle different functional predictions without the need for human interference.
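As an illustration of the multi-label formulation used by systems like mlDEEPre (though not mlDEEPre itself), the sketch below assumes Keras and uses one independent sigmoid output per class with a per-label decision threshold; the feature dimension, label count, and fixed threshold are arbitrary assumptions, whereas mlDEEPre adapts its threshold automatically.

```python
import numpy as np
import tensorflow as tf

N_FEATURES, N_LABELS = 128, 6  # arbitrary sizes for illustration

# Multi-label head: one independent sigmoid per enzyme class,
# trained with binary cross-entropy summed over labels.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(N_FEATURES,)),
    tf.keras.layers.Dense(N_LABELS, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

X = np.random.rand(32, N_FEATURES).astype("float32")
Y = (np.random.rand(32, N_LABELS) > 0.8).astype("float32")  # dummy labels
model.fit(X, Y, epochs=2, verbose=0)

# Per-label thresholds decide how many functions each enzyme is assigned;
# here the threshold is fixed at 0.5 purely for demonstration.
thresholds = np.full(N_LABELS, 0.5)
pred = (model.predict(X, verbose=0) >= thresholds).astype(int)
print(pred[0])  # e.g. [0 1 0 0 1 0] -> a bi-functional prediction
```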
### Disaster management

In terms of disaster or natural calamity management, uncertainty, lack of resources in the affected areas, and dynamic environmental changes are the major characteristics of natural catastrophes. The inability to predict outcomes means that the catastrophic effects on people and property during disasters cannot be foreseen with a reasonable level of accuracy [175]. Big data is a technology paradigm that enables researchers to efficiently analyze the enormous amounts of data made accessible by current practices [176]. Making the most of big data is possible using a variety of technological and scientific strategies and equipment. Recent advancements in big data and IoT technologies open up a wide range of opportunities for disaster management systems to obtain timely assistance and guidance, as well as better observations and insights for precise and suitable decision-making. To resolve catastrophic challenges, Anbarasan et al. [177] introduced concepts and strategies for identifying flood catastrophes based on IoT, big data, and a convolutional deep neural network (CDNN). Big data derived from flood events was first used as the input. The Hadoop Distributed File System (HDFS) was then used to reduce the high-frequency data. After excluding the high-frequency data, the data were preprocessed via missing-value approximation and a normalizing technique. A combination of attributes was utilized to construct patterns from the preprocessed data. In the final step, the derived features were fed into the CDNN classifier, which classified them into two categories: the possibility and the impossibility of a flood occurring. The performance of the suggested system was analyzed in terms of precision, accuracy, recall, F-score, specificity, and sensitivity in comparison to the existing systems, namely a DNN and an artificial neural network (ANN). The comparison findings clearly demonstrate that the CDNN method has a greater level of accuracy than the existing approaches, with 93.2% accuracy, 92.23% precision, 90.36% recall, and 91.28% F-score on 500 data points; these results exceed those obtained with the DNN and ANN models. In conclusion, the detection system performed better than other leading methods currently in use, and in the future, the work presented here could be improved using IoT devices that have even longer sensor ranges at lower costs, and by using cutting-edge algorithms at each stage of the flood identification process.
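The preprocess-then-classify pipeline described above (impute missing sensor values, normalize, then feed a neural classifier) can be sketched compactly with scikit-learn; the feature names, dummy data, and network size below are assumptions for illustration, not the CDNN of the cited study.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier

# Dummy sensor records: rainfall, river level, soil moisture (NaN = missing).
X = np.array([[120.0, 4.2, np.nan],
              [ 10.0, 1.1, 0.2],
              [200.0, np.nan, 0.9],
              [  5.0, 0.8, 0.1]])
y = np.array([1, 0, 1, 0])  # 1 = flood possible, 0 = flood unlikely

# Missing-value imputation -> normalization -> neural classifier,
# mirroring the pipeline stages described in the text.
clf = make_pipeline(SimpleImputer(strategy="mean"),
                    StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=500,
                                  random_state=0))
clf.fit(X, y)
print(clf.predict([[150.0, 3.9, 0.8]]))  # -> [1]
```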
Disasters caused by fire have a negative impact on the environment, society, and economy. Early fire detection and an automated response are vital for disaster management schemes in order to minimize these damages. Fire detection at an early stage, while monitoring public spaces, woods, and nuclear power plants, can prevent ecological, monetary, and social harm [178]. The movement of objects with a fire-like appearance and varied atmospheric factors make early detection a hard proposition [179]. Subsequently, an algorithm with improved accuracy that reduces the number of false alerts under the aforementioned conditions is required. To accomplish this, Khan et al. [180] investigated deep neural networks and developed a prominent architecture for the early detection of fire through surveillance for efficient disaster management systems. Another design criterion was to notify the disaster management system after a successful fire detection and to include the representative frames. A flexible prioritization system was developed for the monitoring system's camera nodes, taking into account the contents they were capable of visualizing. Two datasets were the major focus of this experiment: on the first dataset the approach showed further improvement, increasing the accuracy from 93% to 94% and decreasing the false-positive rate from 0.11 to 0.09. Despite producing false negatives of around 21%, the approach maintained a stronger balance, enabling the methodology to achieve fire identification more accurately. The findings for the second dataset [181] were gathered using a different set of metrics, including recall, precision, and F-score, in order to assess the performance properly. Results produced using deep features and the optimized fire prediction model were compared and contrasted, showing that the detection approach and model are feasible, with a maximum score of 82% for precision, 98% for recall, and 89% for F-score. When evaluating the robustness of the proposed approach under noise intrusions, scaling, and rotations, each image containing fire showed a high accuracy ranging from 89% to 99%. Considering the potential of such distortions occurring under surveillance, the results indicate that the CNN-based model can detect fires at an early stage in a range of circumstances, even when images are blurry. The relevance of the framework for successful fire disaster management was supported by experimental results, which also confirm the high accuracy of the fire detection technique compared to cutting-edge methods. Over the past few years, social media networks have contributed to managing natural catastrophes. In order to interpret messages better and extract relevant information from social media, text data mining techniques employing standard machine learning approaches have been constructed. These techniques tend to be task-specific and are challenging to generalize to different categorizations. Therefore, considering the crisis management efforts related to hurricanes, Manzhu et al. [33] investigated the ability of a convolutional deep learning model to classify trending catastrophic topics on Twitter. SVM and logistic regression (LR), two conventional machine learning techniques, were evaluated against the CNN model. The outcomes of the experiment demonstrate that the CNN models consistently outperformed the SVM and LR models in terms of accuracy for both types of assessment scenarios. Moreover, the CNN classifier surpassed the classification methods of SVM (63-72%) and LR (44-60%), achieving an accuracy of up to 81% across all datasets. In order to categorize tweets posted during a later event, the evaluation was carried out using a CNN trained on Twitter data from prior events. The results demonstrated that, while the accuracy of SVM and LR dropped dramatically, the CNN maintained a steady performance. This suggests that the CNN model could be trained in advance using Twitter data from previous events to categorize new occurrences for situational awareness. However, the CNN took longer to train than SVM and LR because there were more parameters to consider, which makes it difficult to employ a CNN for online learning.
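A minimal sketch of a text CNN in the spirit of the tweet classifier above follows, assuming Keras: tokens are embedded, 1D convolutions detect n-gram features, and a softmax assigns the topic. The vocabulary size, sequence length, class count, and dummy data are all assumptions.

```python
import numpy as np
import tensorflow as tf

VOCAB, MAX_LEN, N_CLASSES = 5000, 40, 4  # assumed sizes

# Text CNN: embed tokens, detect n-gram features with 1D convolutions,
# pool over the sequence, then classify the tweet topic.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB, 64, input_length=MAX_LEN),
    tf.keras.layers.Conv1D(128, kernel_size=3, activation="relu"),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Dummy tokenized tweets (integer word ids) and topic labels.
X = np.random.randint(1, VOCAB, size=(16, MAX_LEN))
y = np.random.randint(0, N_CLASSES, size=(16,))
model.fit(X, y, epochs=2, verbose=0)
```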
Multimedia big data present several opportunities and research prospects due to the rapid and explosive proliferation of digital data throughout social media and the Internet [182]. Due to their effects on society and government, disaster management programs have become very popular recently. The multiple correspondence analysis (MCA) method has been widely employed in a variety of data mining tasks because it is efficient at capturing correlations between variables and classes [183]. To improve the final classification outcomes and reduce the complexity across various data modalities, MCA has been utilized for multimodal data fusion. For example, Pouyanfar et al. [184] introduced a multimedia big data framework built on cutting-edge deep learning methods, aiming to manage disasters by analyzing and mining content. The proposed fusion model, which is based on the MCA technique and considers the correlations between data modalities and final classes, was applied to combine the results of both models. On the basis of the disaster dataset that was gathered, the suggested multimodal framework was assessed and compared with a number of cutting-edge single-modality and fusion methodologies. In comparison to the baseline approaches, the results showed that both the visual model and the fusion model were effective. On this difficult dataset, an accuracy of 73% was attained by the suggested MCA-based fusion for the final multi-class classification.

### Drug discovery and toxicology

Emerging contaminants (ECs), which pose a serious risk to human health due to their detrimental effect on the endocrine system, include over eighty thousand endocrine-disrupting chemicals (EDCs). To determine the possible impacts of EDCs, numerous in vitro techniques have been developed, such as signaling pathway studies, ligand-binding protein assays, reporter gene experiments, and cell proliferation assays [185]. While in vitro methods are often more economical and quicker than in vivo trials, it is still impractical to analyze thousands of molecules in a reasonable amount of time [186]. This has resulted in exponential growth in the usage of alternative computational methods. To anticipate the toxicological effects of chemicals, the QSAR model is a possible substitute for in vitro techniques. DL-QSAR models were evaluated by Heo et al. [187] to predict the impacts of EDCs on the endocrine system, particularly the estrogen receptor (ER) and sex-hormone binding globulin (SHBG). For the classification and prediction of probable EDCs, DL-QSAR models were created using three distinct DL algorithms, namely SAE, DBN, and DNN. The suggested models' performance was assessed using validation metrics, by comparing the classification prediction models with traditional machine learning classifiers such as LR, SVM, and MLR. Results showed that the DL-QSAR algorithms outperformed traditional machine-learning-based QSAR models.
Accordingly, the DNN-QSAR model adequately described the vast majority of EDCs' qualitative responses to tests. Compared to the LR and SAE-QSAR models (accuracies of 86.49% and 89.91%, respectively), the DNN-QSAR model achieved an accuracy of 90%. As a result, DNN was more effective for evaluating qualitative responses, since it could translate dense chemical identifiers into multidimensional space regions. Additionally, by overcoming multicollinearity and overfitting issues, DNN-QSAR exhibited great performance in the field of computational chemistry. It was therefore determined that DL can effectively utilize the qualitative characteristics of EDCs. Effective methods for assessing potential EDCs are in major demand. Zhang et al. [186] established an evaluation platform by employing three different machine learning models, namely SVM, linear discriminant analysis (LDA), and classification and regression trees (CART), for the purpose of identifying EDCs acting through estrogen receptors. The models were fitted using a total of 440 compounds, and 109 new compounds were added for the screening of EDCs. Their predictive capabilities were evaluated by contrasting the screening results with those anticipated by the classification models. The most accurate model was found to use an SVM classifier, which correctly identified agonists at an accuracy level of 76.6% and antagonists with an accuracy of 75% on the test set, with an average predicted accuracy of 75.2%. The overall predicted accuracy confirmed by the EDC assay screening was 87.57%, illustrating the effectiveness of a structural alert for EDCs with ER agonistic or antagonistic actions. The fundamentals of organic chemistry are retrosynthesis and reaction prediction. Retrosynthetic analysis is a technique in which target molecules are converted into simpler parent compounds [188]. This process has two related tasks: reaction prediction, which predicts the products that a group of reactants will form; and planning the best possible sequence of retrosynthetic steps with the least amount of expense, energy, and waste [189]. However, reactivity conflicts occur when reaction rules ignore the molecular environment, which is why they frequently fail. Segler et al. [190] constructed a neural network model to predict which transformation rules are most likely to be applicable to the molecule involved. The model exhibited an accuracy of 95% in retrosynthesis and 97% in reaction prediction. During retrosynthesis, a single-layer neural network was 78% accurate and had an MRR of 87%, indicating that the model is highly capable of ranking the real reactions. Thus, both the rule-based expert system and logistic regression were significantly outperformed by the neural network models. Drugs are often the cause of detrimental outcomes, such as accidents, injuries, and enormous medical expenses. Clinicians can build suitable treatment plans and make effective judgments with the aid of accurate drug-drug interaction (DDI) predictions. Recently, numerous AI-based methods for DDI prediction have been proposed. However, the majority of currently used techniques give little consideration to possible connections between DDI events and other multimodal data, such as targets and enzymes. A multimodal DNN for DDI event prediction, termed MDNN, was suggested by Lyu et al. [191] to generate multimodal drug representations.
To examine the complementary aspects of the drugs' heterogeneous representations, a multimodal neural layer was also developed. Several multi-class classification evaluation metrics, including accuracy, F1 score, precision, and recall, were used to assess the prediction performance. The proposed MDNN model achieved the most stable performance and outperformed DDIMDL by 0.08% on accuracy, 7.1% on F1 score, 1.5% on precision, and 10% on recall. The MDNN model achieved an accuracy of around 99% according to a comparative study with models from other works, such as DDIMDL [192]. MDNN's superior performance can be ascribed to its exploration of both the cross-modality embedding representations of the heterogeneous data and the drug topological representations in the drug network. This effectively demonstrates how the use of structural information and multimodal features can increase the prediction accuracy of drug-drug interactions, irrespective of whether drugs are established or newly FDA-approved, and provides a solid, trustworthy basis for research on DDI prediction. Drug co-prescription can be safer and more productive if the effects of drug-drug interactions (DDIs) are accurately predicted [193]. There is still potential for improvement in prediction performance, despite the many computational methods that have been presented to anticipate the impact of DDIs [194]. These methods attempt to make it easier to find these interactions in vivo or in vitro. A deep learning model was proposed by Lee et al. [195] to predict the impact of DDIs more precisely, employing autoencoders and a feed-forward deep network to predict the therapeutic effects of DDIs. Experiments utilizing just GSP or TSP did not produce satisfactory classification accuracy. However, integrating TSP and GSP with SSP improved classification accuracy to 97-97.5%. Additionally, the suggested model outperformed standard techniques like SVM (80-83%) and Random Forest (75-91%). The results showed that TSP and GSP improved prediction accuracy in comparison to SSP alone, and that the autoencoder outperformed PCA in reducing the dimensionality of each profile. Hence, the proposed deep learning model provided a more precise prediction of DDIs and their therapeutic effects.

### Partial differential equations (PDEs)

For the purpose of multi-scale modeling, model predictive control, and reducing computational complexity, Raissi et al. [196] suggested a physics-informed neural network that is effective at incorporating any genuine physical principles that govern a particular set of data. The resulting methodologies demonstrated a number of encouraging findings for a broad range of computational science problems, paving the way for endowing deep learning with the power of mathematical physics. One goal was to recover the full spatio-temporal solution of the Schrödinger equation. For the representation of the underlying function, a five-layer DNN with 100 neurons in each layer and nonlinear activations was used. In general, the neural network needed to have enough approximation capacity to handle the function's expected complexity. The findings showed the positions of the two data snapshots utilized for training as well as the precise solution. The evolution of the solution between the two given snapshots differs significantly due to the complicated nonlinear dynamics of the equation. Whether or not the training data was contaminated with noise, the method was still able to reliably identify the unknown parameters despite these variations and the substantial time gap between the two training snapshots.
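The defining ingredient of such physics-informed networks is a loss that penalizes the PDE residual at collocation points alongside the data misfit. The PyTorch sketch below illustrates this for the real-valued viscous Burgers equation (chosen instead of the complex-valued Schrödinger system purely to keep the example short); the network size, dummy data, and collocation points are arbitrary assumptions.

```python
import torch
import torch.nn as nn

# Small fully connected network u(x, t); sizes are arbitrary here.
net = nn.Sequential(nn.Linear(2, 50), nn.Tanh(),
                    nn.Linear(50, 50), nn.Tanh(),
                    nn.Linear(50, 1))

def pde_residual(x, t, nu=0.01):
    """Residual of the viscous Burgers equation: u_t + u*u_x - nu*u_xx."""
    x.requires_grad_(True); t.requires_grad_(True)
    u = net(torch.cat([x, t], dim=1))
    grad = lambda y, v: torch.autograd.grad(y, v, torch.ones_like(y),
                                            create_graph=True)[0]
    u_x, u_t = grad(u, x), grad(u, t)
    u_xx = grad(u_x, x)
    return u_t + u * u_x - nu * u_xx

# Total loss = data misfit at measured points + PDE residual at collocation points.
x_d = torch.rand(64, 1); t_d = torch.rand(64, 1); u_d = torch.zeros(64, 1)  # dummy data
x_c = torch.rand(256, 1); t_c = torch.rand(256, 1)  # collocation points
loss = ((net(torch.cat([x_d, t_d], 1)) - u_d) ** 2).mean() \
       + (pde_residual(x_c, t_c) ** 2).mean()
loss.backward()  # an optimizer step would follow in a real training loop
```

Because the residual term requires no labeled solution values, it is exactly this construction that lets the physics-constrained surrogates discussed next train with little or no labeled data.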
Uncertainty assessment and surrogate modeling are commonly viewed as supervised learning problems for PDE systems, with input-output data pairs used during training [197]. Generating such emulators is problematic in small-data regimes, posing challenges for deep learning algorithms developed for the big-data context. Even though these types of models have demonstrated excellent predictive potential in high dimensions, they cannot exploit the constraints that the PDE model supplies. The governing physical equations were included in the loss functions using a methodology presented by Zhu et al. [198]. Without any labeled training data, the resulting physics-constrained deep learning algorithms were able to predict outcomes on a par with data-driven models while conforming to the constraints of the given problem. To solve PDEs, generate surrogate models, and quantify uncertainty, the latter study used convolutional encoder-decoder neural networks along with a conditional flow-based generative model. To solve the PDEs, the methodology contrasted convolutional decoder networks (CDNs) with fully connected dense networks, using input data from a non-linear Darcy-law problem. After solving the stochastic PDE (SPDE), a comparison between the data-driven surrogate (DDS) and the physics-constrained surrogate (PCS) was made. Last but not least, a probabilistic surrogate with a reverse KL formulation was constructed. Two optimizers, L-BFGS and ADAM, were utilized. For the PCS, 300 training epochs were used, while for the DDS, 200 epochs were used along with 8-30 images. The suggested physics-constrained surrogates consistently outperformed data-driven surrogates in terms of generalization performance.

### Financial fraud detection

Financial fraud is a type of theft whereby a business entity or individual illegally takes money or assets to profit from it. Since the advent of new technologies, there has been a massive rise in financial fraud involving all aspects of the business world. For instance, financial fraud victims in the United States individually lost an average of $1,090, accumulating over $3.2 billion in 2017 [199]. Information security specialists and associations frequently refer to the practitioners' selection procedure as fraud analysis, assessment, or detection. Some methods for detecting anomalies include decision trees, logistic regression, SVM, and others. However, these methods are constrained, as they are supervised algorithms that rely on labels to determine transaction validity [200]. Several studies [201, 202, 203] revealed that developing measures based on fraud estimation has been a frustrating procedure that is expensive and futile when tackling complex non-linear data, suffers from inefficient memory utilization and lengthy computation, and generates abundant false alarms. Financial fraud detection is still a strenuous task because of the constant changes in fraudulent behavior, the absence of a mechanism to track information on fraudulent transactions, certain limitations of the existing detection techniques, and highly skewed datasets [204]. Therefore, optimization of previous procedures and novel approaches are necessary to increase fraud detection rates [205, 206].
As a result, deep learning, a subset of modern artificial intelligence, is one of the promising techniques that has achieved recognition in recent years. DL also facilitates computational models with multiple processing layers that learn data representations at different levels of abstraction. Credit card fraud is one of the most pervasive forms of financial fraud, in which a fraudster gains unauthorized access to a legitimate card without the cardholder's consent. The simultaneous expansion of e-commerce and the internet has resulted in a significant increase in credit card use, which has led to an unusual increase in credit card theft in recent years. For instance, a report by the Boston Consulting Group showed that North American financial institutions lost $3 billion in 2017 due to credit card fraud. Despite industry attempts to combat economic fraud, the Nilson Report revealed that global financial losses attributed to credit card fraud exceeded $24.71 billion in 2016 and $27.69 billion in 2017 [207]. This vast number of fraudulent occurrences may erode consumer confidence, destabilize economies, and elevate people's living costs. As a result, several studies have applied deep learning to detect credit card fraud. Heryadi and Warnars (2018) built several deep learning paradigms for credit card fraud identification and inspected the implications of an imbalance between fraud and non-fraud data. They investigated the impacts using CNN, hybrid CNN-LSTM, and Long Short-Term Memory (LSTM) models. Based on the Area Under the ROC Curve (AUC) as a performance metric, the CNN secured the maximum value, followed by the stacked LSTM (SLSTM) and the CNN-LSTM. However, because of the imbalanced dataset, training accuracy could not serve as the primary criterion for identifying the best model. Most studies have recently used LSTM networks to approach credit card fraud detection. The hidden units in an LSTM have feedback connections linked across discrete time steps, in contrast to conventional feedforward neural networks. This enables the learning of long-term sequence dependencies and the prediction of transaction labels premised on previous transaction sequences. Benchaji et al. (2021) devised a fraud detection model using a sequence classifier that utilizes LSTM networks to track individual cardholders' behavior. LSTMs were designed to overcome the vanishing and exploding gradient issues encountered during conventional RNN training. All the changes made by the forget gate, input gate, and output gate are stored in the LSTM unit's memory cells: the three gates control the incoming and outgoing data, and the cell can store values for an indefinite amount of time. The proposed model relies on the Keras deep learning framework, which enables fast experimentation with deep neural networks. Over 80 million credit card transactions labeled as fraudulent or legitimate were used to assess the efficacy of the deep learning process reported by Roy et al. [209]. This study assessed the performance of various DL algorithms with respect to their parameters, class imbalance, scalability, and sensitivity analysis. The researchers used LSTM, Gated Recurrent Unit (GRU), RNN, and ANN models in a distributed cloud computing environment. The findings confirm that the LSTM technique achieved the best performance, and reveal that the execution time of each of the four topologies grew with increasing network size. However, the study did not specify the conditions under which the models cease to improve, the models' sensitivity, or the limit on the network size.
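The gating mechanism described above can be written out directly. The numpy sketch below performs one LSTM step; the weights are random placeholders rather than a trained fraud model, and the "transaction features" are dummy inputs.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM time step: the gates decide what the memory cell keeps."""
    z = W @ np.concatenate([x, h_prev]) + b        # joint pre-activation
    f, i, o, g = np.split(z, 4)
    f, i, o = sigmoid(f), sigmoid(i), sigmoid(o)   # forget / input / output gates
    c = f * c_prev + i * np.tanh(g)                # update the cell state
    h = o * np.tanh(c)                             # exposed hidden state
    return h, c

n_in, n_hid = 8, 16                                # e.g. 8 transaction features
rng = np.random.default_rng(0)
W = rng.normal(size=(4 * n_hid, n_in + n_hid))
b = np.zeros(4 * n_hid)
h = c = np.zeros(n_hid)
for x in rng.normal(size=(5, n_in)):               # a sequence of 5 transactions
    h, c = lstm_step(x, h, c, W, b)
print(h.shape)  # (16,) -> summary of the cardholder's recent behavior
```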
Alghofaili et al. [210] introduced an LSTM-based fraud detection technique that strives to enhance current detection tactics while simultaneously increasing the detection accuracy for large amounts of complex data. The system was tested on a unique credit card fraud dataset. The outputs were compared to an auto-encoder model and machine learning technologies in a pre-existing deep learning model designed to detect suspicious financial activities and alert the appropriate authorities, allowing them to take appropriate action. Although some machine learning techniques have demonstrated promising results, they do not detect new patterns or deal with large amounts of data to improve accuracy after reaching a saturation point. The LSTM worked brilliantly in the trials, reaching 99.95% accuracy in less than a minute. The unique applicability of LSTM to prediction is ensured by prior experience and the correlation between prediction outputs and historical inputs. Many-state memory cells and gates are incorporated into the framework. Due to its vital role in ensuring that data is transmitted unmodified, the cell's state is at the core of the information-transfer process (Shen et al., 2019). Several studies have used other deep learning frameworks productively for a similar cause. To illustrate, Pandey [211] introduced the H2O deep learning framework for credit card fraud detection. The design of the model enables multiple algorithms to be aggregated as modules, and their outputs can be integrated to improve the final output accuracy. Concurrently, the efficiency of the algorithms rises as the dataset grows while using this framework; thus, adding more algorithms with equivalent formats and datasets can strengthen this model. Pumsirirat and Yan [212] proposed an autoencoder-based DL model for credit card fraud identification. The model concentrates on fraud cases that cannot be recognized through prior knowledge or supervised learning. Concurrently, the authors suggested a novel restricted Boltzmann machine (RBM), which contains visible and hidden layers, together with an auto-encoder model that reconstructs ordinary transactions in order to detect anomalies. The output accurately reflected the root mean squared error, the area under the curve, and the mean squared error, with a 96.03% accuracy. Another type of financial fraud is tax fraud, the malicious act of falsifying a tax return document to reduce someone's tax liability. Low fiscal revenues weaken governmental investment [213]. For the property acquisition tax, Lee [214] implemented a reliable sampling technique for tax-paying citizens. The data from 2,228 returns were fed into an autoencoder, a well-known unsupervised deep learning technique, to calculate the potential tax shortfall for each return based on an estimate of the reconstruction errors. The sorted reconstruction scores are consistent with the practical context, implying that the reconstruction errors can be exploited to identify suspicious taxpayers for auditing in a cost-efficient strategy. Utilizing the recommended strategy in real-world tax administration can reinforce the self-assessment acquisition tax system. Lopez et al. [215] researched tax fraud detection in the context of individual income tax filings. The application of neural networks enabled the segmentation of taxpayers and the assessment of whether a particular taxpayer is attempting to avoid taxes. The neural network output categorizes whether a taxpayer is deceitful or not, and also reveals a taxpayer's proclivity for unethical activities. In other words, it classifies individuals based on their potential for committing fraud and also calculates each taxpayer's propensity for tax fraud. The chosen model excelled over other tax fraud detection algorithms, with an accuracy of 84.3%. It would be interesting to see this concept applied to additional taxes in the future.
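The reconstruction-error idea behind the autoencoder approaches above can be sketched in a few lines, assuming Keras: the autoencoder is trained on presumed-normal records only, and records it reconstructs poorly are flagged as suspicious. The feature count, bottleneck size, and random data are assumptions.

```python
import numpy as np
import tensorflow as tf

n_features = 20
normal = np.random.rand(500, n_features).astype("float32")  # presumed-normal records

# Autoencoder: compress to a small bottleneck, then reconstruct the input.
auto = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(n_features,)),
    tf.keras.layers.Dense(n_features, activation="sigmoid"),  # reconstruction
])
auto.compile(optimizer="adam", loss="mse")
auto.fit(normal, normal, epochs=10, verbose=0)

def anomaly_scores(x):
    """Per-record reconstruction error; a high error suggests an anomaly."""
    recon = auto.predict(x, verbose=0)
    return ((x - recon) ** 2).mean(axis=1)

new = np.random.rand(10, n_features).astype("float32")
scores = anomaly_scores(new)
print(np.argsort(scores)[::-1][:3])  # indices of the 3 most suspicious records
```

Sorting records by this score is exactly how the cited tax-audit approach prioritizes candidates for inspection in a cost-efficient way.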
### Computer vision

To a broader extent, computer vision is an interdisciplinary branch of computer science that examines how computers can automatically understand digital images and videos. The predominant processes include autonomously extracting, analyzing, and comprehending relevant information about a specific picture or a series of images [216]. Computer vision supplies the algorithmic and theoretical foundations for automatic visual understanding. The growth of computer vision relies on computer technology systems, focusing on image quality enhancement or image classification [217]. Researchers often use the terms interchangeably because of the overlap with image processing. However, the core aim of computer vision is to develop models and to extract data and information from images, whereas image processing deals with applying computational alterations to images, such as sharpening and contrast adjustment, among other things [218]. Deep learning algorithms have recently tackled major computer vision tasks like face recognition, object detection, human pose estimation, action recognition, activity recognition, and fruit defect detection [219].

#### Object detection

Object detection has attracted massive attention as a vital function in computer vision, in which CNNs have made substantial headway [220]. Object detection is, in essence, the first step in several computer vision applications, including logo detection, face recognition, pedestrian identification, and video analysis. One-stage detectors (YOLO and its variants) and two-stage detectors (region-based CNNs, R-CNNs) are the two primary families of object detection deep learning frameworks [221]. One-stage detectors make dense predictions of objects at every position of the feature maps without a cascaded region classification step. Two-stage detectors, on the other hand, have a proposal generator that produces a handful of candidate regions before feature extraction. When dealing with complicated problems, deep neural architectures are superior to shallow ones [222]. The CNN is a feed-forward neural network built on the weight-sharing principle. CNNs are not precise on small datasets, but produce high accuracy on vast image datasets [223]. However, to execute computer vision tasks, CNNs require large labeled datasets. Convolution itself is an operation that fuses two functions, showing how one modifies the other; a short sketch of the weight-sharing idea is given below. The CNN layered structure for object detection is shown in Fig. 3.

Figure 3: Layered CNN structure for object detection (reprinted with the permission of Elsevier from [224])

For object detection, deep R-CNNs have been widely deployed. Ouyang et al. [225] suggested a deformable deep CNN for generic object detection. A deformation-constrained pooling layer in the proposed deep design models the deformation of object parts with geometric constraint and penalty. The researchers introduced diverse models by changing the training process and net topologies, and by altering some vital components in the detection pipeline. This strategy substantially improved the effectiveness of model averaging. The proposed technique elevated the mean average precision (mAP) of R-CNN from 31% to 50.3%, exceeding GoogLeNet, the winner of ILSVRC2014, by 6.1%. The extensive experimental evaluation also provides a detailed component analysis.
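To make weight sharing concrete, the toy numpy sketch below slides a single small kernel over an image (in the cross-correlation form that most DL libraries actually compute), so the same few weights are reused at every spatial location; the kernel values and image are arbitrary.

```python
import numpy as np

def conv2d(image, kernel):
    """Naive 'valid' 2D convolution: one small kernel slides over the whole
    image, so the same weights are reused at every location (weight sharing)."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.random.rand(8, 8)
edge_kernel = np.array([[1.0, 0.0, -1.0]] * 3)  # crude vertical-edge detector
print(conv2d(image, edge_kernel).shape)  # (6, 6): one feature map
```

A detection backbone stacks many such learned kernels, which is why a CNN needs far fewer parameters than a fully connected network of the same depth.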
Doulamis and Doulamis [226] explored a deep model's representation capacities in a semi-supervised scenario. The method overcame deep learning's crucial drawbacks by utilizing unsupervised data to construct the network, and then fine-tuning it using a gradient descent optimization process. Unsupervised learning favors more abstract representations of the input data, thus improving the model's stability, accuracy, and reliability. In addition, an adaptive approach allows the model to be modified dynamically to the digital photogrammetry conditions.

#### Face recognition

Given the enormous diversity of face images present in the real world, face recognition is among the most complex biometric attributes to use in open circumstances [227]. Face recognition systems frequently comprise several components, such as image acquisition, face detection, face alignment, face representation, and face matching. Researchers have focused on specialized approaches for each face variation, owing to the complexity of studying the feature in unconstrained environments. Many new face recognition architectures have been presented in tandem with deep learning advances, and some even come close to human performance. Extensive research with acceptable outcomes has been conducted on the issues of illumination, pose, and expression [228]. However, when dealing with noisy photos, the precision of most methods deteriorates dramatically. Humans can often have a hard time distinguishing an identity from a seriously noisy face. Zadeh et al. [229] designed a Convolutional Experts Constrained Local Model (CE-CLM) for facial landmark detection by adding a set of appearance prototypes for diverse poses and expressions. A pivotal part of the CE-CLM is the Convolutional Experts Network (CEN), a novel local detector that blends neural architectures with a mixture of experts in an end-to-end framework. The findings suggest that initializing the CEN network weights and training on the Menpo data alone does not yield competent outcomes. However, applying plain datasets under the curriculum learning paradigm [230] and then switching to the Menpo training data gave satisfactory results. Deng et al. [231] suggested UV-GAN, a painstakingly built system that unifies global and local adversarial deep CNNs to develop an identity-preserving facial UV completion framework for pose-invariant face recognition. Combining pose augmentation during training and pose discrepancy reduction during testing, the protocol attained a state-of-the-art verification accuracy of 94.05%. To adversarially train the identity-distilled attributes, Y. Liu et al. [232] built an autoencoder system with minimal supervision via facial identities. The model proved capable of generating identity-distilled features and extracting concealed identity-dispelled features that can preserve complementary knowledge, such as background clutter and intra-personal variances. The primary limitation of deep learning approaches is that they must be trained on extremely big datasets with enough variety to generalize to unknown samples. Recently, numerous large-scale face datasets, including images of faces in the wild, have been made available to train CNN models [233, 234].
The studies demonstrate that neural networks can be trained as classifiers and can also reduce dimensionality.

#### Action and activity recognition

Human activity recognition (HAR) plays a vital part in today's society because of its potential to assimilate extensive, advanced information from raw sensor data about human actions and activities [36]. To identify sensor-acquired data, early studies primarily applied naive Bayes, decision trees, SVM, and other classic machine learning methodologies [235, 236]. Researchers have recently transitioned from traditional handcrafting to deep learning methods for HAR, especially with the advent and efficient application of deep learning algorithms. Handcrafted model representations cannot tackle complicated cases due to their limitations. However, given the success of DL models in speech recognition, NLP, image classification, and other domains, transferring them to the HAR field is a new research direction for pattern identification [237, 238]. Ordonez et al. [239] reported a classifier that can classify 5 movements and 27 hand gestures using a CNN and an LSTM. The methodology surpassed deep non-recurrent networks by an average of 4% and outperformed some previous findings by 9%. The results suggest that this model can be used with homogeneous sensor modalities and can fuse multi-modal sensors for better execution. To reliably classify subjects or activities, Lin et al. [238] introduced a novel iterative CNN technique using autocorrelation pre-processing rather than the conventional micro-Doppler image pre-processing. An iterative DL framework was used in the proposed method to define and extract features automatically. In addition to outperforming feature-based methods employing micro-Doppler images, the proposed iterative CNNs, followed by random forests, also outperform classification methods employing various types of supervised classifiers. Even though the preceding models can distinguish human activities in general, the entire network structure is quite complex. Furthermore, the large number of parameters incurs a massive computational cost in these models, which makes them challenging to employ in situations where high real-time performance is required. Agarwal and Alam [240] implemented a lightweight deep learning paradigm for HAR on a Raspberry Pi 3. The overall accuracy of this model on the WISDM dataset was 95.78%, using a simplistic RNN incorporating the LSTM technique. Despite its excellent accuracy and simplicity, the conceptual framework was evaluated on a single dataset with only six activities, which does not guarantee that it is generalizable. To address these drawbacks, Xia et al. [241] introduced a unique deep LSTM-CNN for HAR that can extract activity features and categorize them automatically using only a few parameters. The network was tested on three of the most often utilized publicly available datasets. The analysis showed that the network not only has fewer parameters with great precision but also has a fast convergence speed with sufficient generalization aptitude. In many computer vision applications, deep learning has outperformed older methodologies due to the capacity of DL algorithms to learn characteristics from raw data, removing the need for hand-constructed feature detectors and descriptors.
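A minimal sketch of a CNN-LSTM pipeline for wearable sensor windows, in the spirit of the HAR models above, follows (assuming Keras; the window length, channel count, activity count, and random data are all assumptions, not any cited model's configuration).

```python
import numpy as np
import tensorflow as tf

WINDOW, CHANNELS, N_ACTIVITIES = 128, 3, 6  # e.g. 3-axis accelerometer windows

# Conv layers extract local motion features; the LSTM models their
# temporal order; a softmax assigns the activity label.
model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(32, 5, activation="relu",
                           input_shape=(WINDOW, CHANNELS)),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(64, 5, activation="relu"),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(N_ACTIVITIES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

X = np.random.randn(32, WINDOW, CHANNELS).astype("float32")  # dummy windows
y = np.random.randint(0, N_ACTIVITIES, size=(32,))
model.fit(X, y, epochs=2, verbose=0)
```

Keeping the convolutional front-end small is what makes such models light enough for edge devices like the Raspberry Pi mentioned above.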
#### Human pose estimation (HPE)

The goal of the long-standing HPE problem is to determine how well sensor inputs can be used to estimate human posture. An HPE system attempts to determine a human's body position from still images or video. It is an important research field involving various applications, including activity recognition, action detection, human tracking, movies, virtual reality, sports motion analysis, video surveillance, human-computer interaction, medical assistance, and self-driving analysis. HPE is challenging because of the wide variety of body types, self-occluding stances, and complex environments that can arise from the great degrees of freedom of interconnected joints and limbs [242]. Toshev and Szegedy [243] initially trained an AlexNet-like DNN to determine joint points from complete pictures without utilizing any models or part detectors. A multi-stage cascade of refining regressors improved the performance by operating on images cropped according to the previous stage's predictions. Similarly, Pfister et al. [244] used a string of concatenated frames with an AlexNet-like network to forecast the human pose in videos. Directly regressing numerical joint positions without context is not robust; however, both approaches can benefit from heatmap supervision. Yang et al. [245] developed a Pyramid Residual Module (PRM) to increase DCNN robustness across scales, replacing the Hourglass network's residual module. This method exhibited considerably enhanced performance compared to the earlier state-of-the-art procedures. Papandreou et al. [246] developed a box-free, multi-task ResNet-based network for pose estimation and instance classification. In real time, the ResNet-based network can forecast joint heatmaps of all key points of all people, as well as their relative displacements. The most confident detections are then grouped by adopting a decoding strategy based on a tree-structured kinematic graph. Furthermore, the network achieved a greater average precision, with an above 5% absolute improvement over the former top-performing approach on a similar dataset. Other studies [247, 248, 249, 250] mainly focused on more constrained directions, such as 3D HPE, RGB-D-based action identification, body-parts-based HPE, and monocular model-based HPE.

### Ecology

The endless data stream offers ecologists a new challenge: finding or developing the analytical models required to extract information from the massive amounts of video feeds and streaming photos [251]. On the other hand, obtaining usable footage in maritime zones that attains acceptable computing performance presents a unique set of obstacles compared to terrestrial situations. Variable water clarity, obstruction due to schooling fish, complicated background formations, and diminished light with increased depth are some environmental complications that can interfere with clear footage in aquatic ecosystems [252]. Though certain elements may affect image and video quality, deep learning techniques have demonstrated effectiveness in a variety of marine applications. Deep learning algorithms are currently being piloted in marine environments to automate the classification of certain species. Not surprisingly, the most common use of deep learning is identifying species from recordings of sounds, images, or videos. These investigations already cover a wide variety of organisms, from bacteria and protozoa to plants, insects, and vertebrates, both living and extinct, and from microscopic to planetary scales [253, 254, 255, 256, 257, 258, 259, 260, 261].
Deep learning's capacity to comprehend features from data is what makes it so powerful. Unsupervised algorithms have no specified output and are frequently used as a systematic tool to find patterns in data, reduce dimensionality, and identify groupings [251]. Ditria et al. [263] compared the speed and precision of deep learning approaches against human counterparts in estimating fish abundance from underwater video clips and photos, to test their adequacy and relevance. They designed three prototypes using Mask R-CNN, an object detection framework, to locate the target species, luderick (_Girella tricuspidata_). On single-image test datasets, the machine topped marine specialists by 7.1% and citizen scientists by 13.4%, and outperformed them on video datasets by 1.5% and 7.8%, respectively. These results confirm that deep learning is a better tool than humans for evaluating abundance, producing stable results and being portable across survey locations. Another vast potential of deep learning is to detect disease symptoms, similar to existing applications in the medical sector. For example, CNNs can identify tree defoliation and crop illnesses. Rather than accounting for each feature independently through explicit modeling, Kalin et al. [264] modeled a joint distribution over features, applying 5-fold cross-validation in the preliminary investigation to eliminate any train-test split bias. Defoliation frequencies were distributed evenly among all 5 folds. On one of the datasets, this technique performed only 0.9% worse than a group of human experts. This protocol can spot malnutrition, scars, and the presence of apparent diseases in wild plants and animals. Although species-specific models might forecast tree stress better, the study did not have enough training data for each species to build a robust algorithm. Consequently, complex situations, including abundantly defoliated trees or a green canopy with trivial defoliation, can produce errors. Deep learning can also automate attribute recognition from herbaria and natural photos, including leaf position, vein structure, bloom color, and leaf shape. Linking properties detected by deep learning to databases can pave the way for new datasets to investigate plant diversity [265]. Using deep-learning-based algorithms to automate the recognition and extraction of characteristics is a novel field of study. Immense efforts are being made to digitize herbaria to streamline access and conserve sensitive specimens. In the United States, the iDigBio portal, a federally supported primary collector of museum specimen records, has over 1.8 million georeferenced and photographed vascular plant specimens [266]. Carranza-Rojas et al. [254] integrated deep learning algorithms into enormous herbarium picture sets as a first contribution in this regard, analyzing over 260,000 images of herbarium sheets comprising over 1,204 distinct species. The method achieved a top-one species recognition rate of 80% and a top-five rate of 90%. However, due to the vast differences in visual appearance (e.g. considerable color discrepancy and the alteration of the 3D object), the researchers discovered that it is currently impractical to transfer specialized knowledge from a herbarium to field recognition. Humans can abstract widely between topics and make accurate judgments with minimal information. However, deep learning algorithms have limited abstraction and reasoning capabilities.
Only a few studies have looked into combining different plant organs or perspectives to improve accuracy [267, 268]. Owing to the high effort required to collect and label the datasets, completely automated species identification remains a long way off. The excellence of an automatic detection model is determined not only by the quantity but also by the caliber of the given training data [269]. On the contrary, most of the reviewed research indicates a deficit in the available training data.

### Fluid dynamics

Fluid dynamics is an area of applied science dealing with the flow of liquids and gases. Three conservation laws define fluid dynamics: mass conservation, linear momentum conservation, and energy conservation. Computational fluid dynamics (CFD) is a set of numerical methods for providing approximate solutions to fluid dynamics and thermal problems. CFD is not a science in itself but rather a means of applying numerical analysis principles to heat and mass transfer [270]. It provides an extensive analysis of airflow motion, contaminant transport, and heat transfer in enclosed places. CFD helps with wind flow and contamination dispersal analysis around buildings in metropolitan surroundings, but still confronts several difficulties concerning computing cost and accuracy. The recent success of deep neural networks (DNNs) has been facilitated by an abundance of computational power that takes advantage of the multi-layer architecture. Not long ago, DNNs gained prominence in turbulence modeling or, more broadly, in high-dimensional, complex dynamical systems [271]. Fonda et al. [272] scrutinized the heat transport characteristics of turbulent Rayleigh-Benard convection in horizontally extended systems by using deep-learning frameworks. The researchers applied a trained deep CNN to measure the heat transfer's fraction and time variations. The slowly evolving turbulent superstructures received special attention because they are larger than the height of the convection layer. The strategy trains a deep CNN with a U-shaped configuration, comprising a contraction branch and an expansion branch, reducing the complicated 3D superstructure in the midplane layer to a temporal planar network. As a result, the data are compressed by more than five orders of magnitude at the maximum Rayleigh number. This shows deep learning's utility for parameterizing convection in global models of stellar and atmospheric convection. The specialized U-net architecture requires a fairly small training dataset and proved to be the most effective for ridge extraction, especially for noisier data at higher Rayleigh numbers. Nonetheless, the study did not provide information on the network's efficiency at lower Rayleigh numbers. Another study, conducted by Daw et al. [273], applied extensive fluid flow simulations in climatology and turbulence to forecast turbulent flow by identifying exceptionally nonlinear phenomena from spatiotemporal velocity fields. Specifically, they introduced adaptable spectral filters along with a specialized U-net. The concept exhibits substantial reductions in prediction error compared to state-of-the-art baselines. Most crucially, this procedure accurately predicts physical quantities that satisfy favorable physical attributes, such as the conservation of mass, and also simulates the turbulent kinetic energy field and spectra for precise predictions.
Another novel approach in this discipline is deep-learning-based model identification of reduced-order fluid dynamics, a computational, simulation-based method for building mathematical models of dynamic physical systems. Reduced-order modeling (ROM) is one of the effective system identification strategies for reducing the excessive dimensionality of complex discrete dynamical systems. Air pollution modeling, nonlinear large-scale systems, optimal control, ocean modeling, shape optimization, neutronics problems, sensor placement optimization, porous media problems, aerospace, multiscale fracture, and shallow water flows are some examples of its usage [274, 275, 276, 277, 278, 279, 280, 281, 282]. Wang et al. [280] developed a ROM by combining a deep learning algorithm (LSTM) with proper orthogonal decomposition (POD) methodologies. The results revealed that the deep learning ROM (DLROM) can capture sophisticated fluid dynamics at less than 1/1000 of the CPU cost. While the DLROM offers better predictive potential than earlier ROMs [283], deep learning frequently demands massive training data that greatly surpass the number of the network's parameters. When the probability distributions of new input data deviate from those of the training data, the resulting models are typically acceptable for interpolation but may not be adequate for extrapolation. Moreover, the effects of extending this method to varying parametric problems, such as the real-time response to natural disasters, are unknown. Table 2 provides a summary of the reviewed studies on the uses of deep learning in various sectors. \begin{table} \begin{tabular}{p{108.4pt}|p{85.4pt}|p{85.4pt}|p{85.4pt}|p{85.4pt}|p{85.4pt}} \hline Applications & Algorithms/ Models & Objective & Outcome & Remarks & Ref. \\ \hline Transportation Prediction & Attention based ConvLSTM, Bi-LSTM & Extracts fundamental properties of traffic flow using hybrid and multiple layer architectures & The combination of attention ConvLSTM and Bi-LSTM performed better than the existing models. & They considered a relatively small and simple road network. Therefore, CNN and Bi-LSTM methods might be unable to completely utilize traffic flow’s complex and dynamic properties. & [120] \\ \cline{2-6} & Attention-based CNN-LSTM & Traffic flow forecasting & The accuracy of the model was found to be 99\% & Weather and other factors such as accidents and road closures can be factored into the model to enhance it. & [121] \\ \cline{2-6} & Deep stacked autoencoder to present features in a lower dimension & Predict travel times & Showed better performance than applying a Deep NN to the initial training data. & & [122] \\ \hline Agriculture & Transfer learning to train Deep Convolution NN & Plant leaf stress detection & Achieved 93\% accuracy & For certain diseases (CBSD, BLS, GMD), using the leaflet rather than the full leaf increased diagnostic accuracy. However, using whole leaf photos enhanced accuracies for others (CMD and RMD). & [124] \\ \cline{2-6} & Multi-layer CNN & Distinguish between healthy and “stressed mango” leaves & Achieved 97.13\% accuracy & Employing a new activation function in place of Softmax can improve CNN’s performance & [284] \\ \cline{2-6} & CNN & Plant disease identification & Mildly diseased images were difficult to identify but accuracies were higher for other cases.
& In all situations, a few hundred photos appeared to be sufficient to produce credible findings, but this quantity must be approached with caution. & [125] \\ \cline{2-6} & UNet-CNN & Classify and identify cucumber leaves affected by powdery mildew & The CNN model segmented the powdery mildew on cucumber leaf pictures with a mean pixel accuracy of 96.08\%. & Lack of an appropriate amount and diversity of datasets. & [126] \\ \hline \end{tabular} \end{table} Table 2: Overview of the surveyed studies conducted on the applications of deep learning \begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline & CNN & Fish species classification & Achieved accuracies of over 90\% & General deep structures in the experiment should be fine-tuned to increase the efficacy of identifying vital information in the feature space of interest, in order to reduce the requirement for vast volumes of annotated data. \\ \hline Natural language processing & Collaborative adversarial network & Paraphrase identification & The model outperforms the baseline MaLSTM model & Shows excellent potential of the CAN for paraphrase identification \\ \cline{2-6} & RNN (LSTM) and several other models & Paraphrase identification & RNN (LSTM) outperforms the others & RNN shows significant performance in NLP & [131] \\ \cline{2-6} & Weighted Transformer & Machine Translation & Outperformed state-of-the-art methods & The model ignores the modeling of relations among different modules. & [285] \\ \cline{2-6} & Single Neural Machine Translation & Machine Translation & The method works reliably in a Google-scale production setting & The performance of zero-shot translation is frequently insufficient to be practical, as the basic pivoting strategy quickly outperforms it. & [132] \\ \cline{2-6} & Deep-attention model & Machine Translation & In comparison to the best methods currently available, deep attention performs exceptionally well & The technique could be implemented on other tasks like summarization and it could adapt to more complex attention models. & [133] \\ \cline{2-6} & BiLSTM with CRF & Sentiment analysis (extracting aspect opinion target expression) & Outperformed the existing research & Unable to represent several aspects of sentences and did not investigate the explicit location contexts of words & [137] \\ \cline{2-6} & LSTM & Sentiment Analysis & Outperformed the existing research & Unable to represent several aspects of sentences and did not investigate the explicit location contexts of words & [138] \\ \cline{2-6} & Domain attention model & Multi-domain sentiment categorization & Evaluated on multiple datasets and showed better performance than the state-of-the-art techniques & Can pull out the most distinctive features from the hidden layers, reducing the number of labeled samples required. & [139] \\ \hline \end{tabular} \begin{tabular}{|p{85.4pt}|p{85.4pt}|p{85.4pt}|p{85.4pt}|p{85.4pt}|} \hline Weakly-Supervised Multimodal Deep Learning (WS-MDL) & Prediction of multimodal attitudes in tweets & Better performance than supervised and other weakly supervised models. & The order of the emotion icon levels might be further investigated, and this may be included as a constraint to the proposed WS-MDL technique.
\\ \hline CNN, LSTM & Sentiment classification & Achieved accuracy of 87\% & In future work, bag-of-words and word embedding techniques can be combined \\ \hline ConvLSTM (merged CNN and LSTM) & Sentiment analysis & Performed better than existing works with fewer parameters & Local information loss may be reduced, and long-term dependencies can be captured using the suggested design \\ \hline Attention-based LSTM & Question answering & Better performance than baseline approaches & Modeling more than one aspect simultaneously with the attention mechanism would be an interesting addition to the experiment \\ \hline Hierarchical TF-IDF document retrieval and a BERT document reader & Question answering & Significant improvement over the baseline techniques for the Arabic language & They introduced a dataset (ARCD). However, ARCD’s questions were created with certain paragraphs in mind; without that context, they might seem ambiguous. \\ \hline CNN, LSTM & Visual Question answering & Will be helpful to study the reduction of the network model size. & Determining the rank is an NP-hard problem in the low-rank decomposition, and their method is still constrained in this area by inserting hyper-parameters \\ \hline Tree-LSTM & Visual Question answering & Better performance than existing Hie and Deeper LSTM & The representational ability of the network could be improved in the future. \\ \hline CNN, LSTM & Visual Question answering & Outperformed the existing studies by 5\% & The approach might be used on unstructured information sources, such as online text corpora, in addition to structured knowledge bases \\ \hline Reinforcement Learning technique and an encoder-extractor network architecture’s RNN sequence model & Summarization & Performed better than baseline Bi-LSTM & The model may suffer from large variance since they use an approximation in the Reinforcement Learning method training objective function \\ \hline \end{tabular} \begin{tabular}{|p{56.9pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline Autoencoder, Ensemble technique & Summarization & Ensemble learning models showed significant improvements in Rouge recall and good F-measure outcomes. & A few more unsupervised NN techniques, such as Restricted Boltzmann Machines and stacked auto-encoders, could be included in the experiment to improve the robustness of the suggested technique. & [150] \\ \cline{2-5} & RNN-LSTM, RBM & Summarization & First experiment where RBM and RNN-LSTM have been paired for predicting sentiment polarity. & Doesn’t get the difference between active and passive sentences. & \\ \hline Biomedicine & Cascaded-CNN (C-CNN) & Prediction of alpha carbon atoms throughout the core structure of proteins & C-CNN outperformed (88.9\%) the Phoenix-based structure construction method (66.8\%) & Adding protein structural details for training the networks can enhance the model & [157] \\ \cline{2-5} & deepMiRGene (RNN+LSTM) & Prediction of structural characteristics of precursor miRNAs & deepMiRGene performed better, with accuracy between 88\% and 91\% & The most important contribution was the elimination of rigorous manual feature development.
& [159] \\ \cline{2-5} & DNN & Prediction of gene expression interference & Outperformed LR in 99.97\% of the key genes & Demonstrated higher accuracy than the linear regression model & [161] \\ \cline{2-5} & FCNNs, CRF-RNN & Cerebral malignancy identification & The integration of FCNNs and CRF-RNN acquired an accuracy of 88\% & 2D CNNs lack the essential capabilities to fully utilize 3D information from MR & [165] \\ \cline{2-5} & CNN & Identification of kernels for glioma segmentation in images derived from MRI & 88\%, 83\%, 77\% accuracy acquired & The CNN model was demonstrated to be legitimate on the Cerebral Tumor Classification database & [167] \\ \hline Bioinformatics & DNN, RNN & Identification of AMPs & Best performance, with an accuracy of 91.01\% & Eliminated the dependency on domain experts for feature creation by employing a deep network model & [168] \\ \hline \end{tabular} \begin{tabular}{|p{56.9pt}|p{85.4pt}|p{85.4pt}|p{85.4pt}|p{85.4pt}|p{85.4pt}|} \hline & DeepCpG & Single-cell DNA methylation prediction & Obtained accuracy of 83\% & The strength of DeepCpG is extracting predictive sequence features from large (\textless{}1000 bp) DNA sequences. \\ \cline{2-6} & & Identification of enhancer-promoter interactions & Accuracy ranging from 96.5\% to 98.5\% on different datasets & The model can be utilized in transfer learning & [171] \\ \cline{2-6} & & Prediction of Multiple Enzyme Functions & 97.6\% accuracy with a standard deviation of 0.27 & mlDEEPre can be effortlessly merged into DEEPre to handle functional predictions. & [174] \\ \hline Disaster Management Systems & CDNN algorithm & Flood catastrophe identification & Higher accuracy (93.2\%) than the current approaches of DNN and ANN & The model can be improved with IoT-based devices using cutting-edge algorithms at each stage of flood identification & [177] \\ \cline{2-6} & CNN-based architecture & Early fire detection & High accuracy, from 89\% to 99\%, in addition to providing an automated response & & [180] \\ \cline{2-6} & SVM, LR, CNN classifier & CNN-based deep learning algorithm to classify the trending catastrophic topics from social media & SVM (63\%-72\%), LR (44\%-60\%), CNN (81\%) & CNN took longer to learn than SVM and LR as there were more parameters to consider, making it difficult to employ for web-based learning & [33] \\ \cline{2-6} & RNN, CNN, MCA-based model & Disaster information management & Minimal accuracy of 73\% & The MCA-based model can include more textual and metadata to optimize the final categorization results & [184] \\ \hline Drug discovery and toxicology & DNN-QSAR model & Prediction of impacts of EDCs on the endocrine system, particularly (SHBG) and (ER). & The accuracy of the DNN-QSAR was 90\% & DNN was more effective for evaluating qualitative responses & [187] \\ \cline{2-6} & SVM, LDA, CART & Identifying EDCs through the ER & Overall projected accuracy confirmed by the screening of the EDC assay was 87.57\% & The estrogenic activity of 109 compounds was predicted using the best model, SVM. & [186] \\ \hline \end{tabular} \begin{tabular}{|p{56.9pt}|p{85.4pt}|p{85.4pt}|p{85.4pt}|p{85.4pt}|p{85.4pt}|} \hline & CNN & Reaction prediction and Retrosynthesis & The model achieved a 95\% accuracy & The model demonstrated the extreme capability of ranking the real reactions precisely & [190] \\ \cline{2-5} & Multimodal Deep Learning (DDIMDL) & To build a connection between DDI and other targets and enzymes.
& DDIMDL outperformed the baselines by achieving around a 99\% accuracy rate & Demonstrated the impact of structural information and multimodal features on prediction accuracy & [191] \\ \cline{2-5} & Autoencoder (SSP, TSP, GSP) & Predict the therapeutic effects of drug-drug interactions & Classification accuracy between 97\% and 97.5\% & Therapeutic implications of the predicted DDIs should be verified & [195] \\ \hline Partial Differential Equations & 5-layer deep neural network & Building multi-physics/multi-scale modeling & For noise-free training data, the errors in predicting unknown parameters were 0.023\% and 0.006\%, respectively. & [196] \\ \cline{2-5} & Deep residual network (ResNet) & Solving SBVPs with high-dimensional uncertainty & DNNs predicted the mean with less than 1.35\% relative L\({}_{2}\) error & Trained DNNs were found to transfer effectively to out-of-distribution inputs & [286] \\ \cline{2-5} & CNN-based encoder-decoder network & Solving SPDE and uncertainty assessment tasks & 300 epochs and 200 epochs for PCS and DDS respectively, along with 8 to 12 images & The surrogate model PCS outperformed the data-driven DDS and demonstrated the capability of incorporating prediction uncertainty & [198] \\ \hline Financial fraud detection & LSTM & To catch consumer behavior for detecting credit card fraud & Enabled fast experimentation with good accuracy & Can be compared with other variants of RNN such as Bidirectional Many-to-Many and a plain neural network & [207] \\ \cline{2-5} & LSTM, RNN, ANN & Comparison of deep learning techniques in financial fraud detection & The LSTM technique achieved the best performance & Did not specify the network size and sensitivity & [209] \\ \cline{2-5} & LSTM & Credit card fraud detection & Detected suspicious financial activities and alerted the appropriate authorities with 99.95\% accuracy & It can study even complex data structures and adjust to changed fraud trends dynamically & [204] \\ \hline \end{tabular} \begin{tabular}{|p{56.9pt}|p{85.4pt}|p{85.4pt}|p{85.4pt}|p{85.4pt}|} \hline H\({}_{2}\)O framework & Credit card fraud detection & Enabled multiple algorithms to aggregate as modules, and their outputs can be integrated to improve the final output accuracy & Adding more algorithms with equivalent formats and datasets can elevate this model \\ \hline Autoencoder and RBM & Altering ordinary transactions to detect anomalies & Worked with 96.03\% accuracy & Should use real credit card fraud occurrences with massive quantities of data & [212] \\ \hline Multilayer Perceptron neural network (MLP) & Tax fraud detection & Classified individuals based on convicting potentiality and calculated the propensity for tax fraud per taxpayer with 84.3\% accuracy & Can facilitate the tax department with effective decision-making regarding strategic planning & [215] \\ \hline Computer Vision & CNN & Generic object detection & The proposed method increased the RCNN’s mean average precision from 31 to 50.3. It also exceeds GoogleNet, the winner of the ILSVRC2014, by 6.1\% & The addition of a def-pooling layer provides the model with a richer set of options for handling deformations and incorporating deep architectures & [225] \\ \hline Semi-supervised learning & Overcoming the crucial drawbacks of the object detection model & Translated the input data into more compact and abstract representations, which enhanced the model’s convergence, stability, and performance & Allows modifying the model dynamically to the digital photogrammetry conditions.
& [226] \\ \hline Global adversarial deep CNNs & Face recognition model & The protocol attained 94.05\% state-of-the-art verification accuracy & Can generate identity-distilled features and also extract concealed identity-dispelled features, but lacks generalization & [231] \\ \hline \end{tabular} \begin{tabular}{|p{56.9pt}|p{85.4pt}|p{85.4pt}|p{85.4pt}|p{85.4pt}|p{85.4pt}|} \hline & CNN and LSTM & Human activity recognition & Outperformed previous models by up to 9\% & Learned from different modalities and can combine them for enhanced performance & [239] \\ \hline Ecology & Mask R-CNN & To compare deep learning algorithms for determining fish abundance with human counterparts & The deep learning algorithm outperformed marine specialists by 7.1\% and citizen scientists by 13.4\% & Evaluated abundance with stable results and is more portable across survey locations than humans & [263] \\ \cline{2-6} & CNN & Tree defoliation identification & Identified tree defoliation 0.9\% less accurately than a group of human experts & Can produce errors in complex situations due to data deficiency & [264] \\ \cline{2-6} & DNN and Transfer learning & To automate characteristic recognition and extraction & The top-one species identification accuracy was 80\%, while the top-five accuracy was 90\% & Transferring knowledge from herbarium sheets to field recognition is not yet practical owing to vast differences in visual appearance & [254] \\ \cline{2-6} & CNN & Directly identifying ant genera from the profile, head, and dorsal views of ant photos & Gained over 80\% accuracy in top-1 classification and over 90\% in top-3 classification while reducing total classification error & Contributes novel understanding of ensembles for multi-view structured data and transfer learning processes for probing commonalities in multi-view CNNs & [268] \\ \hline Fluid dynamics & Physics-guided neural networks (PGNN) & To forecast turbulent flow & Showed significant improvements in error prediction over state-of-the-art benchmarks & Precisely simulates the turbulent kinetic energy field and spectrum that are vital for accurate turbulent flow prediction & [273] \\ \cline{2-6} & DLROM and POD & Describing the conservation of mass and momentum in fluids & Comparison with earlier ROMs clarified that the DLROM has better potential in prediction & The consequences of applying this approach to transitioning parametric constraints, such as the real-time response to natural disasters, are uncertain. & [280] \\ \cline{2-6} & ROMs and POD & To conduct a comparative analysis using three ROMs applied to a biological model & Comparison of POD-DEIM and Gappy POD solutions revealed similar levels of accuracy & The model did not converge when the number of MPE points was low & \\ \cline{2-6} & POD and ROM & Using an efficient adjoint technique to optimally collect targeted observations & When compared to the high-fidelity model, the size of the problem is decreased by a factor of 200 & It ensures that the sensors are positioned at the optimum distance from one another & \\ \hline \end{tabular} Based on our prior discussion, Table 3 represents the relationship among the mentioned fields, i.e., how deep learning techniques for vision, audio, and NLP help other concrete applications, such as transportation, agriculture, bioinformatics, and ecology.
\begin{table} \begin{tabular}{|p{85.4pt}|p{113.8pt}|p{113.8pt}|} \hline Application Area & DL Techniques & Benefits and Applications \\ \hline Transportation & Object Detection & - Enhancing autonomous driving by identifying pedestrians, vehicles, and road signs \\ & & - Real-time surveillance for safer navigation \\ & & - Traffic flow analysis and optimization \\ \cline{2-3} & Image Segmentation & - Identifying road lanes and obstacles for self-driving cars \\ & & - Accurate mapping and localization using satellite imagery \\ \hline Agriculture & Crop Monitoring & - Identifying crop diseases, pests, and nutrient deficiencies through image analysis \\ & & - Precision agriculture and yield prediction \\ \cline{2-3} & Object Recognition & - Differentiating between various plant species and weeds for targeted intervention \\ \hline Disaster Management & Image Analysis & - Aerial imagery interpretation to assess disaster extent and damage \\ & & - Identifying trapped survivors in disaster areas \\ \cline{2-3} & Sentiment Analysis & - Analyzing social media data for real-time disaster response and understanding public sentiment \\ \hline Drug Discovery & Molecular Structure & - Predicting molecular interactions and drug-binding sites for drug design \\ & & - Accelerating drug discovery process \\ \cline{2-3} & Image Analysis & - Analyzing medical images to identify potential drug candidates and understand their effects \\ \hline Toxicology & Toxicity Prediction & - Predicting toxicity of chemicals and compounds, aiding risk assessment \\ & & - Reducing animal testing through computational models \\ \hline Bioinformatics & Sequence Analysis & - Analyzing genetic data for disease prediction and personalized medicine \\ & & - Identifying genetic markers for various conditions \\ \cline{2-3} & Protein Structure & - Predicting protein structures to understand their functions and interactions \\ & & - Advancing drug discovery and understanding diseases \\ \hline \end{tabular} \end{table} Table 3: How deep learning techniques for vision, audio, and NLP support other concrete applications ## 4 Challenges and benefits of deep learning applications Deep learning is an incredibly important computational tool, due mostly to its accuracy in data prediction and analysis. Since DL does not require previously processed data, raw inputs can be passed through branched layers that separately analyze the data and represent it to the next layer for further processing [287]. Deep learning is useful when dealing with enormous datasets since it can extract information from them directly, eliminating the need for manual feature engineering and the associated computational expenditure. It is also notable for its expressive power and optimization capabilities, which make it so effective at processing raw data [288]. As the volume and complexity of available data continue to rise at an exponential rate, deep learning is gaining recognition as a crucial tool for more efficient data collection and analysis [289]. Several challenges of deep learning applications are common across various fields. Because of its dependence on large volumes of training data, deep learning demands rigorous data collection for the proper analysis and processing of such large quantities of data. Hence, the medical, research, healthcare, and environmental domains are challenging for large-scale data compilation, reducing the efficacy of deep learning [290].
Particularly of concern are data quality and structure, as data from health, research, and environmental studies are highly heterogeneous, full of ambiguity and noise, and often incomplete, which frequently presents problems for the model. Another issue with deep learning is that it typically assumes inputs to be static vectors and cannot readily incorporate time variables as inputs. Owing to the complex relationship of signals across time, samples are irregular, which influences the model's performance, particularly when dealing with health data [291, 292, 293, 294, 295, 296]. Domain complexity, such as diverse data and insufficient information, and the 'black box' problem, the complexity of algorithms whose inner workings are not understood, offer challenges for the system and inhibit the growth of data comprehension [287, 297, 298]. Some studies have found that using multimodal data improves the accuracy of the results, while others report that the heterogeneous nature of the data makes it difficult for the program to implement the necessary mechanisms [299, 300, 301, 302]. It is also challenging for deep learning algorithms to overcome the problem of label omission in many datasets [38]. ## 5 Conclusions To effectively forecast and analyze data at all levels, deep learning uses cutting-edge methods, combining ideas from machine learning, deep neural networks, and artificial intelligence to construct models that utilize data representations at each level. This review investigated numerous deep learning application pathways as well as the framework and modeling for each unique deep learning application across a variety of fields. Additionally, some of the most significant and commonly experienced obstacles and technical issues associated with using deep learning were discussed. Research domains might specialize in addressing specific difficulties and issues by analyzing the common obstacles encountered by deep learning applications across numerous fields. Data volume, quality, modeling, domain complexity, and representation are some of the issues that deep learning applications have in common across disciplines. When processing massive amounts of data, it is best to employ a gated architecture, such as LSTM or GRU units, for extracting persistent information. In a neural network designed for multimodal learning, some of the neurons are used for all tasks, while others are trained to perform specific tasks. The irregularity problem in temporal sequence similarity measurement is addressed by a suggested approach that uses dynamic time warping, as sketched below. For labeling challenges, data can be implicitly labeled by applying acquired knowledge to fresh datasets for the same task or by utilizing an autoencoder in a variant architecture, such that transfer learning can be accomplished. Methods like knowledge distillation, which condenses what a complex model has learned into a simpler model that is easier to execute, and attention mechanisms, which employ an understanding of historical data to forecast results, are also being utilized to improve interpretability. Recent advances in quick solutions to current difficulties point to a promising future for deep learning techniques in multi-field applications. Future research should focus on better understanding the issues related to the construction of adequate datasets for deep learning models, including but not limited to data quality, volume, domain complexity, and privacy.
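To make the dynamic time warping remedy concrete, the sketch below implements the classic O(nm) DTW recursion for comparing two 1D sequences of different lengths; it is the textbook algorithm, not an implementation from any of the cited studies.

```python
# Sketch: textbook dynamic time warping (DTW) distance between two 1D
# sequences of different lengths, one remedy for irregularly sampled
# temporal data. Classic O(n*m) recursion; not a cited implementation.
import numpy as np

def dtw_distance(a, b):
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of match, insertion, or deletion
            cost[i, j] = d + min(cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1])
    return cost[n, m]

print(dtw_distance([0, 1, 2, 3], [0, 0, 1, 2, 2, 3]))  # small: same shape, warped
```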
2310.10275
A ML-LLM pairing for better code comment classification
The "Information Retrieval in Software Engineering (IRSE)" at FIRE 2023 shared task introduces code comment classification, a challenging task that pairs a code snippet with a comment that should be evaluated as either useful or not useful to the understanding of the relevant code. We answer the code comment classification shared task challenge by providing a two-fold evaluation: from an algorithmic perspective, we compare the performance of classical machine learning systems and complement our evaluations from a data-driven perspective by generating additional data with the help of large language model (LLM) prompting to measure the potential increase in performance. Our best model, which took second place in the shared task, is a Neural Network with a Macro-F1 score of 88.401% on the provided seed data and a 1.5% overall increase in performance on the data generated by the LLM.
Hanna Abi Akl
2023-10-13T12:43:13Z
http://arxiv.org/abs/2310.10275v1
# A ML-LLM pairing for better code comment classification ###### Abstract The "Information Retrieval in Software Engineering (IRSE)"1 at FIRE 2023 shared task introduces code comment classification, a challenging task that pairs a code snippet with a comment that should be evaluated as either useful or not useful to the understanding of the relevant code. We answer the code comment classification shared task challenge by providing a two-fold evaluation: from an algorithmic perspective, we compare the performance of classical machine learning systems and complement our evaluations from a data-driven perspective by generating additional data with the help of large language model (LLM) prompting to measure the potential increase in performance. Our best model, which took second place in the shared task, is a Neural Network with a Macro-F1 score of 88.401% on the provided seed data and a 1.5% overall increase in performance on the data generated by the LLM. Natural Language Processing, Machine Learning, Information Retrieval, Large Language Models, Code Comprehension, Comment Quality Footnote 1: [https://sites.google.com/view/irse2023/home](https://sites.google.com/view/irse2023/home) _Forum for Information Retrieval Evaluation, December 15-18, 2023, India_ ## 1 Introduction In software development, code and documentation go hand-in-hand. Writing code is crucial to maintaining existing code bases, developing new features and fixing bugs. Documentation helps developers make sense of the logic behind written code and provides a steady set of guidelines to iterate over it [1]. Code commenting is a form of documentation whereby comments written in natural language are inserted in the code [1]. The advantage of this method is that it helps clarify parts of the code without affecting performance since comments are ignored by compilers [1]. It also provides an easy way to reflect updates on code changes without having to modify the entire documentation [1]. From the practice of writing code comments comes the challenge of identifying useful comments [2]. Writing comments is not always an exact science, and some comments can be outdated or ambiguous [2]. This can be problematic for developers who rely on these comments to understand and alter the code. There is then a real need for code comment checking. This need has framed the task of collecting code comments from real projects in a code-comment database to aid in the task of classifying useful versus not useful comments [3]. Code comment classification is still a relatively new task that explores the possibility of accurately discriminating between comments that bring added value to the corresponding code and comments that are not pertinent with respect to the surrounding code [4]. Recent research has aimed to answer this challenge by compiling a semantic code-comment base by scraping and collecting code and surrounding comments from real projects in C [3]. Researchers have also explored applying machine and deep learning techniques to solve this binary classification problem by considering useful comments (i.e., informative of the surrounding code) as a class and non-useful comments (i.e., redundant, uninformative or ambiguous) as another [5, 6].
On the other hand, the rise of large language models (LLM) [7] and their ability to pose as a jack-of-all-trades by solving a wide range of machine learning and deep learning problems, coupled with their wealth of training data, make them an interesting entry point for the code comment classification task [8]. Based on the Transformers model [9], they are able to create robust embeddings from text, which helps them tackle problems based on natural language [10]. Another recent breakthrough in LLMs is in generative artificial intelligence, where users combine pre-trained models with different prompting techniques to generate output data (e.g., text) [11]. This prompting ability is at the heart of prompt engineering, a method that can redirect an LLM into focusing its generation on a specific need. This need can be in the form of answering specific questions, solving certain tasks (e.g., a classification problem) or even producing data in a pre-defined format [12]. The latter use case plays a pivotal role in data augmentation, whereby users can couple the power of LLMs with a pre-existing dataset to enrich it and overcome data scarcity [13, 14]. The IRSE at FIRE 2023 shared task proposes to measure the effects of leveraging LLMs in the context of solving the code comment classification problem [15]. Specifically, challengers are asked to use the generative capabilities of LLMs to enrich an existing dataset of code comments and compare the performance of classical machine learning models on the classification task before and after data augmentation [15]. In this paper, we show how prompting LLMs effectively can increase model performance on the code comment classification problem. The rest of the paper is organized as follows. In section 2, we discuss some of the related work. In section 3, we present the experimental setup. In section 4, we discuss the results. Finally, we present our conclusions in section 5. ## 2 Related Work This section discusses some of the proposed strategies in the literature to classify code comments by quality. ### Baseline models for code comment classification Paul [16] leveraged classical machine learning models to solve the code comment classification task on a C language dataset of code and comments. They extracted text-level features like comment length and comment position within the source code and found comparable performance between a logistic regression and a support vector machine binary classifier [16]. Das and Chatterjee [17] studied the performance of deep learning models by proposing a fusion transformer system based on BERT and CodeBERT. Their system combined text-based features with dense embeddings and outperformed all other baseline models on the code comment classification task [17]. ### Embedding techniques for code comment classification Basu et al. [18] compared both classical machine learning models and transformer-based models with different embedding techniques and found that the bag-of-words representation can outperform transformer-based embeddings on the code comment classification problem. Their findings could not be generalized and were limited by the size of the dataset they used for their runs [18]. Majumdar et al. [19] examined the effects of using embeddings to tackle the code comment pair classification challenge by developing and training a low-dimensional contextualized word embeddings model based on masked language models.
The resulting model captured semantic code concepts better and boosted their binary classification systems when compared to vanilla word embeddings models [19]. Other areas of research suggest an inclination toward specializing software engineering terms and building a domain vocabulary to produce more representative word models. Mishra and Sharma [20] proposed a methodology for crawling and scraping Wikipedia as a base for collecting software engineering terms. Gonzalez-Perez and Henderson-Sellers [21] laid the groundwork for the construction of such an ontology in terms of completeness, clarity, generalizability and extensibility. Simmons and Dillon [22] proposed an open-source architecture designed to act as both an ontology and a knowledge base meta-model for software development semantics. ## 3 Experiments This section describes the framework of our experiments in terms of data, models and training process. ### Dataset description The dataset considered for this shared task is divided into two parts: seed data provided by the task organizers and an LLM-generated dataset to complement it. We introduce both datasets in the following subsections. #### 3.1.1 Seed Data The data provided by the task organizers consists of 11452 pairs of code and comments written in C, labeled as either Useful or Not Useful. The data contains 7063 Useful rows and 4389 Not Useful rows. The comments and surrounding code snippets are extracted from GitHub. For every comment, the label (Useful or Not Useful) was generated by a team of 14 annotators. Each comment was annotated by 2 annotators. Cohen's kappa was used for inter-annotator agreement, with a score of 0.734. The annotation process was supervised by weekly meetings and peer review sessions. Sample data is shown in Figure 1. #### 3.1.2 Data Augmentation Participants are required to generate an additional dataset to complement the provided seed data. The generated dataset consists of code and comment pairs with labels generated using an LLM of choice. In our experiments, we chose ChatGPT as our LLM and prompted it to generate data that aligns with the criteria of the given dataset, i.e., the generated code snippets should be written in the C programming language and the corresponding comments should be a mixture of useful and not useful. Additionally, we asked ChatGPT 1 to label each code-comment pair with the corresponding class (Useful or Not Useful). The ablation study performed on the LLM-generated dataset can be found in the Appendix. Using this method, we were able to generate 421 new code-comment pairs, with 411 labeled as Useful and 10 labeled as Not Useful by ChatGPT. Figure 2 shows an example output from ChatGPT. Footnote 1: [https://chat.openai.com/share/6538a7f4-0a19-4e54-b5d8-d246dac3781a](https://chat.openai.com/share/6538a7f4-0a19-4e54-b5d8-d246dac3781a) ### System description This section introduces the methodology used in our experimental runs. It describes the machine learning models as well as the features employed in our experiments. #### 3.2.1 Model Choice Since the challenge explicitly limits participants to classical machine learning models (including neural networks but not extending to recurrent neural networks or more modern architectures like large language models), we based our experiments on 3 systems: Random Forest (RF), Voting Classifier (VC) and Neural Network (NN).
For the Random Forest model, we configured the following parameters: number of estimators = 100, criterion = gini, minimum samples split = 2, minimum samples leaf = 1, maximum features = sqrt, and bootstrap = True. For the Neural Network model, epsilon = 0.00000001. All models are implemented using the scikit-learn 2 package in Python. Figure 1: Example of Seed Data. Figure 2: Example output from ChatGPT, labeling generated code-comment pairs as Useful or Not Useful (e.g., "char letter = getchar();" with the comment "Read a character from the standard input." labeled Useful). Footnote 2: [https://scikit-learn.org/stable/index.h](https://scikit-learn.org/stable/index.h) #### 3.2.2 Features For the feature engineering phase, we concatenate code-comment pairs and embed the resulting input strings. We use the flax-sentence-embeddings/st-codesearch-distilroberta-base 3 model trained with the Hugging Face sentence-transformers 4 library on the CodeSearchNet 5 dataset compiled from code and documentation strings in the Go, Java, Javascript, PHP, Python and Ruby programming languages [23]. The result is one 768-dimensional embedding vector for every code-comment input string. These embeddings constitute our final feature set and are fed to the different models. Footnote 3: [https://huggingface.co/flax-sentence-embeddings/st-codesearch-distilroberta-base](https://huggingface.co/flax-sentence-embeddings/st-codesearch-distilroberta-base) Footnote 4: [https://huggingface.co/sentence-transformers](https://huggingface.co/sentence-transformers) Footnote 5: [https://huggingface.co/datasets/code_search_net](https://huggingface.co/datasets/code_search_net) #### 3.2.3 Experimental Setup We divide our experiment into two phases: a seed data run and a seed + LLM data run. The setup is identical for both phases and the only difference is the input data used. In the seed data run, only the seed data provided by the task organizers is used to assess model performance. In the seed + LLM data run, the data generated by ChatGPT is added to the seed data and the resulting augmented dataset is used as the input for our models. In both phases, analyzing the data at our disposal shows a class imbalance where the Useful class is over-represented at 61.6% in the seed data and 97.6% in the LLM-generated data. We use the SMOTE [24] technique to balance the datasets and restore class parity by synthetically generating rows of Not Useful data to achieve a 50-50 percent class distribution. Next, we split our data using the scikit-learn Repeated Stratified K-Fold cross validator 6 with 10 folds and 3 allowed repetitions. We use the Accuracy, Precision, Recall and F1 scores as metrics for evaluating our models. All experiments are performed on a Dell G15 Special Edition 5521 hardware with 14 CPU Cores, 32 GB RAM and NVIDIA GeForce RTX 3070 Ti GPU.
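A condensed sketch of this pipeline is shown below. It reproduces the Random Forest hyper-parameters and the library stack quoted above; the function name, the macro-F1 aggregation, and the input format are illustrative choices rather than the exact experimental script.

```python
# Condensed sketch of the pipeline described above: embed concatenated
# code-comment pairs, rebalance with SMOTE, then cross-validate a classifier.
# Only the Random Forest hyper-parameters are taken from the text.
from sentence_transformers import SentenceTransformer
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

def evaluate(pairs, labels):
    """pairs: concatenated code-comment strings; labels: 1 = Useful, 0 = Not Useful."""
    embedder = SentenceTransformer(
        "flax-sentence-embeddings/st-codesearch-distilroberta-base")
    X = embedder.encode(pairs)                 # one 768-d vector per pair
    X, y = SMOTE().fit_resample(X, labels)     # restore 50-50 class parity
    clf = RandomForestClassifier(n_estimators=100, criterion="gini",
                                 min_samples_split=2, min_samples_leaf=1,
                                 max_features="sqrt", bootstrap=True)
    cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3)
    return cross_val_score(clf, X, y, scoring="f1_macro", cv=cv).mean()
```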
Footnote 6: [https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.RepeatedStratifiedKFold.html](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.RepeatedStratifiedKFold.html) ## 4 Results Table 1 demonstrates the performance of each model on the seed data. For each scoring metric, the best score is marked in bold for both the Useful (U) and Not Useful (NU) classes. On the majority of the scoring metrics, the Neural Network outclasses the Random Forest and the Voting Classifier models. The Voting Classifier outperforms the Neural Network on the Recall of the Useful class and the Precision of the Not Useful class, which shows that applying different non-linear models together can compensate for a single model's blind spots and classify more instances of Useful and Not Useful data correctly. The results of Table 2 are consistent with these findings. The Neural Network model is the overall best model since it outperforms the other systems in 5 scoring metrics out of 8 over both classes, while the Voting Classifier retains the best scores in F1 (U), Recall (U) and Precision (NU). We also note that the scores are consistently high for both classes, which is in large part helped by the SMOTE data augmentation technique. Having balanced both classes in our experiments allows us to have a better baseline when measuring the impact of the additional data generated by ChatGPT. By comparing the scores of Tables 1 and 2, we see that fixing the models and augmenting the data yields a 1.5% increase in scores overall. In particular, this solidifies the claim that the data generated by the LLM aligns with the data expected for this challenge and can further aid in solving it. ## 5 Conclusion In this shared task, we evaluate the impact of generating LLM data to improve model performance. We explore the effects of this data generation by augmenting the existing code comment dataset and measuring the increase in the model classification scores. In the future, we plan to incorporate other data generation mechanisms such as ontology or knowledge graph integration into our LLM prompting technique to further our study of the impact of a refined data augmentation pipeline on classification performance.
2302.07518
Slip length for a viscous flow over spiky surfaces
For a model of a 3D coating composed of a bi-periodic system of parallel riblets with gaps we analytically derive an approximate formula for the effective slip length (an offset from the flat surface at which the flow velocity would extrapolate to zero) as a function of the geometry of the system (riblet period, riblet height, and relative gap size). This formula is valid for an arbitrary fraction of gaps (i.e from narrow riblets to narrow gaps) and agrees with the known analytical results for the 2D periodic coating of riblets without gaps. We validate our analytical results with the numerical solution of the equations of the viscous (creeping) flow over the riblets with gaps.
Alexei T. Skvortsov, Denis S. Grebenkov, Leon Chan, Andrew Ooi
2023-02-15T08:28:47Z
http://arxiv.org/abs/2302.07518v2
# Slip length for a viscous flow over spiky surfaces ###### Abstract For a model of a 3D coating composed of a bi-periodic system of parallel riblets with gaps we analytically derive an approximate formula for the effective slip length (an offset from the flat surface at which the flow velocity would extrapolate to zero) as a function of the geometry of the system (riblet period, riblet height, and relative gap size). This formula is valid for an arbitrary fraction of gaps (i.e., from narrow riblets to narrow gaps) and agrees with the known analytical results for the 2D periodic coating of riblets without gaps. ## I Introduction The viscous flow over surfaces covered by sharp elements (riblets, grooves, spikes, or pillars) has been the key component in many problems of microfluidics (lab-on-a-chip [1; 2]), geophysics (canopy flows [3]), and biomechanics (the so-called shark skin phenomenon [4; 5; 6; 7]). A spiky coating has a remarkable (and, perhaps, counterintuitive) property of drag (shear stress) reduction of the viscous flow compared to a flat surface, although in the former case the contact area between the fluid and solid is much higher [8; 9; 10]. This property has made spiky coatings an attractive candidate for many practical applications (e.g., drag reduction of ships and drones [11], improvement of propeller performance [12], micro-pump design [2; 13]) and stimulated many experimental and theoretical studies [14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25]. This is an active area of research with extensive literature, see [8; 9; 26; 27; 28; 29], and references therein. The effect of a coating of complex morphology on viscous flow has been conventionally quantified by a parameter called the effective slip length [2; 4; 18; 23; 24; 25; 14]. This parameter can be introduced with the following arguments. Near a flat surface the velocity of the flow is directed along the surface; it is zero at the surface (no-slip boundary condition) and can be modeled by a linear profile \[v(y)=Jy, \tag{1}\] where \(y\) is the distance from the surface, the constant \(J\) is related to the friction drag at the surface \(\tau=\mu dv/dy=\mu J\), and \(\mu\) is the fluid viscosity. With a coating of complex morphology, the flow just above and inside the coating can be very complex. Nevertheless, far from the surface the linear relation \(v(y)\) is restored but with an additional parameter \(\lambda\) \[v(y)=J(y-\lambda), \tag{2}\] where \(\lambda\) (which can have either sign) has the dimension of length and is called the effective slip length [2; 4; 18; 23; 24; 25; 14]. This parameter is an aggregated measure of the effect of coating morphology on the hydrodynamic properties of the surface. The condition \(y=\lambda\) corresponds to the fictitious coordinate at which the flow velocity would extrapolate to zero (relative to the surface \(y=0\)). Likewise, the parameter \(\lambda\) can be introduced by postulating a radiation boundary condition at the surface [16]: \[v+\lambda\frac{\partial v}{\partial y}=0. \tag{3}\] The effective slip length may also incorporate the effect of changing boundary conditions at some parts of the surface (from the no-slip condition \(v=0\) to the no-stress condition \(\tau=0\)) due to air bubbles trapped between the spikes, see Fig. 1. Evidently, the patches of no-stress areas of the surface (e.g., due to trapped air) may lead to a significant reduction of viscous drag, and that is often referred to as the hydrophobic property of the coating.
Alternatively, roughness can block access of the flow to some parts of the surface so that the intrinsic hydrophobicity of the surface (as its physicochemical property) can be significantly amplified [20]. This necessitates investigation of the interplay of the effects of hydrophobicity and roughness. To incorporate the effect of hydrophobicity of the spikes due to the no-stress and no-slip patches of the spike surface, the spikes can be modeled with radiation boundary conditions (see below). All these cases are depicted in Fig. 1. The main focus of many theoretical studies of spiky coatings was the analytical derivation of the value of the parameter \(\lambda\) as a function of coating morphology. For the steady unidirectional flow of viscous fluid without pressure gradient, in which the velocity vector is directed along the \(z\) axis and depends only on the other two coordinates (i.e., \(v\equiv v(x,y)\)), the equations of motion reduce to the 2D Laplace equation [4; 30] \[\frac{\partial^{2}v}{\partial x^{2}}+\frac{\partial^{2}v}{\partial y^{2}}=0. \tag{4}\] A wealth of analytical results for 2D coatings has been derived by employing the property of conformal invariance of Eq. (4) [4; 5; 28]. For instance, for the 2D comb-like boundary or riblets shown in Fig. 1, the results are as follows. For the no-slip boundary condition at the spikes and on the base, Fig. 1a, [4; 31] \[\lambda=\frac{W}{\pi}\ln\left[\cosh(\pi H/W)\right]\quad\text{(no-slip on the base)}, \tag{5}\] and for the no-slip boundary condition at the spikes and the no-stress ('hydrophobic') boundary condition on the base, Fig. 1b, [32] \[\lambda=\frac{W}{\pi}\ln\left[\sinh(\pi H/W)\right]\quad\text{(no-stress on the base)}, \tag{6}\] where \(W\) is the period of the comb-like structure (distance between spikes) and \(H\) is the height of the spikes. As \(H\to\infty\), both equations (5) and (6) behave similarly as \[\lambda\approx H-\frac{\ln 2}{\pi}W, \tag{7}\] so \(\lambda\) tends to \(H\) minus a universal offset proportional to the period of the structure. For riblets of other shapes (e.g., semicircular, triangular, rectangular cross-sections) the results are similar and can be found in [4; 5]. For a periodic configuration of alternating (no-slip and no-stress) stripes on a flat surface oriented perpendicular to the flow velocity [23; 24; 25], one has \[\lambda=\frac{W}{2\pi}\ln\left[1/\sin\left(\frac{\pi}{2}\sigma\right)\right], \tag{8}\] Figure 1: Examples of 2D riblet coating: \(a\) - coating with a periodic system of riblets; \(b\) - a system of riblets with trapped air pockets (in grey); \(c\) - coating with a periodic system of partially hydrophobic riblets modeled with partially absorbing (radiation) boundary condition (depicted in grey). The flow velocity \(\mathbf{v}\) is normal to the image plane. where \(\sigma\) is the surface fraction of the no-stress stripes and \(W\) is the period of the stripes. For 3D morphological structures of the coating (e.g., spikes, pillars, or hemispheres) a conformal transformation cannot be applied, and there are only a limited number of papers in which the parameter \(\lambda\) has been derived analytically, see [34; 35; 36; 37; 38] and references therein. In particular, the authors of Refs. [34; 35] considered the model (that we refer to as the disk model) in which a viscous flow exists only above a 'nanoforest' composed of a lattice of identical cylindrical pillars.
In this model, the effect of the coating on the viscous flow is reduced to the friction forces acting at the top disks of each pillar (the flow satisfies the no-slip boundary condition on the top of the circular pillar, \(y=0\)) whilst the flow inside the 'nanoforest' (e.g., between the pillars) is disregarded (at \(y=0\) the no-stress boundary condition was assumed everywhere except the top disks). It was found that in the limit of a small areal density of the 'nanoforest' pillars (or surface fraction of the top disks), \(\sigma\ll 1\), the effective slip length obeys the scaling law [34] \[\lambda=H-\left(\frac{A}{\sqrt{\sigma}}-B\right)W,\quad\sigma\to 0, \tag{9}\] where \(W\) is the period of the pillar lattice (for simplicity assumed to be the square lattice), \(A=(3/16)\sqrt{\pi}\) and \(B=(3/2\pi)\ln(1+\sqrt{2})\)[35]. It was also found that this scaling law is geometry specific, viz., for the elongated (quasi-one-dimensional) cross-section of the structural elements (wall-like in the current context) it changes from the power law (9) to the logarithmic form [35]: \[\lambda=H-\frac{1}{3\pi}\ln\left(\frac{4}{\pi\sigma}\right)W,\quad\sigma\to 0. \tag{10}\] Formulas (9) and (10) are valid for \(\sigma\ll 1\)[34], [35]. The limit \(\sigma\to 0\) (no disks) corresponds to \(\lambda\to-\infty\), or no friction (drag) on the surface, so that the approximate relation (2) does not make sense anymore. The latter condition follows from Eq. (3) when the first term, which is proportional to \(1/\lambda\), becomes insignificant. The second terms in Eqs. (9) and (10) become zero at \(\sigma=(A/B)^{2}\approx 0.18\) and \(\sigma=4/\pi\approx 1.27\), respectively, instead of \(\sigma=1\) (uniform surface with the no-slip boundary condition), which is due to the inapplicability of Eqs. (9) and (10) at high \(\sigma\). The aim of the present paper is to derive a self-consistent expression for the effective slip length of the spiky coating as a function of the height of its 3D structural element, similar to Eqs. (5), (6) for 2D. ## II Riblets with periodic gaps To appreciate the effect of the pillar height, we need to incorporate the flow between pillars. We begin with the 2D solutions (5, 6), which we modify to capture the 3D effects. More specifically, we assume that the riblets have identical periodic gaps, with the same relative gap size for all riblets, as shown in Fig. 2. In some regards this model is complementary to the disk model discussed above because all drag is generated by the viscous flow acting on the sides of the solid parts of the riblets whilst the contribution of the flux from the top surface area of the riblets is neglected. Let \(x,y\) denote the horizontal and vertical axis, respectively, and the \(z\) axis is directed along the riblets (and flow velocity) as shown in Fig. 2. The boundary condition at the solid part of the riblets is \(v=0\) (no-slip) and at the gaps, the boundary condition is \(\partial_{x}v=0\) (no tangential stress). Applying the aforementioned arguments, the alternating boundary conditions at the riblet surface (\(y<H,x=\pm W/2\)) imply that this surface (grey spikes in Fig. 1c) can be treated within the boundary homogenisation framework [39; 40; 41; 42], or, more specifically, it becomes partly hydrophobic and can be modeled with a radiation boundary condition: \[\frac{\partial v}{\partial x}+\frac{v}{\lambda_{s}}=0,\quad x=\pm W/2,\quad 0 <y<H, \tag{11}\] where \(\lambda_{s}\) is given by Eq.
\[\lambda_{s}=\frac{L}{\pi}\ln\left[1/\!\sin\left(\frac{\pi}{2}\sigma_{s}\right)\right], \tag{12}\] where \(L=s+g\) is the period of the solid-gap structure of an individual riblet, \(s\) is the width of the solid part of the riblet (per period), \(g=L-s\) is the width of the gap, and \(\sigma_{s}=s/L\). For the case \(g=0\) (no gaps) we return to the solution given by Eqs. (5, 6). With the homogenised boundary condition, the original 3D problem reduces to a 2D problem that can be tackled analytically (although, due to the radiation boundary condition at the riblets, conformal mapping is not helpful). Assume that \(v=v_{0}=const\) at some \(y=\delta\gg H\) (Couette flow); for a given \(\lambda_{s}\), we can then derive the offset \(\lambda\) in the limit \(\delta\to\infty\). The parameter \(\lambda\), being a function only of the geometry of the coating, is independent of \(\delta,v_{0}\), and \(\mu\). Formally, due to the periodicity of the system, we need to find a solution of Eq. (4) for \(0\leq y\leq\delta\) and \(|x|<W/2\) with the following boundary conditions (see Fig. 1c): \[\frac{\partial v}{\partial x}+\frac{v}{\lambda_{s}}=0,\quad x=\pm W/2,\quad 0<y<H, \tag{13}\] \[\frac{\partial v}{\partial x}=0,\quad x=\pm W/2,\quad H<y<\delta, \tag{14}\] \[v=v_{0},\quad y=\delta, \tag{15}\] \[v=0,\quad y=0,\quad\text{no-slip base}, \tag{16}\] or \[\frac{\partial v}{\partial y}=0,\quad y=0,\quad\text{no-stress base}. \tag{17}\] The parameter \(\lambda\) for this setting can be derived from the solution of Eqs. (13 - 17) by assuming that far above the coating (\(H\ll y\ll\delta\)) the solution takes the form (2) and then by matching this solution with the one inside the coating (\(0\leq y\leq H\)); this solution will be presented elsewhere (see also [43]). For the purpose of this study, we derive a simpler (approximate) solution for \(\lambda\) that can straightforwardly be deduced from the fact that the Robin boundary conditions (13) can be replaced with the homogeneous boundary conditions \(v=0\) imposed on the equivalent boundaries at \(x=W/2+\lambda_{s}\) and \(x=-W/2-\lambda_{s}\). From here the approximate solution is immediately given by Eqs. (5, 6) with the substitution \(W\to W+2\lambda_{s}\): \[\lambda=\frac{W+2\lambda_{s}}{\pi}\ln\left[\cosh(\pi H/(W+2\lambda_{s}))\right]\quad\text{(no-slip on the base)}, \tag{18}\] \[\lambda=\frac{W+2\lambda_{s}}{\pi}\ln\left[\sinh(\pi H/(W+2\lambda_{s}))\right]\quad\text{(no-stress on the base)}, \tag{19}\] where \(\lambda_{s}\) is given by Eq. (12). This is the main result of the present Letter. It provides insights into the dependence of the effective slip length on the two-dimensional arrangement of the pillars and on their height. At \(\lambda_{s}=0\) we return to the previous results for the 2D case, Eqs. (5), (6). As \(H\to\infty\), we recover the asymptotic relation similar to Eq. (7): \[\lambda\approx H-\frac{\ln 2}{\pi}(W+2\lambda_{s}). \tag{20}\] In view of Eq. (12), the limit \(\sigma_{s}\to 0\) in this formula recovers the logarithmic dependency similar to Eq. (10). Finally, for \(\lambda_{s}\to\infty\) and \(H\) fixed (sparse configuration of the needle-like pillars), one finds \[\lambda\approx\frac{\pi H^{2}}{2(W+2\lambda_{s})},\quad\text{no-slip base}, \tag{21}\] \[\lambda\approx\frac{2\lambda_{s}+W}{\pi}\ln\left(\frac{\pi H}{W+2\lambda_{s}}\right)-\frac{W}{\pi},\quad\text{no-stress base}. \tag{22}\]

Figure 2: Models of coatings: \(a\) - periodic system of riblets; \(b\) - periodic system of riblets with periodic gaps.
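To make Eqs. (12), (18), and (19) easy to apply in practice, the following short Python sketch evaluates the effective slip length for a given geometry. This is our minimal illustration of the formulas above, not code from the original study; the function names are ours.

```python
import math

def lambda_s(L, sigma_s):
    """Local slip length of a riblet with periodic gaps, Eq. (12); valid for 0 < sigma_s <= 1."""
    return (L / math.pi) * math.log(1.0 / math.sin(0.5 * math.pi * sigma_s))

def lambda_eff(W, H, L, sigma_s, base="no-slip"):
    """Effective slip length of the coating, Eqs. (18) and (19)."""
    Weff = W + 2.0 * lambda_s(L, sigma_s)   # equivalent period W + 2*lambda_s
    if base == "no-slip":
        return (Weff / math.pi) * math.log(math.cosh(math.pi * H / Weff))
    return (Weff / math.pi) * math.log(math.sinh(math.pi * H / Weff))

# Square configuration of pillars (W = L), as in Fig. 3 below:
for H in (0.1, 1.0, 10.0):
    lam = lambda_eff(W=1.0, H=H, L=1.0, sigma_s=0.5, base="no-slip")
    print(f"H/W = {H:4.1f}:  lambda/H = {lam / H:.4f}")
```

For large \(H\) the printed ratio approaches unity, consistent with the large-height asymptotic relation (20).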
For the case of a square configuration of pillars (\(W=L\)) the plots of Eqs. (18, 19) are depicted in Fig. 3. For the no-slip boundary condition on the base (left panel), the parameter \(\lambda\) is positive and exhibits a monotonic increase with \(\sigma_{s}\) (i.e., the fraction of the solid part of the riblet). When \(H\) is large, the dependence on \(\sigma_{s}\) is very weak, while \(\lambda\) remains close to \(H\), except for very small values of \(\sigma_{s}\) (note that \(\lambda\) is rescaled by \(H\) in this panel). In particular, one sees that the asymptotic relation (20) accurately reproduces the exact solution. In turn, in the limit \(\sigma_{s}\to 0\) (no riblet), the effective slip length vanishes according to Eq. (21), as it should. However, this limit is achieved extremely slowly for large \(H\). In fact, one has \(\lambda_{s}\approx(L/\pi)\ln(1/\sigma_{s})\), and this parameter should be much larger than \(\pi H/2\) or, equivalently, \(\sigma_{s}\ll e^{-\pi^{2}H/(2L)}\), to be able to apply Eq. (21). For instance, if \(H/L=10\) (the red curve), one has \(e^{-\pi^{2}H/(2L)}\sim 10^{-22}\), i.e., the asymptotic relation (21) is not applicable for any reasonable \(\sigma_{s}\). In contrast, if the riblet height \(H\) is much smaller than \(W\), the asymptotic relation (21) provides an accurate approximation for the whole range of \(\sigma_{s}\). In the intermediate case, when \(H\sim W\), the relation (21) is applicable only for small \(\sigma_{s}\) (left panel, middle curve). The situation is quite different for the no-stress boundary condition on the base (right panel). While the effective slip length still grows monotonically with \(\sigma_{s}\), it takes negative values as \(\sigma_{s}\to 0\). Moreover, \(\lambda\) diverges to \(-\infty\) in this limit, in agreement with Eq. (22). As before, the approach to this limit is very slow when \(H\) is large, so that Eq. (22) is not applicable. In this setting, one can use the large-height expression (20). The parameter \(\lambda\) completely defines the relative change of drag due to the coating, \[\frac{\tau_{\text{coat}}-\tau_{\text{flat}}}{\tau_{\text{flat}}}=\frac{\lambda}{\delta-\lambda}\approx\frac{\lambda}{\delta} \tag{23}\] for \(\lambda\ll\delta\), where \(\delta\) is defined in Eq. (15). Eqs. (18) and (19) and the data presented in Fig. 3 provide clear guidance for targeted coating optimisation.

Figure 3: The effective slip length \(\lambda\) as a function of \(\sigma_{s}\), shown by symbols and given by Eqs. (18, 19) for the no-slip (left) and no-stress (right) condition on the base, respectively. Here \(L/W=1\) and three values of \(H/W\) are considered. Solid and dashed lines present the asymptotic behavior (21, 22) for moderate heights \(H/W=0.1\) and \(H/W=1\), while the dash-dotted line indicates the large-height asymptotic relation (20). Note that \(\lambda\) is rescaled by \(H\) on the left panel and by \(W\) on the right panel.

## III Discussion and future work

In summary, for a model of 3D spiky coating we derived an approximate formula for the effective slip length as a function of the pillar height and the 2D arrangement of the pillars. There are several extensions that can easily be incorporated into the proposed model. For instance, there is no need to assume that riblets should have only one gap per period, since there is a formula for \(\lambda_{s}\) for an arbitrary number of gaps per period [44; 21]. This enables the analytical treatment of coatings with much more complex structures.
Our results can also be extended to pillars of arbitrary cross-section (i.e., different from an infinitely thin interval), provided the momentum flux through the top surface of the pillar can still be neglected. To this end, we can apply the following rationale. It is known that the Stokes force (which is proportional to the momentum flux over the surface of the pillar) is proportional to the capacitance of the object (or logcapacity in 2D) [45]. As a consequence, to translate the results of the proposed framework to arbitrary pillars, it is sufficient to find a solid interval of an equivalent logcapacity for a given pillar cross-section (note that the logcapacity of an interval of length \(s\) is \(s/4\) [46]). Moreover, the parameter \(s\) is in the argument of the logarithm, so the final result is insensitive to minor inaccuracies in the estimation of the equivalent logcapacity. Naturally, this approximation can only hold for sparse configurations: \(s/W\ll 1\). This approach also allows us to make informative conclusions regarding the applicability of the disk model for a coating of tall pillars (a brush). For instance, the effective slip lengths due to the momentum flux through the top surface and through the sides of the pillars can be characterised by the second terms (offsets) in Eqs. (9) and (20), respectively. By comparing these terms, we arrive at the simple condition of validity of the disk model, \[\frac{1}{\sqrt{\sigma}}\ll\frac{(1+2\lambda_{s}/W)\ln 2}{\pi A}+\frac{B}{A}, \tag{24}\] where the constants \(A\) and \(B\) are defined in Eq. (9) and \(\lambda_{s}\) is given by Eq. (12). We note that both terms on the right side of this inequality are positive and \(B/A\approx 5.4\), so this condition is quite restrictive for \(\sigma\). We believe that the presented results can be useful for the targeted design of engineered coatings with desirable hydrodynamic properties before proceeding with extensive computational simulations and experimental evaluation.

## Data availability

The data that support the findings of this study are available from the corresponding author upon request.

## Acknowledgements

A.T.S. is grateful to Ian R. MacGillivray and Paul A. Martin for many insightful discussions. D.S.G. acknowledges the Alexander von Humboldt Foundation for support within a Bessel Prize award.
2308.12628
TimeLighting: Guidance-enhanced Exploration of 2D Projections of Temporal Graphs
In temporal (or event-based) networks, time is a continuous axis, with real-valued time coordinates for each node and edge. Computing a layout for such graphs means embedding the node trajectories and edge surfaces over time in a 2D + t space, known as the space-time cube. Currently, these space-time cube layouts are visualized through animation or by slicing the cube at regular intervals. However, both techniques present problems ranging from sub-par performance on some tasks to loss of precision. In this paper, we present TimeLighting, a novel visual analytics approach to visualize and explore temporal graphs embedded in the space-time cube. Our interactive approach highlights the node trajectories and their mobility over time, visualizes node "aging", and provides guidance to support users during exploration. We evaluate our approach through two case studies, showing the system's efficacy in identifying temporal patterns and the role of the guidance features in the exploration process.
Velitchko Filipov, Davide Ceneda, Daniel Archambault, Alessio Arleo
2023-08-24T08:12:04Z
http://arxiv.org/abs/2308.12628v2
# TimeLighting: Guidance-enhanced Exploration of 2D Projections of Temporal Graphs

###### Abstract

In temporal (or _event-based_) networks, time is a continuous axis, with real-valued time coordinates for each node and edge. Computing a layout for such graphs means embedding the node trajectories and edge surfaces over time in a \(2D+t\) space, known as the space-time cube. Currently, these space-time cube layouts are visualized through animation or by slicing the cube at regular intervals. However, both techniques present problems ranging from sub-par performance on some tasks to loss of precision. In this paper, we present TimeLighting, a novel visual analytics approach to visualize and explore temporal graphs embedded in the space-time cube. Our interactive approach highlights the node trajectories and their mobility over time, visualizes node "aging", and provides guidance to support users during exploration. We evaluate our approach through two case studies, showing the system's efficacy in identifying temporal patterns and the role of the guidance features in the exploration process.

Keywords: Temporal Graphs · Space-time cube · Dynamic Network Visualization · Visual Analytics.

## 1 Introduction

Temporal (or _event-based_) networks [23] are dynamic graphs where the temporal dynamics, such as node/edge additions and removals, have real-time coordinates. These have been characterized and studied extensively [21], as they are used in many applications to model phenomena of commercial and academic interest, such as interactions in social media [23], communication networks [16], and contact tracing [30], to name a few. In "traditional" dynamic graph drawing [11, 21], where time is discretized (or _timesliced_), creating a visualization for such networks poses different challenges. Juxtaposition (or small multiples) would require first identifying suitable timeslices, which inevitably leads to quantization errors that, in turn, obscure the fine temporal details that might be crucial in some domains (e.g., the exact order of personal contacts in contact tracing networks). On the other hand, animation would not suffer from such artifacts, and it has been used in previous work on event-based graph drawing as a visualization metaphor to display the computed layouts [7, 29]. Animation is a more natural way to encode time; however, it is not perceptually effective for many tasks involving dynamic networks [19, 3]. Moreover, the vast majority of research on animation has been done with timesliced graphs (see, e.g., [3, 6, 11, 18]), and its application to temporal networks is still in its infancy. Temporal networks can also be drawn in a 3D space, namely, the "space-time cube" \((2D+t)\). In this case, a drawing algorithm computes the node trajectories over time and space. Existing research [7, 28, 29] provides evidence that this drawing approach yields better-quality drawings of temporal graphs than their timesliced counterparts (e.g., Visone [10]), and also of discrete-time graphs when many changes occur between timeslices. Despite this, research on visually depicting these trajectories and, in turn, obtaining insights from their behavior (i.e., exploring the network in time) is still a largely under-investigated topic - a gap that we intend to address in this paper. On these premises, we present TimeLighting, a guidance-enhanced Visual Analytics (VA) solution for exploring node trajectories in the space-time cube keeping the _full_ temporal resolution of the network.
TimeLighting supports understanding temporal patterns and behaviors and, in general, extracting insights from datasets with complex temporal dynamics. The design of TimeLighting is inspired by the "time-coloring" operation [8], whereby time is mapped to color to visualize the evolution of nodes and edges through the space-time cube; and is loosely motivated by _transfer functions_ used in direct volume rendering [26] to emphasize features of interest in the data. Conceptually, in our approach, we "shine" light through the space-time cube down along its time axis (hence the system's name - TimeLighting) in a manner that resembles the behavior of transfer functions for volume rendering. As the light interacts with the node trajectories, they are visualized and colored differently according to the age and persistence of the nodes (i.e., applying the time-coloring operation), generating a 2D visualization of the 3D embedding (see also Figure 1). The resulting visualization is an explorable 2D map of the nodes' densities, with visible individual node movement and "aging" over time. We complement this visualization with several interactive controls to explore the data and introduce a simple mobility metric, based on the length of each node's trajectory, to rank and identify the more and less stable parts of the graph. We designed TimeLighting with multiple elements of visual guidance [14] to enhance (and possibly ease) the network exploration process. Finally, we describe two case studies demonstrating how our guidance-enhanced approach supports users in achieving the system design tasks.

## 2 Related Work

We now illustrate related literature on which we ground our research.

**Visualization of Dynamic Networks**. Visualizing temporal networks differs significantly from how we typically draw and visualize dynamic graphs in the graph drawing and visualization community, where the time axis is a discrete series of timeslices. Each individual time point is called a timeslice, a snapshot that represents the state of the graph over a time interval. This simple yet powerful simplification is used as the basis of visualization [11, 9], for layout algorithm design [17, 12], and in user studies [6, 4, 18, 20]. The problem with time slicing is that many networks of scientific interest do not have natural timeslices. Therefore, choosing the right time sampling and duration for each can be a complex task, which eventually results in some loss of temporal information. In this regard, visualization techniques have been presented to support the selection of interesting timeslices. Wang et al. [31] present a technique for non-uniform time slicing (that is, selecting slices of different duration) based on histogram equalization to produce timeslices with the same number of events and, in turn, similar visual complexity. Lee et al. [25] experimented with "Dynamic Network Plaid", a visualization tool for interactive time slicing on large displays. Users can select interesting time intervals based on the event distribution over time and visualize the corresponding status of the network in those intervals. Recently, a number of algorithms have been proposed to draw temporal networks directly in the space-time cube [29, 7]. These approaches do not divide the data into a series of timeslices to draw the graph but directly embed the network in the space-time cube. However, the only way to visualize such \(2D+t\) drawings on a 2D plane was to select timeslices or present an animation of the data over time.
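To make the non-uniform time slicing idea concrete, the sketch below places slice boundaries at event-count quantiles so that every slice holds roughly the same number of events. This is our simplified stand-in for the histogram-equalization approach attributed to Wang et al. [31] above, not their published algorithm.

```python
import numpy as np

def equal_count_slices(event_times, k):
    """Split a stream of event timestamps into k timeslices that each
    contain approximately the same number of events."""
    t = np.sort(np.asarray(event_times, dtype=float))
    # Interior boundaries at the 1/k, 2/k, ... quantiles of the event times.
    qs = np.quantile(t, np.linspace(0, 1, k + 1))
    return list(zip(qs[:-1], qs[1:]))  # (start, end) of each slice

# Bursty events produce short slices inside the burst, long ones elsewhere.
rng = np.random.default_rng(0)
times = np.concatenate([rng.uniform(0, 10, 50), rng.uniform(4, 5, 200)])
for lo, hi in equal_count_slices(times, 5):
    print(f"[{lo:5.2f}, {hi:5.2f}]  width = {hi - lo:.2f}")
```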
**Guidance for Graph Exploration**. Due to the challenge of analyzing complex events such as those modeled by temporal networks, researchers have also investigated approaches to provide support and ease the analysis for users. The resulting approaches fall under the definition of "guidance" [14]. Guidance is characterized as _active_ support in response to a _knowledge gap_ that hinders the completion of an interactive visual data analysis session. Over the years, several approaches have been devised, providing different types of guidance in different phases of the analysis [15, 13]. For instance, May et al. [27] describe a method to enhance the exploration of large graphs using glyphs. While the user explores a given area of interest (the focus), the system automatically highlights the path to other possibly off-screen interesting nodes (the context). Gladisch et al. [22] provide support during the navigation through large hierarchical graphs by suggesting what to explore next. Thanks to a user-customizable degree-of-interest function, the system can suggest how to navigate the graphs, both horizontally and vertically, adjusting the level of abstraction of the hierarchy. Despite the work in this area, applying guidance to temporal networks is uncharted territory. Given a temporal network modeled in a space-time cube, our goal is to provide guidance to support the identification of interesting time intervals and nodes requiring further attention and analysis from the user.

## 3 Design Considerations

In this section, we discuss the most relevant aspects that influenced the design of TimeLighting, namely, the data characteristics, the user's tasks, and the time-coloring paradigm used to provide guidance.

Data, Tasks. The data we aim to visualize and explore with TimeLighting represents a temporal network [29, 7]. In a temporal network \(D=(V,E,T)\), \(V\) is the set of nodes, \(E\) is the set of edges, and \(T\subseteq\Re\) is the time domain over which the time-dependent _attributes_ are defined. These take the form of functions in the \(V\times T\) and \(E\times T\) domains for nodes and edges, respectively. For simplicity, and in accordance with existing literature, we consider all attribute functions as piece-wise linear functions. The _appearance_ attributes \(A_{v}\) and \(A_{e}\), for example, model the intervals in time in which nodes and edges exist: \[A_{v}:V\times T\rightarrow[true,false]\] \[A_{e}:E\times T\rightarrow[true,false]\] \(A_{v}\) and \(A_{e}\) map to the node and edge insertion and deletion _events_, respectively. For this reason, these graphs are also called _event-based_ networks, and the terms, temporal and event-based, will be used interchangeably in the remainder of this paper. The _position_ attribute \(P_{v}:V\times T\rightarrow\Re^{2}\) describes the nodes' position over time. The following is an example of how each node's (\(v\in V\)) position is computed by an event-based layout algorithm, describing the movement over time and space: \[P_{v}(t)=\begin{cases}(5,6)\rightarrow(12,11)&\text{for }t\in[0,1]\\ (8,3)\rightarrow(1,7)&\text{for }t\in[5,7]\\ \cdots&\\ (0,0)&\text{otherwise}\end{cases}\] In this paper, we use the MultiDynNoS [7] event-based layout algorithm to generate the drawings of the graphs.

Figure 1: Exemplification of the TimeLighting metaphor. Rays of light (depicted as dashed lines), coming from \(t=0\) travel through the space-time cube and reach the observer at \(t_{max}\). The rays interact with the node trajectories and will carry this information to the projection plane.
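A piecewise-linear position attribute such as the \(P_{v}\) example above can be evaluated at any time coordinate with a few lines of code. The sketch below is our own illustration of that idea (the helper name is ours), not part of MultiDynNoS or the TimeLighting implementation.

```python
def sample_position(P_v, t):
    """Evaluate a piece-wise linear position attribute P_v at time t.
    P_v is a list of ((t0, t1), (x0, y0), (x1, y1)) movement intervals,
    mirroring the example above; outside all intervals the node sits at
    (0, 0), the 'otherwise' case of the definition."""
    for (t0, t1), (x0, y0), (x1, y1) in P_v:
        if t0 <= t <= t1:
            a = 0.0 if t1 == t0 else (t - t0) / (t1 - t0)  # interpolation weight
            return (x0 + a * (x1 - x0), y0 + a * (y1 - y0))
    return (0.0, 0.0)

# The P_v example from the text: (5,6)->(12,11) on [0,1] and (8,3)->(1,7) on [5,7].
P_v = [((0.0, 1.0), (5.0, 6.0), (12.0, 11.0)),
       ((5.0, 7.0), (8.0, 3.0), (1.0, 7.0))]
print(sample_position(P_v, 0.5))   # (8.5, 8.5) -- halfway along the first segment
print(sample_position(P_v, 6.0))   # (4.5, 5.0)
```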
TimeLighting is designed for a number of user tasks, which we characterize using the task taxonomy for network evolution analysis by Ahn et al. [2] and the taxonomy of operations on the space-time cube by Bach et al. [8]; they are described in the following.

**T1: Overview.** TimeLighting should provide an overview of the temporal information at a glance. Providing an overview is typically the first necessary step in any VA process.

**T2: Tracking Events.** Understanding the temporal dynamics and the events' frequency helps the user isolate interesting occurrences in time. Events also cause the nodes' trajectories to bend, i.e., make the node change direction. Understanding the _shape_ [2] of changes in node movement over time (e.g., speed, repetition, etc.) would provide further insights during the exploration of the data.

**T3: Investigate relationships.** Each edge event occurrence perturbs the trajectories. Identifying which relationships have the most impact or how often they occur might help the user explain the formation of clusters, or, in general, the phenomenon at hand.

Time-coloring. To highlight interesting nodes and trajectories, and more generally to visualize the network's dynamics, we took inspiration from the time-coloring operation described by Bach et al. [8] in their survey. Time-coloring is a content transformation operation applied to the space-time cube. Given a timesliced graph, the procedure consists of coloring each timeslice based on a uniform linear color scale, so that the "age" of the data points can be identified easily. To apply this technique to temporal graphs, we had to find a way to visually represent two features of each node: its _persistence_ (i.e., its behavior during its appearance intervals) and its _aging_, that is, how its movements are distributed in time from the point of view of the observer, in a continuous time axis scenario. Such network features can be visualized as if we projected light through the cube along the time axis: the interaction with light will be visible to the observer, watching from the other side of the cube (at \(t_{max}\), see Figure 1).

Guidance. In addition to visualizing the temporal network, we designed guidance to support its exploration and analysis. In general, the degree of support provided to the user may vary significantly, and at least three guidance degrees can be identified [14]: _Orienting_ helps users keep an overview of the problem and the alternative analytical paths they can choose to move forward. _Directing_ guidance provides users with a set of options and orders them according to their importance for solving current tasks, and _prescribing_ guidance (as the name suggests) prescribes a series of actions to take to conclude the task. Given the similarities between our problem and the transfer functions used in volume rendering (although we apply them to a 2D visualization), the general idea at the base of our guidance-enhanced approach is to support and ease the identification of time intervals and nodes with specific desirable characteristics, as well as to investigate their relationships and how they interact (i.e., analyzing the movement of the trajectories), making them stand out from the rest for the user's convenience. Considering our set of design tasks, we highlight the nodes that have the longest trajectories. Long trajectories, in fact, represent nodes that are associated with many events and with high persistence in the currently selected temporal interval (to support **T2**).
This type of guidance can be classified as directing, as trajectories are ranked based on their length, and is necessary to ease or solve the system design tasks. In addition, we highlight temporal intervals in which nodes defined by the user interact with each other, thus providing guidance to **T3**.

## 4 TimeLighting

In this section, we describe our system in detail and how we implemented it considering the design requirements described in the previous section. TimeLighting is a guidance-enhanced VA system comprised of two linked views and a focus+context approach. An overview of the prototype is shown in Figure 2.

### Main View

In the main view of TimeLighting (see Figure 2-B), we show a 2D projection of the complete temporal graph (i.e., an overview--**T1**). We discuss the details of our approach in the following. We employ a number of different encodings to highlight features of nodes and edges.

**Node Positions** are represented as trajectories encoded as a trail of circles (see Figure 3). First, we place the start and end positions of each interval within every node's \(P_{v}\) attribute (see Section 3). These come directly from the computed drawing and have an orange stroke in order to make them distinguishable from the sampled nodes. To ease the comprehension of the movement flow (**T2**), between the start and end positions of each interval we place a number of sampled nodes whose coordinates are interpolated. The user can fine-tune the number of interpolated positions by choosing an appropriate "sampling frequency". The resulting sampled nodes are positioned along the node trajectory but are encoded as smaller circles with no stroke to differentiate them from non-sampled nodes. We calculate and visualize the node _aging_ as follows: for each node visualized on screen (nodes both in \(P_{v}\) and interpolated are considered), its age is computed as the difference between its time coordinate and the time of the node's first appearance. We use a linear opacity scale to visually represent the node's aging process. This encoding makes it easier to understand the progression and movement of each node over time, providing an overview of the evolution of the network. We use relative aging in this context as it is focused on the individual node's trajectory. Hovering over the node makes it possible to see its position in the timeline (in the context of the full temporal extent of the network) as a yellow bar, corresponding to its time coordinate. Typically, nodes are visualized in gray. However, users can change the visual appearance of the nodes to reflect the cumulative amount of movement (see Section 3, _Guidance_ paragraph). Activating this type of guidance changes the coloring from gray-scale to a continuous color scale, making nodes with higher mobility visually distinct from the more stationary ones (**T2/T3**).

**Edges** are represented as solid straight lines that connect pairs of nodes belonging to two distinct trajectories. Edges have a pivotal role because these interactions eventually cause node movements and the creation of temporal clusters (**T3**). Edges might also appear or disappear within \(P_{v}\) movement intervals. This justifies the choice of introducing sampled nodes: edges can appear between all the points belonging to a trajectory, including interpolated ones. This allows the user to keep an overview of the finer temporal details, as we can display edges closest to their exact time coordinates.
In turn, this could generate visual clutter, as each edge is shown once for every node pair between each trajectory, and, depending on the sampling frequency, a trajectory can be comprised of several nodes. We mitigate this by showing edges on-demand and related to one trajectory at a time (the selected one): the user, by hovering on any node belonging to a trajectory, will make all edges incident to those nodes appear, and is thus able to find a good representation of the temporal patterns. Edge aging is encoded similarly to node aging. In Figure 6-B, an example of how edges connect the sampled nodes within the current temporal selection is depicted for the currently hovered node (Munster Rugby).

Figure 2: TimeLighting overview. The view is comprised of the (A) toolbar and sidebar, (B) main view, and (C) event timeline. The yellow bar in the timeline shows the absolute age of the hovered node (visible in the top-center area).

**Movement** is visualized using a polyline connecting each position in the nodes' \(P_{v}\) attribute. It represents how the node's movement changes over time due to the bends in the trajectories computed by the layout process. We add this encoding as the nodes' opacity alone might not be sufficient in showing how trajectories evolve over time (**T2/T3**). We calculate the age of each trajectory segment and apply a similar linear opacity scale as with the nodes and edges. The trajectory's age is calculated as the mean of the ages of each pair of nodes in that given polyline segment. This information is also shown on-demand by hovering over a trajectory; either the edges or the movement of a node can be shown (depending on the selection in the top bar--see Figure 2-A).

**Density** is represented as a contour map (see the dark-blue areas in the center of Figure 2), providing a quick visual indicator that emphasizes locations where a larger number of nodes have existed (**T2**). This kind of encoding also provides a first glance at the trajectories' "shape", which eases keeping an overview of the events (**T1**). To calculate the density map, we translate the original set of nodes for each point in time into a series of objects with \(x,y\) coordinates and relative age. The \(x,y\) coordinates determine the contours of the density map. The age acts as a weighting function, such that older nodes contribute less to the density map compared to more recent nodes. The "bandwidth" sets the standard deviation of the Gaussian kernel, with lower values showing a sharper picture and higher values a more distributed, but also more blurred, representation.

**Interactions** are also supported in the main view of TimeLighting: common interactions include using the mouse scroll wheel to _zoom_ and rescale the main view, and _panning_ or _dragging_ to reposition it.

Figure 3: Examples of trajectory visualizations. In (a) a higher sampling frequency is selected, and aging is clearly visible thanks to the change in opacity. In (b) sampling is lowered to a third (3 points per movement segment), and the trajectory is shown superimposed as the mouse is hovering over one of the nodes, indicated as a yellow node. The orange stroke and size difference indicate coordinates coming from the data compared to interpolated nodes.
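The trajectory-length mobility score that drives the ranking described in the next subsection could be computed along the following lines. This is a hypothetical sketch under the assumption that trajectories are available as time-ordered 2D samples; the names are ours, not the system's API.

```python
import math

def mobility(trajectory):
    """Simple mobility score: total length of a node's trajectory,
    given as a time-ordered list of (t, x, y) samples."""
    return sum(math.dist(p[1:], q[1:])
               for p, q in zip(trajectory, trajectory[1:]))

# Rank nodes by mobility, highest first, as in the side panel.
trajectories = {
    "n1": [(0, 0, 0), (1, 3, 4), (2, 3, 8)],   # moves 5 + 4 = 9 units
    "n2": [(0, 1, 1), (2, 2, 1)],              # moves 1 unit
}
for name in sorted(trajectories, key=lambda n: mobility(trajectories[n]),
                   reverse=True):
    print(name, mobility(trajectories[name]))
```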
### Other Views and Guidance

**Side Panel** shows a list of nodes ordered according to their mobility (as a form of guidance, see Section 3). Each node in this list is also accompanied by a small bar chart visualizing the differences in the mobility scores of the nodes. From here, the user can select and "lock" trajectories in the main view. A locked trajectory is always shown regardless of the current temporal interval selection in the timeline (see next paragraph). Locked nodes are colored in bright red if they are in the current temporal selection, while nodes that are out of the current temporal selection are colored in a less saturated hue (see Figure 4). The encoding and ordering of node trajectories in the side panel serve as visual guidance to support the tracking of specific events (i.e., guidance to ease **T2**). Additionally, when loading a new graph, the three nodes with the highest mobility are locked by default (this can later be refined or changed).

**Timeline**, shown in Figure 2-C, allows the user to select and explore specific temporal intervals as well as to keep an overview (**T1**) of the number of nodes and edges that are visible over time. This is obtained by considering the net number of node/edge additions and removals and representing this information as two overlapping area charts (in red--the nodes and in blue--the edges). The timeline serves two purposes: first, the user can brush to select a specific interval within the available data. As a result, only the subgraph existing during the newly selected time interval will be shown in the main view (temporal filtering). This also affects movement coloring, relative age, and density calculation, as well as limits the edges shown on-screen to those existing in the current selection (these do not apply to locked trajectories). Second, TimeLighting uses the timeline to provide guidance and suggest specific time intervals for further inspection. Specifically, the system highlights intervals in time when all the currently locked trajectories interact with each other. These intervals are represented as orange rectangles drawn on top of the timeline (see Figure 2-C). Clicking one of these intervals will snap onto that temporal selection, helping the user to keep track of and investigate relationships (i.e., guidance to ease **T3**).

Figure 4: Example of a locked trajectory. The circles in red are fully within the user's temporal selection, whereas the less saturated ones are outside of the selected interval.

## 5 Case Studies

In this section, we discuss two case studies on a real temporal network. We show how insights can be extracted from the data using TimeLighting and how the design tasks are achieved and supported with guidance. We build our case studies on the _Rugby_ dataset, which is a collection of 3151 tweets posted during the Pro12 rugby competition of the 2014-2015 season [1], specifically from September 2014 to October 2015. The network has a node for each team participating in the competition (12 teams in total), and an edge exists between two teams when a tweet from one mentions another. While nodes will stay visible from the moment they appear until the end, edges appear at the exact moment the tweet was posted. To improve the visibility of the edges during the layout process (as tweets do not have a "duration"), edges are given a 24-hour duration. For example, if an edge \(\bar{e}\) has a timestamp \(t\), then \(A_{\bar{e}}=[t-12h,t+12h]\). Multiple edges between the same teams are merged together if their appearance intervals overlap, i.e., if the corresponding tweets are less than one day apart.
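The 24-hour edge duration and the merging rule just described could be implemented as follows. This is a hypothetical sketch of the preprocessing, not the authors' code; times are expressed in hours for simplicity.

```python
def edge_intervals(tweet_times, half_window=12.0):
    """Expand tweet timestamps (in hours) between one pair of teams into
    appearance intervals [t - 12h, t + 12h] and merge overlapping ones,
    mirroring the preprocessing described above."""
    if not tweet_times:
        return []
    intervals = sorted((t - half_window, t + half_window) for t in tweet_times)
    merged = [list(intervals[0])]
    for lo, hi in intervals[1:]:
        if lo <= merged[-1][1]:              # less than one day apart: merge
            merged[-1][1] = max(merged[-1][1], hi)
        else:
            merged.append([lo, hi])
    return [tuple(iv) for iv in merged]

# Two mentions 6 hours apart merge into one appearance; a later one stays separate.
print(edge_intervals([100.0, 106.0, 200.0]))   # [(88.0, 118.0), (188.0, 212.0)]
```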
This simplification has already been applied in previous work using this dataset [29], and a discretization of this dataset with similar resolution would require 417 timeslices. This dataset is particularly interesting as we have ground truth data to validate our findings.

Case Study 1: The First and Second Half of the Season: Examining Trajectories. We begin our use case with an _Overview_ task (**T1**), examining the trajectories in the first (see Figure 5-A) and the second half of the 2014-2015 season (see Figure 5-B). We can immediately observe the trend of the events from the timeline. There is a steady increase in the number of tweets from the beginning of the season that peaks around the beginning of the second round of the league. This peak remains until the season's final and significantly impacts the nodes' mobility. We can also see that nodes move less in the second half of the season compared to the first half. Specifically, in Figure 5-B the majority of the nodes are within the purple to orange range of the scale--lower mobility--whereas in Figure 5-A they are in the yellow to green range--encoding higher mobility. In the first half, instead, tweets are sparser, meaning that the influence of an edge on the movement of nodes persists (as there is no inertia) until another one changes its trajectory. Continuing the analysis of the network, tweets happen at a much higher rate in the second half of the season, and, since this network is a clique (all teams eventually play against one another), the nodes tend to be "locked" in place by the attractive forces exerted by the other nodes. It must be considered that the layout algorithm attempts to optimize (and reduce) node movement, placing the nodes in an area of the plane where they will likely remain. This behavior can be seen in the density map too, where hot spots are larger and more numerous (i.e., nodes tend to linger more in the same areas) in the second half compared to the first half. Nonetheless, the amount of attractive force will depend on the public interest (i.e., the number of tweets) in individual matches.

Case Study 2: Tracking the Two Least Winning Teams. In this second case study, we _track_ (**T2**) the trajectories and _investigate_ (**T3**) the relationships between the two least winning teams of the season (according to the historical information available), namely the "Zebre" (Z) and "Benetton" (B) teams. We begin by selecting them in the sidebar so that the whole trajectory is locked permanently on screen. Guidance shows us the different moments in time when the two teams interact, and we focus on the period of time when the two teams play against each other around the midpoint of the season. The teams played two matches (during the first and second leg of the tournament) in adjacent rounds (12 and 13). The status of the interface is reported in Figure 6-A. It is possible to see how the Z trajectory bends significantly towards B at this point in time, and a similar effect is visible the other way around. This attraction strength can be interpreted as the "hype" of the matches building up, as Z and B are the only two teams coming from Italy in the competition. Finally, we compare the relationships between the last two teams in the ranking and the first two, "Glasgow Warriors" (G), the winners, and "Munster Rugby" (M), referring to the time around the tournament finals (see Figure 6-B). If we focus on M, it is easy to identify the time the final was played, with nearly all teams connected to it.
The B trajectory is strongly influenced by M, as their match was one of the last before the final; Z, instead, is not largely influenced and drifts away.

## 6 Conclusion and Future Work

In this paper, we presented and described TimeLighting, a guidance-enhanced VA approach to support the analysis and exploration of temporal networks embedded in a space-time cube. We augmented the visualization approach with guidance to better support users in the visual analysis of the data. We demonstrated the effectiveness of our approach with two use cases, depicting a scenario where the aim is to explore and extract insights from a temporal network describing the events of a rugby season and the relationships between teams. The potential of visualization in exploring temporal graphs in the space-time cube is an opportunity for the entire graph drawing and visualization communities. Future work should primarily be oriented toward conducting a formal evaluation of the method, both from a visual quality perspective, i.e., through a selection of metrics that capture the readability, scalability, and expressiveness of the visualization, and from a user perspective, also assessing the impact of the guidance features included in the system and extending experimentation to other real datasets (e.g., [5, 16]). Further work includes investigating how TimeLighting supports users when sampling and identifying temporal network features and patterns.

## Acknowledgements

For the purpose of open access, the author has applied a Creative Commons Attribution (CC-BY) license to any Author Accepted Manuscript version arising from this submission. This work was conducted within the projects WWTF grant [10.47379/ICT19047], FFG grant DoRIAH [#880883], and FWF grant ArtVis [P35767].

Figure 5: Illustrations from Case Study 1. It is possible to see how the mobility of nodes changes in the two halves of the season. **(A)** The first half of the season; **(B)** The second half of the season. Timeline brushing is used to filter out events. Trajectory sampling is set at 4 points.

Figure 6: Illustrations from Case Study 2. **(A)** The two trajectories of the teams; the guidance intervals are visible in the timeline, and the selected one (see blue rectangle) relates to the matchup between the two teams in the first round of the competition. **(B)** It is possible to identify the final match and the connections between the 2nd best team and the other teams of interest in the case study.
2306.02197
Vacuum torque, propulsive forces, and anomalous tangential forces: Effects of nonreciprocal media out of thermal equilibrium
From the generalized fluctuation-dissipation theorem, it is known that a body at rest made of nonreciprocal material may experience a torque, even in vacuum, if it is not in thermal equilibrium with its environment. However, it does not experience self-propulsion in such circumstances, except in higher order. Nevertheless, such a body may experience both a normal torque and a lateral force when adjacent to an ordinary surface with transverse translational symmetry. We explore how these phenomena arise, discuss what terminal velocities might be achieved, and point out some of the limitations of applying our results to observations, including the Lorenz-Lorentz correction, and the cooling due to radiation. In spite of these limitations, the effects discussed would seem to be observable.
Kimball A. Milton, Xin Guo, Gerard Kennedy, Nima Pourtolami, Dylan M. DelCol
2023-06-03T21:03:38Z
http://arxiv.org/abs/2306.02197v1
Vacuum torque, propulsive forces, and anomalous tangential forces: Effects of nonreciprocal media out of thermal equilibrium ###### Abstract From the generalized fluctuation-dissipation theorem, it is known that a body at rest made of nonreciprocal material may experience a torque, even in vacuum, if it is not in thermal equilibrium with its environment. However, it does not experience self-propulsion in such circumstances, except in higher order. Nevertheless, such a body may experience both a normal torque and a lateral force when adjacent to an ordinary surface with transverse translational symmetry. We explore how these phenomena arise, discuss what terminal velocities might be achieved, and point out some of the limitations of applying our results to observations, including the Lorenz-Lorentz correction, and the cooling due to radiation. In spite of these limitations, the effects discussed would seem to be observable. ## I Introduction There is a long history of theoretical predictions of quantum or Casimir friction, where a particle or extended body that moves parallel to a surface experiences a force opposing its motion. The subject seems to have originated with Teodorovich [1] and Levitov [2]. For a selected bibliography on this subject before 2016, see Ref. [3]. The friction is typically conceived to arise because of dissipation in the surface. For a subset of papers on this subject, the reader is referred to Refs. [4; 5; 6; 7; 8; 9]. For a readable overview, see Ref. [10]. Quantum friction can also result if the body itself is made of dissipative material. However, much earlier, it was recognized that, even in vacuum, a moving body or an atom without dissipation will experience friction due to the surrounding radiation field--this is the famous Einstein-Hopf effect [11]. In previous papers, we have considered quantum vacuum friction due to field and dipole fluctuations [12; 13]. For low velocities, the condition that the atom or particle not gain or lose energy, the Nonequilibrium Steady State condition (NESS) [14], implies equal temperatures of the body and the environment, while relativistic velocities typically imply that the body be substantially hotter than the environment. (For other earlier work on nonequilibrium friction, see Refs. [15; 16; 17] for example.) The forces we considered there are true frictions, in that they always oppose the motion, and they vanish at zero velocity and at zero temperature. Here, we consider forces and torques that arise in the vacuum, or near other bodies, when the relative velocity is zero. This requires not only that the system be out of equilibrium, so that the temperature of the body is different from that of its environment, but also that the electrical properties that characterize the material constituting the bodies be exotic, "nonreciprocal," at least in lowest order. Nonreciprocity seems not to be possible for an isolated body; the typical way it can be achieved is through the introduction of an external magnetic field, or some other appropriate external influence. Thus, it is something of an oxymoron to discuss nonreciprocal vacuum torque or friction. Of course, there is much earlier work on such nonequilibrium phenomena, involving heat transfer, torque, and nonreciprocal surface forces [18; 19; 20; 21; 22]. (Further references will be provided as our discussion continues.) The outline of this paper is as follows. In Sec. II we discuss how the fluctuation-dissipation theorem is modified for a nonreciprocal susceptibility. 
We then display, in Sec. III, a simple model of such a nonreciprocal material as a simplification of that given in Ref. [23]. The corresponding quantum vacuum torque, first found in Refs. [24; 25], is derived in Sec. IV. In Sec. V we rederive the modified torque if the body is slowly rotating, which was first worked out in Ref. [23]. If the body is hotter, or not too much colder, than the environment, the ordinary quantum frictional torque acts as a drag, and the body acquires a terminal angular velocity which should be readily observable, provided this temperature difference can be maintained. In Sec. VI, the effect on the torque of an underlying plate, be it a dielectric slab or a perfect conductor, is investigated. Then we turn to the force, which can, of course, be inferred from the torque. The quantum vacuum force is shown to vanish, in the weak-susceptibility approximation that we use [Sec. VII], while, if an underlying surface is present, there is a component of the force parallel to the surface, as shown in examples of an imperfectly and a perfectly conducting plate [Sec. VIII]. Again, if the nonequilibrium temperature difference could be maintained, a substantial terminal velocity could be achieved. In Sec. IX we calculate the time it would take for a body at rest to reach thermal equilibrium with the environment, unless some mechanism were supplied to keep it hotter or colder than the background. Possible suppression effects due to the Lorenz-Lorentz (Clausius-Mossotti) correction are discussed in Sec. X, although the resulting torques and forces should still be experimentally measurable. Conclusions round out the paper. Throughout, we use Heaviside-Lorentz (HL) electromagnetic units. We also set \(\hbar=c=1\), except when numerical values are given. It should be noted that many authors (including some of us) often use Gaussian (G) units, for which the polarizability differs by a factor of \(4\pi\).

## II Generalized fluctuation-dissipation theorem

Let \({\bf x}(t)\) be some dynamical variable (such as an electric dipole moment). In terms of its frequency Fourier transform, the fluctuation-dissipation theorem (FDT) tells us the expectation value of the symmetrized quadratic product of the frequency-transformed variables: \[\langle S{\bf x}(\omega){\bf x}(\nu)\rangle=2\pi\delta(\omega+\nu)\,\Im{\bf\chi}(\omega)\coth\frac{\beta\omega}{2}, \tag{2.1}\] where \(\beta=1/T\) is the inverse temperature of the system, and \({\bf\chi}\) is the generalized susceptibility; alternatively, the fluctuating quantity might be the electric field, driven by the electric polarization, and the susceptibility would be the retarded electric Green's function. Typically, we regard the susceptibility tensor as diagonal, or at least symmetric. More generally, the "imaginary part" that occurs in the FDT is the anti-Hermitian part, \[\Im{\bf\chi}=\frac{1}{2i}({\bf\chi}-{\bf\chi}^{\dagger}), \tag{2.2}\] that is, \[(\Im{\bf\chi})_{ij}(\omega)=\frac{1}{2i}[\chi_{ij}(\omega)-\chi_{ji}^{*}(\omega)]=\frac{1}{2i}[\chi_{ij}(\omega)-\chi_{ji}(-\omega)], \tag{2.3}\] which uses the fact that \(\chi_{ij}(\omega)\) is the Fourier transform of a real response function. Unusual properties emerge from this if \({\bf\chi}\) is not symmetric: \(\Im{\bf\chi}\) then has both real and imaginary parts in the conventional sense. The real part is \[{\rm Re}(\Im{\bf\chi})_{ij}(\omega)=\frac{1}{2}[{\rm Im}\,\chi_{ij}(\omega)+{\rm Im}\,\chi_{ji}(\omega)], \tag{2.4a}\] which is symmetric in the indices but odd in \(\omega\).
These are the components that give rise to the quantum friction force and torque. The imaginary part of \(\Im{\bf\chi}\) is \[{\rm Im}(\Im{\bf\chi})_{ij}(\omega)=-\frac{1}{2}[{\rm Re}\,\chi_{ij}(\omega)-{\rm Re}\,\chi_{ji}(\omega)], \tag{2.4b}\] which is antisymmetric in the indices but even in \(\omega\). This property, as we shall see, leads to unusual phenomena for nonreciprocal bodies: spontaneous quantum torque and quantum propulsion. The term "nonreciprocal" seems to have a variety of meanings in the literature; in this paper we will take it to mean that Eq. (2.4b) is nonzero. For an ordinary material, \(\chi_{ij}(\omega)\) is symmetric in the indices, which means that the anti-Hermitian part coincides with the usual imaginary part: \[\chi_{ij}(\omega)=\chi_{ji}(\omega)\Rightarrow(\Im{\bf\chi})_{ij}(\omega)={\rm Im}\,\chi_{ij}(\omega). \tag{2.5}\] Where a susceptibility depends on continuous coordinates as well, such as the Green's dyadic that describes the electric field, reciprocity means invariance under interchange of discrete indices and continuous coordinates: \[\Gamma_{ij}({\bf r},{\bf r}^{\prime};\omega)=\Gamma_{ji}({\bf r}^{\prime},{\bf r};\omega)\Rightarrow(\Im{\bf\Gamma})_{ij}({\bf r},{\bf r}^{\prime};\omega)={\rm Im}\,\Gamma_{ij}({\bf r},{\bf r}^{\prime};\omega). \tag{2.6}\] It is easy to check that this is satisfied by the Green's dyadic for a dielectric half-space, for example, which has off-diagonal elements, symmetric in the tensor indices. When the latter is expressed as a two-dimensional Fourier transform, as is convenient when the environment consists of a dielectric slab perpendicular to the \(z\) axis, \[\Gamma_{ij}({\bf r},{\bf r}^{\prime};\omega)=\int\frac{(d{\bf k}_{\perp})}{(2\pi)^{2}}e^{i{\bf k}_{\perp}\cdot({\bf r}-{\bf r}^{\prime})_{\perp}}g_{ij}(z,z^{\prime};{\bf k}_{\perp},\omega), \tag{2.7}\] the FDT is expressed in terms of \[(\Im{\bf g})_{ij}(z,z^{\prime};{\bf k}_{\perp},\omega)=\frac{1}{2i}\left[g_{ij}(z,z^{\prime};{\bf k}_{\perp},\omega)-g_{ji}(z^{\prime},z;-{\bf k}_{\perp},-\omega)\right]. \tag{2.8}\]

## III Model for nonreciprocal material

In order to create a nonreciprocal response, one needs an external influence. Such is supplied by a magnetic field. Let us suppose an oscillator with damping \(\eta\) is driven by both an electric and a magnetic field: \[m\frac{d^{2}{\bf r}}{dt^{2}}+m\eta\frac{d{\bf r}}{dt}+m\omega_{0}^{2}{\bf r}=e\left({\bf E}+\frac{d{\bf r}}{dt}\times{\bf B}\right). \tag{3.1}\] If the magnetic field lies in the \(z\) direction, this immediately yields an electric susceptibility that is nonsymmetric and nonreciprocal: \[{\bf\chi}=\omega_{p}^{2}\begin{pmatrix}\dfrac{\omega_{0}^{2}-\omega^{2}-i\omega\eta}{(\omega_{0}^{2}-\omega^{2}-i\omega\eta)^{2}-\omega^{2}\omega_{c}^{2}}&\dfrac{-i\omega\omega_{c}}{(\omega_{0}^{2}-\omega^{2}-i\omega\eta)^{2}-\omega^{2}\omega_{c}^{2}}&0\\[2ex] \dfrac{i\omega\omega_{c}}{(\omega_{0}^{2}-\omega^{2}-i\omega\eta)^{2}-\omega^{2}\omega_{c}^{2}}&\dfrac{\omega_{0}^{2}-\omega^{2}-i\omega\eta}{(\omega_{0}^{2}-\omega^{2}-i\omega\eta)^{2}-\omega^{2}\omega_{c}^{2}}&0\\[2ex] 0&0&\dfrac{1}{\omega_{0}^{2}-\omega^{2}-i\omega\eta}\end{pmatrix}, \tag{3.2}\] in terms of the plasma frequency \(\omega_{p}^{2}=ne^{2}/m\), \(n\) being the density of charges, and the cyclotron frequency \(\omega_{c}=eB/m\). For a metal, we would set the restoring force to zero, so \(\omega_{0}=0\), and we exactly recover the form given by Guo and Fan [23]. In particular, this provides us with a model for the anti-Hermitian part of the susceptibility, \[\chi_{xy}=-\chi_{yx}=-i\frac{\omega_{p}^{2}\omega_{c}/\omega}{(\omega+i\eta)^{2}-\omega_{c}^{2}}. \tag{3.3}\]
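As a quick numerical sanity check, the following sketch (our illustration; the probe frequency is an arbitrary choice) builds the susceptibility tensor of Eq. (3.2) and verifies that its off-diagonal part is antisymmetric and reduces to Eq. (3.3) in the metallic limit.

```python
import numpy as np

def chi(omega, omega_p, omega_c, eta, omega_0=0.0):
    """Susceptibility tensor of Eq. (3.2); B along z, omega_0 = 0 gives
    the metallic (Drude) case."""
    D = omega_0**2 - omega**2 - 1j * omega * eta
    den = D**2 - omega**2 * omega_c**2
    return omega_p**2 * np.array(
        [[D / den, -1j * omega * omega_c / den, 0],
         [1j * omega * omega_c / den, D / den, 0],
         [0, 0, 1 / D]])

# Gold-like Drude parameters from the text, in eV.
w, wp, wc, eta = 0.05, 9.0, 1e-4, 0.035
x = chi(w, wp, wc, eta)
exact = -1j * wp**2 * wc / w / ((w + 1j * eta)**2 - wc**2)   # Eq. (3.3)
print(np.isclose(x[0, 1], -x[1, 0]), np.isclose(x[0, 1], exact))   # True True
```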
Numerically, for the charge and mass of the electron, \[\omega_{c}=\frac{eB}{m}=m\frac{B}{B_{c}},\quad B_{c}=\frac{m^{2}}{e}=4.41\times 10^{9}\,{\rm T}, \tag{3.4}\] so for a magnetic field of strength \(1\,{\rm T}\), \(\omega_{c}\sim 10^{-4}\) eV, far smaller than the damping parameter for gold, for example, \(\eta\approx 3.5\times 10^{-2}\) eV. Thus, to a good approximation, we can use for a metal \[\chi_{xy}-\chi_{yx}\approx-2i\frac{\omega_{c}\omega_{p}^{2}}{\omega}\frac{1}{(\omega+i\eta)^{2}}. \tag{3.5}\]

## IV Quantum vacuum torque

The vacuum torque on a stationary body was discussed in general terms in Ref. [24], and subsequently in Ref. [25]. Somewhat earlier, Ref. [20] showed that a topologically insulating film in a magnetic field, out of thermal equilibrium, experiences a torque, which seems to be an instance of this same phenomenon. A torque on a body at rest requires that it be composed of nonreciprocal material, which is characterized by having a real part of the susceptibility which is nonsymmetric, and that the temperature of the body, \(T^{\prime}\), be different from that of the environment, \(T\). The torque on an arbitrary body, described by electric susceptibility \({\bf\chi}({\bf r};\omega)\), is, in terms of the polarization \({\bf P}\), \[{\bf\tau}(t)=\int(d{\bf r})\,{\bf r}\times\left[-{\bf\nabla}\cdot{\bf P}({\bf r},t){\bf E}({\bf r},t)+\frac{\partial{\bf P}({\bf r},t)}{\partial t}\times{\bf B}({\bf r},t)\right]. \tag{4.1}\] Writing this in terms of Fourier transforms, and eliminating \({\bf B}\) in favor of \({\bf E}\), we have \[\mathbf{\tau}=\int(d{\bf r})\,{\bf r}\times\int\frac{d\omega}{2\pi}\frac{d\nu}{2\pi}e^{-i(\omega+\nu)t}\left\{-\mathbf{\nabla}\cdot{\bf P}({\bf r};\omega){\bf E}({\bf r};\nu)-\frac{\omega}{\nu}{\bf P}({\bf r};\omega)\times[\mathbf{\nabla}\times{\bf E}({\bf r};\nu)]\right\}\] \[=\int(d{\bf r})\frac{d\omega}{2\pi}\frac{d\nu}{2\pi}e^{-i(\omega+\nu)t}{\bf r}\times\bigg[\frac{\omega}{\nu}\left\{\mathbf{\nabla}\cdot[{\bf P}({\bf r};\omega){\bf E}({\bf r};\nu)]-{\bf P}({\bf r};\omega)\cdot(\mathbf{\nabla})\cdot{\bf E}({\bf r};\nu)\right\}-\left(1+\frac{\omega}{\nu}\right)[\mathbf{\nabla}\cdot{\bf P}({\bf r};\omega)]{\bf E}({\bf r};\nu)\bigg], \tag{4.2}\] where the notation in the first term in the last equality means that the \(\mathbf{\nabla}\) is dotted only with \({\bf P}\), but acts on both variables, while in the second term \(\mathbf{\nabla}\) is the vector crossed with \({\bf r}\)--the parentheses are intended to insulate it from the two vectors surrounding it. Here, the source of the electric field is the electric polarization, \[{\bf E}({\bf r};\omega)=\int(d{\bf r}^{\prime})\,\mathbf{\Gamma}({\bf r},{\bf r}^{\prime};\omega)\cdot{\bf P}({\bf r}^{\prime};\omega), \tag{4.3a}\] where \(\mathbf{\Gamma}\) is the retarded electromagnetic Green's dyadic, while the polarization is linearly related to the electric field, \[{\bf P}({\bf r};\omega)=\int(d{\bf r}^{\prime})\,\delta({\bf r}-{\bf r}^{\prime})\,\mathbf{\chi}({\bf r}^{\prime};\omega)\cdot{\bf E}({\bf r}^{\prime};\omega)=\mathbf{\chi}({\bf r};\omega)\cdot{\bf E}({\bf r};\omega), \tag{4.3b}\] where we assume that the electric susceptibility is local in space. This means that there are two origins for the quantum torque: field fluctuations and dipole fluctuations.
We evaluate the two contributions to the torque by use of the FDT: \[\langle SE_{i}({\bf r};\omega)E_{j}({\bf r}^{\prime};\nu)\rangle=2\pi\delta(\omega+\nu)(\Im\mathbf{\Gamma})_{ij}({\bf r},{\bf r}^{\prime};\omega)\coth\frac{\beta\omega}{2},\quad\beta=\frac{1}{T}, \tag{4.4a}\] \[\langle SP_{i}({\bf r};\omega)P_{j}({\bf r}^{\prime};\nu)\rangle=2\pi\delta(\omega+\nu)\delta({\bf r}-{\bf r}^{\prime})(\Im\mathbf{\chi})_{ij}({\bf r};\omega)\coth\frac{\beta^{\prime}\omega}{2},\quad\beta^{\prime}=\frac{1}{T^{\prime}}, \tag{4.4b}\] where \(S\) indicates that the symmetrized expectation values are used. Therefore, the last term in Eq. (4.2) vanishes, because the sum of the two frequencies is zero, leaving us with1 \[{\bf\tau}=\int(d{\bf r})\frac{d\omega}{2\pi}\frac{d\nu}{2\pi}e^{-i(\omega+\nu)t}\left[{\bf P}({\bf r};\omega)\times{\bf E}({\bf r};\nu)+{\bf P}({\bf r};\omega)\cdot({\bf r}\times\mathbf{\nabla})\cdot{\bf E}({\bf r};\nu)\right]. \tag{4.5}\]

Footnote 1: This is made up of the "internal" and "external" torques, as given in Ref. [26], Eqs. (4.47) and (4.46).

Here, the notation in the last term signifies that the free vector index is on the angular momentum operator; the \({\bf P},{\bf E}\) are dotted together. Using Eqs. (4.3b) and (4.4a), we find for the EE contribution to the torque \[\tau_{i}^{\rm EE}=\int(d{\bf r})\frac{d\omega}{2\pi}\coth\frac{\beta\omega}{2}\epsilon_{ijk}\left[\chi_{jl}({\bf r};\omega)(\Im\mathbf{\Gamma})_{lk}({\bf r},{\bf r};\omega)+\chi_{lm}({\bf r};\omega)x_{j}\nabla_{k}^{\prime}(\Im\mathbf{\Gamma})_{ml}({\bf r},{\bf r}^{\prime};\omega)\bigg{|}_{{\bf r}-{\bf r}^{\prime}={\bf R}\to{\bf 0}}\right]. \tag{4.6}\] Here, \(\mathbf{\Gamma}\) is taken to be the usual vacuum retarded Green's dyadic, \(\mathbf{\Gamma}^{0}\), the divergenceless part of which can be written as \[\mathbf{\Gamma}^{0\prime}({\bf r},{\bf r}^{\prime};\omega)=(\mathbf{\nabla}\mathbf{\nabla}-\mathbf{1}\nabla^{2})\frac{e^{i\omega R}}{4\pi R}=\left[\hat{\mathbf{R}}\hat{\mathbf{R}}(3-3i\omega R-\omega^{2}R^{2})-\mathbf{1}(1-i\omega R-\omega^{2}R^{2})\right]\frac{e^{i\omega R}}{4\pi R^{3}}, \tag{4.7}\] where \(R=|{\bf r}-{\bf r}^{\prime}|\) and \(\hat{\bf R}={\bf R}/R\). It is evident that the second term in Eq. (4.6) vanishes here because \(\text{Im}\,e^{i\omega R}/R\) is a function of \(R\). When (4.7) is rotationally averaged in the coincidence limit (\({\bf R}\to{\bf 0}\)), we obtain \[\mathbf{\Gamma}^{0\prime}({\bf r},{\bf r}^{\prime};\omega)\to\mathbf{1}\left(\frac{\omega^{2}}{6\pi R}+i\frac{\omega^{3}}{6\pi}+O(R)\right). \tag{4.8}\] Therefore, we are left with only a single term for the torque: \[\tau_{i}^{\rm EE}=\int\frac{d\omega}{2\pi}\coth\frac{\beta\omega}{2}\epsilon_{ijk}\,\text{Re}\,\alpha_{jk}(\omega)\frac{\omega^{3}}{6\pi}, \tag{4.9}\] where the mean polarizability3 of the body4 is given by \[\alpha_{jk}(\omega)=\int(d\mathbf{r})\chi_{jk}(\mathbf{r};\omega), \tag{4.10}\] the real part of which is picked out by the necessity of the integrand being even in \(\omega\). Thus, nonreciprocity is necessary for a vacuum torque in first order.

Footnote 3: The nonlinear effects occurring in the Lorenz-Lorentz law relate the polarizability to the permittivity; the polarizability is implied thereby to be linear--see Ref. [27], and Sec. X.

Footnote 4: Note that there is no requirement that the body be spherical, so any rotation would be observable.
(4.5) is written as \[\boldsymbol{\tau}^{\rm PP}=\int(d\mathbf{r})(d\mathbf{r}^{\prime})\frac{d \omega}{2\pi}\frac{d\nu}{2\pi}e^{-i(\omega+\nu)t}\left[\mathbf{P}(\mathbf{r}; \omega)\times\mathbf{\Gamma}(\mathbf{r},\mathbf{r}^{\prime};\nu)\cdot\mathbf{ P}(\mathbf{r}^{\prime};\nu)+\mathbf{P}(\mathbf{r};\omega)\cdot(\mathbf{r} \times\boldsymbol{\nabla})\cdot\mathbf{\Gamma}(\mathbf{r},\mathbf{r}^{\prime} ;\nu)\cdot\mathbf{P}(\mathbf{r}^{\prime};\nu)\right], \tag{4.11}\] to which the FDT (4.4b) is to be applied. Again, for vacuum, the coincidence limit of the second term is zero, and because of the diagonal form of the limit of the Green's dyadic, only the antisymmetric part of the susceptibility survives upon use of Eq. (4.4b). That is, the \(i\)th component of the quantity in square brackets in Eq. (4.11) becomes \[\delta(\mathbf{r}-\mathbf{r}^{\prime})2\pi\delta(\omega+\nu)\epsilon_{ijk}( \Im\boldsymbol{\chi})_{jk}(\mathbf{r};\omega)\left(-i\frac{\omega^{3}}{6\pi} \right)\coth\frac{\beta^{\prime}\omega}{2}. \tag{4.12}\] Here we have recognized that the antisymmetric part of \(\Im\boldsymbol{\chi}\) is even in \(\omega\), according to Eq. (2.4b), and therefore only the odd part of the vacuum Green's dyadic survives. The resulting torque is thus of the same form as for the EE contribution, except for the sign, and the replacement \(\beta\to\beta^{\prime}\). The combination of the two contributions thus yields the torque on a nonreciprocal body in vacuum, when the temperature of the body, \(T^{\prime}\), differs from that of the blackbody radiation, \(T\), due to PP and EE fluctuations: \[\tau_{i}=\int_{-\infty}^{\infty}\frac{d\omega}{2\pi}\frac{\omega^{3}}{6\pi} \epsilon_{ijk}\,{\rm Re}\,\alpha_{jk}(\omega)\left[\coth\frac{\beta\omega}{2} -\coth\frac{\beta^{\prime}\omega}{2}\right]. \tag{4.13}\] This result exactly agrees with that of Guo and Fan [24], for zero rotational velocity, and with that of Strekha et al. [25]. However, there is no quantum vacuum force in this static situation, at least in first order, which we will demonstrate in Sec. VII. (Actually, this is already evident from the vanishing of the external torque contribution.) Let us use the model in Eq. (3.5) to give an estimate of the size of the torque. Inserting this into Eq. (4.13) and letting \(\omega=\eta x\) gives \[\tau_{z}=\frac{\eta\omega_{c}\omega_{p}^{2}V}{3\pi^{2}}\int_{-\infty}^{\infty }dx\frac{x^{3}}{(x^{2}+1)^{2}}\left(\coth\frac{\beta^{\prime}\eta x}{2}-\coth \frac{\beta\eta x}{2}\right)=\frac{4\eta\omega_{c}\omega_{p}^{2}V}{3\pi^{2}}[I _{2}(\beta^{\prime}\eta)-I_{2}(\beta\eta)], \tag{4.14}\] where \(V\) is the volume of the body, and the integrals are defined by Eq. (A5) of Appendix A. As expected, this is positive if \(T^{\prime}>T\). These integrals are readily evaluated in Eq. (A6): \[\tau_{z}=\frac{\eta\omega_{c}\omega_{p}^{2}V}{3\pi^{2}}\left[\frac{\pi}{\eta }(T-T^{\prime})+2\ln\frac{T^{\prime}}{T}+2\psi\left(\frac{\eta}{2\pi T} \right)-2\psi\left(\frac{\eta}{2\pi T^{\prime}}\right)+\frac{\eta}{2\pi T} \psi^{\prime}\left(\frac{\eta}{2\pi T}\right)-\frac{\eta}{2\pi T^{\prime}} \psi^{\prime}\left(\frac{\eta}{2\pi T^{\prime}}\right)\right]. \tag{4.15}\] The torque, and the approximations, are shown in Fig. 1. For a gold (\(\omega_{p}=9\) eV, \(\eta=0.035\) eV) nanosphere of radius 100 nm, with \(\omega_{c}=10^{-4}\) eV, the prefactor in Eq. (4.15) is \(8\times 10^{-25}\) Nm. ## V Torque on a rotating body Of course, a torque on a body will cause it to rotate. So, what is the torque on a rotating body? 
Naturally, there should be a vacuum torque on a rotating body made of ordinary (reciprocal) material, just as there is quantum vacuum friction on a linearly moving body [12; 13]. The nonreciprocal aspect of this torque was first treated in Ref. [24]. We consider a body rotating about the \(z\) axis passing through its center of mass with angular velocity \(\Omega\). The formula (4.5) should still apply, with the external torque (the second term there) still not contributing if the background is vacuum. However, the polarization and electric fields should now refer to the body (rotating) frame, denoted by a prime subsequently. For low velocities, these are related to those in the blackbody (unprimed) frame by a rotation: \[E^{\prime}_{x}(\mathbf{r}^{\prime},t) = E_{x}(\mathbf{r},t)\cos\Omega t+E_{y}(\mathbf{r},t)\sin\Omega t, \tag{10a}\] \[E^{\prime}_{y}(\mathbf{r}^{\prime},t) = E_{y}(\mathbf{r},t)\cos\Omega t-E_{x}(\mathbf{r},t)\sin\Omega t, \tag{10b}\] which means for the frequency transforms, \[E^{\prime}_{x}(\mathbf{r}^{\prime};\omega) = \frac{1}{2}\left[E_{x}(\mathbf{r},\omega_{+})+E_{x}(\mathbf{r}; \omega_{-})\right]+\frac{1}{2i}\left[E_{y}(\mathbf{r};\omega_{+})-E_{y}( \mathbf{r};\omega_{-})\right], \tag{11a}\] \[E^{\prime}_{y}(\mathbf{r}^{\prime};\omega) = \frac{1}{2}\left[E_{y}(\mathbf{r},\omega_{+})+E_{y}(\mathbf{r}; \omega_{-})\right]-\frac{1}{2i}\left[E_{x}(\mathbf{r};\omega_{+})-E_{x}( \mathbf{r};\omega_{-})\right], \tag{11b}\] where \(\omega_{\pm}=\omega\pm\Omega\). \(\mathbf{P}\) transforms in the same way. The strategy followed to calculate the quantum rotational friction is the same as that used for quantum rectilinear friction [13]. There are two contributions: field fluctuations and dipole fluctuations. For the former we use Eq. (10b) to replace the polarization by the electric field, but now understood in the body frame. Then we have to transform both electric fields to the blackbody frame. For vacuum friction we can use Eq. (11) for the Green's dyadic that appears when the FDT is employed for the fields. We also only keep half the terms: only those proportional to \(\delta(\omega_{\pm}+\nu_{\mp})\) do not average to zero in time. The result of a straightforward calculation is \[\tau_{z}^{\rm EE}=\int\frac{d\omega}{2\pi}\frac{\omega^{3}}{6\pi}\left[{\rm Im }(\alpha_{xx}+\alpha_{yy})(\omega_{-})+{\rm Re}(\alpha_{xy}-\alpha_{yx})( \omega_{-})\right]\coth\frac{\beta\omega}{2}, \tag{12}\] where we have also noticed that under \(\omega\to-\omega\), \(\omega_{+}\to-\omega_{-}\). The procedure for the PP fluctuations is similar, except now we replace \(\mathbf{E}\) by \(\mathbf{P}\) according to Eq. (10a). This holds in the blackbody frame, so \(\mathbf{P}\) must be transformed back to the body (rotating) frame. Simplifications occur as before for the vacuum case, and we find after a bit of algebra \[\tau_{z}^{\rm PP}=-\int\frac{d\omega}{2\pi}\frac{\omega^{3}}{6\pi}\left[{\rm Im }(\alpha_{xx}+\alpha_{yy})(\omega_{-})+{\rm Re}(\alpha_{xy}-\alpha_{yx})( \omega_{-})\right]\coth\frac{\beta^{\prime}\omega_{-}}{2}. \tag{13}\] Thus, when the two contributions are added, we find for the torque on a (slowly) rotating body: \[\tau_{z}=\frac{1}{12\pi^{2}}\int_{-\infty}^{\infty}d\omega\,\omega_{+}^{3} \left[{\rm Im}(\alpha_{xx}+\alpha_{yy})(\omega)+{\rm Re}(\alpha_{xy}-\alpha_{ yx})(\omega)\right]\left(\coth\frac{\beta\omega_{+}}{2}-\coth\frac{\beta^{\prime} \omega}{2}\right). \tag{14}\] This is precisely the torque found in Ref. [24] (recall \(4\pi\alpha^{G}=\alpha^{\rm HL}\)). 
This result for an isotropic (reciprocal) particle was Figure 1: The quantum vacuum torque, apart from the prefactor, in Eqs. (12) or (13) is shown as a function of \(T^{\prime}/\eta\) by the solid line for \(T/\eta=0.714\), appropriate for an environment at room temperature, 300 K, and a gold body. The dashed blue line is the high-temperature approximation (10b), while the low-temperature approximation (10b) is shown by the dashed red line. In both limiting cases, the environmental temperature is treated exactly. given in Ref. [28], which further considered the effect of a magnetic field.5 Note that if \(\Omega=0\) the first term involving the diagonal polarizabilities vanishes because the integrand is odd, and the second term reproduces Eq. (4.13). Footnote 5: Earlier related works on forces and torques on bodies with various kinds of asymmetries include Refs. [19; 20; 21; 29]. It is illuminating to expand this expression to leading order in the rotational velocity \(\Omega\) (this is the adiabatic approximation): \[\tau_{z} = \frac{1}{12\pi^{2}}\int_{-\infty}^{\infty}d\omega\,\omega^{3} \bigg{\{}\operatorname{Re}\left(\alpha_{xy}-\alpha_{yx}\right)\left(\omega \right)\left[\coth\frac{\beta\omega}{2}-\coth\frac{\beta^{\prime}\omega}{2}\right] \tag{5.6}\] \[\quad-\Omega\frac{3}{\omega}\operatorname{Im}\left(\alpha_{xx}+ \alpha_{yy}\right)\left(\omega\right)\left[\coth\frac{\beta^{\prime}\omega}{2 }-\coth\frac{\beta\omega}{2}\right]-\Omega\frac{\beta}{2}\operatorname{Im} \left(\alpha_{xx}+\alpha_{yy}\right)\left(\omega\right)\csc\mathrm{s}\mathrm{ ch}^{2}\frac{\beta\omega}{2}\bigg{\}}.\] The first term here is the (nonreciprocal) quantum vacuum torque (4.13), the second is the nonequilibrium contribution to the ordinary (reciprocal) quantum vacuum frictional torque, and the third term is the analog of the Einstein-Hopf quantum vacuum friction. The sum of the two frictional terms is a drag if \(T^{\prime}>T\). If \(T^{\prime}<T\), the angular velocity changes sign, so initially the friction remains a drag, but for sufficiently low temperatures, \(T^{\prime}\), the second term in Eq. (5.6) will dominate, and exponential growth of the angular velocity will ensue, insofar as the low-velocity approximation remains valid. Note, that if the last two terms constitute a drag, the nonreciprocal torque found here will lead to the body rotating with a constant terminal angular velocity. Writing Eq. (5.6) in the abbreviated form,6 Footnote 6: That is, \(\tau_{0}=\tau_{z}(\Omega=0)\) and \(\tau_{1}^{\prime}=-\frac{d\tau_{z}}{d\Omega}(\Omega=0)\). \[\tau_{z}=I\dot{\Omega}=\tau_{0}-\Omega\tau_{1}^{\prime}, \tag{5.7}\] where \(I\) is the moment of inertia of the body, we immediately obtain \[\Omega(t)=\frac{\tau_{0}}{\tau_{1}^{\prime}}\left(1-e^{-\tau_{1}^{\prime}t/I} \right), \tag{5.8}\] if the body is not rotating at time \(t=0\). The terminal velocity is \(\Omega_{T}=\Omega(t\to\infty)=\tau_{0}/\tau_{1}^{\prime}\), which might be expected to be small. (This, of course, assumes that the particle and environmental temperatures do not change. We will address the tendency toward thermal equilibrium in Sec. IX.) To proceed, let us again use the model (3.2), with \(\omega_{0}=0\) and \(\omega_{c}\ll\eta\). Then we have \[\operatorname{Im}(\alpha_{xx}+\alpha_{yy})(\omega)=\frac{2\omega_{p}^{2}\eta} {\omega(\omega^{2}+\eta^{2})}V. 
\tag{5.9}\] The result is (\(x=\omega/\eta\)) \[\tau_{1}^{\prime} = \frac{2\omega_{p}^{2}\eta V}{3\pi^{2}}\left(3+\beta\frac{\partial }{\partial\beta}\right)\int_{0}^{\infty}dx\frac{x}{(x^{2}+1)}\left[\frac{1}{e^ {\beta^{\prime}\eta x}-1}-\frac{1}{e^{\beta\eta x}-1}\right] \tag{5.10}\] \[= \frac{\omega_{p}^{2}\eta V}{3\pi^{2}}\left\{\frac{\pi}{\eta}(2T- 3T^{\prime})+3\ln\frac{T}{T^{\prime}}-1+3\psi\left(\frac{\eta}{2\pi T}\right) -3\psi\left(\frac{\eta}{2\pi T^{\prime}}\right)+\frac{\eta}{2\pi T}\psi^{ \prime}\left(\frac{\eta}{2\pi T}\right)\right\}.\] Let us write, from Eqs. (4.15) and (5.10), \[\tau_{0}=\frac{\eta\omega_{c}\omega_{p}^{2}V}{\pi^{2}}f(T,T^{\prime}),\quad \tau_{1}^{\prime}=\frac{\eta\omega_{p}^{2}V}{\pi^{2}}g(T,T^{\prime}). \tag{5.11}\] Then, the terminal angular velocity is \[\Omega_{T}=\frac{\tau_{0}}{\tau_{1}^{\prime}}=\omega_{c}\frac{f(T,T^{\prime})} {g(T,T^{\prime})}\sim\omega_{c}\sim 10^{-4}\mathrm{eV}\sim 10^{11}\,\mathrm{s}^{-1}, \tag{5.12}\] perhaps surprisingly high, but very small compared to atomic frequencies. (The terminal circumferential speed in this case is \(\Omega_{T}R\sim 10^{4}\) m/s, for a gold nanosphere of radius \(R=100\) nm.) The relaxation time required to reach this velocity is \[t_{0}=\frac{I}{\tau_{1}^{\prime}}\sim\frac{MR^{2}}{\omega_{p}^{2}\eta V}\sim 1 0^{6}\,\mathrm{s}, \tag{5.13}\] for the same parameters. The temperature dependence of the terminal angular velocity, \(\Omega_{T}\), is shown in Fig. 2. Note that if the temperature of the body is lower than that of the environment, the terminal angular velocity is negative. Whether the body is hotter or not too much colder than the environment, it will reach a terminal velocity if the temperature difference is maintained. If the temperature of the body is much colder than that of the environment (for the example shown, about half room temperature), the frictional torques reverse sign, and no bound to the angular velocity can be reached. A negative \(\tau_{1}^{\prime}\) means exponential growth. Of course, before the angular velocity gets too large, the nonrelativistic approximation used here breaks down, even if the temperature difference can be maintained by some external or internal agent. However, it is not necessary to wait a long time to reach the terminal velocity, because the initial angular acceleration, \[\dot{\Omega}(0)=\frac{\Omega_{T}}{t_{0}}\sim 10^{5}\,\mathrm{s}^{-2}, \tag{5.14}\] should be easily discernible. ## VI Torque in presence of a dielectric plate What happens if the background is less trivial, say consisting of an isotropic dielectric plate filling the halfspace \(z<0\), while the body lies a distance \(a\) above it? Then, of course, there will be a torque on a nonspherical body as well as a force, due to ordinary Casimir forces, even when the body is made of ordinary reciprocal material. What would be unusual is if there were a torque around the \(z\) axis, since the environment possesses rotational symmetry about that direction. For simplicity, we will assume that the entire background, vacuum plus dielectric plate, is in equilibrium at temperature \(T\), while the nanoparticle has temperature \(T^{\prime}\). In that case, the \(z\)-component of the torque coming from field fluctuations is given by Eq. (4.6). Now both terms can contribute, but only for a nonreciprocal body, which might be irregularly shaped. 
For such an body, where \[\hat{\chi}_{ij}(\mathbf{r};\omega)=\mathrm{Re}(\chi_{ij}-\chi_{ji})(\mathbf{r };\omega) \tag{6.1}\] is nonzero, the normal torque component can be computed from the explicit construction of the Green's dyadic (for example, as given in Ref. [12]). Using the Fourier representation (2.7), the integration over \(k_{x}\) and \(k_{y}\) will vanish except for the \(g_{xx}\) and \(g_{yy}\) terms, for the first term in Eq. (4.6), while \(g_{xz,zx}\) and \(g_{yz,zy}\) contribute for the second term, yielding the scattering part of the torque: \[\tau_{z}^{s} = \int(d\mathbf{r})\frac{d\omega}{2\pi}\left[\coth\frac{\beta\omega }{2}-\coth\frac{\beta^{\prime}\omega}{2}\right]\frac{1}{4}\int\frac{(d\mathbf{ k}_{\perp})}{(2\pi)^{2}}\bigg{\{}\hat{\chi}_{xy}(\mathbf{r};\omega)\,\mathrm{Im} \left[\left(\kappa r^{H}+\frac{\omega^{2}}{\kappa}r^{E}\right)e^{-2\kappa z} \right] \tag{6.2}\] \[\quad+\left[\hat{\chi}_{yz}(\mathbf{r};\omega)x-\hat{\chi}_{xz}( \mathbf{r};\omega)y\right]k^{2}\,\mathrm{Im}\left[r^{H}e^{-2\kappa z}\right] \bigg{\}},\] Figure 2: The terminal angular velocity of the nonreciprocal body when it is hotter or colder, than its environment, which is taken to be at room temperature. The limit of \(\Omega_{T}\) for high particle temperature is universally \(\omega_{c}/3\), independent of the background temperature. When the temperature is lower than that of the background, the frictional torque initially acts as a drag, but for sufficiently low temperature, the “frictional” terms change sign, and the angular velocity increases exponentially without bound. where \(\kappa=\sqrt{k^{2}-\omega^{2}}\), and the transverse magnetic and electric reflection coefficients are \[r^{H}=\frac{\kappa-\kappa^{\prime}/\varepsilon(\omega)}{\kappa+\kappa^{\prime}/ \varepsilon(\omega)},\quad r^{E}=\frac{\kappa-\kappa^{\prime}}{\kappa+\kappa^{ \prime}},\quad\kappa^{\prime}=\sqrt{k^{2}-\omega^{2}\varepsilon(\omega)}. \tag{101}\] Now the torque depends on the distribution of the anisotropic material across the body, so is not describable by simply an effective nonreciprocal polarizability. Note further that the second two terms in Eq. (100), proportional to \(x\) and \(y\), respectively, depend on the position of the body as well as the distribution of material within the body. If we write \({\bf r}={\bf R}+{\bf r}^{\prime}\), where \({\bf R}\) locates the center of mass of the body, we can read off the force on the center of mass of the body from \(\tau_{z}=XF_{y}-YF_{x}\), so that \[F_{x}=\int(d{\bf r})\int_{0}^{\infty}\frac{d\omega}{2\pi}\left[\frac{1}{e^{ \beta\omega}-1}-\frac{1}{e^{\beta^{\prime}\omega}-1}\right]\hat{\chi}_{xz}({ \bf r};\omega)\int\frac{(d{\bf k}_{\perp})}{(2\pi)^{2}}k^{2}\mathop{\rm Im} \nolimits\left[r^{H}e^{-2\kappa z}\right]. \tag{102}\] We will derive this result directly in Sec. VIII. ### Torque for a nanoparticle above a perfectly conducting plate A very simple example is provided by a perfectly conducting surface lying in the \(z=0\) plane. This means \(r^{H,E}=\pm 1\). Consider only the case with \(\hat{\chi}_{xy}\neq 0\), that is, for our model, the magnetic field lying in the \(z\) direction. The imaginary part comes only from the region where \(\omega^{2}>k^{2}\), where \[\mathop{\rm Im}\nolimits\kappa=-\mathop{\rm sgn}\nolimits(\omega)\sqrt{\omega ^{2}-k^{2}}, \tag{103}\] and then the integral in Eq. 
(100) over transverse wavenumbers is (provided the body is of negligible extent, a nanoparticle, so \(z=a\)) \[\frac{1}{(2\pi)^{2}}\int_{0}^{|\omega|}dk\,k\int_{0}^{2\pi}d\theta \left[-\sqrt{\omega^{2}-k^{2}}-\frac{\omega^{2}}{\sqrt{\omega^{2}-k^{2}}} \right]\cos\left[2a\sqrt{\omega^{2}-k^{2}}\right] = -\frac{\omega^{3}}{2\pi}\int_{0}^{1}dy(y^{2}+1)\cos 2\omegaay \tag{104}\] \[= -\frac{1}{\pi(2a)^{2}}[u\cos u+(u^{2}-1)\sin u].\] Here we have defined \(u=2\omega a\). Then, we write the scattering part of the torque in the direction perpendicular to the plate as \[\tau_{z}^{s}=\frac{2}{\pi^{2}}V\omega_{c}\omega_{p}^{2}\eta\int_{0}^{\infty} du\frac{u\cos u+(u^{2}-1)\sin u}{(u^{2}+(2\eta a)^{2})^{2}}\left[\frac{1}{e^{ \beta u/(2a)}-1}-\frac{1}{e^{\beta^{\prime}u/(2a)}-1}\right]. \tag{105}\] Close to the plate, \(2\eta a\ll 1\), \(\beta/(2a)\gg 1\), \(\beta^{\prime}/(2a)\gg 1\), where the integral is dominated by small values of \(u\), for which the numerator in the integrand approaches \[u\cos u+(u^{2}-1)\sin u\sim\frac{2}{3}u^{3}, \tag{106}\] we obtain precisely the negative of the torque coming from the vacuum contribution (101). That the total normal torque vanishes as the perfectly conducting plate is approached is expected, because the tangential electric field must vanish there. The total torque is then \[\tau_{z}^{\rm vac}+\tau_{z}^{s}=\frac{4}{3\pi^{2}}V\omega_{c}\omega_{p}^{2} \eta\int_{0}^{\infty}du\frac{u^{3}}{(u^{2}+(2a\eta)^{2})^{2}}\left[1-\frac{3 }{2}\frac{u\cos u+(u^{2}-1)\sin u}{u^{3}}\right]\left[\frac{1}{e^{\beta^{ \prime}u/(2a)}-1}-\frac{1}{e^{\beta u/(2a)}-1}\right]. \tag{107}\] This is plotted in Fig. 3. ## VII Quantum vacuum force Let us start by writing the force on a dielectric body on which an electric field is impressed, writing the fields in terms of their frequency transforms: \[{\bf F} = \int(d{\bf r})\int\frac{d\omega}{2\pi}\frac{d\nu}{2\pi}e^{-i( \omega+\nu)t}\left\{-\left[{\bf\nabla}\cdot{\bf P}({\bf r};\omega)\right]{\bf E }({\bf r};\nu)-i\omega{\bf P}({\bf r};\omega)\times\left[\frac{1}{i\nu}{\bf \nabla}\times{\bf E}({\bf r};\nu)\right]\right\} \tag{108}\] \[= \int(d{\bf r})\int\frac{d\omega}{2\pi}\frac{d\nu}{2\pi}e^{-i( \omega+\nu)t}\left\{-\left[{\bf\nabla}\cdot{\bf P}({\bf r};\omega)\right]{\bf E }({\bf r};\nu)\left(1+\frac{\omega}{\nu}\right)-\frac{\omega}{\nu}{\bf P}({ \bf r};\omega)\cdot({\bf\nabla})\cdot{\bf E}({\bf r};\nu)\right\},\] where in the second line we integrated spatially by parts. Unlike for the torque, the total divergence does not contribute. Now we either expand \(\mathbf{P}\) in terms of \(\mathbf{E}\), using Eq. (4.3b), or \(\mathbf{E}\) in terms of \(\mathbf{P}\), using Eq. (4.3a), and then use the fluctuation-dissipation theorem on the two parts. This yields rather immediately for the force on the body7 Footnote 7: This general formula can be derived immediately from the torque on the center of mass inferred from Eq. (4.6). \[F_{i}^{\rm EE}+F_{i}^{\rm PP}=\int\frac{d\omega}{2\pi}(d\mathbf{r})\left[\chi_{ jl}(\mathbf{r};\omega)\nabla^{\prime}_{i}(\Im\mathbf{\Gamma})_{lj}(\mathbf{r}, \mathbf{r}^{\prime};\omega)\big{|}_{\mathbf{r}^{\prime}=\mathbf{r}}\coth\frac{ \beta\omega}{2}+(\Im\mathbf{\chi})_{jl}(\mathbf{r};\omega)\nabla_{i}\Gamma_{lj}( \mathbf{r},\mathbf{r}^{\prime};-\omega)\big{|}_{\mathbf{r}^{\prime}=\mathbf{r} }\coth\frac{\beta^{\prime}\omega}{2}\right]. \tag{7.2}\] However, for vacuum, Eq. 
(4.8) describes the vacuum Green's dyadic, So, again in the coincidence limit, it is then clear that the gradient of the Green's dyadic vanishes, and thus there is no vacuum force. The conclusion appears to be opposite to that of Refs. [29; 30], but evidently the self-propulsion found there arises as a second-order effect, with which we will deal in a later paper. There, nonreciprocity is not required. The only necessary conditions are that the system be out of equilibrium, and that the body be extended and inhomogeneous. ## VIII Transverse force on a nonreciprocal nanoparticle induced by a dielectric surface In contrast to the result found in the previous section, a nonreciprocal body does experience, in first order, a force transverse to another ordinary body, even when both bodies are at rest, provided they are not in thermal equilibrium with each other. This was observed in Ref. [30] and more recently in Refs. [31; 21]. This is still described by the formula (7.2), but requires the scattering part of the Green's function. We will consider the second, ordinary body to be a planar dielectric, with permittivity \(\varepsilon(\omega)\), lying in the halfspace \(z<0\), while the nonreciprocal body lies at a distance \(z=a\) above the plane. It is convenient then to introduce a two-dimensional Fourier transform in the transverse coordinates, \(x\) and \(y\). Then the force in the \(x\) direction, say, is \[F_{x}=-\int(d\mathbf{r})\frac{d\omega}{2\pi}\frac{(d\mathbf{k}_{\perp})}{(2 \pi)^{2}}ik_{x}\left[\chi_{jk}(\mathbf{r};\omega)(\Im\mathbf{g}^{s})_{kj}(z,z ;\omega,\mathbf{k}_{\perp})\coth\frac{\beta\omega}{2}-(\Im\mathbf{\chi})_{jk}( \mathbf{r};\omega)g^{s}_{kj}(z,z;-\omega,\mathbf{k}_{\perp})\coth\frac{\beta^ {\prime}\omega}{2}\right]. \tag{8.1}\] Here, the \(s\) superscripts on the reduced Green's functions represent the scattering parts, since it is evident that the bulk (vacuum) part does not contribute, as already demonstrated in the previous section. Now the integration over \(k_{x}\) and \(k_{y}\) will vanish except for \(g_{xz}\) and \(g_{zx}\).8 Hence, unlike for the torque, only the TM Green's function contributes. Using the properties of \(\Im\mathbf{\chi}\) given in Sec. II, we immediately obtain Footnote 8: For the bulk (vacuum) contribution, the force would involve the symmetric limit \(\lim_{z\to z^{\prime}}\text{sgn}(z-z^{\prime})=0\). \[F_{x}=2\int_{0}^{\infty}\frac{d\omega}{2\pi}\int\frac{(d\mathbf{k}_{\perp})}{(2 \pi)^{2}}(d\mathbf{r})\hat{\chi}_{xz}(\mathbf{r};\omega)k_{x}^{2}\,\text{Im} \left(r^{H}e^{-2\kappa z}\right)\left[\frac{1}{e^{\beta\omega}-1}-\frac{1}{e^{ \beta^{\prime}\omega}-1}\right]. \tag{8.2}\] Figure 3: Torque (apart from the prefactor in Eq. (6.9)) as a function of separation of the nanoparticle \(a\) from the perfectly conducting plate in \(\mu\)m, for a damping parameter of \(\eta=0.035\) eV. The temperatures are taken to be \(T=300\) K and \(T^{\prime}=600\) K. The torque vanishes, as expected, close to the plate, and approaches the vacuum value far from the plate. Most interesting is the appearance of a very weak maximum at about 14.4 \(\mu\)m, just before the decrease to the vacuum torque value, as displayed in the inset. This force is precisely that inferred from the torque in Eq. (6.4). As with Casimir friction, this force will vanish unless dissipation occurs somewhere. This could be due to dissipation in the dielectric slab, or to radiation. We will consider these in the following subsections. 
### Dissipation in a metallic slab We will describe the metallic substrate by a Drude model, \[\varepsilon(\omega)=1-\frac{\omega_{p}^{2}}{\omega^{2}+i\omega\nu}, \tag{8.3}\] where \(\omega_{p}\) is the plasma frequency, and \(\nu\) the damping parameter. For simplicity, we will consider the regime \[\nu\ll\omega\ll\omega_{p},\quad\omega\ll k, \tag{8.4}\] so that \[\mathrm{Im}\,\varepsilon(\omega)\approx\frac{\omega_{p}^{2}\nu}{\omega^{3}}, \tag{8.5}\] and then, for low frequencies, \[\mathrm{Im}\,r^{H}=\mathrm{Im}\,\frac{\kappa-\kappa^{\prime}/ \varepsilon}{\kappa+\kappa^{\prime}/\varepsilon}\approx\mathrm{Im}\,\frac{ \varepsilon-1}{\varepsilon+1}\approx\frac{2\omega\nu}{\omega_{p}^{2}}. \tag{8.6}\] Thus, in this approximation, where we crudely replace \(\kappa\) and \(\kappa^{\prime}\) by \(k\), the force on a nanoparticle of negligible extent is \[F_{x} = -2\frac{\nu}{\omega_{p}^{2}}\int\frac{d\omega}{2\pi}\frac{(d{ \bf k}_{\perp})}{(2\pi)^{2}}\hat{\alpha}_{xz}(\omega)\omega k_{x}^{2}e^{-2 \kappa a}\left(\frac{1}{e^{\beta\omega}-1}-\frac{1}{e^{\beta^{\prime}\omega}-1}\right) \tag{8.7}\] \[\approx -\frac{12}{(2\pi)^{2}}\frac{1}{(2a)^{4}}\frac{\nu}{\omega_{p}^{ 2}}\int_{0}^{\infty}d\omega\,\hat{\alpha}_{xz}(\omega)\omega\left(\frac{1}{e^ {\beta\omega}-1}-\frac{1}{e^{\beta^{\prime}\omega}-1}\right).\] Now for the nonreciprocal polarizability, let us use the model (3.5), where we now assume that the magnetic field (confined to the particle) lies in the \(y\) direction. This leads directly to the following formula for the force, \((x=\omega/\eta)\) \[F_{x}=\frac{3V}{4\pi^{2}a^{4}}\frac{\nu}{\eta}\omega_{c}f(\beta\eta,\beta^{ \prime}\eta),\quad f(\beta\eta,\beta^{\prime}\eta)=-\int_{0}^{\infty}dx\frac{ x}{(x^{2}+1)^{2}}\left[\frac{1}{e^{\beta\eta x}-1}-\frac{1}{e^{\beta^{\prime}\eta x}-1} \right], \tag{8.8}\] where the integral follows from Appendix A, \[\int_{0}^{\infty}dx\frac{x}{(x^{2}+1)^{2}}\frac{1}{e^{\beta\eta x}-1}=\frac{ 1}{4}\left[1+\frac{\pi}{\beta\eta}-\frac{\beta\eta}{2\pi}\psi^{\prime}\left( \frac{\beta\eta}{2\pi}\right)\right]. \tag{8.9}\] The dimensionless force \(f\) is plotted in Fig. 4, and compared to the high-temperature approximation. The prefactor, for \(\nu=\eta\), \(a=1\,\mu\)m, and the radius of the nanosphere being 100 nm, is \(5\times 10^{-21}\) N. ### Transverse force in presence of a perfectly conducting plate If the slab is a perfect conductor, with \(r^{H}=1\), the formula for the transverse force simplifies considerably. The imaginary part of the Green's function then requires that \(\omega^{2}>k^{2}\), and so the integral over the wavenumber is \[\int\frac{(d{\bf k}_{\perp})}{(2\pi)^{2}}k_{x}^{2}\mathrm{Im}\,e^{-2\kappa a }=-\frac{1}{4\pi}\frac{1}{(2a)^{4}}[6u\cos u+2(u^{2}-3)\sin u], \tag{8.10}\] where \(u=2\omega a\). Then, the transverse force is \[F_{x}=\frac{\omega_{c}\eta\omega_{p}^{2}V}{2\pi^{2}a}f_{0}(\epsilon,b,b^{ \prime}),\quad f_{0}(\epsilon,b,b^{\prime})=\int_{0}^{\infty}du\frac{6u\cos u +2(u^{2}-3)\sin u}{(u^{2}+\epsilon^{2})^{2}}\left[\frac{1}{e^{ab}-1}-\frac{1 }{e^{ab^{\prime}}-1}\right], \tag{8.11}\] where \(\epsilon=2\eta a\), \(b=1/(2aT)\), and \(b^{\prime}=1/(2aT^{\prime})\). The integral \(f\) is plotted in Fig. 5 as a function of the nanoparticle temperature \(T^{\prime}\) for the environment at room temperature, for a separation of \(a=100\) nm, with a damping parameter appropriate for gold, \(\eta=0.035\) eV. 
Note, that the high-temperature limit for the force is given by \(f_{0}\sim\frac{\pi}{b^{\prime}}\), for \(b^{\prime}\ll 1/\epsilon,b\), but this limit requires very high temperatures which are not accessible in practice. It is noteworthy that Figs. 4 and 5 are qualitatively (but not quantitatively) similar, given that the physical mechanisms invoked are rather different. It is easily seen that the lateral force rapidly vanishes as \(a\to\infty\), consistent with the absence of a quantum vacuum force. We expect, as we saw for the torque, that this force will be resisted by the quantum vacuum friction in the presence of the plate, which for low velocities will lead to a terminal velocity, according to \[m\frac{dv}{dt}=F_{0}-vF_{1}^{\prime}\Rightarrow v(t)=\frac{F_{0}}{F_{1}^{ \prime}}\left(1-e^{-F_{1}^{\prime}t/m}\right). \tag{8.12}\] We require, then, the nonequilibrium frictional force in the presence of a conducting plate, which we derive in Figure 4: Force between a nonreciprocal nanoparticle and a metal plate with finite conductivity out of thermal equilibrium. The temperature of both the plate and the background electromagnetic field is fixed at room temperature, which corresponds to \(T/\eta=0.714\) for gold. The polarizability of the nanoparticle is described by the model (3.5). The temperature of the nanoparticle, \(T^{\prime}\), is given in units of the damping parameter for gold, \(\eta\). For comparison, the straight line shows the force when both temperatures are large. Figure 5: The transverse force given in Eq. (8.11) as a function of the temperature of the nonreciprocal nanoparticle relative to room temperature, \(300\) K, for the environment and perfectly conducting plate at room temperature. Here we take the separation \(a\) of the nanoparticle and the plate to be \(100\) nm, and the damping to be that appropriate for gold, \(0.035\) eV. In this case, the prefactor in the force in Eq. (8.11), for a gold nanosphere of \(10\) nm radius is \(1.2\times 10^{-20}\) N, so this would be challenging to observe. Appendix B. Using the same model for the permittivity of the nanoparticle, the linear term in the friction is \[F_{1}^{\prime}=\frac{\omega_{p}^{2}\eta V}{\pi^{2}(2a)^{2}}f_{1}(\epsilon,b,b^{ \prime}), \tag{101}\] where \[f_{1}(\epsilon,b,b^{\prime}) = \int_{0}^{\infty}du\frac{u^{3}}{u^{2}+\epsilon^{2}}\bigg{\{}\left[ 1-\frac{2\cos u+(u^{2}-2)\sin u}{u^{3}}\right]\left[\frac{1}{e^{b^{\prime}u}-1 }-\frac{1}{e^{bu}-1}\right] \tag{102}\] \[\quad+\frac{1}{12}\frac{bu}{\sinh^{2}(bu/2)}\left[1-3\frac{-u(u^{ 2}-12)\cos u+(5u^{2}-12)\sin u}{u^{5}}\right]\bigg{\}}.\] Indeed, \(f_{1}\) is always positive, corresponding to a frictional drag, and the corresponding terminal velocity is \[v_{T}=\frac{F_{0}}{F_{1}^{\prime}}=2\omega_{c}a\frac{f_{0}}{f_{1}}. \tag{103}\] The scale factor here is small compared to the speed of light: for a particle 100 nm above the plate, \(2\omega_{c}a=10^{-4}\). The ratio of \(f_{0}/f_{1}\) is shown in Fig. 6. The apparent saturation of the terminal velocity near 0.2 is illusory; for still larger temperatures, the terminal velocity tends to zero, since the frictional force rapidly increases with temperature. However, for these nominal values, the damping time is long: \[t_{0}=\frac{m}{F_{1}^{\prime}}\sim 2\times 10^{3}\,\mathrm{s}\frac{1}{f_{1}} \sim 10^{6}\,\mathrm{s}, \tag{104}\] if the particle is at twice room temperature. 
## IX Relaxation to thermal equilibrium All of the above considerations assumed that the temperatures of the body and of the background are constant. Of course, this will not be so unless some mechanism keeps the system out of thermal equilibrium. Here, we will calculate the time it would take for such a body at rest to come to thermal equilibrium with its environment. We cannot regard the body to be a black body, but we can calculate the rate at which it loses heat from the power (for an isotropic body)9[13] Footnote 9: For the purpose of the rough calculation presented here, we will ignore the small nonreciprocal effects. \[P(T,T^{\prime})=\frac{1}{\pi^{2}}\int_{0}^{\infty}d\omega\,\omega^{4}\,\mathrm{ Im}\,\alpha(\omega)\left[\frac{1}{e^{\beta\omega}-1}-\frac{1}{e^{\beta^{\prime} \omega}-1}\right]=\frac{dQ}{dt}. \tag{105}\] Figure 6: Terminal velocity of a nonreciprocal nanoparticle near a perfectly conducting plate in units of \(2\omega_{c}a\). Here it is assumed that the plate and the background are at temperature \(T=300\) K, that the particle is made of gold, and it is \(a=100\) nm above the surface of the plate. The temperature of the nanoparticle is \(T^{\prime}=1/(2ab^{\prime})\). Thus, the highest particle temperature displayed on the graph is 1200 K. This is related to the rate of change of temperature of the body by its heat capacity: \[\frac{dQ}{dt}=C_{V}(T^{\prime})\frac{dT^{\prime}}{dt}. \tag{101}\] Thus, the time it takes for the body to cool from temperature \(T^{\prime}_{0}\) to temperature \(T^{\prime}_{1}\), where \(T^{\prime}_{0}>T^{\prime}_{1}>T\), is \[t=\int_{T^{\prime}_{0}}^{T^{\prime}_{1}}dT^{\prime}\frac{C_{V}(T^{\prime})}{P( T,T^{\prime})}. \tag{102}\] To proceed, we need a model for the heat capacity of the body. Such is provided by the Debye model,10 which is satisfactory for simple crystals (see Ref. [32]): Footnote 10: We consider bulk effects only, and ignore surface effects, for the purpose of a rough estimate. \[C_{V}(T)=9N\bigg{(}\frac{T}{\Theta}\bigg{)}^{3}\!\int_{0}^{\Theta/T}\!\!dx\frac {x^{4}e^{x}}{(e^{x}-1)^{2}}. \tag{103}\] where \(N\) is the number of atoms constituting the body, and \(\Theta\) is the Debye temperature. This interpolates between the low- and high-temperature limits: \[C_{V}(T)\sim 3N\left\{\begin{array}{cc}1,&T\gg\Theta,\\ \frac{4\pi^{4}}{5}\big{(}\frac{T}{\Theta}\big{)}^{3},&T\ll\Theta.\end{array}\right. \tag{104}\] Since the Debye temperature for gold is about \(\Theta=170\) K, the high-temperature approximation would seem appropriate for an estimate at room temperature and above. We finally need a model for the imaginary part of the polarizability of the body. The Lorenz-Lorentz model would give \[\mbox{Im}\,\alpha(\omega)=\frac{V\omega_{p}^{2}\omega\eta}{(\omega_{1}^{2}- \omega^{2})^{2}+\omega^{2}\eta^{2}}\approx\frac{V\omega_{p}^{2}\omega\eta}{ \omega_{1}^{4}}, \tag{105}\] where, for a metal (Drude model), \(\omega_{1}=\omega_{p}/\sqrt{3}\). The approximation here is appropriate if, as expected, \(\omega_{1}\gg\omega,\eta\). Inserting this approximation into the formula (100) we obtain \[P(T,T^{\prime})\approx\frac{8\pi^{4}}{7}\frac{V\eta}{\omega_{p}^{2}}(T^{6}-T^ {\prime 6}). \tag{106}\] Now we compute the cooling time from Eq. (102): \[t=t_{0}\int_{T^{\prime}_{0}/T}^{T^{\prime}_{1}/T}\!du\frac{1}{1-u^{6}},\quad T ^{\prime}_{0}>T^{\prime}_{1}>T,\quad\mbox{where}\quad t_{0}=\frac{21}{8\pi^{4 }}n\frac{\omega_{p}^{2}}{\eta}\frac{1}{T^{5}} \tag{107}\] Here, \(n\) is the number density of atoms in the body. 
The relaxation scale, \(t_{0}\), is independent of the volume of the particle, and is about \(10^{4}\) seconds for gold, for an environmental temperature of 300 K. The cooling time diverges as \(T^{\prime}_{1}\to T\), but cooling to a temperature slightly above the environmental temperature takes a finite time. The integral here is elementary, but the resulting expression is not very illuminating. We content ourselves by showing some representative values in left panel of Fig. 7. It will be seen that if \(T^{\prime}_{0}\) is appreciably larger than \(T\) the cooling time rapidly saturates to an asymptotic value. If we then take \(T^{\prime}_{0}\) to be large, Fig. 8 shows how long it will take to reach a multiple of the environmental temperature. Thus, we see that the terminal angular velocity seen in Eq. (101) and the terminal linear velocity obtained in Eq. (100) will not be achievable unless some mechanism maintains the thermal imbalance, because the time scales for achieving those velocities, Eqs. (102) and (103), are much longer than the cooling time found here. If the environmental temperature is very low, \(T\ll\Theta\), the cooling time is very much longer. The analysis proceeds as above, using the low temperature limit in Eq. (104), with the result for the cooling time being \[t=\tilde{t}_{0}\int_{(T^{\prime}_{0}/T)^{2}}^{(T^{\prime}_{1}/T)^{2}}\!dy \frac{y}{1-y^{3}},\quad\tilde{t}_{0}=\frac{21}{20}n\frac{\omega_{p}^{2}}{\eta }\left(\frac{T}{\Theta}\right)^{3}\!\frac{1}{T^{5}}. \tag{108}\] The integral, which has a relatively simple analytic form, is shown in the right panel of Fig. 7. The ratio of the time scales in the two cases is \[\frac{\tilde{t}_{0}}{t_{0}}=\frac{2\pi^{4}}{5}\left(\frac{T_{\rm low}}{\Theta} \right)^{\!3}\!\left(\frac{T_{\rm high}}{T_{\rm low}}\right)^{\!5}\sim 10^{7}, \tag{10}\] for \(T_{\rm low}=1\) K, \(T_{\rm high}=300\) K, and \(\Theta=170\) K for gold, so terminal velocities might be achievable. ## X Lorenz-Lorentz correction Hithertofore, we have ignored the Lorenz-Lorentz correction familiar in passing from the permittivity of a body to its polarizability. This was because the forces and torques were derived directly from the macroscopic susceptibilities appropriate for a dissipative metal body. In the case of small bodies, we could always pass from the susceptibility to the mean polarizability by integrating over the volume of the body. However, as is evident from the discussion in the preceding section [see Eq. (9.6)], the effect of the medium on the local electric field can result in a large correction in the case of metal bodies. Figure 8: The cooling time, in terms of the scale \(t_{0}\), for the nanoparticle to cool from a high temperature to \(T_{1}\). It takes an increasingly long time to get very close to the environmental temperature. Figure 7: Time required for a body to cool from temperature \(T_{0}^{\prime}\) to temperature \(T_{l}^{\prime}\) for different environmental temperatures \(T\). Here \(T_{0}^{\prime}>T_{l}^{\prime}>T\). The left panel is for the environmental temperature \(T=300\) K, and the right is for \(T=1\) K. On the left, the upper set of curves is for \(T_{l}^{\prime}/T=1.1\), and the lower curves are for \(T_{l}^{\prime}/T=1.5\). On the right, the three curves from top to bottom are for \(T_{l}^{\prime}/T=1.05\), \(1.1\), and \(1.5\). The times are scaled by the prefactor, \(t_{0}\), in Eq. (9.8), which for a gold body evaluates to about \(10^{4}\) s, for the environment at room temperature, and by \(t_{0}\) (Eq. 
(10)), which is about \(10^{11}\) s, for \(T=1\) K. The difficulty is that the simple Lorenz-Lorentz model is ordinarily derived using spherical symmetry. The relation between the electric polarizability and the permittivity is, in the isotropic case, in HL units, \[\alpha=\frac{\varepsilon-1}{\varepsilon+2}4\pi a^{3}. \tag{10.1}\] This is not valid for a nonsymmetric permittivity; for example, see Ref. [34]. However, for \(\omega_{c}\) small, the nonsymmetric nature is small, so, for the purpose of an estimate, we use, as in Refs. [24; 25], the matrix generalization of the above: \[\mathbf{\alpha}=(\mathbf{\varepsilon-1})(\mathbf{\varepsilon+2})^{-1}4\pi a^{3}. \tag{10.2}\] It is quite straightforward to compute the components of this matrix: the term we need for the torque in Eq. (4.13) is, for \(\omega_{p}\gg\omega\sim T\), \[\operatorname{Re}\alpha_{xy}\approx 54V\frac{\omega^{2}\omega_{c}\eta}{ \omega_{p}^{4}}. \tag{10.3}\] When this is inserted into Eq. (4.13), we obtain \[\tau_{z}=\frac{32}{7}\pi^{4}V\frac{\omega_{c}\eta}{\omega_{p}^{4} }T^{6}\left[1-\left(\frac{T^{\prime}}{T}\right)^{6}\right]. \tag{10.4}\] Putting in the numbers for a 100 nm gold nanosphere, the coefficient of \((1-T^{\prime 6}/T^{6})\) is about \(5\times 10^{-36}\) Nm, some 11 orders of magnitude smaller than that found at the end of Sec. IV. We can also repeat the calculation of the terminal angular velocity in Sec. V in this Lorenz-Lorentz model. Then, \[\tau_{1}^{\prime}=-\frac{1}{6\pi^{2}}\left(3+\beta\frac{\partial }{\partial\beta}\right)\int_{0}^{\infty}d\omega\,\omega^{2}\operatorname{Im}( \alpha_{xx}+\alpha_{yy})(\omega)\left(\frac{1}{e^{\beta\omega}-1}-\frac{1}{e^ {\beta\omega}-1}\right), \tag{10.5}\] where in our model \[\operatorname{Im}(\alpha_{xx}+\alpha_{yy})\approx 18V\frac{\omega\eta}{ \omega_{p}^{2}}, \tag{10.6}\] which implies \[\tau_{1}^{\prime}=\frac{2\pi^{2}}{5}\frac{V\eta}{\omega_{p}^{2}}T ^{4}\left[1+3\left(\frac{T^{\prime}}{T}\right)^{4}\right]. \tag{10.7}\] Note that \(\tau_{1}^{\prime}\) is always positive, indicating that it always opposes the rotation. The corresponding terminal angular velocity is \[\Omega_{T}=\frac{\tau_{0}}{\tau_{1}^{\prime}}=\frac{80}{7}\pi^{2 }\frac{\omega_{c}}{\omega_{p}^{2}}T^{2}\frac{1-\frac{T^{\prime 6}}{T^{6}}}{1+3 \frac{T^{\prime 4}}{T^{4}}}, \tag{10.8}\] where the prefactor, independent of \(T^{\prime}\), implies a substantial angular velocity for gold at room temperature: \(\sim 10^{8}\) s\({}^{-1}\). Although the time required to reach such a velocity is very long, \(t_{0}=I/\tau_{1}^{\prime}\sim 10^{13}\) s, the initial angular acceleration is not so small, \[\dot{\Omega}(0)=\frac{\Omega_{T}}{t_{0}}\sim 10^{-5}\,\text{s}^{-2}. \tag{10.9}\] While this angular acceleration is 10 orders of magnitude smaller than that found without the Lorenz-Lorentz correction in Eq. (5.14), the body will acquire a measurable angular velocity after a relatively small period of observation. However, is this correction valid or even necessary for a metal nanoparticle? The discussion in Secs. IV-VIII is based on describing the susceptiblity of a metal by the phenomenological Drude model, which should include, approximately, all internal effects. There is a large literature on the subject of ordinary polarizabilities of metal nanoparticles--see Refs. [35; 36], for example--where it is seen that both classical and approximate quantum mechanical treatments are inadequate. We are unaware of comparable work in the nonreciprocal case. 
So, to some extent, the issue of applying the Lorenz-Lorentz correction remains open. In this paper we are interested in the interaction between the electromagnetic field and the body, the electromagnetic properties of which are specified by a given susceptibility, so that the crude models for the latter should only be taken as illustrative. Conclusions In this paper we have concentrated on analysis to first order in the susceptibility, to better understand the effects of nonreciprocal materials on torque and on forces for bodies out of thermal equilibrium with their environment. Time-reversal symmetry is broken by these materials, so spontaneous forces and torques are possible. Of course, time reversal symmetry is not broken by electrodynamics, whether classical or quantum; rather the nonreciprocity is a consequence of an external agent, such as a magnetic field, that is encoded in the dielectric response of the materials. Interestingly, potentially observable phenomena are, nevertheless, predicted. A nonreciprocal body out of thermal equilibrium will spontaneously start to rotate, and reach a substantial terminal angular velocity. Such a body will not feel a net force to first order in the susceptibility. However, if the body is placed near a translationally invariant surface, even a perfect conductor, then a force parallel to the surface would arise. The presence of such a surface would tend to suppress the vacuum torque. A potentially observable terminal linear velocity arises here as well, although the time scales are such that it would be difficult to keep the system out of thermal equilibrium. A possible drastic reduction in the strength of these nonreciprocal effects, due to the Lorenz-Lorentz correction for dielectric susceptibilities, is discussed in the penultimate section, although it seems the angular and linear accelerations might still be amenable to observation. Elsewhere, we will examine higher-order effects, to see how phenomena such as vacuum self-propulsion can arise, even for a reciprocal body [29; 30]. ###### Acknowledgements. The work of KAM and XG was supported in part by a grant from the US National Science Foundation, No. 2008417. The work of GK and KAM was supported in part by a grant from the US National Science Foundation, No. PHY-1748958. In particular, GK and KAM thank KITP for its hospitality, and Benjamin Strekha for conversations there, which stimulated the research carried out here. We thank Steve Fulling, Li Yang, Prachi Parashar, Shadi Rezaei, and Venkat Abhignan for collaborative assistance. This paper reflects solely the authors' personal opinions and does not represent the opinions of the authors' employers, present and past, in any way. For the purpose of open access, the authors have applied a CC BY public copyright license to any Author Accepted Manuscript version arising from this submission. ## Appendix A Evaluation and expansion of integrals Here, we express the integrals encountered in the text in terms of the digamma and trigamma functions, \(\psi(z)\) and \(\psi^{\prime}(z)\), and provide corresponding low- and high-temperature expansions. Differentiation of Binet's second integral representation of the log gamma function immediately yields the integral representation \[\psi(z)=\ln z-\frac{1}{2z}-2\int_{0}^{\infty}dx\,\frac{x}{x^{2}+1}\frac{1}{e^ {2\pi zx}-1}. 
\tag{101}\] Thus, \[I_{1}(\beta\eta)\equiv\int_{0}^{\infty}dx\,\frac{x}{x^{2}+1}\frac{1}{e^{\beta \eta x}-1}=\frac{1}{2}\left[-\frac{\pi}{\beta\eta}+\ln\left(\frac{\beta\eta}{ 2\pi}\right)-\psi\left(\frac{\beta\eta}{2\pi}\right)\right]. \tag{102}\] Since \[\beta\frac{\partial}{\partial\beta}\frac{1}{e^{\beta\eta x}-1}=\eta\frac{ \partial}{\partial\eta}\frac{1}{e^{\beta\eta x}-1}=x\frac{\partial}{\partial x }\frac{1}{e^{\beta\eta x}-1}, \tag{103}\] it follows that \[\beta\frac{\partial}{\partial\beta}I_{1}(\beta\eta)=\eta\frac{\partial}{ \partial\eta}I_{1}(\beta\eta)=\int_{0}^{\infty}dx\,\frac{x^{2}}{x^{2}+1}\frac{ \partial}{\partial x}\frac{1}{e^{\beta\eta x}-1}=2\int_{0}^{\infty}dx\,\left[ \frac{x^{3}}{(x^{2}+1)^{2}}-\frac{x}{x^{2}+1}\right]\frac{1}{e^{\beta\eta x}-1}. \tag{104}\] Thus, \[I_{2}(\beta\eta)\equiv\int_{0}^{\infty}dx\,\frac{x^{3}}{(x^{2}+1)^{2}}\frac{1 }{e^{\beta\eta x}-1}=\left[1+\frac{\beta}{2}\frac{\partial}{\partial\beta} \right]I_{1}(\beta\eta)=\left[1+\frac{\eta}{2}\frac{\partial}{\partial\eta} \right]I_{1}(\beta\eta). \tag{105}\] Hence, from Eq. (16), \[I_{2}(\beta\eta)=\frac{1}{4}\left[-\frac{\pi}{\beta\eta}+2\ln\left(\frac{\beta \eta}{2\pi}\right)+1-2\psi\left(\frac{\beta\eta}{2\pi}\right)-\frac{\beta\eta} {2\pi}\psi^{\prime}\left(\frac{\beta\eta}{2\pi}\right)\right]. \tag{17}\] Using the series representation \[\psi(z)=-\frac{1}{z}-\gamma_{E}+z\sum_{k=1}^{\infty}\frac{1}{k(z+k)}, \tag{18}\] where \(\gamma_{E}\) is the Euler-Mascheroni constant, we readily obtain from Eqs. (16) and (17) the small \(\beta\eta\) (or high-temperature) expansions \[I_{1}(\beta\eta)\sim\frac{1}{2}\left[\frac{\pi}{\beta\eta}+\ln\left(\frac{ \beta\eta}{2\pi}\right)+\gamma_{E}\right]\qquad(\beta\eta\to 0)\] (19a) and \[I_{2}(\beta\eta)\sim\frac{1}{4}\left[\frac{\pi}{\beta\eta}+2\ln\left(\frac{ \beta\eta}{2\pi}\right)+1+2\gamma_{E}\right]\qquad(\beta\eta\to 0). \tag{19b}\] Likewise, using the asymptotic representation \[\psi(z)\sim\ln z-\frac{1}{2z}-\sum_{k=1}^{\infty}\frac{B_{2k}}{2kz^{2k}} \qquad(z\to\infty), \tag{20}\] there follow from Eqs. (16) and (17) the large \(\beta\eta\) (or low-temperature) expansions \[I_{1}(\beta\eta)\sim\frac{\pi^{2}}{6\beta^{2}\eta^{2}}-\frac{\pi^{4}}{15\beta ^{4}\eta^{4}}\qquad(\beta\eta\to\infty)\] (21a) and \[I_{2}(\beta\eta)\sim\frac{\pi^{4}}{15\beta^{4}\eta^{4}}\qquad(\beta\eta\to \infty). \tag{21b}\] ## Appendix B Out-of-equilibrium frictional force near perfectly conducting plate Following the discussion in Ref. [13], it is easy to derive the expression for the frictional force used in Sec. VIII.2. The general expression for the force is \[F=\int\frac{d\omega}{2\pi}\frac{d{\bf k}_{\perp}}{(2\pi)^{2}}(k_{x}+\omega v) \operatorname{tr}\Im\mathbf{\alpha}(\omega)\Im\mathbf{\mathbf{g}}^{\prime}(\omega,{ \bf k}_{\perp})\left[\coth\frac{\beta\gamma(\omega+k_{x}v)}{2}-\coth\frac{ \beta^{\prime}\omega}{2}\right]. \tag{22}\] Here \(\mathbf{\mathbf{g}}^{\prime}\) is the reduced Green's function in the rest frame of the particle. For a perfectly conductimg plate, as with the vacuum, \(\mathbf{\mathbf{g}}^{\prime}=\mathbf{\mathbf{g}}\), the Green's function in the frame of the plate and vacuum. From this we can calculate both the frictional force and the propulsive force. The latter is present at \(v=0\), and arises from the antisymmetric parts of the polarizability and the Green's dyadic, and is given in Sec. VIII. The diagonal parts of both these tensors correspond to friction. 
According to the model (3.2) with \(\omega_{c}\) neglected, the diagonal terms of the imaginary part of the polarizability are all equal to \[\operatorname{Im}\alpha_{d}=\frac{V\omega_{p}^{2}\omega\eta}{(\omega_{0}^{2}- \omega^{2})^{2}+\omega^{2}\eta^{2}}. \tag{23}\] That leaves us with the imaginary part of the trace of \(\mathbf{\mathbf{g}}\): \[\operatorname{Im}\operatorname{tr}\mathbf{\mathbf{g}}=\operatorname{sgn}(\omega) \left[\frac{\omega^{2}}{\sqrt{\omega^{2}-k^{2}}}-\sqrt{\omega^{2}-k^{2}}\cos \left(2a\sqrt{\omega^{2}-k^{2}}\right)\right]\theta(\omega^{2}-k^{2}). \tag{24}\] Now when we expand Eq. (14) to first order in \(v\) we obtain two terms: \[F=F^{(1)}+F^{(2)}. \tag{15}\] Here the first term comes from expanding the hyperbolic cotangent: \[F^{(1)}=-\frac{\beta v}{8\pi^{2}}\int_{0}^{\infty}d\omega\,\text{Im}\,\alpha_{d }(\omega)\frac{\omega^{5}}{\sinh^{2}(\beta\omega/2)}\left\{\frac{2}{3}-\frac{2 }{x^{5}}\left[-x(x^{2}-12)\cos x+(5x^{2}-12)\sin x\right]\right\}, \tag{16}\] where we've carried out the elementary integration over \(\mathbf{k}_{\perp}\). Note that the first term here corresponds to the usual Einstein-Hopf effect. The \(F^{(2)}\) contribution to the force corresponds to the \(\omega v\) prefactor in Eq. (14), and is a non-equilibrium friction contribution: \[F^{(2)}=\frac{v}{2\pi^{2}}\int_{0}^{\infty}d\omega\,\text{Im}\,\alpha_{d}( \omega)\omega^{4}\left(\coth\frac{\beta\omega}{2}-\coth\frac{\beta^{\prime} \omega}{2}\right)\left[1-\frac{1}{x^{3}}\left(2x\cos x+(x^{2}-2)\sin x\right) \right], \tag{17}\] again after carrying out the wavenumber integration. The sum of these two terms is Eq. (8.14) if we set \(\omega_{0}=0\), appropriate for a metal nanoparticle.
2303.04338
Provable Pathways: Learning Multiple Tasks over Multiple Paths
Constructing useful representations across a large number of tasks is a key requirement for sample-efficient intelligent systems. A traditional idea in multitask learning (MTL) is building a shared representation across tasks which can then be adapted to new tasks by tuning last layers. A desirable refinement of using a shared one-fits-all representation is to construct task-specific representations. To this end, recent PathNet/muNet architectures represent individual tasks as pathways within a larger supernet. The subnetworks induced by pathways can be viewed as task-specific representations that are composition of modules within supernet's computation graph. This work explores the pathways proposal from the lens of statistical learning: We first develop novel generalization bounds for empirical risk minimization problems learning multiple tasks over multiple paths (Multipath MTL). In conjunction, we formalize the benefits of resulting multipath representation when adapting to new downstream tasks. Our bounds are expressed in terms of Gaussian complexity, lead to tangible guarantees for the class of linear representations, and provide novel insights into the quality and benefits of a multipath representation. When computation graph is a tree, Multipath MTL hierarchically clusters the tasks and builds cluster-specific representations. We provide further discussion and experiments for hierarchical MTL and rigorously identify the conditions under which Multipath MTL is provably superior to traditional MTL approaches with shallow supernets.
Yingcong Li, Samet Oymak
2023-03-08T02:25:28Z
http://arxiv.org/abs/2303.04338v1
# Provable Pathways: Learning Multiple Tasks over Multiple Paths ###### Abstract Constructing useful representations across a large number of tasks is a key requirement for sample-efficient intelligent systems. A traditional idea in multitask learning (MTL) is building a shared representation across tasks which can then be adapted to new tasks by tuning last layers. A desirable refinement of using a shared one-fits-all representation is to construct task-specific representations. To this end, recent PathNet/mnNet architectures represent individual tasks as pathways within a larger supernet. The subnetworks induced by pathways can be viewed as task-specific representations that are composition of modules within supernet's computation graph. This work explores the pathways proposal from the lens of statistical learning: We first develop novel generalization bounds for empirical risk minimization problems learning multiple tasks over multiple paths (Multipath MTL). In conjunction, we formalize the benefits of resulting multipath representation when adapting to new downstream tasks. Our bounds are expressed in terms of Gaussian complexity, lead to tangible guarantees for the class of linear representations, and provide novel insights into the quality and benefits of a multipath representation. When computation graph is a tree, Multipath MTL hierarchically clusters the tasks and builds cluster-specific representations. We provide further discussion and experiments for hierarchical MTL and rigorously identify the conditions under which Multipath MTL is provably superior to traditional MTL approaches with shallow supernets. ## 1 Introduction Multitask learning (MTL) promises to deliver significant accuracy improvements by leveraging similarities across many tasks through shared representations. The potential of MTL has been recognized since 1990s [1] however its impact has grown over time thanks to more recent machine learning applications arising in computer vision and NLP that involve large datasets with thousands of classes/tasks. Representation learning techniques (e.g. MTL and self-supervision) are also central to the success of deep learning as large pre-trained models enable data-efficient learning for downstream transfer learning tasks [4, 1]. As we move from tens of tasks trained with small models to thousands of tasks trained with large models, new statistical and computational challenges arise: First, not all tasks will be closely related to each other, for instance, tasks might admit a natural clustering into groups. This is also connected to heterogeneity challenge in federated learning where clients have distinct distributions and benefit from personalization. To address this challenge, rather than a single task-agnostic representation, it might be preferable to use a task-specific representation. Secondly, pretrained language and vision models achieve better accuracy with larger sizes which creates computational challenges as they push towards trillion parameters. This motivated new architectural proposals such as Pathways/PathNet [1, 1, 1] where tasks can be computed over compute-efficient subnetworks. At a high-level, each subnetwork is created by a composition of modules within a larger supernet which induces a pathway as depicted in Figure 1. Inspired from these challenges, we ask **Q:** What are the statistical benefits of learning task-specific representations along supernet pathways? 
Our primary contribution is formalizing the Multipath MTL problem depicted in Figure 1 and developing associated statistical learning guarantees that shed light on its benefits. Our formulation captures important aspects of the problem including learning compositional MTL representations, multilayer Figure 1: In Multipath MTL, each task selects a pathway within a supernet graph. The composition of the modules along the pathway forms the task-specific representation. Fig. 0(a) depicts a general supernet graph (highlighted in gray block), and the pathways for different tasks are shown in colored arrows. Fig. 0(b) is a special instance where related tasks are hierarchically clustered: For instance, Tasks 1 and 2 are assigned the same representation \(\psi_{2}^{1}\circ\psi_{1}\). nature of supernet, assigning optimal pathways to individual tasks, and transferring learned representations to novel downstream tasks. Our specific contributions are as follows. \(\bullet\) Suppose we have \(N\) samples per task and \(T\) tasks in total. Denote the hypothesis sets for multipath representation by \(\Phi\), task specific heads by \(\mathcal{H}\) and potential pathway choices by \(\mathcal{A}\). Our main result bounds the task-averaged risk of MTL as \[\sqrt{\frac{\text{DoF}(\Phi_{\text{used}})}{NT}}+\sqrt{\frac{\text{DoF}( \mathcal{H})+\text{DoF}(\mathcal{A})}{N}}. \tag{1}\] Here, \(\text{DoF}(\cdot)\) returns the _degrees of freedom_ of a hypothesis set (i.e. number of parameters). More generally, Theorem 1 states our guarantees in terms of Gaussian complexity. \(\Phi_{\text{used}}\subseteq\Phi\) is the supernet spanned by the pathways of the empirical solution and \(1/NT\) dependence implies that cost of representation learning is shared across tasks. We also show a _no-harm_ result (Lemma 1): If the supernet is sufficiently expressive to achieve zero empirical risk, then, the excess risk of individual tasks will not be harmed by the other tasks. Theorem 2 develops guarantees for transferring the resulting MTL representation to a new task in terms of representation bias of the empirical MTL supernet. \(\bullet\) When the supernet has a single module, the problem boils down to (vanilla) MTL with single shared representation and our bounds recover the results by [16, 15]. When the supernet graph is hierarchical (as in Figure 0(b)), our bounds provide insights for the benefits of clustering tasks into similar groups and superiority of multilayer Multipath MTL over using single-layer shallow supernets (Section 5). \(\bullet\) We develop stronger results for linear representations over a supernet and obtain novel MTL and transfer learning bounds (Sec. 4 and Theorem 4). These are accomplished by developing new task-diversity criteria to account for the task-specific (thus heterogeneous) nature of multipath representations. Numerical experiments support our theory and verify the benefits of multipath representations. Finally, we also highlight multiple future directions. ## 2 Setup and Problem Formulations **Notation.** Let \(\|\cdot\|\) denote the \(\ell_{2}\)-norm of a vector and operator norm of a matrix. \(|\cdot|\) denotes the absolute value for scalars and cardinality for discrete sets. We use \([K]\) to denote the set \(\{1,2,\dots,K\}\) and \(\lesssim,\gtrsim\) for inequalities that hold up to constant/logarithmic factors. \(\mathcal{Q}^{K}\) denotes \(K\)-times Cartesian product of a set \(\mathcal{Q}\) with itself. \(\circ\) denotes functional composition, i.e., \(f\circ g(x)=f(g(x))\). 
**Setup.** Suppose we have \(T\) tasks with data distributions \(\{\mathcal{D}_{t}\}_{t=1}^{T}\). During the MTL phase, we are given \(T\) training datasets \(\{\mathcal{S}_{t}\}_{t=1}^{T}\), each drawn i.i.d. from its corresponding distribution \(\mathcal{D}_{t}\). Let \(\mathcal{S}_{t}=\{(\mathbf{x}_{ti},y_{ti})\}_{i=1}^{N}\), where \((\mathbf{x}_{ti},y_{ti})\in\mathcal{X}\times\mathbb{R}\) is an input-label pair, \(\mathcal{X}\) is the input space, and \(|\mathcal{S}_{t}|=N\) is the number of samples per task. We assume the same \(N\) for all tasks for cleaner exposition. Define the union of the datasets by \(\mathcal{S}_{\text{all}}=\bigcup_{t=1}^{T}\mathcal{S}_{t}\) (with \(|\mathcal{S}_{\text{all}}|=NT\)), and the set of distributions by \(\mathcal{\bar{D}}=\{\mathcal{D}_{t}\}_{t=1}^{T}\). Following the setting of related works [15], we will consider two problems: **(1) MTL problem** will use these \(T\) datasets to learn a supernet and establish guarantees for representation learning. **(2) Transfer learning problem** will use the resulting representation for a downstream task in a sample-efficient fashion.

**Problem (1): Multipath Multitask Learning (M\({}^{2}\)TL).** We consider a supernet with \(L\) layers where layer \(\ell\) has \(K_{\ell}\) modules for \(\ell\in[L]\). As depicted in Figure 1, each task composes a task-specific representation by choosing one module from each layer. We refer to each sequence of \(L\) modules as a _pathway_. Let \(\mathcal{A}=[K_{1}]\times\dots\times[K_{L}]\) be the set of all pathway choices, obeying \(|\mathcal{A}|=\prod_{\ell=1}^{L}K_{\ell}\). Let \(\alpha_{t}\in\mathcal{A}\) denote the pathway associated with task \(t\in[T]\), where \(\alpha_{t}[\ell]\in[K_{\ell}]\) denotes the selected module index from layer \(\ell\). We remark that the results can be extended to more general pathway sets, as discussed in Section 3.1. As depicted in Figure 1, let \(\Psi_{\ell}\) be the hypothesis set of modules in the \(\ell_{\text{th}}\) layer and \(\psi_{\ell}^{k}\in\Psi_{\ell}\) denote the \(k_{\text{th}}\) module function in the \(\ell_{\text{th}}\) layer, referred to as the \((\ell,k)\)'th module. Let \(h_{t}\in\mathcal{H}\) be the prediction head of task \(t\), where all tasks use the same hypothesis set \(\mathcal{H}\) for prediction. Let us denote the combined hypotheses

\[\mathbf{h}=[h_{1},\dots,h_{T}]\in\mathcal{H}^{T},\]
\[\mathbf{\alpha}=[\alpha_{1},\dots,\alpha_{T}]\in\mathcal{A}^{T},\]
\[\mathbf{\psi}_{\ell}=[\psi_{\ell}^{1},\dots,\psi_{\ell}^{K_{\ell}}] \in\Psi_{\ell}^{K_{\ell}},\ \forall\ell\in[L],\]
\[\mathbf{\phi}:=[\mathbf{\psi}_{1},\dots,\mathbf{\psi}_{L}]\in\Phi\]

where \(\Phi=\Psi_{1}^{K_{1}}\times\dots\times\Psi_{L}^{K_{L}}\) is the supernet hypothesis class containing all modules/layers. Given a supernet \(\mathbf{\phi}\in\Phi\) and a pathway \(\alpha\), \(\mathbf{\phi}_{\alpha}=\psi_{L}^{\alpha}\circ\dots\circ\psi_{1}^{\alpha}\) denotes the representation induced by pathway \(\alpha\), where we use the convention \(\psi_{\ell}^{\alpha}:=\psi_{\ell}^{\alpha[\ell]}\). Hence, \(\mathbf{\phi}_{\alpha_{t}}\) is the representation of task \(t\). We would like to solve for supernet weights \(\mathbf{\phi}\), pathways \(\mathbf{\alpha}\), and heads \(\mathbf{h}\).
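To make the notation concrete, the following minimal sketch (ours, not the paper's code; all names, dimensions, and the choice of linear maps as placeholder modules are illustrative) instantiates a supernet, a pathway, and a task prediction \(h_{t}\circ\mathbf{\phi}_{\alpha_{t}}(\mathbf{x})\):

```python
import numpy as np

rng = np.random.default_rng(0)

K = [3, 4]        # modules per layer, so |A| = 3 * 4 = 12 pathways
dims = [8, 5, 3]  # layer l+1 modules map from dims[l] to dims[l+1]

# Supernet phi: one weight matrix per module (linear modules for concreteness).
phi = [[rng.standard_normal((dims[l + 1], dims[l])) for _ in range(K[l])]
       for l in range(len(K))]

def represent(phi, alpha, x):
    """Pathway representation phi_alpha(x) = psi_L^alpha o ... o psi_1^alpha (x)."""
    for l, k in enumerate(alpha):
        x = phi[l][k] @ x
    return x

# Task t: a pathway alpha_t plus a linear head h_t on top of phi_{alpha_t}.
alpha_t = (1, 2)
h_t = rng.standard_normal(dims[-1])
x = rng.standard_normal(dims[0])
y_hat = h_t @ represent(phi, alpha_t, x)
```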
Thus, given a loss function \(\ell(\hat{y},y)\), Multipath MTL (M\({}^{2}\)TL) solves the following empirical risk minimization problem over \(\mathcal{S}_{\text{all}}\) to optimize the combined hypothesis \(\mathbf{f}=(\mathbf{h},\mathbf{\alpha},\mathbf{\phi})\): \[\hat{\mathbf{f}}=\operatorname*{arg\,min}_{\mathbf{f}\in\mathcal{F}} \widehat{\mathcal{L}}_{\mathcal{S}_{\text{all}}}(\mathbf{f}):=\frac{1}{T}\sum_{t=1}^ {T}\widehat{\mathcal{L}}_{t}(h_{t}\circ\mathbf{\phi}_{\alpha_{t}})\] (M\[{}^{2}\] \[\text{where}\ \ \widehat{\mathcal{L}}_{t}(f)=\frac{1}{N}\sum_{i=1}^{N} \ell(f(\mathbf{x}_{ti}),y_{ti})\] \[\mathcal{F}:=\mathcal{H}^{T}\times\mathcal{A}^{T}\times\Phi.\] Here \(\widehat{\mathcal{L}}_{t}\) and \(\widehat{\mathcal{L}}_{\mathcal{S}_{\text{all}}}\) are the task-conditional and task-averaged empirical risks. We are primarily interested in controlling the task-averaged test risk \(\mathcal{L}_{\mathcal{\bar{D}}}(\mathbf{f})=\mathbb{E}[\widehat{\mathcal{L}}_{ \mathcal{S}_{\text{all}}}(\mathbf{f})]\). Let \(\mathcal{L}_{\mathcal{\bar{D}}}^{\star}:=\min_{\mathbf{f}\in\mathcal{F}} \mathcal{L}_{\mathcal{\bar{D}}}(\mathbf{f})\); then the _excess MTL risk_ is defined as \[\mathcal{R}_{\text{M}^{2}\text{TL}}(\hat{\mathbf{f}})=\mathcal{L}_{\mathcal{\bar{D}}} (\hat{\mathbf{f}})-\mathcal{L}_{\mathcal{\bar{D}}}^{\star}. \tag{2}\] **Problem (2): Transfer Learning with Optimal Pathway (TLOP).** Suppose we have a novel target task with an i.i.d. training dataset \(\mathcal{S}_{\mathcal{T}}=\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{M}\) with \(M\) samples drawn from distribution \(\mathcal{D}_{\mathcal{T}}\). Given a pretrained supernet \(\mathbf{\phi}\) (e.g., following (M\({}^{2}\)TL)), we can search for a pathway \(\alpha\) so that \(\mathbf{\phi}_{\alpha}\) becomes a suitable representation for \(\mathcal{D}_{\mathcal{T}}\). Thus, for this new task, we only need to optimize the path \(\alpha\in\mathcal{A}\) and the prediction head \(h\in\mathcal{H}_{\mathcal{T}}\) while reusing the weights of \(\mathbf{\phi}\). This leads to the following problem: \[\hat{f}_{\mathbf{\phi}}=\operatorname*{arg\,min}_{h\in\mathcal{H}_{ \mathcal{T}},\alpha\in\mathcal{A}}\widehat{\mathcal{L}}_{\mathcal{T}}(f)\ \ \text{ where }\ f=h\circ\mathbf{\phi}_{\alpha}\] (TLOP) \[\text{ and }\ \ \widehat{\mathcal{L}}_{\mathcal{T}}(f)=\frac{1}{M} \sum_{i=1}^{M}\ell(f(\mathbf{x}_{i}),y_{i}).\] Here, \(\hat{f}_{\mathbf{\phi}}\) reflects the fact that the solution depends on the suitability of the pretrained supernet \(\mathbf{\phi}\). Let \(f^{\star}_{\mathbf{\phi}}\) be a population minimizer of (TLOP) given supernet \(\mathbf{\phi}\) (as \(M\to\infty\)) and define the population risk \(\mathcal{L}_{\mathcal{T}}(f)=\mathbb{E}[\widehat{\mathcal{L}}_{\mathcal{T}}(f)]\). (TLOP) will be evaluated against the hindsight knowledge of the optimal supernet for the target: Define the optimal target risk \(\mathcal{L}^{\star}_{\mathcal{T}}:=\min_{h\in\mathcal{H}_{\mathcal{T}},\mathbf{ \phi}\in\Phi}\mathcal{L}_{\mathcal{T}}(h\circ\mathbf{\phi}_{\alpha})\), which optimizes \(h,\mathbf{\phi}\) for the target task along the fixed pathway \(\alpha=[1,\dots,1]\). Here we can fix \(\alpha\) since all pathways result in the same search space.
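Under the simplifying assumptions of linear modules and a squared loss (again a sketch of ours, not the paper's implementation), (TLOP) reduces to a brute-force search: the supernet is frozen, and for each of the \(|\mathcal{A}|\) pathways we fit a least-squares head on the \(M\) target samples and keep the best pathway/head pair.

```python
from itertools import product
import numpy as np

rng = np.random.default_rng(0)
K, dims = [3, 4], [8, 5, 3]  # modules per layer, layer dimensions
phi = [[rng.standard_normal((dims[l + 1], dims[l])) for _ in range(K[l])]
       for l in range(len(K))]

def represent(alpha, x):
    for l, k in enumerate(alpha):
        x = phi[l][k] @ x
    return x

def tlop(X, y):
    """Return (risk, pathway, head) minimizing the empirical squared loss."""
    best = (np.inf, None, None)
    for alpha in product(*map(range, K)):  # enumerate all |A| pathways
        Z = np.stack([represent(alpha, x) for x in X])  # M x p_L features
        h, *_ = np.linalg.lstsq(Z, y, rcond=None)
        risk = np.mean((Z @ h - y) ** 2)
        if risk < best[0]:
            best = (risk, alpha, h)
    return best

# Noiseless planted target: tlop recovers (near) zero risk at the true pathway.
M, alpha_star = 20, (1, 2)
h_star = rng.standard_normal(dims[-1])
X = rng.standard_normal((M, dims[0]))
y = np.array([h_star @ represent(alpha_star, x) for x in X])
risk, alpha_hat, h_hat = tlop(X, y)
```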
We define the _excess transfer learning risk_ to be \[\mathcal{R}_{\text{TLOP}}(\hat{f}_{\mathbf{\phi}})=\mathcal{L}_{ \mathcal{T}}(\hat{f}_{\mathbf{\phi}})-\mathcal{L}^{\star}_{\mathcal{T}} \tag{3}\] \[=\underbrace{\mathcal{L}_{\mathcal{T}}(\hat{f}_{\mathbf{\phi}})- \mathcal{L}_{\mathcal{T}}(f^{\star}_{\mathbf{\phi}})}_{\text{variance}}+ \underbrace{\mathcal{L}_{\mathcal{T}}(f^{\star}_{\mathbf{\phi}})-\mathcal{L}^{ \star}_{\mathcal{T}}}_{\text{supernet bias}}.\] The final line decomposes the overall risk into a _variance_ term and a _supernet bias_. The former arises from the fact that we solve the problem with finite training samples. This term will vanish as \(M\to\infty\). The latter term quantifies the bias induced by the fact that (TLOP) uses the representation \(\mathbf{\phi}\) rather than the optimal representation. Finally, while the supernet \(\mathbf{\phi}\) in (TLOP) is arbitrary, for end-to-end guarantees we will set it to the solution \(\hat{\mathbf{\phi}}\) of (M\({}^{2}\)TL). In this scenario, we will refer to \(\{\mathcal{D}_{t}\}_{t=1}^{T}\) as source tasks.

## 3 Main Results

We are ready to present our results that establish generalization guarantees for multitask and transfer learning problems over supernet pathways. Our results will be stated in terms of Gaussian complexity, which is introduced below.

**Definition 1** (Gaussian Complexity): _Let \(\mathcal{Q}\) be a set of hypotheses that map \(\mathcal{Z}\) to \(\mathbb{R}^{r}\). Let \((\mathbf{g}_{i})_{i=1}^{n}\) (\(\mathbf{g}_{i}\in\mathbb{R}^{r}\)) be \(n\) independent vectors each distributed as \(\mathcal{N}(\mathbf{0},\mathbf{I}_{r})\) and let \(\mathbf{Z}=(\mathbf{z}_{i})_{i=1}^{n}\in\mathcal{Z}^{n}\) be a dataset of input features. Then, the empirical Gaussian complexity is defined as_

\[\widehat{\mathcal{G}}_{\mathbf{Z}}(\mathcal{Q})=\mathbb{E}_{\mathbf{g}_{i}}\left[ \sup_{q\in\mathcal{Q}}\frac{1}{n}\sum_{i=1}^{n}\mathbf{g}_{i}^{\top}q(\mathbf{z}_{i}) \right].\]

_The worst-case Gaussian complexity is obtained by considering the supremum over \(\mathbf{Z}\in\mathcal{Z}^{n}\) as follows_

\[\widetilde{\mathcal{G}}_{n}^{\mathcal{Z}}(\mathcal{Q})=\sup_{\mathbf{Z}\in\mathcal{ Z}^{n}}[\widehat{\mathcal{G}}_{\mathbf{Z}}(\mathcal{Q})].\]

For cleaner notation, we drop the superscript \(\mathcal{Z}\) from the worst-case Gaussian complexity (using \(\widetilde{\mathcal{G}}_{n}(\mathcal{Q})\)) as its input space will be clear from context. When \(\mathbf{Z}=(\mathbf{z}_{i})_{i=1}^{n}\) are drawn i.i.d. from \(\mathcal{D}\), the (usual) Gaussian complexity is defined by \(\mathcal{G}_{n}(\mathcal{Q})=\mathbb{E}_{\mathbf{Z}\sim\mathcal{D}^{n}}[\widehat{ \mathcal{G}}_{\mathbf{Z}}(\mathcal{Q})]\). Note that we always have \(\mathcal{G}_{n}(\mathcal{Q})\leq\widetilde{\mathcal{G}}_{n}(\mathcal{Q})\) assuming \(\mathcal{D}\) is supported on \(\mathcal{Z}\). In our setting, keeping track of distributions along exponentially many pathways proves challenging, and we opt to use \(\widetilde{\mathcal{G}}_{n}(\mathcal{Q})\), which leads to clean upper bounds. The supplementary material also derives tighter but more convoluted bounds in terms of empirical complexity. Finally, it is well known that Gaussian/Rademacher complexities scale as \(\sqrt{\text{comp}(\mathcal{Q})/n}\), where \(\text{comp}(\mathcal{Q})\) is a set complexity such as VC-dimension, which links to our informal statement (1). We will first present our generalization bounds for the Multipath MTL problem using empirical process theory arguments.
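As an aside, \(\widehat{\mathcal{G}}_{\mathbf{Z}}(\mathcal{Q})\) can be estimated by Monte Carlo whenever the inner supremum is computable. A sketch (ours) for the unit-ball linear class \(\mathcal{Q}=\{\mathbf{z}\mapsto\mathbf{w}^{\top}\mathbf{z}:\|\mathbf{w}\|\leq 1\}\) (so \(r=1\)), where the supremum has the closed form \(\sup_{\|\mathbf{w}\|\leq 1}\frac{1}{n}\sum_{i}g_{i}\mathbf{w}^{\top}\mathbf{z}_{i}=\frac{1}{n}\|\sum_{i}g_{i}\mathbf{z}_{i}\|\):

```python
import numpy as np

def gaussian_complexity_linear(Z, trials=2000, seed=0):
    """Monte-Carlo estimate of G_hat_Z(Q) for Q = {z -> w^T z : ||w|| <= 1}."""
    rng = np.random.default_rng(seed)
    n = Z.shape[0]
    # For each Gaussian draw g, the supremum over the unit ball is ||g^T Z|| / n.
    vals = [np.linalg.norm(rng.standard_normal(n) @ Z) / n for _ in range(trials)]
    return float(np.mean(vals))

Z = np.random.default_rng(1).standard_normal((100, 5))  # n = 100 inputs in R^5
print(gaussian_complexity_linear(Z))  # ~ ||Z||_F / n, i.e. about sqrt(5/100)
```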
Our bounds will lead to meaningful guarantees for specific MTL settings, including vanilla MTL where all tasks share a single representation, as well as hierarchical MTL as depicted in Fig. 1(b). We will next derive transfer learning guarantees in terms of the supernet bias, which quantifies the performance difference of a supernet from its optimum for a target. To state our results, we introduce two standard assumptions.

**Assumption 1**: _Elements of the hypothesis sets \(\mathcal{H}\) and \((\Psi_{\ell})_{\ell=1}^{L}\) are \(\Gamma\)-Lipschitz functions with respect to the Euclidean norm._

**Assumption 2**: _The loss function \(\ell:\mathbb{R}\times\mathbb{R}\to[0,1]\) is \(\Gamma\)-Lipschitz in its first argument._

### 3.1 Results for Multipath Multitask Learning

This section presents our task-averaged generalization bound for the Multipath MTL problem. Recall that \(\hat{\mathbf{f}}=(\hat{\mathbf{h}},\hat{\mathbf{\alpha}},\hat{\mathbf{\phi}})\) is the outcome of the ERM problem (M\({}^{2}\)TL). Observe that, if we were solving the problem with only one task, the generalization bound would depend on only one module per layer rather than the overall size of the supernet. This is because each task gets to select a single module per layer through its pathway. In light of this, we can quantify the utilization of supernet layers as follows: Let \(\hat{K}_{\ell}\) be the number of layer-\(\ell\) modules utilized by the empirical solution \(\hat{\mathbf{f}}\). Formally, \(\hat{K}_{\ell}=|\{\hat{\alpha}_{t}[\ell]\ \ \text{for}\ \ t\in[T]\}|\). The following theorem provides our guarantee in terms of Gaussian complexities of individual modules.

**Theorem 1**: _Suppose Assumptions 1 & 2 hold. Let \(\hat{\mathbf{f}}\) be the empirical solution of (M\({}^{2}\)TL). Then, with probability at least \(1-\delta\), the excess test risk in (2) obeys \(\mathcal{R}_{\text{M}^{2}\text{TL}}(\hat{\mathbf{f}})\)_

\[\lesssim\widetilde{\mathcal{G}}_{N}(\mathcal{H})+\sum_{\ell=1}^{L}\sqrt{\hat{K}_{ \ell}}\widetilde{\mathcal{G}}_{NT}(\Psi_{\ell})+\sqrt{\frac{\log|\mathcal{A}|}{N} +\frac{\log(2/\delta)}{NT}}.\]

_Here, the input spaces for \(\mathcal{H}\) and \(\Psi_{\ell}\) are \(\mathcal{X}_{\mathcal{H}}=\Psi_{L}\circ\dots\circ\Psi_{1}\circ\mathcal{X}\), \(\mathcal{X}_{\Psi_{\ell}}=\Psi_{\ell-1}\circ\dots\circ\Psi_{1}\circ\mathcal{X}\) for \(\ell>1\), and \(\mathcal{X}_{\Psi_{1}}=\mathcal{X}\)._

In Theorem 1, \(\sqrt{\frac{\log|\mathcal{A}|}{N}}\) quantifies the cost of learning the pathway and \(\widetilde{\mathcal{G}}_{N}(\mathcal{H})\) quantifies the cost of learning the prediction head for each task \(t\in[T]\). The \(\log|\mathcal{A}|\) dependence is standard for the discrete search space \(\mathcal{A}\). The \(\widetilde{\mathcal{G}}_{NT}(\Psi_{\ell})\) terms are more interesting and reflect the benefits of MTL. The reason is that these modules are essentially learned with \(NT\) samples rather than \(N\) samples, so the cost of representation learning is shared across tasks. The \(\sqrt{\hat{K}_{\ell}}\) multiplier highlights the fact that we only need to worry about the used modules rather than all possible \(K_{\ell}\) modules we could have used. In essence, \(\sum_{\ell=1}^{L}\sqrt{\hat{K}_{\ell}}\widetilde{\mathcal{G}}_{NT}(\Psi_{\ell})\) summarizes the Gaussian complexity \(\widetilde{\mathcal{G}}(\Phi_{\text{used}})\), where \(\Phi_{\text{used}}\) is the subnetwork of the supernet utilized by the ERM solution \(\hat{\mathbf{f}}\). By definition, \(\widetilde{\mathcal{G}}(\Phi_{\text{used}})\leq\widetilde{\mathcal{G}}(\Phi)\).
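For concreteness, the utilization counts \(\hat{K}_{\ell}\) and the parametric form of the bound (discussed further below) are straightforward to compute; the sketch and the numbers in it are purely illustrative.

```python
import numpy as np

def utilized_modules(alphas):
    """K_hat_l = |{alpha_t[l] : t in [T]}| for each layer l."""
    L = len(alphas[0])
    return [len({alpha[l] for alpha in alphas}) for l in range(L)]

def mtl_risk_bound(N, T, dof_head, log_A, K_hat, dof_modules):
    """sqrt((T*(DoF(H) + log|A|) + sum_l K_hat_l * DoF(Psi_l)) / (N*T)),
    up to constants and log factors."""
    numer = T * (dof_head + log_A) + sum(k * d for k, d in zip(K_hat, dof_modules))
    return np.sqrt(numer / (N * T))

alphas = [(0, 1), (0, 1), (2, 1), (0, 3)]  # pathways of T = 4 tasks
K_hat = utilized_modules(alphas)           # [2, 2]: 2 modules used per layer
print(mtl_risk_bound(N=50, T=4, dof_head=3, log_A=np.log(12),
                     K_hat=K_hat, dof_modules=[40, 15]))
```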
With all these in mind, Theorem 1 formalizes our earlier statement (1). A key challenge we address in Theorem 1 is decomposing the complexity of the combined hypothesis class \(\mathcal{F}\) in (M\({}^{2}\)TL) into its building blocks \(\mathcal{A},\mathcal{H},(\Psi_{\ell})_{\ell=1}^{L}\). This is accomplished by developing Gaussian complexity chain rules inspired by the influential works [13, 14]. While those works focus on a two-layer composition (prediction heads composed with a shared representation), we develop bounds to control arbitrarily long compositions of hypotheses. Accomplishing this in our multipath setting presents additional technical challenges because each task gets to choose a unique pathway. Thus, unlike vanilla MTL with a shared representation, tasks need not contribute to the learning process of every module. Consequently, the ERM solution is highly heterogeneous, and some modules and tasks will be learned better than others. Worst-case Gaussian complexity plays an important role in establishing clean upper bounds in the face of this heterogeneity. In fact, in the supplementary material, we provide tighter bounds in terms of the empirical Gaussian complexity \(\widehat{\mathcal{G}}\); however, they necessitate more convoluted definitions that involve the number of tasks that choose a particular module. Finally, we note that our bound has a natural interpretation for parametric classes whose \(\log(\varepsilon\)-covering number) (i.e. metric entropy) grows with degrees of freedom as \(\text{DoF}\cdot\log(1/\varepsilon)\). Then, Theorem 1 implies a risk bound proportional to \(\sqrt{\frac{T\cdot(\text{DoF}(\mathcal{H})+\log|\mathcal{A}|)+\sum_{\ell=1}^{L }\hat{K}_{\ell}\cdot\text{DoF}(\Psi_{\ell})}{NT}}\). For a neural net implementation, this means small risk as soon as the total sample size \(NT\) exceeds the total number of weights. We have a few more remarks in place, discussed below.

\(\bullet\)**Dependencies.** In Theorem 1, \(\lesssim\) suppresses dependencies on \(\log(NT)\) and \(\Gamma^{L}\). The latter term arises from the exponentially growing Lipschitz constant as we compose more/deeper modules; however, it can be treated as a constant for fixed depth \(L\). We note that such exponential depth dependence is frequent in existing generalization guarantees in the deep learning literature [1, 1, 13]. In the supplementary material, we prove that the exponential dependence can be replaced with a much milder \(\sqrt{L}\) dependence by assuming parameterized hypothesis classes.

\(\bullet\)**Implications for Vanilla MTL.** Observe that vanilla MTL with a single shared representation corresponds to the setting \(L=1\) and \(K_{1}=1\). The supernet is then simply \(\Phi=\Psi_{1}\) and \(\log|\mathcal{A}|=0\). Applying Theorem 1 to this setting with \(T\) tasks each with \(N\) samples, we obtain an excess risk upper bound of \(\widetilde{\mathcal{O}}\left(\widetilde{\mathcal{G}}_{NT}(\Phi)+\widetilde{ \mathcal{G}}_{N}(\mathcal{H})\right)\), where the representation \(\Phi\) is trained with \(NT\) samples with input space \(\mathcal{X}\), and the task-specific heads \(h_{t}\in\mathcal{H}\) are trained with \(N\) samples with input space \(\Phi\circ\mathcal{X}\). This bound recovers earlier guarantees by [14, 13].

\(\bullet\)**Unselected modules do not hurt performance.** A useful feature of our bound is its dependence on \(\Phi_{\text{used}}\) (spanned by the empirical pathways) rather than the full hypothesis class \(\Phi\).
This dependence on \(\Phi_{\text{used}}\) arises from a uniform concentration argument where we uniformly control the excess MTL risk over all potential \(\Phi_{\text{used}}\) choices. This uniform control ensures a \(\widetilde{\mathcal{G}}_{NT}(\Phi_{\text{used}})\) cost for the actual solution \(\hat{\mathbf{f}}\), and it only comes at the cost of an additional \(\sqrt{\frac{\log|\mathcal{A}|}{N}}\) term, which is free (up to constants)!

\(\bullet\)**Continuous pathways.** This work focuses on relatively simple pathways where tasks choose one module from each layer. The results can be extended to other choices of pathway sets \(\mathcal{A}\). First, note that as long as \(\mathcal{A}\) is a discrete set, we will naturally end up with the excess risk dependence of \(\sqrt{\frac{\log|\mathcal{A}|}{N}}\). However, one can also consider continuous \(\alpha\), for instance, due to a relaxation of the discrete set with a simplex constraint. Such approaches are common in differentiable architecture search methods [15]. In this case, each entry \(\alpha[\ell]\) can be treated as a \(K_{\ell}\)-dimensional vector that chooses a continuous superposition of \(\ell\)'th layer modules. Thus, the overall \(\alpha\in\mathcal{A}\) parameter would have \(\text{comp}(\mathcal{A})=\sum_{\ell=1}^{L}K_{\ell}\), resulting in an excess risk term of \(\sqrt{\sum_{\ell=1}^{L}K_{\ell}/N}\). Note that these are high-level insights based on classical generalization arguments. In practice, performance can be much better than these uniform concentration based upper bounds.

\(\bullet\)**No harm under overparameterization.** A drawback of Theorem 1 is that it is an average-risk guarantee over \(T\) tasks. In practice, it is possible that some tasks are hurt during MTL because they are isolated or dissimilar to others (see the supplementary material for examples). Below, we show that if the supernet achieves zero empirical risk, then no task will be worse off than in the scenario where it is individually trained with \(N\) samples, i.e. Multipath MTL does not hurt any task.

**Lemma 1**: _Recall that \(\hat{\mathbf{f}}\) is the solution of (M\({}^{2}\)TL) and \(\hat{f}_{t}=\hat{h}_{t}\circ\hat{\mathbf{\phi}}_{\hat{\alpha}_{t}}\) is the associated task-\(t\) hypothesis. Define the excess risk of task \(t\) as \(\mathcal{R}_{t}(\hat{f}_{t})=\mathcal{L}_{t}(\hat{f}_{t})-\mathcal{L}_{t}^{*}\), where \(\mathcal{L}_{t}(f)=\mathbb{E}_{\mathcal{D}_{t}}[\widehat{\mathcal{L}}_{t}(f)]\) is the population risk of task \(t\) and \(\mathcal{L}_{t}^{*}\) is the optimal achievable test risk for task \(t\) over \(\mathcal{F}\). With probability at least \(1-\delta-\mathbb{P}(\widehat{\mathcal{L}}_{\mathcal{S}_{\text{all}}}(\hat{\mathbf{f }})\neq 0)\), for all tasks \(t\in[T]\),_

\[\mathcal{R}_{t}(\hat{f}_{t})\lesssim\widetilde{\mathcal{G}}_{N}(\mathcal{H})+ \sum_{\ell=1}^{L}\widetilde{\mathcal{G}}_{N}(\Psi_{\ell})+\sqrt{\frac{\log(2T/ \delta)}{N}}.\]

Here, \(\{\widehat{\mathcal{L}}_{\mathcal{S}_{\text{all}}}(\hat{\mathbf{f}})=0\}\) is the event of interpolation (zero empirical risk) under which the guarantee holds. We call this _no harm_ because the bound is the same as what one would get by applying a union bound over \(T\) empirical risk minimizations where each task is optimized individually.

### 3.2 Transfer Learning with Optimal Pathway

Following the Multipath MTL problem, in this section we discuss guarantees for transfer learning on a supernet.
Recall that \(\mathcal{A}\) is the set of pathways and our goal in (TLOP) is finding the optimal pathway \(\alpha\in\mathcal{A}\) and prediction head \(h\in\mathcal{H}_{\mathcal{T}}\) to achieve small target risk. In order to quantify the bias arising from the Multipath MTL phase, we introduce the following definition.

**Definition 2** (Supernet Bias): _Recall the definitions of \(\mathcal{D}_{\mathcal{T}}\), \(\mathcal{H}_{\mathcal{T}}\), and \(\mathcal{L}_{\mathcal{T}}^{\star}\) stated in Section 2. Given a supernet \(\mathbf{\phi}\), we define the supernet/representation bias of \(\mathbf{\phi}\) for a target \(\mathcal{T}\) as_

\[\text{Bias}_{\mathcal{T}}(\mathbf{\phi})=\min_{h\in\mathcal{H}_{\mathcal{T}},\alpha\in \mathcal{A}}\mathcal{L}_{\mathcal{T}}(h\circ\mathbf{\phi}_{\alpha})-\mathcal{L}_{ \mathcal{T}}^{\star}.\]

Definition 2 is a restatement of the supernet bias term in (3). Importantly, it ensures that the optimal pathway representation over \(\mathbf{\phi}\) cannot be worse than the optimal performance by more than \(\text{Bias}_{\mathcal{T}}(\mathbf{\phi})\). Following this, we can state a generalization guarantee for the transfer learning problem (TLOP).

**Theorem 2**: _Suppose Assumptions 1 & 2 hold. Let the supernet \(\hat{\mathbf{\phi}}\) be the solution of (M\({}^{2}\)TL) and \(\hat{f}_{\hat{\mathbf{\phi}}}\) be the empirical minimizer of (TLOP) with respect to the supernet \(\hat{\mathbf{\phi}}\). Then with probability at least \(1-\delta\),_

\[\mathcal{R}_{\text{TLOP}}(\hat{f}_{\hat{\mathbf{\phi}}})\lesssim\text{Bias}_{\mathcal{ T}}(\hat{\mathbf{\phi}})+\sqrt{\frac{\log(2|\mathcal{A}|/\delta)}{M}}+\widetilde{\mathcal{G}}_{M} (\mathcal{H}_{\mathcal{T}}),\]

_where the input space of \(\widetilde{\mathcal{G}}_{M}(\mathcal{H}_{\mathcal{T}})\) is given by \(\{\hat{\mathbf{\phi}}_{\alpha}\circ\mathcal{X}\mid\alpha\in\mathcal{A}\}\)._

Theorem 2 highlights the sample efficiency of transfer learning with an optimal pathway. While the derivation is straightforward relative to Theorem 1, the key consideration is the supernet bias \(\text{Bias}_{\mathcal{T}}(\hat{\mathbf{\phi}})\). This term captures the excess risk in (TLOP) introduced by using \(\hat{\mathbf{\phi}}\). Let \(\mathbf{\phi}^{\star}\) be the population minimizer of (M\({}^{2}\)TL). Then we can define the _supernet distance_ of \(\hat{\mathbf{\phi}}\) and \(\mathbf{\phi}^{\star}\) by \(d_{\mathcal{T}}(\hat{\mathbf{\phi}};\mathbf{\phi}^{\star})=\text{Bias}_{\mathcal{T}}(\hat{\mathbf{\phi}})-\text{Bias}_{\mathcal{T}}(\mathbf{\phi}^{\star})\). The distance measures how well the finite-sample solution \(\hat{\mathbf{\phi}}\) from (M\({}^{2}\)TL) performs compared to the optimal MTL solution \(\mathbf{\phi}^{\star}\). A plausible assumption is the so-called _task diversity_ condition proposed by Chen et al. (2021); Tripuraneni et al. (2020); Xu and Tewari (2021). Here, the idea (or assumption) is that if a target task is similar to the source tasks, the distance term for the target can be controlled in terms of the excess MTL risk \(\mathcal{R}_{\text{M}^{2}\text{TL}}(\hat{\mathbf{f}})\) (e.g. by assuming \(d_{\mathcal{T}}(\hat{\mathbf{\phi}};\mathbf{\phi}^{\star})\lesssim\mathcal{R}_{\text{M}^{2}\text {TL}}(\hat{\mathbf{f}})+\varepsilon\)). Plugging in this assumption would lead to end-to-end transfer guarantees by integrating Theorems 1 and 2; we defer the formal analysis to the appendix. However, in the multipath setting, the problem is a lot more intricate because source tasks can choose totally different task-specific representations, making such assumptions unrealistic. In contrast, Theorem 4 establishes concrete guarantees by probabilistically relating target and source distributions.
Finally, the \(\text{Bias}_{\mathcal{T}}(\mathbf{\phi}^{\star})\) term is unavoidable; however, similar to \(d_{\mathcal{T}}(\hat{\mathbf{\phi}};\mathbf{\phi}^{\star})\), it will be small as long as source and target tasks benefit from a shared supernet at the population level.

## 4 Guarantees for Linear Representations

As a concrete instantiation of Multipath MTL, consider a linear representation learning problem where each module \(\psi_{\ell}^{k}\) applies a matrix multiplication parameterized by \(\mathbf{B}_{\ell}^{k}\) with dimensions \(p_{\ell}\times p_{\ell-1}\): \(\psi_{\ell}^{k}(\mathbf{x})=\mathbf{B}_{\ell}^{k}\mathbf{x}\). Here \(p_{\ell}\) are the module dimensions, with input dimension \(p_{0}=p\) and output dimension \(p_{L}\). Given a path \(\alpha\), we obtain the linear representation \(\mathbf{B}_{\alpha}=\Pi_{\ell=1}^{L}\mathbf{B}_{\ell}^{\alpha[\ell]}\in\mathbb{R}^{p_{ L}\times p}\), where \(p_{L}\) is the number of rows of the final module \(\mathbf{B}_{L}^{\alpha[L]}\). When \(p_{L}\ll p\), \(\mathbf{B}_{\alpha}\) is a fat matrix that projects \(\mathbf{x}\in\mathbb{R}^{p}\) onto a lower-dimensional subspace. This way, during few-shot adaptation, we only need to train \(p_{L}\ll p\) parameters with features \(\mathbf{B}_{\alpha}\mathbf{x}\). This is also the central idea in several works on linear meta-learning (Kong et al., 2020; Sun et al., 2021; Bouniot et al., 2020; Tripuraneni et al., 2021) which focus on a single linear representation. Our discussion within this section extends these results to the Multipath MTL setting.

Denote \(\mathbf{f}=\{((\mathbf{B}_{\ell}^{k})_{k=1}^{K_{\ell}})_{\ell=1}^{L},(\mathbf{h}_{t},\alpha_ {t})_{t=1}^{T}\}\), where \(\mathbf{h}_{t}\in\mathbb{R}^{p_{L}}\) are the linear prediction heads. Let \(\mathcal{F}\) be the search space associated with \(\mathbf{f}\). Following a setting similar to Section 2, let \(\mathcal{X}\subset\mathbb{R}^{p}\). Given the dataset \(\mathcal{S}_{\text{all}}=(\mathcal{S}_{t})_{t=1}^{T}\), we study

\[\hat{\mathbf{f}}=\operatorname*{arg\,min}_{\mathbf{f}\in\mathcal{F}}\widehat{\mathcal{L}}_{ \mathcal{S}_{\text{all}}}(\mathbf{f}):=\frac{1}{NT}\sum_{t=1}^{T}\sum_{i=1}^{N}(y_{ ti}-\mathbf{h}_{t}^{\top}\mathbf{B}_{\alpha_{t}}\mathbf{x}_{ti})^{2}. \tag{4}\]

Let \(\mathcal{B}^{p}(r)\subset\mathbb{R}^{p}\) be the Euclidean ball of radius \(r\). To proceed, we make the following assumption for a constant \(C\geq 1\).

**Assumption 3**: _For all \(\ell\in[L]\), \(\Psi_{\ell}\) is the set of matrices with operator norm bounded by \(C\), and \(\mathcal{H}=\mathcal{B}^{p_{L}}(C)\)._

The result below is a variation of Theorem 1 where the bound is refined for linear representations (with finitely many parameters).

**Theorem 3**: _Suppose Assumptions 2 & 3 hold, and the input set \(\mathcal{X}\subset\mathcal{B}^{p}(R)\) for a constant \(R>0\). Then, with probability at least \(1-\delta\),_

\[\mathcal{R}_{\text{M}^{2}\text{TL}}(\hat{\mathbf{f}})\lesssim\sqrt{\frac{L\cdot\text{DoF}( \mathcal{F})}{NT}}+\sqrt{\frac{\log|\mathcal{A}|}{N}+\frac{\log(2/\delta)}{NT}},\]

_where \(\text{DoF}(\mathcal{F})=T\cdot p_{L}+\sum_{\ell=1}^{L}K_{\ell}\cdot p_{\ell} \cdot p_{\ell-1}\) is the total number of trainable parameters in \(\mathcal{F}\)._

We note that Theorem 3 can be stated more generally for neural nets by placing ReLU activations between layers. Here \(\lesssim\) subsumes the logarithmic dependencies, and the sample complexity has a linear dependence on \(L\) (rather than the exponential dependence in Theorem 1).
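The paper does not prescribe an algorithm for solving (4); one plausible heuristic is alternating minimization. The sketch below (ours) implements only the assignment step: with the supernet matrices held fixed, each task picks the pathway and least-squares head minimizing its empirical risk. This step would alternate with, say, gradient updates on the \(\mathbf{B}_{\ell}^{k}\).

```python
import numpy as np
from itertools import product

def pathway_matrix(Bs, alpha):
    """B_alpha = B_L^{alpha[L]} ... B_1^{alpha[1]}, as defined above."""
    B = Bs[0][alpha[0]]
    for l in range(1, len(Bs)):
        B = Bs[l][alpha[l]] @ B
    return B

def assign_pathways(Bs, tasks):
    """Bs[l][k] is module (l, k); tasks is a list of (X_t, y_t), X_t of shape N x p.
    Returns per-task (risk, pathway, head) for the current supernet weights."""
    K = [len(layer) for layer in Bs]
    out = []
    for X, y in tasks:
        best = (np.inf, None, None)
        for alpha in product(*map(range, K)):
            Z = X @ pathway_matrix(Bs, alpha).T  # N x p_L pathway features
            h, *_ = np.linalg.lstsq(Z, y, rcond=None)
            risk = float(np.mean((Z @ h - y) ** 2))
            if risk < best[0]:
                best = (risk, alpha, h)
        out.append(best)
    return out
```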
In essence, Theorem 3 implies small task-averaged excess risk as soon as the total sample size \(\gtrsim\) the total number of weights. While flexible, this result does not guarantee that \(\hat{\mathbf{f}}\) can benefit transfer learning for a new task. To proceed, we introduce additional assumptions under which we can guarantee the success of (TLOP). The first assumption is a realizability condition that guarantees tasks share the same supernet representation (so that the supernet bias is small).

**Assumption 4**: _**(A)** Task datasets are generated from a planted model \((\mathbf{x}_{t},y_{t})\sim\mathcal{D}_{t}\) with \(y_{t}=\mathbf{x}_{t}^{\top}\mathbf{\theta}_{t}^{\star}+z_{t}\), where \(\mathbf{x}_{t},z_{t}\) are zero-mean, \(\mathcal{O}\left(1\right)\)-subgaussian, and \(\mathbb{E}[\mathbf{x}_{t}\mathbf{x}_{t}^{\top}]=\mathbf{I}_{p}\). **(B)** Task vectors are generated according to a ground-truth supernet \(\mathbf{f}^{\star}=\{((\bar{\mathbf{B}}_{\ell}^{k})_{k=1}^{K_{\ell}})_{\ell=1}^{L},(\bar{\mathbf{h}}_{t },\bar{\alpha}_{t})_{t=1}^{T}\}\) so that \(\mathbf{\theta}_{t}^{\star}=\bar{\mathbf{B}}_{\bar{\alpha}_{t}}^{\top}\bar{\mathbf{h}}_{t}\). \(\mathbf{f}^{\star}\) is normalized so that \(\|\bar{\mathbf{B}}_{\ell}^{k}\|=\|\bar{\mathbf{h}}_{t}\|=1\)._

Our second assumption is a task diversity condition adapted from [11, 12, 13] that facilitates the identifiability of the ground-truth supernet.

**Assumption 5** (Diversity during MTL): _Cluster the tasks by their pathways via \(\mathbf{H}_{\alpha}=\{\bar{\mathbf{h}}_{t}\mid\bar{\alpha}_{t}=\alpha\}\). Define the cluster population \(\gamma_{\alpha}=|\mathbf{H}_{\alpha}|/p_{L}\) and covariance \(\mathbf{\Sigma}_{\alpha}=\gamma_{\alpha}^{-1}\sum_{\mathbf{h}\in\mathbf{H}_{\alpha}}\mathbf{h}\mathbf{h} ^{\top}\). For a proper constant \(c>0\) and for all pathways \(\alpha\) chosen by some source task, we have \(\mathbf{\Sigma}_{\alpha}\succeq c\,\mathbf{I}_{p_{L}}\)._

Verbally, this condition requires that if a pathway is chosen by a source task, that pathway should contain diverse tasks so that the (M\({}^{2}\)TL) phase can learn a good representation that can benefit transfer learning. However, this definition is flexible in the sense that pathways can still have sophisticated interactions/intersections, and we do not assume anything for the pathways that are not chosen by source tasks. We also face the challenge that some pathways can be a lot more populated than others, and the target task might suffer from poor MTL representation quality over less populated pathways. The following assumption is key to overcoming this issue by enforcing a distributional prior on the target task pathway so that _its pathway is similar to the source tasks on average_.

**Assumption 6** (Distribution of target task): _Draw \(\alpha_{\mathcal{T}}\) uniformly at random from the source pathways \((\bar{\alpha}_{t})_{t=1}^{T}\). The target task is distributed as in Assumption 4(A) with pathway \(\alpha_{\mathcal{T}}\) and \(\mathbf{\theta}_{\mathcal{T}}^{\star}=\bar{\mathbf{B}}_{\alpha_{\mathcal{T}}}^{\top} \mathbf{h}_{\mathcal{T}}\) with \(\|\mathbf{h}_{\mathcal{T}}\|=1\)._

With these assumptions, we have the following result that guarantees end-to-end multipath learning (the (M\({}^{2}\)TL) phase followed by (TLOP) using the MTL representation).

**Theorem 4**: _Suppose Assumptions 3-6 hold and \(\ell(\hat{y},y)=(y-\hat{y})^{2}\). Additionally assume the input set \(\mathcal{X}\subset\mathcal{B}^{p}(R)\) for a constant \(R>0\) and \(\mathcal{H}_{\mathcal{T}}\subset\mathbb{R}^{p_{L}}\).
Solve the MTL problem (M\({}^{2}\)TL) with the knowledge of the ground-truth pathways \((\bar{\alpha}_{t})_{t=1}^{T}\) to obtain a supernet \(\hat{\mathbf{\phi}}\), where \(NT\gtrsim\text{DoF}(\mathcal{F})\log(NT)\). Solve the transfer learning problem (TLOP) with \(\hat{\mathbf{\phi}}\) to obtain a target hypothesis \(\hat{f}_{\hat{\mathbf{\phi}}}\). Then, with probability at least \(1-3e^{-ch}-\delta\), the path-averaged excess target risk (3) obeys \(\mathbb{E}_{\alpha_{\mathcal{T}}}[\mathcal{R}_{\text{TLOP}}(\hat{f}_{\hat{\bm {\phi}}})]\)_

\[\lesssim p_{L}\sqrt{\frac{L\cdot\text{DoF}(\mathcal{F})+\log(8/\delta)}{NT}}+ \frac{p_{L}}{M}+\sqrt{\frac{\log(8|\mathcal{A}|/\delta)}{M}}.\]

_Here \(\text{DoF}(\mathcal{F})=T\cdot p_{L}+\sum_{\ell=1}^{L}K_{\ell}\cdot p_{\ell} \cdot p_{\ell-1}\), and \(\mathbb{E}_{\alpha_{\mathcal{T}}}\) denotes the expectation over the random target pathway._

In words, this result controls the target risk in terms of the sample size of the target task and the sample size during multitask representation learning, and provides a concrete instantiation of the discussion following Theorem 2. In Theorem 9 in the appendix, we provide a tighter bound on the expected transfer risk when the linear head \(\mathbf{h}_{\mathcal{T}}\) is uniformly drawn from the unit sphere. The primary challenge in our work compared to related vanilla MTL results by [11, 12, 13, 14] is the fact that we deal with exponentially many pathway representations, many of which may be of low quality. Assumption 6 allows us to convert the task-averaged MTL risk into a transfer learning guarantee over a _random pathway_. Finally, Theorem 4 assumes that the source pathways are known during the MTL phase. In Appendix E, we show that this assumption is indeed necessary: Otherwise, one can construct scenarios where the (M\({}^{2}\)TL) problem admits an alternative solution \(\tilde{\mathbf{f}}\) with optimal MTL risk but the resulting supernet \(\tilde{\mathbf{\phi}}\) achieves poor target risk. The supplementary material discusses this challenge and identifies additional conditions that make the ground-truth pathways uniquely identifiable when we solve (M\({}^{2}\)TL).

## 5 Insights from Hierarchical Representations

We now discuss the special two-layer supernet structure depicted in Figure 1(b). This setting groups tasks into \(K:=K_{2}\) clusters, and the first-layer module is shared across all tasks (\(K_{1}=1\)). Ignoring the first layer, the pathway \(\alpha_{t}\in[K]\) becomes the clustering assignment of task \(t\). Applying Theorem 1, we obtain a generalization bound of

\[\mathcal{R}_{\text{M}^{2}\text{TL}}(\hat{\mathbf{f}})\lesssim\widetilde{\mathcal{G }}_{NT}(\Psi_{1})+\sqrt{K}\widetilde{\mathcal{G}}_{NT}(\Psi_{2})+\widetilde{ \mathcal{G}}_{N}(\mathcal{H})+\sqrt{\frac{\log K}{N}}.\]

Here, \(\psi_{1}\in\Psi_{1}\) is the shared first-layer module, \(\psi_{2}^{k}\in\Psi_{2}\) is the module assigned to cluster \(k\in[K]\) that personalizes its representation, and we have \(|\mathcal{A}|=K\). To provide further insights, let us focus on linear representations with the notation of Section 4: \(\psi_{1}(\mathbf{x})=\mathbf{B}_{1}\mathbf{x}\), \(\psi_{2}^{k}(\mathbf{x}^{\prime})=\mathbf{B}_{2}^{k}\mathbf{x}^{\prime}\), and \(h_{t}(\mathbf{x}^{\prime\prime})=\mathbf{h}_{t}^{\top}\mathbf{x}^{\prime\prime}\) with dimensions \(\mathbf{B}_{1}\in\mathbb{R}^{R\times p}\), \(\mathbf{B}_{2}^{k}\in\mathbb{R}^{r\times R}\), \(\mathbf{h}_{t}\in\mathbb{R}^{r}\) and \(r\leq R\leq p\).
Our bound now takes the form

\[\mathcal{R}_{\text{M}^{2}\text{TL}}(\hat{\mathbf{f}})\lesssim\sqrt{\frac{Rp+KrR+T(r +\log K)}{NT}},\]

where \(Rp\) and \(KrR\) are the numbers of parameters in supernet layers \(1\) and \(2\), and \((r+\log K)/N\) is the per-task cost of learning the pathway and prediction head. Let us contrast this with shallow MTL approaches that use \(1\)-layer supernets.

\(\bullet\)**Vanilla MTL:** Learn \(\mathbf{B}_{1}\in\mathbb{R}^{R\times p}\) and learn larger prediction heads \(\mathbf{h}_{t}^{V}\in\mathbb{R}^{R}\) (no clustering needed).

\(\bullet\)**Cluster MTL:** Learn larger cluster modules \(\mathbf{B}_{2}^{C,k}\in\mathbb{R}^{r\times p}\), and learn the pathway \(\alpha_{t}\) and head \(\mathbf{h}_{t}\in\mathbb{R}^{r}\) (no \(\mathbf{B}_{1}\) needed).

**Experimental Insights.** Before providing a theoretical comparison, let us discuss the experimental results where we compare these three approaches on a realizable dataset generated according to Figure 1(b). Specifically, we generate \(\bar{\mathbf{B}}_{1}\) and \(\{\bar{\mathbf{B}}_{2}^{k}\}_{k=1}^{K}\) with orthonormal rows, uniformly at random and independently. We also generate \(\bar{\mathbf{h}}_{t}\) uniformly at random over the unit sphere, independently. Let \(\bar{\alpha}_{t}\) be the cluster assignment of task \(t\), where each cluster has the same number of tasks, \(\bar{T}=T/K\). The distribution \(\mathcal{D}_{t}\) associated with task \(t\) is generated as

\[y=\mathbf{x}^{\top}\mathbf{\theta}_{t}^{\star}\quad\text{where}\quad\mathbf{\theta}_{t}^{ \star}=(\bar{\mathbf{h}}_{t}^{\top}\bar{\mathbf{B}}_{2}^{\bar{\alpha}_{t}}\bar{\mathbf{B}}_{1})^{ \top},\;\mathbf{x}\sim\mathcal{N}(\mathbf{0},\mathbf{I}_{p}),\]

without label noise. We evaluate and present results from two scenarios where the cluster assignment \(\bar{\alpha}_{t}\) of each task is known (Figure 2) or not (Figure 3). In the figures, the MTL, Cluster-MTL, and Multipath-MTL labels correspond to our single-representation, clustering, and hierarchical MTL strategies, respectively. In Figure 2, we solve the MTL problems with the knowledge of the clustering \(\bar{\alpha}_{t}\). We set the ambient dimension \(p=32\), shared embedding \(R=8\), and cluster embeddings \(r=2\). We consider a base configuration of \(K=40\) clusters, \(\bar{T}=T/K=10\) tasks per cluster, and \(N=10\) samples per task (see the supplementary material for further details). Figure 2 compares the performance of the three approaches in terms of the task-averaged MTL test risk and demonstrates consistent benefits of Multipath MTL for varying \(K,\bar{T},N\). We also consider the setting where \(\bar{\alpha}_{t}\), \(t\in[T]\), are unknown during training. We set \(p=128\), \(R=32\), and \(r=2\), and fix the number of clusters \(K=50\) and cluster size \(\bar{T}=10\). In this experiment, instead of using the ground-truth clustering \(\bar{\alpha}_{t}\), we also learn the clustering assignment \(\hat{\alpha}_{t}\) for each task. As we discuss and visualize in the supplementary material, it is not easy to cluster random tasks even with the hindsight knowledge of the task vectors \(\mathbf{\theta}_{t}^{\star}\). To overcome this issue, we add correlation between tasks in the same cluster. Specifically, we generate the prediction head by \(\bar{\mathbf{h}}_{t}^{\star}=\gamma\bar{\mathbf{h}}^{k}+(1-\gamma)\bar{\mathbf{h}}_{t}\), where \(\bar{\mathbf{h}}^{k},\bar{\mathbf{h}}_{t}\) are random unit vectors corresponding to cluster \(k\) and task \(t\) (assuming \(\bar{\alpha}_{t}=k\)).
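The generator below sketches this data model (our code; function names are illustrative): orthonormal \(\bar{\mathbf{B}}_{1}\) and \(\bar{\mathbf{B}}_{2}^{k}\), unit-norm heads, and noiseless labels, with \(\gamma=0\) recovering the known-cluster setting above.

```python
import numpy as np

def orthonormal_rows(rng, r, c):
    """An r x c matrix with orthonormal rows, drawn uniformly at random."""
    Q, _ = np.linalg.qr(rng.standard_normal((c, r)))
    return Q.T

def make_hierarchical_tasks(p=32, R=8, r=2, K=40, T_bar=10, N=10,
                            gamma=0.0, seed=0):
    rng = np.random.default_rng(seed)
    B1 = orthonormal_rows(rng, R, p)                      # shared layer
    B2 = [orthonormal_rows(rng, r, R) for _ in range(K)]  # cluster layers
    tasks, alphas = [], []
    for k in range(K):
        h_cluster = rng.standard_normal(r)
        h_cluster /= np.linalg.norm(h_cluster)
        for _ in range(T_bar):
            h = rng.standard_normal(r)
            h /= np.linalg.norm(h)
            h = gamma * h_cluster + (1 - gamma) * h  # intra-cluster correlation
            theta = h @ B2[k] @ B1                   # theta_t in R^p
            X = rng.standard_normal((N, p))
            tasks.append((X, X @ theta))             # noiseless labels
            alphas.append(k)
    return tasks, alphas, (B1, B2)
```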
To cluster tasks, we first run vanilla MTL and learn the shared representation \(\hat{\mathbf{B}}_{1}\) and heads \((\hat{\mathbf{h}}_{t}^{V})_{t=1}^{T}\). Next, we build task vector estimates \(\hat{\mathbf{\theta}}_{t}:=\hat{\mathbf{B}}_{1}^{\top}\hat{\mathbf{h}}_{t}^{V}\) and form a \(T\times T\) task similarity matrix using the Euclidean distance metric. Applying standard \(K\)-means clustering to it provides a clustering assignment \(\hat{\alpha}_{t}\). In the experiment, we set \(\gamma=0.6\) to make sure the hindsight knowledge of \(\mathbf{\theta}_{t}^{\star}\) is sufficient to correctly cluster all tasks. Results are presented in Figure 3, where solid curves solve MTL with the ground truth \(\bar{\alpha}_{t}\) while dashed curves use \(\hat{\alpha}_{t}\). We observe that when given enough samples (\(N\geq 60\)), all tasks are grouped correctly even if the MTL risk is not zero. More importantly, Multipath MTL outperforms both vanilla MTL and cluster MTL even when the clustering is not fully correct.

Figure 2: We compare the sample complexity of MTL, Cluster-MTL and Multipath-MTL in a noiseless linear regression setting. For each figure, we fix two of the configurations and vary the other one. We find that Multipath-MTL is superior to both baselines of MTL and Cluster-MTL, as predicted by our theory. The solid curves are the median risk and the shaded regions highlight the first and third quartile risks. Each marker is obtained by averaging 20 independent realizations.

Figure 3: We group the \(T=500\) tasks into \(K=50\) clusters and compare the sample complexity of different MTL strategies. Given different sample sizes, we cluster tasks based on the trained MTL model and solve Cluster-/Multipath-MTL based on the assigned clusters. Solid curves are results using the ground-truth cluster knowledge \(\bar{\alpha}_{t}\) and dashed curves use the learned clustering \(\hat{\alpha}_{t}\). The experimental setting follows that of Figure 2.

**Understanding the benefits of Multipath MTL.** Naturally, the superior numerical performance of Multipath MTL in Figures 2 and 3 partly stems from the hierarchical dataset model we study. This model will also shed light on the shortcomings of 1-layer supernets, drawing from our theoretical predictions. First, observe that all three baselines are exactly specified: We use the smallest model sizes that capture the ground-truth model, so that all three can achieve zero test risk as \(N,K,T\) grow. For instance, Vanilla MTL achieves zero risk by setting \(\mathbf{B}_{1}=\bar{\mathbf{B}}_{1},\mathbf{h}_{t}^{V}=(\bar{\mathbf{B}}_{2}^{\bar{\alpha}_{t}})^{ \top}\bar{\mathbf{h}}_{t}\), and Cluster MTL achieves zero risk by setting \(\mathbf{B}_{2}^{C,k}=\bar{\mathbf{B}}_{2}^{k}\bar{\mathbf{B}}_{1},\mathbf{h}_{t}=\bar{\mathbf{h}}_ {t}\). Thus, the benefit of Multipath MTL arises from its finer-grained weight sharing across tasks, which reduces test risk. In light of Sec. 4, the generalization risks of these approaches can be bounded as \(\sqrt{\text{DoF}(\mathcal{F})/NT}\), where the parameter counts compare as **Vanilla:** \(Rp+TR\), **Cluster:** \(Krp+Tr\), **Multipath:** \(Rp+KrR+Tr\). From this, it can be seen that Multipath is never worse than the others as long as \(Kr\geq R\) and \(\bar{T}=T/K\geq r\). These conditions hold under the assumption that the multipath model is of minimal size: Otherwise, there would be a strictly smaller zero-risk model by setting \(R\gets Kr\) and \(r\leftarrow\bar{T}\). Conversely, Multipath shines in the regime \(Kr\gg R\) or \(\bar{T}\gg r\). As \(\frac{Kr}{R},\frac{p}{R}\rightarrow\infty\), Multipath strictly outperforms Cluster MTL. This arises from a _cluster diversity_ phenomenon that connects to the _task diversity_ notions of prior art.
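As a quick numeric check of the parameter counts above, instantiating them at the base configuration of Figure 2 (\(p=32\), \(R=8\), \(r=2\), \(K=40\), \(T=K\bar{T}=400\)) shows Multipath needing roughly half the parameters of either baseline:

```python
def dof(p, R, r, K, T):
    return {"Vanilla":   R * p + T * R,
            "Cluster":   K * r * p + T * r,
            "Multipath": R * p + K * r * R + T * r}

print(dof(p=32, R=8, r=2, K=40, T=400))
# {'Vanilla': 3456, 'Cluster': 3360, 'Multipath': 1696}
```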
In essence, since the \(r\)-dimensional clusters lie on a shared \(R\)-dimensional space, as we add more clusters beyond \(Kr\geq R\), they will collaboratively estimate the shared subspace, which in turn helps estimate their local subspaces by projecting them onto the shared one. As \(\frac{\bar{T}}{r},\frac{R}{r}\rightarrow\infty\), Multipath strictly outperforms Vanilla MTL. \(\frac{\bar{T}}{r}\) is needed to ensure that there is enough task diversity within each cluster to estimate its local subspace. Finally, the \(\frac{R}{r}\) ratio is the few-shot learning benefit of clustering over Vanilla MTL. The prediction heads of Vanilla MTL are larger, which necessitates a larger \(N\) (at the minimum \(N\geq R\)), whereas Multipath works with as little as \(N\geq r\). The same argument also implies that clustering/hierarchy would enable better transfer learning.

## 6 Related Work

Our work is related to a large body of literature spanning efficient architectures and statistical guarantees for MTL, representation learning, task similarity, and subspace clustering.

\(\bullet\)**Multitask Representation Learning.** While MTL problems admit multiple approaches, an important idea is building shared representations to embed tasks in a low-dimensional space [22, 23, 24, 25]. After identifying this low-dimensional representation, new tasks can be learned in a sample-efficient fashion, in line with the benefits of deep representations in modern ML applications. While most earlier works focus on linear models, [23, 24] provide guarantees for general hypothesis classes through empirical process theory, improving over [2]. More recently, there is a growing line of work on multitask representations that spans tighter sample complexity analysis [19, 25, 26, 27, 28], convergence guarantees [29, 28, 26, 27, 28], lifelong learning [23, 24], and decision making problems [25, 26, 27, 28]. Closest to our work is [26], which provides tighter sample complexity guarantees compared to [23]. Our problem formulation generalizes prior work (which is mostly limited to a single shared representation) by allowing deep compositional representations computed along supernet pathways. To overcome the associated technical challenges, we develop multilayer chain rules for Gaussian complexity, introduce new notions to assess the quality of supernet representations, and develop new theory for linear representations.

\(\bullet\)**Quantifying Task Similarity and Clustering.** We note that task similarity and clustering have been studied by [23, 24, 25, 26, 27, 28]; however, these works do not come with comparable statistical guarantees. Relations between tasks are leveraged even more broadly [23, 24]. Our experiments on linear Multipath MTL connect well with the broader subspace clustering literature [23, 24, 25]. Specifically, each learning task \(\theta_{t}\) can be viewed as a point on a high-dimensional subspace. Multipath MTL aims to cluster these points into smaller subspaces that correspond to task-specific representations. Our challenge is that we only get to see the points through the associated datasets.
\(\bullet\)**ML Architectures and Systems.** While traditional ML models tend to be good at a handful of tasks, next-generation neural architectures are expected to excel at a diverse range of tasks while allowing for multiple input modalities. To this aim, task-specific representations can help address both computational and data-efficiency challenges. Recent works [23, 24, 25, 26, 27, 28, 29, 28] propose hierarchical/clustering approaches to group tasks in terms of their similarities, [23, 24, 25, 26, 27, 28, 29] focus on training mixture-of-experts (MoE) models, and, similar to pathways, [26, 27, 28, 29] study task routing. In the context of lifelong learning, PathNet, PackNet [23, 24], and many other existing methods [23, 24, 25, 26, 27, 28] propose to embed many tasks into the same network to facilitate sample/compute efficiency. PathNet as well as SNR [26] propose methods to identify pathways/routes for individual tasks and efficiently compute them over the conditional subnetwork. With the advent of large language models, the conditional computation paradigm is witnessing growing interest, with architectural innovations such as muNet, GShard, Pathways, and PaLM [25, 27, 28, 29, 28, 29], which provide strong motivation for theoretically-grounded Multipath MTL methods.

## 7 Discussion

This work explored novel multitask learning problems which allow for task-specific representations that are computed along pathways of a large supernet. We established generalization bounds under a general setting, which proved insightful when specialized to linear or hierarchical representations. We believe there are multiple exciting directions to explore. First, it is desirable to develop stronger control over the generalization risk of specific groups of tasks; our Lemma 1 is a step in this direction. Second, what are the risk upper/lower bounds for Multipath MTL as we vary the depth and width of the supernet graph? The discussion in Section 5 falls under this question, where we demonstrate the sample complexity benefits of Multipath MTL over traditional MTL approaches. Finally, following the experiments in Section 5, can we establish similar provable guarantees for computationally-efficient algorithms (e.g. method of moments, gradient descent)?

## Acknowledgements

The authors would like to thank Zhe Zhao for helpful discussions and for pointing out related works. This work was supported in part by the NSF grants CCF-2046816 and CCF-2212426, a Google Research Scholar award, and Army Research Office grant W911NF2110312.
2304.09863
End-User Development for Artificial Intelligence: A Systematic Literature Review
In recent years, Artificial Intelligence has become more and more relevant in our society. Creating AI systems is almost always the prerogative of IT and AI experts. However, users may need to create intelligent solutions tailored to their specific needs. In this way, AI systems can be enhanced if new approaches are devised to allow non-technical users to be directly involved in the definition and personalization of AI technologies. End-User Development (EUD) can provide a solution to these problems, allowing people to create, customize, or adapt AI-based systems to their own needs. This paper presents a systematic literature review that aims to shed light on the current landscape of EUD for AI systems, i.e., how users, even without skills in AI and/or programming, can customize the AI behavior to their needs. This study also discusses the current challenges of EUD for AI, the potential benefits, and the future implications of integrating EUD into the overall AI development process.
Andrea Esposito, Miriana Calvano, Antonio Curci, Giuseppe Desolda, Rosa Lanzilotti, Claudia Lorusso, Antonio Piccinno
2023-04-14T09:57:36Z
http://arxiv.org/abs/2304.09863v2
# End-User Development for Artificial Intelligence: A Systematic Literature Review

###### Abstract

In recent years, Artificial Intelligence has become more and more relevant in our society. Creating AI systems is almost always the prerogative of IT and AI experts. However, users may need to create intelligent solutions tailored to their specific needs. In this way, AI systems can be enhanced if new approaches are devised to allow non-technical users to be directly involved in the definition and personalization of AI technologies. End-User Development (EUD) can provide a solution to these problems, allowing people to create, customize, or adapt AI-based systems to their own needs. This paper presents a systematic literature review that aims to shed light on the current landscape of EUD for AI systems, i.e., how users, even without skills in AI and/or programming, can customize the AI behavior to their needs. This study also discusses the current challenges of EUD for AI, the potential benefits, and the future implications of integrating EUD into the overall AI development process.

Keywords: Artificial Intelligence, End-User Development, No-Code, Low-Code, AI Customization.

## 1 Introduction

A very recent survey by the McKinsey Global Institute found that, by 2022, approximately 50% of the surveyed companies were using AI in at least one function [1]. Furthermore, the recent proliferation of AI products such as ChatGPT has contributed to the growing popularity of the topic, as evidenced by a possible correlation between the popularity of the two keywords in Google searches [2]. We would like to emphasize that in this paper, unless otherwise specified, we assume a broad definition of AI that includes autonomous systems using machine learning, neural networks, and statistical methods, as well as recommender systems, adaptive systems, and systems for face, image, speech, and pattern recognition. The motivation for this study lies in the fact that the "one-size-fits-all" approach often adopted by AI systems can be an advantage when it comes to reaching a broader audience, thanks to its intrinsic generality. However, such an approach often renders the overall system inadequate for the specific and situational needs of different users. The full benefits of AI systems can be increased if new approaches are devised to allow non-technical users to be directly involved in the definition and personalization of AI technologies. In this direction, End-User Development (EUD) can provide a solution to these problems, allowing users to customize AI-based systems to their own needs and providing ways to deal with outliers and reduce bias. Another important motivation is that only a few literature reviews deal with the topic of EUD for AI. For example, Gresse von Wangenheim et al. provide a comprehensive mapping of tools aiding the teaching of machine learning using visual programming paradigms [3]. Similarly, Hauck et al. provide an overview of the available tools that, using node- or block-based programming, allow the development of smart IoT devices (thus powered by AI) [4]. An additional list of tools is provided by Queiroz et al., who review tools that may help teach AI to lay people with minimal prior understanding of what AI is [5]. More in line with our study, Li et al. provide a review of no/low-code tools for AI; however, they do not cover general techniques, research trends, and challenges, which are important aspects of an SLR to drive future activities in the related research area.
To fill these gaps and shed light on the current state of the applications of EUD for AI, this paper presents a Systematic Literature Review (SLR) that focuses on solutions that support end users in developing, customizing, and tailoring AI models, and on how such activities may shape the future of AI development. In addition, the SLR discusses the current challenges of EUD for AI, the potential benefits, and the future implications of integrating EUD into the overall AI development process. This paper is structured as follows: Section 2 details the SLR methodology; Section 3 describes the dimensions of the analysis; Section 4 discusses research challenges; Section 5 outlines the threats to the validity of this study; and, finally, Section 6 concludes the article.

## 2 Planning and Conducting the Systematic Literature Review

We conducted a Systematic Literature Review (SLR) via a reproducible and thorough approach to shed light on the current landscape of EUD for AI systems, i.e., how users, even without skills in AI and/or programming, can customize the AI behavior to their needs. According to Kitchenham, an SLR requires three steps: _planning_, _conducting_, and _reporting_ [6]. This section details the first two, while Section 3 details the last one.

### 2.1 Planning the SLR

Planning the SLR includes the following activities [6]: 1) formulation of the research question; 2) definition of the search strings; 3) selection of data sources; 4) definition of inclusion criteria. In the following, we report the details of each activity.

#### 2.1.1 Formulation of the Research Question.

The main goal of our SLR is to investigate the current state of research on EUD for AI systems. With this goal in mind, we formulated the following research question: _How can users perform EUD for AI systems?_ Answering this question allows us to provide insights into how the literature tackles the problem of AI customization and democratization. It will also drive the identification of future research, both by focusing on research trends and by presenting the challenges and limitations identified in the available literature.

#### 2.1.2 Definition of the Search Strings.

We defined a total of 9 search strings by deriving terms from the authors' knowledge of the subject matter. The strings resulted from combinations of the two keywords "end-user development" and "artificial intelligence". To ensure that most of the literature was covered by our search, we decided to also include the concepts of "no-code" and "low-code" (i.e., names for EUD that are common in modern commercial systems [7-9]), as well as the concept of "customization" (i.e., one of the main goals of EUD, as well as one of our interests). Thus, the resulting strings used to query the search engines were:

* EUD AI
* EUD AI customization
* End-user development AI
* End-user development AI customization
* No-code AI
* No-code AI customization
* Low-code AI
* Low-code AI customization
* AI customization

#### 2.1.3 Selection of Data Sources.

The chosen search engine is Google Scholar, as it is considered one of the top search engines for scientific researchers and can search the largest databases of scientific publications, ensuring wide coverage of searches.

#### 2.1.4 Definition of the Inclusion Criteria.

This step concerns the final selection of the relevant publications based on 3 inclusion criteria:

* _Peer reviewed_, i.e., the article is the result of a peer review process.
In the case of journal articles, we included publications that appeared in a journal ranked on Scimago as Q2 or Q1, while publications ranked as Q3 were carefully evaluated. In the case of conference articles, we considered the Core Conference Ranking, including publications that appeared in a venue with a score of B, A, or A*, while in the case of a score of C, we carefully evaluated the publication.

* _Written in English_.

* _Focused on EUD and AI_. Relevance to the topic of EUD for AI is assessed by analyzing the title and abstract of each publication, and the introduction if needed.

### 2.2 Conducting the Literature Review

After the initial planning phase, the literature review is conducted. Following what Kitchenham suggests, we performed two main activities: the literature review execution and the data synthesis [6], described in the subsequent subsections.

#### 2.2.1 Literature Review Execution.

This activity was performed between December 2022 and January 2023, following the process depicted in Figure 1, which mainly consists of 2 phases:

* _Digital library search_: we searched the Google Scholar digital library using the search strings described in Section 2.1;

* _Backward and forward snowballing search_: we checked references and citations of the publications resulting from the previous phase, as well as publications that cited publications from Phase 1 [10].

Figure 1: Flow diagram summarizing the selection of the publications along the 2 search phases.

The initial search across the digital library yielded a total of 48 potentially relevant publications. After a check for duplicates, a dataset of 46 publications was obtained. Each publication was then analyzed by reviewing the abstract, the introduction, and the conclusions, considering the inclusion criteria; this screening resulted in a total of 43 publications. After the application of the inclusion criteria, we excluded 34 publications, thus obtaining a dataset of 9 publications. Phase 2 allowed us to retrieve further publications, without any time constraint, leading the final set to 22 publications.

#### 2.2.2 Data Synthesis.

The 22 publications resulting from the search phases are listed in Table 1 and in the References Section. The distribution of the selected publications according to their publication year is shown in Figure 2.

Figure 2: Publications distribution by year.

## 3 Reporting and Analyzing the Results

This section reports the analysis of the literature to answer the research question presented in Section 2.1. Through a deep analysis of the selected publications, we defined 8 dimensions that characterize the existing EUD solutions for AI. This provides an overview of the state of the art, but it also provides an initial framework that may aid the design of novel EUD AI-based systems. A summary of the dimensions and the placement of the publications in such dimensions is reported in Table 1. An overview of the distribution of the publications in each dimension category is also shown in Figure 3.
\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline **Ref.** & \begin{tabular}{c} **Composition** \\ **paradigm** \\ \end{tabular} & \begin{tabular}{c} **Target** \\ **users** \\ \end{tabular} & **Technology** & **Domain** & **Usage** & \begin{tabular}{c} **Customization** \\ **level** \\ \end{tabular} & \begin{tabular}{c} **Approach** \\ **Output** \\ \end{tabular} \\ \hline [11] & \begin{tabular}{c} Component- \\ Based \\ \end{tabular} & \begin{tabular}{c} Lay \\ Users \\ \end{tabular} & Architecture & \begin{tabular}{c} AI Model \\ Development \\ \end{tabular} & Single & Tailoring & AI Model \\ \hline [12] & \begin{tabular}{c} Wizard-Based; \\ Rule-Based; \\ Component- \\ Based \\ \end{tabular} & Experts & IoT & \begin{tabular}{c} Education \\ and \\ Teaching \\ \end{tabular} & Collaborative & Tailoring & \begin{tabular}{c} Teaching \\ Suggestions \\ \end{tabular} \\ \hline [13] & Template-based & \begin{tabular}{c} Lay \\ Users; \\ Experts \\ \end{tabular} & AI models & \begin{tabular}{c} Domain- \\ Specific \\ Operations \\ \end{tabular} & Collaborative & Customization & AI Model \\ \hline [14] & Rule-Based & \begin{tabular}{c} Lay \\ Users; \\ Experts \\ \end{tabular} & IPA & \begin{tabular}{c} AI Model \\ Development \\ \end{tabular} & Single & Tailoring & AI Model \\ \hline [15] & \begin{tabular}{c} Component- \\ Based; \\ Workflow and \\ Data Diagrams \\ \end{tabular} & Experts & IPA & \begin{tabular}{c} Interaction \\ Design \\ \end{tabular} & Collaborative & Tailoring & AI Model \\ \hline [16] & Template-based & \begin{tabular}{c} Lay \\ Users \\ \end{tabular} & \begin{tabular}{c} Visual \\ Analytics \\ \end{tabular} & \begin{tabular}{c} Domain- \\ Specific \\ Operations \\ \end{tabular} & Collaborative & Customization & \begin{tabular}{c} Visualization \\ Prototype \\ \end{tabular} \\ \hline [17] & Wizard-Based & \begin{tabular}{c} Lay \\ Users \\ \end{tabular} & AI models & \begin{tabular}{c} Interaction \\ Design \\ \end{tabular} & Single & Customization & AI Model \\ \hline [18] & Template-based & \begin{tabular}{c} Lay \\ Users \\ \end{tabular} & AI models & \begin{tabular}{c} Domain- \\ Specific \\ Operations \\ \end{tabular} & Collaborative & Customization & \begin{tabular}{c} Business \\ Model \\ \end{tabular} \\ \hline [19] & Template-based & \begin{tabular}{c} Lay \\ Users \\ \end{tabular} & AI models & \begin{tabular}{c} Domain- \\ Specific \\ Operations \\ \end{tabular} & Collaborative & Customization & \begin{tabular}{c} Business \\ Model \\ \end{tabular} \\ \hline [20] & \begin{tabular}{c} Component- \\ Based \\ \end{tabular} & \begin{tabular}{c} Lay \\ Users \\ \end{tabular} & IoT & \begin{tabular}{c} Education \\ and \\ Teaching \\ \end{tabular} & Single & Customization & AI Model \\ \hline [21] & \begin{tabular}{c} Component- \\ Based \\ \end{tabular} & \begin{tabular}{c} Lay \\ Users \\ \end{tabular} & AI Models & \begin{tabular}{c} Education \\ and \\ Teaching \\ \end{tabular} & Single & Tailoring & AI Model \\ \hline [22] & \begin{tabular}{c} Component- \\ Based; Workflow and \\ Data Diagrams \\ \end{tabular} & \begin{tabular}{c} Lay \\ Users \\ \end{tabular} & AI Models & \begin{tabular}{c} AI Model \\ Development \\ \end{tabular} & Single & Customization & AI Model \\ \hline [23] & \begin{tabular}{c} Workflow and \\ Data Diagrams \\ \end{tabular} & \begin{tabular}{c} Lay \\ Users \\ \end{tabular} & AI Models & \begin{tabular}{c} Domain- \\ Specific \\ Operations \\ \end{tabular} & Single & Tailoring & AI Model \\ \hline [24] & \begin{tabular}{c} 
Component- \\ Based \\ \end{tabular} & \begin{tabular}{c} Lay \\ Users \\ \end{tabular} & AI Models & \begin{tabular}{c} AI Model \\ Development \\ \end{tabular} & Single & Customization & AI Model \\ \hline [25] & \begin{tabular}{c} Component- \\ Based \\ \end{tabular} & Experts & AI Models & \begin{tabular}{c} AI Model \\ Development \\ \end{tabular} & Single & Creation & AI Model \\ \hline [26] & \begin{tabular}{c} Component- \\ Based \\ \end{tabular} & \begin{tabular}{c} Lay \\ Users \\ \end{tabular} & AI Models & \begin{tabular}{c} Domain- \\ Specific \\ Operations \\ \end{tabular} & Single & Tailoring & AI Model \\ \hline \end{tabular} \end{table} Table 1: Summary of the 8 dimensions and their values for each publication of the SLR.

Figure 3: Frequencies of the distributions of the publications in each dimension category.

The resulting dimensions are described in the following paragraphs. For each dimension, we discuss the classification of the publications. Figure 4 provides an overview of the classification framework. It is worth remarking that the dimensions identified for and associated with each publication are not exclusive, since a publication can be associated with one or more dimensions.

**Dimension 1-Composition Paradigm.** EUD researchers have proposed several techniques to offer lay users the ability to customize their systems [33]. In the specific context of EUD for AI, we identified 5 techniques, which are described in the following paragraphs.

_Component-Based._ It consists of composing 2D or 3D objects that represent domain-specific concepts [33]: a typical example of this technique is the jigsaw metaphor. For example, Piro et al. present an interaction paradigm that extends an existing framework used to build chatbots for conversational AI [15]. The authors' proposal consists of manipulating 2D objects that represent a database's annotation schema and conversation patterns, allowing non-expert programmers to build chatbots from scratch.

_Wizard-Based._ The wizard-based approach is useful in situations where a task can be simplified into a sequence of simple operations that guide the users throughout the overall activity, thus reducing the cognitive load of the task [33]. An example is the proposal by Rodriguez-Garcia et al., who propose a system that guides the users with a wizard-like interface through the process of training an AI model (i.e., dataset creation, training, and testing), providing the possibility of using the trained model in other EUD tools (e.g., Scratch) [27].

Figure 4: A classification framework for the existing EUD solutions for AI.

_Template-Based._ It consists of presenting to the end-users pre-made and customizable functionalities, allowing them to edit parameters and/or text to fit their own needs [33]. For example, Iyer et al. present _Trinity_, a system for Data Mining that enables the end-users to visually perform three tasks: experiment management, label management, and feature management. For each task, this tool proposes a template with a set of default configurations that can simply be modified by the end users [13].

_Rule-Based._ It allows end-users to tailor AI components by defining trigger-action rules for specific purposes [33]. For instance, Rough et al. identify and categorize existing EUD for AI tools applied to the Intelligent Personal Assistant (IPA) domain: they illustrate the opportunity given to end-users to create their own rules to customize skills or construct routines by composing a rule, after the identification of trigger events and subsequent actions [14].
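To make the rule-based paradigm concrete, here is a minimal sketch of a trigger-action rule for an IPA routine. The class and the example routine are our own illustration and are not taken from any of the cited systems:

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Rule:
    """A trigger-action rule: when the trigger predicate matches an
    incoming event, all associated actions are executed in order."""
    trigger: Callable[[dict], bool]
    actions: List[Callable[[], None]]

    def handle(self, event: dict) -> None:
        if self.trigger(event):
            for action in self.actions:
                action()


# Hypothetical routine: "when I say 'good morning', read the news
# and report the weather".
morning_routine = Rule(
    trigger=lambda event: event.get("utterance") == "good morning",
    actions=[
        lambda: print("Reading today's headlines..."),
        lambda: print("The weather today is sunny."),
    ],
)

morning_routine.handle({"utterance": "good morning"})
```

The appeal of this paradigm for end-users is that a rule is a single, self-contained unit: the user only chooses a trigger and a list of actions, with no control flow to program.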
_Workflow and Dataflow Diagrams._ This technique concerns the way data are manipulated in the proposed solutions. It consists of using graphical elements, which can be interconnected and intertwined depending on the needs of the users (i.e., a graph) [33]. The approach can be useful for people who do not have a professional or technical programming background, by enabling them to manipulate workflows and data visually. For example, Godec et al. propose a solution that allows the definition of a pipeline to automatically analyze medical images [23].

**Dimension 2-Target Users.** One of the key requirements in designing an EUD tool for AI is to carefully identify who the end users will be, considering both their limitations and strengths. The main goal is to facilitate their experience and use of the final tool. The analyzed publications led to the identification of 4 main target users, which are described in the following paragraphs.

_Lay Users._ They can be defined as people who lack any IT or AI-related skills. In this class, we identified generic users and domain experts. For this review, we consider children and students also as lay users (regardless of their education in IT), as they are not IT or AI experts. Most of the retrieved research focuses on this class of users. For example, Zimmermann-Niefield et al. propose an application that allows its users to train and use ML models that help them during athletic exercises, without requiring any IT or AI knowledge [31]. Another example, focused on domain experts, is the proposal by Sanctorum et al., who strive to find solutions to help subject-matter experts autonomously manage a knowledge base in the toxicology domain [11].

_Experts._ This category includes users that are experts in IT and/or AI. Here we identified different types of experts, namely _IT experts_, _AI experts_, _developers_, and _researchers_. _IT experts_ are users that possess a technical background in IT but do not have any AI or developer skills. Kahn et al. provide a case study showing that a block-based programming environment may be better for this kind of user [24]. By _AI experts_, we mean users that are experts both in the field of IT and in AI, such as AI specialists and machine learning experts. For instance, Shaikh provides an example of low-code AI customization, geared toward IT experts who know AI models, for choosing the best Microsoft Azure services to create an AI-based system [28]. By _developers_ we mean users who are capable of programming: for example, Rough and Cowan discuss how the majority of EUD solutions for personal assistants are targeted toward developers, who know how to use Software Development Kits [14]. The last category is _researchers_, i.e., users that adopt this kind of solution for research purposes. For example, Tamilselvam et al. present and test a solution that allows the quick prototyping of neural networks using either tables or a drag-and-drop interface, allowing researchers to quickly create neural networks for further usage [29].

**Dimension 4-Technology.** The technology defines the constraints, the possibilities, and the context of use of the proposed solutions.
In this context, we identified 5 different types of technologies: the Internet of Things (IoT), AI Models, Intelligent Personal Assistants (IPAs), Visual Analytics, and Architecture.

_Internet of Things._ The Internet of Things (IoT) is defined as "An open and comprehensive network of intelligent objects that can auto-organize, share information, data, and resources, reacting and acting in face of situations and changes in the environment" [34]. We classify in this category the research that was focused mainly on IoT itself, discussing AI as a secondary subject. For example, Agassi et al. discuss an IoT system that recognizes gestures, and they provide a solution that allows users to customize the recognition model itself [20].

_AI Models._ We classify as part of this category all research that is mainly focused on the customization, tailoring, or creation of the AI models themselves, regardless of the specific context of use. For example, Xie et al. present an IDE plugin that allows its users to create, through the paradigms of visual programming, and visualize the neural network they are building [30].

_Intelligent Personal Assistant._ An _IPA_ is defined as a user interface, whose main interaction method is speech, that aims at performing tasks requested in natural language [14]. We classify in this category all publications that aim at enabling users to customize chatbots or personal assistants. For example, Rough and Cowan highlight existing EUD opportunities to allow users to customize their personal assistants to make them truly "personal" [14].

_Visual Analytics._ Visual analytics technology supports the analysis of datasets through sophisticated tools and processes using visual and graphical representations. An example is the solution proposed by Mishra et al., who discuss the development of a prototype tool that enables leaderboard revamping through customization, based on the focus area that the end-user is interested in [16].

_Architecture._ The last category does not refer to a concrete technology but to an abstract architecture to design EUD systems for AI. For instance, Sanctorum et al. propose an architecture representing the key phases and tasks of a knowledge graph's lifecycle, to guide end-users in the definition of custom knowledge bases [11].

**Dimension 5-Domain.** With this dimension, we aim at classifying the focus of each publication. The domain of a publication sets its goal, and it may heavily influence the approach used in the publication. A total of 4 domains have been identified.

_AI Model Development._ It represents the class of publications that mainly aim at providing the end-users with a customized AI model that they can use either for a generic task (chosen by them during the customization) or for a predefined task. For example, Carney et al. propose a solution that allows end users to train, through the manipulation of graphical elements, a pre-defined neural network architecture for classification, leaving the users the freedom to choose the classes themselves [22].

_Interaction Design._ We classify in this domain all the publications that aim at defining the way the interactive system supports users' activities and interacts with them [35]. For example, Bunt et al. try to discover new ways of interacting with AI-based systems, specifically with an AI-enabled recommender-system-like tool, which can provide users with suggestions on how to customize graphical interfaces according to their personal preferences [17].
_Domain-Specific Operations._ This general domain comprises all publications that aim at allowing users to customize or create AI models that are designed to perform a specific task in a specific domain. For example, Jauhar et al. explore how EUD and AI can be used in inventory and supply chains to enable retailers and operators to manage, as coherently and consistently as possible, costs, processes, anomalies, and predictions by using machine learning algorithms [36].

_Education and Teaching._ The last domain identified is that of education and teaching; it concerns how EUD, with its different subtopics, can be taught in various contexts and how awareness can be spread among developers, designers, and engineers. An example is by Paternò, who explores how AI and IT experts need to empower and encourage end users in using and creating daily automations [12].

**Dimension 6-Usage.** The _Usage_ dimension concerns the number of people that the system allows to collaborate during its use.

_Single User._ It refers to the use of a system, or a solution, by a single individual. For example, Sanctorum et al. present an approach to let the subject-matter expert alone manually manage a toxicology knowledge base [11].

_Collaborative._ It refers to the use of a system, or a solution, by a group of individuals with multiple perspectives and experiences. For instance, Iyer et al. propose a system that holds a shared vocabulary and workspace, enabling better collaboration between end users while recalibrating their skills as if they were equal partners [13].

**Dimension 7-Customization Level.** This dimension refers to the level of allowed modification of the AI-based software's components. We identify three possibilities: creation, customization, and tailoring.

_Creation._ Systems that allow the "creation" of AI models are systems that do not pose any limitation on the personalized system's capabilities, nor do they provide a pre-made system that can be adapted to the users' needs. Examples are by Xie et al. and Tamilselvam et al., who both define solutions that allow end users to graphically build neural networks [29, 30].

_Customization._ With customization, we identify all the approaches that allow the users to heavily alter the parameters of an existing AI model. However, the coarse task (e.g., classification) and the model architecture are predefined. For example, the solution proposed by Carney et al. allows training a pre-defined neural network for classification, providing the ability to customize the dataset and the available labels [22].

_Tailoring._ We categorize as "tailoring" systems all solutions that allow end users to _fine-tune_ a model for their specific needs, even though the specific task is predetermined. An example is the gesture recognition system proposed by Agassi et al., in which users can change specific gestures, but they are not able to edit other aspects of the AI model [20].

**Dimension 8-Approach Output.** This dimension was identified to define the types of results and outcomes of the proposed solutions. A total of 4 categories were identified.

_Teaching Suggestions._ The discussion concerning how EUD for AI should be taught is still open. An example of this discussion is in the work by Paternò, who provides suggestions as to how experts should encourage end users in using and creating daily automations [12].

_Visualization Prototype._ EUD solutions for AI customization may also aid the users by presenting results and metrics in different modalities.
For example, Mishra et al. propose visualization techniques that aid the end users in selecting models that are trained and compared fairly [16].

_Business Model._ The adoption of EUD may aid in making business decisions and may provide new business models to be exploited. This is suggested, for example, by Redchuk et al., who show that adopting no/low-code AI technologies may shorten the implementation cycles of new systems [18].

_AI Model._ The main output type concerns the AI model itself. We classify in this category all the approaches that aim at providing the end users with their own customized AI model. Most of the publications provide this type of output: for example, Piro et al. allow users to define chatbots that can be deployed immediately [15]. Similarly, Tamilselvam et al. propose a method that allows users to graphically build neural networks for later use [29].

## 4 Future Challenges

Throughout this SLR, some interesting and relevant challenges emerged that are worth reporting. This section thus discusses relevant open questions that may guide future research in EUD for AI.

**Adopt a Human-Centered Artificial Intelligence approach.** In recent years, a new perspective has emerged that aims to reconsider the centrality of humans while reaping the benefits of AI systems to augment rather than replace professional skills: Human-Centered AI (HCAI) is a novel framework that posits that high levels of human control are not incompatible with high levels of computer automation [37]. EUD for AI will play a central role in this direction. As emphasized by Schmidt, AI-based systems should allow humans to have an appropriate level of control by providing EUD approaches for the initial configuration of the system, as well as for its reconfiguration at the time of use, to satisfy user needs that cannot be anticipated by the automated system [38]. This is an approach that has not been much explored in the literature, so we believe that EUD can contribute in this direction.

**Support collaborative activities.** Another challenge lies in the collaborative aspect of EUD for AI. Very little work is available on this aspect: only 6 publications out of the retrieved 22 acknowledge the collaborative aspect of AI development. However, the creation, testing, and deployment of AI models (and AI systems in general) is a collaborative activity that involves multiple actors with different expertise [39]. This highlights the requirement of collaboration support in no/low-code AI tools for them to be used in real-world scenarios, especially by experts.

**Provide EUD solutions also for AI experts.** The SLR found that most of the research is focused on lay users, while few studies target experts. Although EUD for AI could lead to a democratization of AI technologies, it may be interesting to better explore how AI experts can benefit from EUD solutions for AI. In fact, although experts might create or adapt AI systems using technical tools (e.g., Python, R, Weka, etc.), providing an EUD environment might allow them to optimize their resources [18]. An example is KNIME, an end-to-end data science platform that provides a graphical user interface based on the graph metaphor to support experts from building analytical models to deploying them without coding [40].

**Enable the creation of AI solutions through EUD.**
A strong limitation of the current research on this topic is the scarce focus on AI system _creation_ rather than _customization_ or _tailoring_: only 2 out of the 22 retrieved publications reach this goal. Furthermore, these 2 publications focus on expert users: research on AI model creation by lay users is, as far as we could find out, completely missing. The lack of the possibility for end users to create rather than adapt the model also affects solutions aimed at experts [27].

**Define the right composition paradigm for each type of user.** At the time of the review, no studies were available on the relationship between EUD programming paradigms and the level of expertise and skills of the users. This is a very important aspect that has been addressed in similar contexts. For example, trigger-action programming of IoT devices was found to be easier for non-technical users when using wizard procedures as well as component- or template-based paradigms, while the graph-based metaphor was found to be more effective for technical users [41, 42]. Certainly, lessons learned in similar domains can be a good starting point for EUD for AI, but further research is needed to provide the users with the right abstraction mechanisms.

## 5 Threats to Validity

Several threats to validity can affect the results of a systematic literature review. In the following, we report how we mitigated the most critical ones.

**Selection bias.** This occurs when the studies included in the review are not representative of the entire population of studies on the topic. This has been mitigated by i) manually reviewing the publications to ensure their compliance with the SLR goal, and ii) performing two phases, i.e., a search on digital libraries and snowballing.

**Publication bias.** This occurs when studies that show statistically significant results are more likely to be published than studies that do not. This aspect has been mitigated by manually reading those publications that do not report any result but only a technical solution with preliminary results. Besides the generic inclusion criteria, their relevance for our SLR was assessed considering, for example, the number of citations and the novelty of the solution.

**Time lag bias.** This occurs when the review does not include all relevant studies because they were published after the review was conducted. In this case, we can safely assume that this threat is not so evident in our study, since the SLR was performed 2 months before its submission.

**Publication quality.** This occurs when studies of poor quality are included in the review. To mitigate this aspect, we defined inclusion criteria on the quality of the venue of the publication, leaving the inclusion of publications that appeared in venues of lower quality to a manual evaluation by the authors of this SLR.

## 6 Conclusions

This SLR provides an overview of the current state of research in the field of EUD for no/low-code AI creation and customization. The first contribution is the identification of the main topics that are discussed in the community, highlighting potential benefits that lay users or businesses may get from the adoption of EUD technologies. As a second contribution, the SLR sheds light on existing limitations and key challenges that affect the current research landscape, identifying some insights into possible future research directions. In future work, this SLR might be extended in different directions. First, further digital libraries might be considered, for example, Scholar, Elsevier, and IEEE.
Then, other keywords might be defined to increase coverage. Finally, an additional manual search can be performed on conference proceedings and journals relevant to the topics of the SLR.
2303.02991
Kolmogorov $\varepsilon$-entropy of the uniform attractor for a wave equation
This paper is concerned with a non-autonomous sup-cubic semilinear wave equation in a smooth bounded domain of $\mathbb R^{3}$. Using the introduced weak topology entropy, we obtain an upper bound for the $\varepsilon$-entropy of the uniform attractor in the case where the external forces are not translation-compact.
Yangmin Xiong, Chunyou Sun
2023-03-06T09:38:30Z
http://arxiv.org/abs/2303.02991v1
# Kolmogorov \(\varepsilon\)-entropy of the uniform attractor for a wave equation

###### Abstract.

This paper is concerned with a non-autonomous sup-cubic semilinear wave equation in a smooth bounded domain of \(\mathbb{R}^{3}\). Using the introduced weak topology entropy, we obtain an upper bound for the \(\varepsilon\)-entropy of the uniform attractor in the case where the external forces are not translation-compact.

**Keywords** Kolmogorov \(\varepsilon\)-entropy; Uniform attractor; Non-translation compact; Wave equation; Strichartz estimate

2000 Mathematics Subject Classification: 35B40, 35B45, 35L70.

\({}^{*}\) Corresponding author. Email address: [email protected] (Y.Xiong), [email protected] (C.Sun). This work was supported by the NSFC (Grant No. 12271227).

Here \(\|\cdot\|\) is the usual \(L^{2}(\Omega)\)-norm.

The long-time behavior of autonomous systems can be described by global attractors, which have been extensively studied in [1, 8, 9, 14, 25]; such an attractor usually has finite fractal and Hausdorff dimension. The study becomes more complicated for non-autonomous equations. One of the main approaches is the so-called uniform attractor, which originated from the work of Haraux [15, 16] and was further developed by Chepyzhov and Vishik [6, 7, 8]. By constructing a skew-product flow, one can reduce the problem to an autonomous one in the extended phase space and keep the invariance, but this requires some compactness of the hull. The Hausdorff and fractal dimension of the uniform attractor may be infinite [7, 8, 22]. A possible way to measure the "thickness" of infinite-dimensional attractors, suggested in [8], is to estimate their Kolmogorov \(\varepsilon\)-entropy.

Based on the results on the existence and structure of the uniform attractor \(\mathcal{A}_{\Sigma}\), and under the assumptions that the symbol space \(\Sigma\) is compact in the Hausdorff topological space \(\Xi\) endowed with the local uniform convergence topology, that the family of processes \(\{U_{h}(t,\tau),h\in\Sigma\}\) satisfies a Lipschitz condition with respect to the symbols, and that it is uniformly quasidifferentiable, the authors of [8] proved that the \(\varepsilon\)-entropy of the uniform attractor admits an upper estimate of the following form:

\[\mathbb{H}_{\varepsilon}(\mathcal{A}_{\Sigma};\mathcal{E})\leq D\log_{2}\frac{\varepsilon_{0}}{\varepsilon}+\mathbb{H}_{\lambda\varepsilon}(\Sigma_{0,L\log_{2}\frac{\varepsilon_{0}}{\varepsilon}};\Xi_{0,L\log_{2}\frac{\varepsilon_{0}}{\varepsilon}}),\ \ \forall\ \varepsilon\leq\varepsilon_{0},\]

provided that the behavior of the entropy of \(\Pi_{0,l}\Sigma=\Sigma_{0,l}\) in \(\Xi_{0,l}\) is known. Here the positive constants \(D\), \(\lambda\), \(L\) and \(\varepsilon_{0}\) depend only on the parameters of the equation and on the topological space \(\Xi\) in which the symbol space \(\Sigma\) lives, but are independent of \(\varepsilon\).

With the progress on wave equations obtained in recent years, there are results showing that one can still obtain the existence and structure of the strong uniform attractor for systems with more general external forces which are not translation compact; see, e.g., [19, 21, 23, 28]. Meanwhile, based on the recent extension of Strichartz-type estimates to bounded domains in [2, 4], the attractor theory has been developed for semilinear wave equations with sup-cubic nonlinearities in both the autonomous and non-autonomous cases; see [17, 20, 21, 23].
Based on the above existing results, it is natural to study the Kolmogorov \(\varepsilon\)-entropy of the uniform attractors for equation (1.1) with sup-cubic nonlinearity and more general external forces. Since, for quintic wave equations, the behavior of the energy-to-Strichartz estimate as \(t\to\infty\) (crucial for the attractor theory) is still not clear in bounded domains with Dirichlet boundary conditions, we only consider the sub-quintic case in this paper. In particular, we consider the following two typical cases of extra regularity for \(g\):

\[g\in H^{1}_{b}(\mathbb{R};L^{2}(\Omega)) \tag{1.6}\]

or

\[g\in L^{2}_{b}(\mathbb{R};H^{1}(\Omega)), \tag{1.7}\]

which belong to the classes of so-called time regular and space regular functions in \(L^{2}_{b}(\mathbb{R};L^{2}(\Omega))\), respectively; see [28].

As we know, the value of the \(\varepsilon\)-entropy may depend on the topology chosen. In contrast to the translation-compact case, this brings some technical difficulties in our setting. We note that the Lipschitz condition with respect to the symbols proposed in [8], or the derivation of estimates for the difference between solutions [11, 27], is of fundamental significance for estimating the \(\varepsilon\)-entropy; however, it is no longer suitable in our case, or at least it is hard to verify in a straightforward way. Moreover, in the weak topology the distance between two symbols on each subinterval cannot be controlled by their distance on the whole interval, which prevents us from using an iteration method. To this end, we introduce a new number \(\mathbb{H}^{w}_{\varepsilon}(A,X;\,B,Y)\), the weak topology entropy of the set \(B\) in \(Y\) corresponding to the set \(A\) in \(X\), which depends only on the spaces \(X\), \(Y\) and the sizes of \(A\) and \(B\); in our application it turns out to depend only on the bounds of the symbols and the parameters of equation (1.1). This new definition is based on the observation that, by an asymptotic smoothing property of the solutions (which also yields the smoothness of the uniform attractor), the collection of the second components of the differences of solutions (with the same initial data and different symbols \(h_{1},h_{2}\in\Sigma\)) is uniformly bounded in \(X:=L^{\infty}(iT,(i+1)T;H^{1}_{0}(\Omega))\cap H^{1}(iT,(i+1)T;H^{-1}(\Omega))\) for all \(i\in\mathbb{N}^{+}\) and for some \(T>0\). Moreover, \(X\) can be compactly embedded into \(Y^{*}\), where \(Y=L^{2}(iT,(i+1)T;L^{2}(\Omega))\) is the space in which the symbol space \(\Sigma_{[iT,(i+1)T]}\) lives. Based on this, a special cover of \(\Sigma_{[T,(l+1)T]}\), \(l\in\mathbb{N}^{+}\), is defined and we obtain an upper bound on the cardinality of this cover. Meanwhile, we construct an intermediate object based on the non-autonomous exponential attractor proposed in [10, 12], which has finite fractal dimension and satisfies a uniform forward (and also pullback) exponential attraction property; most importantly, these two properties are independent of the specific choice of symbols obtained by time translations of the initial symbol.
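For the reader's convenience, we sketch why the compact embedding \(X\hookrightarrow Y^{*}\) used above holds; this is our own addition, recording the standard Aubin--Lions--Simon argument, with \(Y^{*}\) identified with \(Y\) since \(Y\) is a Hilbert space. Writing \(I=(iT,(i+1)T)\), we have

\[H^{1}_{0}(\Omega)\subset\subset L^{2}(\Omega)\subset H^{-1}(\Omega),\]

and any set bounded in \(X=L^{\infty}(I;H^{1}_{0}(\Omega))\cap H^{1}(I;H^{-1}(\Omega))\) is bounded in \(L^{\infty}(I;H^{1}_{0}(\Omega))\) with time derivatives bounded in \(L^{2}(I;H^{-1}(\Omega))\); hence it is precompact in \(C(\bar{I};L^{2}(\Omega))\) and therefore in \(Y=L^{2}(I;L^{2}(\Omega))\cong Y^{*}\).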
As an application, for the uniform attractor \(\mathcal{A}_{\Sigma}\) of equation (1.1), under the assumptions (1.2)-(1.4) and assuming \(g\) satisfies either (1.6) or (1.7), we obtain the following estimate for its \(\varepsilon\)-entropy:

\[\mathbb{H}_{\varepsilon}(\mathcal{A}_{\Sigma};\mathcal{E})\leq D^{\prime}\log_{2}\frac{1}{\varepsilon}+L^{\prime}\log_{2}\frac{1}{\varepsilon}\,\mathbb{H}^{w}_{\lambda^{\prime}\varepsilon^{\ell}}(r,X;\,\|g\|_{L^{2}_{b}(\mathbb{R};L^{2}(\Omega))},Y),\]

where the positive constants \(D^{\prime}\), \(L^{\prime}\), \(\lambda^{\prime}\), \(\ell\) and \(r\) depend only on the spaces \(X\), \(Y\) and the parameters of equation (1.1), but are independent of \(\varepsilon\). We remark that the space \(H^{1}_{b}(\mathbb{R};L^{2}(\Omega))\) or \(L^{2}_{b}(\mathbb{R};H^{1}(\Omega))\) is much better than \(L^{2}_{b}(\mathbb{R};L^{2}(\Omega))\); however, a function merely belonging to such more regular spaces is still not enough to guarantee that its hull is translation compact in \(L^{2}_{loc}(\mathbb{R};L^{2}(\Omega))\) (see, e.g., [8, 28]). Here, in this paper, we assume \(g\in H^{1}_{b}(\mathbb{R};L^{2}(\Omega))\) or \(L^{2}_{b}(\mathbb{R};H^{1}(\Omega))\) mainly to simplify the calculations and to present some examples illustrating how to use the weak topology entropy to give an upper bound for the \(\varepsilon\)-entropy of the considered uniform attractor.

The rest of the paper is organized as follows. In Section 2, we give the symbol spaces and briefly recall the results about uniform attractors. In Section 3, we introduce an entropy \(\mathbb{H}^{w}_{\varepsilon}(A,X;\,B,Y)\) with respect to the weak topology of \(Y\) for the subsets-spaces pair \((A,X;\,B,Y)\). In Section 4, we verify that the uniform attractor of equation (1.1) is more regular if the external force \(g\in L^{2}_{b}(\mathbb{R};L^{2}(\Omega))\) is more regular. In Section 5, we derive some asymptotic smoothing estimates for differences of solutions of (1.1) that allow us to construct a family of discrete exponential attractors, which has uniformly finite fractal dimension and enjoys a uniform exponential attraction property. Finally, an upper bound for the entropy of the attractor of equation (1.1) is given in Section 6 by using the newly defined weak topology entropy.

## 2. Preliminaries

Let \(\Xi\) be a Hausdorff topological space and let \(\{T(t),t\in\mathbb{R}\}\) be the continuous translation operators on \(\Xi\):

\[T(t)h(s):=h(s+t),\ \ s\in\mathbb{R}.\]

We let all time-dependent coefficients of the considered equations belong to a set \(\Sigma\) endowed with the topology of \(\Xi\). The set \(\Sigma\) is called the symbol space and \(h\in\Sigma\) is a symbol. Let us consider a typical symbol space \(\Sigma\) which only contains time-dependent external forces. Let \(H\) be a reflexive Banach space and consider functions \(g:\mathbb{R}\to H\). Assume that \(g\in L^{p}_{b}(\mathbb{R};H)\), \(1<p<\infty\). Consider the following set

\[\Sigma_{0}:=\{T(t)g(s);t\in\mathbb{R}\}:=\{g(t+s);t\in\mathbb{R}\}\]

and the closure in \(\Xi\) of the set \(\Sigma_{0}\). This closure is said to be the hull of the function \(g(s)\) in \(\Xi\) and is denoted by \(\mathcal{H}(g)\). For \(g\in L^{p}_{b}(\mathbb{R};H)\), \(1<p<\infty\), due to the Banach-Alaoglu theorem, the hull

\[\mathcal{H}(g):=[\{T(t)g,t\in\mathbb{R}\}]_{L^{p,w}_{loc}(\mathbb{R};H)} \tag{2.1}\]

is compact in \(L^{p,w}_{loc}(\mathbb{R};H)\).
We denote by \(L^{p,w}_{loc}(\mathbb{R};H)\) the space \(L^{p}_{loc}(\mathbb{R};H)\) endowed with the local weak convergence topology, which means that a sequence \(h_{n}\) converges to \(h\) as \(n\to\infty\) in \(L^{p,w}_{loc}(\mathbb{R};H)\) if and only if

\[\int_{t_{1}}^{t_{2}}\langle v(s),h_{n}(s)-h(s)\rangle ds\to 0\ \ \text{as}\ n\to\infty\]

for each bounded interval \([t_{1},t_{2}]\subset\mathbb{R}\) and any \(v\in L^{q}(t_{1},t_{2};H^{*})\) with \(1/p+1/q=1\). We know that \(\mathcal{H}(g)\) is compact in \(L^{p,w}_{loc}(\mathbb{R};H)\) and the translation group \(\{T(t),t\in\mathbb{R}\}\) is continuous in the topology of \(L^{p,w}_{loc}(\mathbb{R};H)\). It is also not difficult to see that

\[\|h\|_{L^{p}_{b}(\mathbb{R};H)}\leq\|g\|_{L^{p}_{b}(\mathbb{R};H)},\ \ \forall h\in\mathcal{H}(g).\]

Since translation boundedness is not sufficient to guarantee the existence of a strong uniform attractor, the most natural and most studied class is that of translation-compact (tr.c.) external forces introduced by Vishik and Chepyzhov; see [8]. We recall that \(g\in L^{p}_{b}(\mathbb{R};H)\) is translation-compact if the set \(\Sigma_{0}\) is precompact in \(L^{p}_{loc}(\mathbb{R};H)\). In [28], the author proposed several more general classes of external forces which are not translation compact but for which one can still establish the existence of strong uniform attractors. Here, we are mainly interested in the following two classes of external forces.

**Definition 2.1** ([28]).: Let \(H\) be a reflexive Banach space and \(1<p<\infty\). A function \(g\in L^{p}_{b}(\mathbb{R};H)\) is space regular if for every \(\varepsilon>0\) there exist a finite-dimensional subspace \(H_{\varepsilon}\subset H\), \(\text{dim}H_{\varepsilon}<\infty\), and a function \(g_{\varepsilon}\in L^{p}_{b}(\mathbb{R};H_{\varepsilon})\) such that \(\|g-g_{\varepsilon}\|_{L^{p}_{b}(\mathbb{R};H)}\leq\varepsilon\). Analogously, a function \(g\in L^{p}_{b}(\mathbb{R};H)\) is time regular if for every \(\varepsilon>0\) there exists a function \(g_{\varepsilon}\in H^{k}_{b}(\mathbb{R};H)\) for all \(k>0\) such that \(\|g-g_{\varepsilon}\|_{L^{p}_{b}(\mathbb{R};H)}\leq\varepsilon\).
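To illustrate the notion of space regularity, here is a short sketch (our own addition; for simplicity we assume the slightly stronger hypothesis \(g\in L^{2}_{b}(\mathbb{R};H^{1}_{0}(\Omega))\) instead of (1.7)). Let \(\{e_{k}\}_{k\geq 1}\) be the eigenfunctions of the Dirichlet Laplacian with eigenvalues \(0<\lambda_{1}\leq\lambda_{2}\leq\cdots\to\infty\) and let \(P_{N}\) be the orthogonal projection onto \(\mathrm{span}\{e_{1},\dots,e_{N}\}\). For \(v\in H^{1}_{0}(\Omega)\) with Fourier coefficients \(v_{k}\),

\[\|v-P_{N}v\|^{2}_{L^{2}}=\sum_{k>N}v_{k}^{2}\leq\lambda_{N+1}^{-1}\sum_{k>N}\lambda_{k}v_{k}^{2}\leq\lambda_{N+1}^{-1}\|\nabla v\|^{2}_{L^{2}},\]

so \(g_{\varepsilon}:=P_{N}g\) takes values in the finite-dimensional space \(H_{\varepsilon}:=\mathrm{span}\{e_{1},\dots,e_{N}\}\) and satisfies \(\|g-g_{\varepsilon}\|_{L^{2}_{b}(\mathbb{R};L^{2})}\leq\lambda_{N+1}^{-1/2}\|g\|_{L^{2}_{b}(\mathbb{R};H^{1}_{0})}\leq\varepsilon\) once \(N\) is large enough; thus such a \(g\) is space regular in the sense of Definition 2.1.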
Now, let us recall some concepts and results related to the uniform attractor theory; see [8, 23, 28]. Let \(\mathcal{E}\) be a Hausdorff topological space. Let \(\{U_{h}(t,\tau),h\in\Sigma\}\) be a family of dynamical processes on \(\mathcal{E}\), that is, for each \(h\in\Sigma\), the two-parameter family of operators \(U_{h}(t,\tau)\) from \(\mathcal{E}\) to \(\mathcal{E}\) satisfies

\[U_{h}(\tau,\tau)=\mathrm{Id},\ \ U_{h}(t,\tau)=U_{h}(t,s)\circ U_{h}(s,\tau),\ \ t\geq s\geq\tau\in\mathbb{R}.\]

Also, let \(\mathbb{B}\) be a family of sets \(B\subset\mathcal{E}\) such that if \(B\in\mathbb{B}\) and \(B_{1}\subset B\), then \(B_{1}\in\mathbb{B}\). The sets \(B\in\mathbb{B}\) are said to be bounded.

**Definition 2.2** ([8]).: A set \(\mathcal{A}_{\Sigma}\subset\mathcal{E}\) is called a uniform attractor for the family of processes \(\{U_{h}(t,\tau),h\in\Sigma\}\) if: 1) \(\mathcal{A}_{\Sigma}\) is compact and bounded in \(\mathcal{E}\); 2) \(\mathcal{A}_{\Sigma}\) is a uniformly attracting set for the processes \(\{U_{h}(t,\tau),h\in\Sigma\}\), that is, for every \(B\in\mathbb{B}\) and every neighborhood \(\mathcal{O}(\mathcal{A}_{\Sigma})\), there exists a time \(T=T(\mathcal{O},B)\) such that

\[\cup_{h\in\Sigma}U_{h}(t,\tau)B\subset\mathcal{O}(\mathcal{A}_{\Sigma}),\ \ t-\tau\geq T,\ \tau\in\mathbb{R};\]

3) \(\mathcal{A}_{\Sigma}\) is a minimal set with the properties 1) and 2).

Below, \(\mathcal{E}\) will usually be a Banach space or even a Hilbert space endowed with either the strong or the weak topology. The associated uniform attractor will be referred to as a strong or a weak uniform attractor, respectively. In both cases \(\mathbb{B}\) consists of all bounded sets in the Banach space under consideration. We recall the following standard existence theorem; see [8] for details.

**Theorem 2.3**.: _Let \(\mathcal{E}\) and \(\Xi\) be Hausdorff topological spaces and let \(\Sigma\) be a compact set in the space \(\Xi\). Let the family of processes \(\{U_{h}(t,\tau),h\in\Sigma\}\) acting on \(\mathcal{E}\) possess a uniformly absorbing set \(\mathcal{B}\in\mathbb{B}\) and be uniformly asymptotically compact on \(\mathcal{B}\). Then this family of processes has a uniform attractor \(\mathcal{A}_{\Sigma}\subset\mathcal{B}\)._

_Assume, in addition, that the map \((\xi,h)\to(U_{h}(t,\tau)\xi,T(t)h)\) is continuous for every fixed \(t\) and \(\tau\). Then \(\mathcal{A}_{\Sigma}\) possesses the following description:_

\[\mathcal{A}_{\Sigma}:=\cup_{h\in\Sigma}\mathcal{K}_{h}, \tag{2.2}\]

_where_

\[\mathcal{K}_{h}:=\{u:\mathbb{R}\to\mathcal{E},\ \ U_{h}(t,\tau)u(\tau)=u(t),\ t\geq\tau\in\mathbb{R}\}\]

_is the set of all complete bounded trajectories of the process \(U_{h}(t,\tau)\) (the so-called kernel of \(U_{h}(t,\tau)\) in the terminology of Vishik and Chepyzhov; see [8])._

We now turn to the damped wave equation (1.1) and recall some known results related to the non-autonomous system (1.1) and the corresponding uniform attractors; see, e.g., [21, 23, 28]. We start with the following linear wave equation:

\[\partial_{t}^{2}v+\gamma\partial_{t}v-\Delta v=G(t),\ \ \xi_{v}\big{|}_{t=\tau}=\xi_{\tau},\ \ v\big{|}_{\partial\Omega}=0. \tag{2.3}\]

**Theorem 2.4** ([23]).: _Let the initial data \(\xi_{\tau}\in\mathcal{E}^{\alpha}\) and \(G\in L^{1}_{loc}(\mathbb{R};H^{\alpha})\) for some \(\alpha\in\mathbb{R}\). Then there exists a unique solution \(\xi_{v}\in C_{loc}(\mathbb{R};\mathcal{E}^{\alpha})\) of problem (2.3). Also, the solution \(v\) belongs to the space \(L^{4}_{loc}(\mathbb{R};H^{\alpha,12}(\Omega))\). Moreover, the following estimate holds:_

\[\|\xi_{v}(t)\|_{\mathcal{E}^{\alpha}}+\left(\int_{\tau}^{t}e^{-4\beta(t-s)}\|v(s)\|^{4}_{H^{\alpha,12}}\,ds\right)^{1/4}\leq C\|\xi_{\tau}\|_{\mathcal{E}^{\alpha}}e^{-\beta(t-\tau)}+C\int_{\tau}^{t}e^{-\beta(t-s)}\|G(s)\|_{H^{\alpha}}\,ds, \tag{2.4}\]

_where the positive constants \(C\) and \(\beta\) are independent of \(t\geq\tau\), \(G\) and \(\xi_{\tau}\)._

The well-posedness and dissipativity of problem (1.1) are verified in [21, 23] in the class of the so-called Shatah-Struwe (SS) solutions; see [2, 4, 24]. We recall that \(u(t)\) is an SS-solution of problem (1.1) if \(\xi_{u}\in C([\tau,T],\mathcal{E})\),

\[u\in L^{4}(\tau,T;L^{12}(\Omega))\ \ \text{for all}\ \ T>\tau,\]

and it satisfies equation (1.1) in the sense of distributions.

**Theorem 2.5** ([21, 23]).: _Under the assumptions (1.2)-(1.5), for any \(\xi_{\tau}\in\mathcal{E}\) there exists a unique Shatah-Struwe solution of problem (1.1) defined for all \(t>\tau\).
Moreover, this solution possesses the following dissipative estimate:_

\[\|\xi_{u}(t)\|_{\mathcal{E}}+\|u\|_{L^{4}(t,t+1;L^{12}(\Omega))}\leq Q(\|\xi_{\tau}\|_{\mathcal{E}})e^{-\beta(t-\tau)}+Q(\|g\|_{L^{2}_{b}(\mathbb{R};L^{2}(\Omega))}), \tag{2.5}\]

_where the positive constant \(\beta\) and the monotone increasing function \(Q\) are independent of \(t,\tau\), \(\xi_{\tau}\) and \(g\)._

According to the general scheme of [8] for constructing the uniform attractor, we consider a family of equations with symbols \(h\in\Sigma=\mathcal{H}(g)\):

\[\begin{cases}\partial_{t}^{2}u+\gamma\partial_{t}u-\Delta u+f(u)=h(t),\\ u\big{|}_{\partial\Omega}=0,\ \ \ \xi_{u}\big{|}_{t=\tau}=\xi_{\tau}.\end{cases} \tag{2.6}\]

Problem (2.6) is well-posed for each \(h\in\Sigma\) and thus generates a family of dynamical processes \(\{U_{h}(t,\tau),h\in\Sigma\}\) in the energy phase space \(\mathcal{E}\) which satisfies the translation identity

\[U_{h}(t+s,\tau+s)=U_{T(s)h}(t,\tau),\ \ t\geq\tau,\ \ \tau\in\mathbb{R},\ \ s\geq 0. \tag{2.7}\]

Moreover, estimate (2.5) implies that the family of processes \(\{U_{h}(t,\tau),h\in\Sigma\}\) corresponding to (2.6) has a bounded uniformly absorbing set \(\mathcal{B}_{0}\) in \(\mathcal{E}\). Since \(\mathcal{E}\) is a reflexive Banach space, the absorbing set \(\mathcal{B}_{0}\) is compact and metrizable in the weak topology of \(\mathcal{E}\). It is straightforward to verify that the maps \((\xi,h)\to U_{h}(t,\tau)\xi\) are weakly continuous from \(\mathcal{E}\times\Sigma\) to \(\mathcal{E}\) for every fixed \(t\) and \(\tau\). Then the family of processes \(\{U_{h}(t,\tau),h\in\Sigma\}\) possesses a weak uniform attractor \(\mathcal{A}^{w}_{\Sigma}\) which satisfies (2.2).

**Theorem 2.6** ([21, 23]).: _Let the assumptions (1.2)-(1.5) hold, and in addition let the external force \(g\) be space or time regular. Then there exists a uniform attractor \(\mathcal{A}_{\Sigma}\) for equation (1.1) in the strong topology of the energy space \(\mathcal{E}\), and it coincides with the weak attractor \(\mathcal{A}^{w}_{\Sigma}\) constructed above._

Let us recall the definition of the Kolmogorov \(\varepsilon\)-entropy of a (pre)compact set \(K\) in a metric space \(X\). For a given \(\varepsilon>0\), let \(N_{\varepsilon}(K;X)=N_{\varepsilon}(K)\) be the minimal number of \(\varepsilon\)-balls in \(X\) which are necessary to cover \(K\). Since \(K\) is (pre)compact, the number \(N_{\varepsilon}(K)<\infty\) for all \(\varepsilon>0\) due to the Hausdorff criterion.

**Definition 2.7** ([8, 18]).: The Kolmogorov \(\varepsilon\)-entropy of the set \(K\) in the space \(X\) is the number

\[\mathbb{H}_{\varepsilon}(K;X)=\mathbb{H}_{\varepsilon}(K)=\log_{2}N_{\varepsilon}(K).\]
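As a standard illustration of this notion (our own addition, in the spirit of the classical examples in [18]): for the unit cube \(K=[0,1]^{d}\) in \(X=(\mathbb{R}^{d},\|\cdot\|_{\infty})\), an \(\varepsilon\)-ball is a cube of side \(2\varepsilon\), so

\[N_{\varepsilon}(K)=\lceil 1/(2\varepsilon)\rceil^{d},\qquad\mathbb{H}_{\varepsilon}(K)=d\log_{2}\frac{1}{2\varepsilon}+O(1)\ \ \text{as}\ \varepsilon\to 0.\]

The coefficient of \(\log_{2}(1/\varepsilon)\) recovers the dimension \(d\), which explains why the \(\varepsilon\)-entropy is a natural substitute for dimension when the attractor is infinite-dimensional.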
## 3. The weak topology entropy \(\mathbb{H}^{w}_{\varepsilon}(A,X;\,B,Y)\)

Let \(X\) and \(Y\) be two Banach spaces such that the embedding of \(X\) into \(Y^{*}\) is compact (w.r.t. the strong topology of \(Y^{*}\)) and \(Y\) is reflexive. In the following, for bounded subsets \(A\subset X\) and \(B\subset Y\), we will define an entropy \(\mathbb{H}^{w}_{\varepsilon}(A,X;\,B,Y)\) with respect to the weak topology of \(Y\) for the subsets-spaces pair \((A,X;\,B,Y)\).

**Definition 3.1**.: Let \(A\subset X\), \(B\subset Y\) and \(\varepsilon>0\) be given. An \((A,X;\,B,Y;\,\varepsilon)\)-weak partition of \(B\) is a partition \(\cup_{i\in\Lambda}V_{i}\) which satisfies \(B\subset\cup_{i\in\Lambda}V_{i}\) and, for each \(V_{i}\) (\(i\in\Lambda\)), the following estimate holds:

\[|\langle f,\,v-\bar{v}\rangle|<\varepsilon\ \ \forall\ f\in A,\ v,\bar{v}\in B\cap V_{i}.\]

For any \(\varepsilon>0\), let \(N^{w}_{\varepsilon}(A,X;\,B,Y)\) be the minimal cardinality of \(\Lambda\) such that \(\cup_{i\in\Lambda}V_{i}\) is an \((A,X;\,B,Y;\,\varepsilon)\)-weak partition of \(B\). If \(A\) is bounded in \(X\) and \(B\) is bounded in \(Y\), then, by the compactness of the embedding \(X\hookrightarrow Y^{*}\) and the reflexivity of \(Y\), the number \(N^{w}_{\varepsilon}(A,X;\,B,Y)<\infty\) for all \(\varepsilon>0\). Indeed, we will construct a special weak partition of \(B\) in which each \(V_{i}\) (\(i\in\Lambda\)) can be presented in the form of a neighborhood in the weak topology. Let us first recall that for any \(v_{0}\in Y\), given \(\eta>0\) and a finite set \(\{f_{1},f_{2},\cdots,f_{k}\}\subset Y^{*}\), the set

\[V_{v_{0}}=V(f_{1},f_{2},\cdots,f_{k};\eta):=\{v\in Y:\,|\langle f_{i},\,v-v_{0}\rangle|<\eta,\ \forall\,i=1,2,\cdots,k\} \tag{3.1}\]

is an open neighborhood of \(v_{0}\) for the weak topology of \(Y\), and we obtain a basis of neighborhoods (w.r.t. the weak topology of \(Y\)) of \(v_{0}\) by varying \(\eta\), \(k\) and the \(f_{i}\)'s in \(Y^{*}\). Here \(\langle\cdot,\,\cdot\rangle\) is the dual product of \(Y^{*}\) and \(Y\). Since \(A\) is precompact in \(Y^{*}\), for any \(\bar{\epsilon}>0\), \(A\) has a finite \(\bar{\epsilon}\)-net \(\{f_{1},\cdots,f_{m}\}\) in \(Y^{*}\); that is, for every \(f\in A\) there exists \(f_{i}\), \(i\in\{1,\cdots,m\}\), such that

\[\|f-f_{i}\|_{Y^{*}}\leqslant\bar{\epsilon}. \tag{3.2}\]

Then, for each \(v_{0}\in B\) and \(\eta>0\), we define

\[\mathcal{O}_{v_{0}}=\mathcal{O}_{v_{0}}(f_{1},\cdots,f_{m};\eta)=\{v\in Y;|\langle f_{i},v-v_{0}\rangle|<\eta,\ i=1,\cdots,m\}. \tag{3.3}\]

Clearly, \(\mathcal{O}_{v_{0}}\) is a neighborhood of \(v_{0}\) for the weak topology of \(Y\). By the reflexivity of \(Y\), \(B\) is weakly precompact; thus, there exist finitely many \(v_{1},\cdots,v_{M}\in B\) such that

\[B\subset\cup_{j=1}^{M}\mathcal{O}_{v_{j}}. \tag{3.4}\]

Note that, for any \(f\in A\) and any \(v,\bar{v}\in\mathcal{O}_{v_{j}}\cap B\), we have that

\[|\langle f,\,v-\bar{v}\rangle|\leqslant|\langle f-f_{i},\,v-\bar{v}\rangle|+|\langle f_{i},\,v-\bar{v}\rangle|\leqslant|\langle f-f_{i},\,v-\bar{v}\rangle|+|\langle f_{i},\,v-v_{j}\rangle|+|\langle f_{i},\,v_{j}-\bar{v}\rangle|\leqslant 2\|B\|_{Y}\bar{\epsilon}+2\eta, \tag{3.5}\]

where \(\|B\|_{Y}\) is the bound of \(B\) in \(Y\). Then, by first taking \(\bar{\epsilon}\) small, e.g., \(\bar{\epsilon}<\varepsilon/4\|B\|_{Y}\), to find \(f_{1},\cdots,f_{m}\), and then taking \(\eta\) small enough, e.g., \(\eta<\varepsilon/4\), we find that the special cover defined in (3.4) is an \((A,X;\,B,Y;\,\varepsilon)\)-weak partition of \(B\) with finitely many neighborhoods.

**Definition 3.2**.: Let \(A\subset X\), \(B\subset Y\) and \(\varepsilon>0\) be given. The \((A,X;\,B,Y;\,\varepsilon)\)-weak topology entropy \(\mathbb{H}_{\varepsilon}^{w}(A,X;\,B,Y)\) of the set \(B\) in \(Y\) corresponding to the set \(A\) in \(X\) is the number

\[\mathbb{H}_{\varepsilon}^{w}(A,X;\,B,Y)=\log_{2}N_{\varepsilon}^{w}(A,X;\,B,Y).\]
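The construction above also yields a crude quantitative bound; this is our own bookkeeping and is not claimed to be sharp. Assume \(\varepsilon\leq\|B\|_{Y}\) and let \(m=N_{\varepsilon/4\|B\|_{Y}}(A;Y^{*})\) be the cardinality of the \(\bar{\epsilon}\)-net above. Splitting the range of each map \(v\mapsto\langle f_{i},v\rangle\), \(v\in B\), into intervals of length \(\varepsilon/4\), any two elements \(v,\bar{v}\) of \(B\) lying in the same cell satisfy

\[|\langle f,\,v-\bar{v}\rangle|\leqslant 2\|B\|_{Y}\cdot\frac{\varepsilon}{4\|B\|_{Y}}+\frac{\varepsilon}{4}<\varepsilon,\qquad\forall\ f\in A,\]

and the number of nonempty cells is at most \(\lceil 8(\|A\|_{Y^{*}}+1)\|B\|_{Y}/\varepsilon\rceil^{m}\), whence

\[\mathbb{H}^{w}_{\varepsilon}(A,X;\,B,Y)\leq N_{\varepsilon/4\|B\|_{Y}}(A;Y^{*})\,\log_{2}\Big\lceil\frac{8(\|A\|_{Y^{*}}+1)\|B\|_{Y}}{\varepsilon}\Big\rceil.\]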
**Remark 3.3**.: The definitions proposed above remain valid if we only know that \(A\subset Y^{*}\) is compact. Here, the additional space \(X\) is involved just for our application in Sections 4 and 5, where we will estimate this bound explicitly and make clear its dependencies for system (1.1). For example, if \(A\) and \(B\) are balls in \(X\) and \(Y\) respectively, e.g., for given \(r,s\geqslant 0\), denote

\[A_{r}:=\{u\in X:\,\|u\|_{X}\leqslant r\}\quad\text{and}\quad B_{s}:=\{v\in Y:\,\|v\|_{Y}\leqslant s\},\]

then the \((A_{r},X;\,B_{s},Y;\,\varepsilon)\)-weak topology entropy \(\mathbb{H}_{\varepsilon}^{w}(A_{r},X;\,B_{s},Y)\) depends only on the spaces \(X\) and \(Y\) and the sizes \(r,s\). In this situation, we will denote \(\mathbb{H}_{\varepsilon}^{w}(A_{r},X;\,B_{s},Y)\) by \(\mathbb{H}_{\varepsilon}^{w}(r,X;\,s,Y)\).

## 4. Smoothness of uniform attractors

**Proposition 4.1**.: _Let the assumptions of Theorem 2.6 hold, and in addition let the external force \(g\) satisfy (1.6) or (1.7). Then, if \(R\) is large enough, the closed ball \(\mathcal{B}_{1}\) of radius \(R\) in the space \(\mathcal{E}^{1}:=[H^{2}(\Omega)\cap H_{0}^{1}(\Omega)]\times H_{0}^{1}(\Omega)\) is a uniformly attracting set for the process \(U_{g}(t,\tau)\) associated with equation (1.1). Namely, there are a positive constant \(\beta\) and a monotone increasing function \(Q\) such that, for any bounded set \(B\subset\mathcal{E}\),_

\[\mathrm{dist}_{\mathcal{E}}(U_{g}(t+\tau,\tau)B,\mathcal{B}_{1})\leq Q(\|B\|_{\mathcal{E}})e^{-\beta t},\ \ t\geq 0\]

_holds uniformly with respect to \(\tau\in\mathbb{R}\)._

Proof.: We split the solution \(u(t)\) as \(u(t)=v(t)+w(t)\), where \(v(t)\) solves the linear problem

\[\partial_{t}^{2}v+\gamma\partial_{t}v-\Delta v=0,\ \ \xi_{v}\big{|}_{t=\tau}=\xi_{\tau}, \tag{4.1}\]

and the remainder \(w\) satisfies

\[\partial_{t}^{2}w+\gamma\partial_{t}w-\Delta w+f(u)=g,\ \ \xi_{w}\big{|}_{t=\tau}=0. \tag{4.2}\]

Multiplying (4.1) by \(\partial_{t}v+\beta v\) with sufficiently small \(\beta>0\), we obtain that the solution \(v(t)\) satisfies the following estimate:

\[\|\xi_{v}(t)\|_{\mathcal{E}}\leq Q(\|\xi_{\tau}\|_{\mathcal{E}})e^{-\beta(t-\tau)},\ \ t\geq\tau, \tag{4.3}\]

where the positive constant \(\beta\) and the monotone increasing function \(Q\) are independent of \(t\), \(\tau\) and \(\xi_{\tau}\). We now turn to equation (4.2). In the case of (1.7), using the standard \(\mathcal{E}^{\alpha}\)-energy estimate (2.4), we have

\[\|\xi_{w}(t)\|_{\mathcal{E}^{\alpha}}\leq C\int_{\tau}^{t}e^{-\beta(t-s)}(\|f(u)\|_{H^{\alpha}}+\|g(s)\|_{H^{\alpha}})ds\leq C(\|f(u)\|_{L^{1}_{b}(\mathbb{R};H^{\alpha})}+\|g\|_{L^{2}_{b}(\mathbb{R};H^{\alpha})}).\]

Using the Hölder inequality, the growth condition on \(f\) and estimate (2.5), we have

\[\|f(u)\|_{L^{1}(t,t+1;W^{1,\kappa})}\leq C\|f^{\prime}(u)\nabla u\|_{L^{1}(t,t+1;L^{\kappa})}\leq C(1+\|u\|_{L^{4}(t,t+1;L^{12})}^{4})\|\nabla u\|_{L^{\infty}(t,t+1;L^{2})}\leq Q(\|\xi_{\tau}\|_{\mathcal{E}})e^{-\beta(t-\tau)}+Q(\|g\|_{L_{b}^{2}(\mathbb{R};L^{2})}),\ \ t\geq\tau, \tag{4.4}\]

where \(\frac{1}{\kappa}=\frac{1}{2}+\frac{p-1}{12}\). Using the embedding

\[W^{1,\kappa}\subset H^{\alpha}\ \ \text{with}\ \ \frac{1}{2}=\frac{1}{\kappa}-\frac{1-\alpha}{3},\]

i.e.
\[\alpha=\frac{5-p}{4}>0,\]

we obtain that

\[\|f(u)\|_{L^{1}(t,t+1;H^{\alpha})}\leq Q(\|\xi_{\tau}\|_{\mathcal{E}})e^{-\beta(t-\tau)}+Q(\|g\|_{L_{b}^{2}(\mathbb{R};L^{2})}).\]

Finally, we arrive at the estimate

\[\|\xi_{w}(t)\|_{\mathcal{E}^{\alpha}}\leq Q(\|\xi_{\tau}\|_{\mathcal{E}})e^{-\beta(t-\tau)}+Q(\|g\|_{L_{b}^{2}(\mathbb{R};H^{\alpha})}), \tag{4.5}\]

where the constant \(\beta\) and the monotone function \(Q\) are independent of \(t\), \(\tau\), \(g\) and \(\xi_{\tau}\).

Now let (1.6) be satisfied. We split the solution \(w(t)\) of equation (4.2) as \(w(t)=y(t)+z(t)\), where \(z(t)\) solves the linear wave equation

\[\partial_{t}^{2}z+\gamma\partial_{t}z-\Delta z=g,\ \ \xi_{z}\big{|}_{t=\tau}=0 \tag{4.6}\]

and the function \(y(t)\) solves

\[\partial_{t}^{2}y+\gamma\partial_{t}y-\Delta y+f(u)=0,\ \ \xi_{y}\big{|}_{t=\tau}=0. \tag{4.7}\]

Similarly to the proof of estimate (4.5), we have

\[\|\xi_{y}(t)\|_{\mathcal{E}^{\alpha}}\leq Q(\|\xi_{\tau}\|_{\mathcal{E}})e^{-\beta(t-\tau)}+Q(\|g\|_{L_{b}^{2}(\mathbb{R};L^{2})}). \tag{4.8}\]

Then, differentiating (4.6) with respect to time and writing \(\bar{z}:=\partial_{t}z\), we get that

\[\partial_{t}^{2}\bar{z}+\gamma\partial_{t}\bar{z}-\Delta\bar{z}=g^{\prime}(t),\ \ \xi_{\bar{z}}\big{|}_{t=\tau}=(0,g(\tau)).\]

Using the standard energy estimate together with the Sobolev embedding \(H_{b}^{1}(\mathbb{R};L^{2}(\Omega))\subset L_{b}^{\infty}(\mathbb{R};L^{2}(\Omega))\), we have

\[\|\xi_{\bar{z}}(t)\|_{\mathcal{E}}\leq C\|g\|_{H_{b}^{1}(\mathbb{R};L^{2})},\ \ t\geq\tau. \tag{4.9}\]

Expressing the term \(\Delta z(t)\) from equation (4.6), taking the \(L^{2}\)-norm of both sides of the resulting equation, and combining with (4.9), we have that, for all \(t\geq\tau\),

\[\|z(t)\|_{H^{2}}\leq C\|g\|_{H_{b}^{1}(\mathbb{R};L^{2}(\Omega))}. \tag{4.10}\]

Combining (4.9) and (4.10), we obtain that

\[\|\xi_{z}(t)\|_{\mathcal{E}^{1}}\leq C\|g\|_{H_{b}^{1}(\mathbb{R};L^{2}(\Omega))}. \tag{4.11}\]

From (4.5), (4.8) and (4.11), we conclude that

\[\|\xi_{w}(t)\|_{\mathcal{E}^{\alpha}}\leq Q(\|\xi_{\tau}\|_{\mathcal{E}})e^{-\beta(t-\tau)}+Q(\|g\|_{W}), \tag{4.12}\]

where the symbol \(W\) means the space \(H^{1}_{b}(\mathbb{R};L^{2}(\Omega))\) if (1.6) is satisfied or \(L^{2}_{b}(\mathbb{R};H^{1}(\Omega))\) if (1.7) is satisfied, and the positive constant \(\beta\) and the monotone increasing function \(Q\) are independent of \(t,\tau\), \(g\) and \(u\).

According to the estimates (4.3) and (4.12), the set

\[\mathcal{B}_{\alpha}:=\{\xi\in\mathcal{E}^{\alpha};\|\xi\|_{\mathcal{E}^{\alpha}}\leq R\}\]

is a compact uniformly (w.r.t. \(\tau\in\mathbb{R}\)) attracting set for the process \(U_{g}(t,\tau)\) in \(\mathcal{E}\) if \(R\) is large enough. In order to get higher regularity, we use standard bootstrap arguments. If we take \(\xi_{\tau}\in\mathcal{B}_{\alpha}\) from the very beginning, we have

\[\|\xi_{v}(t)\|_{\mathcal{E}^{\alpha}}\leq Q(\|\xi_{\tau}\|_{\mathcal{E}^{\alpha}})e^{-\beta(t-\tau)},\ \ t\geq\tau,\]

which together with (4.12) shows that the dynamical process \(U_{g}(t,\tau)\) is well defined and dissipative in the higher energy space \(\mathcal{E}^{\alpha}\) as well.
Since \(H^{\alpha}\subset L^{\frac{6}{3-2\alpha}}\), we can improve estimate (4.4) as follows:

\[\|f(u)\|_{L^{1}(t,t+1;W^{1,\kappa_{1}})}\leq C(1+\|u\|_{L^{4}(t,t+1;L^{12})}^{4})\|\nabla u\|_{L^{\infty}(t,t+1;L^{\frac{6}{3-2\alpha}})}\leq Q(\|\xi_{\tau}\|_{\mathcal{E}^{\alpha}})e^{-\beta(t-\tau)}+Q(\|g\|_{W}),\ \ t\geq\tau,\]

where \(\frac{1}{\kappa_{1}}=\frac{3-2\alpha}{6}+\frac{p-1}{12}\). Using the embedding \(W^{1,\kappa_{1}}\subset H^{\alpha_{1}}\) with \(\alpha_{1}=\frac{5-p}{4}+\alpha\), we have

\[\|f(u)\|_{L^{1}(t,t+1;H^{\alpha_{1}})}\leq Q(\|\xi_{\tau}\|_{\mathcal{E}^{\alpha}})e^{-\beta(t-\tau)}+Q(\|g\|_{W}).\]

Thus, we get

\[\|\xi_{w}(t)\|_{\mathcal{E}^{\alpha_{1}}}\leq Q(\|\xi_{\tau}\|_{\mathcal{E}^{\alpha}})e^{-\beta(t-\tau)}+Q(\|g\|_{W}). \tag{4.13}\]

By the transitivity of exponential attraction established in [13], the dynamical process \(U_{g}(t,\tau)\) on \(\mathcal{B}_{\alpha}\) has an exponentially attracting set \(\mathcal{B}_{\alpha_{1}}\subset\mathcal{E}^{\alpha_{1}}\). Iterating the above procedure finitely many times, we get the exponentially attracting ball \(\mathcal{B}_{1}\) in the space \(\mathcal{E}^{1}\).
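To make the last step explicit, here is a worked instance of the iteration (our own addition, with the exponents capped at \(1\)): each step raises the regularity exponent by the fixed amount \(\frac{5-p}{4}\), i.e.

\[\alpha_{n+1}=\alpha_{n}+\frac{5-p}{4},\qquad\alpha_{0}=\alpha=\frac{5-p}{4},\]

so \(\alpha_{n}=(n+1)\frac{5-p}{4}\) and the ball \(\mathcal{B}_{1}\subset\mathcal{E}^{1}\) is reached after at most \(\lceil\frac{4}{5-p}\rceil\) steps; for instance, for the cubic-type case \(p=3\) two steps suffice, while for \(p=4\) one needs four steps.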
For such \(h\in\Sigma\), there exists \(\{h_{n}\}\subset\Sigma_{0}\) such that \(h_{n}\to h\) in the weak topology of \(L^{2}_{loc}(\mathbb{R};H)\) as \(n\to\infty\). Then the weak continuity of the processes implies that \[U_{h_{n}}(t,0)x\rightharpoonup U_{h}(t,0)x\ \ \text{in}\ \ \mathcal{E}\ \ \text{as}\ \ n\to\infty. \tag{4.17}\] For every \(\eta>0\), we have \(P\subset\cup_{x\in P}B(x,\eta)\). Since \(P\) is a compact set in \(\mathcal{E}\), we have a finite subcovering such that \(P\subset\cup_{i=1}^{N}B(x_{i},\eta)\), \(x_{i}\in P\). Then, \(\mathcal{O}_{\varepsilon/2}(P)\subset\cup_{i=1}^{N}B(x_{i},\eta+\varepsilon/2)\). Since \(\{U_{h_{n}}(t,0)x\}\subset\mathcal{O}_{\varepsilon/2}(P)\) for \(t\geq T_{0}\), there exists a subsequence of \(\{U_{h_{n}}(t,0)x\}\) that belongs to some ball \(B(x_{i},\eta+\varepsilon/2)\). By Mazur's theorem, its weak limit satisfies \[U_{h}(t,0)x=y\in[B(x_{i},\eta+\varepsilon/2)]_{\mathcal{E}}\subset[\mathcal{O }_{\eta+\varepsilon/2}(P)]_{\mathcal{E}}.\] Letting \(\eta\to 0\), we obtain \(y\in[\mathcal{O}_{\varepsilon/2}(P)]_{\mathcal{E}}\subset\mathcal{O}_{ \varepsilon}(P)\). This means that (4.16) holds, thus \(P\) is a compact uniformly (w.r.t. \(h\in\Sigma\)) attracting set. On the other hand, (4.16) always implies (4.15) since \(\Sigma_{0}\subset\Sigma\). Then by the minimality of the uniform attractor among closed uniformly attracting sets we know that \(\mathcal{A}_{\Sigma}\subset\mathcal{B}_{1}\) and is bounded in the space \(\mathcal{E}^{1}\). Moreover, by Corollary 4.2 we have \[\cup_{h\in\Sigma}U_{h}(t,0)\mathcal{A}_{\Sigma}\subset\cup_{h\in\Sigma}U_{h}( t,0)\mathcal{B}_{1}\subset\mathcal{B}_{1},\ \ t\geq T_{0}. \tag{4.18}\] ## 5. Uniform exponential attracting sets In this section, we will construct a family of sets which enjoys a uniform exponential attraction property and has uniformly finite fractal dimension. **Lemma 5.1**.: _Let \(X_{0}\) and \(X_{1}\) be two Banach spaces such that \(X_{1}\) is compactly embedded into \(X_{0}\) and \(\mathbb{B}\) be a bounded subset of \(X_{0}\). For any fixed \(h\in\Sigma\), let \(U_{h}(n):=U_{h}(n+1,n),\ n\in\mathbb{Z}\) be a family of discrete dynamical processes on \(X_{0}\) satisfying the following properties:_ * _for every_ \(h\in\Sigma\)_, the process_ \(U_{h}(n):\mathbb{B}\to\mathbb{B}\)_;_ * _the process_ \(U_{h}(n)\) _admits a decomposition of the form_ \[U_{h}(n)=\mathcal{C}_{h}(n)+\mathcal{L}_{h}(n),\] _where_ \(\mathcal{L}_{h}(n):\mathbb{B}\to X_{0}\) _satisfies_ \[\|\mathcal{L}_{h}(n)\xi_{1}-\mathcal{L}_{h}(n)\xi_{2}\|_{X_{0}}\leq\kappa\|\xi_ {1}-\xi_{2}\|_{X_{0}},\ \ \ \forall\ \xi_{1},\ \xi_{2}\in\mathbb{B}\] (5.1) _for some_ \(\kappa<\frac{1}{2}\) _and_ \(\mathcal{C}_{h}(n):\mathbb{B}\to X_{1}\) _satisfies_ \[\|\mathcal{C}_{h}(n)\xi_{1}-\mathcal{C}_{h}(n)\xi_{2}\|_{X_{1}}\leq L\|\xi_{1}- \xi_{2}\|_{X_{0}},\ \ \ \forall\ \xi_{1},\ \xi_{2}\in\mathbb{B}\] (5.2) _for some_ \(L>0\)_._ _Then, there exists a family of time-dependent sets \(\mathcal{M}_{h}(n)\), \(n\in\mathbb{Z}\), which satisfy_ 1. \(\mathcal{M}_{h}(n)\subset\mathbb{B}\) _for every_ \(n\in\mathbb{Z}\) _and their fractal dimension is uniformly bounded_ \[dim_{F}(\mathcal{M}_{h}(n);X_{0})\leq C_{1},\] (5.3) _where the constant_ \(C_{1}\) _is independent of_ \(n\in\mathbb{Z}\)_;_ 2. 
_the uniform exponential attraction property:_ \[dist_{X_{0}}(U_{h}(n+k,n)\mathbb{B},\mathcal{M}_{h}(n+k))\leq C_{2}e^{-\sigma k},\] (5.4) _where the positive constants_ \(C_{2}\) _and_ \(\sigma\) _are independent of_ \(n\in\mathbb{Z}\) _and_ \(k\in\mathbb{N}\)_._ Proof.: The proof is similar to [10, 12]. For the convenience of the reader, we present the details below. Since \(\mathbb{B}\) is bounded in \(X_{0}\), there exists a ball \(B(x_{0},R;X_{0})\) of radius \(R\) centered at \(x_{0}\in\mathbb{B}\) in \(X_{0}\) such that \(\mathbb{B}\subset B(x_{0},R;X_{0})\). We set \(V_{0}:=\{x_{0}\}\subset\mathbb{B}\). We also fix an arbitrary dynamical process \(U_{h}\) satisfying the above assumptions \((a)\) and \((b)\). Now we construct a family of sets \(V_{k}(n)\subset\mathbb{B}\) by induction with respect to \(k\) such that \(V_{k}(n)\) is an \(R_{k}:=R(\kappa+\frac{1}{2})^{k}\)-net of \(U_{h}(n,n-k)\mathbb{B}\). Assume that the required sets have already been constructed for some \(k=l\); we then construct the next set \(V_{l+1}(n+1)\) preserving these properties. It follows from (5.2) that \[\mathcal{C}_{h}(n)(U_{h}(n,n-l)\mathbb{B})\subset\cup_{x\in V_{l}(n)}B( \mathcal{C}_{h}(n)x,LR_{l};X_{1}).\] Since the embedding \(X_{1}\subset X_{0}\) is compact, we cover each of these balls by a finite number of \(\frac{1-2\kappa}{4}R_{l}\)-balls in \(X_{0}\) with centers \(y_{i}\), and the minimal number of balls in such a covering can be estimated as follows: \[N_{\frac{1-2\kappa}{4}R_{l}}(B(\mathcal{C}_{h}(n)x,LR_{l};X_{1});X_{0})=N_{ \frac{1-2\kappa}{4L}}(B(0,1;X_{1});X_{0}):=N.\] Crucially, the number \(N\) is independent of \(l\), \(R\) and \(n\). It follows from (5.1) that the family of balls with centers \(\mathcal{L}_{h}(n)x\), \(x\in V_{l}(n)\), and with radius \(\kappa R_{l}\) covers \(\mathcal{L}_{h}(n)(U_{h}(n,n-l)\mathbb{B})\). Consequently, we conclude that \[U_{h}(n+1,n-l)\mathbb{B} =U_{h}(n)U_{h}(n,n-l)\mathbb{B}\] \[\subset\cup_{x\in V_{l}(n)}\cup_{i=1}^{N}B(y_{i}+\mathcal{L}_{h}(n)x,\frac{1-2 \kappa}{4}R_{l}+\kappa R_{l};X_{0})\] \[=\cup_{x\in V_{l}(n)}\cup_{i=1}^{N}B(y_{i}+\mathcal{L}_{h}(n)x,\frac{1+2\kappa }{4}R_{l};X_{0}),\] and the number of balls in this system is not greater than \(N\cdot\sharp V_{l}(n)\). Increasing the radius of every ball in this covering by a factor of two, we can assume that the centers of this covering belong to \(U_{h}(n+1,n-l)\mathbb{B}\). We denote by \(V_{l+1}(n+1)\) the new centers of this covering; condition \((a)\) for \(U_{h}(n)\) guarantees that \(V_{l+1}(n+1)\subset\mathbb{B}\), and we note that \(R_{l+1}=\frac{1+2\kappa}{2}R_{l}=R(\kappa+\frac{1}{2})^{l+1}\). Thus, the required sets \(V_{k}(n)\) are constructed for every \(n\in\mathbb{Z}\) and \(k\in\mathbb{N}\). According to the above construction, we have \[\begin{split}&\sharp V_{k}(n)\leq N^{k},\ \ k\in\mathbb{N},\ n\in\mathbb{Z},\\ &\operatorname{dist}_{X_{0}}(U_{h}(n,n-k)\mathbb{B},V_{k}(n))\leq R (\kappa+\frac{1}{2})^{k}.\end{split} \tag{5.5}\] We define the sets \(E_{k}(n)=E_{k}^{h}(n)\) by \[E_{1}(n):=V_{1}(n),\ \ E_{k+1}(n+1):=V_{k+1}(n+1)\cup U_{h}(n)E_{k}(n),\ \ k\in\mathbb{N},\ n\in\mathbb{Z}. \tag{5.6}\] From (5.5) and (5.6), we have \[\begin{split}&\sharp E_{k}(n)\leq kN^{k},\\ &\operatorname{dist}_{X_{0}}(U_{h}(n,n-k)\mathbb{B},E_{k}(n))\leq R (\kappa+\frac{1}{2})^{k}.\end{split} \tag{5.7}\] The required family of sets \(\mathcal{M}_{h}(n)\) can be defined by \[\mathcal{M}_{h}(n):=\cup_{k=1}^{\infty}E_{k}(n),\ \ n\in\mathbb{Z}. 
\tag{5.8}\] Let us verify that \(\mathcal{M}_{h}(n)\) defined by (5.8) satisfies (5.3) and (5.4). Indeed, since \(E_{k}(n)\subset\mathcal{M}_{h}(n)\) and \(\kappa+\frac{1}{2}<1\), the uniform exponential attraction property follows from (5.7): \[\operatorname{dist}_{X_{0}}(U_{h}(n,n-k)\mathbb{B},\mathcal{M}_{h}(n))\leq R( \kappa+\frac{1}{2})^{k}=Re^{-k\ln\frac{1}{\kappa+\frac{1}{2}}}.\] It remains to verify the finite dimensionality of \(\mathcal{M}_{h}(n)\). We fix \(\varepsilon>0\) and choose the smallest integer \(k=k(\varepsilon)\) such that \(R(\kappa+\frac{1}{2})^{k}\leq\varepsilon\). By definition (5.6), \[E_{k}(n)=\cup_{l=0}^{k-1}U_{h}(n,n-l)V_{k-l}(n-l),\] and from the construction of \(V_{k}(n)\) we have \(V_{k-l}(n-l)\subset U_{h}(n-l,n-k)\mathbb{B}\). Then, by (5.5) we obtain \[\begin{split}\cup_{k\geq k(\varepsilon)}E_{k}(n)& \subset\cup_{k\geq k(\varepsilon)}\cup_{l=0}^{k-1}U_{h}(n,n-l)U_{h}(n-l,n-k) \mathbb{B}\\ &=\cup_{k\geq k(\varepsilon)}U_{h}(n,n-k)\mathbb{B}\\ &\subset\cup_{v\in V_{k(\varepsilon)}(n)}B(v,\varepsilon;X_{0}).\end{split}\] Thus, \[\begin{split} N_{\varepsilon}(\mathcal{M}_{h}(n);X_{0})& \leq N_{\varepsilon}(\cup_{k\leq k(\varepsilon)}E_{k}(n))+N_{ \varepsilon}(\cup_{k>k(\varepsilon)}E_{k}(n))\\ &\leq\sum_{k\leq k(\varepsilon)}\sharp E_{k}(n)+\sharp V_{k( \varepsilon)}(n)\\ &\leq(k(\varepsilon)+1)^{2}N^{k(\varepsilon)+1}.\end{split}\] Consequently, \[\dim_{F}(\mathcal{M}_{h}(n);X_{0}):=\limsup_{\varepsilon\to 0^{+}}\frac{ \log_{2}N_{\varepsilon}(\mathcal{M}_{h}(n);X_{0})}{\log_{2}\frac{1}{ \varepsilon}}\leq(\log_{2}\frac{1}{\kappa+\frac{1}{2}})^{-1}\log_{2}N.\] Next, we apply Lemma 5.1 to problem (1.1). **Lemma 5.2**.: _Let assumptions (1.2)-(1.5) hold. Then, there exists some \(T>0\) such that, for any arbitrary fixed \(h\in\Sigma\) and \(\tau\in\mathbb{R}\), the family of discrete dynamical processes \(U_{h}^{\tau}(m,n):=U_{h}(\tau+mT,\tau+nT)\), \(m,n\in\mathbb{Z},m\geq n\) possesses a family of sets \(\mathcal{M}_{h}^{\tau}(n)\), \(n\in\mathbb{Z}\) which satisfies_ \[dist_{\mathcal{E}}(U_{h}^{\tau}(m,n)\mathcal{B}_{1},\mathcal{M}_{h}^{\tau}(m) )\leq\nu e^{-\sigma(m-n)T}, \tag{5.9}\] _where the positive constants \(\nu\) and \(\sigma\) are independent of \(m,n\in\mathbb{Z}\), \(m\geq n\) and \(\tau\in\mathbb{R}\). Moreover, for every \(\delta>0\), there exists \(\varepsilon_{0}>0\) such that_ \[N_{\varepsilon}(\mathcal{M}_{h}^{\tau}(n);\mathcal{E})\leq(\frac{1}{ \varepsilon})^{\mathcal{N}+\delta},\ \ \text{for all }0<\varepsilon\leq \varepsilon_{0}, \tag{5.10}\] _where the positive constant \(\mathcal{N}\) is independent of \(n\in\mathbb{Z}\) and \(\tau\in\mathbb{R}\)._ Proof.: Thanks to the translation identity (2.7), we know that for any fixed \(\tau\in\mathbb{R}\) and \(h\in\Sigma\), we can find \(h^{\prime}\in\Sigma\) such that \[U_{h^{\prime}}(t+\tau,\tau)x=U_{h}(t,0)x,\ \ \forall\ t\geq 0,\ x\in\mathcal{E}.\] Therefore, we set \(\tau=0\) for simplicity. Let \(u_{1}(t)\) and \(u_{2}(t)\) be two solutions of (1.1) with different initial data \(\xi_{0}^{1},\xi_{0}^{2}\in\mathcal{B}_{1}\) starting at \(t=0\) and with the same external force \(h\in\Sigma\). Let \(\theta(t)=u_{1}(t)-u_{2}(t)\), then this function solves \[\partial_{t}^{2}\theta+\gamma\partial_{t}\theta-\Delta_{x}\theta+l(t)\theta=0, \ \xi_{\theta}\big{|}_{t=0}=\xi_{0}^{1}-\xi_{0}^{2}, \tag{5.11}\] where \(l(t):=\int_{0}^{1}f^{\prime}(su_{1}(t)+(1-s)u_{2}(t))\,ds\). 
We split the solution \(\theta(t)=v(t)+w(t)\), where \(v(t)\) satisfies the equation \[\partial_{t}^{2}v+\gamma\partial_{t}v-\Delta v=0,\ \ v\big{|}_{\partial \Omega}=0,\ \ \xi_{v}\big{|}_{t=0}=\xi_{0}^{1}-\xi_{0}^{2} \tag{5.12}\] and the function \(w(t)\) satisfies \[\partial_{t}^{2}w+\gamma\partial_{t}w-\Delta w+l(t)\theta=0,\ \ w\big{|}_{ \partial\Omega}=0,\ \ \xi_{w}\big{|}_{t=0}=0. \tag{5.13}\] For the linear equation (5.12), we get \[\|\xi_{v}(t)\|_{\mathcal{E}}\leq C\|\xi_{0}^{1}-\xi_{0}^{2}\|_{\mathcal{E}}e^{ -\beta t}, \tag{5.14}\] where the positive constants \(C\) and \(\beta\) are independent of \(t\), \(g\) and \(\xi_{0}^{1},\xi_{0}^{2}\). From Corollary 4.2, we know that \[\|\xi_{u_{i}}(t)\|_{\mathcal{E}^{1}}\leq C,\ \ \forall t\geq 0,\ i=1,2,\] where the constant \(C\) is independent of \(t\), \(u_{i}\) and \(h\). Using the embedding \(H^{2}\subset C\), we find that \[\|\xi_{w}(t)\|_{\mathcal{E}^{1}}\leq C\int_{0}^{t}e^{-\beta(t-s)}\|l(s)\theta (s)\|_{H_{0}^{1}}ds\leq C\|\nabla\theta\|_{L^{\infty}(0,t;L^{2})}\leq C\|\xi_{ \theta}\|_{L^{\infty}(0,t;\mathcal{E})}. \tag{5.15}\] Next, we estimate the term \(\|\xi_{\theta}\|_{L^{\infty}(0,t;\mathcal{E})}\). Multiplying (5.11) by \(\partial_{t}\theta\) and integrating over \(\Omega\), we end up with \[\frac{d}{dt}\|\xi_{\theta}(t)\|_{\mathcal{E}}^{2}+2\gamma\|\partial_{t}\theta \|^{2}=2(l(t)\theta,\partial_{t}\theta).\] Using the growth condition (1.2), we get \[2|(l(t)\theta,\partial_{t}\theta)| \leq C((1+|u_{1}|^{p-1}+|u_{2}|^{p-1})|\theta|,|\partial_{t}\theta|)\] \[\leq C\|\partial_{t}\theta\|_{L^{2}}\|\theta\|_{L^{6}}(1+\|u_{1} \|_{L^{12}}^{4}+\|u_{2}\|_{L^{12}}^{4})\] \[\leq C\|\xi_{\theta}\|_{\mathcal{E}}^{2}(1+\|u_{1}\|_{L^{12}}^{4}+\|u_{2}\| _{L^{12}}^{4}).\] By applying the Gronwall inequality, we obtain \[\|\xi_{\theta}(t)\|_{\mathcal{E}}\leq Ce^{Kt}\|\xi_{0}^{1}-\xi_{0}^{2}\|_{ \mathcal{E}}.\] Inserting this into estimate (5.15), we have \[\|\xi_{w}(t)\|_{\mathcal{E}^{1}}\leq Ce^{Kt}\|\xi_{0}^{1}-\xi_{0}^{2}\|_{ \mathcal{E}},\] where the positive constants \(C\) and \(K\) are independent of \(t\), \(h\), \(\xi_{0}^{1}\) and \(\xi_{0}^{2}\). Take \(T_{1}>0\) such that \(T_{1}=\frac{1}{\beta}\ln(4C)\) and fix \[T:=\max\{T_{0},T_{1}\}. \tag{5.16}\] Then the relation \[\cup_{h\in\Sigma}U_{h}(t,0)\mathcal{A}_{\Sigma}\subset\cup_{h\in\Sigma}U_{h}(t,0)\mathcal{B}_{1}\subset\mathcal{B}_{1},\ \ t\geq T, \tag{5.17}\] together with the estimates \[\|\xi_{v}(T)\|_{\mathcal{E}}\leq\frac{1}{4}\|\xi_{0}^{1}-\xi_{0}^{2}\|_{ \mathcal{E}}\] and \[\|\xi_{w}(T)\|_{\mathcal{E}^{1}}\leq Ce^{KT}\|\xi_{0}^{1}-\xi_{0}^{2}\|_{ \mathcal{E}},\] implies that \(U_{h}(T,0)\) satisfies the assumptions of Lemma 5.1 with \(\kappa=\frac{1}{4}\) and \(L=Ce^{KT}\). Thus, we can apply Lemma 5.1 to the family of discrete dynamical processes \(U_{h}^{0}(m,n)\), \(m,n\in\mathbb{Z}\), \(m\geq n\), which implies that these processes possess a family of sets \(\mathcal{M}_{h}^{0}(n)\), \(n\in\mathbb{Z}\) satisfying (5.9) and (5.10) with \(\tau=0\). **Remark 5.3**.: We have the following translation invariance: \[\mathcal{M}_{T(s)h}^{0}(l)=\mathcal{M}_{h}^{s}(l) \tag{5.18}\] for all \(l\in\mathbb{Z}\) and \(s\in\mathbb{R}\). Thus, the estimates (5.9) and (5.10) are independent of the specific choice of \(h\in\Sigma_{0}\). ## 6. Estimates of the \(\varepsilon\)-entropy First, we will obtain a weak partition of \(\Sigma_{[iT,(i+1)T]}\), \(i\in\mathbb{N}^{+}\). 
Let \(u_{1}(t)\) and \(u_{2}(t)\) be two solutions of (2.6) starting from the same initial data \(\xi\in\mathcal{A}_{\Sigma}\) at \(t=0\), but with different external forces \(h,h^{\prime}\in\Sigma\). Then the function \(\theta(t)=u_{1}(t)-u_{2}(t)\) satisfies \[\partial_{t}^{2}\theta+\gamma\partial_{t}\theta-\Delta\theta+l(t)\theta=h-h^{ \prime},\ \ \xi_{\theta}\big{|}_{t=0}=0. \tag{6.1}\] We then have the following result, which follows immediately from (5.17). **Corollary 6.1**.: _Let the assumptions of Proposition 4.1 hold. Then the second component \(\partial_{t}\theta\) of the solution \(\xi_{\theta}\) of equation (6.1) satisfies_ \[\partial_{t}\theta\in L_{b}^{\infty}(T,+\infty;H_{0}^{1})\cap H_{b}^{1}(T,+ \infty;H^{-1})\] _and_ \[\|\partial_{t}\theta\|_{L_{b}^{\infty}(T,+\infty;H_{0}^{1})\cap H_{b}^{1}(T,+ \infty;H^{-1})}\leq r, \tag{6.2}\] _where \(T\) is defined in (5.16) and the positive constant \(r\) only depends on \(\|g\|_{W}\)._ Denote the set \[A=\begin{pmatrix}A_{1}\\ A_{2}\end{pmatrix}=\{\xi_{\theta}|\xi_{\theta}(s),\ s>0\ \text{satisfies}\ (\ref{eq:A1})\ \text{with}\ \xi_{\theta}(0)=0,\ h,h^{\prime}\in\Sigma\}.\] According to Corollary 6.1, we infer that \(A\subset L^{\infty}_{b}(T,+\infty;\mathcal{E}^{1})\). From equation (6.1) we know that \(\{\partial_{t}^{2}\theta|\partial_{t}\theta\in A_{2}\}\subset L^{2}_{b}(T,+ \infty;H^{-1}(\Omega))\). That is, \[A_{2}\subset L^{\infty}_{b}(T,+\infty;H^{1}_{0}(\Omega))\cap H^{1}_{b}(T,+ \infty;H^{-1}(\Omega)).\] Denote by \[X:=L^{\infty}(iT,(i+1)T;H^{1}_{0})\cap H^{1}(iT,(i+1)T;H^{-1})\ \ \text{and}\ \ Y:=L^{2}(iT,(i+1)T;L^{2}). \tag{6.3}\] Then for any \(i\in\mathbb{N}^{+}\), \[A_{2}\big{|}_{[iT,(i+1)T]}\subset B_{r}:=\{u\in X;\|u\|_{X}\leq r\}\ \ \text{and}\ \ \Sigma_{[iT,(i+1)T]}\subset B_{s}:=\{v\in Y;\|v\|_{Y}\leq s\},\] where the positive constant \(r\) is given in (6.2) and \(s=\|g\|_{L^{2}_{b}(\mathbb{R};L^{2}(\Omega))}\). Thus, from Section 3, for any given \(\tilde{\varepsilon}>0\), we obtain a \((r,X;\|g\|_{L^{2}_{b}(\mathbb{R};L^{2}(\Omega))},Y;\tilde{\varepsilon})\)-weak partition of \(\Sigma_{[iT,(i+1)T]}\), \(i\in\mathbb{N}^{+}\). Here we write \(\Sigma_{[iT,(i+1)T]}\subset\cup_{n\in\Lambda}V^{(i)}_{n}\), \(i\in\mathbb{N}^{+}\), and denote by \(N^{w}_{\tilde{\varepsilon}}(r,X;\|g\|_{L^{2}_{b}(\mathbb{R};L^{2}(\Omega))},Y)\) the minimal cardinality of \(\Lambda\). Then, for any given \(l\in\mathbb{N}^{+}\), we will construct a special cover of \(\Sigma_{[T,(l+1)T]}\) in the following way such that \[\begin{split}&\Sigma_{[T,(l+1)T]}\subset\cup_{j=1}^{M}W_{j};\\ &\Sigma_{[iT,(i+1)T]}\cap W_{j}\big{|}_{[iT,(i+1)T]}\subset\cup_{ n\in\Lambda}V^{(i)}_{n},\ \ j=1,\cdots,M,\ i=1,\cdots,l.\end{split} \tag{6.4}\] By combining the partitions on each subinterval \([iT,(i+1)T]\), \(i=1,\cdots,l\), we obtain a cover of the set \(\Sigma_{[T,(l+1)T]}\), and the number of sets in this cover is bounded by \[M\leq(N^{w}_{\tilde{\varepsilon}}(r,X;\,\|g\|_{L^{2}_{b}(\mathbb{R};L^{2}( \Omega))},Y))^{l}. \tag{6.5}\] **Lemma 6.2**.: _For every \(\varepsilon>0\) and any given \(l\in\mathbb{N}^{+}\), for the cover of \(\Sigma_{[T,(l+1)T]}\) constructed in (6.4), we have that if \(\tilde{\varepsilon}\leq\varepsilon^{2}/(32e^{l(K+1)T})\), then for any \(h,h^{\prime}\in W_{j}\), \(j=1,\cdots,M\),_ \[dist_{\mathcal{E}}(\mathcal{K}_{h}((l+1)T),U_{h^{\prime}}((l+1)T,T)\mathcal{K }_{h}(T))\leq\varepsilon/4. 
\tag{6.6}\] Proof.: If \(y\in\mathcal{K}_{h}((l+1)T)\), then we have \(y=\xi((l+1)T)=U_{h}((l+1)T,T)\xi(T)\) for some \(\xi\in\mathcal{K}_{h}\). Consider \[\|U_{h}((l+1)T,T)\xi(T)-U_{h^{\prime}}((l+1)T,T)\xi(T)\|_{\mathcal{E}}.\] Let \(\xi_{u_{1}}(t)=U_{h}(t,T)\xi(T)\) and \(\xi_{u_{2}}(t)=U_{h^{\prime}}(t,T)\xi(T)\) for \(t\geq T\). Then \(\theta=u_{1}-u_{2}\) satisfies equation (6.1). Multiplying (6.1) by \(\partial_{t}\theta\) and using (1.2), we get \[\frac{d}{dt}\|\xi_{\theta}(t)\|_{\mathcal{E}}^{2}+2\gamma\|\partial_{t}\theta( t)\|^{2}\leq C\|\xi_{\theta}\|_{\mathcal{E}}^{2}(1+\|u_{1}\|_{L^{12}}^{4}+\|u_{2}\|_{L^{12} }^{4})+2(h-h^{\prime},\partial_{t}\theta).\] Applying the Gronwall inequality with \(t=(l+1)T\), from (6.4) and estimate (2.5) we obtain \[\|\xi_{\theta}((l+1)T)\|_{\mathcal{E}}^{2} \leq 2\int_{T}^{(l+1)T}e^{C\int_{s}^{(l+1)T}(1+\|u_{1}\|_{L^{12}}^{ 4}+\|u_{2}\|_{L^{12}}^{4})dr}(h-h^{\prime},\partial_{t}\theta)ds\] \[\leq 2e^{C\int_{T}^{(l+1)T}(1+\|u_{1}\|_{L^{12}}^{4}+\|u_{2}\|_{L ^{12}}^{4})dr}\sum_{i=1}^{l}\int_{iT}^{(i+1)T}(h-h^{\prime},\partial_{t}\theta )ds\] \[\leq 2e^{lKT}l\tilde{\varepsilon}\leq 2e^{l(K+1)T}\tilde{ \varepsilon}\leq(\varepsilon/4)^{2}\] provided that \[\tilde{\varepsilon}\leq\frac{\varepsilon^{2}}{32e^{l(K+1)T}},\] where \[K:=C\int_{t}^{t+1}(1+\|u_{1}(s)\|_{L^{12}}^{4}+\|u_{2}(s)\|_{L^{12}}^{4})ds \leq Q(\|g\|_{L^{2}_{b}(\mathbb{R};L^{2}(\Omega))}). \tag{6.7}\] **Remark 6.3**.: Note that we have found an upper bound for the number of sets in a partition covering \(\Sigma_{[T,(l+1)T]}\); this bound may not be sharp, but it depends only on the spaces \(X\), \(Y\) and the sizes \(r\) given in (6.2) and \(s=\|g\|_{L^{2}_{b}(\mathbb{R};L^{2}(\Omega))}\). Finally, we obtain the following theorem, which is the main result of this paper. **Theorem 6.4**.: _Let the assumptions (1.2)-(1.4) be valid and \(g\) satisfy either (1.6) or (1.7). Then the process generated by (1.1) has a compact uniform attractor \(\mathcal{A}_{\Sigma}\), and for any arbitrary and fixed \(\delta>0\), there exist \(T>0\) and \(\varepsilon_{0}>0\) such that_ \[\mathbb{H}_{\varepsilon}(\mathcal{A}_{\Sigma})\leq(\mathcal{N}+\delta)\log_{2 }\frac{2}{\varepsilon}+(\frac{1}{\sigma}\ln\frac{4\nu}{\varepsilon}+T)\mathbb{ H}^{w}_{\frac{\varepsilon^{2+(K+1)/\sigma}}{32e^{T(K+1)+1}(4\nu)^{(K+1)/\sigma}}}(r,X;\, \|g\|_{L^{2}_{b}(\mathbb{R};L^{2}(\Omega))},Y)\] _for all \(\varepsilon\leq\varepsilon_{0}\), where \(\mathcal{N}\), \(\nu\) and \(\sigma>0\) satisfy (5.9) and (5.10), the positive constants \(r\) and \(K\) are given in (6.2) and (6.7) respectively, and all of them depend only on the parameters of equation (1.1) and \(\|g\|_{W}\); the spaces \(X\) and \(Y\) are defined in (6.3)._ Proof.: According to (6.4), we set \[\Sigma_{j}:=\Sigma\cap W_{j},\quad j=1,\cdots,M.\] The representation (2.2) can be written as \[\mathcal{A}_{\Sigma}=\cup_{j=1}^{M}\mathcal{K}_{\Sigma_{j}}(t),\quad t\in \mathbb{R}, \tag{6.8}\] where \(\mathcal{K}_{\Sigma_{j}}:=\cup_{h\in\Sigma_{j}}\mathcal{K}_{h}\). 
Since the estimates (5.9) and (5.10) are uniform with respect to \(h\in\Sigma_{0}\), there exists \(k=k(\varepsilon)\in\mathbb{N}\) with \(k>\frac{1}{\sigma T}\ln\frac{4\nu}{\varepsilon}\), where \(T\) is given in (5.16); we take \[k=[\frac{1}{\sigma T}\ln\frac{4\nu}{\varepsilon}]+1 \tag{6.9}\] such that for any \(h_{j}\in\Sigma_{0}\cap\Sigma_{j}\), \[dist_{\mathcal{E}}(U_{h_{j}}((k+1)T,T)\mathcal{B}_{1},\mathcal{M}^{0}_{h_{j}} (k+1))\leq\nu e^{-\sigma kT}<\varepsilon/4, \tag{6.10}\] and for every \(\delta>0\), there exists \(\varepsilon_{0}>0\) such that \[N_{\varepsilon/2}(\mathcal{M}^{0}_{h_{j}}(k+1);\mathcal{E})\leq(2/\varepsilon) ^{\mathcal{N}+\delta},\ \ \text{for all }0<\varepsilon\leq\varepsilon_{0}. \tag{6.11}\] Combining the estimate (6.6) with \(l=k\) and (6.10), we obtain that \[dist_{\mathcal{E}}(\mathcal{K}_{h}((k+1)T),\mathcal{M}^{0}_{h_{j}}( k+1)) \leq dist_{\mathcal{E}}(\mathcal{K}_{h}((k+1)T),U_{h_{j}}((k+1)T,T) \mathcal{K}_{h}(T))\] \[\quad+dist_{\mathcal{E}}(U_{h_{j}}((k+1)T,T)\mathcal{K}_{h}(T), \mathcal{M}^{0}_{h_{j}}(k+1))\] \[\leq\varepsilon/4+\varepsilon/4=\varepsilon/2,\quad\forall\ h\in \Sigma_{j}.\] That is, \[\mathcal{K}_{h}((k+1)T)\subset\mathbb{O}_{\varepsilon/2}(\mathcal{M}^{0}_{h_{ j}}(k+1);\mathcal{E}),\ \ \forall\ h\in\Sigma_{j}. \tag{6.12}\] Thus from (6.11) and (6.12), for all \(0<\varepsilon\leq\varepsilon_{0}\), we have \[N_{\varepsilon}(\mathcal{K}_{\Sigma_{j}}((k+1)T);\mathcal{E})\leq N_{ \varepsilon}(\mathbb{O}_{\varepsilon/2}(\mathcal{M}^{0}_{h_{j}}(k+1); \mathcal{E});\mathcal{E})=N_{\varepsilon/2}(\mathcal{M}^{0}_{h_{j}}(k+1); \mathcal{E})\leq(2/\varepsilon)^{\mathcal{N}+\delta}.\] Using (6.8) with \(t=(k+1)T\), letting \(\tilde{\varepsilon}=\frac{\varepsilon^{2}}{32e^{kT(K+1)+1}}\), and using (6.5) and (6.9), we find that \[N_{\varepsilon}(\mathcal{A}_{\Sigma};\mathcal{E}) \leq M\cdot\max_{1\leq j\leq M}N_{\varepsilon}(\mathcal{K}_{\Sigma_{j}}((k+1)T );\mathcal{E})\] \[\leq(2/\varepsilon)^{\mathcal{N}+\delta}\cdot(N^{w}_{\tilde{\varepsilon}}(r,X ;\,\|g\|_{L^{2}_{b}(\mathbb{R};L^{2}(\Omega))},Y))^{k},\ \forall\ 0<\varepsilon\leq\varepsilon_{0}.\] Hence the Kolmogorov \(\varepsilon\)-entropy \(\mathbb{H}_{\varepsilon}(\mathcal{A}_{\Sigma})=\log_{2}N_{\varepsilon}( \mathcal{A}_{\Sigma};\mathcal{E})\) of \(\mathcal{A}_{\Sigma}\) satisfies the estimate \[\mathbb{H}_{\varepsilon}(\mathcal{A}_{\Sigma})\leq(\mathcal{N}+\delta)\log_{2 }\frac{2}{\varepsilon}+(\frac{1}{\sigma}\ln\frac{4\nu}{\varepsilon}+T) \mathbb{H}^{w}_{\frac{\varepsilon^{2+(K+1)/\sigma}}{32e^{T(K+1)+1}(4\nu)^{(K+1 )/\sigma}}}(r,X;\,\|g\|_{L^{2}_{b}(\mathbb{R};L^{2}(\Omega))},Y).\]
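For the reader's convenience, the passage from the last display to the stated entropy bound can be unrolled as follows; this is our bookkeeping sketch, assuming without loss of generality that \(T\geq 1\) (which can always be achieved by enlarging \(T\) in (5.16)):

```latex
% Our reconstruction of the final bookkeeping step (assume T >= 1).
\begin{aligned}
k &= \Big[\tfrac{1}{\sigma T}\ln\tfrac{4\nu}{\varepsilon}\Big]+1
  \quad\Longrightarrow\quad
  k \le kT \le \tfrac{1}{\sigma}\ln\tfrac{4\nu}{\varepsilon}+T,\\
e^{kT(K+1)+1} &\le e^{T(K+1)+1}\Big(\tfrac{4\nu}{\varepsilon}\Big)^{(K+1)/\sigma}
  \quad\Longrightarrow\quad
  \tilde{\varepsilon}\ \ge\ \frac{\varepsilon^{2+(K+1)/\sigma}}
  {32\,e^{T(K+1)+1}(4\nu)^{(K+1)/\sigma}}.
\end{aligned}
```

Since \(\delta\mapsto\mathbb{H}^{w}_{\delta}\) is non-increasing in its index, taking \(\log_{2}\) of the covering bound and inserting these two inequalities yields exactly the estimate stated in Theorem 6.4.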
2310.14013
Structural fluctuations in thin cohesive particle layers in powder-based additive manufacturing
Producing dense and homogeneous powder layers with smooth free surface is challenging in additive manufacturing, as interparticle cohesion can strongly affect the powder packing structure and therefore influence the quality of the end product. We use the Discrete Element Method to simulate the spreading process of spherical powders and examine how cohesion influences the characteristics of the packing structure with a focus on the fluctuation of the local morphology. As cohesion increases, the overall packing density decreases, and the free surface roughness increases, which is calculated from digitized surface height distributions. Local structural fluctuations for both quantities are examined through the local packing anisotropy on the particle scale, obtained from Voronoï tessellation. The distributions of these particle-level metrics quantify the increasingly heterogeneous packing structure with clustering and changing surface morphology.
Sudeshna Roy, Hongyi Xiao, Vasileios Angelidakis, Thorsten Pöschel
2023-10-21T13:53:08Z
http://arxiv.org/abs/2310.14013v1
# Structural fluctuations in thin cohesive particle layers in powder-based additive manufacturing ###### Abstract Producing dense and homogeneous powder layers with smooth free surface is challenging in additive manufacturing, as interparticle cohesion can strongly affect the powder packing structure and therefore influence the quality of the end product. We use the Discrete Element Method to simulate the spreading process of spherical powders and examine how cohesion influences the characteristics of the packing structure with a focus on the fluctuation of the local morphology. As cohesion increases, the overall packing density decreases, and the free surface roughness increases, which is calculated from digitized surface height distributions. Local structural fluctuations for both quantities are examined through the local packing anisotropy on the particle scale, obtained from Voronoi tessellation. The distributions of these particle-level metrics quantify the increasingly heterogeneous packing structure with clustering and changing surface morphology. keywords: Discrete Element Method, powder spreading, cohesion, anisotropy, surface roughness ## 1 Introduction Powder-based additive manufacturing techniques, like powder bed fusion, have garnered considerable interest [1; 2; 3] for their ability to facilitate rapid prototyping and the production of highly customizable parts. These methods enable efficient manufacturing by minimizing the need for material removal and extensive support structures, which in turn reduces production time and material waste. However, the quality and efficiency of powder-based techniques are far from ideal. Non-uniform powder packing during spreading is one of the major issues that limit the range of available powder materials and impair printing quality. Various types of structural defects in the deposited powder layer have been observed, which strongly correlate with defects in sintered parts [4; 5; 6]. Since commonly used particle sizes are far below \(100\,\mu\)m, cohesion between particles can impair the spreading and deteriorate the quality of the powder layer through reduced powder flowability and cohesion-induced powder clustering. Understanding the influence of cohesion on spreading requires detailed measurement of the packing structure under various levels of cohesion, which is expensive and difficult to obtain experimentally [7; 8; 9; 10]. Characterizing a thin particle layer is also challenging, as most of the existing metrics are meant for bulk characterization. One prominent method utilized to understand and design the powder spreading process is the Discrete Element Method (DEM), which is a particle-based simulation technique that computes particle trajectories from the interaction forces. Various DEM-based studies have investigated powder spreading with the aim of improving the quality of the powder layer [11, 12, 13, 14, 15, 16, 17, 18]. Simulations can reflect behaviors of real powders [19, 20] during spreading as they can be calibrated against powder flowability experiments [21, 13, 22, 19], which allows detailed studies of the influence of process parameters and powder properties on spreading. For example, Parteli and Pöschel [23] showed in simulations that a fast spreading process increases the surface roughness of a cohesive powder layer for roller spreading. Nasato et al. [24] found that small frequency and amplitude of a vibrating recoater lead to low powder bed porosity. 
Non-spherical powders with realistic particle shapes were also considered when investigating how the recoating velocity influences the bed porosity [25, 12]. Shaheen et al. [13] showed that powder layer defects are more likely to occur with higher particle rolling and sliding friction. In all these studies, a prerequisite for establishing the relation between process and material parameters and the layer quality is a detailed and informative characterization of the packing structure, which can be challenging for cohesive particles due to effects like clustering. While the global packing density is informative and widely used, it does not contain information about how the particles are spatially arranged. Therefore, the spatial fluctuation of the packing structure is also important, especially for highly cohesive powders where the packing tends to be heterogeneous [3]. To this end, the local density is often calculated using binning and coarse-graining, where an averaging length must be chosen [18, 13]. Metrics based on the Voronoi cell volume can also be used, which do not require hand-picking an averaging length scale. For example, Phua et al. used Voronoi tessellation of particles in 3D to calculate the average packing fraction of powder layers [26]. However, examining the global distribution of the Voronoi cell volumes still does not offer the complete picture of how density fluctuates. Here, we adopt a Voronoi-tessellation based method [27] to quantify local structural anisotropy, which is an inherent property of non-crystalline packing of particles and is associated with critical mechanical properties in disordered packings, such as jamming [27], plasticity [28], and shear band formation [29, 30, 31]. This method does not require choosing a density threshold to identify voids, and it yields a meaningful distribution of local anisotropy in a deposited powder layer, based on which the heterogeneity of the packing can be quantified. The surface roughness of the deposited powder also plays a crucial role in determining the functionality and aesthetics of the final product. Achieving the desired surface finish is essential for optimizing performance and ensuring consistent product quality. Surface roughness, similar to density, is also influenced by the interplay between process parameters [25, 16, 32, 33] and material properties such as cohesion, particle size distribution [14] and particle shape [12, 34]. In particular, cohesion strongly influences the surface roughness during spreading. The powder bed surface roughness increases with cohesion due to powder agglomeration and particle removal caused by particle-to-blade cohesion during spreading [18, 20]. In DEM simulations, the surface roughness is typically evaluated by measuring the local surface height determined by the maximum vertical coordinate of the powder bed and monitoring its spatial variation. Using this variation as a metric of uniformity, surface roughness is calculated as the mean deviation of surface height from the powder bed average height [14, 18]. Experimentally, the surface height can be determined using optical 3D digital microscopy [32] or high-speed laser profilometry [33]. Surface roughness can be quantified either using planar profile measurements in two dimensions [12, 25] or areal measurements in three dimensions [18, 35]. While metrics like the standard deviation of the global surface height distribution are informative, they do not offer a complete description of the surface profile. 
In this study, we evaluate the skewness and the kurtosis of the height distribution calculated using an efficient digitization method [35]. These characteristics offer further insight into the presence of local outliers in surface roughness and the extent to which they deviate from the mean surface plane. We also address the problem that, for a given set of surface height values, the distribution cannot fully describe the local fluctuations, because a spatial rearrangement of the height values does not change the distribution. This is similar to the aforementioned problem that the global packing density cannot sufficiently describe the heterogeneity of the packing. To this end, we quantify the spatial fluctuations of the free surface height of the powder layer through a coarse-graining approach to calculate the squared local spatial gradient of the Voronoi cell-averaged height. This quantity again yields a meaningful distribution that can be described by a single parameter, quantifying the height fluctuations. ## 2 Model ### Numerical Setup We employ DEM to obtain particle-scale information on powder layers created by a spreading process, using MercuryDPM [36]. The simulation setup is shown in Figure 1. The powder is spread by a blade tool, moving at constant velocity \(v_{T}\) along the spreading direction, \(x\) [37; 38]. We simulate a small slice of the powder bed of length \(10\) mm in the \(x\)-direction and width \(1\) mm in the lateral \(y\) direction, where periodic boundary conditions are applied. For the subsequent analysis, we consider the range \(0\leq x\leq 7\,\mathrm{mm}\). It is assumed that the substrate is flat and the coefficient of friction between the wall and the particles is equal to that of the particle-particle interaction. A log-normal particle size distribution is considered with mean particle diameter \(D_{50}=37\,\mathrm{\mu m}\), \(D_{10}=24\,\mathrm{\mu m}\), and \(D_{90}=56\,\mathrm{\mu m}\). The particles are initially generated in front of the spreader tool at \((x,y,z)\in[0.5,2.5]\) mm \(\times[0,1]\) mm \(\times[0,h]\) as shown in Figure 1(a), filling a total bulk particle volume of \(0.75\) mm\({}^{3}\), which is sufficient to create a powder layer of \(10\) mm length, \(1\) mm width, and tool gap \(H=100\,\mathrm{\mu m}\), where the tool gap is defined as the gap between the base of the blade and the substrate, as shown in Figure 1(b). The spreading process starts at a constant velocity \(v_{T}=10\) mm/s with all particles initially at rest. The spreading ends when the blade reaches the end of the domain after \(1.2\) s, and the simulation ends at time \(1.5\) s once the system is relaxed again, i.e., when the kinetic energy is sufficiently low. ### Contact models #### 2.2.1 Hertz-Mindlin visco-elastic contact model The visco-elastic Hertz-Mindlin contact model (no-slip solution) [39; 40] is employed to calculate the normal and tangential elastic contact forces between particles. 
The normal force for the Hertz visco-elastic model is given as \[\vec{F_{n}}=\min\left(0,-\rho\xi^{3/2}-\frac{3}{2}A_{n}\rho\sqrt{\xi}\dot{\xi }\right)\vec{e_{n}} \tag{1}\] where \(\xi=R_{i}+R_{j}-|\vec{r}_{i}-\vec{r}_{j}|\) is the compression of two interacting particles \(i\), \(j\) of radii \(R_{i}\) and \(R_{j}\) at positions \(\vec{r}_{i}\) and \(\vec{r}_{j}\) and \(\vec{e}_{n}=(\vec{r}_{i}-\vec{r}_{j})/|\vec{r}_{i}-\vec{r}_{j}|\) is the normal unit vector, \(A_{n}=5\times 10^{-6}\) s is the normal dissipative parameter, calculated as in [41], considering a coefficient of restitution of 0.4 for the characteristic blade velocity 10 mm/s and \[\rho=\frac{4}{3}\,E^{*}\,\sqrt{R^{*}} \tag{2}\] with the effective radius \(R^{*}\). The effective elastic modulus, \[E^{*}=\left(\frac{1-\nu_{i}^{2}}{E_{i}}+\frac{1-\nu_{j}^{2}}{E_{j}}\right)^{-1} \tag{3}\] depends on the elastic moduli and the Poisson ratio of the material of particles \(i\) and \(j\). We model the tangential viscoelastic forces following the no-slip solution of Mindlin [42] for the elastic part and Parteli and Pöschel [23] for the tangential dissipative constant \(A_{t}\approx 2A_{n}E^{*}\), which are capped by the static friction force between two particles. The tangential force is given by \[\vec{F}_{t}=-\min\left[\mu|\vec{F}_{n}|,\int_{path}8G^{*}\sqrt{R^{*}\xi}\,ds+A _{t}\sqrt{R^{*}\xi}v_{t}\right]\vec{e}_{t}\,, \tag{4}\] with the friction coefficient, \(\mu\), the effective shear modulus \[G^{*}=\left(\frac{2-\nu_{i}}{G_{i}}+\frac{2-\nu_{j}}{G_{j}}\right)^{-1} \tag{5}\] which for particles of identical material simplifies to \(G^{*}=\frac{G}{2(2-\nu)}\), and the tangential relative displacement of the particles, \(ds\). Figure 1: Numerical setup for powder spreading on a planar substrate, showing (a) the initial configuration and the spreading setup for (b) cohesionless and (c) cohesive powders. #### 2.2.2 Non-linear cohesive model To simulate particle cohesion, we incorporated adhesive forces described by the Johnson-Kendall-Roberts model [43] (JKR) and attractive forces using a model for non-bonded van der Waals interactions [8]. The JKR adhesive force is computed as \[\vec{F}_{\mathrm{JKR}}=4\sqrt{\pi a^{3}\gamma E^{*}}\;\vec{e_{n}} \tag{6}\] where \(\gamma\) is the surface energy density and \(a\) is the contact radius related to deformation, calculated using \[\xi=\frac{a^{2}}{R^{*}}-\sqrt{\frac{4\pi a\gamma}{E^{*}}} \tag{7}\] The maximum interaction distance at which the contact breaks under tension is given by \[\xi_{t}=\frac{1}{2}\frac{1}{6^{1/3}}\frac{a^{2}}{R^{*}} \tag{8}\] The non-bonded van der Waals attractive force [8; 44; 45] reads \[\vec{F}_{\mathrm{vdW}}=\begin{cases}\frac{A_{H}R^{*}}{6D_{\mathrm{min}}^{2}} \;\vec{e}_{n},&\text{if }\xi>0\\ \frac{A_{H}R^{*}}{6(\xi-D_{\mathrm{min}})^{2}}\;\vec{e}_{n},&\text{if }-D_{ \mathrm{max}}\leq\xi\leq 0\\ 0,&\xi<-D_{\mathrm{max}}\end{cases} \tag{9}\] where \(D_{\mathrm{min}}=1.65\) Å is a parameter introduced to avoid a singularity [8], \(D_{\mathrm{max}}\) is the maximum interaction distance of the van der Waals interaction, which is set as 1 \(\mu\)m [8] and \(A_{H}\) is the Hamaker constant which relates to the surface energy density via \[A_{H}=24\pi D_{\mathrm{min}}^{2}\gamma\,. \tag{10}\] ### Material Parameters The powder spreading process is simulated considering a metallic Ti-6Al-4V powder. The material and simulation parameters can be found in Table 1. 
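To make the normal-force laws above concrete, the following minimal Python sketch (our reconstruction, not the MercuryDPM implementation) evaluates the scalar contributions of Eqs. (1), (6) and (9) for a compressed contact; the parameter values follow Table 1 and Section 2.2, while the use of `scipy`'s `brentq` root-finding for the JKR contact radius of Eq. (7) is our choice, not prescribed by the paper.

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative parameters from Table 1 / Sec. 2.2 (SI units).
E, NU = 2.30e6, 0.40            # elastic modulus [Pa], Poisson ratio
GAMMA = 0.1e-3                  # surface energy density [J/m^2]
D_MIN, D_MAX = 1.65e-10, 1e-6   # vdW cutoff distances [m]
A_N = 5e-6                      # normal dissipative parameter [s]

def effective(Ri, Rj):
    """Effective radius and elastic modulus for a pair of identical materials (Eqs. 2-3)."""
    return Ri * Rj / (Ri + Rj), E / (2.0 * (1.0 - NU**2))

def hertz_normal(xi, xi_dot, R_eff, E_eff):
    """Scalar visco-elastic Hertz force of Eq. (1); capped at 0 by the min()."""
    if xi <= 0.0:
        return 0.0
    rho = (4.0 / 3.0) * E_eff * np.sqrt(R_eff)
    return min(0.0, -rho * xi**1.5 - 1.5 * A_N * rho * np.sqrt(xi) * xi_dot)

def jkr_adhesion(xi, R_eff, E_eff, gamma=GAMMA):
    """JKR force of Eq. (6); only the compressive branch xi > 0 is sketched here."""
    if xi <= 0.0 or gamma == 0.0:
        return 0.0
    g = lambda a: a**2 / R_eff - np.sqrt(4.0 * np.pi * a * gamma / E_eff) - xi
    a_hi = np.sqrt(xi * R_eff)
    while g(a_hi) < 0.0:                 # expand bracket until the root is enclosed
        a_hi *= 2.0
    a = brentq(g, 1e-15, a_hi)           # contact radius from Eq. (7)
    return 4.0 * np.sqrt(np.pi * a**3 * gamma * E_eff)

def vdw_attraction(xi, R_eff, gamma=GAMMA):
    """Non-bonded van der Waals force of Eqs. (9)-(10)."""
    A_H = 24.0 * np.pi * D_MIN**2 * gamma
    if xi > 0.0:
        return A_H * R_eff / (6.0 * D_MIN**2)
    if xi >= -D_MAX:
        return A_H * R_eff / (6.0 * (xi - D_MIN)**2)
    return 0.0

# Example: two 37-um particles compressed by 10 nm, approaching at 1 mm/s.
R_eff, E_eff = effective(18.5e-6, 18.5e-6)
print(hertz_normal(1e-8, 1e-3, R_eff, E_eff),
      jkr_adhesion(1e-8, R_eff, E_eff),
      vdw_attraction(1e-8, R_eff))
```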
Based on the experimentally measured angle of repose of \(41^{\circ}\), matched with the simulation results of Meier et al. [19], the surface energy of Ti-6Al-4V is taken as 0.1 mJ/m\({}^{2}\). To study the effect of particle cohesion on the powder quality, we simulate the powder spreading process for varying surface energy \(\gamma\) from 0 to 0.5 mJ/m\({}^{2}\) in steps of 0.05 mJ/m\({}^{2}\). In general, for cohesive bonds, the surface energy \(\gamma\) quantifies the energy associated with disrupting a bond between cohesive particles to create new surface. Thus, varying \(\gamma\) is a meaningful representation of varying cohesion intensity between neighboring particles. We introduce the Bond number \[Bo=\frac{36\gamma}{\rho D_{50}^{2}g} \tag{11}\] to characterize the ratio between interparticle cohesion and gravity, where \(D_{50}=37\,\mu\)m. Note that \(Bo=54.5\) corresponds to the surface energy of the Ti-6Al-4V powder with \(\gamma=0.1\) mJ/m\({}^{2}\) [19]. For the given particle and material parameters, the values \(\gamma\in\{0,0.05,0.1,0.15,0.2,0.25,0.3,0.35,0.4,0.45,0.5\}\) mJ/m\({}^{2}\) correspond to \(Bo\in\{0,27.2,54.5,81.7,108.9,136.2,163.4,190.6,217.9,245.1,272.3\}\). ## 3 Local density characterization of the powder layer The quality of the produced powder layer is closely related to the packing density of the particles prior to sintering [20]. The density of the layer can be quantified by the ratio of the volume occupied by particles and the total volume, \(\phi=V_{\rm solid}/V_{\rm total}\). For sufficiently small \(V_{\rm total}\), \(\phi\) can be considered as a local variable. Low packing fraction values indicate loose structures that are prone to defects in the final product. In general, a high packing fraction is desirable for high product quality. The packing density is calculated locally for subsections of each layer to provide information about the spatial variability of voids throughout the layer. To this end, the local packing fraction is calculated for horizontal strips across the spreading distance \(x\), of fixed width equal to 1 mm. The strip size is chosen to be sufficiently large so that it contains a representative number of particles and voids for the calculation of the packing fraction. The density calculations are performed using YADE, which provides a dedicated algorithm for density calculations [46]. Alternative, high-resolution techniques have been proposed to calculate the density of granular packings based on the exact partial intersection volume between spheres and mesh elements [47]. Powder spreading leads to unstructured, inhomogeneous layers of material with spatially varying packing characteristics. This is the motivation behind calculating the packing density locally, for subsections of the layer along the spreading direction, aiming to explore the degree of density inhomogeneity within each layer. Figure 2 shows values of the packing fraction as a function of \(Bo\). Evidently, cohesive materials lead to loose packings of the powder material, which is in agreement with previous observations [20]. For increasing cohesion from \(Bo=0\) to \(Bo=272.3\), the mean packing fraction reduces by nearly 60%, from \(\phi\approx 0.60\) to \(\phi\approx 0.25\), while the scattering of the values also increases slightly. 
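As an illustration of this strip-wise density measurement, the following Python sketch (our reconstruction, independent of the dedicated YADE routine [46]) integrates the exact sphere-slab intersection volume analytically; using the layer width and the tool gap \(H\) as the reference volume per strip is an assumption of the sketch, not necessarily the paper's exact convention.

```python
import numpy as np

def sphere_slab_volume(cx, r, x0, x1):
    """Volume of a sphere (center x-coordinate cx, radius r) inside the slab x0 <= x <= x1."""
    a, b = max(x0, cx - r), min(x1, cx + r)
    if b <= a:
        return 0.0
    # Integral of pi * (r^2 - (x - cx)^2) dx from a to b.
    F = lambda x: np.pi * (r**2 * x - (x - cx)**3 / 3.0)
    return F(b) - F(a)

def strip_packing_fractions(centers, radii, strip=1e-3, x_max=7e-3,
                            width=1e-3, height=100e-6):
    """Packing fraction per strip along the spreading direction x.
    Reference volume = strip * layer width * tool gap H (our assumption)."""
    edges = np.arange(0.0, x_max + strip / 2, strip)
    phi = np.zeros(len(edges) - 1)
    for i, (x0, x1) in enumerate(zip(edges[:-1], edges[1:])):
        solid = sum(sphere_slab_volume(c[0], r, x0, x1)
                    for c, r in zip(centers, radii))
        phi[i] = solid / (strip * width * height)
    return phi

# Example with synthetic particle data (positions in metres).
rng = np.random.default_rng(0)
centers = rng.uniform([0, 0, 0], [7e-3, 1e-3, 100e-6], size=(2000, 3))
radii = np.full(2000, 18.5e-6)
print(strip_packing_fractions(centers, radii))
```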
\begin{table} \begin{tabular}{l l l} \hline \hline variable & unit & value \\ \hline particle density (\(\rho\)) & kg/m\({}^{3}\) & 4430 \\ elastic modulus (\(E\)) & MPa & 2.30 \\ Poisson’s ratio (\(\nu\)) & - & 0.40 \\ sliding friction coeff. (\(\mu\)) & - & 0.10 \\ particle diameter (\(d_{p}\)) & \(\mu\)m & \(12-79\) \\ \hline \hline \multicolumn{3}{l}{additional parameters describing cohesion} \\ \hline surface energy (\(\gamma\)) & mJ/m\({}^{2}\) & \(0-0.5\) \\ surface energy of Ti-6Al-4V & mJ/m\({}^{2}\) & 0.1 \\ interaction distance (\(D_{\rm max}\)) & \(\mu\)m & 1 \\ \hline \hline \end{tabular} \end{table} Table 1: DEM simulation parameters The reduction of the average packing fraction is gradual for increasing Bond number (within the studied range of values), i.e., no sudden transitions are observed between layers made of powders with similar Bond numbers. The data points in Figure 2 are colored according to their distance, \(x\), from the starting point of spreading, where a clear trend is not observed, indicating that the degree of scattering does not correlate with the spreading distance, \(x\). Figure 2: Local packing fraction \(\phi\) as a function of Bond number \(Bo\). The horizontal lines note the global packing fraction value for each layer. The sample points are colored according to their distance \(x\) from the starting point of spreading. ## 4 Local structural anisotropy characterization of cohesive particle layers ### Heterogeneous packing of cohesive particles Figure 3(a) shows an example of the deposited layer of highly cohesive particles (\(Bo=190.6\)). We observe a heterogeneous structure comprising regions of dense and loose packing. At the particle scale, the density of neighbors surrounding each particle can be highly anisotropic, which could have important implications for subsequent processes such as heat transfer and phase change. Such spatial fluctuations can be reflected by the bin-averaged density, as done in Figure 2, but the degree of fluctuation of densities at different locations then depends on a manually chosen bin size. To avoid manually specifying sampling length scales, we adapt a method that characterizes the structural heterogeneity using a particle-level measurement based on the packing anisotropy obtained from the Voronoi tessellation [27]. ### Anisotropy Vector and Divergence The measure of the local structural anisotropy was developed by Rieser et al. from the observation that the center of a particle deviates from the centroid of its Voronoi cell in a disordered packing [28]. Any two particles with a shared Voronoi cell face are defined as neighbors; from this, a Delaunay triangulation is generated by connecting groups of three mutual neighbors into triangles. Figure 3(a) includes a schematic illustration of the Voronoi tessellation calculated based on the projections of the particle positions on the xy-plane, which is plotted below the particle packing with the Delaunay triangle \(k\) and the anisotropy vector \(\vec{C}\) pointing from particle centers to corresponding Voronoi cell centroids. For a triangle representing a densely occupied area (overpacked, like the one depicted), all the \(\vec{C}\) vectors point outward. For a triangle representing a void (underpacked), the vectors point inward. We quantify the extent to which the vectors \(\vec{C}\) at the three vertices of a Delaunay triangle point inward or outward by calculating the divergence of the vectors of a triangle \(k\) with area \(A_{k}\). 
This is calculated based on the concept of the constant strain triangle in finite element analysis [48]. The local structural anisotropy, \(Q_{k}\), is calculated from this divergence and defined as \[Q_{k}\equiv\left(\nabla\cdot\vec{C}\right)\,\frac{A_{k}}{\bar{A}}\,, \tag{12}\] where \(\bar{A}\) is the average of all \(A_{k}\) within the packing. By construction, \(Q_{k}\) is dimensionless with a mean near zero. It is sensitive to the local structural anisotropy and has a geometrical significance: positive (negative) values correspond to overpacked (underpacked) regions. Figure 3: Characterizing the structural anisotropy of the deposited layer. (a) Granular packing of powder layer for \(Bo=190.6\). The points represent the projections of particle centers on the \(xy\)-plane. The corresponding 2D Voronoi tessellations are also shown on the plane. Also shown in (a) is a schematic representation of the particle packing with superimposed Voronoi tessellation (blue) and Delaunay triangles (green). The vectors \(C_{p}\) (red) point from particle centers to the centroids of Voronoi cells. The calculated \(Q_{k}\) for (b) \(Bo=0\) and (c) \(Bo=272.3\), where the Delaunay triangles are colored by the corresponding \(Q_{k}\) values. The challenging aspect here is to extend the 2D calculation to a quasi-2D thin free-surface layer with a height of two to three particle diameters, which is typical in powder spreading. Since the deposited layer is thin and many regions only contain a single layer of particles, as shown in Figure 3, we use the 2D projections of the particles on the \(xy\) plane for calculating the anisotropy. This simplification could lead to contributions of highly positive \(Q_{k}\), especially in non-cohesive packing, due to vertically aligned particles in a quasi-2D layer. To see the influence of such scenarios, we scale \(Q_{k}\) with the ratio of the area \(A_{p}\) covered by the (possibly overlapping) particle projections on the \(xy\) plane to the sum of the individual projected particle areas, \(A_{r}=\sum\pi r_{i}^{2}\). The scaled \(Q_{k}^{\prime}\) is given as follows: \[Q_{k}^{\prime}\equiv Q_{k}\,\frac{A_{p}}{A_{r}}=\left(\nabla\cdot\vec{C} \right)\,\frac{A_{k}}{\bar{A}}\,\frac{A_{p}}{A_{r}}\,. \tag{13}\] For highly dense packings of vertically aligned particles, \(A_{p}/A_{r}<1\), and the anisotropy is reduced. For dilute packings, \(A_{p}/A_{r}=1\), and thus \(Q_{k}^{\prime}\) coincides with the original definition without correction. ### Divergence Fields and Distributions Figure 3(b) and (c) show the \(Q_{k}\) map for \(Bo=0\) and \(Bo=272.3\), respectively, where the triangles are colored according to the corresponding values of \(Q_{k}\). For \(Bo=0\), the triangles share similar areas, and the \(Q_{k}\) value fluctuates between positive and negative randomly in space. For \(Bo=272.3\), large triangles corresponding to underpacked regions exist, making the nearby \(Q_{k}\) values highly positive or negative, indicating strong anisotropy. Note that the \(Q_{k}\) value of each individual triangle is determined by its immediate neighborhood, rather than the overall packing density. The dense and homogeneous regions in both \(Bo=0\) and \(Bo=272.3\) have low \(Q_{k}\) values and only anisotropic regions show extreme values, which mostly exist in \(Bo=272.3\). Figure 4: (a) Probability density of normalized divergence of center-to-centroid vectors for the quasi-2D packing of powder deposited for different \(Bo\). 
\(Q_{k}>0\) regions are more densely packed than their surroundings; hence, we call these regions overpacked. \(Q_{k}<0\) regions are more loosely packed than their surroundings and are, therefore, labeled underpacked. The solid curves are Gaussian fits over \(\bar{Q}_{k}-0.5\) to \(\bar{Q}_{k}+0.5\). (b) Standard deviations (red circles) and skewness (blue squares) vs. \(Bo\). The standard deviations and skewness of the distributions of \(Q_{k}\) (solid) and \(Q_{k}^{\prime}\) (hollow) are compared. The distribution of \(Q_{k}\) is a strong structural indicator that is associated with important mechanical properties of a disordered packing, including jamming and shear band formation [27; 29; 30; 31]. In dense and homogeneous regions, \(Q_{k}\) fluctuates randomly, and a peak in the \(Q_{k}\) distribution around \(Q_{k}=0\) is typically observed [27]. In heterogeneous regions with high anisotropy, the highly positive and negative \(Q_{k}\) values show up together on the tails of the distribution, making them deviate from Gaussian. Figure 4(a) shows the distribution of \(Q_{k}\) for different \(Bo\). The majority of the \(Q_{k}\) resides in the region around zero. It can be fitted to a Gaussian distribution using values between \(\bar{Q}_{k}-0.5\) and \(\bar{Q}_{k}+0.5\) for each data set (solid curves), where \(\bar{Q}_{k}\) is the mean of the distribution. For lower \(Bo\), we observe a consistent slope of the distribution throughout the range \(Q_{k}<0\), which is an indication of a homogeneous structure throughout the packing. In contrast, a transition of the slope of the distribution at \(Q_{k}=-1\) is clearly visible for higher \(Bo\), suggesting the coexistence of dense homogeneous regions and dilute heterogeneous regions for highly cohesive materials: for \(Q_{k}<-1\), the distribution deviates from Gaussian and becomes exponential-like for higher \(Bo\). This exponential tail corresponds to the existence of highly underpacked sites distributed sparsely in the packing, as seen in Figure 3(c); for \(-1<Q_{k}<1\), the distribution is narrower with increasing \(Bo\), indicating the existence of locally homogeneous packing. A similar structural variation is evident in the cohesive systems studied experimentally by Xiao et al. [29], but is not observed in the non-cohesive disordered packings of Harrington et al. [31]. To quantify the difference in packing heterogeneity for different \(Bo\), we show the standard deviation and the skewness of the \(Q_{k}\) distribution in Figure 4(b). The standard deviation reflects the portion of highly anisotropic sites (triangles) in a packing. The skewness roughly compares the degree of anisotropy of loosely packed sites to densely packed sites. For higher cohesion, particles can sustain more voids during spreading and encounter higher local anisotropy, leading to a more heterogeneous overall packing structure. As a result, the standard deviation increases with \(Bo\), which reflects the difference seen in Figure 3 in a quantitative way. The skewness decreases with \(Bo\), which reflects the growing tail at the negative end of the \(Q_{k}\) distribution. This corresponds to the fact that the void sites not only grow in number but also in size at higher \(Bo\). We compared the standard deviations and skewness of the distributions of the anisotropy calculated using Equation 12 (\(Q_{k}\)) and Equation 13 (\(Q_{k}^{\prime}\)); a computational sketch of the \(Q_{k}\) evaluation is given below. 
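The following Python sketch (our reconstruction of Eq. (12), not the authors' code) computes \(Q_k\) for a 2D point set with `scipy`; boundary particles with unbounded Voronoi cells are simply skipped, the mean triangle area \(\bar{A}\) is estimated from the kept triangles, and the \(A_p/A_r\) correction of Eq. (13) is omitted for brevity.

```python
import numpy as np
from scipy.spatial import Delaunay, Voronoi

def polygon_centroid(pts):
    """Centroid of a convex polygon given by unordered vertices (shoelace formula)."""
    d = pts - pts.mean(axis=0)
    pts = pts[np.argsort(np.arctan2(d[:, 1], d[:, 0]))]  # sort CCW by angle
    x, y = pts[:, 0], pts[:, 1]
    xn, yn = np.roll(x, -1), np.roll(y, -1)
    cross = x * yn - xn * y
    a = 0.5 * cross.sum()
    return np.array([((x + xn) * cross).sum(), ((y + yn) * cross).sum()]) / (6.0 * a)

def local_anisotropy(points):
    """Q_k of Eq. (12) for each Delaunay triangle of a 2D (projected) packing."""
    vor = Voronoi(points)
    C = np.full_like(points, np.nan)       # center-to-centroid vectors
    for i, ireg in enumerate(vor.point_region):
        region = vor.regions[ireg]
        if region and -1 not in region:    # bounded Voronoi cells only
            C[i] = polygon_centroid(vor.vertices[region]) - points[i]
    div, areas = [], []
    for s in Delaunay(points).simplices:
        if np.isnan(C[s]).any():
            continue
        p = points[s]
        # Constant-strain-triangle divergence of the linear field C on the triangle.
        b = np.array([p[1, 1] - p[2, 1], p[2, 1] - p[0, 1], p[0, 1] - p[1, 1]])
        c = np.array([p[2, 0] - p[1, 0], p[0, 0] - p[2, 0], p[1, 0] - p[0, 0]])
        twoA = b @ p[:, 0]                 # signed 2 * area
        div.append((b @ C[s][:, 0] + c @ C[s][:, 1]) / twoA)
        areas.append(abs(twoA) / 2.0)
    div, areas = np.array(div), np.array(areas)
    return div * areas / areas.mean()      # Q_k = div(C) * A_k / A_bar

rng = np.random.default_rng(1)
Q = local_anisotropy(rng.uniform(size=(400, 2)))
print(Q.mean(), Q.std())
```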
The divergence \(Q_{k}\) from Equation 12 gives slightly higher values for both the standard deviation and the skewness, but qualitatively shows the same behavior as \(Q_{k}^{\prime}\). ## 5 Surface roughness characterization of powder layer ### Digitized free surface height characterization Measuring the surface roughness of the powder layer is of interest, as it is related to the roughness of the final product, while pronounced rough features are prone to become sources of defects. As discussed in the previous sections, packing density and structural packing anisotropy are integral elements in assessing the quality of the finished part. However, they do not provide information on the irregularity of the surface texture of the powder layer. To this end, it is useful to characterize the surface roughness of the produced layers corresponding to different cohesion values. Various aspects of roughness can be characterized via the calculation of independent quantitative indices that provide diverse morphological information on the layer's topography. Previous work focused on the two-dimensional characterization of surface roughness features of powder layers, using indices that correspond to planar rough profiles, usually taken as representative of the real rough profile [23; 24]. Here, the three-dimensional surface profile of each powder layer was reconstructed after spreading was completed. The particles located near the top of the powder layer were identified, and points on their surface were calculated using a regular sampling grid [35], giving a surface height profile, \(z_{\text{s}}\), similar to Meier et al. [18]. Note that if the surface of the substrate is directly exposed at a sampling point, a value of zero is recorded. Figure 5 shows the surface heights of the 11 studied powder layers of different cohesion. Figure 5: Height of powder layer surfaces for various Bond numbers from \(Bo=0\) to \(Bo=272.3\). Interestingly, the surface height reaches a maximum value of up to \(150\,\mu\)m for larger Bond numbers, which is larger than the gap height of \(100\,\mu\)m, shown in Figure 1(b). This typically occurs for fine cohesive powders, due to decreased flowability of the powders when the cohesion effects dominate gravity and inertia, resulting in the formation of agglomerates with internal cavities and irregular surface profiles [17; 49]. The surface height of the cohesionless powder layer (\(Bo=0\)) does not exceed the gap height. Figure 6 shows three example distributions of the measured surface height, which show that the distributions widen as \(Bo\) increases. However, the shapes of the height distributions are rather complicated and require several parameters to describe, as discussed in the following subsection. A spike at \(z_{s}=0\) exists for all three \(Bo\), which corresponds to the exposed substrate surface. ### Roughness characterization using height distributions The current state-of-the-art for characterizing rough surfaces, as outlined in ISO 25178 [50], calculates roughness indices based on the surfaces of real, three-dimensional texture profiles. Using distributions of \(z_{\text{s}}\), the surface roughness is characterized in terms of the arithmetic mean height (\(S_{a}\)), root mean square height (\(S_{q}\)), skewness (\(S_{sk}\)) and kurtosis (\(S_{ku}\)). Heights are measured as deviations from the mean surface height of the entire layer, \(z_{m}\). We show these 
roughness parameters in Figure 7 for increasing Bond number values, where the points are colored according to their distance \(x\) from the start of spreading along the spreading direction. A clear correlation was not found between any of the surface roughness parameters and their distance from the initial spreading position. We next discuss the significance of the roughness parameters individually. The arithmetic mean height is calculated as: \[S_{a}=\frac{1}{A}\iint\limits_{A}\left|\bar{z}\left(x,y\right)\right|\mathrm{d}x \,\mathrm{d}y. \tag{14}\] where \(\bar{z}=z_{\mathrm{s}}-z_{m}\) is the height of a point on the layer surface, measured from the plane of mean surface height, \(z_{\mathrm{s}}\) is the measured free surface height, \(x\) and \(y\) the horizontal coordinates of the point along and transversely the spreading direction, and \(A\) the area occupied by the layer. It becomes evident in Figure 7(a) that for powders of increasing cohesion, the arithmetic mean height increases almost linearly with the Bond number up to values of \(Bo=217.9\). This indicates that cohesive powders lead to higher deviations from the mean height, and to rougher surface texture profiles. Also, the scatter of measurements increases slightly for larger Bond numbers, which points to the conclusion that more cohesive powders feature more heterogeneous profiles, with taller peaks and deeper valleys. To further validate this trend, the root mean square height is calculated as: \[S_{q}=\sqrt{\frac{1}{A}\iint\limits_{A}\bar{z}^{2}\left(x,y\right)\mathrm{d}x \,\mathrm{d}y}. \tag{15}\] It can be seen in Figure 7(b) that the root mean square height presents the same general trend as the arithmetic mean height, where more cohesive powders form layers with more heterogeneous height distributions. This is in agreement with findings from the literature [18; 20]. These two measures of the average height of the powder surface texture are informative regarding the extent of the roughness, but provide no information about its morphology. To this end, the skewness and kurtosis of the surface height profiles are examined. The height skewness is calculated as: \[S_{sk}=\frac{1}{{{S_{q}}^{3}}}\left[\frac{1}{A}\iint\limits_{A}\bar{z}^{3} \left(x,y\right)\mathrm{d}x\,\mathrm{d}y\right]. \tag{16}\] Figure 6: Digitized free surface height distributions for \(Bo=0\), \(Bo=136.2\), and \(Bo=272.3\). Skewness is a measure of the asymmetry of the layer height distribution around the mean plane. Negative skewness values (\(S_{sk}<0\)) indicate that the height distribution is skewed above the mean height plane, with a few deep valleys, zero skewness values (\(S_{sk}=0\)) correspond to a symmetric surface, where peaks and valleys occupy the same amount of surface in average, while positive values (\(S_{sk}>0\)) indicate that the height distribution is skewed below the mean height plane, with a few tall peaks. Figure 7(c) shows a monotonically increasing trend of skewness with increasing Bond number values, where powders with lower cohesion (\(Bo<190.6\)) demonstrate negative skewness, with average cohesion (\(Bo\approx 190.6\)) nearly zero skewness and with higher cohesion (\(Bo>190.6\)) positive skewness values. Figure 7: Surface roughness parameters (a) arithmetic mean height \(S_{a}\), (b) root mean square height \(S_{q}\), (c) skewness \(S_{sk}\) and (d) kurtosis \(S_{ku}\) shown as a function of \(Bo\). 
\[S_{ku}=\frac{1}{S_{q}^{4}}\left[\frac{1}{A}\iint\limits_{A}\bar{z}^{4}\left(x,y\right)\mathrm{d}x\,\mathrm{d}y\right]. \tag{17}\]

Like skewness, kurtosis describes a particular morphological aspect of the surface height distribution. Skewness is a metric of whether most of the rough profile is positioned above or below the mean height plane, whereas kurtosis provides information on the average shape of the surface texture asperities and can be seen as a measure of the sharpness of the height probability density of the rough features. Low kurtosis values (\(S_{ku}<3\)) indicate platykurtic surface texture profiles of well-rounded asperities presenting short tails; kurtosis values near three (\(S_{ku}\approx 3\)) correspond to mesokurtic profiles of Gaussian-like asperities characterized by medium-sized tails; and high kurtosis values (\(S_{ku}>3\)) correspond to leptokurtic surface profiles, with sharp, spike-like asperities presenting long tails. Figure 7(d) shows the kurtosis values for the powder layers of varying cohesion, where the parameter shows a non-monotonic, mostly declining trend with increasing Bond number. It becomes evident that the powder layer corresponding to zero cohesion (\(Bo=0\)) features high kurtosis values (\(S_{ku}>3\)), while layers made of cohesive powders feature lower kurtosis values (\(S_{ku}<3\)). For the higher end of the studied cohesion levels (\(Bo>217.9\)), kurtosis shows a mild increasing trend, which is however characterized by a high degree of scatter, making further interpretation challenging.

Combining the observations of all roughness parameters for the studied powder layers of varying Bond number, it can be inferred that increasing cohesion leads to powder layers characterized by increased roughness, where the layer lies mostly below its average height and presents a few rounded peaks. For less cohesive powders, the corresponding layers are characterized by less pronounced rough features of a sharper nature. These observations can possibly be explained by considering that cohesive particles tend to agglomerate into larger clusters, which appear more rounded at the scale of the full powder layer. Cohesionless particles, in contrast, pack without clustering, so individual particles are more likely to be deposited on the surface of the layer, where they macroscopically resemble sharp peaks.

### Spatial fluctuation of the free surface height

While examining the digitized height distribution is informative, it does not contain information on the spatial arrangement of the height profile. For a given set of digitized height values, a permutation of their spatial arrangement does not change the distribution. This is analogous to the way a global packing density offers no information on the homogeneity of the packing. Therefore, we again use the projection-based Voronoi and Delaunay tessellations as in Section 4 to address this issue.
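As an aside, on a uniform sampling grid the four ISO 25178 parameters of Eqs. (14)-(17) reduce to simple moments of the digitized height map, so they can be evaluated in a few lines. The following is a minimal sketch in Python/NumPy, not part of the original analysis pipeline; the array `z_s` is a hypothetical stand-in for a digitized layer surface.

```python
import numpy as np

def roughness_parameters(z_s):
    """ISO 25178 height parameters from a digitized free-surface height map
    z_s sampled on a uniform grid (area integrals reduce to array means)."""
    z_bar = z_s - z_s.mean()               # deviation from mean surface height z_m
    S_a = np.mean(np.abs(z_bar))           # arithmetic mean height, Eq. (14)
    S_q = np.sqrt(np.mean(z_bar ** 2))     # root mean square height, Eq. (15)
    S_sk = np.mean(z_bar ** 3) / S_q ** 3  # skewness, Eq. (16)
    S_ku = np.mean(z_bar ** 4) / S_q ** 4  # kurtosis, Eq. (17)
    return S_a, S_q, S_sk, S_ku

# hypothetical stand-in for a digitized layer surface (heights in micrometres)
rng = np.random.default_rng(0)
z_s = np.clip(50.0 + 15.0 * rng.standard_normal((200, 200)), 0.0, None)
print(roughness_parameters(z_s))
```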
For a sphere packing, the digitized free surface height values for each sphere are spatially correlated, as they can be fully described by the center coordinates and the radius of the sphere. To reduce this correlation, a spatial coarse graining at the length scale of a particle's diameter is required. We average the free surface height value, \(z_{\mathrm{s}}\), in each Voronoi cell, defining a cell-averaged height, \(z_{v}\), at the center of each corresponding sphere, which is also a vertex in the Delaunay triangulation. The calculated distributions of \(z_{v}\) for different cohesion are shown in Figure 8(a), colored by the corresponding \(Bo\). Results show that the distribution widens with increasing \(Bo\), which agrees with the results in Figure 7. The variance of the \(z_{v}\) distribution can be calculated as \(\sigma_{z_{v}}^{2}\), but as mentioned earlier, it does not contain information on the spatial height fluctuation. To demonstrate the permutation problem, a sketch is shown in Figure 8(b), where the Delaunay triangles are drawn in black, and the height of each vertical stick from a vertex represents \(z_{v}\). For simplicity, we show idealized scenarios with only two height values: scenario A, where the short surfaces (blue) and tall surfaces (red) are spatially segregated, and scenario B, where they are mixed, with both cases having the same height distribution. The degree of "mixing" between taller and shorter surfaces needs to be quantified for a more complete description of the free surface profile. This can be described by how different the values are for the three vertices of a triangle, and this difference is small for most triangles in A and large for most triangles in B. To quantify this, we use the square of the first spatial derivative, \(|\nabla z_{v}|^{2}\), and an integration of this quantity over the entire domain, which is essentially the Dirichlet energy [51; 52; 53]

\[E_{\Lambda}=\frac{1}{2\sigma_{z_{v}}^{2}}\int|\nabla z_{v}|^{2}\,\mathrm{d}A, \tag{18}\]

which quantifies the degree of variation of a function in a given domain, with the function being the height, \(z_{v}\), that varies on the 2D domain \(A\) in the \(xy\) plane. In a lattice triangulation, this quantity can be digitized as

\[E_{\Lambda}=\frac{1}{2}\sum_{k}\Lambda_{k}=\frac{1}{2}\sum_{k(l,m,n)}\frac{1}{2\sigma_{z_{v}}^{2}}\left[\cot\alpha_{lm}(z_{v,l}-z_{v,m})^{2}+\cot\alpha_{ln}(z_{v,l}-z_{v,n})^{2}+\cot\alpha_{mn}(z_{v,m}-z_{v,n})^{2}\right], \tag{19}\]

where \(\Lambda_{k}\) is the normalized Dirichlet energy for a single triangle \(k\), \(l,m,n\) are the vertices of \(k\), and \(\alpha_{lm}\) is the angle facing the edge connecting \(l\) and \(m\).

Figure 8: Quantifying mixing of surface heights. (a) Distributions of Voronoi cell-averaged surface height for different \(Bo\). (b) Illustration of poor mixing (left) and good mixing (right) that generate the same surface height distribution. (c) Distributions of the Dirichlet energy of individual triangles for different \(Bo\). (d) The total Dirichlet energy for each \(Bo\). The inset shows the fitted exponential distribution constant for each \(Bo\).

The distribution of \(\Lambda_{k}\) of all analyzed triangles for each \(Bo\) is shown in Figure 8(c). For each \(Bo\), the distribution is a straight line on a log-lin scale, suggesting an exponential distribution, \(P(\Lambda_{k})=\lambda e^{-\lambda\Lambda_{k}}\).
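For concreteness, the per-triangle cotangent formula of Eq. (19) can be evaluated directly on a Delaunay triangulation of the sphere-centre projections. The following is a minimal sketch (Python with SciPy; the point set and height values at the end are hypothetical placeholders, not simulation data), in which the angle at each corner of a triangle faces the opposite edge.

```python
import numpy as np
from scipy.spatial import Delaunay

def triangle_dirichlet_energies(points, z_v):
    """Normalized per-triangle Dirichlet energies Lambda_k of Eq. (19).
    points: (N, 2) sphere-centre projections; z_v: (N,) cell-averaged heights."""
    tri = Delaunay(points)
    var = np.var(z_v)
    energies = np.empty(len(tri.simplices))
    for k, (l, m, n) in enumerate(tri.simplices):
        e = 0.0
        # the angle at vertex a faces the edge connecting vertices b and c
        for a, b, c in ((l, m, n), (m, n, l), (n, l, m)):
            u, v = points[b] - points[a], points[c] - points[a]
            cot = np.dot(u, v) / abs(u[0] * v[1] - u[1] * v[0])  # cos/sin at a
            e += cot * (z_v[b] - z_v[c]) ** 2
        energies[k] = e / (2.0 * var)
    return energies  # total energy of Eq. (19) is 0.5 * energies.sum()

# hypothetical point set and heights
rng = np.random.default_rng(1)
pts = rng.random((500, 2))
lam_k = triangle_dirichlet_energies(pts, rng.random(500))
print(0.5 * lam_k.sum(), 1.0 / lam_k.mean())  # E_Lambda and fitted lambda
```

The exponential rate \(\lambda\) can be fitted, under the exponential assumption, as the reciprocal of the mean of the per-triangle energies, as in the last line above.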
Unlike the distribution of the digitized surface height, with its complicated shape and spike at \(z_{\rm s}=0\) (Figure 6), the exponential distribution can be conveniently described by a single parameter, \(\lambda\), which sets the rate of decay of \(P(\Lambda_{k})\). It can be seen from Figure 8(c) that the more cohesive cases have faster decays, with higher values near zero. To quantify the variation and the decay, we plot the total Dirichlet energy for each \(Bo\) in Figure 8(d) and the fitted distribution parameter \(\lambda\) as an inset, both decreasing with \(Bo\). Note that with the normalization by \(\sigma_{z_{v}}^{2}\), these two quantities truly reflect the blending of taller and shorter surfaces, not the spread of surface heights. These results quantitatively show that at the length scale set by the particle size, higher cohesion results in less local height fluctuation, despite a higher spread in height values. This is because low-cohesion particles pack densely and homogeneously, and the surface height fluctuates at the particle scale, which is similar to scenario B in Figure 8(b). On the other hand, high-cohesion particles form dilute and heterogeneous packings with clustering, which is more similar to scenario A. In this sense, the local surface height fluctuations and the local packing anisotropy in a layer should be closely related, which is subject to future studies.

## 6 Conclusions

This work quantifies the structural features, with a focus on density and surface roughness, of powder layers in DEM simulations using realistic cohesive interaction forces. We first used a more traditional approach of calculating global values to show the general trend of decreasing density and increasing surface roughness as cohesion increases. The global structural features were calculated by digitizing the simulated spheres at a very fine scale and then sampling globally, by binning for the density and by examining the height distribution for the surface profile. The increase in surface roughness was then further interpreted by examining higher moments of the height distribution, including the skewness and the kurtosis, both of which show a gradual evolution between layers with neighboring Bond numbers. In particular, for \(Bo=0\) the skewness \(S_{sk}<0\) and kurtosis \(S_{ku}>3\), indicating that most rough features appear above the mean height plane and have sharp peaks, while for \(Bo=272.3\) we observe the inverse trend, i.e., the skewness \(S_{sk}>0\) and kurtosis \(S_{ku}<3\), indicating that most rough features appear below the mean height plane and have more rounded peaks.

To highlight the increasing heterogeneity of the density and surface profile, we also developed Voronoi-based metrics that quantify the spatial fluctuations of these quantities of interest. For the density fluctuation, the divergence of the Voronoi anisotropy vector, \(Q_{k}\), was adopted for the thin deposited particle layers as a geometrical measure of their structural heterogeneity. The transition in the slope of the \(Q_{k}\) distribution at \(Q_{k}=-1\) is a signature of the coexistence of dense regions with homogeneous structures and dilute regions with highly anisotropic structures, which is typical for cohesive materials. With increasing cohesion, both the standard deviation and the skewness of the \(Q_{k}\) distributions exhibit a consistent, monotonic change, indicating increasing structural heterogeneity of the deposited layer.
We quantified the fluctuation of the free surface height using the Voronoi cell-averaged height. Instead of focusing on the global distribution of this height, which contains no information on the spatial arrangement of the height values, we calculated the local squared spatial gradient as a measure of how well the taller and shorter surfaces are mixed. The distribution of the squared gradient is exponential and can therefore be quantified by a single parameter. When normalized by the variance of the surface height, both the fitted distribution parameter and the total sum of the squared gradient show a decrease with increasing \(Bo\). This quantitatively demonstrates that higher cohesion leads to reduced local height fluctuation despite the height values having a wider spread, possibly because the packing density heterogeneity results in significant fluctuations of the free surface at a larger length scale. In contrast, at lower cohesion levels, particles pack densely and homogeneously, resulting in more surface height fluctuation at the particle scale. The additional sets of metrics for the spatial fluctuation of density and surface height, combined with the global metrics, offer a more complete description of the packing structure than the traditionally used bulk-averaged values. This set of parameters can not only serve as a quantification of the quality of spreading but can also be used as a structural basis for modeling and analysis of subsequent processes, such as heat transfer and binder infiltration, as the heterogeneity of the packing structure on the particle level is important in these processes. For thicker layers and non-spherical particles, the 2D projection-based Voronoi calculation could lose its validity, but the same concept can be extended using real 3D set Voronoi analysis [54; 26].

## Acknowledgement

We gratefully acknowledge the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) for funding the Collaborative Research Center 814 (CRC 814), Project Number 61375930-SFB 814 'Additive Manufacturing', sub-project B1. We also thank the Humboldt Research Foundation for granting the 'Humboldt Research Fellowship'. The work was supported by the Interdisciplinary Center for Nanostructured Films (IZNF), the Competence Unit for Scientific Computing (CSC), and the Interdisciplinary Center for Functional Particle Systems (FPS) at Friedrich-Alexander-Universität Erlangen-Nürnberg.
2304.07696
Learning-Based One-Bit Maximum Likelihood Detection for Massive MIMO Systems: Dithering-Aided Adaptive Approach
In this paper, we propose a learning-based detection framework for uplink massive multiple-input and multiple-output (MIMO) systems with one-bit analog-to-digital converters. The learning-based detection only requires counting the occurrences of the quantized outputs of -1 and +1 for estimating a likelihood probability at each antenna. Accordingly, the key advantage of this approach is to perform maximum likelihood detection without explicit channel estimation which has been one of the primary challenges of one-bit quantized systems. However, due to the quasi-deterministic reception in the high signal-to-noise ratio (SNR) regime, one-bit observations in the high SNR regime are biased to either +1 or -1, and thus, the learning requires excessive training to estimate the small likelihood probabilities. To address this drawback, we propose a dither-and-learning technique to estimate likelihood functions from dithered signals. First, we add a dithering signal to artificially decrease the SNR and then infer the likelihood function from the quantized dithered signals by using an SNR estimate derived from a deep neural network-based estimator which is trained offline. We extend our technique by developing an adaptive dither-and-learning method that updates the dithering power according to the patterns observed in the quantized dithered signals. The proposed framework is also applied to channel-coded MIMO systems by computing a bit-wise and user-wise log-likelihood ratio from the refined likelihood probabilities. Simulation results validate the performance of the proposed methods in both uncoded and coded systems.
Yunseong Cho, Jinseok Choi, Brian L. Evans
2023-04-16T04:59:04Z
http://arxiv.org/abs/2304.07696v2
Adaptive Learning-Based Maximum Likelihood and Channel-Coded Detection for Massive MIMO Systems with One-Bit ADCs

###### Abstract

In this paper, we propose a learning-based detection framework for uplink massive multiple-input and multiple-output (MIMO) systems with one-bit analog-to-digital converters. The learning-based detection only requires counting the occurrences of the quantized outputs of -1 and +1 for estimating a likelihood probability at each antenna. Accordingly, the key advantage of this approach is to perform maximum likelihood detection without explicit channel estimation which has been one of the primary challenges of one-bit quantized systems. The learning in the high signal-to-noise ratio (SNR) regime, however, needs excessive training to estimate the extremely small likelihood probabilities. To address this drawback, we propose a dither-and-learning technique to estimate likelihood functions from dithered signals. First, we add a dithering signal to artificially decrease the SNR and then infer the likelihood function from the quantized dithered signals by using an SNR estimate derived from a deep neural network-based offline estimator. We extend our technique by developing an adaptive dither-and-learning method that updates the dithering power according to the patterns observed in the quantized dithered signals. The proposed framework is also applied to state-of-the-art channel-coded MIMO systems by computing a bit-wise and user-wise log-likelihood ratio from the refined likelihood probabilities. Simulation results validate the detection performance of the proposed methods in both uncoded and coded systems.

Massive MIMO, one-bit ADC, dithering, maximum likelihood detection, deep neural network.

## I Introduction

Massive MIMO systems for sub-6 GHz wireless communications [2, 3] and millimeter wave (mmWave) communications [4, 5, 6] have been considered as one of the emerging technologies for future communications because of the outstanding gain in spectral efficiency and capacity [7]. As wireless communication systems continue to grow in popularity and importance, there is a need to investigate communication systems that are not only reliable and high-performing, but also energy-efficient for various future wireless applications such as internet-of-things (IoT), extended reality, and smart grid [8, 9]. Because of the small wavelength of mmWave signals and small antenna spacing, a mmWave system allows the installation of more antennas per unit area, each of which is connected to a radio frequency (RF) chain with a pair of high-precision data converters. However, the use of a large number of high-resolution analog-to-digital converters (ADCs) at receivers results in prohibitively large power consumption, which becomes the main bottleneck in practical deployment because a high-resolution ADC is particularly power-hungry: the power consumption of an ADC tends to scale up exponentially with the number of quantization bits. To overcome the circuit power issue, deploying low-precision ADCs has been considered as a low-power solution over the past years [10, 11, 12, 13, 14]. As an extreme case of the low-resolution data converters, the use of one-bit data converters has emerged and become particularly attractive due to the ability to enhance power efficiency, lower hardware cost, and simplify analog processing of receivers [15, 16, 17, 18, 19, 20, 21, 22].
Because of the strong nonlinearity, data detection and channel estimation with one-bit data converters become more challenging; however, the use of massive antenna arrays can alleviate the performance loss [23, 24]. Nevertheless, when conventional signal processing algorithms are applied directly to low-resolution systems, significant performance losses can be experienced due to the severe nonlinear distortions that low-resolution ADCs cause. State-of-the-art one-bit detection, beamforming, and channel estimation techniques have been developed in the recent decades [17, 18, 19, 20, 21, 25, 26]. Low-complexity symbol-level beamforming methods for one-bit quantized systems were developed for quadrature-amplitude-modulation (QAM) constellations [17]. Taking into account the heavily quantized signals and antenna correlations, an iterative multiuser detection using a message-passing de-quantization algorithm was devised in [18]. In [19], a high-complexity one-bit maximum-likelihood (ML) detection method and low-complexity zero-forcing (ZF)-type detection methods were developed. In terms of MIMO detectors, the optimal one-bit ML detector was further studied in [20], where a near-ML detector was also proposed by transforming the ML detection problem into a tractable convex optimization problem. Successive-interference-cancellation one-bit receivers that can be applied to modern channel coding techniques were presented in [21]. Machine learning techniques were also employed for one-bit detection [27, 28, 29]. It was shown in [27] that support vector machines can be used for efficient channel estimation and data detection with one-bit quantized observations. In [28], the conventional orthogonal frequency division multiplexing precoder and decoder are replaced with artificial neural networks to enable unsupervised autoencoder-based detection. [29] combined a linear estimator based on the Bussgang decomposition and a model-based deep neural network approach to make data detection with one-bit ADCs adaptive to the current channel.

Although the state-of-the-art one-bit detectors introduced above provide high detection performance, these detection methods require the estimation of channel state information (CSI), which is one of the key challenges in one-bit quantized systems. Accordingly, various channel estimation methods have been developed, such as least-squares (LS), ML, ZF, and Bussgang decomposition-based methods [30, 20]. Combined with antenna-wise non-zero thresholding for one-bit quantizers, the majorization-minimization-based ML channel estimator was proposed in [25]. In [26], it was shown that a Bussgang decomposition-based channel estimator with linear equalizers can provide reliable performance for high-order constellations in one-bit ADC systems. Supervised deep learning was utilized in [31] to learn a mapping from the one-bit quantized measurements to the channels. Such channel estimation schemes with one-bit quantized signals, however, still suffer degradation in estimation accuracy compared with high-precision ADC systems. Learning-based data detection techniques have recently been investigated to remove or minimize the requirement for explicit channel estimation in one-bit ADC systems [32, 33, 34, 35]. The authors in [32] applied sphere decoding to the one-bit quantized system and showed that the detection complexity is reduced while achieving near-optimal performance.
Viewing the one-bit ADC systems as a classification problem, various supervised-learning-based data detection techniques were provided by estimating effective channels and learning the non-linear system response [33]. In [34], however, channel estimation was performed to initialize the likelihood functions for ML detection, and a learning-based likelihood function was used to update the likelihood functions afterwards. In contrast, the authors in [35] used an estimated channel to generate noisy training pilots and developed an expectation-maximization algorithm that facilitates the likelihood probability learning process. Unlike previous learning-based approaches that focused on developing detection mechanisms based on estimated channels, we focus on applying one-bit ML detection and learning likelihood functions without channel estimation.

### _Contributions_

In this work, we explore a learning-based ML detection approach that replaces a one-bit channel estimation stage with a counting-based learning process for uplink multiuser MIMO systems with one-bit ADCs. The contributions of this work are summarized as follows:

* We propose a dither-and-learning technique to infer the likelihood functions from dithered signals. Such an approach significantly reduces the number of zero-valued likelihood functions experienced by naive learning-based one-bit detection. After the dithering process, we obtain a preferable statistical pattern in the one-bit quantized output sequences with moderate sign changes thanks to the reduced SNR. A denoising phase then retrieves the actual likelihood functions without the impact of the dithering noise. The proposed method allows estimating the likelihood functions with a reasonable training length by drawing meaningful sign patterns from the quantized output sequence.
* To further improve learning accuracy, we develop an adaptive dither-and-learning technique for adjusting each antenna element's dithering power according to the patterns observed in the quantized dithered signals. Since the performance of the proposed dithering-based learning algorithm is affected by the dithering power, the proposed feedback-based adaptive algorithm effectively adjusts the dithering noise power depending on the pattern of the one-bit quantized outputs. A deep neural network-based offline SNR estimation method is also developed to enable the denoising phase of the dithering-based learning in practical systems.
* In order to further apply the learning-based scheme to modern communication frameworks rather than being limited to hard-output detection, we compute the log-likelihood ratio (LLR), i.e., soft output, which is fed into a channel decoder. Noting that the LLR needs to be defined for an individual binary bit of each user, we separate the index set of all possible symbol vectors into two disjoint subgroups and compare the sum of the likelihood probabilities over the two subgroups.
* Simulation results validate that, in contrast to the conventional learning-based one-bit ML detectors and other channel estimation-based one-bit detectors, the proposed learning-based one-bit detector can achieve comparable performance to the optimal one-bit ML detection that requires perfect CSI and exhibits more reliable detection performance in both uncoded and coded simulations.

_Notation_: \(\mathbf{A}\) is a matrix and \(\mathbf{a}\) is a column vector. \(\mathbf{A}^{T}\) and \(\mathbf{a}^{T}\) denote the transpose operation of a matrix and a column vector, respectively.
We denote \(a_{i}\) as the \(i\)th element of \(\mathbf{a}\). With mean \(\mu\) and variance \(\sigma^{2}\), \(\mathcal{N}(\mu,\sigma^{2})\) and \(\mathcal{CN}(\mu,\sigma^{2})\) denote a real Gaussian distribution and a complex Gaussian distribution, respectively. \(\mathrm{diag}(\mathbf{a})\) creates a diagonal matrix that has \(a_{i}\)'s as its diagonal entries. \(\mathbf{1}_{N}\) and \(\mathbf{0}_{N}\) are an \(N\times 1\) one vector and zero vector, respectively. \(\mathbf{I}_{N}\) denotes the \(N\times N\) identity matrix. \(\mathrm{Re}\{\mathbf{A}\}\) and \(\mathrm{Im}\{\mathbf{A}\}\) take the real and imaginary parts of \(\mathbf{A}\), respectively. \(\mathbb{1}\left\{A\right\}\) denotes the indicator function which outputs 1 if \(A\) is true, and 0 otherwise. \(\mathbb{P}[\cdot]\) and \(\mathbb{E}[\cdot]\) are the probability and expectation operators, respectively.

## II System Model

### _Signal Model_

We consider uplink multiuser MIMO communication systems where the AP, equipped with \(N_{r}\) receive antennas, concurrently communicates with \(N_{u}\) single-antenna users. We suppose \(N_{r}\gg N_{u}\) in the context of massive MIMO systems. Each antenna element has its own dedicated RF chain as well as individual in-phase and quadrature one-bit ADCs. We assume a block fading channel model whose channel matrix is invariant for \(N_{c}\) coherent time slots. We then split the uplink transmission into a training phase with \(N_{t}\) time slots and a data transmission phase with \(N_{d}\) slots, i.e., \(N_{c}=N_{t}+N_{d}\). During the training phase, each user transmits up to \(N_{t}\) pilot symbols. We use \(K\) to denote the number of possible pilot symbol combinations of \(N_{u}\) users, e.g., \(K=2^{N_{u}}\) for binary phase shift keying at all \(N_{u}\) users. We also use \(N_{\textbf{tr}}\) to represent the number of transmissions of each combination. This implies \(N_{t}\geq KN_{\textbf{tr}}\) to learn the characteristics of all possible combinations.

Let \(\mathbb{Q}_{M}\) denote the set of constellation points of the \(M\)-ary QAM scheme from which \(\bar{x}_{u}[t]\) is generated, where \(\bar{x}_{u}[t]\) is the complex-valued data symbol of user \(u\) at time \(t\). We assume \(\bar{x}_{u}[t]\in\mathbb{Q}_{M}\) to have zero mean and unit variance, i.e., \(\mathbb{E}[\bar{x}_{u}[t]]=0\) and \(\mathbb{E}[|\bar{x}_{u}[t]|^{2}]=1\). A symbol vector \(\bar{\textbf{x}}[t]=[\bar{x}_{1}[t],\ldots,\bar{x}_{N_{u}}[t]]^{T}\in\mathbb{Q}_{M}^{N_{u}}\), \(t\in\{1,\ldots,N_{c}\}\) denotes the collection of the transmitted signals from \(N_{u}\) users at time \(t\). We consider each user to adopt the \(M\)-ary QAM constellation and thus, the number of possible symbol vectors \(\bar{\textbf{x}}[t]\) becomes \(K=M^{N_{u}}\). Assuming that the symbols from users are concurrently received and jointly processed at the AP, the received analog complex baseband signal vector at time \(t\) is represented as

\[\bar{\textbf{r}}[t]=\sqrt{\rho}\bar{\textbf{H}}^{T}\bar{\textbf{x}}[t]+\bar{\textbf{z}}[t], \tag{1}\]

where \(\bar{\textbf{H}}\in\mathbb{C}^{N_{u}\times N_{r}}\) is the complex-valued channel matrix between the AP and the \(N_{u}\) users, whose \(i\)th column vector, i.e., \(\bar{\textbf{h}}_{i}\), indicates the channel vector defined between all users and the \(i\)th antenna element of the AP.
The transmit power is denoted as \(\rho\), and the additive white complex Gaussian noise vector \(\bar{\textbf{z}}[t]\) follows \(\bar{\textbf{z}}[t]\sim\mathcal{CN}(\textbf{0}_{N_{r}},N_{0}\textbf{I}_{N_{r}})\). Here, we define the SNR as

\[\gamma=\rho/N_{0}. \tag{2}\]

Then, each real and imaginary component of the received signals in (1) is quantized with one-bit ADCs which only reveal the sign of the signals, i.e., either \(+1\) or \(-1\). The quantized signal can be represented as

\[\bar{\textbf{y}}[t]=\mathcal{Q}(\mathrm{Re}\{\bar{\textbf{r}}[t]\})+j\mathcal{Q}(\mathrm{Im}\{\bar{\textbf{r}}[t]\}) \tag{3}\]

where \(\mathcal{Q}(a)=(-1)^{\mathbb{1}\{a\leq 0\}}\in\{-1,+1\}\) is an element-wise one-bit quantizer which returns \(+1\) if the input is positive, or \(-1\) otherwise. The received signal in the complex-vector expression \(\bar{\textbf{r}}[t]\) can be rewritten in a real-valued vector representation as

\[\textbf{r}[t]=\begin{bmatrix}\mathrm{Re}\{\bar{\textbf{r}}[t]\}\\ \mathrm{Im}\{\bar{\textbf{r}}[t]\}\end{bmatrix}=\sqrt{\rho}\textbf{H}^{T}\textbf{x}[t]+\textbf{z}[t] \tag{4}\]

where

\[\textbf{H}^{T}=\begin{bmatrix}\mathrm{Re}\{\bar{\textbf{H}}^{T}\}&-\mathrm{Im}\{\bar{\textbf{H}}^{T}\}\\ \mathrm{Im}\{\bar{\textbf{H}}^{T}\}&\mathrm{Re}\{\bar{\textbf{H}}^{T}\}\end{bmatrix}, \tag{5}\]

\[\textbf{x}[t]=\begin{bmatrix}\mathrm{Re}\{\bar{\textbf{x}}[t]\}\\ \mathrm{Im}\{\bar{\textbf{x}}[t]\}\end{bmatrix}, \tag{6}\]

\[\textbf{z}[t]=\begin{bmatrix}\mathrm{Re}\{\bar{\textbf{z}}[t]\}\\ \mathrm{Im}\{\bar{\textbf{z}}[t]\}\end{bmatrix}, \tag{7}\]

and \(\textbf{z}[t]\sim\mathcal{N}(\textbf{0}_{2N_{r}},\frac{N_{0}}{2}\textbf{I}_{2N_{r}})\). Accordingly, we also rewrite the quantized signal in a real-vector form as

\[\textbf{y}[t] =\mathcal{Q}(\textbf{r}[t]) \tag{8}\]
\[=\mathcal{Q}(\sqrt{\rho}\textbf{H}^{T}\textbf{x}[t]+\textbf{z}[t]), \tag{9}\]

which is composed of \(2N_{r}\) real-valued observations of either \(-1\) or \(+1\). Throughout the paper, we consider \(2N_{r}\) antennas to denote the real-valued ports for ease of notation, i.e., the \(i\)th antenna in the real-value representation corresponds to \(y_{i}[t]\).

### _One-Bit ML Detection with CSI_

We first introduce the conventional one-bit ML detection with full CSI. We define the index set of all possible symbol vectors as \(\mathcal{K}=\{1,\ldots,K\}\) and use \(\textbf{s}_{k}\) to denote the \(k\)th pilot symbol vector in a real-vector form. Let \(\mathbf{P}^{(\beta)}\in[0,1]^{K\times 2N_{r}}\) with \(\beta\in\{-1,+1\}\) denote the matrix of likelihood probabilities whose entry \(\mathbf{P}_{k,i}^{(\beta)}\) denotes the probability that the \(i\)th antenna component receives \(\beta\) when the users transmit the \(k\)th symbol vector \(\textbf{s}_{k}\). Assuming uncorrelated antennas, the likelihood probability of the one-bit quantized signal vector \(\textbf{y}[t]\) for a given channel \(\mathbf{H}\) and transmit symbol vector \(\textbf{s}_{k}\) is given as

\[\mathbb{P}(\textbf{y}[t]|\mathbf{H},\textbf{s}_{k})=\prod_{i=1}^{2N_{r}}\mathbf{P}_{k,i}^{(y_{i}[t])}. \tag{10}\]
We remark that the likelihood function for the \(i\)th antenna element of an observation \(y_{i}[t]\in\{-1,+1\}\) with perfect CSI can be computed as

\[\mathbf{P}_{k,i}^{(y_{i}[t])} =\mathbb{P}(y_{i}[t]|\textbf{h}_{i},\textbf{s}_{k}) \tag{11}\]
\[=\Phi\left(y_{i}[t]\psi_{k,i}\right), \tag{12}\]

where

\[\psi_{k,i}=\sqrt{\frac{\rho}{N_{0}/2}}\textbf{h}_{i}^{T}\textbf{s}_{k} \tag{13}\]

is the effective output of the \(i\)th antenna in the real-value representation when transmitting the \(k\)th symbol vector, and \(\Phi(x)=\int_{-\infty}^{x}\frac{1}{\sqrt{2\pi}}e^{-\tau^{2}/2}d\tau\) is the cumulative distribution function (CDF) of a standard Gaussian distribution. Based on (10), the one-bit ML detection rule is given as

\[k^{\star}[t]=\operatorname*{argmax}_{k\in\mathcal{K}}\prod_{i=1}^{2N_{r}}\mathbf{P}_{k,i}^{(y_{i}[t])}. \tag{14}\]

The detected real-valued symbol vector is then defined as \(\hat{\textbf{x}}[t]=\textbf{x}_{k^{\star}[t]}\), which can be mapped to \(\hat{\bar{\textbf{x}}}[t]\in\mathbb{Q}_{M}^{N_{u}}\) as detected QAM symbols by performing the reverse operation of (6). Assuming an equal probability for each pilot symbol vector, (14) provides the optimal detection. We note that the optimal ML detection in (14) requires full CSI for computing (11). Channel estimation, however, can be a heavy burden in massive MIMO systems and is much less accurate for receivers employing one-bit ADCs. In this regard, it is desirable to perform the optimal detection without requiring explicit channel estimation in one-bit massive MIMO systems.

## III Preliminary: Naive One-bit ML Detection without CSI

Now, we outline a direct learning-based one-bit ML detection strategy that does not require channel estimation. Although this approach still requires \(N_{\text{tr}}\) training sequences, the learning principle is much simpler than one-bit channel estimation, thereby providing robust detection performance. Each pilot symbol vector \(\mathbf{s}_{k}\in\mathbb{Q}_{M}^{N_{u}}\) is transmitted \(N_{\text{tr}}\) times throughout the pilot transmission of length \(N_{t}\). The AP aims to approximate the true likelihood probability \(\mathbf{P}_{k,i}^{(\beta)}\) by observing the frequency of \(y_{i}[t]=+1\) and \(y_{i}[t]=-1\) during the transmission as

\[\hat{\mathbf{P}}_{k,i}^{(\beta)}=\left\{\begin{array}{l}\hat{\mathbf{P}}_{k,i}^{(+1)}=\frac{1}{N_{\text{tr}}}\sum_{\tau=1}^{N_{\text{tr}}}\mathbb{1}\{y_{i}[(k-1)N_{\text{tr}}+\tau]=+1\}\\ \hat{\mathbf{P}}_{k,i}^{(-1)}=1-\hat{\mathbf{P}}_{k,i}^{(+1)}\end{array}\right. \tag{15}\]

where \(\beta\in\{+1,-1\}\). The operation in (15) measures the number of \(+1\)'s at the \(i\)th antenna element out of the \(N_{\text{tr}}\) observations triggered by \(\mathbf{s}_{k}\). After learning the likelihood functions, the AP obtains the estimate of the likelihood probability for a given data signal \(\mathbf{y}[t]\) as

\[\mathbb{P}(\mathbf{y}[t]|\mathbf{H},\mathbf{s}_{k})\approx\prod_{i=1}^{2N_{r}}\left(\hat{\mathbf{P}}_{k,i}^{(+1)}\mathbb{1}\left\{y_{i}[t]=+1\right\}+\hat{\mathbf{P}}_{k,i}^{(-1)}\mathbb{1}\left\{y_{i}[t]=-1\right\}\right), \tag{16}\]

and the receiver can perform the ML detection in (14) by searching for the best index that maximizes (16) over \(\mathcal{K}\).
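For illustration, the counting rule in (15) and the detection rule in (14)/(16) amount to only a few lines of code. The following minimal sketch (Python/NumPy; the array shapes and the log-domain evaluation with a small floor `eps` are our own conveniences, not part of the paper's specification) makes the procedure concrete; the floor guards against the zero-probability failure mode formalized in Remark 1 below.

```python
import numpy as np

def learn_likelihoods(Y_pilot):
    """Counting rule of Eq. (15). Y_pilot has shape (K, N_tr, 2*N_r) and
    holds the +/-1 pilot observations for each candidate symbol vector."""
    return (Y_pilot == +1).mean(axis=1)          # K x 2N_r matrix of P^{(+1)}

def ml_detect(y, P_plus, eps=1e-12):
    """ML rule of Eqs. (14)/(16) in the log domain for numerical stability;
    eps keeps under-trained (zero-valued) probabilities from negating the
    product, the issue discussed in Remark 1 below."""
    P = np.where(y == +1, P_plus, 1.0 - P_plus)  # per-antenna likelihoods
    return int(np.argmax(np.log(P + eps).sum(axis=1)))
```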
Although such one-bit ML approaches can provide near-optimal detection performance with simple function learning, they may suffer from critical performance degradation due to a limited amount of training, as stated in the following remark:

**Remark 1** (Under-trained likelihood functions). At high SNR, the \(N_{\text{tr}}\) quantized outputs of each antenna are repeatedly observed to be either all \(+1\)'s or all \(-1\)'s due to the low power of the aggregate noise. This phenomenon results in a number of zero-valued empirical likelihood functions in (15), e.g., \(\hat{\mathbf{P}}_{k,i}^{(\beta)}=0\), because the one-bit quantized observations in the high SNR regime become quasi-deterministic, such that it is difficult to observe a change in the sign of the quantized output sequences during the \(N_{\text{tr}}\) transmissions of the symbol vector \(\mathbf{s}_{k}\). Such a zero-valued likelihood function, called an under-trained likelihood function, ruins the ML detection rule since the ML computation in (16) can be completely negated by any zero probability.

Fig. 1 shows the symbol error rates (SERs) of the optimal one-bit ML detection and the naive approach with the number of training samples \(N_{\text{tr}}\in\{10,100,1000\}\) for \(N_{r}=32\) receive antennas, \(N_{u}=3\) users and \(4\)-QAM modulation with respect to the SNR. It is observed that although we increase the number of pilot signals, the naive approach starts to suffer at medium to high SNR since the under-trained likelihood functions appear more frequently with the SNR. Therefore, such a critical drawback of the naive learning-based approach needs to be resolved to deploy one-bit ADC systems in practice.

Figure 1: Symbol error rate simulation results of the optimal one-bit ML detection with full CSI against naive learning-based one-bit ML detection for \(N_{r}=32\) receive antennas, \(N_{u}=3\) users, \(4\)-QAM, and \(N_{\text{tr}}\in\{10,100,1000\}\) pilot signals.

## IV Adaptive Statistical Learning without CSI

In this section, we present an adaptive learning-based ML detection method for one-bit ADC systems in order to closely achieve the optimal CSI-aware ML detection performance without suffering the error floor of the naive learning approach observed in Fig. 1 and without requiring explicit estimation of channels. Since ML estimation is identical to maximum a posteriori estimation when all possible transmit symbols are equally likely, it is optimal in minimizing the probability of detection error. Accordingly, the proposed method can achieve detection performance close to optimal without explicit channel estimation.

### _Dither-and-Learning_

To resolve the problem of the under-trained likelihood functions, we propose the dither-and-learning method that can learn the likelihood functions with a reasonable training length \(N_{\text{tr}}\). As shown in Fig. 2, the AP appends antenna-wise dithering signals \(d_{i}[t]\) to the analog baseband received signal \(r_{i}[t]\) during the training phase.

Figure 2: A receiver architecture for the pilot transmission phase with a dithering signal added before quantization. Based on the feedback information, the variance of the dithering signal is updated.

After dithering, the quantization input for transmitted symbol \(\mathbf{s}_{k}\) in the real-vector form becomes

\[\mathbf{r}_{\text{D},k}[t] =\mathbf{r}_{k}[t]+\mathbf{d}[t] \tag{17}\]
\[=\sqrt{\rho}\mathbf{H}\mathbf{s}_{k}+\mathbf{z}[t]+\mathbf{d}[t]. \tag{18}\]
We let \(\sigma_{i}^{2}/2\) denote the variance of the real-valued dithering signal at the \(i\)th antenna and consider \(\mathbf{d}[t]\sim\mathcal{N}(\mathbf{0}_{2N_{r}},\mathbf{\Sigma})\) where \(\mathbf{\Sigma}=\operatorname{diag}(\sigma_{1}^{2}/2,\ldots,\sigma_{2N_{r}}^{2}/2)\). The distribution of the dithering signal is controlled at the AP. The dithered and quantized signal associated with the \(k\)th symbol vector becomes

\[\mathbf{y}_{\text{D},k}[t]=\mathcal{Q}(\sqrt{\rho}\mathbf{H}\mathbf{s}_{k}+\mathbf{z}[t]+\mathbf{d}[t])\in\{+1,-1\}^{2N_{r}}. \tag{19}\]

As a next step, the AP computes the estimated likelihood function for the dithered signals \(\hat{\mathbf{P}}_{\text{D},k,i}^{(\beta)}\) as in (15) for \(\beta\in\{+1,-1\}\). Without loss of generality, let us fix \(\beta=+1\) for ease of explanation. Then, \(\hat{\mathbf{P}}_{\text{D},k,i}^{(+1)}\) offers an estimate of the actual likelihood functions as shown in (12), with increased noise power:

\[\hat{\mathbf{P}}_{\text{D},k,i}^{(+1)}\approx\Phi\left(\sqrt{\frac{2\rho}{N_{0}+\sigma_{i}^{2}}}\mathbf{h}_{i}^{T}\mathbf{s}_{k}\right). \tag{20}\]

Assuming \(N_{0}\) (equivalently, the SNR) is known at the AP, the AP can find the estimate of \(\psi_{k,i}\) in (13) by leveraging (20). Such denoising is computed as

\[\hat{\psi}_{k,i}=\sqrt{1+\frac{\sigma_{i}^{2}}{N_{0}}}\Phi^{-1}\left(\hat{\mathbf{P}}_{\text{D},k,i}^{(+1)}\right). \tag{21}\]

Finally, the AP uses \(\hat{\psi}_{k,i}\) to approximate the true (non-dithered) likelihood function \(\mathbf{P}_{k,i}^{(+1)}\) as

\[\hat{\mathbf{P}}_{k,i}^{(+1)}=\Phi\left(\hat{\psi}_{k,i}\right). \tag{22}\]

Since the likelihood function of the dithered signal \(\hat{\mathbf{P}}_{\text{D},k,i}^{(+1)}\) in (20) is much less likely to have zero probability compared with that of the non-dithered case, the AP can learn the majority of the likelihood functions \(\hat{\mathbf{P}}_{k,i}^{(+1)}\) with a reasonable training length. When we observe zero likelihood functions after the dither-and-learning process, we set a very small probability that is lower than any of the non-zero likelihood functions, i.e.,

\[\tilde{\mathbf{P}}_{k,i}^{(\beta)}=p_{\min,k},\quad\forall i\in\mathcal{A}_{k}^{0}(\beta), \tag{23}\]

where \(p_{\min,k}<\min_{j\in\mathcal{A}_{k}^{\text{av}}(\beta)}\tilde{\mathbf{P}}_{k,j}^{(\beta)},\ \forall\beta\), \(\mathcal{A}_{k}^{0}(\beta)\) indicates the index set of zero likelihood functions for \(\mathbf{s}_{k}\) and \(\beta\), and \(\mathcal{A}_{k}^{\text{av}}(\beta)\) is the index set of non-zero likelihood functions for \(\mathbf{s}_{k}\) and \(\beta\). For the proposed dither-and-learning method, intuitively, the power of the dithering signals affects the learning performance, as stated in Remark 2.

**Remark 2**. The level of dithering power is important: low dithering power continues to trigger under-trained likelihood functions, while high dithering power hinders recovering the symbol information by making the noise term dominant.

Based on Remark 2, we further propose an adaptive dithering power update method in the following section.
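As a minimal sketch of the denoising in (20)-(22) (Python/SciPy; the clipping threshold `p_clip` is our own safeguard standing in for the floor assignment in (23), and `norm.ppf`/`norm.cdf` implement \(\Phi^{-1}\) and \(\Phi\)):

```python
import numpy as np
from scipy.stats import norm

def denoise_likelihood(P_dith, sigma2, N0, p_clip=1e-3):
    """Recover the non-dithered likelihood of Eqs. (20)-(22) from the
    empirical likelihood P_dith learned on the dithered observations.
    sigma2 may be a scalar or a per-antenna array of dithering variances."""
    P_d = np.clip(P_dith, p_clip, 1.0 - p_clip)            # keep Phi^{-1} finite
    psi_hat = np.sqrt(1.0 + sigma2 / N0) * norm.ppf(P_d)   # Eq. (21)
    return norm.cdf(psi_hat)                               # Eq. (22)
```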
### _Adaptive Dithering Power Update_

Fixing the dithering variance does not suitably adjust the dithering power, and this behavior can cause two fundamental problems: 1) when the dithering power is low and the SNR remains high, it is highly probable to have undesirably many under-trained likelihood functions, and 2) for high dithering power, although the dither-and-learning procedure successfully prevents the under-trained likelihood functions, the estimate of the effective output in (21) cannot be accurate due to the large randomness of the dithering signals. In this respect, the AP has to properly determine the dithering power considering the system environment. To this end, we empirically update the dithering power by leveraging feedback based on the behavior of received observations and propose the adaptive dither-and-learning (ADL) method that fits the dithering power into a suitable range.

As shown in Fig. 3, we first divide the \(N_{\text{tr}}\) signals of each pilot symbol \(\mathbf{s}_{k}\) into \(N_{s}\) disjoint sub-blocks in which each sub-block accommodates \(N_{\text{tr}}^{\text{sub}}=N_{\text{tr}}/N_{s}\) training samples, where \(N_{\text{tr}}\) is assumed to be a multiple of \(N_{s}\).

Figure 3: Communication data frame with a pilot transmission phase and a data transmission phase.

Then, the \(n\)th dithered and quantized sub-block observed at the \(i\)th antenna when transmitting \(\mathbf{s}_{k}\) can be represented as

\[\tilde{\mathbf{y}}_{\text{D},k,i,n}=\big{\{}y_{\text{D},k,i}\big{[}(k-1)N_{\text{tr}}+(n-1)N_{\text{tr}}^{\text{sub}}+1\big{]},\ldots,y_{\text{D},k,i}\big{[}(k-1)N_{\text{tr}}+nN_{\text{tr}}^{\text{sub}}\big{]}\big{\}}^{T}\in\{+1,-1\}^{N_{\text{tr}}^{\text{sub}}}, \tag{24}\]

where \(n\in\{1,\ldots,N_{s}\}\) and \(y_{\text{D},k,i}[t]\) denotes the dithered observation at the \(i\)th antenna at time \(t\) for the \(k\)th pilot symbol vector \(\mathbf{s}_{k}\). When the received training sequence is either \(\tilde{\mathbf{y}}_{\text{D},k,i,n}=+\mathbf{1}_{N_{\text{tr}}^{\text{sub}}}\) or \(\tilde{\mathbf{y}}_{\text{D},k,i,n}=-\mathbf{1}_{N_{\text{tr}}^{\text{sub}}}\) for antenna \(i\), the dithering power is regarded as lower than the desirable dithering power for \(\mathbf{s}_{k}\) at antenna \(i\) in the current system. In such a case, we increase the dithering noise variance of the \(i\)th antenna for the next sub-block by \(\Delta\), i.e.,

\[\sigma_{i}^{2}\leftarrow\sigma_{i}^{2}+\mathcal{I}_{i}\Delta, \tag{25}\]

where \(\mathcal{I}_{i}\) is the indicator function defined for the \(i\)th antenna, i.e., \(\mathcal{I}_{i}=1\) if \(\tilde{\mathbf{y}}_{\text{D},k,i,n}=+\mathbf{1}_{N_{\text{tr}}^{\text{sub}}}\) or \(\tilde{\mathbf{y}}_{\text{D},k,i,n}=-\mathbf{1}_{N_{\text{tr}}^{\text{sub}}}\), and \(\mathcal{I}_{i}=0\) otherwise. This makes it more likely that the subsequent training sequence observes a sign change within the \(N_{\text{tr}}^{\text{sub}}\) quantized outputs, thanks to the increased perturbation. Upon completing all sub-blocks, the likelihood probability of symbol vector \(\mathbf{s}_{k}\) is determined by computing the mean of the likelihood probabilities over all \(N_{s}\) sub-blocks associated with \(\mathbf{s}_{k}\). Algorithm 1 summarizes the adaptive dither-and-learning (ADL) process. We note that the fixed dither-and-learning method in Section IV-A is the special case of the ADL method with \(N_{s}=1\).
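The per-sub-block feedback in (25) reduces to a single all-same-sign test per antenna. A compact sketch (Python/NumPy; array shapes are hypothetical), consistent with the indicator \(\mathcal{I}_{i}\) in Algorithm 1 below, is:

```python
import numpy as np

def adl_update(sigma2, Y_sub, delta):
    """Feedback rule of Eq. (25): raise the dithering variance by delta at
    every antenna whose sub-block Y_sub (N_tr_sub x 2*N_r, entries +/-1)
    showed no sign change, mirroring the indicator I_i in Algorithm 1."""
    no_sign_change = np.all(Y_sub == Y_sub[0, :], axis=0)  # I_i per antenna
    return sigma2 + delta * no_sign_change
```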
We also remark that the ADL method prevents not only the under-trained likelihood functions but also undesirably large fluctuations of the received signals, since the dithering power update is supervised by the AP to fit into the appropriate SNR region based on the observations.

```
 1: Initialize \(\tilde{\mathbf{P}}_{k,i}^{(+1)}=0\) for all \(k,i\)
 2: Fix the increment of the dithering variance, \(\Delta\)
 3: for \(k=1\) to \(K\) do
 4:   Initialize \(\sigma_{i}^{2}=\sigma^{2}\) and \(\mathcal{I}_{i}=0\) for all \(i\)
 5:   for \(n=1\) to \(N_{s}\) do
 6:     for \(i=1\) to \(2N_{r}\) do
 7:       Observe \(\tilde{\mathbf{y}}_{\text{D},k,i,n}\) in (24) during \(N_{\text{tr}}^{\text{sub}}\) slots
 8:       Compute \(\hat{\mathbf{P}}_{\text{D},k,i}^{(+1)}\) of \(\tilde{\mathbf{y}}_{\text{D},k,i,n}\) using (15)
 9:       Compute \(\hat{\psi}_{k,i}\) in (21)
10:       \(\tilde{\mathbf{P}}_{k,i}^{(+1)}\leftarrow\tilde{\mathbf{P}}_{k,i}^{(+1)}+\frac{1}{N_{s}}\Phi(\hat{\psi}_{k,i})\)
11:       \(\mathcal{I}_{i}\leftarrow\mathbb{1}\{\tilde{\mathbf{y}}_{\text{D},k,i,n}=+\mathbf{1}_{N_{\text{tr}}^{\text{sub}}}\}+\mathbb{1}\{\tilde{\mathbf{y}}_{\text{D},k,i,n}=-\mathbf{1}_{N_{\text{tr}}^{\text{sub}}}\}\)
12:       \(\sigma_{i}^{2}\leftarrow\sigma_{i}^{2}+\mathcal{I}_{i}\Delta\)
13:     end for
14:   end for
15: end for
16: return \(\tilde{\mathbf{P}}^{(+1)}\) and \(\tilde{\mathbf{P}}^{(-1)}=1-\tilde{\mathbf{P}}^{(+1)}\)
```
**Algorithm 1** Adaptive Dither-and-Learning (ADL)

### _SNR Estimation_

Even with properly managed dithering power, the computation of likelihood probabilities using the denoising process in (21) requires perfect knowledge of the SNR \(\gamma\), or equivalently, the AWGN noise variance \(N_{0}\). In this work, we also perform the SNR estimation via offline supervised learning using a deep neural network, as shown in Fig. 4. The offline training first collects training data points \(\{\mathbf{y}[j];\gamma[j]\}\), where \(\mathbf{y}[j]\in\{+1,-1\}^{2N_{r}}\) is the \(j\)th one-bit quantized observation and \(\gamma[j]\) is the true SNR at time \(j\). Once sufficient samples are collected, the AP selects a portion of the data points as training samples and performs supervised offline learning with \(\mathbf{y}[j]\) as inputs and \(\gamma[j]\) as outputs to be estimated. Assuming that there exist \(L\) hidden layers, the estimated SNR is represented as the scalar output of the neural network expressed as

\[\hat{\gamma}[j]=\mathbf{w}_{L}^{T}\mathbf{a}_{L-1}+b_{L}, \tag{26}\]

where each intermediate vector is defined as \(\mathbf{a}_{\ell}=\phi\left(\mathbf{W}_{\ell}\mathbf{a}_{\ell-1}+\mathbf{b}_{\ell}\right)\) for \(\ell\in\{1,\ldots,L-1\}\) with the initial point \(\mathbf{a}_{0}=\mathbf{y}[j]\), where \(\phi(\cdot)\) is an element-wise activation function such as the rectified linear unit or sigmoid function. The deep neural network is updated by minimizing the estimation error, e.g., \((\gamma[j]-\hat{\gamma}[j])^{2}\), and hence estimates the SNR by extracting meaningful information from the one-bit observations, such as statistical patterns and the number of \(+1\)'s or \(-1\)'s.

## V Extension to Channel Coding

Even though the one-bit ML detection has attractive aspects, we are still confined to uncoded hard-decision scenarios. Modern communication frameworks should be paired with channel coding, which exhibits impressive gains and performance calibration; however, soft outputs are needed from the decoding perspective.
In this section, we first introduce a frame structure to use channel coding; after that, we describe how to generate soft metrics from the previously trained likelihood functions.

### _Frame Structure_

For a channel-coded communication framework, we first assume that a (\(\kappa\), \(\eta\)) binary code with code rate \(\kappa/\eta\) is used throughout the paper. At the beginning of the framework, each user \(u\) generates an uncoded binary message of length \(\kappa\), e.g., \(\mathbf{m}_{u}\in\{0,1\}^{\kappa}\). Encoding with the pre-arranged channel coding scheme, we have the codeword for \(\mathbf{m}_{u}\in\{0,1\}^{\kappa}\), denoted as \(\mathbf{c}_{u}\in\{0,1\}^{\eta}\). As the last step, each user combines \(q\ (=\log_{2}M)\) pieces of information together to map the binary bits into an \(M\)-ary QAM symbol, and then the transmitted symbol of the \(u\)th user at time slot \(t\) is represented as

\[\bar{s}_{u}[t]=f_{M}\left(\{c_{u}[(t-1)q+1],\ldots,c_{u}[tq]\}\right) \tag{27}\]

where \(f_{M}:\{0,1\}^{q}\rightarrow\mathbb{Q}_{M}\) is the constellation mapping function from \(q\) binary bits to \(M\)-ary QAM symbols and \(t\in\{1,\ldots,\eta/q\}\), with \(\eta/q\) being the number of channel uses for a data subframe of each user, obtained by mapping \(q\) bits into a symbol.

Figure 4: Illustration of the SNR offline training via deep neural networks.

The overall communication structure is illustrated in Fig. 3. Each subframe of the data transmission phase is composed of \(N_{\text{d}}^{\text{sub}}=\eta/q\) channel uses, and the data transmission phase consists of \(D\) subframes, i.e., \(N_{\text{d}}=DN_{\text{d}}^{\text{sub}}\).

### _Soft Metric_

In Section IV, we produced a posteriori probabilities (APPs) utilizing the repeated transmissions with \(N_{\text{tr}}\) pilot signals per possible symbol vector and the ADL technique. Furthermore, from the calculated APPs, which are the likelihood probabilities, we can compute a likelihood ratio for a given data payload observation \(\mathbf{y}_{d}[t]\). We note that the one-bit observation at the \(t\)th time slot is responsible for the LLR computation of the \(q\) bit positions of each user; as a result, the LLR needs to be calculated in a user-wise and bit-wise manner. To this end, for the \(\ell\)th bit of the QAM symbol of user \(u\), we separate the index set of all possible symbol vectors into two non-overlapping subgroups as follows:

\[\mathcal{S}_{\ell,b}^{u}\triangleq\{k\mid\bar{s}_{k,u}=f_{M}\left(\left\{c_{1},\ldots,c_{q}\right\}\right),c_{\ell}=b,k\in\mathcal{K}\}, \tag{28}\]

where \(b\in\{0,1\}\) and \(\bar{s}_{k,u}\) denotes the \(u\)th element of \(\bar{\mathbf{s}}_{k}\), which is the QAM symbol of user \(u\). Consequently, each subset in (28) is crafted to separate the \(K\) indices into two disjoint sets in terms of the \(\ell\)th bit of the \(u\)th user's bit sequence that corresponds to \(\bar{s}_{k,u}\). Note that the subsets are defined regardless of current observations and computed only once when the set of system parameters is configured.
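To make the subset construction in (28) and the resulting bit-wise LLR (derived formally in (29) next) concrete, consider the following minimal sketch (Python; the bit-labeling table `bit_labels` is an assumed Gray-mapping input, and the use of `logsumexp` for numerical stability is our own convenience, not prescribed by the paper):

```python
import numpy as np
from itertools import product
from scipy.special import logsumexp

def build_subsets(bit_labels, N_u):
    """Index subsets S^u_{l,b} of Eq. (28). bit_labels[m] is the q-bit label
    of the m-th constellation point; symbol vectors are enumerated as the
    Cartesian product over the N_u users (K = M^N_u entries)."""
    M, q = len(bit_labels), len(bit_labels[0])
    vectors = list(product(range(M), repeat=N_u))
    return {(u, l, b): [k for k, v in enumerate(vectors)
                        if bit_labels[v[u]][l] == b]
            for u in range(N_u) for l in range(q) for b in (0, 1)}

def bit_llr(logP_y, S, u, l):
    """LLR of Eq. (29); logP_y[k] = sum_i log P_{k,i}^{(y_i)} for hypothesis k."""
    return logsumexp(logP_y[S[(u, l, 0)]]) - logsumexp(logP_y[S[(u, l, 1)]])

# hypothetical 4-QAM Gray labels for two users
S = build_subsets([(0, 0), (0, 1), (1, 1), (1, 0)], N_u=2)
print(bit_llr(np.log(np.full(16, 1 / 16)), S, u=0, l=0))  # uniform -> LLR = 0
```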
Leveraging the two separated subgroups and the likelihood probabilities for the given observation, the corresponding LLR of the \(\ell\)th bit of the \(u\)th user at time \(t\) can be represented as

\[\begin{split}\Lambda_{(t-1)q+\ell}^{u}(\mathbf{y}_{d}[t]|\mathbf{H})&\overset{(a)}{=}\log\frac{\mathbb{P}(c_{u}[(t-1)q+\ell]=0|\mathbf{y}_{d}[t],\mathbf{H})}{\mathbb{P}(c_{u}[(t-1)q+\ell]=1|\mathbf{y}_{d}[t],\mathbf{H})}\\ &\overset{(b)}{=}\log\frac{\mathbb{P}(\mathbf{y}_{d}[t]|c_{u}[(t-1)q+\ell]=0,\mathbf{H})}{\mathbb{P}(\mathbf{y}_{d}[t]|c_{u}[(t-1)q+\ell]=1,\mathbf{H})}\\ &\overset{(c)}{=}\log\frac{\sum_{k\in\mathcal{S}_{\ell,0}^{u}}\mathbb{P}(\mathbf{y}_{d}[t]|\mathbf{s}_{k},\mathbf{H})\mathbb{P}(\mathbf{s}_{k})}{\sum_{k\in\mathcal{S}_{\ell,1}^{u}}\mathbb{P}(\mathbf{y}_{d}[t]|\mathbf{s}_{k},\mathbf{H})\mathbb{P}(\mathbf{s}_{k})}\\ &\overset{(d)}{=}\log\frac{\sum_{k\in\mathcal{S}_{\ell,0}^{u}}\prod_{i=1}^{2N_{r}}\mathbf{P}_{k,i}^{(y_{d,i}[t])}}{\sum_{k\in\mathcal{S}_{\ell,1}^{u}}\prod_{i=1}^{2N_{r}}\mathbf{P}_{k,i}^{(y_{d,i}[t])}},\end{split} \tag{29}\]

where \(\ell\in\{1,\ldots,q\}\), \(t\in\{1,\ldots,\eta/q\}\), \((a)\) is from the definition of the LLR, \((b)\) is from Bayes' rule with the equiprobability of \(\mathbf{y}_{d}\) and \(\mathbf{c}_{u}\), \((c)\) comes from the definition of the sets in (28), and \((d)\) is from the equiprobability of \(\mathbf{s}_{k}\) and the ML detection rule in (10). Finally, the collected LLRs associated with the \(u\)th user, i.e., \(\{\Lambda_{1}^{u},\ldots,\Lambda_{\eta}^{u}\}\), are conveyed to a channel decoder to recover the message \(\mathbf{m}_{u}\in\{0,1\}^{\kappa}\). Therefore, the ADL-based estimates of the likelihood functions can be successfully used for computing the LLR of the channel decoder.

## VI Simulation Results

In this section, we evaluate the performance of the proposed learning-based method in terms of the number of under-trained likelihood functions, the symbol error rate (SER) for uncoded systems, and the frame error rate (FER) for coded systems. We consider the Rayleigh fading model \(\bar{\mathbf{H}}\), in which each element follows \(\mathcal{CN}(0,1)\). We initialize the dithering variance as \(\sigma_{i}^{2}=\rho/2\) and the increment as \(\Delta=\rho/3\) for all AP antennas in the ADL case.

### _Under-trained Likelihood Functions_

Fig. 5 shows the average number of under-trained likelihood functions, i.e., \(\hat{\mathbf{P}}_{k,i}^{(\beta)}=0\), out of \(2N_{r}\) antennas over the wide range of simulated SNR levels. For the learning-based detectors, we use \(N_{\text{tr}}=45\) and compare the naive learning and the ADL methods with \(N_{s}\in\{1,3,5\}\). Recall that the ADL method with \(N_{s}=1\) reduces to the case that uses identical and fixed dithering power without adaptation. As the SNR increases, the number of under-trained likelihood functions for the non-dithering case rapidly approaches \(2N_{r}\). For the ADL case with \(N_{s}=1\), i.e., fixed dithering power, however, the number of under-trained likelihood functions increases much more slowly with the SNR and converges to around \(20\) thanks to the dithering effect. In addition, for the ADL method with a non-trivial split factor, the number of under-trained likelihood functions increases only to \(17\) and \(9\) when \(N_{s}=3\) and \(N_{s}=5\), respectively. Since the ADL method decides whether to increase the dithering noise depending on the realization of each sub-block, we can further optimize the learning procedure in terms of the number of under-trained likelihood functions. If we properly increase \(N_{s}\), each antenna is more likely to avoid zero-valued likelihood functions. As a result, with the adaptive dithering, the proposed algorithm can estimate many more valid likelihood functions, thereby increasing the detection accuracy.

Figure 5: The number of under-trained likelihood functions among \(2N_{r}\) likelihood functions for \(N_{u}=4\) users, 4-QAM, \(N_{r}=32\) antennas, and \(N_{\text{tr}}=45\) pilot signals with Rayleigh channels. The proposed adaptive dither-and-learning (ADL) method divides the training period into \(N_{s}\in\{1,3,5\}\) sub-blocks for the feedback-driven update of dithering power.
Since the ADL method decides whether to increase the dithering noise depending on the realization of each sub-block, we can further optimize the learning procedure in terms of the number of under-trained likelihood functions. If we properly increase \(N_{s}\), each antenna is more likely to avoid zero-valued likelihood functions. As a result, with the adaptive dithering, the proposed algorithm can estimate much more valid likelihood functions, thereby increasing the detection accuracy. Fig. 5: The number of under-trained likelihood functions among \(2N_{r}\) likelihood functions for \(N_{\text{tr}}=4\) users, 4-QAM, \(N_{r}=32\) antennas, and \(N_{\text{tr}}=45\) pilot signals with Rayleigh channels. The proposed adaptive dither-and-learning (ADL) method divides the training period into \(N_{s}\in\{1,3,5\}\) sub-blocks for the feedback-driven update of dithering power. ### _Uncoded System: Symbol Error Rate_ To evaluate the data detection performance of the proposed methods in the multiuser massive MIMO system, we compare the following detection methods: 1. Naive learning-based one-bit ML 2. ADL-based one-bit ML (proposed) 3. ADL-based one-bit ML with estimated SNR (proposed) 4. Minimum-Center-Distance (MCD) [33] 5. One-bit ZF with perfect CSI [19] 6. One-bit ML with perfect CSI (optimal one-bit detection) 7. One-bit ML with estimated CSI 8. Infinite-bit ML with perfect CSI (optimal detection) We note that the learning-based methods: 1) Naive one-bit ML, 2) ADL one-bit ML, 3) ADL one-bit ML with estimated SNR, and 4) MCD, do not require explicit channel estimation; however, the other methods either assume perfect CSI or estimated CSI at the AP. The learning-based methods transmit \(N_{\text{tr}}\) pilot signals per each training symbol vector, which requires \(KN_{\text{tr}}\) pilot signals in total. Accordingly, we consider that the conventional one-bit ML detection with an estimated channel also uses \(KN_{\text{tr}}\) pilot signals to estimate the channel. In our simulations, the one-bit channel estimation method developed in [20] is adopted to provide the estimated CSI. For readability of the curves, we compare MCD for the 16-QAM case shown in Fig. 8. Fig. 6 presents the SER results for \(N_{r}=32\) antennas, \(N_{u}=4\) users, \(N_{\text{tr}}=45\) pilot signals, and 4-QAM. As expected from Fig. 5, the naive-learning approach shows the catastrophic result from the medium to high SNR due to the large number of zero-valued likelihood functions. The one-bit ZF detection that applies the pseudo-inverse matrix of the perfectly-known channel matrix onto the one-bit observations shows the large performance degradation with the error floor at the medium and high SNR regime. The one-bit ML detection with the one-bit estimated channels shows a larger deviation from the optimal one-bit ML detection with perfect CSI as the SNR increases due to the channel estimation error. Unlike the above benchmarks, the proposed ADL one-bit ML methods closely follow the SER performance curve of the optimal one-bit ML case by avoiding under-trained likelihood functions as shown in Fig. 5 and learning the likelihood functions with high accuracy. In addition, the proposed ADL method with \(N_{s}=3\) has around \(1.0~{}\mathrm{dB}\) gain over the ADL method with fixed dithering power, i.e., \(N_{s}=1\), which demonstrates the gain of adaptive dithering based on the feedback. We can also notice that the performance gap between the ADL method with the perfect SNR and the ADL with the estimated SNR is marginal. 
This observation confirms that the offline supervised SNR learning successfully captures the observation pattern needed to estimate the SNR used in the de-noising phase of the ADL method. Lastly, we observe that the optimal one-bit ML detection with \(N_{r}=32\) achieves target SERs, e.g., \(10^{-4}\) to \(10^{-5}\), similar to those of the infinite-resolution ML detection with \(N_{r}=10\) antennas. By deploying \(\sim 3\times\) more antennas with low-cost one-bit ADCs, we can compensate for the severe non-linearity loss caused by one-bit ADCs and achieve higher detection performance than the infinite-bit ADC system in the low to medium SNR regime. Fig. 7 shows the SER performance of the one-bit ML algorithms for different training lengths, \(N_{\text{tr}}\in\{45,90\}\), with \(N_{r}=32\) AP antennas, \(N_{u}=4\) users, and 4-QAM. We first observe that both the naive learning-based one-bit ML and the conventional one-bit ML with the estimated channel still show noticeable performance degradation relative to the proposed methods for both the short and long training lengths, \(N_{\text{tr}}\in\{45,90\}\). This implies that, to achieve the optimal one-bit ML performance, a large number of training symbols is necessary for the naive learning-based one-bit ML and the conventional one-bit ML with estimated channels. Figure 6: Symbol error rate results with \(N_{u}=4\) users, \(N_{r}=32\) AP antennas, \(N_{\text{tr}}=45\) pilot signals, and 4-QAM constellation. The proposed adaptive dither-and-learning (ADL) uses \(N_{s}\in\{1,3\}\) split factors. Figure 7: Symbol error rate results with \(N_{u}=4\) users, \(N_{r}=32\) AP antennas, \(N_{\text{tr}}\in\{45,90\}\) pilot signals, and 4-QAM constellation. The proposed adaptive dither-and-learning (ADL) uses \(N_{s}\in\{1,3\}\) split factors. In contrast, the proposed ADL-based one-bit ML detection offers robust performance with respect to the training length. In particular, the SER improvement from increasing \(N_{s}=1\) to \(N_{s}=3\) for the ADL method with \(N_{\text{tr}}=90\) is about \(0.2\) dB, which is small compared with that for the ADL method with \(N_{\text{tr}}=45\). Therefore, we can claim that the proposed ADL method is most beneficial for systems with a limited amount of pilot signals, and that using proper adaptation stages further improves the detection performance. We also find that the ADL case with \(N_{s}=3\) and \(N_{\text{tr}}=45\) achieves almost the same performance as the case with \(N_{s}=1\) and \(N_{\text{tr}}=90\), which emphasizes that adaptive learning can effectively reduce the required amount of training sequences. Fig. 8 shows the SER performance curves for \(N_{r}=64\) AP antennas, \(N_{u}=3\) users, and 16-QAM. We use \(N_{\text{tr}}=45\) training symbols for the learning-based approaches. It is remarkable that the proposed ADL method still offers robust detection performance, whereas the one-bit ZF with perfect CSI and the one-bit ML with estimated CSI present largely degraded detection performance. Although the MCD method shows a lower SER than the other benchmarks, its performance gap from the proposed method is non-trivial and increases with the SNR. In this regard, the simulation results demonstrate that the proposed method outperforms the state-of-the-art one-bit detection methods, is more robust to communication environments, and requires shorter training sequences. ### _Coded System: Frame Error Rate_ We consider the MIMO system with \(N_{r}=32\) antennas, \(N_{u}=4\) users, and 4-QAM.
As a sophisticated channel coding scheme, we adopt a rate-1/2 polar code of length 128, i.e., \((\kappa,\eta)=(64,128)\), and list decoding with list size 8 is used to decode the polar code. In the coded system, we also extend the naive learning-based one-bit ML detection and compare the following methods: 1. Naive learning-based one-bit ML 2. ADL-based one-bit ML (proposed) 3. One-bit successive cancellation soft-output (OSS) [21] For the ADL methods, we allocate \(N_{\text{tr}}=45\) pilot signals to each symbol vector. Unlike the learning-based methods, the OSS detector assumes perfect CSI to compute LLRs. Accordingly, it can be regarded as an FER lower bound, and we include it to provide a performance guideline. Recall that, to use state-of-the-art channel codes, we calculate LLRs using the likelihood probabilities derived by each method. Fig. 9 illustrates the FER of the channel-coded systems. The naive learning one-bit detection no longer exhibits the catastrophic error-rate reversal observed in the uncoded systems; however, its performance gap from the proposed method grows as the SNR increases. In addition, the FER of the ADL method with the \(N_{s}=3\) split factor lies between that of the OSS detector and that of the ADL method with \(N_{s}=1\), showing its advantage over the ADL with fixed dithering power. Again, the improvement achieved by the ADL method with \(N_{s}=3\) comes from its ability to accurately learn the likelihood probabilities by avoiding zero-valued likelihood functions even with a limited amount of training sequences. In summary, although the performance of the naive learning-based approach is severely degraded by the under-trained probabilities in the uncoded system, the likelihood probabilities in (16) can still be computed in the presence of under-trained likelihood functions for the LLR defined in (29) in the coded systems. Regarding the probability learning accuracy, however, the proposed ADL method performs better than the naive learning approach, and the performance gap increases with the SNR. Figure 8: Symbol error rate results with \(N_{u}=3\) users, \(N_{r}=64\) AP antennas, \(N_{\text{tr}}=45\) pilot signals, and 16-QAM constellation. The proposed adaptive dither-and-learning (ADL) method divides the training period into \(N_{s}\in\{1,3\}\) sub-blocks. Figure 9: Frame error rate results for \(N_{u}=4\) users, \(N_{r}=32\) AP antennas, \(N_{\text{tr}}=45\), 4-QAM constellation, and a polar code of rate \(1/2\) where \((\kappa,\eta)=(64,128)\). The proposed adaptive dither-and-learning (ADL) method learns the likelihood probability with split factor \(N_{s}\in\{1,3\}\). The one-bit successive-cancellation soft-output (OSS) detector assumes perfect CSI. ## VII Conclusion In this paper, we proposed a statistical learning-based ML detection method for uplink massive MIMO communication systems with one-bit ADCs. Since the performance of learning-based one-bit detection approaches can be severely degraded when the number of training samples is insufficient, the proposed method handled this challenge by injecting dithering noise to facilitate the acquisition of statistical patterns. Without requiring explicit channel knowledge, the dither-and-learning method performed one-bit ML detection by learning likelihood functions at each antenna.
The proposed method was more robust to the number of training symbols because the adaptive randomness induces moderate sign fluctuations in the training sequences, thereby successfully extracting the statistical pattern of the one-bit quantized signals. We further adapted the dithering power, in accordance with the observations, so that the AP operates in an appropriate effective SNR region. In addition, deep neural network-based SNR estimation for de-noising and an extension to channel-coded systems were also proposed for more practical scenarios. Simulation results validated the detection performance of the proposed method in terms of the required training amount, SER, and FER. Therefore, the proposed method can be a potential low-power and low-complexity multiuser communication solution for 6G applications such as IoT.
2303.03457
Spelling convention sensitivity in neural language models
We examine whether large neural language models, trained on very large collections of varied English text, learn the potentially long-distance dependency of British versus American spelling conventions, i.e., whether spelling is consistently one or the other within model-generated strings. In contrast to long-distance dependencies in non-surface underlying structure (e.g., syntax), spelling consistency is easier to measure both in LMs and the text corpora used to train them, which can provide additional insight into certain observed model behaviors. Using a set of probe words unique to either British or American English, we first establish that training corpora exhibit substantial (though not total) consistency. A large T5 language model does appear to internalize this consistency, though only with respect to observed lexical items (not nonce words with British/American spelling patterns). We further experiment with correcting for biases in the training data by fine-tuning T5 on synthetic data that has been debiased, and find that finetuned T5 remains only somewhat sensitive to spelling consistency. Further experiments show GPT2 to be similarly limited.
Elizabeth Nielsen, Christo Kirov, Brian Roark
2023-03-06T19:29:20Z
http://arxiv.org/abs/2303.03457v1
# Spelling convention sensitivity in neural language models ###### Abstract We examine whether large neural language models, trained on very large collections of varied English text, learn the potentially long-distance dependency of British versus American spelling conventions, i.e., whether spelling is consistently one or the other within model-generated strings. In contrast to long-distance dependencies in non-surface underlying structure (e.g., syntax), spelling consistency is easier to measure both in LMs and the text corpora used to train them, which can provide additional insight into certain observed model behaviors. Using a set of probe words unique to either British or American English, we first establish that training corpora exhibit substantial (though not total) consistency. A large T5 language model does appear to internalize this consistency, though only with respect to observed lexical items (not nonce words with British/American spelling patterns). We further experiment with correcting for biases in the training data by fine-tuning T5 on synthetic data that has been debiased, and find that finetuned T5 remains only somewhat sensitive to spelling consistency. Further experiments show GPT2 to be similarly limited. ## 1 Introduction The probabilities that neural language models (LMs) assign to strings can be used to assess how effectively they capture linguistic dependencies found in their training data. Much as in psycholinguistic experiments on human language speakers, we can present LMs with strings both with and without agreement in key dependencies and measure the assigned probabilities to determine whether the model has learned these linguistic generalizations or not (see e.g., Futrell et al. 2018). For example, sentences both with and without subject/verb number agreement (but otherwise identical) can be used to assess whether the model accounts for that particular dependency, even over long distances. Various long-distance dependencies have been investigated in this manner, from purely linguistic phenomena such as syntactic dependencies (e.g., Gulordava et al. 2018) to extra-linguistic phenomena such as socio-cultural biases (e.g., Rudinger et al. 2018). In this paper, we examine dependencies based on orthographic cues to language variety. Many LMs are trained on large corpora scraped from the web, and data from different language varieties are often combined. For example, LMs trained on web-scraped English (e.g., the WebText Corpus of Radford et al. 2019) encounter British English, North American English, and multiple World Englishes. Likewise, Spanish web corpora may include several distinct varieties of Latin American Spanish, as well as Iberian Spanish (e.g., Kilgarriff and Renau 2013). Here we use differences between British and American English spelling conventions to ask whether LMs trained on large and diverse collections of English learn to apply these conventions consistently within the same span of text. For example, if the British spelling of the word _labour_ appears in a sentence prefix, will the LM assign higher probabilities to continuations that maintain British spelling conventions (e.g., _organisation_) over those that have American-spelled forms (_organization_)? To the extent that such models are used within response generation systems or for next word prediction in virtual keyboards, maintaining such consistency would be strongly desirable so users receive results appropriate for their locale.
Of course, as with any such dependencies, models can only learn generalizations that are present in the data, so we also look at the degree to which corpora used to train the large LMs (LLMs) that we investigate (as well as a few others) demonstrate spelling convention consistency. Assessing whether syntactic or semantic generalizations are learned by models trained on noisy, errorful and inconsistent data is complicated by the difficulty in quantifying the actual degree of consistency of the dependency in the data itself. In contrast to structural linguistic generalizations or other implicit information, the explicitness of spelling conventions permits straightforward corpus analysis in addition to model probing, providing another avenue for explaining model performance. The results of our data analysis are presented in §4. We find that relevant web-scraped English text used to train LLMs unsurprisingly does not provide perfect consistency -- and further that it is heavily skewed towards American spelling conventions -- but that it provides as much or more consistency than some curated corpora such as the British National Corpus (BNC Consortium, 2007). We then present methods, in §5, to measure the degree to which two neural LLMs - T5 (Raffel et al., 2020) (both with and without additional finetuning) and GPT2 - exhibit spelling variation consistency. We find that T5 without finetuning demonstrates a general preference for consistency, but that this preference is weaker for British than American English and does not extend robustly to nonce words. Finetuning T5 on a synthetically modified portion of the British National Corpus reduces the preference for American English. We then modify our conditional probability calculations to allow demonstration of similar patterns of model behavior for GPT2, an LLM with a very different architecture and training setup (Radford et al., 2019). Lastly, in §6, we take a slightly deeper dive into the kinds of (and reasons for) spelling convention inconsistencies in some corpora analyzed in §4. Overall, we demonstrate that, while T5 and GPT2 display some sensitivity to spelling convention differences, this cannot be relied on to produce consistent generated output. If reliable spelling consistency is an application requirement, additional post-processing may need to be applied to LLM output. This paper makes several key contributions. First, we provide methods for straightforwardly assessing the ability of LLMs to capture certain well-attested long-distance dependencies in English, and demonstrate the strengths and shortcomings of two well-known models in doing so. This opens up the possibility of exploratory studies in languages where such conventions are less well documented. In contrast to the most heavily investigated types of long-distance dependencies (e.g., syntactic), the (previously unexplored) dependency of spelling convention consistency is directly observable in the surface string and hence is relatively easy to assess in both models and data. As a result, it can be seen as a useful task for assessing LM learning in general. We also document the degree to which web-scraped corpora exhibit spelling consistency, making clear that the models have plenty of room for improvement.
However, American English is shown to be far more heavily represented in the training corpora than British English, to the point that performance for British English is demonstrably far worse than for American English, something that language generation or word prediction systems must address for equitable performance. ## 2 Background ### Dependencies and LMs Much of the work investigating whether large language models capture long-distance linguistic generalizations has focused on non-surface dependencies, such as co-reference. In order to correctly identify that two expressions refer to the same entity, models often need to identify complex syntactic relationships (e.g., c-command), or build a model of entities over an entire discourse (e.g., Clark and Manning 2016). Despite this complexity, LLMs have shown some promise as general-purpose co-reference resolvers (Joshi et al., 2019). This suggests that they can learn to model complex long-distance dependencies. Other research has shown more directly that LLMs model syntactic dependencies. A common methodology is to compare an LM's surprisal directly to psycholinguistic data (Futrell et al., 2018). If the LM still performs like a human on examples that require modeling hierarchical relationships between tokens, this suggests that the LM has learned some part of the more complex syntactic structure of the language. Work such as Futrell et al. (2018) has shown that a recurrent neural network language model achieves surprisal rates that mimic human processing, including in these syntactically complex situations. This suggests that an RNN LM can be sensitive to complex syntactic relationships as well. Similar methods have been used to show LMs learning syntactic dependencies in Linzen et al. (2016), Frank et al. (2016), and Brennan et al. (2020). Another class of methods for assessing whether LMs learn complex syntactic dependencies involves probing the models themselves to evaluate whether syntax-like relationships between tokens can be discovered. Details of their methods vary widely, but Clark et al. (2019), Hewitt and Manning (2019), and Lin et al. (2019) all suggest that many LMs learn complex syntactic dependencies. In contrast, the topic of the current paper - spelling convention dependencies - is a relatively surface-level dependency. A model does not need to capture the syntactic or semantic relationship between two words in order to evaluate spelling consistency, rather simply their co-occurrence. Given prior results showing that LMs can and do learn complex semantic and syntactic relationships between words, one might expect that a relatively simple dependency like spelling convention should be easy for an LM to learn. ### Spelling variation As discussed by Berg and Aronoff (2017), the orthography of English has never been regulated by an official body, but has rather emerged dynamically over time. Dictionaries played a key role in settling spelling conventions, with Samuel Johnson's (1755) dictionary being the key source of contemporary British spelling conventions and Webster's (1828) dictionary the key source of contemporary American spelling. The latter included spelling reforms such as using the suffix _-or_ instead of _-our_ for certain words, e.g., _labor_ instead of _labour_. These reforms were adopted in American spelling but not in British spelling conventions. This history makes English an interesting case study for spelling variation in particular. 
Languages that have historically had centralized regulatory institutions, such as the French or Royal Spanish Academies, have much less purely orthographic variation. For example, despite many lexical differences, there are few spelling differences between Iberian and Latin American Spanish. On the other hand, there are many language situations that have considerably more spelling variation. For example, speakers of South Asian languages that are traditionally written with Brahmic or Arabic scripts often write using the Latin alphabet in contexts like SMS messages and social media (Roark et al., 2020). This kind of informally romanized text presents many spelling variations due to these languages' lack of orthography in the Latin script. The well-documented nature of English spelling variation and its close ties to standardized regional varieties make it a good initial case study for whether LLMs learn systematic variation in the data. If so, such models may be useful in more exploratory studies, such as the above-mentioned scenario where no official orthography exists. As far as we are aware, the issue of spelling convention consistency in language models has not been investigated. Nguyen and Grieve (2020) looked at whether word embeddings are _robust_ to spelling variation, not whether generative language models capture spelling consistency. That paper focused mainly on the kinds of variation that arise in informal social media text, but they also examined British versus American spelling. Unsurprisingly, they found that cosine similarities between British and American spelled variants are high relative to other patterns of informal spelling variability. ### Prompting LMs In the present work, we construct prompts to measure the probability assigned to various tokens by LLMs. In constructing these prompts, we take into account the findings of recent work on prompting LMs. Our work is different from the sort of prompting described by these papers, which generally includes features such as task-specific prefixes containing instructions (e.g., Raffel et al. 2020), verbalized class labels (e.g., Schick and Schütze 2021), or in-context learning (e.g., Brown et al. 2020), none of which are present in our approach. However, work such as Webson and Pavlick (2022) has shown large effects due to small variations in the wording of prompts, even if the reasons for these effects are not apparent. Therefore, we choose to present the model with several different prompts and average the probabilities over all prompts, in order to account for possible variation. ## 3 Data and models To assess the spelling convention consistency of data and models, we use a list of British and American English spelling differences that is part of the open-source American British English Translator.1 We used the 1706 word pairs in the data/american_spellings.json file at that site. This list includes American and British spelling variants for words with common differences such as _-or/-our_ (e.g., vapor/vapour), _-ize/-ise_ (realize/realise), consonant doubling (modeling/modelling), _-er/-re_ (liter/litre), along with some number of term-specific spelling differences (aluminum/aluminium). We use this list to create prompts for probing the language models and to establish the consistency of usage within corpora, i.e., whether strings found in this list consistently follow one convention or the other when they co-occur. For model probing, we examine T5 (Raffel et al., 2020), a general purpose encoder-decoder model.
We use the t5-large architecture variant on the T5X codebase,2 which has approximately 770M parameters. For English, T5 is (pre-)trained using a span corruption objective on the Colossal Clean Crawled Corpus (C4), an English language collection derived from Common Crawl (Raffel et al., 2020).3 Footnote 2: [https://github.com/google-research/5x/blob/main/docs/models.md#5-checkpoints](https://github.com/google-research/5x/blob/main/docs/models.md#5-checkpoints) Footnote 3: [http://commoncrawl.org/](http://commoncrawl.org/) We also examine GPT2, for which we use the open-source HuggingFace implementation (Radford et al., 2019). Unlike T5, GPT2 is a purely autoregressive language model rather than an encoder-decoder sequence-to-sequence model. It is trained to perform next-word prediction rather than fill in corrupted spans of text. GPT2 is built on OpenAI's WebText corpus (Radford et al., 2019), of which there is an open-source variant available.4 Footnote 4: [https://skylion007.github.ioOpenWebTextCorpus/](https://skylion007.github.ioOpenWebTextCorpus/) We examine C4 and OpenAI's WebText corpus for spelling convention consistency, along with several other corpora: English Wikipedia (downloaded 06-21-2020); the Billion Word Benchmark (Chelba et al., 2013), which is a collection of newswire text; and the British National Corpus (BNC Consortium, 2007),5 which is a balanced corpus of both written and spoken material.6 Footnote 5: [http://www.natorop.ox.ac.uk/](http://www.natorop.ox.ac.uk/) Footnote 6: Code for querying corpora and generating prompts, as well as other relevant data and code, can be found at [https://github.com/google-research/google-research/tree/master/spelling_convention_nlm](https://github.com/google-research/google-research/tree/master/spelling_convention_nlm). ## 4 Training corpora consistency To examine spelling consistency in training data, we made use of the list of spelling variants and the five corpora mentioned in Section 3: C4, the OpenWebText Corpus (OWT), English Wikipedia (EngWiki), the Billion Word Benchmark (BWB), and the British National Corpus (BNC). We convert all strings in each corpus to lowercase, and treat all characters outside of the a-z range as whitespace for tokenization. We look for exact matches of list items in the resulting whitespace-delimited tokens. Let \(V_{\text{US}}\) be the US spelling variants7 of the words in the list and \(V_{\text{UK}}\) the UK spelling variants. For each corpus \(C\), let \(s^{k}=w_{1}\dots w_{|s^{k}|}\) represent the \(k\)th string in the corpus, consisting of \(|s^{k}|\) words. We extract all pairs of words \((w_{i},w_{j})\) from \(s^{k}\) such that \(i<j\) and \(w_{i},w_{j}\in V_{\text{US}}\bigcup V_{\text{UK}}\). Each extracted pair \((w_{i},w_{j})\) is placed into one of three classes: the pair is (1) _US-matched_ if \(w_{i},w_{j}\in V_{\text{US}}\); (2) _UK-matched_ if \(w_{i},w_{j}\in V_{\text{UK}}\); and (3) _mismatched_ otherwise. We then aggregate the counts for pairs in these three bins across all strings in the corpus. Footnote 7: For convenience, we use US as shorthand for American and UK as shorthand for British. Table 1 presents the number of pairs extracted from each corpus and the percentage of those within each class. Several things jump out from these results. First, all of the corpora, other than the British National Corpus, have significantly more US-matched pairs than UK-matched pairs, with OWT and C4 being the most skewed towards US-matched pairs. 
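A minimal sketch of this pair-classification procedure might look as follows (variable names are illustrative, and the tokenization is simplified relative to the actual corpus processing):

```python
import re

def classify_pairs(text, us_words, uk_words):
    """Count US-matched, UK-matched, and mismatched spelling-variant pairs
    (w_i, w_j), i < j, within a single string."""
    # Lowercase and treat all characters outside a-z as whitespace.
    tokens = re.sub(r"[^a-z]+", " ", text.lower()).split()
    hits = [w for w in tokens if w in us_words or w in uk_words]
    counts = {"US": 0, "UK": 0, "mismatched": 0}
    for i in range(len(hits)):
        for j in range(i + 1, len(hits)):
            a_us, b_us = hits[i] in us_words, hits[j] in us_words
            if a_us and b_us:
                counts["US"] += 1
            elif not a_us and not b_us:
                counts["UK"] += 1
            else:
                counts["mismatched"] += 1
    return counts

print(classify_pairs("We realise the color of the harbour.",
                     us_words={"realize", "color", "harbor"},
                     uk_words={"realise", "colour", "harbour"}))
# {'US': 0, 'UK': 1, 'mismatched': 2}
```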
This preponderance of US-matched pairs likely indicates a heavy overall skew towards US spelling variants, leading to a high prior probability of US spelling variants in LLMs. Second, the percentage of extracted pairs that are mismatched is non-negligible; however, there is a lot of consistency. For example, in the C4 corpus, if a word from \(V_{\text{UK}}\) is the first word of a pair, the probability that the next word will also be from \(V_{\text{UK}}\) is nearly three times the probability that it is from \(V_{\text{US}}\).8 Finally, both English Wikipedia and the British National Corpus have somewhat elevated levels of mismatch compared to the other corpora, something we look at more closely in Section 6. Footnote 8: Mismatched pairs in all corpora are roughly equally split between having \(V_{\text{US}}\) or \(V_{\text{UK}}\) words first. Hence, for C4, 5.4% of pairs are \(V_{\text{UK}}\) followed by \(V_{\text{US}}\) words (half of the mismatched probability), while 14.7% are \(V_{\text{UK}}\) followed by \(V_{\text{UK}}\). Having established that the level of mismatch in the C4 corpus used to train T5 is at the lower end observed in the data we examined,9 we now move on to examine whether the trained models pick up on these dependencies. \begin{table} \begin{tabular}{l|r|r r r} & \multicolumn{2}{c}{total \# of} & \multicolumn{3}{c}{X-matched \%} \\ Corpus & word pairs & US & UK & mis \\ \hline C4 & 542,755,756 & 74.6 & 14.7 & 10.8 \\ OWT & 42,255,261 & 79.7 & 11.5 & 8.8 \\ EngWiki & 1,527,529 & 58.0 & 26.5 & 15.4 \\ BWB & 442,733 & 67.5 & 23.6 & 8.9 \\ BNC & 74,072 & 14.5 & 64.8 & 20.8 \\ \hline \end{tabular} \end{table} Table 1: Study of word pairs found in the same string from either UK or US spelling list over corpora of different sizes and characteristics, with percent of _US_-matched, _UK_-matched and _mismatched_ US/UK pairs. Footnote 9: We note again the benefit of these explicit surface-level dependencies – we can easily assess the prevalence/consistency of the training data, in contrast to structural dependencies. ## 5 Language model consistency From the dictionary presented in Section 3, we kept only the words that can be described by a small number of rules, e.g., the variation between _-ize_ and _-ise_, etc., leaving us with 1266 options. For efficiency, we sample \(\approx\)16k prompt-target pairs (16028) from all possible \(1266^{2}\) combinations.
This is challenging, since the tokens we are testing include different parts of speech and come from very different semantic domains, hence there are few contexts where all tokens would be acceptable. Fortunately, this problem has an analogue in linguistics: linguists interested in detailed phonetic description often elicit tokens in set contexts to eliminate extraneous sources of acoustic variation (Bowern, 2015). The approach these linguists often take is to use a template that _mentions_ the tokens in question rather than _using_ them. We follow this approach, and use templates similar to (1), which contain a list of word mentions. (1) _My preferred words are...,..., and tree._ We then substitute pairs of words from our dictionary into the spaces marked with ellipses, both with consistent and inconsistent spelling conventions. In other words, given the pair of dictionary entries _realize_/_realise_ and _center_/_centre_, we use the template above to generate the four test sentences: (2) a. US/US: _My preferred words are **realize**, **center**, and tree._ b. US/UK: _My preferred words are **realize**, **centre**, and tree._ c. UK/US: _My preferred words are **realise**, **center**, and tree._ d. UK/UK: _My preferred words are **realise**, **centre**, and tree._ We use T5 to score the probability of generating the second bolded word, as shown in Example (2), given the first. In the above template, the two words are adjacent in the string. We also include a non-adjacent condition, which augments the templates by adding ten variety-neutral tokens between the bold-face words. For the above sample, the non-adjacent variant would be: (3) _My preferred words are..., flower, interesting, jump, ponderous, sky, skipping, desk, small, ladder, lovely,..., and tree._ Since T5 is a seq2seq model trained on a span-corruption objective, we present a prompt that includes a priming word and a blank span token representing the second word: (4) _My preferred words are **flavour**, <blank-span-1>, and tree_ The decoder then scores an output string that replaces the blank, but represents the known inputs with span markers instead: (5) <input-span-1> **harbour** <input-span-2> Thus we are effectively computing the probability that the blank span will be filled with a particular word (with a US or UK spelling), given the visible input sentence (which contains a US or UK primer) -- P("harbour" | "My preferred words are flavour,..., and tree.") We report a few different measures to give a picture of how strongly each model prefers spelling consistency: mean conditional probabilities, prediction accuracy and mutual information. We then examine behavior with nonce words. ### Measure 1: conditional probability tables The first measure we use to show the preferences of each model is a 2x2 table of the conditional probability of the second probe word, given the first. For ease of interpretation, we normalize the conditional probabilities for each conditioning word as though the two alternative second words (US and UK) are the only possibilities, i.e., the two conditional probabilities are made to sum to 1. That is, \(P(UK|US)+P(US|US)=1\) and \(P(US|UK)+P(UK|UK)=1\) for each example. These conditional probabilities are then averaged over the whole test corpus (16028 word pairs replicated across 29 template sentences10 for a total of 464812 samples) for both the adjacent and non-adjacent conditions.
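A small sketch of this prompt construction and normalization step (illustrative only; the template string and the `score` wrapper are placeholders, and the span-token strings stand in for T5's actual sentinel tokens):

```python
from itertools import product

def build_variants(template, pair1, pair2):
    """Generate the four US/UK test sentences for one template.

    template: string with two {} slots; pair1/pair2: (us, uk) spelling pairs.
    Returns a dict keyed by ('US'|'UK', 'US'|'UK')."""
    variants = {}
    for (tag1, w1), (tag2, w2) in product(zip(("US", "UK"), pair1),
                                          zip(("US", "UK"), pair2)):
        variants[(tag1, tag2)] = template.format(w1, w2)
    return variants

sents = build_variants("My preferred words are {}, {}, and tree.",
                       ("realize", "realise"), ("center", "centre"))
print(sents[("UK", "US")])  # My preferred words are realise, center, and tree.

# Normalize the two model scores for each conditioning word so that
# P(US|w1) + P(UK|w1) = 1, as done for Table 2. `p_us` and `p_uk` would be
# the model's probabilities of the two candidate second words.
def normalize(p_us, p_uk):
    total = p_us + p_uk
    return p_us / total, p_uk / total
```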
Table 2 presents these mean conditional probabilities for base T5 and T5 fine-tuned (T5+FT) on a synthetic balanced corpus derived from the BNC (see §5.2), alongside conditional probabilities calculated from the pairs extracted for the analysis in Table 1 from their training corpus (C4), under the same adjacent and non-adjacent conditions.11 Footnote 10: For information on the variance across prompts, see Appendix A. Footnote 11: The conditional probabilities from C4 are simply the probability that Word 2 is from the UK or US class given the class of Word 1, with extracted pairs split by whether the words were adjacent or not in the string. Adjacent pairs account for roughly 1% of all pairs in the corpus. As can be seen from these results, T5 shows a preference for spelling consistency in both the adjacent and non-adjacent conditions -- probabilities for both the consistent US and consistent UK conditions are higher than the probabilities for the respective inconsistent conditions. The differences are notably larger in the adjacent conditions than the non-adjacent conditions, indicating that the preference for spelling consistency attenuates somewhat over longer strings. The model also shows a preference for US forms overall, assigning a higher probability to a US form after a UK form than to a UK form after a US form. This is likely due to US forms being over-represented in the training data, leading to high prior probability. Comparing the model and corpus columns in Table 2, the degree of consistency preference displayed by T5 in the adjacent condition is actually very similar to the consistency levels in the C4 training corpus (similarly replicating the bias for US forms). However, C4 is much more consistent in the non-adjacent condition than T5, indicating that the model is failing to capture some long-distance dependencies. ### Finetuning on synthetic data Finding naturally occurring English text using perfectly consistent spelling conventions of sufficient size to help improve a model's consistency may be difficult, given the results presented in Table 1. It would be useful, however, to determine if T5 could be finetuned with some resource to exhibit better spelling consistency. To that end, we created a synthetic version of the BNC, which was changed to exhibit perfect consistency of British and American spelling conventions for the words in our lexicon. This synthetic BNC corpus was produced as follows. Using our list of spelling variants, we identified strings in the corpus that contained an instance of either the American or British spelling. We then produced a synthetic consistent American spelling version of these strings by using the American spelling of all of the words, along with a synthetic consistent British spelling version of these strings by using the British spelling of all of the words. The resulting corpus is thus balanced between American and British spelling for these 1706 words, and every sentence is consistent in using one convention or the other.
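A minimal sketch of this corpus transformation, assuming a `us_to_uk` dictionary built from the spelling-variant list (whole-word replacement only; capitalization handling is simplified here):

```python
import re

def consistent_versions(sentence, us_to_uk):
    """Return (US-consistent, UK-consistent) versions of a sentence, or None
    if it contains no word from the spelling-variant list."""
    uk_to_us = {uk: us for us, uk in us_to_uk.items()}

    def swap(text, mapping):
        # Replace whole alphabetic words only, leaving everything else intact.
        return re.sub(r"[a-zA-Z]+",
                      lambda m: mapping.get(m.group(0).lower(), m.group(0)),
                      text)

    variants = set(us_to_uk) | set(uk_to_us)
    words = re.findall(r"[a-zA-Z]+", sentence)
    if not any(w.lower() in variants for w in words):
        return None
    return swap(sentence, uk_to_us), swap(sentence, us_to_uk)

print(consistent_versions("They realise the color is nice.",
                          {"realize": "realise", "color": "colour"}))
# ('They realize the color is nice.', 'They realise the colour is nice.')
```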
In total, the synthetic corpus contains 954238 sentences,12 equally split between US and UK spelling conventions. A small random subset of 2560 sentences was reserved for validation, and T5 was finetuned on the rest. Finetuning used the same span-filling masked LM task used for pretraining, with dropout set to 0.1, and the loss normalizing factor set to 233472 as suggested in the T5 documentation. Fine-tuning started at the default T5-large checkpoint, which represents 1000700 steps, and proceeded another 99300 steps at a batch size of 128. Footnote 12: In our testing, this was not enough data to reliably train a T5-large LLM from scratch. \begin{table} \begin{tabular}{l|c|c c||c c||c c} \hline \hline & \multicolumn{3}{c||}{T5} & \multicolumn{2}{c||}{T5+FT} & \multicolumn{2}{c}{C4} \\ Condition & \multicolumn{1}{c||}{Word 1} & \multicolumn{2}{c||}{Word 2} & \multicolumn{2}{c||}{Word 2} & \multicolumn{2}{c}{Word 2} \\ & & US & UK & US & UK & US & UK \\ \hline Adjacent & US & 0.86 & 0.14 & 0.66 & 0.34 & 0.91 & 0.09 \\ & UK & 0.39 & 0.61 & 0.44 & 0.56 & 0.38 & 0.62 \\ \hline Non-adjacent & US & 0.83 & 0.17 & 0.69 & 0.31 & 0.93 & 0.07 \\ & UK & 0.48 & 0.52 & 0.43 & 0.57 & 0.27 & 0.73 \\ \hline \hline \end{tabular} \end{table} Table 2: Conditional probability of Word 2, given a template with Word 1, given by T5 (no finetuning) and T5+FT (finetuned on synthetic balanced BNC data). For each instance, the probability has been normalized over each condition (corresponding to each row for the model). We also present the conditional probabilities from pairs found in the training corpus C4. As seen in Table 2, finetuning on this synthetic corpus does not appear to improve overall spelling consistency - quite the opposite. However, it does have at least two interesting effects. First, as might be expected, the overwhelming preference for US English shown by base T5 is reduced. Furthermore, the finetuned model is better able to retain long-distance information -- there is no dropoff in consistency between the adjacent and non-adjacent conditions as seen for T5 without finetuning. ### Measure 2: prediction accuracy While the conditional probabilities in Table 2 show the overall preferences of the models over the test set, we also want a measure that captures how often the LLMs assign consistent pairs a higher probability than inconsistent pairs. In Table 3 we show the percentage of the test set examples for which each model predicted consistency over inconsistency. The results show a similar pattern to the conditional probability measures in Table 2. Again, finetuning lowers overall consistency, but results in less drop-off in non-adjacent vs. adjacent conditions. ### Measure 3: mutual information We also calculated the average mutual information (MI) across all prompt/target pairs in order to measure the strength of association between spelling conventions in both words. For each pair, we calculate four joint probabilities -- P(US,US), P(US,UK), P(UK,US), P(UK,UK). We assume these four probabilities make up the entire universe with respect to a particular prompt/target pair, and normalize them so they sum to 1. This also allows us to easily calculate marginal probabilities simply by adding the appropriate joint probabilities - e.g., P(US prompt) = P(US,US) + P(US,UK).
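This computation (implementing the MI formula given just below) can be sketched in a few lines; the joint probabilities here are illustrative numbers, not measured values:

```python
import math

def mutual_information(joint):
    """MI between prompt and target spelling conventions, given the four
    joint probabilities keyed by (prompt, target) in {'US','UK'}^2."""
    total = sum(joint.values())
    joint = {k: v / total for k, v in joint.items()}  # normalize to sum to 1
    p_prompt = {c: joint[(c, "US")] + joint[(c, "UK")] for c in ("US", "UK")}
    p_target = {c: joint[("US", c)] + joint[("UK", c)] for c in ("US", "UK")}
    mi = 0.0
    for (x, y), p in joint.items():
        if p > 0:
            mi += p * math.log(p / (p_prompt[x] * p_target[y]))
    return mi

# Example: a mild preference for matched conventions yields small positive MI.
print(mutual_information({("US", "US"): 0.50, ("US", "UK"): 0.10,
                          ("UK", "US"): 0.15, ("UK", "UK"): 0.25}))
```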
To calculate MI, we use a formula based on the log-likelihood ratio calculation in Moore (2004), but equivalent to the standard formulation for mutual information, where \(x\) and \(y\) denote the spelling conventions of the two probe words: \[\sum_{x\in\{\text{UK},\text{US}\},y\in\{\text{UK},\text{US}\}}p(x,y)\log\frac{ p(x,y)}{p(x)p(y)}\] Since T5 is trained on masked token prediction, to measure the joint probability \(p(x,y)\) of each pair of probe words \(x,y\) we can simply mask both probing tokens and measure the probability of generating both of them. That is, we present T5 with (6-a) and measure the probability of (6-b): (6) a. _My preferred words are <blank-span-1>, <blank-span-2>, and tree._ b. <input-span-1> flavour <input-span-2> harbour <input-span-3> Table 4 presents these mutual information values. There doesn't seem to be a significant difference between adjacent and non-adjacent conditions for either T5 variant, though finetuning does seem to cause an overall drop in MI, in line with the overall drop in consistency seen in the measures above. ### Nonce forms We want to determine if T5 assigns the probabilities reported above on the basis of dependencies between specific lexical items, or if it is learning sub-word generalizations. In other words, does the model learn that specific words like _flavour_ and _realise_ are more likely to co-occur than _flavour_ and _realize_? Or does it learn that words containing _-our_ are more likely to co-occur with words containing _-ise_? Since the model is trained using SentencePiece tokenization (Kudo and Richardson, 2018), it is possible that it exploits sub-word features. \begin{table} \begin{tabular}{l c c} \hline Condition & T5 & T5+FT \\ \hline Adjacent & 0.0048 & 0.0017 \\ Non-adjacent & 0.0044 & 0.0015 \\ \hline \end{tabular} \end{table} Table 4: Average mutual information in the adjacent and non-adjacent conditions. \begin{table} \begin{tabular}{l|c c||c c} \hline & \multicolumn{2}{c||}{Word 1 = US} & \multicolumn{2}{c}{Word 1 = UK} \\ \hline Condition & T5 & T5+FT & T5 & T5+FT \\ \hline Adjacent & 92.2 & 71.1 & 65.1 & 63.4 \\ Non-adjacent & 88.7 & 77.7 & 54.3 & 63.2 \\ \hline \end{tabular} \end{table} Table 3: Percent of test set examples for which each model prefers consistent over inconsistent spelling.
However, the difference between these alternatives is attenuated compared to when Word 1 is a US form, indicating that (a) there is a heavy skew towards US spelling conditions in the training data; but (b) some sensitivity to the UK context, if not enough to counteract the high US form priors. This suggests that the results in Table 2 are to a large extent driven by lexical dependencies rather than any lower-level spelling patterns encoded by wordpieces. ### Autoregressive LLMs Many commonly-used LLMs (including T5) are trained to predict words in the input that have been masked out. Another common class of LLMs, however, are trained to perform next-word prediction instead. To examine how such autoregressive architectures handle spelling consistency, we experiment with OpenAI's GPT2 Radford et al. (2019), which has a readily available open-source implementation through HuggingFace.13 Footnote 13: [https://huggingface.co/gpt2](https://huggingface.co/gpt2) As GPT2 is purely autoregressive, we cannot compute the probability that a particular probe word will fill a masked sentence span as easily as we could with T5. We can only efficiently compute the probability of a suffix given a prefix. Given this caveat, we have at least two options for assigning conditional probability scores, neither of which should be treated as exactly comparable to the T5 scores above. First, we can count only the logits corresponding to the target word: P("harbour"| "My preferred words are flavour,"). This local score ignores any words in the template occurring after the target word. Second, we can compute from the start of the target to the end of the sentence: P("harbour, and tree"| "My preferred words are flavour,"), which accounts for the post-target suffix of the sentence. Tables 7 and 8 show results for both of these methods for calculating the conditional probability, compiled in the same way as the T5 results in Tables 2 and 3. Table 7 also includes the conditional probabilities from GPT2's training corpus, OWT. We see that GPT2 shows a similar preference for consistency as T5, but only very locally. There is a large drop-off in preference for consistency when moving from adjacent to non-adjacent conditions, or when including the completion of the sentence in the calculation. For UK English in particular, any preference for consistency completely disappears beyond the immediate vicinity of the priming word, and the model returns to chance performance on the task. ## 6 Further analysis of corpora We now return to a slightly more detailed examination of two of the corpora presented in Table 1, \begin{table} \begin{tabular}{l c c c} \hline \hline & & \multicolumn{2}{c}{Word 2} \\ & & US & UK \\ \hline \multirow{2}{*}{Word 1} & US & 0.68 & 0.32 \\ & UK & 0.56 & 0.44 \\ \hline \hline \end{tabular} \end{table} Table 6: Conditional probability table for nonce forms given by T5. The table shows the conditional probability of Word 2 (which is a nonce form), given Word 1. For each instance, the probability has been normalized over each condition (i.e., each row in the table). 
## 6 Further analysis of corpora We now return to a slightly more detailed examination of two of the corpora presented in Table 1, English Wikipedia and the British National Corpus, both of which had relatively high levels of mismatch compared to the other corpora. Wikipedia is an interesting case, since the documents are collectively edited by potentially a large number of contributors, which may lead to higher expected mismatch than in other corpora. For example, one version of the article on _air lock_ used both the US spelling of the word _vapor_ and the UK spelling (_vapour_). This is explained via three versions of the introductory sentence to the page, shown in Table 11 in Appendix B, where the two spellings are added to the sentence at different times, years apart. The amount of mismatch in the British National Corpus is perhaps more surprising, given the provenance of the materials and the intent of the collection. However, the diversity of sources, which include things such as journal articles and edited volumes, likely leads to similar issues to those found in Wikipedia, along with simple human error and/or inconsistency. Table 12 in Appendix B presents a few examples of sentences with words from both spelling conventions, with American _-ize_ spellings mixed with British _-ise_ or _-our_ versions. ## 7 Conclusion and Future Work We have presented results showing that T5 does tend towards consistency in spelling, but not to the degree that could be relied upon should such consistency be desired in generated text. We show that this general preference for consistency reflects the data that the model is trained on, which is also mostly consistent, but with a significant proportion of exceptions. The model's behavior is also shown to be affected by the relative frequency of language varieties in the training data. We took advantage of the explicit and surface-accessible nature of these dependencies to attribute some model performance to the training data, while also demonstrating that modeling improvements should be possible, since the training data itself is substantially more consistent than the models. These results suggest several possible avenues for future work. First, methods for addressing bias in training data should yield improvements for British spelling consistency in these models. We also intend to extend these results to languages other than English and investigate how spelling variation in other language situations is learned by LLMs. Some of the methods we used here rely on the fact that English spelling variation is quite thoroughly catalogued. Extending this work to less-documented cases of language variation will require us to either (1) collect data about spelling variation from language informants or data, or (2) develop methods that require less prior knowledge. In the interest of finding methods that are extensible to the greatest number of cases, we intend to pursue path (2), working on methods to mine information about language variation from large corpora and LLMs that have been trained on them.
## Acknowledgements Thanks to Alexander Gutkin, Shankar Kumar, Arya McCarthy and Richard Sproat for useful discussion and comments, and to the anonymous reviewers for helpful suggestions. \begin{table} \begin{tabular}{l|c c c||c c||c c} \hline \hline \multirow{3}{*}{Condition} & \multirow{3}{*}{Word 1} & \multicolumn{2}{c||}{GPT2 (tgt only)} & \multicolumn{2}{c||}{GPT2 (to EOS)} & \multicolumn{2}{c}{OWT} \\ \cline{3-8} & & \multicolumn{2}{c||}{Word 2} & \multicolumn{2}{c||}{Word 2} & \multicolumn{2}{c}{Word 2} \\ & & US & UK & US & UK & US & UK \\ \hline Adjacent & US & 0.87 & 0.13 & 0.69 & 0.31 & 0.95 & 0.05 \\ & UK & 0.36 & 0.64 & 0.51 & 0.49 & 0.34 & 0.66 \\ \hline Non-adjacent & US & 0.83 & 0.17 & 0.66 & 0.33 & 0.95 & 0.05 \\ & UK & 0.49 & 0.51 & 0.54 & 0.46 & 0.28 & 0.72 \\ \hline \hline \end{tabular} \end{table} Table 7: Conditional probability of Word 2, given a template with Word 1, given by GPT2 scored until the end of the target word only (tgt only) and scored until the end of the sentence (to EOS). We also present the conditional probabilities from pairs found in the training corpus, OWT. \begin{table} \begin{tabular}{l|c c||c c} \hline \hline & \multicolumn{2}{c||}{Word 1 = US} & \multicolumn{2}{c}{Word 1 = UK} \\ \hline Condition & GPT2 & GPT2 & GPT2 & GPT2 \\ & target & EOS & target & EOS \\ \hline Adjacent & 94.2 & 70.1 & 70.8 & 49.4 \\ Non-adjacent & 92.5 & 67.5 & 54.6 & 45.1 \\ \hline \hline \end{tabular} \end{table} Table 8: Percent of test set examples for which each GPT2 scoring variant prefers consistent over inconsistent spelling. ### Limitations Our work is focused on just a single case study of spelling variation. As detailed in Section 2, English is a good candidate for a case study for several reasons, but it would be beneficial to extend this work to other language situations. Another limitation was our choice to focus on already existing pre-trained models, rather than directly controlling the training data that is input to each model. This means some of the conclusions about the connection between training data and outcome are tentative, pending experimental confirmation. ## Ethics Statement This work does not propose a new model or dataset, but rather probes the behavior of existing models. Thus novel ethical considerations about model behavior and dataset contents are not directly raised by this work. While not explicitly focused on ethical considerations, this paper's methods hopefully contribute to better understanding model behavior, and could be used to understand the ways in which large language models treat underrepresented and marginalized language varieties.
2308.01132
Integrating transfer matrix method into SCAPS-1D for addressing optical losses and per-layer optical properties in perovskite/Silicon tandem solar cells
SCAPS-1D software ignores optical losses and recombination junction (RJ) layer in studying tandem solar cells (TSCs). This paper presents an optoelectronic study of a perovskite/Silicon TSC, comparing the effects of using two different methods of calculating filtered spectra on the photovoltaic performance parameters of tandem device. It is shown that integrating transfer matrix (TM) method into SCAPS-1D addresses per-layer optical losses and provides a platform for optimizing the RJ layer in TSCs. Using Beer-Lambert (BL) method for calculating the filtered spectra transmitted from the perovskite top sub-cell is revealed to overestimate the cell efficiency by ~4%, due to its inability to fully address optical losses. Also, the BL method fails to tackle any issues regarding optical improvement through ITO ad-layer on the RJ. Using TM formalism, the efficiency of the proposed perovskite/Silicon TSC is shown to be increased from 19.81% to 23.10%, by introducing the ITO ad-layer on the RJ. It is the first time that the effect of filtered spectrum calculation method is clearly investigated in simulating TSCs with SCAPS-1D. The results pave the way to introduce the optical loss effects in SCAPS-1D and demonstrate that the BL method that has been used before needs to be revised.
Peymaneh Rafieipour, Aminreza Mohandes, Mohammad Moaddeli, Mansour Kanani
2023-08-02T13:18:58Z
http://arxiv.org/abs/2308.01132v1
**Integrating transfer matrix method into SCAPS-1D for addressing optical losses and per-layer optical properties in perovskite/Silicon tandem solar cells** ## Abstract SCAPS-1D software ignores optical losses and recombination junction (RJ) layer in studying tandem solar cells (TSCs). This paper presents an optoelectronic study of a perovskite/Silicon TSC, comparing the effects of using two different methods of calculating filtered spectra on the photovoltaic performance parameters of tandem device. It is shown that integrating transfer matrix (TM) method into SCAPS-1D addresses per-layer optical losses and provides a platform for optimizing the RJ layer in TSCs. Using Beer-Lambert (BL) method for calculating the filtered spectra transmitted from the perovskite top sub-cell is revealed to overestimate the cell efficiency by \(\sim\)4%, due to its inability to fully address optical losses. Also, the BL method fails to tackle any issues regarding optical improvement through ITO ad-layer on the RJ. Using TM formalism, the efficiency of the proposed perovskite/Silicon TSC is shown to be increased from 19.81% to 23.10%, by introducing the ITO ad-layer on the RJ. It is the first time that the effect of filtered spectrum calculation method is clearly investigated in simulating TSCs with SCAPS-1D. The results pave the way to introduce the optical loss effects in SCAPS-1D and demonstrate that the BL method that has been used before needs to be revised. **Keywords:** SCAPS-1D; Optical Loss; Transfer Matrix Method; Beer-Lambert method; Monolithic Perovskite/Silicon Tandem Solar Cell; Recombination Junction ## 1 Introduction Since their first fabrication in 1955 [1], silicon solar cells (SSCs) have demonstrated power conversion efficiencies (PCEs) over 25% [2]. In spite of their high stability and advanced processing technology, their efficiencies still face challenges with respect to the Shockley-Queisser (SQ) theoretical limit [3]. The most straightforward way to overcome the SQ limitation of a single solar cell (SC) device is to design a series of devices called tandem solar cells (TSCs), by connecting two or more SCs that are electrically or mechanically linked together [4]. 2-terminal (2-T) monolithic TSCs are the most attractive and cost-effective architectures, in which the layers are deposited directly on top of each other and the device requires only one external circuit and one substrate. To attain high efficiencies in 2-T monolithic TSCs, the currents of the two sub-cells must be matched and the optical losses must be minimized [4]. The electrical coupling of the top and bottom sub-cells linked in series in 2-T monolithic TSCs is provided by the recombination junction (RJ), which can comprise transparent conducting oxides (TCOs) such as ITO as the recombination layer, or two adjacent heavily n-doped (n\({}^{++}\)) and p-doped (p\({}^{++}\)) regions [5]. High optical transmittance and electrical conductivity, as well as low lateral electrical conductivity, make TCOs ideal candidates for the recombination layer in 2-T TSCs [5]. However, they introduce reflection losses due to their refractive index mismatch with the adjacent layers. Therefore, light management is a coupled requirement for forming an appropriate 2-T monolithic TSC [6]. On the other hand, selecting suitable top and bottom sub-cells is another major factor that should be considered for breaking the SQ efficiency limit.
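Returning to the reflection losses at the RJ noted above, a minimal transfer-matrix sketch for a single thin recombination layer (e.g., an ITO film) between two semi-infinite media at normal incidence might look as follows; the refractive indices and thickness are illustrative placeholders, not measured values, and a full TM treatment would cascade such matrices over the whole multilayer stack:

```python
import numpy as np

def thin_film_RT(n0, n1, n2, d_nm, wavelength_nm):
    """Reflectance/transmittance of a single film (index n1, thickness d)
    between semi-infinite media n0 and n2 at normal incidence, via the
    characteristic (transfer) matrix of the film."""
    delta = 2 * np.pi * n1 * d_nm / wavelength_nm      # phase thickness
    M = np.array([[np.cos(delta), 1j * np.sin(delta) / n1],
                  [1j * n1 * np.sin(delta), np.cos(delta)]], dtype=complex)
    B, C = M @ np.array([1, n2], dtype=complex)
    r = (n0 * B - C) / (n0 * B + C)                    # amplitude reflection
    R = abs(r) ** 2
    T = 4 * n0 * np.real(n2) / abs(n0 * B + C) ** 2    # power transmittance
    return R, T

# Illustrative indices at 800 nm: perovskite-side medium, ITO film, silicon.
R, T = thin_film_RT(n0=2.3, n1=1.9, n2=3.7, d_nm=80, wavelength_nm=800)
print(f"R = {R:.3f}, T = {T:.3f}")
```

Unlike the BL picture, this formalism captures interference and Fresnel reflection at each interface, which is why it can quantify the optical role of an ITO ad-layer on the RJ.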
Organic-inorganic lead halide perovskites (PVKs) are excellent absorber materials for SCs due to their extraordinary properties, such as a high optical absorption coefficient, long carrier diffusion length, long carrier lifetime and tunable band gap [7]. The PCE of perovskite SCs (PSCs) has increased from 3.8% in 2009 to 25.7% in 2022, approaching the record of the well-established silicon technology [8]. The high band gap of PVKs makes them the best candidate for the top sub-cell to be linked with a silicon bottom sub-cell in 2-T monolithic TSCs, as demonstrated for the first time in 2015 [9]. To achieve the highest performance for 2-T monolithic perovskite/Silicon TSCs, modifications to the configuration as well as the materials have been reported [8]. Notably, the highest efficiency of a 2-T monolithic perovskite/Silicon TSC was accomplished by KAUST (King Abdullah University of Science and Technology) and was reported as 33.7% [https://www.nrel.gov/pv/cell-efficiency.html]. With the continuing and growing research interest in TSCs, more accurate and realistic performance prediction modeling is in demand. The Solar Cell Capacitance Simulator (SCAPS) is a well-known, powerful software package for simulating different SC devices and plotting the results in the form of energy band diagrams, current density-voltage (J-V) curves, external quantum efficiency (EQE) curves and performance parameters [10]. To obtain the J-V curve of the SC, SCAPS-1D solves the Poisson and continuity equations with exact boundary conditions for holes and electrons in one dimension (see Tables S1-S5). However, SCAPS-1D faces major challenges in simulating 2-T monolithic TSCs due to the existence of the RJ layer. A common approach for solving this issue is to study the top sub-cell under the illumination of the AM1.5 Global (AM1.5G) sun spectrum and the bottom sub-cell under the illumination of the filtered spectrum transmitted from the top sub-cell. In many previous theoretical studies of 2-T monolithic TSCs using SCAPS-1D, the Beer-Lambert (BL) method was used for calculating the filtered spectra [11-23]. In 2021, P. Yang et al. evaluated the potential of using CsPb(I\({}_{1-x}\)Br\({}_{x}\))\({}_{3}\) perovskites as the wide-bandgap top sub-cell along with a low-bandgap crystalline silicon bottom sub-cell in two-terminal (2-T) and four-terminal (4-T) tandem solar cells [11]. They obtained the optimum perovskite bandgap required to maximize efficiency in each configuration. In the 2-T and 4-T tandem configurations, it was demonstrated that maximum efficiencies of 29.23% and 28.5% can be achieved with CsPbI\({}_{3}\) and CsPbBr\({}_{3}\) top sub-cells, respectively. Also, the optimized perovskite thicknesses were obtained as 275 nm and 400 nm in the 2-T and 4-T tandem configurations, respectively [11]. In another theoretical study, J. Madan et al. designed a 23.36% perovskite-PbS CQD tandem device with photovoltaic performance parameters of 1.79 V (V\({}_{oc}\)), 16.67 mA.cm\({}^{-2}\) (J\({}_{sc}\)) and 78.3% (FF) [12]. They obtained conversion efficiencies of 14.60% and 9.07% for the top and filtered bottom sub-cells with optimized thicknesses of 143 nm and 470 nm, respectively. They changed the MAPbI\({}_{3}\) perovskite absorption coefficient, perovskite thickness and perovskite/transport layer defect densities, evaluating their impacts on the perovskite top sub-cell performance parameters and the transmitted filtered spectra [12].
In 2021, N. Shrivastav et al. optimized a 29.15% efficient perovskite-silicon TSC to unlock a 33% conversion efficiency [13]. They varied the thicknesses of the top and bottom sub-cells simultaneously to obtain the optimized thickness values required for current matching. The perovskite top and silicon bottom sub-cells were simulated under the illumination of the AM1.5G and filtered spectra, respectively. They achieved conversion efficiencies of 20.58% and 12.15% for the top and bottom sub-cells with optimized thicknesses of 336 nm and 150 micron, respectively. The photovoltaic parameters of the optimized tandem device were reported as 2.02 V (V\({}_{oc}\)), 20.11 mA.cm\({}^{-2}\) (J\({}_{sc}\)), 81.36% (FF) and 33.05% efficiency [13]. Very recently, N. Shrivastav et al. studied a perovskite/CIGS monolithic TSC and optimized the thicknesses of the top and bottom sub-cells to achieve 1.92 V (V\({}_{oc}\)), 20.04 mA.cm\({}^{-2}\) (J\({}_{sc}\)), 77% (FF) and 29.7% efficiency for the tandem device [14]. The matched current density was reported for absorber thicknesses of 347 nm and 2 micron for the top and bottom sub-cells, respectively [14]. In 2022, Jafarzadeh et al. designed a perovskite-homojunction SnS tandem solar cell and achieved an optimized efficiency of 28.92% [15]. They started with the optimization of the SnS homojunction SC by varying layer thicknesses, doping concentrations, defect densities and interface defects. Then, an optimization of the perovskite SC by varying the absorber bandgap and thickness was performed. Thereafter, the tandem device was simulated with the optimized bandgap of 1.67 eV for the perovskite layer, and photovoltaic parameters of 1.99 V (V\({}_{oc}\)), 16.99 mA.cm\({}^{-2}\) (J\({}_{sc}\)), 85.15% (FF) and 28.92% efficiency were obtained following the current-matching technique [15]. In the aforementioned articles and other related studies published in the literature [11-23], increasing the efficiency of the TSC was achieved by choosing suitable active materials/transport layers and optimizing material characteristics. Few studies have paid attention to the important point that the BL method used for calculating the filtered spectra does not include all optical loss mechanisms. Different optical losses affect the transmitted light spectrum, including parasitic light absorption in the charge transport layers, light reflections from the interfaces between adjacent layers and light scattering from the grain boundaries inside the layers. Since the performance of the bottom sub-cell is highly dependent on the filtered spectrum, the method used to calculate it deserves more attention. Although the BL method is very simple and straightforward, it is only valid for homogeneous media, under the approximations addressed in the following. First, light scattering inside the layers is ignored, since only the extinction coefficient enters the calculation and the real part of the complex refractive index plays no role. Second, light reflection from the interfaces between different layers is ignored; in effect, the refractive index mismatch between adjacent layers, and hence the Fresnel reflection coefficient, is taken to be zero. Therefore, the reported values for the photovoltaic parameters and optimized thicknesses of tandem SCs are overestimated and not reliable for practical applications. This calls for a study that approaches the simulation of tandem SCs in SCAPS-1D from an optical point of view.
To the best of our knowledge, only one paper has used the TM method for calculating the filtered spectra and simulating a perovskite-silicon tandem solar cell with SCAPS-1D. In 2023, R. Pandey et al. designed a perovskite-Silicon tandem solar cell using a lead-free MASnI\({}_{2}\)Br\({}_{1}\) perovskite and achieved a maximum conversion efficiency of 30.7% with a V\({}_{oc}\) of 2.14 V [24]. They reported a performance investigation of MASnI\({}_{3\text{-x}}\)Br\({}_{\text{x}}\)-based perovskite SCs by varying the halide composition, perovskite thickness and perovskite bulk defect density [24]. Although the results were interesting, the effects of using different methods for calculating the filtered spectrum on the photovoltaic performance parameters of the tandem SC were not discussed. Therefore, the importance of using the right method in the simulation of 2-terminal monolithic TSCs with SCAPS-1D is still not clear. To fulfill this requirement, in this paper, we aim to evaluate the performance parameters of an emerging perovskite-silicon tandem SC using the TM and BL methods and compare their corresponding results with each other. The standalone and 2-T monolithic tandem configurations of the perovskite/Silicon top/bottom sub-cells will be analyzed in SCAPS-1D in terms of absorber layer thickness variation, energy band diagrams, J-V curves, EQE curves, filtered spectra, current matching, and tandem performance parameters. In addition, optical loss effects related to two commonly used RJs, (I) the Spiro-OMeTAD (HTL)/n\({}^{++}\)-Silicon hybrid RJ and (II) the Spiro-OMeTAD (HTL)/ITO/n\({}^{++}\)-Silicon RJ, will be studied with SCAPS-1D. Using the TM method, it will be shown that the device performance metrics are improved by inserting an ITO layer in the RJ, as expected from the literature [25]. In addition, we will analyze per-layer parasitic light absorption and the total reflection from the perovskite top sub-cell using the TM method. We will show that the major contribution to the total optical loss comes from interface reflections rather than from per-layer parasitic absorption or intralayer light scattering. That is why the BL method fails to fully address optical losses and results in an overestimation of the cell efficiency. The remaining parts of the paper are organized as follows: the next section covers the device structures, the simulation methodology and the methodology dependence of the filtered spectra for both TSC configurations with different RJs. The formulations of the TM and BL methods are described in subsection 2-2. In the third section, simulation results using both the TM and BL methods are presented and the effects of including optical losses in SCAPS-1D are discussed.

## 2- Device structure and simulation methodology

### 2-1 SCAPS-1D device simulation

The present work is carried out using the SCAPS-1D software, developed in the Department of Electronics and Information Systems (ELIS), University of Gent, Belgium [26]. The proposed 2-T monolithic tandem architecture is composed of perovskite and silicon SCs stacked on each other. The simulation methodology of TSCs in SCAPS-1D involves two steps. First, standalone simulations of the perovskite top sub-cell and the silicon bottom sub-cell are carried out under the illumination of the AM1.5G spectrum to optimize the PV parameters of each device.
Next, the perovskite/Silicon TSC is simulated by the current-matching strategy, using the AM1.5G and filtered spectra illuminated on the perovskite top sub-cell and the silicon bottom sub-cell, respectively. Figures 1(a) and (b) present schematic illustrations of the standalone simulations of the perovskite top sub-cell and the silicon bottom sub-cell, respectively. The top sub-cell is composed of fluorine-doped tin oxide (FTO) as the front contact, SnO\({}_{2}\) as the electron transport layer (ETL), perovskite as the absorbing layer, Spiro-OMeTAD as the hole transport layer (HTL) and Au as the back contact. The bottom sub-cell consists of n\({}^{++}\)-Si as the ETL, n-Si as the absorbing layer and p\({}^{++}\)-Si as the HTL. The layer materials and their corresponding thicknesses are described in Figures 1(a) and (b). When studying the perovskite/Silicon TSCs, illustrated schematically in Figures 1(c) and (d), the filtered spectrum transmitted from the perovskite top sub-cell illuminates two different configurations of the bottom sub-cell that differ in their RJ. In addition, the thickness of the HTL (n\({}^{++}\)-Si) in each configuration has been optimized independently. The perovskite active layer is Cs\({}_{0.05}\)(FA\({}_{0.85}\)MA\({}_{0.15}\))\({}_{0.95}\)Pb(I\({}_{0.85}\)Br\({}_{0.15}\))\({}_{3}\), chosen for its large band gap (1.57-1.60 eV) [27] and denoted CsFAMA in this paper. In the simulation, CsFAMA is treated as a single-graded perovskite with a linearly graded band gap varying between 1.57 eV and 1.60 eV. The parameters required for simulating the PSC are calibrated to reproduce the experimental J-V curve obtained by Bu et al. [28]. According to [28], the thicknesses of the SnO\({}_{2}\), CsFAMA and Spiro-OMeTAD layers are taken as 50 nm, 700 nm and 300 nm, respectively. Figure 2 shows that the calibrated results are in accordance with the experimental J-V curve of the PSC [28], which confirms the validity of our calibration of the top sub-cell design. We use the SSC studied in [16] as the bottom sub-cell in the designed perovskite/silicon tandem configuration.

Figure 1: Schematic illustration of the SCAPS-1D simulation methodology: (a) standalone PSC under AM1.5G light irradiation, (b) standalone SSC under AM1.5G light irradiation, and (c) 2-T monolithic Perovskite/Si TSC in which the AM1.5G spectrum illuminates the top sub-cell and the filtered spectrum illuminates the bottom sub-cell, designed in two different configurations.

The next step is to optimize the top and bottom sub-cells in standalone conditions. To minimize parasitic absorption losses (see Figure S1), the thicknesses of SnO\({}_{2}\) and Spiro-OMeTAD are chosen as 50 nm and 100 nm, respectively [29,30]. The thickness of CsFAMA is then varied and the J-V and EQE curves of the PSC are simulated. The results are shown in Figure 3. An optimization study of the PV parameters of the PSC yields a PCE of 15.68%, V\({}_{oc}\) of 1.10 V, J\({}_{sc}\) of 21.97 mA/cm\({}^{2}\), FF of 64.97% and a series resistance (R\({}_{s}\)) of 9.97 \(\Omega\).cm\({}^{2}\). A similar procedure is followed for the silicon bottom sub-cell and the results are depicted in Figure S8. Also, the dependence of the performance parameters V\({}_{oc}\), J\({}_{sc}\), FF, and PCE of the top and bottom sub-cells on the thicknesses of their corresponding absorber layers is illustrated in Figures S6 and S9.
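Operationally, the filtered-spectrum step amounts to multiplying the AM1.5G spectrum by the wavelength-dependent transmittance of the top sub-cell and supplying the product to the bottom-cell simulation as a custom illumination file. A minimal sketch of this bookkeeping is given below; the file names and column conventions are hypothetical, and the header that SCAPS-1D expects for spectrum files should be copied from one of its bundled examples.

```python
import numpy as np

# Hypothetical input files: the AM1.5G spectrum and the top-cell
# transmittance T(lambda), tabulated on the same wavelength grid.
wl, am15g = np.loadtxt("am15g.txt", unpack=True)      # nm, W m^-2 nm^-1
_, T = np.loadtxt("topcell_T.txt", unpack=True)       # dimensionless, 0..1

# Spectrum reaching the bottom sub-cell, regardless of whether T came
# from the BL or the TM calculation formulated in the next section.
filtered = T * am15g
np.savetxt("filtered_spectrum.txt", np.column_stack([wl, filtered]))
```

The only ingredient that differs between the two approaches compared in this paper is how T(\(\lambda\)) is computed, which is formulated next.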
**2-2 Formulations of TM and BL methods**

To calculate the filtered spectrum, the perovskite top sub-cell is treated as a multilayer structure composed of m individual layers, and the transmitted light is obtained by the TM and BL methods. In the BL method, the filtered spectrum is computed by the following equation [11-23]: \[I(\lambda)=I_{0}(\lambda)\exp\left(\sum_{j=1}^{m}-\alpha_{j}(\lambda)\,d_{j}\right) \tag{1}\] where \(I_{0}(\lambda)\) denotes the AM1.5G spectrum, \(\lambda\) is the wavelength, \(\alpha_{j}(\lambda)=4\pi\kappa_{j}/\lambda\) is the absorption coefficient of the jth layer in the perovskite top sub-cell, \(\kappa_{j}\) is its extinction coefficient and \(d_{j}\) is its thickness. In contrast, in the TM method, a total transfer matrix (S) is assigned to the multilayer structure, connecting the amplitudes of the electric fields on its two sides [31]: \[\left[\begin{array}{c}E_{0}^{+}\\ E_{0}^{-}\end{array}\right]=S\left[\begin{array}{c}E_{m+1}^{+}\\ E_{m+1}^{-}\end{array}\right] \tag{2}\] where \(E^{+}\) and \(E^{-}\) refer to the amplitudes of the electric fields propagating in the direction of light incidence and against it, respectively. The indices 0 and m+1 denote the incident and outgoing media, respectively. For a stack of linear, homogeneous and isotropic media, the total TM is a 2\(\times\)2 matrix calculated by multiplying the transfer matrices of the individual layers. The individual transfer matrices of the layers are obtained by multiplying two other matrices, the refraction matrix I and the phase matrix L, which describe the optical characteristics of the interfaces and layers: \[S=\left[\begin{array}{cc}S_{11}&S_{12}\\ S_{21}&S_{22}\end{array}\right]=\left(\prod_{j=1}^{m}\boldsymbol{I}_{(j-1)j} \boldsymbol{L}_{j}\right)\cdot\boldsymbol{I}_{m(m+1)} \tag{3}\] When light travels through a multilayer structure, it encounters successive refractions at the interfaces and propagation inside the layers. The refraction matrix accounts for the light reflection and transmission at the interface between two adjacent layers. Considering the light refraction at the interface between the jth layer and the (j+1)th, the refraction matrix is defined as [31]: \[\boldsymbol{I}_{j(j+1)}=\frac{1}{t_{j(j+1)}}\left[\begin{array}{cc}1&r_{j(j+ 1)}\\ r_{j(j+1)}&1\end{array}\right] \tag{4}\] where \(r_{j(j+1)}\) is the complex Fresnel reflection coefficient and \(t_{j(j+1)}\) is the complex Fresnel transmission coefficient at the interface between the jth and the (j+1)th layer. At normal incidence, the Fresnel reflection and transmission coefficients are [31]: \[r_{j(j+1)}=\frac{N_{j}-N_{(j+1)}}{N_{j}+N_{(j+1)}} \tag{5}\] \[t_{j(j+1)}=\frac{2N_{j}}{N_{j}+N_{(j+1)}} \tag{6}\] The phase matrix describes the propagation of the electric field inside the medium and is defined as [31]: \[L_{j}=\begin{bmatrix}e^{-i\delta_{j}d_{j}}&0\\ 0&e^{i\delta_{j}d_{j}}\end{bmatrix} \tag{7}\] where \(d_{j}\) is the thickness of the jth layer and \(\delta_{j}d_{j}\), with \(\delta_{j}=2\pi N_{j}/\lambda\), is the phase change of the electromagnetic wave as it travels across the jth layer. \(N_{j}\) is the complex refractive index of the jth layer, given by \(N_{j}=n_{j}+i\kappa_{j}\), where \(n_{j}\) and \(\kappa_{j}\) are the index of refraction and the extinction coefficient of the jth layer.
The total complex reflection and transmission coefficients of the multilayer structure can then be calculated from the total S matrix by the following equations [31]: \[r=\frac{E_{0}^{-}}{E_{0}^{+}}=\frac{S_{21}}{S_{11}} \tag{8}\] \[t=\frac{E_{m+1}^{+}}{E_{0}^{+}}=\frac{1}{S_{11}} \tag{9}\] The transmitted and reflected intensities of the multilayer structure are then expressed by two physical quantities, the total transmittance (T) and the total reflectance (R), defined as follows [31]: \[T=\frac{I_{m}}{I_{0}}=\frac{n_{m+1}}{n_{0}}\left|t\right|^{2} \tag{10}\] \[R=\frac{I_{ref}}{I_{0}}=\left|r\right|^{2} \tag{11}\] where \(n_{0}\) and \(n_{m+1}\) are the refractive indices of the incident and outgoing media, respectively. By energy conservation, the total absorbance is defined as: \[A=1-T-R \tag{12}\] As these equations show, the complex refractive indices of the layers are required for obtaining the optical properties of the multilayer structure. In this paper, the complex refractive indices of FTO, SnO\({}_{2}\), CsFAMA, Spiro-OMeTAD, crystalline silicon and ITO are taken from the literature [32-37]. Also, the AM1.5G sun spectrum (1000 W/m\({}^{2}\)), defined in the SCAPS-1D library, is used as the light illuminating the perovskite top sub-cell.

### 2-3 Top-cell filtered spectrum dependence on the BL and TM methods

Figure 4 shows the filtered spectra for the two configurations of the silicon bottom sub-cell, without and with the ITO recombination layer, respectively. The perovskite thickness is 100 nm. The transmitted spectra in each case are calculated using the BL and TM methods, via equations 1 and 10, respectively. Using the TM method, it is observed from Figure 4(a) that less light is transmitted from the perovskite top sub-cell in the visible and near-infrared (NIR) wavelengths. This is attributed to the interfacial reflection and scattering losses incorporated in the TM formalism but absent from the BL method. It follows that the PV parameters of the bottom sub-cell, as well as of the perovskite/Silicon TSC, depend on the method of calculating the filtered spectrum. This will be discussed in sub-section 3-1. In addition, the AM1.5G sun spectrum and the variation of the filtered spectrum with perovskite thickness are shown in Figures S10 and S11, calculated using the TM and BL methods, respectively. These figures show that increasing the perovskite thickness enhances the light absorption in the visible part of the spectrum and reduces the transmitted light. Using the TM method, Figure 4(b) also demonstrates that introducing ITO as the recombination layer in the RJ enhances the light transmitted from the perovskite top sub-cell. On the contrary, when the BL method is used for calculating the filtered spectrum, a comparison of the red dashed lines in Figures 4(a) and (b) shows that the filtered spectrum exhibits no difference upon the addition of the ITO recombination layer. Therefore, the BL method is unable to reveal the effects of modifying and optimizing the RJ in SCAPS-1D. This is because no information about the outgoing medium, including its refractive index, enters the BL formalism, as shown in equation 1. In the TM formalism, by contrast, equations 5, 6 and 10 reveal that the refractive index of the outgoing medium (with index m+1) affects the amount of light transmitted and reflected.
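The contrast between the two formalisms is easiest to see side by side in code. The sketch below is our own illustration of equations 1-12 at normal incidence, not the scripts used in this work; it assumes the complex refractive indices \(N = n + i\kappa\) are available per layer at the wavelength of interest.

```python
import numpy as np

def beer_lambert_T(kappas, d_nm, wl_nm):
    """Transmittance of the stack from eq. 1 (absorption only)."""
    alphas = 4.0 * np.pi * np.asarray(kappas) / wl_nm   # eq. 1, in 1/nm
    return np.exp(-np.sum(alphas * np.asarray(d_nm)))

def interface_matrix(N1, N2):
    """Refraction matrix I between media N1 and N2 (eqs. 4-6)."""
    r = (N1 - N2) / (N1 + N2)
    t = 2.0 * N1 / (N1 + N2)
    return np.array([[1.0, r], [r, 1.0]], dtype=complex) / t

def layer_matrix(N, d_nm, wl_nm):
    """Phase matrix L for propagation through one layer (eq. 7)."""
    phase = 2.0 * np.pi * N / wl_nm * d_nm
    return np.array([[np.exp(-1j * phase), 0.0],
                     [0.0, np.exp(1j * phase)]], dtype=complex)

def transfer_matrix_TRA(N_layers, d_nm, wl_nm, N_in=1.0, N_out=1.0):
    """Total T, R, A of the coherent stack at one wavelength (eqs. 2-12)."""
    media = [N_in] + list(N_layers)
    S = np.eye(2, dtype=complex)
    for j, N in enumerate(N_layers):                        # eq. 3
        S = S @ interface_matrix(media[j], media[j + 1]) @ layer_matrix(N, d_nm[j], wl_nm)
    S = S @ interface_matrix(media[-1], N_out)
    r = S[1, 0] / S[0, 0]                                   # eq. 8
    t = 1.0 / S[0, 0]                                       # eq. 9
    T = (np.real(N_out) / np.real(N_in)) * abs(t) ** 2      # eq. 10
    R = abs(r) ** 2                                         # eq. 11
    return T, R, 1.0 - T - R                                # eq. 12
```

A two-line check already reproduces the point just made about the outgoing medium. With illustrative NIR refractive indices of roughly 1.65 for Spiro-OMeTAD, 1.6 for ITO and 3.7 for silicon (rough values of the kind shown later in Figure 5, not data from this work), equation 5 gives a single-interface reflectance `abs((1.65-1.6)/(1.65+1.6))**2` of about 0.0002 for Spiro-OMeTAD/ITO, versus `abs((1.65-3.7)/(1.65+3.7))**2` of about 0.15 for Spiro-OMeTAD/n\({}^{++}\)-Si, a difference the BL transmittance cannot register because `beer_lambert_T` never sees an interface.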
As stated by equation 5, the lower the refractive index contrast between two adjacent layers, the lower the reflection coefficient at their interface. Figures 5(a) and (b) show the refractive index of Spiro-OMeTAD [36] in comparison with those of ITO [35] and crystalline silicon [32], respectively. Since the difference between the refractive index values of ITO and Spiro-OMeTAD at wavelengths above 700 nm (NIR wavelengths) is lower than that between n\({}^{++}\)-Si and Spiro-OMeTAD, the fraction of light reflected from the Spiro-OMeTAD/ITO interface is lower than that from the Spiro-OMeTAD/n\({}^{++}\)-Si interface. Hence, when the ITO layer is added to the RJ, the optical loss is decreased and the light transmitted from the perovskite top sub-cell at NIR wavelengths is increased. Due to the increased optical power illuminating the silicon bottom sub-cell, its PV parameters are expected to be enhanced when the ITO layer is sandwiched between the Spiro-OMeTAD and n\({}^{++}\)-Si layers. The effects of introducing the ITO recombination layer in the RJ on the PV parameters of the silicon bottom sub-cell and the perovskite/Silicon TSC will be discussed in sub-section 3-2.

Figure 4: The filtered spectra corresponding to a perovskite thickness of 100 nm for two TSC configurations with RJs of: (a) Spiro-OMeTAD/n\({}^{++}\)-Si, and (b) Spiro-OMeTAD/ITO/n\({}^{++}\)-Si.

## 3 Results and discussion

### 3-1 Sensitivity of the photovoltaic parameters to the BL vs. TM method

To evaluate the methodology dependence of the optimized PV results of the 2-T monolithic perovskite/Silicon TSC, the optimized thicknesses at which the same current passes through the top and bottom sub-cells must first be found. Therefore, we vary the perovskite thickness and calculate the corresponding filtered spectra transmitted from the perovskite top sub-cell. Each filtered spectrum is then illuminated on the silicon bottom sub-cell and the resultant J\({}_{sc}\) values for different silicon thicknesses are obtained by SCAPS. Figure 6 depicts the J\({}_{sc}\) variation of the bottom sub-cell as a function of the silicon thickness, when the TM method is used for calculating the filtered spectra corresponding to different perovskite thicknesses. Here, the RJ is formed by stacking the Spiro-OMeTAD HTL of the top sub-cell (100 nm thick, N\({}_{A}\) of 10\({}^{18}\) cm\({}^{-3}\)) on the n\({}^{++}\)-Si ETL of the bottom sub-cell (100 nm thick, N\({}_{D}\) of 10\({}^{22}\) cm\({}^{-3}\)). It is observed that, regardless of the perovskite thickness, the J\({}_{sc}\) of the bottom sub-cell increases to a maximum value and then decreases. This is attributed to the ability of charge carriers to travel across the silicon absorber layer and reach the back electrode [38]. When the silicon thickness is very high, many charge carriers are likely to recombine before entering the charge extraction layer. Hence, at higher values of the silicon thickness, fewer charge carriers can reach the back electrode and J\({}_{sc}\) decreases. Another noticeable result is that the maximum J\({}_{sc}\) of the bottom sub-cell decreases with increasing perovskite thickness. This is due to the enhanced absorption of light in the perovskite layer, which decreases the light intensity illuminating the bottom sub-cell (see Figures S10 and S11). Furthermore, the silicon thickness at which the maximum J\({}_{sc}\) is achieved shifts to higher values as the perovskite thickness increases.
In other words, from a practical point of view, a higher silicon thickness is needed for a higher perovskite thickness in order to attain a high value of J\({}_{sc}\). It follows from these results that a maximum silicon thickness of 120 micron is sufficient for optimizing the PV parameters of the 2-T monolithic perovskite/Si TSC in the current-matching technique. Therefore, in the following, the silicon thickness is varied up to 120 microns, keeping the J\({}_{sc}\) of the bottom sub-cell in its increasing regime and avoiding its reduction. The detailed J\({}_{sc}\) values of the bottom sub-cell corresponding to different silicon thicknesses are presented in Table S6. The same behavior (not shown here) is obtained when the BL method is used for calculating the filtered spectra.

Figure 5: Comparing the refractive indices of: (a) Spiro-OMeTAD with ITO, (b) Spiro-OMeTAD with silicon (n\({}^{++}\)-Si). Data are extracted from the literature [32; 35; 36].

Figures 7(a) to (d) provide 2D contour plots of J\({}_{sc}\), PCE, FF, and V\({}_{oc}\) of the bottom sub-cell, describing their variations as a function of the perovskite and silicon thicknesses. The results reveal that for a constant perovskite thickness (a specific filtered spectrum), the V\({}_{oc}\) and FF of the bottom sub-cell decrease with increasing thickness of the silicon absorber layer. Likewise, for a constant thickness of the silicon absorber layer, increasing the perovskite thickness results in a slight reduction of the V\({}_{oc}\) and FF of the bottom sub-cell. This is because the probability of charge carrier separation decreases as the absorber layer becomes thicker. In contrast, the J\({}_{sc}\) of the bottom sub-cell increases slightly with the silicon thickness, due to the increasing light absorption and generation rate in the silicon layer. The J\({}_{sc}\) of the bottom sub-cell then tends to saturate as the silicon thickness approaches 120 micron, consistent with the results shown in Figure 6. On the other hand, at a constant silicon thickness, the J\({}_{sc}\) of the bottom sub-cell decreases with increasing perovskite thickness, due to the higher light absorption in the perovskite top sub-cell. The behavior of the PCE is not monotonic: at a constant perovskite thickness, the PCE of the bottom sub-cell increases to a maximum value and then decreases slightly with increasing silicon thickness, while at a constant silicon thickness, the PCE decreases with increasing perovskite thickness. In addition, the maximum PCE values of the bottom sub-cell are achieved for very low perovskite thicknesses in the range of 50 nm to 100 nm and silicon thicknesses in the range of 20 \(\upmu\)m to 80 \(\upmu\)m. Moreover, the maximum J\({}_{sc}\) values of the bottom sub-cell are achieved when the perovskite thickness is lower than 100 nm and the silicon thickness is between 60 and 120 \(\upmu\)m. The maximum values of V\({}_{oc}\) and FF of the bottom sub-cell are achievable for very low thicknesses of the silicon and perovskite absorber layers. Similar behavior is observed when the BL method is used for calculating the filtered spectra; the corresponding results are exhibited in Figure S12.

Figure 6: J\({}_{sc}\) of the bottom sub-cell against the thickness of the silicon layer for different perovskite thicknesses of 50, 150, 300, 700, 900, 1000 nm. The TM method is used for calculating the filtered spectra.
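Maps like those in Figure 7 are assembled from a nested sweep over the two absorber thicknesses, one SCAPS run per grid point. The skeleton below shows the bookkeeping; `pce_bottom` is a smooth placeholder standing in for the simulator output, used only so the example runs, and the grid sizes match the 20 and 12 steps quoted in the Figure 7 caption.

```python
import numpy as np
import matplotlib.pyplot as plt

def pce_bottom(t_pvk_nm, t_si_um):
    """Placeholder for a SCAPS run: thicker perovskite starves the bottom
    cell of light, thicker silicon absorbs more of what remains."""
    return 20.0 * np.exp(-t_pvk_nm / 500.0) * t_si_um / (t_si_um + 30.0)

t_pvk = np.linspace(50, 1000, 20)     # perovskite thickness, nm (20 steps)
t_si = np.linspace(10, 120, 12)       # silicon thickness, um (12 steps)
P, S = np.meshgrid(t_pvk, t_si)

plt.contourf(P, S, pce_bottom(P, S), levels=20)
plt.xlabel("Perovskite thickness (nm)")
plt.ylabel("Silicon thickness (um)")
plt.colorbar(label="Bottom-cell PCE (%)")
plt.show()
```

The same grid, filled once with BL-filtered and once with TM-filtered spectra, yields the two families of maps compared in Figures 7 and S12.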
A comparison of the performance parameters V\({}_{oc}\), J\({}_{sc}\), FF and PCE of the silicon bottom sub-cell, when the BL and TM methods are used for calculating the filtered spectra, is provided in Figure 8. The thickness of the silicon absorbing layer is 120 microns. Each data point in Figure 8 corresponds to a filtered spectrum calculated for a specific thickness of the perovskite layer. The observed behaviors are consistent with the results presented in Figure 7, except that the PV parameters of the bottom sub-cell obtained with the TM method are lower than those obtained with the BL method. More precisely, the J\({}_{sc}\) values calculated by the two methods differ by almost 8 mA/cm\({}^{2}\) at a perovskite thickness of 50 nm and by 4 mA/cm\({}^{2}\) at a perovskite thickness of 1000 nm. The difference between the PCE values calculated by the two methods varies between 2.3% at a perovskite thickness of 50 nm and 2.4% at a perovskite thickness of 1000 nm. Similarly, the difference between the V\({}_{oc}\) values remains approximately 0.023 V over the perovskite thickness range from 50 nm to 1000 nm. The FF values calculated by the two methods differ by 0.45% at a perovskite thickness of 50 nm and by 0.40% at a perovskite thickness of 1000 nm. These differences, which are more pronounced for PCE and J\({}_{sc}\), are ascribed to the additional optical losses incorporated in the TM formalism, as mentioned in sub-section 2-3.

Figure 7: 2D contour plots of the performance parameters V\({}_{oc}\), J\({}_{sc}\), FF, and PCE of the bottom sub-cell, when illuminated by the filtered spectra calculated using the TM method. The thickness of the silicon layer is altered from 10 to 120 μm in 12 steps and the thickness of the perovskite layer is changed from 50 to 1000 nm in 20 steps.

Figure 9 presents the results of the current-matching analysis using the TM and BL methods. The J\({}_{sc}\) variation of the silicon bottom sub-cell as a function of the silicon thickness under the illumination of different filtered spectra, along with the J\({}_{sc}\) variation of the perovskite top sub-cell as a function of the perovskite thickness under AM1.5G illumination, are depicted in Figures 9(a) and (c), corresponding to the TM and BL methods, respectively. Using the SCAPS-1D scripting features mentioned in the Supplementary Info of reference [13], the optimized PV parameters of the 2-T monolithic perovskite/Silicon TSC are calculated in both cases and the results are summarized in Table 1. The optimized perovskite and silicon thicknesses obtained with the TM method are 158 nm and 80 \(\upmu\)m, respectively, while those obtained with the BL method are 246 nm and 40 microns, respectively. Figures 9(b) and (d) show the J-V curves of the standalone top sub-cell, the standalone bottom sub-cell, the bottom sub-cell under the illumination of the filtered spectrum, and the tandem cell at the current-matched situation for the optimized thicknesses obtained with the TM and BL methods, respectively. A single current passes through the top and bottom sub-cells, as expected from the current-matching technique.
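Numerically, the current-matched point is simply the crossing of two tabulated J\({}_{sc}\) curves: the top-cell J\({}_{sc}\), which rises with perovskite thickness, and the bottom-cell J\({}_{sc}\), which falls. A sketch of locating it by linear interpolation is shown below; the two placeholder curves only mimic the qualitative shape of Figure 9(a) and are not data from this work.

```python
import numpy as np

def matched_point(t_pvk, jsc_top, jsc_bot):
    """Thickness and current where the two Jsc curves cross.

    Assumes jsc_top - jsc_bot changes sign exactly once over the sweep;
    linear interpolation between the bracketing samples locates it.
    """
    diff = np.asarray(jsc_top) - np.asarray(jsc_bot)
    i = np.flatnonzero(np.diff(np.sign(diff)) != 0)[0]
    f = -diff[i] / (diff[i + 1] - diff[i])
    t_match = t_pvk[i] + f * (t_pvk[i + 1] - t_pvk[i])
    j_match = jsc_top[i] + f * (jsc_top[i + 1] - jsc_top[i])
    return t_match, j_match

# Placeholder curves with the qualitative shape of Figure 9(a):
t = np.linspace(50, 1000, 20)                 # perovskite thickness, nm
top = 22.8 * (1.0 - np.exp(-t / 250.0))       # saturating top-cell Jsc
bot = 25.0 - 0.011 * t                        # slowly falling bottom-cell Jsc
print(matched_point(t, top, bot))             # (thickness in nm, Jsc in mA/cm^2)
```

Because the TM-filtered bottom-cell curve sits lower than its BL counterpart, the crossing moves to a thinner perovskite and a smaller matched current, which is exactly the displacement visible in Figure 9(e).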
The current-matching curves corresponding to the optimized thicknesses obtained with the TM and BL methods are depicted in Figure 9(e). Evidently, the J\({}_{sc}\) values of the silicon bottom sub-cell are lower when the TM method is used, owing to the additional optical losses incorporated in its formalism. Another noticeable point in Figure 9(e) is the difference between the optimized thicknesses of the perovskite absorbing layer predicted by the two methods (see the current-matched points in Figure 9(e)).

Figure 8: Comparing the performance parameters V\({}_{oc}\), J\({}_{sc}\), FF, and PCE of the bottom sub-cell using the BL and TM methods. The silicon thickness is 120 \(\upmu\)m, while the perovskite thickness is increased from 50 to 1000 nm.

Figure 9: (a) Current-matching curves for the case where the TM method is used for calculating the filtered spectra, (b) J–V curves of the standalone top and bottom sub-cells, the bottom sub-cell under the illumination of the filtered spectrum (calculated using the TM method), and the tandem cell, (c) current-matching curves for the case where the BL method is used for calculating the filtered spectra, (d) J–V curves of the standalone top and bottom sub-cells, the bottom sub-cell under the illumination of the filtered spectrum (calculated using the BL method), and the tandem cell, (e) current-matching curves corresponding to the maximum efficiency of the tandem device, simulated using the BL and TM methods.

By comparing the detailed data presented in Table 1 for the optimized 2-T monolithic perovskite/Silicon TSC, it is clearly observed that the optimized thicknesses of the perovskite and silicon layers, as well as the PV parameters, depend on the method of calculating the filtered spectrum. Differences in the optimized thicknesses of the absorbing layers of the top and bottom sub-cells are noteworthy for practical fabrication, since any changes in the thicknesses of the fabricated layers can affect the performance parameters as well as the fabrication costs of tandem cells. In addition, increasing the number of layers will enlarge the differences between the results obtained by the two methods. This is because the optical losses caused by light reflections and by scattering from grain boundaries and interfaces increase with the number of layers and their thicknesses. The size of the grain boundaries and scattering elements in the layers also affects the light scattering, which can be addressed using the TM method. Of these loss mechanisms, the BL method captures only parasitic light absorption and cannot reveal the effects of light scattering, interfacial reflections, or reflections from grain boundaries. Given these details, using the TM method produces simulation findings that experimentalists can rely on for real-world applications.

### 3-2 Investigation of optical loss effects related to the RJ

So far, the focus of our work has been on the effects of using the TM and BL methods on the PV parameters of the bottom sub-cell as well as of the tandem configuration. Herein, we use the TM and BL methods to investigate the optical loss effects related to the RJ in the SCAPS-1D device simulator. Two perovskite/Si TSCs that differ in their RJs are studied: the first with a Spiro-OMeTAD (HTL)/n\({}^{++}\)-Si hybrid RJ and the second with a Spiro-OMeTAD (HTL)/ITO/n\({}^{++}\)-Si RJ. First, the two different bottom sub-cells must be optimized under AM1.5G light irradiation.
In the first configuration, where the RJ is formed by the HTL/n\({}^{++}\)-Si stack, we optimize the thickness of the n\({}^{++}\)-Si layer to boost the efficiency of the silicon bottom sub-cell. In the second configuration, where the RJ is formed by the HTL/ITO/n\({}^{++}\)-Si stack, we optimize the ITO thickness. The thicknesses of n-Si and p\({}^{++}\)-Si in both configurations are 80 micron and 20 nm, respectively. Figure 10(a) represents the PCE variation of the silicon bottom sub-cell in the first configuration against the thickness of the n\({}^{++}\)-Si layer. The obtained results show that the optimum n\({}^{++}\)-Si thickness of 50 nm achieves the highest efficiency of 19.54%, along with a V\({}_{oc}\) of 0.68 V, J\({}_{sc}\) of 37.53 mA/cm\({}^{2}\), and FF of 76.55%, under AM1.5G light irradiation. Then, the silicon bottom sub-cell in the second configuration is studied and the PCE dependence of the silicon bottom sub-cell on the variation of the ITO thickness is investigated. The thickness of the n\({}^{++}\)-Si layer in this configuration is taken as 100 nm. According to Figure 10(b), an ITO thickness of 10 nm is the optimum value, attaining the highest efficiency of 19.82% along with a V\({}_{oc}\) of 0.6810 V, J\({}_{sc}\) of 38.41 mA/cm\({}^{2}\), and FF of 75.74% for the silicon bottom sub-cell.

Table 1: Performance parameters of the standalone top sub-cell, standalone bottom sub-cell, bottom sub-cell under filtered spectrum, and tandem solar cell using the TM and BL methods. Thicknesses in parentheses are the optimized absorber thicknesses.

| Optimized performance parameter | BL: Standalone top cell (246 nm) | BL: Standalone bottom cell (40 μm) | BL: Bottom cell under filtered spectrum | BL: Tandem solar cell | TM: Standalone top cell (158 nm) | TM: Standalone bottom cell (80 μm) | TM: Bottom cell under filtered spectrum | TM: Tandem solar cell |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| PCE (%) | 14.45 | 19.86 | 17.06 | 21.08 | 12.58 | 19.43 | 14.72 | 17.82 |
| J\({}_{sc}\) (mA/cm\({}^{2}\)) | 18.42 | 36.36 | 18.45 | 18.42 | 15.17 | 37.33 | 15.09 | 15.17 |
| V\({}_{oc}\) (V) | 1.135 | 0.695 | 0.670 | 1.8 | 1.15 | 0.68 | 0.64 | 1.79 |
| FF (%) | 69.11 | 78.57 | 77.06 | 63.46 | 72.11 | 76.55 | 74.94 | 65.67 |

Figure 11 shows the effects of introducing the ITO recombination layer into the Spiro-OMeTAD/n\({}^{++}\)-Si stack on the PV parameters of the silicon sub-cell. Both the TM and BL methods are used for the calculation of the different filtered spectra corresponding to different perovskite thicknesses. The thickness of the silicon absorber layer is 120 \(\upmu\)m. As expected, an improvement in the performance parameters of the silicon bottom sub-cell upon the addition of the ITO recombination junction is observed only when the TM method is integrated with SCAPS-1D.
Figure 10: The PCE dependence of the silicon bottom sub-cell as a function of: (a) the thickness of the n\({}^{++}\)-Si layer in the configuration with the HTL/n\({}^{++}\)-Si RJ, (b) the thickness of the ITO layer in the configuration with the HTL/ITO/n\({}^{++}\)-Si RJ.

Figure 12 shows the results of the current-matching analysis for the two perovskite/Silicon TSCs with different RJs. The two top panels depict the results for the case where the TM method is used for calculating the filtered spectra, while the two bottom panels show the results corresponding to the BL method. As shown in Figure 12(a), using the TM method, the current-matched point between the perovskite top sub-cell and the silicon bottom sub-cell is enhanced from 15.31 mA/cm\({}^{2}\) to 18.41 mA/cm\({}^{2}\) when the ITO layer is added to the Spiro-OMeTAD/n\({}^{++}\)-Si hybrid RJ. Correspondingly, the optimized perovskite thickness increases from 161 nm to 245 nm, with the silicon thickness held constant at 120 micron. Figure 12(b) illustrates the J-V curves of the standalone top sub-cell, the bottom sub-cell under the illumination of the filtered spectrum, and the tandem cell at the two current-matched situations corresponding to the two different RJs. The filtered spectra are calculated using the TM method. As expected, the matched current passing through the top and bottom sub-cells is increased by introducing the ITO recombination layer into the Spiro-OMeTAD/n\({}^{++}\)-Si RJ. However, using the BL method, the optimized perovskite thickness and the matched current show no difference, as is evident from Figures 12(c) and (d). Table 2 presents the detailed values of the performance parameters V\({}_{oc}\), J\({}_{sc}\), FF, and PCE of the standalone top sub-cell (STC), the bottom sub-cell under the illumination of the filtered spectrum (FBC) and the tandem solar cell (TSC), when the current-matched situation is established for the two different configurations of the bottom sub-cell. The filtered spectra are calculated using the TM method. The thickness of the silicon absorber layer is 120 micron.

Figure 11: The dependence of the PV parameters: (a) V\({}_{oc}\), (b) J\({}_{sc}\), (c) FF and (d) PCE of the silicon bottom sub-cell on the modification of the RJ, by sandwiching ITO between Spiro-OMeTAD and n\({}^{++}\)-Si. The filtered spectra are calculated using both the TM and BL methods.

The obtained results show that introducing ITO as the recombination layer into the Spiro-OMeTAD/n\({}^{++}\)-Si hybrid RJ improves the performance of the tandem solar cell device. Our simulation findings are consistent with the discussions presented in [25]. Hence, integrating the TM method with the SCAPS-1D device simulator makes the investigation of RJs in 2-T monolithic TSCs possible.
Table 2: Performance parameters of the standalone top cell (STC), bottom cell under filtered spectrum (FBC), and tandem solar cell (TSC) using the TM method for calculating the filtered spectra. The thickness of the silicon absorber layer is 120 micron. Thickness values in parentheses refer to the perovskite absorber layer. Data correspond to two different RJs formed by Spiro-OMeTAD/n\({}^{++}\)-Si (w.o. ITO) and Spiro-OMeTAD/ITO/n\({}^{++}\)-Si (ITO).

| Optimized performance parameter | STC, ITO (245 nm) | STC, without ITO (161 nm) | FBC, ITO (245 nm) | FBC, without ITO (161 nm) | TSC, ITO | TSC, without ITO |
| --- | --- | --- | --- | --- | --- | --- |
| PCE (%) | 14.44 | 12.67 | 16.33 | 14.49 | 23.10 | 19.81 |
| J\({}_{sc}\) (mA/cm\({}^{2}\)) | 18.40 | 15.31 | 18.46 | 15.28 | 18.40 | 15.31 |
| V\({}_{oc}\) (V) | 1.13 | 1.15 | 0.630 | 0.624 | 1.77 | 1.81 |
| FF (%) | 69.13 | 71.98 | 74.34 | 74.59 | 71.06 | 71.48 |

Figure 12: Current-matching curves corresponding to the maximum efficiency of the tandem device, as well as J–V curves of the standalone top sub-cell, the bottom sub-cell under the illumination of the filtered spectrum, and the tandem cell with two different RJs formed by Spiro-OMeTAD/n\({}^{++}\)-Si (w.o. ITO) and Spiro-OMeTAD/ITO/n\({}^{++}\)-Si (ITO): (a, b) the TM method, and (c, d) the BL method are used for calculating the filtered spectra.

**3-3 Per-layer optical loss analysis using the TM method**

In this sub-section, using the TM method, we analyze the performance of the PSC from an optical point of view. Figure 13(a) exhibits the contribution of each layer to the total absorbance, when the layers are stacked on each other in the PSC. The respective spectra are obtained by calculating the total electric field inside each layer, as described in [31]. The hybrid RJ is taken as Spiro-OMeTAD/n\({}^{++}\)-Si. The thicknesses of the FTO, SnO\({}_{2}\), CsFAMA and Spiro-OMeTAD layers are assumed constant and equal to 100 nm, in order to focus on the material-dependent behavior of the light transmitted through the perovskite solar cell. It is observed that the FTO and perovskite layers contribute more to the total absorbance than the other layers. Figure 13(b) provides a schematic illustration of the amount of light absorbed in each layer and transmitted from the PSC. The specified wavelengths 410 nm, 480 nm, 530 nm, 570 nm, 610 nm, 630 nm and 800 nm are chosen to represent the violet, blue, green, yellow, orange, red and NIR spectra. The percentages in the layers and below the Spiro-OMeTAD denote the light absorbance and transmittance at the selected wavelengths, respectively. From Figure 13(b), it is possible to estimate the optical loss and the fraction of light transmitted from the PSC at the specified wavelengths. In addition, Figure 13(c) depicts the total transmittance, total reflectance and total absorbance of the PSC, calculated from equations 10, 11 and 12, respectively. Evidently, the amount of light lost through reflection is significant at wavelengths above 600 nm. When the PSC is implemented as the top sub-cell in a perovskite/Si tandem configuration, this optical loss will reduce the performance of the bottom sub-cell.
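For reference, the quantity plotted in Figure 13(a) can be written down explicitly. In the formalism of [31] (paraphrased here; the notation follows section 2-2), once the internal field \(E_j(x)\) in layer j has been reconstructed from the partial transfer matrices, the time-averaged power dissipated per unit volume and the resulting per-layer absorbance are

\[Q_{j}(x)=\tfrac{1}{2}\,c\,\varepsilon_{0}\,\alpha_{j}\,n_{j}\,\left|E_{j}(x)\right|^{2},\qquad A_{j}=\frac{1}{I_{0}}\int_{\text{layer }j}Q_{j}(x)\,dx\]

with \(\alpha_{j}=4\pi\kappa_{j}/\lambda\) as in equation 1 and \(I_{0}\) the incident intensity. Summing \(A_{j}\) over the layers and adding the total reflectance recovers equation 12, which is how the per-layer curves in Figure 13(a) tie back to the totals in Figure 13(c); the NIR light that ends up in R rather than in any \(A_{j}\) is precisely the loss flagged at the end of the previous paragraph.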
Therefore, modifying the interface between the perovskite top sub-cell and the silicon bottom sub-cell in a 2-T monolithic perovskite/Silicon TSC is a very important issue. Figures 14(a), (b) and (c) compare the total absorbance, reflectance and transmittance spectra of the perovskite top sub-cell when the ITO recombination layer is introduced into the RJ. Figure 14(a) shows that the absorbance of the PSC exhibits no significant change with the addition of ITO between Spiro-OMeTAD and n\({}^{++}\)-Si. However, Figure 14(b) reveals that the reflection loss at NIR wavelengths is greatly reduced when the RJ includes the ITO recombination layer. According to equation 5, the light reflectance at each interface depends on the refractive indices of the adjacent layers. As depicted in Figure 5, introducing ITO decreases the refractive index mismatch where light exits the PSC and thereby reduces the reflection losses. Therefore, more light is transmitted from the PSC at NIR wavelengths, as clearly observed in Figure 14(c). The obtained results suggest that integrating the TM method with SCAPS-1D makes the optimization and evaluation of RJs in 2-T monolithic TSCs possible. Based on the spectra presented in Figure 14, the layers can be selected such that the reflection and parasitic absorption losses in the layers are minimized. The present study can pave the way for further theoretical investigations on optimizing RJs via SCAPS-1D.

Figure 13: (a) Total absorbance and the fraction of light absorbed in each layer, when the layers are stacked on each other to form the perovskite solar cell, (b) schematic illustration of the perovskite top sub-cell under AM1.5G light irradiation; the percentage of light absorbed in the layers along with the light transmitted from the cell is denoted in the figure, and the specified colors represent the wavelengths of 410 nm, 480 nm, 530 nm, 570 nm, 610 nm, 630 nm and 800 nm for the incident light, (c) total absorbance, transmittance and reflectance spectra of light passing across the perovskite top sub-cell.

## 4 Conclusion

In this paper, the TM method was integrated into SCAPS-1D to investigate a 2-terminal monolithic perovskite/Silicon TSC from both electrical and optical points of view. The corresponding results were compared with the case in which the BL method was integrated into SCAPS-1D. It was demonstrated that using the BL method for calculating the filtered spectra transmitted from the perovskite top sub-cell results in an overestimation of the photovoltaic properties, and that the corresponding SCAPS-1D simulations are unable to fully address all phenomena that light encounters at each layer and interface. Therefore, the BL method can only give a poor prediction of the optimized thicknesses and layer configurations in TSCs and their corresponding photovoltaic metrics. Our theoretical investigation revealed a reduction of ~4% in the efficiency of the tandem device when the optical losses are addressed correctly in SCAPS-1D using the TM method. In addition, the effects on the cell efficiency of a thin ITO ad-layer sandwiched between Spiro-OMeTAD and n\({}^{++}\)-Si in the interconnection layer were studied using the TM and BL methods. The results confirmed that the BL method is unable to include optical effects at the recombination junction and returns a negligible change in cell efficiency for the ITO-included device. However, an improvement of the tandem device efficiency from 19.81% to 23.10% was observed with the TM method for the same configuration.
The results demonstrated that the TM method integrated into SCAPS-1D can pave the way for further accurate optoelectronic investigations on optimizing RJs in TSCs.

## Acknowledgment

The authors acknowledge the Vice-Presidency of Science & Technology, Iran, and the Center for National Macro Technology Projects (grant number 11.69206) for financial support. MK thanks Prof. H. Nadgaran, Physics Department, Shiraz University, for constructive discussions and his group's support in completing this project.

## Conflicts of interest

There are no conflicts of interest to declare.

Figure 14: (a) Total absorbance, (b) total reflectance and (c) total transmittance spectra of the perovskite top sub-cell. The thicknesses of FTO, SnO\({}_{2}\), CsFAMA and Spiro-OMeTAD are 100, 50, 100 and 100 nm, respectively.

## References

* [1] M.A. Green, Silicon photovoltaic modules: a brief history of the first 50 years, Prog. Photovoltaics Res. Appl. 13 (2005) 447-455.
* [2] K. Yoshikawa, H. Kawasaki, W. Yoshida, T. Irie, K. Konishi, K. Nakano, T. Uto, D. Adachi, M. Kanematsu, H. Uzu, K. Yamamoto, Silicon heterojunction solar cell with interdigitated back contacts for a photoconversion efficiency over 26%, Nat. Energy 2 (2017) 17032.
* [3] W. Shockley, H.J. Queisser, Detailed Balance Limit of Efficiency of p-n Junction Solar Cells, J. Appl. Phys. 32 (1961) 510-519.
* [4] S. Akhil, S. Akash, A. Pasha, B. Kulkarni, M. Jalalah, M. Alsaiari, F. A. Harraz, R. G. Balakrishna, Review on Perovskite Silicon Tandem Solar Cells: Status and Prospects 2T, 3T and 4T for Real World Conditions, Mater. Des. 211 (2021) 110138.
* [5] C. Li, Y. Wang, W. C. H. Choy, Efficient Interconnection in Perovskite Tandem Solar Cells, Small Methods 4 (2020) 2000093.
* [6] Q. Xu, Y. Zhao, X. Zhang, Light Management in Monolithic Perovskite/Silicon Tandem Solar Cells, Sol. RRL 4 (2020) 1900206.
* [7] M. A. Green, A. Ho-Baillie, H. J. Snaith, The Emergence of Perovskite Solar Cells, Nat. Photonics 8 (2014) 506-514.
* [8] A. W. Y. Ho-Baillie, J. Zheng, M. A. Mahmud, F.-J. Ma, D. R. McKenzie, M. A. Green, Recent Progress and Future Prospects of Perovskite Tandem Solar Cells, Appl. Phys. Rev. 8 (2021) 041307.
* [9] C. D. Bailie et al., Semi-Transparent Perovskite Solar Cells for Tandems with Silicon and CIGS, Energy Environ. Sci. 8 (2015) 956-963.
* [10] M. Belarbi, O. Zeggai, S. Louhibi-Fasla, Numerical Study of Methylammonium Lead Iodide Perovskite Solar Cells Using SCAPS-1D Simulation Program, Mater. Today Proc. 51 (2022) 2115-2119.
* [11] P. Yang, P. Liu, S. Ullah, J. Wang, L. Liu, S.-E. Yang, H. Guo, L. Wang, Y. Chen, The Investigation of CsPb(I\({}_{1-x}\)Br\({}_{x}\))\({}_{3}\)/Crystalline Silicon Two- and Four-Terminal Tandem Solar Cells, Sol. Energy 216 (2021) 145-150.
* [12] J. Madan, K. Singh, R. Pandey, Comprehensive Device Simulation of 23.36% Efficient Two-Terminal Perovskite-PbS CQD Tandem Solar Cell for Low-Cost Applications, Sci. Rep. 11 (2021) 19829.
* [13] N. Shrivastav, J. Madan, R. Pandey, A. E. Shalan, Investigations Aimed at Producing 33% Efficient Perovskite-Silicon Tandem Solar Cells through Device Simulations, RSC Adv. 11 (2021) 37366-37374.
* [14] N. Shrivastav, S. Kashyap, J. Madan, A. K. Al-Mousoi, M. K. A. Mohammed, M. K. Hossain, R. Pandey, J. Ramanujam, Perovskite-CIGS Monolithic Tandem Solar Cells with 29.7% Efficiency: A Numerical Study, Energy & Fuels 37 (2023) 3083-3090.
* [15] F. Jafarzadeh, H. Aghili, H. Nikbakht, S. Javadpour, Design and Optimization of Highly Efficient Perovskite/Homojunction SnS Tandem Solar Cells Using SCAPS-1D, Sol. Energy
236 (2022) 195-205.
* [16] K. Amri, R. Belghouthi, M. Aillerie, R. Gharbi, Device Optimization of a Lead-Free Perovskite/Silicon Tandem Solar Cell with 24.4% Power Conversion Efficiency, Energies 14 (2021) 3383.
* [17] M. T. Islam, M. R. Jani, A. F. Islam, K. M. Shorowordi, S. Chowdhury, S. S. Nishat, S. Ahmed, Investigation of CsSn\({}_{0.5}\)Ge\({}_{0.5}\)I\({}_{3}\)-on-Si Tandem Solar Device Utilizing SCAPS Simulation, IEEE Trans. Electron Devices 68 (2021) 618-625.
* [18] J. Madan, Shivani, R. Pandey, R. Sharma, Device Simulation of 17.3% Efficient Lead-Free All-Perovskite Tandem Solar Cell, Sol. Energy 197 (2020) 212-221.
* [19] M. M. Salah, A. Zekry, M. Abouelatta, A. Shaker, M. Mousa, F. Z. Amer, R. I. Mubarak, A. Saeed, High-Efficiency Electron Transport Layer-Free Perovskite/GeTe Tandem Solar Cell: Numerical Simulation, Crystals 12 (2022) 878.
* [20] M. Azadinia, M. Ameri, R. T. Ghahrizjani, M. Fathollahi, Maximizing the Performance of Single and Multijunction MA and Lead-Free Perovskite Solar Cell, Mater. Today Energy 20 (2021) 100647.
* [21] A. Alsalme, H. Alsaedi, Twenty-Two Percent Efficient Pb-Free All-Perovskite Tandem Solar Cells Using SCAPS-1D, Nanomaterials 13 (2023) 96.
* [22] E. Raza, Z. Ahmad, F. Aziz, M. Asif, M. Q. Mehmood, J. Bhadra, N. J. Al-Thani, Design and Optimization of Four-Terminal Mechanically Stacked and Optically Coupled Silicon/Perovskite Tandem Solar Cells with over 28% Efficiency, Heliyon 9 (2023) e13477.
* [23] S. Sarker, M. T. Islam, A. Rauf, H. Al Jame, M. R. Jani, S. Ahsan, M. S. Islam, S. S. Nishat, K. M. Shorowordi, S. Ahmed, A SCAPS Simulation Investigation of Non-Toxic MAGeI\({}_{3}\)-on-Si Tandem Solar Device Utilizing Monolithically Integrated (2-T) and Mechanically Stacked (4-T) Configurations, Sol. Energy 225 (2021) 471-485.
* [24] R. Pandey, S. Bhattarai, K. Sharma, J. Madan, A. K. Al-Mousoi, M. K. A. Mohammed, M. K. Hossain, Halide Composition Engineered a Non-Toxic Perovskite-Silicon Tandem Solar Cell with 30.7% Conversion Efficiency, ACS Appl. Electron. Mater. (2023) https://doi.org/10.1021/acsaelm.2c01574.
* [25] M. De Bastiani, A. S. Subbiah, E. Aydin, F. H. Isikgor, T. G. Allen, S. De Wolf, Recombination Junctions for Efficient Monolithic Perovskite-Based Tandem Solar Cells: Physical Principles, Properties, Processing and Prospects, Mater. Horizons 7 (2020) 2791-2809.
* [26] M. Burgelman, P. Nollet, S. Degrave, Modelling Polycrystalline Semiconductor Solar Cells, Thin Solid Films 361-362 (2000) 527-532.
* [27] S. Iqbal, K. Riaz, H. Imran, Y. H. Khattak, F. Baig, Z. Ahmad, Computational Modelling of Monolithically Stacked Perovskite/Silicon Tandem Solar Cells Using Monofacial and Bifacial Designs, Optik 206 (2020) 163427.
* [28] T. Bu, X. Liu, Y. Zhou, J. Yi, X. Huang, L. Luo, J. Xiao, Z. Ku, Y. Peng, F. Huang, Y. B. Cheng, J. Zhong, A Novel Quadruple-Cation Absorber for Universal Hysteresis Elimination for High Efficiency and Stable Perovskite Solar Cells, Energy Environ. Sci. 10 (2017) 2509-2515.
* [29] D. T. Grant, K. R. Catchpole, K. J. Weber, T. P. White, Design Guidelines for Perovskite/Silicon 2-Terminal Tandem Solar Cells: An Optical Study, Opt. Exp. 24 (2016) A1454-A1470.
* [30] J. Werner, C.-H. Weng, A. Walter, L. Fesquet, J. P. Seif, S. De Wolf, B. Niesen, C. Ballif, Efficient Monolithic Perovskite/Silicon Tandem Solar Cell with Cell Area >1 cm\({}^{2}\), J. Phys. Chem. Lett. 7 (2016) 161-166.
* [31] L. A. A. Pettersson, L. S. Roman, O. Inganäs,
Modeling Photocurrent Action Spectra of Photovoltaic Devices Based on Organic Thin Films, J. Appl. Phys. 86 (1999) 487-496.
* [32] M. A. Green, Self-Consistent Optical Parameters of Intrinsic Silicon at 300K Including Temperature Coefficients, Sol. Energy Mater. Sol. Cells 92 (2008) 1305-1310.
* [33] S. Manzoor, J. Hausele, K. A. Bush, A. F. Palmstrom, J. Carpenter, Z. J. Yu, S. F. Bent, M. D. McGehee, Z. C. Holman, Optical Modeling of Wide-Bandgap Perovskite and Perovskite/Silicon Tandem Solar Cells Using Complex Refractive Indices for Arbitrary-Bandgap Perovskite Absorbers, Opt. Exp. 26 (2018) 27441-27460.
* [34] B. Chen, Z. Yu, K. Liu, X. Zheng, Y. Liu, J. Shi, D. Spronk, P. N. Rudd, Z. Holman, J. Huang, Grain Engineering for Perovskite/Silicon Monolithic Tandem Solar Cells with Efficiency of 25.4%, Joule 3 (2019) 177-190.
* [35] Q. Lin, A. Armin, R. C. R. Nagiri, P. L. Burn, P. Meredith, Electro-Optics of Perovskite Solar Cells, Nat. Photonics 9 (2015) 106-112.
* [36] M. Filipič, P. Löper, B. Niesen, S. De Wolf, J. Krč, C. Ballif, M. Topič, CH\({}_{3}\)NH\({}_{3}\)PbI\({}_{3}\) Perovskite/Silicon Tandem Solar Cells: Characterization Based Optical Simulations, Opt. Exp. 23 (2015) A263-A278.
* [37] M. S. Alias, I. Dursun, M. I. Saidaminov, E. M. Diallo, P. Mishra, T. K. Ng, O. M. Bakr, B. S. Ooi, Optical Constants of CH\({}_{3}\)NH\({}_{3}\)PbBr\({}_{3}\) Perovskite Thin Films Measured by Spectroscopic Ellipsometry, Opt. Exp. 24 (2016) 16586-16594.
* [38] S. Pang, H. Hu, J. Zhang, S. Lv, Y. Yu, F. Wei, T. Qin, H. Xu, Z. Liu, G. Cui, NH\({}_{2}\)CH=NH\({}_{2}\)PbI\({}_{3}\): An Alternative Organolead Iodide Perovskite Sensitizer for Mesoscopic Solar Cells, Chem. Mater. 26 (2014) 1485-1491.

**Supplementary Information**

**Integrating transfer matrix method into SCAPS-1D for addressing optical losses and per-layer optical properties in perovskite/Silicon tandem solar cells**

**Peymaneh Rafieipour*, Aminreza Mohandes, Mohammad Moaddeli, Mansour Kanani**

_*Corresponding author: [email protected]_

**Content:**

* **Optical study of the perovskite top sub-cell** ...

The light absorbed in the Spiro-OMeTAD as a function of its thickness is plotted in Figure S1(d). The results demonstrate that the parasitic light absorption in the Spiro-OMeTAD at near-IR wavelengths is reduced when its thickness is decreased from 500 nm to 100 nm. The electron and hole current densities are expressed in terms of the gradients of the quasi-Fermi levels:

\[J_{n}=-\frac{\mu_{n}n}{q}\frac{d\bar{E}_{Fn}}{dx}\]

\[J_{p}=+\frac{\mu_{p}p}{q}\frac{d\bar{E}_{Fp}}{dx}\]

[MISSING_PAGE_POST]

The optical filtering of CsFAMA is tabulated in the corresponding supplementary table, which reveals the optical reflection at various wavelengths at the glass/FTO interface. The defect energy level is located at the center of E\({}_{g}\) and is dispersed in a Gaussian distribution with a characteristic energy of 0.1 eV. The defect type is neutral and the electron and hole capture cross sections are 2\(\times\)10\({}^{-14}\) cm\({}^{2}\) (Minemoto et al., 2019; Minemoto & Murata, 2015), except for the CsFAMA layer. The defect density, N\({}_{t}\), is related to the capture cross sections of the hole and electron by the following formula: \[N_{t}=\frac{1}{\sigma_{n,p}\times\tau_{n,p}\times\nu_{t}}\] (eq. S1) where \(\nu_{t}\) is the thermal velocity of electrons and holes, \(\tau_{n,p}\) is the carrier lifetime of electrons and holes, equal to 210 ns, and \(\sigma_{n,p}\) is the capture cross section of electrons and holes. The values of \(\tau_{n,p}\) are available in the supplementary information of (Bu et al., 2017).
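Equation S1 can be sanity-checked with a one-line computation. The sketch below assumes the customary thermal velocity of 10\({}^{7}\) cm/s, which is not quoted above, and uses the CsFAMA defect densities given in the next paragraph.

```python
v_t = 1e7        # thermal velocity, cm/s (assumed standard value)
tau = 210e-9     # carrier lifetime, s (value quoted above)

# Defect densities from the following paragraph: holes 1.47e16 cm^-3,
# electrons 1.28e16 cm^-3; eq. S1 rearranged for the capture cross section.
for carrier, N_t in [("hole", 1.47e16), ("electron", 1.28e16)]:
    sigma = 1.0 / (N_t * tau * v_t)
    print(f"{carrier}: sigma = {sigma:.3e} cm^2")
# -> hole: 3.239e-17 cm^2, electron: 3.720e-17 cm^2
```

Both numbers reproduce the capture cross sections quoted below to within a fraction of a percent, confirming the internal consistency of the fitted defect parameters.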
Therefore, the capture cross sections of the hole and electron for the CsFAMA layer are \(3.245\times 10^{-17}\) cm\({}^{2}\) and \(3.727\times 10^{-17}\) cm\({}^{2}\), respectively. As presented in the corresponding table, the perovskite absorber layer, called the CsFAMA layer, is p-type, and the hole and electron defect densities in the CsFAMA layer are \(1.47\times 10^{16}\) and \(1.28\times 10^{16}\)\(cm^{-3}\), respectively [3]. This yields electron and hole carrier lifetimes of 210 ns and electron and hole carrier diffusion lengths L\({}_{\text{n}}\) = L\({}_{\text{p}}\) = 5.2 \(\upmu\)m. First, we reproduced the experimental J-V characteristic of the PSC with 15.52% efficiency mentioned in the literature [3]. The simulation employed the parameters listed in the supplementary tables, together with the parameters for the rear contact interface fitted to the experimental J-V curve. The device is optimized by varying the perovskite absorber layer thickness from 50 to 1000 nm in 20 steps, while the HTL thickness is kept at 300 nm. Panel (a) of the calibration figure depicts the calibration of the J-V curve, while panel (b) shows the calibration of the EQE curve under AM1.5G illumination. As shown in Figure S2(a), we find an optimized thickness of 550 nm for the perovskite absorber layer and 300 nm for the HTL under the AM1.5G spectrum. The J-V curve representing the calibration of the top sub-cell with a 550 nm perovskite absorber layer and 300 nm HTL is shown in panel (a), and the EQE curve calibrated with the same thicknesses under AM1.5G illumination is shown in panel (b). The results show that performance parameters of PCE = 15.68%, V\({}_{\rm oc}\) = 1.09 V, J\({}_{\rm sc}\) = 22.25 mA/cm\({}^{2}\), and FF = 64.21% with R\({}_{\rm s}\) = 9.97 \(\Omega\).cm\({}^{2}\) are obtained. When the thicknesses of the perovskite absorber layer and the Spiro-OMeTAD are 500 and 100 nm, respectively, the corresponding J-V and EQE curves under AM1.5G illumination yield a PCE of 15.68%, V\({}_{\rm oc}\) of 1.10 V, J\({}_{\rm sc}\) of 21.97 mA/cm\({}^{2}\), and FF of 64.79%, with R\({}_{\rm s}\) of 9.97 \(\Omega\).cm\({}^{2}\). The energy band diagrams for the perovskite top sub-cell are obtained from device simulation and illustrated in panels (a) and (b) of the corresponding figure. To understand the generation and the direction of motion of the electron-hole pairs, data are obtained under both dark and illuminated conditions. Furthermore, the interface produces an appropriate electric field that separates and collects the charge carriers while suppressing the motion of carriers of the opposite type, as presented in panel (b). The performance parameters as a function of the thickness variation are summarized in the corresponding supplementary figure. For an increase in thickness up to 700 nm, \(\text{J}_{\text{sc}}\) increases to 22.81 \(\text{mA/cm}^{2}\) and thereafter remains almost constant. This increase is due to the enhanced absorption and generation rate. Nevertheless, \(\text{V}_{\text{oc}}\) changes inversely with thickness owing to the decreased electric field strength across the thick perovskite layer. The excitons created by the absorption of photons are unable to surmount the potential barrier of the depletion layer. This may give rise to a greater recombination rate of charge carriers and leads to a decrease in \(\text{V}_{\text{oc}}\).
Consequently, the reduction of the electric field reduces the probability of charge carrier separation and diminishes the \(\text{V}_{\text{oc}}\) and FF of the PSC. The increased series resistance of thick perovskite layers is another factor that reduces the FF. The combined influence of \(\text{V}_{\text{oc}}\), \(\text{J}_{\text{sc}}\), and FF on the PCE confirms that the increase in \(\text{J}_{\text{sc}}\) dominates the decreases in \(\text{V}_{\text{oc}}\) and FF. In addition, the PCE increases to 15.53% as the thickness increases up to 400 nm and thereafter remains almost unchanged. Various experimental studies have already shown that the performance of solar cells is highly dependent on the morphology of the perovskite absorber layer, which has a direct effect on the photogenerated carrier lifetime and diffusion length [6, 7]. The silicon solar cell, which is comprised of p\({}^{++}\)-Si/n-Si/n\({}^{++}\)-Si and has been calibrated before (Amri et al., 2021), is used in our study. To identify the charge carrier dynamics of the silicon bottom sub-cell, energy band diagrams are drawn in panels (a) and (b) of the corresponding figure. In addition, the presence of the interface creates a suitable electric field for the separation and collection of charge carriers and the suppression of the motion of carriers of the opposite type, as shown in panel (b). Now, we change the thickness of the silicon absorber layer from 10 to 120 \(\upmu\)m and study the J-V and EQE curves of the bottom sub-cell. The obtained results for different thicknesses of the n-Si layer are presented in Figure S8. \begin{table} \begin{tabular}{l c c c} \hline Parameters & n\({}^{++}\)-Si & n-Si & p\({}^{++}\)-Si \\ \hline Thickness (\(\upmu\)m) & 0.1 & 80 & 0.02 \\ \(\rm N_{A}\left(cm^{-3}\right)\) & 0 & 0 & 5.0\(\times 10^{19}\) \\ \(\rm N_{D}\left(cm^{-3}\right)\) & 10\({}^{2}\) & 10\({}^{14}\) & 0 \\ \(\rm E_{g}\left(eV\right)\) & 1.12 & 1.12 & 1.12 \\ \(\chi\left(eV\right)\) & 4.05 & 4.05 & 4.05 \\ \(\epsilon_{r}\) & 11.9 & 11.9 & 11.9 \\ \(\mu_{n}\left(cm^{2}/V\,s\right)\) & 1.04\(\times 10^{3}\) & 1.04\(\times 10^{3}\) & 1.04\(\times 10^{3}\) \\ \(\mu_{p}\left(cm^{2}/V\,s\right)\) & 4.2\(\times 10^{2}\) & 4.2\(\times 10^{2}\) & 4.2\(\times 10^{2}\) \\ \(\rm N_{c}\left(cm^{-3}\right)\) & 2.8\(\times 10^{19}\) & 2.8\(\times 10^{19}\) & 2.8\(\times 10^{19}\) \\ \(\rm N_{v}\left(cm^{-3}\right)\) & 2.6\(\times 10^{19}\) & 2.6\(\times 10^{19}\) & 2.6\(\times 10^{19}\) \\ \hline \end{tabular} \end{table} Table S5: Elementary parameters for silicon simulation (Burgelman et al., 2000; Amri et al., 2021). The performance parameters of the Si bottom sub-cell as a function of the Si thickness variation are summarized in Figure S9. ### S4: Calculated filtered spectra using BL and TM methods Figure 15: AM1.5G and filtered spectra. The different filtered spectra correspond to different perovskite thicknesses and are calculated using the transfer matrix (TM) method. ## S5: Detailed values of \(\mathbf{J_{sc}}\) of the bottom sub-cell Table S6: \(\mathbf{J_{sc}}\) values of the bottom sub-cell corresponding to different silicon absorber layer thicknesses. The silicon bottom sub-cell is studied under different filtered spectra (using the TM method) corresponding to different perovskite thicknesses. \(\mathbf{J_{sc}}\) values are reported for the minimum (10 \(\upmu\)m), maximum (500 \(\upmu\)m), and optimized thicknesses of the silicon absorber layer.
\begin{tabular}{c c c} \hline **Perovskite thickness (nm)** & **Silicon thickness (\(\upmu\)m)** & \(\mathbf{J_{sc}}\) **of bottom sub-cell (mA/cm\({}^{2}\))** \\ \hline & 10 & 15.82 \\ & **120** & **18.88 (max)** \\ & 500 & 14.75 \\ \hline & 10 & 12.43 \\ & **130** & **15.44 (max)** \\ & 500 & 12.18 \\ \hline & 10 & 10.15 \\ & **140** & **13.18 (max)** \\ & 500 & 10.32 \\ \hline & 10 & 8.49 \\ & **140** & **11.54 (max)** \\ & 500 & 9.14 \\ \hline & 10 & 8.11 \\ & **150** & **11.16 (max)** \\ & 500 & 8.85 \\ \hline 1000 & 10 & 7.92 \\ & **140** & **10.96 (max)** \\ & 500 & 8.65 \\ \hline \end{tabular} Figure 16: AM1.5G and filtered spectra. The different filtered spectra correspond to different perovskite thicknesses and are calculated using the Beer-Lambert (BL) method. ## S6: Contour plots of the photovoltaic parameters of the bottom sub-cell
2306.10593
Cosmic GREA from SMBH growth
General Relativistic Entropic Acceleration (GREA) gives a general framework in which to study multiple out-of-equilibrium phenomena in the context of general relativity, like the late accelerated expansion of the universe or the formation of galaxies and the large scale structure of the universe. Here we study the consequences of mass accretion onto massive Black Holes. We find that a population of Super Massive Black Holes (SMBH) whose mass grows significantly due to accretion can act as a source of entropic acceleration and constitute a significant part of the present acceleration of the Universe.
Juan Garcia-Bellido
2023-06-18T16:35:44Z
http://arxiv.org/abs/2306.10593v1
# Cosmic GREA from SMBH growth ###### Abstract General Relativistic Entropic Acceleration (GREA) gives a general framework in which to study multiple out-of-equilibrium phenomena in the context of general relativity, like the late accelerated expansion of the universe or the formation of galaxies and the large scale structure of the universe. Here we study the consequences of mass accretion onto massive Black Holes. We find that a population of Super Massive Black Holes (SMBH) whose mass grows significantly due to accretion can act as a source of entropic acceleration and constitute a significant part of the present acceleration of the Universe. ## I I. Introduction Understanding the origin of Dark Matter (DM) and Dark Energy (DE) is one of the fundamental quests of Modern Cosmology. Although their phenomenology is well understood, and the actual values of the parameters that characterize these two contributions to the matter and energy content of the universe can be determined to the level of a few percent, their nature is a total mystery. Primordial Black Holes (PBH) have recently experienced a renaissance as a serious contender for all the DM in the universe [1], when generated during the radiation era from large matter fluctuations that seed small and large scale structures. Those PBH can explain a plethora of astrophysical and cosmological phenomena, and have been suggested to explain the unexpected gravitational wave events seen by LIGO/Virgo [2]. Moreover, we have recently proposed that for systems in which there is a production of entropy, the laws of thermodynamics must be incorporated into the Einstein equations via a thermodynamic constraint on the variational principle of the matter-gravity action [3]. An effective way of doing this is by adding a viscosity term to the energy-momentum tensor. This new term could give rise to an accelerating universe from the quantum entanglement entropy associated with our cosmological horizon [4], a scenario that has been compared with cosmological observations [5]. However, other sources of entropy could also contribute to the local expansion of the universe. We pointed out in Ref. [3] that in the absence of matter accretion or merging, the conservation of BH entropy cannot account for any cosmic acceleration. However, super massive black holes (SMBH) at the centers of galaxies are surrounded by a massive accretion disk which feeds into the black hole, and their masses can grow very fast [6]. We consider here the effect that such an early growth in the mass of SMBH has on the entropic force that accelerates the universe. We will conclude that these PBH seeds, which have grown since recombination to become the present SMBH [1], could also be the source for the Dark Energy of the universe. ## II II. GREA and the Einstein equations The basic concept here is coarsegraining. Suppose we have a mechanical system that consists of two components: i) a set of _slow_ degrees of freedom described by canonical coordinates with conjugate momenta (\(q\), \(p\)), and ii) some _fast_ degrees of freedom coarsegrained as a thermodynamical system characterized by macroscopic quantities, entropy and temperature (\(S\), \(T\)). The action is then given by \[{\cal S}=\frac{1}{2\kappa}\int d^{4}x\sqrt{-g}\,R+\int d^{4}x\sqrt{-g}\,{\cal L }_{m}(g_{\mu\nu},\,s)\,,\] where \(s\) is the entropy density and \(\kappa=8\pi G\).
The variational principle tells us that \[\delta{\cal S}= \int d^{4}x\left(\frac{1}{2\kappa}\frac{\delta(\sqrt{-g}\,R)}{ \delta g^{\mu\nu}}+\frac{\delta(\sqrt{-g}\,{\cal L}_{m})}{\delta g^{\mu\nu}} \right)\delta g^{\mu\nu}+\int d^{4}x\sqrt{-g}\,\frac{\partial{\cal L}_{m}}{\partial s} \delta s\,.\] The interaction between the two components is described by a thermodynamical constraint, in the form of the First Law of Thermodynamics, \(\frac{\partial{\cal L}_{m}}{\partial s}\delta s=\frac{1}{2}f_{\mu\nu}\,\delta g ^{\mu\nu}\), which gives rise to the Einstein field equations _extended_ to out-of-equilibrium phenomena [3], \[G_{\mu\nu}=R_{\mu\nu}-\frac{1}{2}R\,g_{\mu\nu}=\kappa\,(T_{\mu\nu}-f_{\mu\nu} )\equiv\kappa\,{\cal T}_{\mu\nu}\,. \tag{1}\] Here \(f_{\mu\nu}\) arises from the first law of thermodynamics \[-dW=-\vec{F}\cdot d\vec{x} = dU+\left(p-T\frac{dS}{dV}\right)dV \equiv dU+\tilde{p}\,dV \tag{2}\] where we have defined an _effective_ pressure \(\tilde{p}\) which reduces to the usual fluid pressure \(p\) in the absence of entropy production. This extra component of the Einstein equations can be interpreted as an effective bulk viscosity term of a real (non-ideal) fluid [3], with \(\Theta=D_{\lambda}u^{\lambda}\) the trace of the congruence of geodesics, \[f_{\mu\nu}=\zeta\,\Theta\,(g_{\mu\nu}+u_{\mu}u_{\nu})=\zeta\,\Theta\,h_{\mu \nu}\,, \tag{3}\] such that the covariantly-conserved energy-momentum tensor has the form of a perfect fluid tensor, \[{\cal T}^{\mu\nu} = p\,g^{\mu\nu}+(\rho+p)u^{\mu}u^{\nu}-\zeta\,\Theta\,h^{\mu\nu} \tag{4}\] \[= \tilde{p}\,g^{\mu\nu}+(\rho+\tilde{p})u^{\mu}u^{\nu}\,, \tag{5}\] and, imposing the thermodynamic constraint (2), the bulk viscosity coefficient \(\zeta\) can be written as \[\zeta=\frac{T}{\Theta}\frac{dS}{dV}\,. \tag{6}\] In the case of an expanding universe, \(\Theta=\frac{d}{dt}\ln V=3H\) and the coefficient becomes \(\zeta=T\dot{S}/(9H^{2}a^{3})\), see [4], with \(S\) the entropy per comoving volume of the Universe. Entropy production therefore implies \(\zeta>0\). Note that the energy-momentum tensor is still diagonal, \({\cal T}^{\mu}_{\ \nu}={\rm diag}(-\rho,\,\tilde{p},\,\tilde{p},\,\tilde{p})\), and that the \(00\) component is unchanged with respect to GR. Only the \(ij\) component has the entropy-growth dependence via \(\tilde{p}\). The Raychaudhuri equation [7] for geodesic motion (\(a^{\mu}=u^{\nu}D_{\nu}u^{\mu}=0\)) in the absence of shear (\(\sigma_{\mu\nu}\,\sigma^{\mu\nu}=0\)) and vorticity (\(\omega_{\mu\nu}\,\omega^{\mu\nu}=0\)) is given by \[\frac{D}{d\tau}\Theta+\frac{1}{3}\Theta^{2} = -R_{\mu\nu}u^{\mu}u^{\nu} = -\kappa\left(T_{\mu\nu}u^{\mu}u^{\nu}+\frac{1}{2}T^{\lambda}_{ \ \lambda}-\frac{3}{2}\zeta\Theta\right) = -\frac{\kappa}{2}(\rho+3\tilde{p})=-\frac{\kappa}{2}\left(\rho+ 3p-3T\frac{dS}{dV}\right)\,. \tag{7}\] Due to the extra entropic term in the effective pressure \(\tilde{p}\), even for matter that satisfies the strong energy condition, \(\rho+3p>0\), it is possible to prevent gravitational collapse, i.e. \(\dot{\Theta}+\Theta^{2}/3>0\), as long as the production of entropy is significant enough, \(3TdS/dV>(\rho+3p)>0\). ## III III. Entropy of the BH horizon One can also wonder about the effect of the entropy associated with space-time itself, in particular with horizons. It can be incorporated in a natural way by extending the Einstein-Hilbert action with a surface term, the Gibbons-Hawking-York (GHY) term of Refs. [8, 9].
Let us consider a space-time manifold \({\cal M}\) with metric \(g_{\mu\nu}\), which has a horizon hypersurface that we denote by \({\cal H}\). This is a submanifold of the whole space-time. By taking \(n^{\mu}\), the normal vector to the hypersurface \({\cal H}\), we can define an inherited metric on \({\cal H}\): \[h_{\mu\nu}=g_{\mu\nu}+n_{\mu}n_{\nu}\,. \tag{8}\] With this, one can define the GHY term as \[S_{\rm GHY}=\frac{1}{8\pi G}\int_{\cal H}d^{3}y\sqrt{h}K\,, \tag{9}\] where \(K\) is the trace of the extrinsic curvature of the surface. Notice that we are not foliating the entire space-time, but rather considering the properties of a particular hypersurface, the horizon. From the thermodynamic point of view, the GHY term contributes to the free energy of the system. Hence, it can be related to the temperature and entropy of the horizon as [3] \[S_{\rm GHY}=-\int dt\,N(t)\,TS\,, \tag{10}\] where we have kept the lapse function \(N(t)\), to indicate that the variation of the total action with respect to it will generate a Hamiltonian constraint with an entropy term together with the ordinary matter-energy terms. This leads to the realization that what drives gravity in a thermodynamical context is not just the internal energy of a system, \(U\), but its Helmholtz free energy, \(F=U-TS\). In other words, entropy gravitates, or perhaps we should better say entropy "antigravitates", since it is responsible for a repulsive force. What gravitates is information. #### Schwarzschild black hole In order to illustrate this, let us now compute the GHY action for the event horizon of a Schwarzschild black hole of mass \(M\). Its space-time is described by the metric: \[ds^{2}=-\left[1-\frac{2GM}{r}\right]\!dt^{2}+\left[1-\frac{2GM}{r}\right]^{-1} \!\!\!dr^{2}+r^{2}d\Omega^{2}\,. \tag{11}\] The normal vector to a 2-sphere of radius \(r\) around the origin of coordinates is \[n=-\sqrt{1-\frac{2GM}{r}}\partial_{r}\,. \tag{12}\] With this, the trace of the extrinsic curvature for such a sphere scaled by the metric determinant is \[\sqrt{h}K=(3GM-2r)\sin\theta\,. \tag{13}\] Integrating over the angular coordinates and setting the 2-sphere at the event horizon, i.e. \(r=2GM\), and restoring for a moment \(c\), the GHY boundary term becomes \[S_{\rm GHY}=-\frac{1}{2}\int dt\,Mc^{2}=-\int dt\,T_{\rm BH}S_{\rm BH}\,, \tag{14}\] where \(T_{\rm BH}\) is the Hawking temperature and \(S_{\rm BH}\) is the Bekenstein entropy of the Schwarzschild black hole [10], \[k_{\rm B}T_{\rm BH}=\frac{\hbar c^{3}}{8\pi GM}\,,\hskip 28.452756ptS_{\rm BH}=k_{ \rm B}\,\frac{4\pi GM^{2}}{\hbar c}\,. \tag{15}\] This favors the interpretation of the GHY boundary term as a contribution to the Helmholtz free energy in the thermodynamic sense, \(F=U-TS\). Note also that the action (14) is classical, essentially the rest mass energy of the BH, although both the Hawking temperature and Bekenstein entropy are quantum mechanical quantities, associated with the entanglement of the fundamental degrees of freedom between the interior and exterior of the horizon of a black hole. We interpret this result as an _emergent_ phenomenon, from microscopic degrees of freedom to a coarsegrained description in terms of thermodynamical quantities like temperature and entropy, where all fundamental constants \((\hbar,k_{\rm B},G)\) cancel out, except \(c\). In Ref.
[3] we argued that, in the absence of significant clustering or merging, the masses of black holes remain constant and thus there would be no entropy production associated with stellar black holes in our universe. However, let us consider here an alternative possibility. ## IV PBH at the origin of both dark matter and dark energy We consider here the possibility that Dark Matter is composed of Primordial Black Holes (PBH) and that a small fraction of these black holes, with masses \(M_{\rm BH}\sim 10^{6}M_{\odot}\), constitute the seeds of SMBH at the centers of galaxies [1]. These black holes accrete mass from the environment at a rate that is commensurate with the rate of expansion of the universe. When accretion of gas from the surroundings reaches the Eddington limit, the mass of the SMBH grows like [11] \[\dot{M}=\frac{4\pi G\,m_{p}}{0.1c\,\sigma_{T}}\,M\simeq\frac{M}{40\,{\rm Myr}} =\frac{2}{t(z_{*})}\,M\,, \tag{16}\] where \(m_{p}\) is the proton mass and \(\sigma_{T}\) is the Thomson cross-section, and we have used \(t(z_{*}\simeq 35)=80\) Myr for a Universe with \(\Omega_{\rm M}=0.31\). If we now assume that SMBH continue to accrete gas at the Eddington limit with a rate that decreases in time with the density of matter available in the universe (\(\dot{\rho}/\rho=-2/t\)), then the mass of SMBH will grow due to accretion as \(M\propto t^{2}\sim a^{3}=V\) in the past, at least since \(z_{*}\simeq 35\), see also [12]. If we compute the general relativistic entropic acceleration (GREA) induced by the growth of entropy associated with this increase in mass, \(V\,dS_{\rm SMBH}=2\,S_{\rm SMBH}dV\), we see that it contributes a _constant_ negative pressure \[p_{S}=-T\frac{dS}{dV}=-2\frac{TS}{V}=-\frac{N_{\rm SMBH}M_{\rm SMBH}}{V}=- \rho_{\rm SMBH}\,, \tag{17}\] where the total entropy is \[S=\sum_{i}S_{\rm SMBH}^{(i)}=N_{\rm SMBH}S_{\rm SMBH},\] with \(N_{\rm SMBH}\) the total number of SMBH in the universe, assumed _constant_ (i.e. without SMBH merging). We then find the Raychaudhuri equation (7), where we have separated the ordinary (adiabatic) matter characterized by (\(\rho\), \(p\)) from the SMBH, \[\dot{H}+H^{2} = \frac{\ddot{a}}{a}\,= -\frac{4\pi G}{3}\left(\rho+3p+\rho_{\rm SMBH}+3p_{S}\right) \tag{18}\] \[= -\frac{4\pi G}{3}\left(\rho+3p\right)+\frac{8\pi G}{3}\rho_{\rm SMBH }\,.\] The last _constant_ term can be interpreted as an _effective_ and _positive_ Cosmological Constant term \(\Lambda=\kappa\,\rho_{\rm SMBH}\), driving cosmic acceleration, while the rest of the (adiabatic) matter and radiation in the universe, the baryons and photons, as well as the PBH that do not accrete significantly and thus act as Cold Dark Matter, would contribute to cosmic deceleration. We can now evaluate the contribution of such a term to the present acceleration of the universe. If PBH constitute all of the Dark Matter in the universe, and a small fraction of these (the seeds of SMBH) have masses that increase with the cosmic volume, then their contribution is identical to that of a cosmological constant with the same density, \[H^{2}=\frac{8\pi G}{3}\left(\rho+\rho_{\rm PBH}+\rho_{\rm SMBH}\right)\,. \tag{19}\] What these equations (18) and (19) are telling us is that primordial SMBH, rather than contributing to the Dark Matter of the universe today, are actually the source of Dark Energy. Whether there has been a gradual change from DM to DE over the course of time, as PBH grow due to accretion, is a matter for discussion in comparison with cosmological observations [5].
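As a rough consistency check of the growth rate in eq. (16), the sketch below evaluates the implied e-folding time \(0.1\,c\,\sigma_{T}/(4\pi G\,m_{p})\) (the Salpeter time for a radiative efficiency of 0.1) from standard physical constants; it gives \(\approx 45\) Myr, of the same order as the 40 Myr used above.

```python
# Eddington growth e-folding time implied by eq. (16):
# Mdot/M = 4*pi*G*m_p / (0.1*c*sigma_T)  =>  t_fold = 0.1*c*sigma_T / (4*pi*G*m_p)
import math

G = 6.674e-11        # gravitational constant (m^3 kg^-1 s^-2)
c = 2.998e8          # speed of light (m/s)
m_p = 1.673e-27      # proton mass (kg)
sigma_T = 6.652e-29  # Thomson cross-section (m^2)
MYR = 3.156e13       # seconds per Myr

t_fold = 0.1 * c * sigma_T / (4 * math.pi * G * m_p)
print(f"e-folding time: {t_fold / MYR:.1f} Myr")  # ~45 Myr
```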
There is also the possibility that only a small fraction of the PBH that contributed to DM in the early universe, e.g. just the SMBH in the centers of galaxies, grew sufficiently rapidly to contribute significantly to the GREA of the universe. In that case, the bulk of the PBH would still contribute to DM today and only a small fraction, around \(f_{\rm SMBH}=5\times 10^{-5}\) of all PBH [13], would contribute to DE in the form of rapidly accreting SMBH with entropy growth associated with their horizons (17). The rapid increase in mass of the SMBH at the centers of galaxies since \(z_{*}\simeq 35\) can quickly increase their contribution to DE (and compensate for their tiny contribution in numbers to the total amount of PBH), \[\Omega_{\rm DE}=f_{\rm SMBH}\,\Omega_{\rm DM}\,(1+z_{*})^{3}=0.69\,, \tag{20}\] for \(\Omega_{\rm DM}=0.26\). A more sophisticated computation may be needed for the case that PBH in a broader range of masses happen to accrete mass at a slower rate but for a longer time, \[\Omega_{\rm DE}=\Omega_{\rm DM}\int\frac{f(M)}{M}\,\frac{dM}{dz}dz\,, \tag{21}\] where \(f(M)\) is the fraction of DM in the form of PBH that accrete mass at a rate \(dM/dz\). The integral of all contributions would have to be used to estimate not only the actual value of \(\Omega_{\rm DE}\), but also its possible rate of change, and thus the effective DE parameters (\(w_{0}\), \(w_{a}\)), which could then be compared with observations [5]. Such a computation is beyond the scope of this letter. Note that we have assumed above that the SMBH are uniformly distributed in our universe when they started accreting gas from their surroundings at \(z_{*}\sim 35\). Their density at that time, \(\rho_{\rm SMBH}\simeq\rho_{c}^{0}\,f_{\rm SMBH}\,\Omega_{\rm DM}\,(1+z_{*})^{3} =10^{6}\,M_{\odot}/(20\,{\rm kpc})^{3}\), is about one SMBH within a sphere of 20 kpc radius, which would correspond to a comoving radius of less than a Mpc today, well below the scale of inhomogeneities. Therefore, we can ignore possible inhomogeneities in the local GREA induced by the distribution of SMBH and their mass growth. ## V V. Conclusions We have explored in this letter the possibility that GREA may account for the present acceleration of the universe, arising from the entropy growth associated with the mass accretion onto SMBH since the cosmic dark ages. This way, a tiny fraction of the PBH that seed structure and contribute to the Dark Matter of the universe, more specifically the SMBH at the centers of galaxies seeded by the massive PBH from the \(e^{+}e^{-}\) annihilation era [14], would gain mass and also drive the acceleration of the universe. It is only recently that SMBH, seeded by massive PBH, have started to accrete mass and induce a cosmic acceleration via the entropic force associated with GREA. In the past, only matter and radiation drove the decelerated expansion of the universe. When GREA started to drive acceleration via SMBH growth, it acted as an effective cosmological constant term, which eventually dominated the free energy budget. This could explain the actual value of the so-called dark energy density today, as well as the coincidence problem, i.e. why both dark energy and dark matter contributions are of the same order. The local GREA around each SMBH is sufficiently uniformly distributed in the past (when mass growth from accretion was dominant), over comoving scales of order a Mpc, that we can assume homogeneity of the accelerated expansion over the entire universe.
Moreover, GREA from the cosmological causal horizon [4] is still a valid alternative. Both come from emergent phenomena associated with horizon entropies; they both have a quantum origin in entanglement, and they could have comparable contributions to the present acceleration of the universe. Which one dominates today, and how this splitting determines the rate of change of the acceleration over time, and thus observations of the effective Dark Energy parameters (\(w_{0}\), \(w_{a}\)), is still a matter of investigation. What is the fate of the SMBH's contribution to DE? There will be a time at which the mass growth of SMBH will stop, after they have consumed the majority of the gas in their accretion disks. From then onwards, SMBH will conserve entropy (unless they merge with other SMBH), and thus the associated GREA will stop being a driving accelerating force. In fact, this epoch of stalled mass growth may be near the present age of the Universe. These quiescent supermassive black holes will then only contribute as CDM, like the rest of the PBH, and thus will decelerate the expansion of the universe, which will redshift away their energy density and will end in an empty Universe (possibly after the evaporation of all these BH due to Hawking radiation [10]), corresponding to a Minkowski space-time. ## VI Acknowledgements The author acknowledges support from the Spanish Research Project PID2021-123012NB-C43 [MICINN-FEDER], and the Centro de Excelencia Severo Ochoa Program CEX2020-001007-S at IFT.
2302.02460
Nonparametric Density Estimation under Distribution Drift
We study nonparametric density estimation in non-stationary drift settings. Given a sequence of independent samples taken from a distribution that gradually changes in time, the goal is to compute the best estimate for the current distribution. We prove tight minimax risk bounds for both discrete and continuous smooth densities, where the minimum is over all possible estimates and the maximum is over all possible distributions that satisfy the drift constraints. Our technique handles a broad class of drift models, and generalizes previous results on agnostic learning under drift.
Alessio Mazzetto, Eli Upfal
2023-02-05T19:09:50Z
http://arxiv.org/abs/2302.02460v2
# Nonparametric Density Estimation under Distribution Drift ###### Abstract We study nonparametric density estimation in non-stationary drift settings. Given a sequence of independent samples taken from a distribution that gradually changes in time, the goal is to compute the best estimate for the current distribution. We prove tight minimax risk bounds for both discrete and continuous smooth densities, where the minimum is over all possible estimates and the maximum is over all possible distributions that satisfy the drift constraints. Our technique handles a broad class of drift models, and generalizes previous results on agnostic learning under drift. ## 1 Introduction Density estimation is a fundamental concept in statistics with numerous applications in data analysis and machine learning. Given a set of samples, the goal is to best estimate the probability distribution that generated these samples, often subject to some parametric or nonparametric assumptions on the family of candidate distributions. This problem has been studied extensively, for both discrete and continuous distribution functions, assuming that the samples are independent and identically distributed according to the distribution that we aim to estimate (Devroye & Lugosi, 2001). In many data analysis applications, ranging from customers' preferences to weather conditions, the assumption that the samples are identically distributed is unrealistic. The underlying distribution is gradually changing in time, and estimating the current distribution inevitably relies on past data from related but not identical distributions. This work presents the first tight bounds for density estimation of both discrete and continuous smooth distributions from samples that are independent but generated by distributions that drift in time. Distribution drift has been studied in the context of agnostic learning with the assumption of equal bounded drift at each step (Bartlett, 1992), i.e. there is a constant \(\Delta>0\) such that the drift between two distributions \(i\) steps apart is bounded by \(i\Delta\). In this case, it has been shown that the minimax risk for learning a family of binary classifiers with VC dimension \(\nu\) is \(\Theta((\nu\Delta)^{1/3})\) (Barve & Long, 1996; Long, 1999), where the minimax risk characterizes the maximal expected error of the best algorithm that solves the problem within the specified drift assumptions. The minimax risk has not been studied under other drift patterns in this context. In this work we study the more general problem of density estimation, under a more detailed family of drift models. In particular, our results apply to any _regular_ drift sequence, where the \(i\)th element of this sequence provides an upper bound on the distance between the current distribution and the \(i\)th distribution in the sequence, and the regularity assumption prevents any abrupt change in the drift (Definition 3.1). The distance used depends on the specific estimation problem; in particular, we will use the total variation distance in the case of discrete densities, and the \(L_{2}\) distance in the case of smooth densities. The bounded drift per step is one possible model in our setting. However, when more information is available, we obtain more informative tight bounds.
For the problem of estimating a discrete distribution with support size \(k\) using \(n\) samples from a drifting distribution, we show a minimax risk of \(\Theta\left(\sqrt{k/r}\right)\) with respect to the total variation distance, where \(r\leq n\) is an integer that is easily derived from the drift sequence. For the special case of no drift, we retrieve the known minimax risk \(\Theta(\sqrt{k/n})\) for the estimation of a discrete density with \(n\) independent and identically distributed samples. Since the problem of estimating a discrete density with support size \(k\) with respect to the total variation distance can be reduced to the problem of agnostic learning a family of binary classifiers with VC dimension \(k\), our results also imply a lower bound on the minimax risk for the latter problem. In particular, our results generalize the previously known lower bound that only holds for the case of bounded drift at each step (Barve & Long, 1996). The following simple example demonstrates the power of our approach. Assume a sample space of size \(k\), and a distribution drift that follows this pattern: a probability mass of \(1-\Delta\) is distributed between the \(k\) elements and does not change in time. The remaining \(\Delta\) probability mass is redistributed between the \(k\) elements at each step. The drift in each step is bounded by \(\Delta\), and based only on this bound the best estimate can only guarantee a \(\Theta((k\Delta)^{1/3})\) error. However, the drift between the current distribution and any past distribution is also bounded by \(\Delta\). Incorporating this extra information, our technique provides a tighter \(\Theta(\Delta)\) error estimate. In Section 4.1, we show a similar gap for the agnostic learning problem. For the smooth density estimation problem, we seek to estimate a probability density that is \(\beta\)-smooth (Definition 5.1). We focus on the nonparametric case, i.e. we make no parametric assumptions on the target density. We establish a minimax risk of \(\Theta\left(r^{-2\beta/(2\beta+1)}\right)\) with respect to the integrated squared loss, where again \(r\leq n\) depends on the drift sequence. This result extends the known minimax risk \(\Theta\left(n^{-2\beta/(2\beta+1)}\right)\) for estimating a density from \(n\) independent and identically distributed samples (Van der Vaart, 2000). The results we have discussed so far establish the minimax risk for learning the current distribution at a given specific time based on the past data. However, we are also interested in characterizing the minimax rate of the average risk for the online version of those problems, where we are required to provide an estimate of the current distribution at each step. This is a more challenging problem, as in the lower bound construction we need to show that we frequently incur a high estimation error. Nonetheless, we show that in the case of a bounded drift \(\Delta\) at each step, the minimax rate for the online estimation of a discrete density with support size \(k\) is \(\Theta((k\Delta)^{1/3})\). As previously discussed, this result also applies to the problem of agnostic learning a family of binary classifiers. This is the first work in the literature to provide a characterization of the minimax rate for an online learning problem in a distribution drift setting. Following previous work on this setting, our upper bounds on the minimax risk are obtained by considering an estimator over a window of recent samples of properly chosen size.
The size of the window is chosen to minimize the trade-off between the variance of the estimator and the error introduced by considering samples from distributions that are further away in time and exhibit a large drift. In the literature, the only lower bound construction for drifting distributions is specific to the problem of agnostic learning of binary classifiers (Barve & Long, 1996), and it assumes a bounded drift at each step. In our paper, we develop a novel proof strategy that allows us to obtain tight lower bounds for both the problem of discrete density estimation and that of smooth density estimation under any regular drift sequence. We believe that our method is of independent interest, and can possibly be applied to other estimation problems with drifting distributions. ### Related Work The distribution drift setting was introduced in the context of the supervised learning of binary predictors (Helmbold & Long, 1991; Bartlett, 1992; Helmbold & Long, 1994). In this line of work, it has been shown that there exists an algorithm that finds a binary predictor whose expected prediction error with respect to the current distribution is at most \(O(\sqrt[3]{\nu\Delta})\) larger than the expected error of the best predictor in the family, where \(\nu\) is the VC-dimension of the considered family of binary predictors and \(\Delta\) is an upper bound on the total variation distance of two consecutive distributions (Long, 1999). This upper bound is tight (Barve & Long, 1996). More recent work generalized this analysis to provide upper bounds for learning any family of predictors with bounded Rademacher complexity, and introduced finer measures of distance between consecutive distributions (Mohri & Munoz Medina, 2012): this work uses tools from transfer learning theory (Mansour et al., 2009), as learning with distribution drift is a special case of learning with domain shift (Ben-David et al., 2010). The problem of relaxing the independent and identically distributed assumption on the samples for density estimation has been studied in the literature from a theoretical perspective. However, these works significantly diverge from our setting as they use different sets of assumptions. Multiple works addressed the problem of estimating the stationary distribution of a Markov process (Roussas, 1969; Wen et al., 2020), even for arbitrary initial distributions (Gilbert & Wartenberg, 1984). In (Phillips & Park, 1998), the authors developed an asymptotic theory for the kernel density estimate of a random walk and the kernel regression estimator of a nonstationary first-order autoregression. More similar to our setting, recent work (Gokcesu & Kozat, 2017) provides parametric density estimation results for exponential families where the parameters of the distributions are allowed to slowly change at each step. Many other algorithms have also been proposed for online learning of densities (Kristan et al., 2011; Garcia-Trevino & Barria, 2012); however, they do not come with a theoretical analysis. The problem of density estimation in the case of independent and identically distributed samples has been extensively studied in the literature; see (Silverman, 1986; Groeneboom & Jongbloed, 2014; Scott, 2015) for an overview of old and recent work on this topic. In the case of estimating a distribution with finite support size \(k\), it is folklore that we can achieve an expected error \(O(\sqrt{k/n})\) in total variation distance with \(n\) samples.
This upper bound is tight (Anthony et al., 1999), and the minimax risk bound has been computed with exact constants (Kamath et al., 2015). In the case of the estimation of a \(\beta\)-smooth density from \(n\) independent and identically distributed samples, it is possible to obtain an expected squared error of \(O\left(n^{-\frac{2\beta}{2\beta+1}}\right)\), and we refer to (Tsybakov, 2009) for a recent book on the topic. This upper bound is achieved by different methods such as kernel density estimation (e.g., see Van der Vaart (2000)), and it can be proven to be tight by using information-theoretic methods from minimax theory (Devroye & Gyorfi, 1987; Yu, 1997). ### Our Contributions 1. We introduce the concept of _regular drift sequence_ - a general framework for characterizing distribution drift (Section 3). 2. We establish the minimax risk for discrete density estimation with respect to any regular drift sequence (Section 4). 3. We show a generalization of the previous lower bound for the problem of agnostic learning a family of binary classifiers to any regular drift sequence (Section 4.1). 4. We establish the minimax rate for the online problem of estimating a discrete density with a bounded drift at each step (Section 4.4). 5. We establish the minimax risk for estimating a smooth density with respect to any regular drift sequence (Section 5). ## 2 Preliminary Let \([n]=\{1,\ldots,n\}\) for \(n\in\mathbb{N}\). Consider a non-empty _sample space_\(\mathcal{X}\) equipped with a \(\sigma\)-algebra. Let \((X_{i})_{i\in\mathbb{N}}\) be an independent1 stochastic process defined over \(\mathcal{X}\), and let \(P_{i}\) be the probability distribution of the random variable \(X_{i}\). Given \(n\in\mathbb{N}\), let \(\mathbf{X}_{n}=(X_{1},\ldots,X_{n})\) be the random vector of the first \(n\) elements of the random process. Since the \(X_{i}\)'s are independent, the distribution of \(\mathbf{X}_{n}\) can be written as a _product distribution_\(S=P_{1}\times\ldots\times P_{n}\) over \(\mathcal{X}^{n}\). Given \(i\leq n\), we denote with \(\theta_{i}(S)=P_{i}\) the \(i\)-th component of \(S\). Footnote 1: A stochastic process is independent iff every finite subset of its random variables is mutually independent. Let \(\mathcal{S}_{n}\) be a family of candidate probability distributions for the (unknown) distribution \(S\) of the random vector \(\mathbf{X}_{n}\). Given an observed \(\mathbf{X}_{n}\sim S\), our goal is to estimate \(P_{n}=\theta_{n}(S)\). Let \(\hat{\theta}_{n}=\hat{\theta}_{n}(\mathbf{X}_{n})\) be an estimator of this property. Given a suitable metric \(d(\cdot,\cdot)\) that quantifies the error of the estimation, the _minimax risk_ at time \(n\) is \[\inf_{\hat{\theta}_{n}}\sup_{S\in\mathcal{S}_{n}}\mathbb{E}_{\mathbf{X}_{n}\sim S }\left[d\left(\hat{\theta}_{n}(\mathbf{X}_{n}),\theta_{n}(S)\right)\right]\enspace, \tag{1}\] where we take the supremum (worst-case) over all the product distributions \(S\) in \(\mathcal{S}_{n}\), and the infimum over all possible estimators \(\hat{\theta}_{n}\). The minimax risk quantifies the largest estimation error that the best estimator can possibly incur with respect to \(\mathcal{S}_{n}\) at a given time \(n\). We omit the subscript \(n\) when it is clear from the context. For each estimation problem, we adopt the distance most used in the literature for that problem: the _total variation distance_ for discrete densities (probability mass functions) and the \(\ell_{2}\)_distance_ for smooth densities.
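To make the two error metrics concrete, here is a small illustrative sketch (the function names are ours, not the paper's): the total variation distance between two probability mass functions over \([k]\), and a discretized \(\ell_{2}\) distance between two densities evaluated on a common grid.

```python
# Illustrative implementations of the two error metrics used in the paper.
import numpy as np

def tv_distance(p: np.ndarray, q: np.ndarray) -> float:
    """Total variation distance: (1/2) * sum_j |P(j) - Q(j)| for pmfs over [k]."""
    return 0.5 * np.abs(p - q).sum()

def l2_distance(f: np.ndarray, g: np.ndarray, dx: float) -> float:
    """L2 distance between two densities sampled on a grid with spacing dx."""
    return np.sqrt(np.sum((f - g) ** 2) * dx)

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])
print(tv_distance(p, q))  # 0.1
```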
We are also interested in the _minimax rate of the average risk_, which quantifies the average of the estimation errors of an online algorithm that at each step observes a new random variable and outputs an estimate of its distribution based on all the previous observations. Given \(i\leq n\), we let \(\hat{\theta}_{i}(\mathbf{X}_{i})\) be an estimator of \(\theta_{i}(S)=P_{i}\). Given a metric \(d\), the minimax rate of the average risk is defined as \[\inf_{\hat{\theta}_{1},\ldots,\hat{\theta}_{n}}\sup_{S\in\mathcal{S}_{n}} \mathbb{E}_{\mathbf{X}_{n}\sim S}\left[\frac{1}{n}\sum_{i=1}^{n}d\left(\hat{\theta }_{i}(\mathbf{X}_{i}),\theta_{i}(S)\right)\right]\enspace. \tag{2}\] Due to space constraints, some of the proofs are deferred to the appendix. ## 3 Distribution Drift The family of candidate probability distributions \(\mathcal{S}_{n}\) is defined by the assumptions on the distribution drift in the stochastic process. The most widely used assumption in the literature is a bound on the drift at each step (Bartlett, 1992). In our setting, this is formally expressed as follows: there exists \(\Delta>0\) such that \(d(P_{i},P_{i+1})\leq\Delta\) for any \(i\leq n-1\). In the context of learning binary functions, minimax risk lower bounds are known in this setting (Barve & Long, 1996; Long, 1999). However, this is a very pessimistic assumption, as in the worst case the distance from \(P_{i}\) to \(P_{n}\) is \((n-i)\Delta\), since we can accumulate an additive error \(\Delta\) at each step. As an illustrative example that shows the drawback of this assumption, let \(\Delta\in(0,1)\) and consider a sequence of distributions \(P_{1},\ldots,P_{n}\) over \(\mathbb{N}\) such that \(P_{i}\) takes the value \(0\) with probability \(1-\Delta\), and the value \(i\) with probability \(\Delta\). This sequence has bounded drift at each step, i.e. the total variation distance between \(P_{i}\) and \(P_{i+1}\) is equal to \(\Delta\). However, the total variation distance between \(P_{n}\) and \(P_{i}\) is also equal to \(\Delta\) rather than \((n-i)\Delta\). The minimax risk subject to only the weak condition \(d(P_{i},P_{i+1})\leq\Delta\) can thus be arbitrarily far from the best estimation error. An alternative assumption is a polynomial drift (Hanneke et al., 2015). That is, we assume that there exists a value \(\alpha\in[0,1]\) such that \(d(P_{i},P_{n})\leq(n-i)^{\alpha}\Delta\). While this assumption allows one to obtain closed-form upper bounds on the error for the problem of agnostic learning, no lower bounds are known in this setting. In this work, we introduce a more detailed approach for defining drift, and present matching upper and lower bounds under it. **Definition 3.1**.: A vector \(\mathbf{\Delta}_{n}=(\Delta_{1},\ldots,\Delta_{n})\in\mathbb{R}_{\geq 0}^{n}\) is a _regular drift sequence_ for a product distribution \(S=P_{1}\times\ldots\times P_{n}\) with respect to a metric \(d(\cdot,\cdot)\) if: 1. \(\Delta_{i}\) is an upper bound on the drift between \(P_{i}\) and \(P_{n}\), i.e. \(d(P_{i},P_{n})\leq\Delta_{i}\), 2. the sequence \(\Delta_{1},\ldots,\Delta_{n}\) is non-increasing, 3. there is no abrupt change in the drift: there is a constant \(c\) such that \(\Delta_{i-1}/\Delta_{i}\leq c\) for any \(i=2,\ldots,n-1\), 4. \(\Delta_{i}=0\iff i=n\). Comment. Our results also hold if we substitute in the above definition the requirement that \(d(P_{i},P_{n})\leq\Delta_{i}\) with the requirement that \(d(P_{i},P_{i+1})\leq\Delta_{i}-\Delta_{i+1}\) for any \(i\leq n-1\).
The latter property is stronger, as it implies the former; however, our lower bound proof works with either definition, and we will use them interchangeably. In our work, we characterize the minimax risk of density estimation problems under an arbitrary regular drift sequence \(\mathbf{\Delta}_{n}\). Notably, this is the first work to provide lower bounds for estimation problems under such a general model of drift, as the previous lower bound construction assumed a bounded drift at each step (Barve & Long, 1996). ## 4 Discrete Density Estimation In this section, we show the minimax risk and the minimax rate of the average risk for the problem of estimating discrete distributions with finite support under distribution drift. Without loss of generality, we can let the sample space be \(\mathcal{X}=[k]\), where \(k\) denotes the support size. Since the sample space is discrete, a distribution over \(\mathcal{X}\) is a probability mass function \(P\), such that \(P(j)=\Pr_{X\sim P}(X=j)\) for \(j\in[k]\). Following previous work on estimating discrete distributions, we evaluate the quality of the estimation by the _total variation distance_ metric. Given two distributions \(P\) and \(Q\) over \([k]\), their total variation distance is defined as \[\operatorname{TV}(P,Q)\doteq\frac{1}{2}\sum_{j\in[k]}\left|P(j)-Q(j)\right|\.\] We consider the following family of probability distributions \(\mathcal{S}(\mathbf{\Delta}_{n},k)\) over \(\mathcal{X}^{n}\) with regular drift \(\mathbf{\Delta}_{n}\). **Definition 4.1**.: Let \(\mathcal{X}=[k]\). Let \(\mathbf{\Delta}_{n}\in\mathbb{R}_{\geq 0}^{n}\), and let \(k>0\). A product distribution \(S=P_{1}\times\ldots\times P_{n}\) over \(\mathcal{X}^{n}\) belongs to \(\mathcal{S}_{n}(\mathbf{\Delta}_{n},k)\) if and only if \(\mathbf{\Delta}_{n}\) is a regular drift sequence for \(S\) with respect to the metric \(\operatorname{TV}\). We establish the following minimax risk in this setting: **Theorem 4.2**.: _Let \(\mathcal{S}_{n}(\mathbf{\Delta}_{n},k)\) be defined as in Definition 4.1, and let_ \[r^{*}=\max\left\{r\in[n]:\Delta_{n-r+1}\leq\sqrt{\frac{k}{r}}\right\}\enspace.\] _If \(r^{*}\) is well-defined and \(r^{*}>k\), then:_ \[\inf_{\hat{\theta}_{n}}\sup_{S\in\mathcal{S}_{n}(\mathbf{\Delta}_{n},k)} \underset{\mathbf{X}^{n}\sim S}{\mathbb{E}}\operatorname{TV}\left(\hat{ \theta}_{n}(\mathbf{X}_{n}),\theta_{n}(S)\right)=\Theta\left(\sqrt{\frac{k}{ r^{*}}}\right)\] This is the first work to characterize the minimax risk for the discrete density estimation problem under distribution drift. In Section 4.2, we analyze a simple algorithm that achieves the upper bound of the theorem. Our results extend the known minimax risk of \(\Theta(\sqrt{k/n})\) for estimating a discrete distribution with \(n\) independent and identically distributed samples: for \(\mathbf{\Delta}_{n}\to 0\), we have that \(r^{*}=n\). Notably, Theorem 4.2 provides matching upper and lower bounds for any regular drift sequence \(\mathbf{\Delta}_{n}\), and this is the first theoretical work within the distribution drift literature that provides a lower bound in such a general drift setting. As a simple corollary of this theorem, we can obtain the minimax risk for more specific drift assumptions previously used in the literature. Bounded drift at each step. Assume that there exists a constant \(\Delta>0\) such that for any \(S=P_{1}\times\ldots\times P_{n}\), it holds that \(\operatorname{TV}(P_{i},P_{i+1})\leq\Delta\).
We can invoke Theorem 4.2 with the regular drift sequence \(\mathbf{\Delta}_{n}=(\Delta\cdot(n-1),\ldots,\Delta,0)\). As \(r^{*}=\max\left\{r\in[n]:(r-1)\Delta\leq\sqrt{k/r}\right\}\), we can observe that for \(n\gtrsim(k/\Delta^{2})^{1/3}\), we have that \(r^{*}=\Theta\left((k/\Delta^{2})^{1/3}\right)\), and thus the minimax risk in this setting is \(\Theta((k\Delta)^{1/3})\). Polynomial drift. Assume that there exists an \(\alpha\in(0,1]\) such that \(\operatorname{TV}(P_{i},P_{n})\leq(n-i)^{\alpha}\Delta\) for all \(i\in[n]\). We can invoke Theorem 4.2 with the regular drift sequence \(\mathbf{\Delta}_{n}=(\Delta\cdot(n-1)^{\alpha},\Delta\cdot(n-2)^{\alpha},\ldots,\Delta,0)\), and we obtain that for \(n\gtrsim(k/\Delta^{2})^{1/(2\alpha+1)}\), the minimax risk in this setting is \(\Theta\left((k\Delta)^{1/(2\alpha+1)}\right)\). This is the first work to show a lower bound under this drift assumption. ### Connection to Agnostic Learning We can easily show that the lower bound of Theorem 4.2 also applies to the problem of agnostic learning a family of binary functions with VC dimension \(k\). In fact, consider the family of binary functions \(\mathcal{F}=\{f_{A}:A\subseteq[k]\}\), where \(f_{A}(j)=\mathbf{1}_{\{j\in A\}}\) for any \(j\in[k]\), and observe that the VC-dimension of \(\mathcal{F}\) is \(k\). Let \(\hat{P}\) be any estimator of \(P_{n}\), and let \(\hat{P}(A)=\sum_{j\in A}\hat{P}(j)\) for any \(A\subseteq[k]\). By using the definition of total variation distance, we have that \[\sup_{f_{A}\in\mathcal{F}}\left|\underset{X\sim P_{n}}{\mathbb{E}}f_{A}(X)- \hat{P}(A)\right|\geq\operatorname{TV}(P_{n},\hat{P})\enspace,\] which shows that the problem of estimating the distribution \(P_{n}\) under total variation distance can be reduced to the problem of agnostic learning the family \(\mathcal{F}\) with respect to the distribution \(P_{n}\). We can conclude that the lower bound of Theorem 4.2 applies to the problem of agnostic learning a family of binary functions with VC dimension \(k\) in a distribution drift setting. For the case of bounded drift at each step, as observed in the previous subsection, the lower bound is \(\Omega((k\Delta)^{1/3})\) for sufficiently large \(n\), and we retrieve the result of Barve & Long (1996). Theorem 4.2 generalizes this lower bound to a more general model of drift, giving tighter bounds when possible, as shown in the example in the Introduction. ### Upper Bound To prove the upper bound of Theorem 4.2, fix a parameter \(r\leq n\) and consider the empirical distribution \(\hat{P}^{r}\) over the latest \(r\leq n\) random variables: \[\hat{P}^{r}(j)=\frac{1}{r}\sum_{i=n-r+1}^{n}\mathbf{1}_{\{X_{i}=j\}}\qquad \forall j\in[k]\enspace. \tag{3}\] Analogously, we define the average of the latest \(r\) distributions as \(P^{r}=(1/r)\sum_{i=n-r+1}^{n}P_{i}\). In order to evaluate the expected error obtained by using \(\hat{P}^{r}\) as an estimate, we use the triangle inequality to decompose the error into two terms: \[\mathbb{E}\operatorname{TV}(P_{n},\hat{P}^{r})\leq\mathbb{E}\operatorname{TV }(P^{r},\hat{P}^{r})+\operatorname{TV}(P^{r},P_{n})\enspace. \tag{4}\] The first error term of this upper bound is the _statistical error_ of estimating the distribution \(P^{r}\) by its empirical distribution \(\hat{P}^{r}\). This error is related to the variance of the estimator, which depends on the support size of the estimated distributions.
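Before bounding the two terms, the following minimal sketch (in Python; all names are ours, for illustration only) shows the estimator of eq. (3) together with the window-selection rule \(r^{*}\) of Theorem 4.2.

```python
# Windowed empirical estimator (eq. (3)) with the window chosen as
# r* = max{ r in [n] : Delta_{n-r+1} <= sqrt(k/r) }  (Theorem 4.2).
import numpy as np

def choose_window(drift: np.ndarray, k: int) -> int:
    """Largest r with drift[n-r] <= sqrt(k/r); drift[i-1] stores Delta_i.
    For a regular (non-increasing) drift sequence the feasible set is a
    prefix of [n]; returns 1 as a fallback if no r is feasible."""
    n = len(drift)
    best = 1
    for r in range(1, n + 1):
        if drift[n - r] <= np.sqrt(k / r):
            best = r
    return best

def windowed_empirical(samples: np.ndarray, k: int, r: int) -> np.ndarray:
    """Empirical pmf over [k] built from the latest r samples (values 0..k-1)."""
    return np.bincount(samples[-r:], minlength=k) / r

# Example: bounded drift Delta per step, i.e. Delta_i = (n - i) * Delta.
n, k, delta = 10_000, 10, 1e-3
drift = delta * (n - 1 - np.arange(n))  # drift[i] = Delta_{i+1}
r_star = choose_window(drift, k)
print(r_star)  # ~ (k / delta^2)^(1/3) ~ 216
```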
**Proposition 4.3**.: \(\mathbb{E}\operatorname{TV}(P^{r},\hat{P}^{r})\leq(1/2)\sqrt{k/r}\)_._ Proof.: By definition, \(\mathbb{E}\operatorname{TV}(P^{r},\hat{P}^{r})=(1/2)\sum_{j\in[k]}\mathbb{E} \left|\hat{P}^{r}(j)-P^{r}(j)\right|\), and using Jensen's inequality we have that for any \(j\in[k]\), \(\mathbb{E}\left|\hat{P}^{r}(j)-P^{r}(j)\right|\leq\sqrt{\mathbb{V}(\hat{P}^{r} (j))}\). Since \(\hat{P}^{r}(j)\) is an average of independent \(0\)-\(1\) random variables, we have \[\sqrt{\mathbb{V}(\hat{P}^{r}(j))} =\frac{1}{r}\sqrt{\sum_{i=n-r+1}^{n}P_{i}(j)(1-P_{i}(j))} \leq\frac{1}{r}\sqrt{\sum_{i=n-r+1}^{n}P_{i}(j)}=\sqrt{\frac{P^{ r}(j)}{r}}\enspace.\] Thus, we have \[\mathbb{E}\operatorname{TV}(P^{r},\hat{P}^{r})\leq\frac{1}{2}\sum_{j\in[k]} \sqrt{\mathbb{V}(\hat{P}^{r}(j))}\leq\frac{1}{2\sqrt{r}}\sum_{j\in[k]}\sqrt{P ^{r}(j)}\enspace.\] We conclude the proof using the Cauchy-Schwarz inequality: \[\sum_{j\in[k]}\sqrt{P^{r}(j)}\leq\sqrt{\sum_{j\in[k]}P^{r}(j)}\sqrt{\sum_{j\in [k]}1}=\sqrt{k}\enspace.\] The second error term of the upper bound (4) is the _drift error_, and describes how far the distribution \(P^{r}\) is from \(P_{n}\). Observe that if the samples were identically distributed, this error would be zero. The drift error can be upper bounded by using the information on the drift sequence \(\mathbf{\Delta}_{n}\). **Proposition 4.4**.: \(\operatorname{TV}(P^{r},P_{n})\leq\Delta_{n-r+1}\)_._ Proof.: We can write \[\operatorname{TV}(P^{r},P_{n}) =\frac{1}{2}\sum_{j=1}^{k}\left|\frac{1}{r}\sum_{i=n-r+1}^{n}\left(P_{i}(j)-P_{n}(j)\right)\right| \leq\frac{1}{2}\sum_{j=1}^{k}\frac{1}{r}\sum_{i=n-r+1}^{n}|P_{i} (j)-P_{n}(j)|\enspace,\] where in the last inequality we used the triangle inequality and the definition of \(P^{r}\). Therefore, we obtain that \(\operatorname{TV}(P^{r},P_{n})\leq(1/r)\sum_{i=n-r+1}^{n}\operatorname{TV}(P_ {i},P_{n})\). We conclude the proof by observing that by the monotonicity of the drift sequence, \(\operatorname{TV}(P_{i},P_{n})\leq\Delta_{i}\leq\Delta_{n-r+1}\) for any \(i\geq n-r+1\). By using Propositions 4.3 and 4.4, we can upper bound the estimation error (4) as \(\mathbb{E}\operatorname{TV}(P_{n},\hat{P}^{r})\leq(1/2)\sqrt{k/r}+\Delta_{n-r+1}\). There is a trade-off: by choosing a larger \(r\), we obtain a smaller statistical error, but potentially a larger drift error. The value \(r^{*}\) of Theorem 4.2 represents an optimal criterion (up to constants) to resolve this trade-off for any regular drift sequence \(\mathbf{\Delta}_{n}\). The upper bound of the theorem follows by setting the parameter \(r\) equal to \(r^{*}\), for which \(\Delta_{n-r^{*}+1}\leq\sqrt{k/r^{*}}\). ### Lower Bound The lower bound of Theorem 4.2 is proven by using information-theoretic tools from minimax theory (Yu, 1997). In particular, we select a family of product probability distributions over \(\mathcal{X}^{n}\) from \(\mathcal{S}(\mathbf{\Delta}_{n},k)\), and obtain our lower bound by arguing that the observed values do not provide enough information to distinguish among those distributions (Lemma 4.5). This family of product probability distributions is constructed as follows. Let \(r\) be a parameter such that \(1\leq r\leq n\). Each product probability distribution in our family has the same distribution for the first \(n-r\) random variables. That is, the first \(n-r\) random variables provide no information to decide among the family.
For each product distribution in this family, the last \(r\) random variables steadily drift in distribution in a distinct direction, subject to the constraint of the drift sequence \(\mathbf{\Delta}_{n}\). We obtain a trade-off: if the value of \(r\) is large, it is easier to decide among the family, as we have more time to drift apart, but we make a bigger error if we cannot decide correctly. Conversely, if the value of \(r\) is small, it is harder to decide among the family, but we make a smaller error, as there is less time to drift apart. Similarly to the upper bound, we obtain a tight lower bound by setting the parameter \(r\) equal to \(r^{*}\) as defined in Theorem 4.2. This choice is adopted throughout this subsection. We distinguish two cases: \((a)\)\(r^{*}=n\) and \((b)\)\(r^{*}<n\). In case \((a)\), we argue that the minimax error is at least equal to the lower bound \(\Omega(\sqrt{k/r^{*}})\) for discrete density estimation with \(n=r^{*}\) independent and identically distributed samples. In the remainder of this subsection, we focus on case \((b)\). In order to establish lower bounds for the minimax risk and the minimax cumulative risk, we use Assouad's Lemma as the main technical tool. This is the first work to use information-theoretic tools to provide lower bounds in a drift setting. Assouad's Lemma uses a family of probability distributions \(\{S_{w}:w\in\{0,1\}^{m}\}\) indexed over a hypercube \(\{0,1\}^{m}\) for some \(m\geq 1\). For two binary strings \(v,w\in\{0,1\}^{m}\), their Hamming distance is defined as \(h(v,w)=\sum_{i=1}^{m}\mathbf{1}_{\{v_{i}\neq w_{i}\}}\). **Lemma 4.5** (Assouad's Lemma).: _Let \(\theta(\cdot)\) be a target property to estimate. Let \(\{S_{w}:w\in\{0,1\}^{m}\}\subseteq\mathcal{S}\) be a family of probability distributions indexed by \(w\). Let \(p\geq 1\). Then:_ \[\inf_{\hat{\theta}}\sup_{S\in\mathcal{S}}\operatorname*{\mathbb{E }}_{\boldsymbol{X}\sim S}\left[2^{p}d^{p}\left(\hat{\theta}( \boldsymbol{X}),\theta(S)\right)\right] \geq\frac{m}{4}\left(\min_{v\neq w}\frac{d(\theta(S_{w}),\theta( S_{v}))}{h(v,w)}\right)\bigg{[}\min_{\begin{subarray}{c}v,w:\\ \left\|w\right\|_{1}>\left\|v\right\|_{1}\\ h(v,w)=1\end{subarray}}e^{-\operatorname{KL}(S_{w}\|S_{v})}\bigg{]}\] _where \(\operatorname{KL}\) is the Kullback-Leibler divergence and \(\hat{\theta}\) is any estimator of \(\theta(S)\)._ Our statement of Assouad's Lemma follows immediately by adapting its classic statement, as in (Van der Vaart, 2000, Lemma 24.3), to our notation. Differently from the latter statement, we state it with the \(\operatorname{KL}\)-divergence by using the known inequality \(\|P\wedge Q\|=\|Q\wedge P\|\geq(1/2)\exp(-\operatorname{KL}(P\|Q))\), which holds for any distributions \(P\) and \(Q\). Our formulation is more convenient for the computations of this paper. Without loss of generality, assume that \(k\) is even. Our goal is to construct a suitable family of sequences of drifting distributions and apply Assouad's Lemma. We construct a family of product distributions \(\{S_{w}:w\in\{0,1\}^{k/2}\}\) as follows. For each \(w\in\{0,1\}^{k/2}\), \(S_{w}=P_{w,1}\times\ldots\times P_{w,n}\) is the product distribution of \(n\) discrete probability distributions over \([k]\).
For any \(j\in[k]\) and \(w\in\{0,1\}^{k/2}\), we define \[P_{w,i}(j)=\begin{cases}\frac{1}{k}&\text{if }i\leq n-r^{*}\\ \frac{1}{k}+(-1)^{j}w_{\lceil j/2\rceil}\frac{\Delta_{n-r^{*}+1}-\Delta_{i}}{k}&\text{if }i>n-r^{*}\end{cases}\] Intuitively, for any \(i\geq n-r^{*}+1\), if \(w_{j}=1\), then the probabilities of the elements \(2j-1\) and \(2j\) change as follows: \(P_{w,i}(2j-1)\) decreases, while the probability \(P_{w,i}(2j)\) increases by the same amount. The following proposition shows that our family of product distributions is well-defined. **Proposition 4.6**.: \(\{S_{w}:w\in\{0,1\}^{k/2}\}\subseteq\mathcal{S}_{n}(\boldsymbol{\Delta}_{n},k)\)__ Proof.: First, we have that each \(P_{w,i}\) is a well-defined probability distribution for any \(w\) and \(i\), as \(\sum_{j\in[k]}P_{w,i}(j)=1\) by construction, and \(0\leq P_{w,i}(j)\leq 1\) for any \(j\in[k]\), since \(\Delta_{n-r^{*}+1}\leq\sqrt{k/r^{*}}<1\) by assumption of the theorem. Second, \(S_{w}\) satisfies the assumptions on the drift sequence \(\boldsymbol{\Delta}_{n}\) of Definition 4.1. In fact, for any \(i<n-r^{*}+1\), we have that \(\operatorname{TV}(P_{w,i},P_{w,i+1})=0\leq\Delta_{i}-\Delta_{i+1}\), and for any \(i\geq n-r^{*}+1\), we have that \[\operatorname{TV}(P_{w,i},P_{w,i+1}) =\frac{1}{2}\sum_{\ell\in[k/2]:w_{\ell}=1}\frac{2}{k}\left|\Delta_{i}-\Delta_{i+1}\right|\] \[=(\Delta_{i}-\Delta_{i+1})\frac{\|w\|_{1}}{k}\leq\Delta_{i}-\Delta_{i+1}\] By using the triangle inequality, this also implies that \(\operatorname{TV}(P_{w,i},P_{w,n})\leq\Delta_{i}\) for any \(i\in[n]\). Let \(\theta_{n}(\cdot)\) be defined as in Section 3. The next two technical propositions show how to compute the quantities required to apply Assouad's Lemma in our setting. **Proposition 4.7**.: _Given \(w,w^{\prime}\in\{0,1\}^{k/2}\), we have that \(\operatorname{TV}(\theta_{n}(S_{w}),\theta_{n}(S_{w^{\prime}}))=(\Delta_{n-r^{*}+1}/k)\cdot h(w,w^{\prime})\)._ Proof.: By definition of \(\theta_{n}(\cdot)\), we have that \(\operatorname{TV}(\theta_{n}(S_{w}),\theta_{n}(S_{w^{\prime}}))=\operatorname{TV}(P_{w,n},P_{w^{\prime},n})\). Thus, \[\operatorname{TV}(P_{w,n},P_{w^{\prime},n})=\frac{1}{2}\frac{\Delta_{n-r^{*}+1}-\Delta_{n}}{k}\sum_{\ell=1}^{k/2}2|w^{\prime}_{\ell}-w_{\ell}|,\] and the statement follows by observing that \(\Delta_{n}=0\) and that \(\sum_{\ell}|w^{\prime}_{\ell}-w_{\ell}|=h(w,w^{\prime})\). **Proposition 4.8**.: _Let \(w,w^{\prime}\in\{0,1\}^{k/2}\) such that \(h(w,w^{\prime})=1\), and let \(w_{q}\neq w^{\prime}_{q}\) be the bit in which they differ. Assume that \(w_{q}=1\). Then \(\operatorname{KL}(S_{w}\|S_{w^{\prime}})\leq 2\)._ Proof.: By using the factorization property of the KL-divergence (see Proposition A.1 in the appendix), we have that \(\operatorname{KL}(S_{w}\|S_{w^{\prime}})=\sum_{i=n-r^{*}+1}^{n}\operatorname{KL}(P_{w,i}\|P_{w^{\prime},i})\), since \(P_{w,i}=P_{w^{\prime},i}\) for \(i<n-r^{*}+1\). By using the definition of \(\operatorname{KL}\) and the definition of \(S_{w}\), we obtain \[\operatorname{KL}(S_{w}\|S_{w^{\prime}}) =\sum_{i=n-r^{*}+1}^{n}\left[P_{w,i}(2q)\log\left(\frac{P_{w,i}(2q)}{P_{w^{\prime},i}(2q)}\right)\right.\] \[\left.+\,P_{w,i}(2q-1)\log\left(\frac{P_{w,i}(2q-1)}{P_{w^{\prime},i}(2q-1)}\right)\,\right]\,\] as \(P_{w,i}\) and \(P_{w^{\prime},i}\) only differ on the elements \(2q-1\) and \(2q\) for \(i\geq n-r^{*}+1\). 
If we expand the computation above and write \(\delta_{i}=\Delta_{n-r^{*}+1}-\Delta_{i}\) for the deviation from the uniform value at step \(i\), we have \[\operatorname{KL}(S_{w}\|S_{w^{\prime}})=\frac{1}{k}\sum_{i=n-r^{*}+1}^{n}\Bigg{\{}\left(1+\delta_{i}\right)\log\left(1+\delta_{i}\right)+\left(1-\delta_{i}\right)\log\left(1-\delta_{i}\right)\Bigg{\}}\] For any \(i\geq n-r^{*}+1\), the following chain of inequalities holds: \(\delta_{i}\leq\Delta_{n-r^{*}+1}\leq\sqrt{k/r^{*}}<1\). Thus, we can use the inequality \((1+x)\log(1+x)+(1-x)\log(1-x)\leq 2x^{2}\) that holds for any \(|x|<1\) (see Proposition A.2). We obtain: \[\mathrm{KL}(S_{w}\|S_{w^{\prime}})\leq\frac{2}{k}\sum_{i=n-r^{*}+1}^{n}\delta_{i}^{2}\leq(2r^{*}/k)\Delta_{n-r^{*}+1}^{2}\] where the last inequality follows since \(\delta_{i}\leq\Delta_{n-r^{*}+1}\), as the sequence \(\Delta_{1},\ldots,\Delta_{n}\) is non-increasing and non-negative. We can conclude the proof by observing that, due to the definition of \(r^{*}\), it holds that \(\Delta_{n-r^{*}+1}^{2}\leq k/r^{*}\). We apply Assouad's Lemma with the family of product distributions \(\{S_{w}:w\in\{0,1\}^{k/2}\}\subseteq\mathcal{S}_{n}(\boldsymbol{\Delta}_{n},k)\), and obtain the following lower bound to the minimax risk \[\frac{k}{16}\left(\min_{v\neq w}\frac{d(\theta(S_{w}),\theta(S_{v}))}{h(v,w)}\right)\left(\min_{\begin{subarray}{c}v,w\\ h(v,w)=1\end{subarray}}e^{-\mathrm{KL}(S_{w}\|S_{v})}\right)\] We use Propositions 4.7 and 4.8 to lower bound the above expression, and obtain \[\inf_{\hat{\theta}_{n}}\sup_{S\in\mathcal{S}_{n}(\boldsymbol{\Delta}_{n},k)}\operatorname*{\mathbb{E}}_{\boldsymbol{X}_{n}\sim S}\mathrm{TV}\left(\hat{\theta}_{n},\theta_{n}(S)\right)\geq\frac{\Delta_{n-r^{*}+1}}{16e^{2}}\] Note that since \(r^{*}<n\), by using the definition of \(r^{*}\), we have that \(\Delta_{n-r^{*}}>\sqrt{k/(r^{*}+1)}\). Due to the regularity assumption of the drift sequence \(\boldsymbol{\Delta}_{n}\), there exists a constant \(c\) such that \(\Delta_{n-r^{*}}/\Delta_{n-r^{*}+1}\leq c\). Therefore, we have that \(\Delta_{n-r^{*}+1}\geq\frac{1}{c}\Delta_{n-r^{*}}=\Omega(\sqrt{k/r^{*}})\), and this concludes the proof of the lower bound of Theorem 4.2. ### Minimax Rate for the Average Risk Theorem 4.2 provides the minimax risk for estimating a distribution at a given time \(n\). In this subsection, we want to characterize the minimax rate of the average risk defined as in (2) for the online version of this problem. In particular, we want to show that the lower bound proven in Theorem 4.2 for a specific time \(n\) is not a rare event but can hold on average for arbitrarily long sequences of estimates. We study this problem for the case of bounded drift at each step, i.e., there exists a constant \(\Delta>0\) such that \(\mathrm{TV}(P_{i},P_{i+1})\leq\Delta\) for any \(i\leq n-1\). Let \(\mathcal{S}_{n}(\Delta,k)\) denote the family of product distributions \(S=P_{1}\times\ldots\times P_{n}\) over \(\mathcal{X}^{n}\) for which this property holds. The minimax rate of the average risk over \(n\) steps is \[\Pi_{n}(\Delta,k)\doteq\inf_{\hat{\theta}_{1},\ldots,\hat{\theta}_{n}}\sup_{S\in\mathcal{S}_{n}(\Delta,k)}\operatorname*{\mathbb{E}}_{\boldsymbol{X}_{n}\sim S}\sum_{i=1}^{n}\frac{\mathrm{TV}(\hat{\theta}_{i}(\boldsymbol{X}_{i}),\theta_{i}(S))}{n}\] and we let \(\Pi(\Delta,k)\doteq\lim_{n\to\infty}\Pi_{n}(\Delta,k)\). The value \(\Pi(\Delta,k)\) represents the limit minimax rate over an arbitrarily large number of steps for the online estimation of a discrete density over \([k]\) under the assumption that the distance between two consecutive distributions is upper bounded by \(\Delta\). 
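The upper bound in the next theorem can be anticipated with a short sketch (writing \(\lesssim\) for inequality up to constants). Under a per-step bound \(\Delta\), the admissible drift sequence can grow as fast as \(\Delta_{i}=(n-i)\Delta\), so the drift error of the window estimator over the last \(r\) samples is at most \(r\Delta\), and the bound of Section 4.2 reads \[\mathbb{E}\operatorname{TV}(P_{n},\hat{P}^{r})\lesssim\sqrt{\frac{k}{r}}+r\Delta\enspace,\qquad r^{*}=\Theta\left(\left(\frac{k}{\Delta^{2}}\right)^{1/3}\right)\ \Longrightarrow\ \sqrt{\frac{k}{r^{*}}}+r^{*}\Delta=\Theta\left((k\Delta)^{1/3}\right)\enspace,\] where \(r^{*}\) is the window size balancing the statistical and the drift errors.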
We can show the following result. **Theorem 4.9**.: _Let \(\Delta\in(0,1/k)\). Then, we have that \(\Pi(\Delta,k)=\Theta((k\Delta)^{1/3})\)._ As noted in Section 4.1, this result also applies to the online problem of agnostically learning a family of binary classifiers with VC dimension \(k\) in the setting of bounded drift at each step. The upper bound is an immediate corollary of Theorem 4.2. The difficulty is in obtaining the lower bound. We note that the lower bound construction of Section 4.3 cannot be used to prove a lower bound on the minimax rate for the average risk. In fact, that construction relies on the fact that at a given time \(n\), a drift has occurred only for the latest \(r^{*}\) distributions, and it does not provide a tight lower bound for the estimation at all times \(i\leq n\). In order to prove the lower bound of Theorem 4.9, we adopt a different construction. We provide a sketch of the proof. Let \(n=m\nu\). We consider product distributions \(S=P_{1}\times\ldots\times P_{n}\) that can be partitioned into \(m\) blocks of length \(\nu\). Let \(B_{\ell}=P_{(\ell-1)\nu+1}\times\ldots\times P_{\ell\nu}\) be the product distribution of block \(\ell\), i.e., the distribution of the random variables \((X_{(\ell-1)\nu+1},\ldots,X_{\ell\nu})\). We let \(S\) exhibit a periodic structure. In particular, we guarantee that the first distribution and the last distribution of each block \(B_{\ell}\) are uniform distributions over \([k]\), i.e., \(P_{(\ell-1)\nu+1}\) and \(P_{\ell\nu}\) are both uniform distributions for every \(\ell\in[m]\). This property plays a double role: it allows us to construct \(S\) by considering a sequence of blocks, and the estimation of each block is independent, since samples outside of the block \(\ell\) do not help to decide on the distribution \(B_{\ell}\), due to the periodic structure of \(S\). The proof of the lower bound revolves around the fact that estimating each individual block is hard. In particular, we can show a lower bound of \(\Omega\left((k\Delta)^{1/3}\right)\) on the average error of estimating a block \(B_{\ell}\) given \(\boldsymbol{X}_{\nu(\ell+1)}\). This result is obtained by using Assouad's Lemma on a properly defined family of blocks \(\mathcal{B}\). For any sequence of estimators \(\hat{\theta}_{1},\ldots,\hat{\theta}_{n}\), we use this previous result to show how to iteratively construct a sequence of blocks \(B_{1},\ldots,B_{m}\) from \(\mathcal{B}\) such that the average risk of those estimators with respect to the distribution \(S=B_{1}\times\ldots\times B_{m}\) is \(\Omega\left((k\Delta)^{1/3}\right)\). By taking \(m\to\infty\), this is sufficient to prove the lower bound. The details of the full proof are deferred to the appendix. ## 5 Smooth Density Estimation In this section, we establish the minimax risk for the problem of estimating smooth densities under distribution drift. Let our sample space be an arbitrary interval \(I\subseteq\mathbb{R}\), i.e., \(\mathcal{X}=I\). Given \(\boldsymbol{X}_{n}\), our goal is to estimate the density of the distribution \(P_{n}\). In this setting, we also use \(P(x)\) to refer to the continuous density of a distribution \(P\) at \(x\in\mathcal{X}\). Following previous work on nonparametric density estimation, we characterize the smoothness of a density in a Sobolev sense (Tsybakov, 2009). 
**Definition 5.1**.: Let \(\beta\in\mathbb{N}_{+}\). A probability density \(P\) over \(\mathcal{X}\) is \(\beta\)-smooth if \(P\) is differentiable \(\beta\) times, \(P^{(\beta-1)}\) is absolutely continuous, and \(\int\left(P^{(\beta)}(x)\right)^{2}dx<\infty\). In order to evaluate the error of our estimate, we use the squared \(L_{2}\) distance between densities. Given two densities \(f\) and \(g\) over \(\mathcal{X}\), their \(L_{2}\) distance is defined as \[L_{2}(f,g)\doteq\|f-g\|=\sqrt{\int_{\mathbb{R}}\left(f(x)-g(x)\right)^{2}dx}\enspace.\] In the density estimation literature, the quantity \(L_{2}^{2}\) is also referred to as the _mean integrated squared error_, and it is the most commonly used measure of error. In our work, we consider the following family of smooth probability measures \(\mathcal{S}(\mathbf{\Delta}_{n},\beta)\) over \(\boldsymbol{X}_{n}\) with regular drift \(\mathbf{\Delta}_{n}\). **Definition 5.2**.: Let \(\mathbf{\Delta}_{n}\in\mathbb{R}_{\geq 0}^{n}\) be a regular drift sequence, and let \(\beta>0\). A product distribution \(S=P_{1}\times\ldots\times P_{n}\) over \(\mathcal{X}^{n}\) belongs to \(\mathcal{S}(\mathbf{\Delta}_{n},\beta)\) if and only if: \((a)\)\(P_{i}\) is \(\beta\)-smooth for \(i\in[n]\); \((b)\)\(\mathbf{\Delta}_{n}\) is a regular drift sequence for \(S\) with the metric \(L_{2}\). We can establish the following minimax risk in this setting. **Theorem 5.3**.: _Let \(\mathbf{\Delta}_{n}\in\mathbb{R}_{+}^{n}\) be a regular drift sequence and let \(\beta>0\). Let \(\mathcal{S}_{n}(\mathbf{\Delta}_{n},\beta)\) be defined as in Definition 5.2, and let_ \[r^{*}=\max\left\{r\in[n]:\Delta_{n-r+1}\leq\left(\frac{1}{r}\right)^{\frac{\beta}{2\beta+1}}\right\}\] _Assume that \(r^{*}\geq 1\) is well-defined. We have:_ \[\inf_{\hat{\theta}_{n}}\sup_{S\in\mathcal{S}_{n}(\mathbf{\Delta}_{n},\beta)}\underset{\boldsymbol{X}_{n}\sim S}{\mathbb{E}}\|\theta_{n}(S)-\hat{\theta}_{n}(\boldsymbol{X}_{n})\|^{2}=\Theta\left((r^{*})^{\frac{-2\beta}{2\beta+1}}\right)\] In the drift-free case \(\mathbf{\Delta}_{n}=0\), we have that \(r^{*}=n\), and we retrieve the known minimax rate \(\Theta\left(n^{-\frac{2\beta}{2\beta+1}}\right)\) for estimating a \(\beta\)-smooth density from \(n\) independent and identically distributed samples. We can achieve the upper bound of Theorem 5.3 with a properly constructed kernel density estimator. A kernel \(K\) is a function \(K:\mathbb{R}\mapsto\mathbb{R}\) such that \(\int K(u)du=1\). Given a kernel \(K\) and a smoothing parameter \(h\), the Parzen-Rosenblatt kernel density estimator (Rosenblatt, 1956; Parzen, 1962) over the previous \(r\) samples is defined as \[\hat{P}_{K,h}^{r}(x)=\frac{1}{rh}\sum_{i=n-r+1}^{n}K\left(\frac{X_{i}-x}{h}\right)\quad.\] The parameter \(h\) is also referred to as the bandwidth. In order to obtain an accurate estimator for highly smooth functions, we need to define a special class of kernel functions. **Definition 5.4**.: Let \(\beta\geq 1\) be an integer. We say that \(K:\mathbb{R}\mapsto\mathbb{R}\) is a kernel of order \(\beta\) if the functions \(u\mapsto u^{j}K(u)\), with \(j=0,1,\ldots,\beta\), are integrable and satisfy \[\int K(u)du=1,\ \int u^{j}K(u)du=0\ \ \text{for}\ j=1,\ldots,\beta\] \[\int K^{2}(u)du<\infty,\quad\int|u|^{\beta}|K(u)|du<\infty\enspace.\] We refer to the work of Tsybakov (2009) for a discussion of kernels of order \(\beta>1\). It can be proven that a kernel of order \(\beta\geq 2\) cannot be non-negative, and therefore we could obtain an estimate of the density that is negative. This problem can be addressed by taking only the positive part of the estimate, as described in the previous reference. 
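As an illustration of this estimator, the following is a minimal sketch (names are ours) of the windowed Parzen-Rosenblatt estimate with a Gaussian kernel, which satisfies Definition 5.4 for \(\beta=1\); the bandwidth follows the choice \(h=\Theta(r^{-1/(2\beta+1)})\) discussed next.

```python
import numpy as np

def window_kde(samples, r, beta=1):
    """Parzen-Rosenblatt estimator over the r most recent samples with a
    Gaussian kernel (a valid kernel of order beta = 1 per Definition 5.4)."""
    window = np.asarray(samples[-r:], dtype=float)
    h = r ** (-1.0 / (2 * beta + 1))  # bandwidth balancing the bound below

    def density(x):
        u = (window - x) / h
        return float(np.exp(-0.5 * u ** 2).sum() / (r * h * np.sqrt(2 * np.pi)))

    return density
```

For kernels of higher order, a specially constructed (and necessarily partially negative) kernel would replace the Gaussian, with the estimate truncated at zero as described above.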
If we let \(K\) be a kernel of order \(\beta\), we can prove that for any \(P_{1}\times\ldots\times P_{n}=S\in\mathcal{S}(\mathbf{\Delta}_{n},\beta)\), it holds that \[\mathbb{E}\left\|\hat{P}_{K,h}^{r}-P_{n}\right\|^{2}=O\left(\Delta_{n-r+1}^{2}+\frac{1}{r\cdot h}+h^{2\beta}\right)\enspace.\] Observe that the optimal choice of the bandwidth \(h\) to minimize the above upper bound is independent of \(\mathbf{\Delta}_{n}\). By choosing the value \(h=\Theta\left(r^{-1/(2\beta+1)}\right)\), the previous upper bound becomes \[\mathbb{E}\left\|\hat{P}_{K,h}^{r}-P_{n}\right\|^{2}=O\left(\Delta_{n-r+1}^{2}+\left(\frac{1}{r}\right)^{\frac{2\beta}{2\beta+1}}\right)\enspace.\] This bound represents a trade-off between the drift error and the statistical error of the estimation. If we choose \(r\) as \(r^{*}\), we obtain the upper bound of the theorem. The lower bound of the theorem is proven by using a strategy similar to the one used for discrete densities in Section 4.3: we construct a family of product distributions that satisfy the assumption on the drift and use Assouad's Lemma. We defer the details of the proof to the appendix. We point out that it is also possible to prove an average minimax risk result similar to Theorem 4.9 for smooth densities by modifying the proof of the discrete case. ## 6 Conclusion and Open Questions We obtain tight minimax risk bounds for the discrete and smooth density estimation problems under a general model of distribution drift. We also present the first average minimax risk rate in the drift setting. Our results also apply to the important problem of agnostic learning of a family of binary classifiers, improving the known state-of-the-art bounds in the drift setting. A major open problem is to provide a competitive algorithm that is oblivious to the drift sequence. We refer to (Hanneke et al., 2015) for preliminary results in this direction for the problem of realizable supervised learning under distribution drift.
2305.09592
Trojan Playground: A Reinforcement Learning Framework for Hardware Trojan Insertion and Detection
Current Hardware Trojan (HT) detection techniques are mostly developed based on a limited set of HT benchmarks. Existing HT benchmark circuits are generated with multiple shortcomings, i.e., i) they are heavily biased by the designers' mindset when created, and ii) they are created through a one-dimensional lens, mainly the signal activity of nets. We introduce the first automated Reinforcement Learning (RL) HT insertion and detection framework to address these shortcomings. In the HT insertion phase, an RL agent explores the circuits and finds locations best for keeping inserted HTs hidden. On the defense side, we introduce a multi-criteria RL-based HT detector that generates test vectors to discover the existence of HTs. Using the proposed framework, one can explore the HT insertion and detection design spaces to break the limitations of human mindset and benchmark issues, ultimately leading toward the next generation of innovative detectors. We demonstrate the efficacy of our framework on ISCAS-85 benchmarks, provide the attack and detection success rates, and define a methodology for comparing our techniques.
Amin Sarihi, Ahmad Patooghy, Peter Jamieson, Abdel-Hameed A. Badawy
2023-05-16T16:42:07Z
http://arxiv.org/abs/2305.09592v2
# Trojan Playground: A Reinforcement Learning Framework for Hardware Trojan Insertion and Detection ###### Abstract Current Hardware Trojan (HT) detection techniques are mostly developed based on a limited set of HT benchmarks. Existing HT benchmark circuits are generated with multiple shortcomings, _i.e._, i) they are heavily biased by the designers' mindset when they are created, and ii) they are created through a one-dimensional lens, mainly the signal activity of nets. To address these shortcomings, we introduce the first automated reinforcement learning (RL) HT insertion and detection framework. In the insertion phase, an RL agent explores the circuits and finds different locations that are best for keeping inserted HTs hidden. On the defense side, we introduce a multi-criteria RL-based detector that generates test vectors to discover the existence of HTs. Using the proposed framework, one can explore the HT insertion and detection design spaces to break the limitations of the human mindset as well as the benchmark issues, ultimately leading toward the next generation of innovative detectors. Our HT toolset is open-source to accelerate research in this field and reduce the initial setup time for newcomers. We demonstrate the efficacy of our framework on ISCAS-85 benchmarks, provide the attack and detection success rates, and define a methodology for comparing our techniques. Hardware Trojan, Hardware Security, Reinforcement Learning, Open-Source. ## I Introduction Per a DoD report [1] released in \(2022\), \(88\%\) of the production and \(98\%\) of the assembly, packaging, and testing of microelectronic chips are performed outside of the US. The growing multi-party production model has significantly raised security concerns about malicious modifications in the design and fabrication of chips, _i.e._, Hardware Trojan (HT) insertion. HTs are defined as any design/manufacturing violations in an integrated circuit (IC) with respect to the intent of the IC. Upon activation, an HT may lead to erroneous outputs (an example is seen in Figure 1) and/or leakage of information [2]. According to the adversarial model introduced by Shakya _et al._[3], HTs can be inserted into target ICs according to the following scenarios: * Design source code or netlists can be infected with HTs by compromised employees. * Third-party intellectual properties (IPs) like processing cores, memory modules, I/O components, and network-on-chip [4] are often purchased and incorporated into a design to speed up time-to-market and lower design expenses. However, integrating IPs from untrusted vendors can pose a risk to the security and integrity of the IC. * An untrusted foundry may reverse-engineer the GDSII physical layout to obtain the netlist and insert HTs inside it. * Malicious third-party CAD tools may also insert HTs into designs. We believe that HTs can be inserted into designs in any of the discussed adversarial models. Researchers have been mostly using the established benchmarks reported by Shakya _et al._ and Salmani _et al._[3, 5] as a reference to study the impact of HTs. Subsequently, various HT detection approaches have been developed based on these benchmarks over the past decade [6, 7, 8, 9]. Despite the valuable effort to create HT benchmarks for the community, these benchmarks are limited in the size and variety needed to push detection tools into more realistic modern scenarios. 
Fig. 1: An HT with a trigger and payload. Whenever A=1, B=1, C=0, the trigger is activated (D=1) and the XOR payload inverts the value of E.

For instance, the small set of benchmarks means it is hard to leverage and train machine learning (ML) HT detectors, where insufficient training data negatively impacts classification accuracy. Some research studies have tried to alleviate this problem by using techniques to shuffle data for ML-based detectors, _e.g._, the leave-one-out cross-validation method [7]; however, this does not solve the problem entirely. Additionally, the existing HT benchmarks suffer from an inherent human bias in the insertion phase, since they are tightly coupled with the designer's mindset. For instance, the HT benchmarks in [10] only consider signal activity for HT insertion, _i.e._, HTs are inserted into a pool of available rare nets of the circuit in a random fashion. The flaws in the insertion phase reduce the problem's complexity, leading security researchers to develop HT detectors finely tuned to flawed scenarios [9, 11]. In contrast, adversaries devise new HT attacks that combine different ideas, which such detectors fail to expose. Another equally important problem in this domain is that almost no HT detectors are publicly available. This deprives other researchers of access to these tools and imposes a considerable setup delay on newcomers to hardware security. This work attempts to move this research space forward by developing next-generation HT insertion and detection methods based on reinforcement learning (RL). The developed RL-based HT insertion tool creates new HT benchmarks according to the criteria passed to the tool by the user. The insertion criterion is an RL rewarding function, modified by the user, that relies on the RL agent to automatically insert HTs into designs. The netlist is considered an environment in which the RL agent tries to insert HTs so as to maximize a gained reward. The rewarding scheme of the proposed insertion tool is tunable, which can push the agent toward a specific goal in the training session. We believe that our insertion tool is a step towards preparing the community for future HTs inserted by non-human agents, _e.g._, AI agents. We also propose an RL-based HT detector with a tunable rewarding function that helps detect inserted HTs based on various strategies. We have studied three different detection rewarding functions for the RL detector agent to explore this space. The agent finds test vectors that yield the highest rewards for each reward function. Then, the generated test vectors are used to activate and find HTs in the IC. The test engineer passes the test vectors to the chip and monitors the output for any deviations from the golden model. Our proposed toolset enables researchers to perform both HT insertion and detection within a unified framework. The framework only requires users to set the parameters to insert and detect HTs without human intervention. There have been previous efforts to automate the HT insertion and detection process [10, 12, 13]; however, they are either not open-source or require an intermediate effort that hinders the creation of a vast quantity of HTs (more explanation in Section II). 
We make the following contributions in the paper with respect to our previous publications ([14, 15]), noting that all of the work will be released open-source: * We developed a tunable RL-based HT insertion tool free of human bias, capable of automatic HT insertion and of creating a large population of valid HTs for each design. * We introduce a tunable RL-based multi-criteria HT detection tool that helps a security engineer to better prepare for different HT insertion strategies. * We introduce and use a generic methodology to make fair comparisons between HT detectors. The methodology is based on a metric called the confidence value that helps the security engineer select the proper detector based on the chip's application and security requirements. Our results show that our developed detection tool, with all three of our detection approaches, has a \(90.54\)% detection rate on average for our HT-inserted benchmarks. We compare these detection results to existing state-of-the-art detection methods and show how our techniques find previously unidentifiable HTs. As we believe that HT detection will be implemented as a variety of different detection strategies, the uniquely identified HTs suggest that our detection techniques and framework are important contributions to this space. The remainder of this paper is organized as follows: Section II reviews the related work and explains the fundamentals of RL. The mechanics of our proposed HT insertion and detection approaches are presented in Sections III and IV, respectively. We introduce our HT comparison methodology in Section V. Section VI demonstrates the experimental results and Section VII concludes the paper. ## II Related work This section reviews the previous studies in HT insertion and detection. ### _Hardware Trojan Insertion and Benchmarks_ The first attempts to gather benchmarks with hard-to-activate HTs were made by Shakya _et al._ and Salmani _et al._[3, 5]. A set of \(96\) trust benchmarks with different HT sizes and configurations are available at Trust-Hub [16]. While these benchmarks are a valuable contribution for the research community, they have three drawbacks: (1) the limited number of Trojan circuits represents only a subset of the possible HT insertion landscape in digital circuits, which hampers the ability to develop diverse HT countermeasures, (2) they do not incorporate state-of-the-art Trojan attacks, and (3) they fail to populate a large enough HT dataset as required for ML-based HT detection. Various approaches have since attempted to insert HTs. Jyothi _et al._[17] proposed a tool called TAINT for automated HT insertion into FPGAs at the RTL level, gate-level netlist, and post-map netlist. The tool also allows the user to insert HTs in FPGA resources such as Look-Up Tables (LUTs), Flip-Flops (FFs), Block Random Access Memory (BRAM), and Digital Signal Processors (DSPs). Despite the claimed automated process, the user is expected to select the trigger nets based on suggestions made by the tool. The results section shows that the number of available nodes in post-map netlists drops significantly, leaving less flexibility for Trojan insertion compared to RTL codes. Reverse engineering tools can also be used to identify security-critical circuitry in designs, which can direct attackers to insert efficient HTs. 
Fyrbiak _et al._[18] introduced HAL, a gate-level netlist reverse engineering tool that offers both offensive reverse engineering strategies and defensive measures, such as developing arbitrary Trojan detection techniques. The authors believe that adversaries are more likely to insert HTs through reverse engineering techniques and are less likely to have direct access to the original HDL codes. A hardware Trojan that leaks cryptographic keys has been inserted with the tool; nonetheless, insertion requires human effort, which hinders the process of producing a large HT dataset [19]. Cruz _et al._[10] tried to address the benchmark shortcomings by presenting a toolset capable of inserting a variety of HTs based on the parameters passed to the toolset. Their software inserts HTs with the following configuration parameters: the number of trigger nets, the number of rare nets among the trigger nodes, a rare-net threshold (computed with functional simulation), the number of HT instances to be inserted, the HT effect, the activation method, its type, and the choice of payload. Despite increasing the variety of inserted HTs, there is no solution for finding the optimal trigger and payload nets. The TRIT benchmark set generated by this tool is available on Trust-Hub [16]. Cruz _et al._[19] propose MIMIC, an ML framework for automatically generating Trojan benchmarks. The authors extracted \(16\) functional and structural features from existing Trojan samples. Then, they trained ML models and generated a large number of hypothetical Trojans called _virtual Trojans_ for a given design. The virtual Trojans are then compared to a reference Trojan model and ranked. Finally, the selected Trojan is inserted into the target circuit using suitable trigger and payload nets. The HT insertion process is extremely convoluted, requiring multiple stages and expertise. MIMIC is not released to the public, and rebuilding the tool from their work is an extensive process. MIMIC's HT insertion criteria are very similar to [10], and it suffers from the same shortcomings as [10]. In an attempt to deceive machine learning HT detection approaches, Nozawa _et al._[20] have devised adversarial examples. Their proposed method replaces the HT instance with its logically equivalent circuit so that the classification algorithm erroneously disregards it. To design the best adversarial example, the authors have defined two parameters: the Trojan-net concealment degree (TCD), which is tuned to maximize the loss function of the neural network in the detection process, and the modification evaluating value (MEV), which should be minimized to have the least impact on circuits. These two metrics help the attacker look for more effective logical equivalents and diversify HTs. The equivalent HTs are inserted in Trust-Hub benchmarks, and they decrease detection accuracy significantly. Sarihi _et al._[14] (our own work) insert a large number of HTs into ISCAS-85 benchmarks with Reinforcement Learning (RL). The HT circuit is an agent that interacts with the environment (the circuit) by taking \(5\) different actions (next level, previous level, same level up, same level down, no action) for each trigger input. Level denotes the logic level in the combinational circuits. The agent moves the Trojan inputs throughout the circuit and explores various locations suitable for embedding HTs. 
Triggers are selected according to a set of SCOAP (Sandia Controllability/Observability Analysis Program [21]) parameters, _i.e._, a combination of controllability and observability. The agent is rewarded in proportion to the number of circuit inputs it can engage in the HT activation process. Gohil _et al._[22] proposed ATTRITION, another RL-based HT insertion platform where signal probability is the target upon which the trigger nets are selected. The agent tries to find a set of so-called _compatible_ rare nets, _i.e._, a group of rare nets that can be activated together with an input test vector. The test vector is generated using a SAT-solver. The authors also propose a pruning technique to limit the search space for the agent so that it produces more HTs in a shorter period. The tool is claimed to be open-source, but only the source code was released. Table I summarizes the existing artifacts and research in the HT insertion space. It represents the target technology (\(2^{nd}\) column); summarizes the insertion criteria (\(3^{rd}\) column); shows if the tool is automated (\(4^{th}\) column), and if the tool or its artifacts are openly released (\(5^{th}\) column). \begin{table} \begin{tabular}{|c|l|c|c|c|} \hline **Tool** & **Domain** & **Insertion Criteria** & **Automatic** & **Open-Source** \\ \hline \hline Trust-Hub [3] & ASIC-FPGA & Secret Leakage, Signal Prob. & ✗ & ✓ \\ \hline HAL [18] & ASIC-FPGA & Neighborhood Control Value & ✗ & ✗ \\ \hline TAINT [17] & FPGA & Not Mentioned & ✗ & ✗ \\ \hline TRIT [10] & ASIC & Signal Prob. & ✓ & ✓ \\ \hline Nozawa _et al._[13] & ASIC & Transition Prob. & ✓ & ✗ \\ \hline Nozawa _et al._[20] & ASIC & Same as [13] & ✗ & ✗ \\ \hline MIMIC [19] & ASIC & Funct. \& Struct. Features & ✓ & ✗ \\ \hline Sarihi _et al._[14] & ASIC & SCOAP Parameters & ✓ & ✗ \\ \hline ATTRITION [22] & ASIC & Signal Prob. & ✓ & ✗ \\ \hline \end{tabular} \end{table} TABLE I: Survey of previous HT insertion tools. ### _Hardware Trojan Detection_ Chakraborty _et al._[23] introduced MERO, a test vector generator that tries to trigger possible HTs by exciting rarely-active nets multiple times. The algorithm's efficacy is tested against randomly generated HTs with rare triggers. MERO's detection rate shrinks significantly as circuit size grows. Hasegawa _et al._[7] have proposed a machine-learning method for HT detection. The method extracts \(51\) circuit features from the Trust-Hub benchmarks to train a random forest classifier that eventually decides whether a design is HT-free or not. The HT classifier is trained on a limited HT dataset with an inherent bias from its insertion phase. Lyu _et al._[11] proposed TARMAC, which maps the trigger activation problem to the clique cover problem, _i.e._, treating the netlist as a graph. They utilized a SAT-solver to generate the test vector for each maximal satisfiable clique. The method lacks scalability, as it must run on each suspect circuit separately. Also, the achieved performance is not stable [2]. Implementation of the method is neither trivial nor available online for researchers [22]. TGRL is an RL framework used to detect HTs [2]. The agent decides whether to flip a bit in the test vector according to an observed probability distribution. The reward function, which is a combination of the number of activated nets and their SCOAP [24] parameters, pushes the agent to activate as many signals as possible. Despite its higher HT detection rate than MERO and TARMAC, the algorithm was not tested on any HT benchmarks [22]. DETERRENT, an RL-based detection method [9], finds the smallest set of test vectors that activate multiple combinations of trigger nets. The RL state is a subset of all possible rare nets, and actions append other rare nets to this subset. The authors used a SAT-solver to determine if actions are compatible with the rare nets in the subsets, and they only focus on signal-switching activities as their target. 
The HW2VEC tool [25] converts RTL-level and gate-level designs into a dataflow graph and an abstract syntax tree to extract a feature set that represents the structural information of the design. The extracted features are used to train a graph neural network to determine whether a design is infected with HTs or not. The authors test the tool with \(34\) circuits infected by in-house generated HTs. It is very important to note that out of the methods reviewed above (and others studied but not discussed here), the only publicly available tool is HW2VEC. Table II summarizes the previous works in HT detection, listing the criteria researchers have used in detecting HTs (\(2^{nd}\) column) and the open-source status of each work (\(3^{rd}\) column). \begin{table} \begin{tabular}{|c|c|c|} \hline **Study** & **Detection Basis** & **Open-Source** \\ \hline \hline MERO [23] & Switching Activity & ✗ \\ \hline Hasegawa _et al._[7] & Netlist Features & ✗ \\ \hline TARMAC [11] & Switching Activity & ✗ \\ \hline TGRL [2] & Switching Activity & ✗ \\ \hline DETERRENT [9] & Switching Activity & ✗ \\ \hline HW2VEC [25] & Graph Structural Info. & ✓ \\ \hline \end{tabular} \end{table} TABLE II: Survey of previous HT detection tools. ## III The Proposed HT Insertion Figure 2 shows the flow of the proposed HT insertion tool. The first step creates a graph representation of the flattened netlist of the circuit. The Yosys Open Synthesis Suite [26] translates the HDL (Verilog) source of the circuit into a JSON (JavaScript Object Notation) [27] netlist, which enables us to parse the internal graph representation of the circuit. Next, the tool finds a set of rare nets to be used as HT trigger nets (this step is described in detail in Subsection III-A). Finally, an RL agent uses the rare-net information and attempts to insert an HT so as to maximize a rewarding function, as described in Section III-B. ### _Rare Nets Extraction_ We use the parameters introduced in [8] to identify trigger nets. These parameters are defined as functions of net _controllability_ and _observability_. Controllability measures the difficulty of setting a particular net in a design to either _'0'_ or _'1'_. Observability, on the other hand, is the difficulty of propagating a net value to at least one of the circuit's primary outputs [21]. The first parameter is called the HT trigger susceptibility parameter, and it is derived from the fact that low-switching nets mainly have a large difference between their controllability values. Equation 1 describes this parameter: \[HTS(Net_{i})=\frac{|CC1(Net_{i})-CC0(Net_{i})|}{Max(CC1(Net_{i}),CC0(Net_{i}))} \tag{1}\] where \(HTS\) is the HT trigger susceptibility parameter of the net; \(CC0(Net_{i})\) and \(CC1(Net_{i})\) are the combinational controllability \(0\) and \(1\) of \(Net_{i}\), respectively. The \(HTS\) parameter ranges over \([0,1)\) such that higher values correlate with lower activity on the net. The other parameter, specified in Equation 2, measures the ratio of observability to controllability: \[OCR(Net_{i})=\frac{CO(Net_{i})}{CC1(Net_{i})+CC0(Net_{i})} \tag{2}\] where \(OCR\) is the observability-to-controllability ratio and \(CO(Net_{i})\) is the combinational observability of \(Net_{i}\). This equation requires that HT trigger nets be very hard to control, but not too hard to observe. Unlike the \(HTS\) parameter, \(OCR\) is not bounded, and it belongs to the interval \([0,\infty)\). We will specify thresholds (see Section VI) for each parameter and use them as filters to populate the set of rarely-activated nets for our tool. 
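As an illustration, a minimal sketch of this filtering step follows; the layout of the `scoap` mapping from net ids to \((CC0, CC1, CO)\) triples, and all function names, are our assumptions.

```python
def hts(cc0, cc1):
    """HT trigger susceptibility, Eq. 1: close to 1 for low-activity nets."""
    return abs(cc1 - cc0) / max(cc1, cc0)

def ocr(cc0, cc1, co):
    """Observability-to-controllability ratio, Eq. 2."""
    return co / (cc1 + cc0)

def rare_nets(scoap, t_hts, t_ocr):
    """Nets that are hard to control (HTS > T_HTS) yet not too hard
    to observe (OCR < T_OCR)."""
    return [net for net, (cc0, cc1, co) in scoap.items()
            if hts(cc0, cc1) > t_hts and ocr(cc0, cc1, co) < t_ocr]
```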
### _RL-Based HT Insertion_ The RL environment is, in fact, the circuit in which the agent is trying to insert HTs. The agent's action is to insert combinational HTs whose trigger nets are ANDed and whose payload is an XOR gate (the same as in Figure 1). The RL agent starts from a reset condition and then takes a series of actions that eventually insert HTs in the circuit. Different HT insertion options are represented with a state vector in each circuit. For a given HT, the state vector is comprised of \(s_{t}=[s_{1},s_{2},...,s_{n-2},s_{n-1},s_{n}]\) where \(s_{1}\) through \(s_{n-2}\) are the logic levels of the HT inputs, and \(s_{n-1}\) and \(s_{n}\) are the logic levels of the target net and the output of the XOR payload, respectively. Figure 3 shows an example of how we conduct the circuit levelization. Here, the circuit Primary Inputs (PIs) are considered level \(0\). The output level of each gate is computed by Equation 3: \[Level(output)=MAX(Level(in_{1}),Level(in_{2}))+1 \tag{3}\] As an example, the HT in Figure 4 (in yellow) has the state vector \(s_{t}=[2,1,3,4]\). The action space of the described HT agent is multi-discrete, _i.e._, each input of the HT may choose an action from a set of five available actions. These actions are: * _Next level_: the input of the HT moves to one of the nets that are one level higher than the current net level. * _Previous level_: the input of the HT moves to one of the nets that are one level lower than the current net level. * _Same level up_: the input of the HT moves to one of the nets at the same level as the current net level. The net is picked by pointing to the next net in the ascending list of net ids for the given level. * _Same level down_: the input of the HT moves to one of the nets at the same level as the current net level. The net is picked by pointing to the previous net in the ascending list of net ids for the given level. * _No action_: the input of the HT does not move. If an action leads the agent to step outside the circuit boundaries, it is substituted with a "No action". The action space is also represented by a vector whose size is equal to the number of HT inputs, and each entry can be one of the five actions above, e.g., for the HT in Figure 4, the action space would be \(a_{t}=[a_{1},a_{2}]\) since it has two inputs. Hypothetical actions for the first and the second inputs can be the same level up/down and next/previous level, respectively. 
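A minimal sketch (function names are ours) of the levelization in Equation 3 and of this multi-discrete action encoding, using the NetworkX and OpenAI Gym libraries that our toolset builds on:

```python
import networkx as nx
from gym import spaces

def levelize(circuit):
    """Logic levels per Equation 3: primary inputs (in-degree 0) are level 0;
    each gate output is max(level of its inputs) + 1. Assumes a DAG."""
    levels = {}
    for node in nx.topological_sort(circuit):
        preds = list(circuit.predecessors(node))
        levels[node] = 0 if not preds else 1 + max(levels[p] for p in preds)
    return levels

def insertion_action_space(n_trigger_inputs):
    """One independent 5-way choice (next/previous level, same level
    up/down, no action) per HT trigger input."""
    return spaces.MultiDiscrete([5] * n_trigger_inputs)
```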
We have used the OpenAI Gym [28] environment to implement our RL agent. The first used method is called \(reset\_environment()\) which resets the environment before each episode and returns the initial location of the agent HT (line \(5\)). The HT is randomly inserted within the circuit according to the following set of rules. * Rule 1) Trigger nets are selected randomly from the list of the total nets. * Rule 2) Each net can drive a maximum of one trigger net. * Rule 3) Trigger nets cannot be assigned as the target. * Rule 4) The target net is selected with respect to the level of trigger nets. To prevent forming combinational loops, we specify that the level of the target net should Fig. 4: Obtaining the state vector in the presence of an HT in the circuit. Fig. 3: Levelizing a circuit. The output level of each digital gate is computed by \(max(Level(in1),Level(in2))+1\). Fig. 2: The proposed RL-based HT insertion tool flow. be greater than that of the trigger nets. In each episode of the training process, we keep the target net unchanged to help the RL algorithm converge faster. Instead of manually specifying a target net, we let the algorithm explore the environment and choose target net. The terminal state variable \(TS\) is set to \(False\) to check the termination condition for each episode. When the level of the trigger nets reaches the level of the target net, or the number of steps per episode reaches an allowed maximum (lines 6-7), \(TS\) becomes \(True\) which terminates the episode. The training process of the agent takes place in a loop where actions are being issued, rewards are collected, the state is updated, and eventually, the updated graph is returned. To test the value of an action taken by the RL agent (meaning if the HT can be triggered with at least one input pattern), we use \(PODEM\) (Path-Oriented Decision Making), an automatic test pattern generator [29] (line 9). This algorithm uses a series of backtracing and forward implications to find a vector that activates the inserted HT. If the HT payload propagates through at least one of the circuit outputs, the action gains a reward proportional to the number of rare triggers on the HT. After the number of rare triggers is counted in line \(10\), the agent is rewarded in lines \(11\) through \(25\). The rewarding scheme is designed such that the agent would start finding HTs with \(1\) rare trigger net and adds more rare while exploring the environment. Additionally, the exponential reward increase in each case ensures that the agent is highly encouraged to find HTs that have at least 3 or more rare trigger nets. In case an HT is not activated with \(PODEM\) or no rare nets are among the HT triggers, the agent will be rewarded \(-1\). Since the agent is unlikely to find high-reward HTs at the beginning of the exploration stage, the first two rewarding cases (\(temp_{reward}=1\) and \(temp_{reward}=2\)) should be set such that the agent sees enough positive rewarding improvements, yet be more eager to find more HTs that yield higher rewards. The reward values are assigned to different cases after conducting extensive experiments with the RL agent. To train the RL agent, we use the \(PPO\) (Proximal Policy Optimization) [30] RL algorithm. PPO can train agents with multi-discrete action spaces in discrete or continuous spaces. The main idea of PPO is that the new updated policy (which is a set of actions to reach the goal) should not deviate too far from the old policy following an update in the algorithm. 
Finally, when the HTs are inserted, the toolset outputs Verilog gate-level netlist files that contain the malicious HTs (line \(30\)). ## IV The Proposed HT Detection From a detection perspective, we must determine whether a given circuit is clean or Trojan-infected. To achieve this goal, we define an RL agent that applies its generated test vectors to circuits and checks for any deviation at the circuits' primary outputs with respect to the expected outputs (golden model). The agent interacts with the circuit (performs actions) by flipping the vector values, aiming to activate certain internal nets. The action space is an \(n\)-dimensional binary array where \(n\) is the number of circuit primary inputs. The action space vector \(a_{t}\) is defined as \(a_{t}=[a_{1},a_{2},...,a_{n}]\). The agent decides either to toggle each \(a_{i}\) to transition to another state or to leave it unchanged. \(a_{i}=0\) denotes that the value of the \(i^{th}\) bit of the input vector should remain unchanged from the previous test vector. In contrast, \(a_{i}=1\) means that the \(i^{th}\) input bit should flip. The RL agent follows a policy \(\pi\) to decide which actions should be commenced at each state. The \(\pi\) policy is updated using a policy gradient method [31], where the agent commences actions based on the probability distribution given by the \(\pi\) policy. The assumption is that attackers are likely to choose trigger nets that hold a consistent value (\(0\) or \(1\)) most of the time. Thus, a detector aims to activate as many of these dormant nets as possible. We consider two different approaches for identifying such rare nets (a sketch of the first approach follows this list): **1) Dynamic Simulation**: We feed each circuit with \(100\)K random test vectors and record the value of each net. Then, we compute the switching activity statistics over the simulation time and set a threshold \(\theta\) for rare nets: a net whose switching activity falls below \(\theta\) is considered rare. \(\theta\) is in the range of \([0,1]\). **2) Static Simulation**: We use the \(HTS\) parameter in Equation 1 and a threshold to find rare nets. Categorizing rare nets with this approach provides the security engineer with an extra option for detection. 
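A minimal sketch of the dynamic approach follows; the `simulate` hook, which evaluates the netlist on one input vector and returns all net values as a 0-1 array, is an assumption of ours.

```python
import numpy as np

def rare_nets_dynamic(simulate, n_inputs, n_nets, theta, n_vectors=100_000):
    """Rare nets via functional simulation: nets whose switching activity
    over random test vectors falls below the threshold theta."""
    rng = np.random.default_rng()
    toggles = np.zeros(n_nets)
    prev = simulate(rng.integers(0, 2, n_inputs))
    for _ in range(n_vectors - 1):
        cur = simulate(rng.integers(0, 2, n_inputs))
        toggles += (cur != prev)
        prev = cur
    activity = toggles / (n_vectors - 1)
    return np.flatnonzero(activity < theta)
```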
In a circuit with \(m\) rare nets, the state space is defined as \(State_{t}=[s_{1},s_{2},...,s_{m}]\) where \(s_{i}\) is associated with the \(i^{th}\) net in the set. If an action (a test vector) sets the \(i^{th}\) net to its rare value, \(s_{i}\) will be \(1\); otherwise, \(s_{i}\) stays at \(0\). As can be inferred, the action and state spaces are multi-binary. Attackers tend to design multi-trigger HTs [10], and this should be considered when HT detectors are designed. The final purpose of our detector is to generate a set of test vectors that can trigger as many rare nets as possible. To achieve this goal, a part of the rewarding function should enumerate rare nets. However, we should avoid over-counting the situations in which a rare net has successive dependent rare nets. An example case is shown in Figure 5, where four nets \(net_{1}\), \(net_{2}\), \(net_{3}\), and \(net_{4}\) (with their switching probabilities and their rare values) are all dependent rare nets. Instead of including all four nets in the state space, we choose the rarest net as the representative net, since activating the rarest net ensures the activation of the others as well. In this example, \(net_{4}\) is selected as the set representative.

Fig. 5: State pruning identifies nets in the same activation path.

This policy helps the RL agent converge on the global minimum faster. Figure 6 summarizes our proposed detection flow. As for rewarding the agent, we consider three rewarding functions, which we explain here. Our multi-rewarding detector enables security engineers to better prepare for attackers with different mindsets. ### _Rewarding function D1_ In our first rewarding function (Algorithm 2), we push the RL agent to build on its current state. We use a copy of the previous state and encourage the agent to generate state vectors that differ from the previous one. The hypothesis is to push the agent toward finding test vectors that lead to various unseen states. The pruned current and previous state vectors and their length are passed as inputs to Algorithm 2 to compute the reward. The rewarding function is composed of an _immediate_ and a _sequential_ part, which are initialized to \(0\) in lines \(1\) and \(2\), respectively. Whenever the state transitions, we iterate through the loop \(K\) times. We calculate the sequential reward by making a one-to-one comparison between the nets in the old and new states. According to lines \(5-11\), the highest reward is given when an action triggers a net that was not triggered in the previous state, _i.e._, \(+40\). If a rare net remains activated in the current state, the agent is still rewarded \(+20\). The worst state transition is one in which an action leads to a rare net losing its rare value, which is rewarded \(-3\). Lastly, if the agent cannot activate a rare net after a state transition, it is rewarded \(-1\). The immediate reward is simply the number of activated rare nets in the new state. The ultimate reward value is a linear combination of the immediate and sequential rewards with coefficients \(\lambda_{1}\) and \(\lambda_{2}\), respectively, which are tunable parameters to be set by the user. Note that we build the state vector with the rare nets obtained from functional simulation. ### _Rewarding function D2_ Algorithm 3 describes our second rewarding function. In this case, the agent gains rewards proportional to the difficulty of the rare nets triggered. First, the reward vector is initialized with a length equal to that of the state vector (line \(1\)). Each element in the reward vector has a one-to-one correspondence with a rare net in the state vector. The reward for each rare net is computed by taking the inverse of the net's switching activity rate (line \(4\)). There are cases where a net might have a switching probability of \(0\). In such cases, activating the net is rewarded 10X the greatest reward in the vector (line \(12\)). Thus, upon the observance of every new state, the agent is rewarded based on the nets that were activated and the reward vector (line \(18\)). If a rare net was not activated, \(-1\) is added to the final reward (line \(20\)). The algorithm aims to encourage the agent to directly trigger the rarest nets in the circuit. 
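A minimal sketch of this reward computation (variable and function names are ours):

```python
import numpy as np

def d2_reward_vector(switching_rates):
    """Per-net reward: inverse switching rate; never-switching nets get
    10x the largest finite reward (Algorithm 3, lines 4 and 12)."""
    rates = np.asarray(switching_rates, dtype=float)
    rewards = np.zeros_like(rates)
    nonzero = rates > 0
    rewards[nonzero] = 1.0 / rates[nonzero]
    if nonzero.any() and (~nonzero).any():
        rewards[~nonzero] = 10.0 * rewards[nonzero].max()
    return rewards

def d2_reward(state, rewards):
    """Sum the weights of activated rare nets; -1 per inactive net
    (Algorithm 3, lines 18 and 20)."""
    state = np.asarray(state)
    return float(rewards[state == 1].sum()) - int((state == 0).sum())
```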
### _Rewarding function D3_ In the third rewarding function, described in Algorithm 4, rare nets are populated based on the threshold of the \(HTS\) parameter computed during the static simulation using Equation 1. When a rare net in the set is activated, the agent is rewarded with the controllability of the rare value (line \(4\)). Otherwise, it receives \(-1\) from the environment (line \(6\)). This scenario aims to investigate controllability-based HT detection with the RL agent. ```
Input: \(State_{pre}\), \(State_{cur}\), State Vector Length \(K\)
Output: \(Reward_{final}\)
1: \(Reward_{Imd}=0\);
2: \(Reward_{Seq}=0\);
3: for \(k\in\{0,\dots,K-1\}\) do
4:   if (\(State_{cur}[k]=0\) and \(State_{pre}[k]=0\)) then
5:     \(Reward_{Seq}+=-1\);
6:   else if (\(State_{cur}[k]=0\) and \(State_{pre}[k]=1\)) then
7:     \(Reward_{Seq}+=-3\);
8:   else if (\(State_{cur}[k]=1\) and \(State_{pre}[k]=0\)) then
9:     \(Reward_{Seq}+=40\);
10:  else if (\(State_{cur}[k]=1\) and \(State_{pre}[k]=1\)) then
11:    \(Reward_{Seq}+=20\);
12:  end if
13: end for
14: \(Reward_{Imd}=State_{cur}.count(1)\)
15: \(Reward_{final}=\lambda_{1}\times Reward_{Seq}+\lambda_{2}\times Reward_{Imd}\)
``` **Algorithm 2** Rewarding Function 1 ## V The Proposed Generic HT-Detection Metric We propose the following methodology to the community to make fair and repeatable comparisons among HT detection methods. In addition, our methodology can help compare different HT insertion techniques for a given HT detector. This methodology yields a confidence value that one can use to compare different HT detection methods.

Fig. 6: The proposed detection flow.

Figure 7 shows the four possible outcomes when an HT detection tool studies a given circuit. From the tool user's perspective, the outcomes are probabilistic events. For example, when an HT-free circuit is being tested, the detection tool may classify it either as an infected or as a clean circuit, _i.e._, \(Prob(FP)+Prob(TN)=1\) where \(FP\) and \(TN\) stand for _False Positive_ and _True Negative_ events. Similarly, for HT-infected circuits, we have \(Prob(FN)+Prob(TP)=1\). \(FN\) and \(FP\) are two undesirable outcomes in which detectors misclassify the given circuit. However, the \(FN\) cases pose a significantly greater danger, as they result in a scenario where we rely on an HT-infected chip, whereas an \(FP\) case means wasting a clean chip by either not selling or not using it. So, we need to know how the user (who might be a security engineer or a company representative) of HT detection tools prioritizes \(FN\) and \(FP\) cases. We define a parameter \(\alpha\) as the ratio of the undesirability of \(FN\) over \(FP\). The tool user determines \(\alpha\) based on the characteristics and details of the application that the chips will eventually be employed in, _e.g._, the risk of using an infected chip in a device with a sensitive application versus using a chip for home appliances. Note that this value is set by the user and not derived from the actual \(FP\) and \(FN\) rates. After \(\alpha\) is set, it is plugged into Equation 4 and a general confidence basis \(Conf.\,Val\) is computed. \[Conf.\,Val=\frac{(1-FP)}{(1/\alpha+FN)} \tag{4}\] Using this metric, a fair comparison between HT detection methods can be made regardless of their detection criteria and implementation methodology. The defined confidence metric combines the two undesirable cases with respect to their severity from the security engineer's point of view. The \(Conf.\,Val\) ranges over \([\frac{0.5\alpha}{1+0.5\alpha}..\alpha]\). The closer the value is to \(\alpha\), the higher the confidence in the detector. The absolute minimum \(Conf.\,Val=1/3\) happens when \(\alpha=1\) and \(FP=FN=50\%\). 
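A minimal sketch of the metric, checking the two endpoints stated above:

```python
def confidence_value(fp, fn, alpha):
    """Eq. 4: combines FP and FN rates, weighted by the user-chosen
    undesirability ratio alpha = undesirability(FN) / undesirability(FP)."""
    return (1.0 - fp) / (1.0 / alpha + fn)

assert abs(confidence_value(0.5, 0.5, 1) - 1 / 3) < 1e-12  # absolute minimum
assert confidence_value(0.0, 0.0, 4) == 4.0                # perfect detector
```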
Note that in this analysis, we assume that \(FN\) and \(FP\) are independent probabilities. We note that, for some detection methods, \(FP\) is always \(0\). For instance, test-based HT detection methods that apply a test vector to excite HTs use a golden-model (HT-free) circuit for comparison and decision-making, and it is impossible for a non-infected circuit to have a mismatch with the golden model (from the perspective of functional simulation). It is thus impossible for such methods to falsely detect an HT in a clean circuit. However, our metric is general and captures such cases.

Fig. 7: Possible outcomes of an HT detection trial.

Fig. 8: Confidence value vs. the percentage of \(FN\) in our detectors, assuming \(\alpha=10\) and \(\alpha=4\).

Figure 8 shows the relation between the confidence value and the \(FN\) percentage for \(\alpha=10\) and \(\alpha=4\) for a test-based detector. As can be observed, the slopes of the graphs differ as \(FN\) approaches zero. The maximum tolerable \(FN\) is defined as an upper bound on the \(FN\) value at which we gain at least half the maximum confidence; setting \(FP=0\) in Equation 4, half the maximum confidence \(\alpha/2\) is reached exactly at \(FN=1/\alpha\). As shown with the dashed lines in Figure 8, the maximum tolerable \(FN\) for \(\alpha=4\) and \(\alpha=10\) is, respectively, \(FN=25\%\) and \(FN=10\%\). Based on the figure, it can be inferred that choosing a higher base \(\alpha\) will make it more challenging to attain higher confidence values. This fact should be considered when choosing \(\alpha\) and interpreting the confidence values. We believe that, in addition to the detection quality, which can be measured by the proposed confidence value, HT detection methods should also be compared from a computational-cost point of view. In particular, we encourage researchers to report the run-time of their methods and the training time, if applicable. ## VI Experimental Results and Discussion This section demonstrates the efficiency of the developed HT insertion and detection framework. For our experiments, we use an AMD EPYC 7702P 64-Core CPU with 512GB of RAM to train and test our agents. The training of the RL agents is done using the Stable Baselines library [32] with MLP (multi-layer perceptron) as the PPO algorithm policy [30]. The benchmark circuits are selected from ISCAS-85 [33] and are converted into equivalent circuit graphs using NetworkX [34]. Our toolset is developed in Python to 1) easily adopt available libraries, and 2) facilitate future expansions and integration with other tools that researchers may develop. Table III provides details of the benchmark circuits used in our experiments. The table presents the number of primary inputs (\(2^{nd}\) column), logic levels (\(3^{rd}\) column), the number of nodes including inputs, outputs, and logic gates (\(4^{th}\) column), and nets (\(5^{th}\) column). We have specified \(T_{OCR}\) and \(T_{HTS}\) such that \(5\%\) of all nets in each circuit are considered as _rare nets_ (\(6^{th}\) and \(7^{th}\) columns, respectively). This was done to enable a fair comparison between the circuits. Finally, the circuit functionality is listed in the \(8^{th}\) column. ### _Timing Complexity_ Table IV provides the time spent on training the HT insertion and detection agents per circuit. 
The \(2^{nd}\) column shows the total timesteps for insertion/detection, and the \(3^{rd}\) column shows the total time spent. We initialize training of the inserting agent in \(c432\) with \(120\)K timesteps and an episode length of \(450\). We increase both values by 10% for each succeeding circuit to ensure enough exploration is made in each circuit as their size grows. As for detection, we start with \(450\)K timesteps and increase it by 10% for subsequent circuits, and we keep the episode length at 10. The short episode length allows the agent to experience different states, thereby increasing the chances of exploration. In the testing phase, the test vectors are collected after running the agent for \(20\)K episodes. In our experiments, \(c6288\) takes the most time in both insertion and detection scenarios (2.5 days), which we argue is reasonable for an attacker and the defense engineer. Note that we have not used any optimization techniques to reduce the number of gates and nets in the benchmarks. Such techniques can notably decrease the RL environment size, and subsequently, the training time. That being said, the impact of optimization techniques on detection/insertion quality should be investigated, but it is not within the scope of this paper.

### _Insertion, Detection, and Confidence Value Figures_

Figure 9 illustrates the logical depth distribution of rare nets in \(c3540\) and \(c5315\) circuits. Despite the fact that rare nets are mostly found in the lower logic levels, there are still a significant number of rare nets in the higher levels, which could potentially contribute to the creation of stealthier hardware Trojans. As explained in section III-B, the level of the HT trigger nets is limited by the payload's level. If a payload is not selected from the higher-level nets, the agent has less opportunity to explore higher-level trigger nets, which might harm the exploration of new HT insertions.

\begin{table} \begin{tabular}{|c|c|c|} \hline Benchmark & \begin{tabular}{c} Insertion/Detection \\ Timesteps \\ \end{tabular} & \begin{tabular}{c} Insertion/Detection \\ Training Time \\ \end{tabular} \\ \hline c432 & 120K / 450K & 1 hr 40 m / 1 hr 7 m \\ \hline c880 & 132K / 495K & 2 hr 36 m / 2 hr 7 m \\ \hline c1355 & 145K / 550K & 3 hr 10 m / 2 hr 27 m \\ \hline c1908 & 160K / 605K & 5 hr 25 m / 2 hr 40 m \\ \hline c2670 & 175K / 665K & 8 hr 1 m / 7 hr 23 m \\ \hline c3540 & 192K / 731K & 12 hr 1 m / 5 hr 24 m \\ \hline c5315 & 211K / 800K & 23 hr 16 m / 15 hr 36 m \\ \hline c6288 & 232K / 880K & 57 hr 18 m / 59 hr 16 m \\ \hline c7552 & 255K / 970K & 26 hr 15 m / 44 hr 15 m \\ \hline \end{tabular} \end{table} TABLE IV: Mean HT detection/insertion training time of the RL algorithm for different ISCAS-85 benchmarks

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline **Benchmark** & **\# of Inputs** & **\# of Levels** & **\# of nodes** & **\# of nets** & \(T_{OCR}\) & \(T_{HTS}\) & **Description** \\ \hline c432 & 36 & 40 & 352 & 492 & 14 & 0.85 & 27-Channel Interrupt Controller \\ \hline c880 & 60 & 43 & 607 & 889 & 15 & 0.82 & 8-Bit ALU \\ \hline c1355 & 41 & 44 & 957 & 1416 & 20 & 0.75 & 32-Bit SEC Circuit \\ \hline c1908 & 33 & 52 & 868 & 1304 & 14 & 0.90 & 16-Bit SEC/DED Circuit \\ \hline c2670 & 233 & 28 & 1323 & 1807 & 20 & 0.83 & 12-bit ALU and Controller \\ \hline c3540 & 50 & 60 & 1539 & 2527 & 15 & 0.84 & 8-bit ALU \\ \hline c5315 & 178 & 63 & 2697 & 4292 & 21 & 0.79 & 9-bit ALU \\ \hline c6288 & 32 & 240 & 4496 & 6801 & 18 & 0.8 & 16x16 Multiplier \\ \hline c7552 & 207 & 53 & 3561 & 5433 & 20 & 0.8 & 32-Bit Adder/Comparator \\
\hline \end{tabular} \end{table} TABLE III: Characteristics of different circuits from the ISCAS-85 benchmark

To enable more exploration, we define the following two payload selection scenarios: 1) \(P_{rand}\), in which the agent selects payloads randomly, and 2) \(P_{high}\), where the payload net is selected such that at least 80% of rare nets are within the agent's sight. Table V provides information about the number of inserted HTs using the \(P_{rand}\) and \(P_{high}\) scenarios for each benchmark circuit. The 2nd and 3rd columns show the total number of HTs successfully inserted by the agent. The numbers following each insertion scenario in the remaining columns show the number of rare nets among the \(5\)-input triggers. For instance, in \(c432\), \(1866\) HTs were inserted under \(P_{rand}\), where \(1688\) of those had \(3\) rare nets, \(160\) of those had \(4\) rare nets, and only \(18\) of those had \(5\) rare nets. As can be observed, in most cases, the number of inserted HTs under \(P_{high}\) is higher than under \(P_{rand}\), with the exception of \(c6288\) and \(c7552\). Also, as the number of rare triggers increases, fewer HTs are inserted. In other words, it becomes more difficult for the RL agent to find HTs with more rare nets. There are some cases under \(P_{rand}-5\) and \(P_{high}-5\) in which the agent could not insert any HTs. These rows in the table are shown as 0, e.g., in \(c2670\). Figure 10 displays the HT detection accuracy percentages for the studied circuits under the \(P_{rand}\) and \(P_{high}\) insertion scenarios. Besides \(D1\), \(D2\), and \(D3\), there is an extra detection scenario called _Combined_, where all the test vectors produced by \(D1\), \(D2\), and \(D3\) are consolidated and applied to the circuits for HT detection. No detection rates are reported in cases where no HTs were inserted. It can be observed from both Table V and Figure 10 that, despite more inserted HTs in the \(P_{high}\) scenario, they do not evade detection any better than in the random payload selection scenario, and the detection rates are almost the same. Nevertheless, the extra inserted HTs under \(P_{high}\) can be used to train better ML HT detectors. Figure 10 also suggests that the existence of D1, D2, and D3 is vital to providing better HT detection coverage. Figure 11 displays the number of times each detector was ranked first in the 9 benchmark circuits under our two insertion strategies. While \(D3\) ties with \(D2\) under \(P_{rand}\), it becomes the best detector under \(P_{high}\). \(D1\) ranks first in only one benchmark circuit in both scenarios. The figure suggests that developing HT detectors solely based on signal activity might not achieve the expected outcomes. Nevertheless, \(D2\) still plays an essential role in overall HT detection accuracy. The impact of the _Combined_ scenario is vital, as it improves the overall detection accuracy in most cases. For instance, in \(c3540\), none of the detectors can perform better than \(60\%\) in the \(P_{rand}\) scenario, while the _Combined_ detection accuracy is nearly \(75\%\). It can also be seen that adding more rare nets to the HT trigger does not necessarily lead to stealthier HTs. For example, in \(c880\), \(c1355\), and \(c1908\), there are HTs with 5 rare trigger nets that were 100% detected, while the detection accuracy was lower for HTs with fewer rare triggers in the same circuits. Another important observation is the different magnitude of detection accuracy among the benchmark circuits.
While we achieve \(100\%\) accuracy in \(c6288\), the same figure is about \(25\%\)-\(30\%\) lower in \(c3540\) and \(c7552\). We know from Table III that \(c6288\) is a multiplier circuit. It contains \(240\) full and half adders arranged in a \(15\times 16\) matrix [35]. \(c3540\), on the other hand, has \(14\) control inputs for multiplexing and masking data. \(c7552\) also contains multiple control signals and bit-masking operations. Our hypothesis is that the detection accuracy is higher in \(c6288\) due to its having fewer control signals that disable circuit components and signals. Accordingly, nets get activated more frequently in \(c6288\) compared to \(c3540\) and \(c7552\). **In other words, these results imply that inserting HTs in control paths can lead to stealthier HTs than inserting them in data paths.**

\begin{table} \begin{tabular}{|c||c|c||c|c||c||c|c||c|} \hline **Benchmark** & \(P_{rand}-Total\) & \(P_{high}-Total\) & \(P_{rand}-3\) & \(P_{high}-3\) & \(P_{rand}-4\) & \(P_{high}-4\) & \(P_{rand}-5\) & \(P_{high}-5\) \\ \hline **c432** & 1866 & 2788 & 1688 & 2331 & 160 & 453 & 18 & 4 \\ \hline **c880** & 1954 & 2116 & 1595 & 1736 & 327 & 373 & 32 & 7 \\ \hline **c1355** & 921 & 1400 & 815 & 1116 & 86 & 268 & 20 & 16 \\ \hline **c1908** & 1247 & 1576 & 1121 & 1240 & 126 & 321 & **0** & 15 \\ \hline **c2670** & 206 & 434 & 188 & 406 & 18 & 28 & **0** & **0** \\ \hline **c3540** & 410 & 767 & 367 & 703 & 41 & 64 & 2 & **0** \\ \hline **c5315** & 434 & 797 & 406 & 719 & 28 & 77 & **0** & 1 \\ \hline **c6288** & 531 & 475 & 459 & 426 & 67 & 46 & 5 & 3 \\ \hline **c7552** & 769 & 683 & 704 & 615 & 64 & 67 & 1 & 1 \\ \hline \end{tabular} \end{table} TABLE V: Number of inserted HTs under \(P_{rand}\) and \(P_{high}\) scenarios for ISCAS-85 benchmark circuits

Fig. 9: Distribution of rare nets in \(c3540\) and \(c5315\)

Another interesting finding pertains to the detection rate in \(c432\). After administering \(100\)K random test patterns, we discovered that the rarest net in the circuit was triggered \(7\%\) of the time, which is in stark contrast to other circuits, where a multitude of nets exhibit switching activity of less than \(1\%\). This implies that the inserted HTs in \(c432\) are probably activated more easily with random test patterns. To prove this hypothesis, we generated \(20\)K random test patterns and passed them to the circuit. These test patterns detected \(99\%\) of HTs, indicating that attackers should carefully evaluate the activity profile of the nets prior to compromising circuits. To further evaluate the efficacy of our HT detectors, we compare the _Combined_ detector with DETERRENT [9] and HW2VEC [25], two state-of-the-art HT detectors. We use the test vectors generated by DETERRENT [9] and collect detection figures for \(4\) reported ISCAS-85 benchmark circuits, namely \(c2670\), \(c5315\), \(c6288\), and \(c7552\)2. We also replicate the steps in HW2VEC [25] by gathering the \(TJ\_RTL\) dataset, which contains \(26\) HT-infected (labeled as \({}^{\prime}1^{\prime}\)) and \(11\) HT-free circuits (labeled as \({}^{\prime}0^{\prime}\)). We train an MLP (multi-layer perceptron) binary classifier using a leave-one-out cross-validation method to detect the HTs. For the test dataset, we collect the graph embeddings of the HTs generated by the inserting RL agent.

Fig. 10: Detection accuracy of D1, D2, D3, and _Combined_ scenarios under \(P_{rand}\) and \(P_{high}\) insertion scenarios in ISCAS-85 benchmark circuits
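For readers who want to reproduce this classification step, the following is a minimal sketch of leave-one-out training of an MLP on graph embeddings; the array shapes and hyperparameters are illustrative assumptions, not the exact configuration used in the paper.

```
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.neural_network import MLPClassifier

# X: one graph embedding per circuit (random stand-in for illustration),
# y: 1 = HT-infected, 0 = HT-free, mimicking the TJ_RTL labeling.
rng = np.random.default_rng(0)
X = rng.normal(size=(37, 64))          # 26 infected + 11 clean circuits
y = np.array([1] * 26 + [0] * 11)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)
# Leave-one-out cross-validation: each circuit is held out exactly once.
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
print(f"LOO accuracy: {scores.mean():.2f}")
```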
Additionally, we add an HT-free version of the original ISCAS-85 circuits and another one synthesized with the academic _NanGateOpenCell45nm_ library to the test batch to record the number of \(TN\)s and \(FP\)s. As was explained and shown in Table II, DETERRENT solely takes signal activity into account, while HW2VEC captures structural information of circuits. Figure 12 shows the detection accuracy of each HT detector for each benchmark circuit. The detection accuracy is reported for the total inserted HTs in Table V for both the \(P_{rand}\) and \(P_{high}\) insertion scenarios. The figure shows that the _Combined_ detector outperforms DETERRENT and HW2VEC in \(3\) of our benchmark circuits. The average detection rate among the \(4\) benchmarks is \(87\%\). While the detection gap between _Combined_ and DETERRENT is significant in \(c2670\) and \(c5315\), it is less evident in \(c6288\) and \(c7552\). HW2VEC, on the other hand, demonstrates minimal detection variance in all \(4\) circuits and outperforms _Combined_ in \(c7552\). Furthermore, HW2VEC illustrates robust performance with HT-free circuits, where it correctly classifies all of them as \(TN\)s, with an \(FP\) rate of \(0\). In another experiment, we train our MLP with the \(TJ\_RTL+EPFL\) [36] benchmark suites to obtain a more balanced dataset (\(26\) instances labeled as \({}^{\prime}1^{\prime}\) and \(30\) instances labeled as \({}^{\prime}0^{\prime}\)). While the \(FP\) rate remains \(0\), similar to the previous experiment, the HT detection accuracy drops to \(48\%\). This sheds light on the shortcomings of the current benchmarks used for training ML HT detectors, and it raises the necessity of a more diverse and larger dataset to attain more dependable results. Overall, these two experiments demonstrate the potential of the RL inserting agent and the advantages of a multi-criteria detector compared to a single-criterion (DETERRENT) HT detector. Table VI shows the individual detection contribution of \(D1\), \(D2\), and \(D3\) towards overall HT detection for each benchmark circuit. The \(2^{nd}\), \(4^{th}\) and \(6^{th}\) columns display the number of HTs exclusively detected by each detector, followed by their contribution to the overall HT detection in the \(3^{rd}\), \(5^{th}\) and \(7^{th}\) columns for \(D1\), \(D2\), and \(D3\), respectively. As can be inferred, \(D3\) has the highest individual contribution, followed by \(D2\) and \(D1\). This table serves as another piece of evidence of the importance of the multi-criteria HT detector for higher accuracy. To compute the confidence value of each detector, the overall detection accuracy of each detector is computed in all 9 circuits under both insertion scenarios. Then, each averaged value is plugged into Equation 4. Assuming \(\alpha=10\), the confidence values for \(D1\), \(D2\), \(D3\), and the _Combined_ scenario are 2.43, 3.36, 3.09, and 5.13, respectively. Thus, the security engineer can put more confidence in the _Combined_ detector, since it has the highest confidence value. DETERRENT's and HW2VEC's confidence values are 1.24 and 4.34, respectively.

### _Average Episode Length and Reward_

Figure 13 shows the average episode length and reward of the inserting and detector RL agents for the \(c5315\) benchmark circuit.
As can be seen from Figure 13.a, initially, the agent leans more towards ending the training episodes to avoid further losses.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline **Circuit** & \(D1\) \# & \(D1\) \% & \(D2\) \# & \(D2\) \% & \(D3\) \# & \(D3\) \% \\ \hline \hline **c432** & 2 & 0.1\% & 275 & 14.74\% & 297 & 15.86\% \\ \hline **c880** & 49 & 2.52\% & 16 & 0.81\% & 16 & 0.81\% \\ \hline **c1355** & 0 & 0\% & 0 & 0\% & 40 & 4.34\% \\ \hline **c1908** & 1 & 0.08\% & 1 & 0.08\% & 13 & 1.04\% \\ \hline **c2670** & 0 & 0\% & 1 & 0.48\% & 66 & 32.03\% \\ \hline **c3540** & 7 & 1.70\% & 29 & 7.07\% & 18 & 4.39\% \\ \hline **c5315** & 1 & 0.24\% & 8 & 1.93\% & 9 & 2.17\% \\ \hline **c6288** & 0 & 0\% & 0 & 0\% & 8 & 1.51\% \\ \hline **c7552** & 16 & 2.08\% & 29 & 3.77\% & 15 & 1.95\% \\ \hline \end{tabular} \end{table} TABLE VI: Individual contribution of \(D1\), \(D2\), and \(D3\) in detection of unique HTs

Fig. 11: Comparing the number of times each of \(D1\), \(D2\) and \(D3\) are ranked as the best detector in our two insertion scenarios

Fig. 12: Comparison of HW2VEC [25], _Combined_, and DETERRENT [9] detection rates under \(P_{rand}\) and \(P_{high}\) insertion scenarios

This trend continues until it gradually starts to increase the episode length, resulting in an increase in reward, which can be observed in Figure 13.b. Eventually, the agent collects more and more rewards. Although the agent accumulates higher rewards in \(P_{high}\), the detection rate is not significantly different from \(P_{rand}\). Figure 13.c demonstrates the agent's ability to augment rewards in our three detection scenarios at an almost steady pace; it learns how to increase rewards along the way. It is worthwhile to point out that the proposed RL framework can save the state of the RL models at arbitrary intervals, which is useful for testing the efficacy of the agent at different timesteps. Note that since the detector's episode length is always \(10\), this data was not included in the graph. The agent can always be trained for longer, but one should consider the trade-off between the amount of time required and the accuracy achieved.

### _Test Vector Size vs. Accuracy_

We also investigate the relationship between the number of applied test vectors and the HT detection accuracy. For this experiment, we collect a set of test vectors that have obtained a certain minimum of reward. To identify such vectors, we run the trained RL agent for \(20\)K episodes. We set a cut-off reward of one-tenth of the reward collected in the last training episode. We collect \(20\)K test vectors that surpass this reward threshold. The HT detection distribution of the collected test vectors is shown in Figure 14 for \(c1908\), \(c3540\), \(c5315\), and \(c7552\) under the \(P_{rand}\) insertion scenario and the \(D2\) detection scenario. The x-axis displays the intervals of the applied test vectors and the y-axis shows the detection percentage of each particular interval. As can be seen, the first \(2\)K vectors have the greatest contribution toward HT detection. This figure is nearly \(90\%\) for \(c1908\), while it is just below \(40\%\) for \(c7552\). A similar comparison can be made between different HT detectors to help us find the relation between quantity (the number of test vectors) and quality (the detection accuracy). Such analysis leads us to answer the question: "Does adding more test vectors to the testing batch improve detection?"
If the answer is negative, adopting more intelligent rewarding functions might be considered to offset this diminishing-returns effect. That being said, in certain instances, adding more test batches leads to higher detection rates. We tested this scenario for \(c3540\), where the _Combined_ detection rate with \(20\)K test patterns is around \(80\%\) in the \(P_{rand}\) scenario. We ran the trained detector agents \(D1\), \(D2\), and \(D3\) for \(20\)K episodes, but this time, we collected all the test patterns that returned positive rewards. Accordingly, we collected \(191\)K, \(183\)K, and \(121\)K test patterns for \(D1\), \(D2\), and \(D3\), and the detection rates were \(89\%\), \(86\%\), and \(97\%\), respectively.

## VII Conclusions

This paper presented the first framework for joint HT insertion and detection. Both the inserting and detection RL agents have tunable rewarding functions that enable researchers to experiment with different approaches to the problem. This framework will accelerate HT research by helping the research community evaluate their insertion/detection ideas with less effort. Our inserting tool provides a robust dataset that can be used for developing finer HT detectors, and our detector tool emphasizes the need for a multi-criteria detector that can cater to different HT insertion mindsets. We also presented a methodology to help the community compare HT detection methods, regardless of their implementation details. We applied this methodology to our HT detectors and discovered that our tool offers the highest confidence in HT detection when using the combined detection scenario. As future work, we would like to explore more benchmarks and create a more diverse HT dataset for the community.
2305.17974
On the Structure of Set-Theoretic Polygon Equations
Polygon equations generalize the prominent pentagon equation in very much the same way as simplex equations generalize the famous Yang-Baxter equation. In particular, they appeared as ''cocycle equations'' in Street's category theory associated with oriented simplices. Whereas the $(N-1)$-simplex equation can be regarded as a realization of the higher Bruhat order $B(N,N-2)$, the $N$-gon equation is a realization of the higher Tamari order $T(N,N-2)$. The latter and its dual $\tilde T(N,N-2)$, associated with which is the dual $N$-gon equation, have been shown to arise as suborders of $B(N,N-2)$ via a ''three-color decomposition''. There are two different reductions of $T(N,N-2)$ and $\tilde T(N,N-2)$, to ${T(N-1,N-3)}$, respectively $\tilde T(N-1,N-3)$. In this work, we explore the corresponding reductions of (dual) polygon equations, which lead to relations between solutions of neighboring (dual) polygon equations. We also elaborate (dual) polygon equations in this respect explicitly up to the octagon equation.
Folkert Müller-Hoissen
2023-05-29T09:33:18Z
http://arxiv.org/abs/2305.17974v4
###### Abstract

Polygon equations generalize the pentagon equation in very much the same way as simplex equations generalize the Yang-Baxter equation. Whereas the \((N-1)\)-simplex equation can be regarded as a realization of the higher Bruhat order \(B(N,N-2)\), the \(N\)-gon equation is a realization of the higher Tamari order \(T(N,N-2)\). The latter and its dual \(\tilde{T}(N,N-2)\), associated with which is the dual \(N\)-gon equation, have been shown to arise as suborders of \(B(N,N-2)\) via a three-color decomposition. There are two different reductions of \(T(N,N-2)\) and \(\tilde{T}(N,N-2)\), to \(T(N-1,N-3)\), respectively \(\tilde{T}(N-1,N-3)\). In this work, we explore the corresponding reductions of (dual) polygon equations, which lead to relations between solutions of neighboring polygon and dual polygon equations. We also elaborate (dual) polygon equations in this respect explicitly up to the octagon equation.

**On the structure of set-theoretic polygon equations**

Folkert Müller-Hoissen

Institut für Theoretische Physik, Friedrich-Hund-Platz 1, 37077 Göttingen, Germany

[email protected]

## 1 Introduction

An infinite family of "polygon equations" has been introduced in [11], based on the combinatorial structure of higher Tamari (partial) orders. The 4-gon equation is the condition of associativity for a binary operation. The dual 4-gon equation expresses coassociativity of a comultiplication. The 5-gon or pentagon equation is the most prominent member of this family and plays a profound role in mathematics and mathematical physics. Of particular relevance is the pentagon equation in the context of bi- and Hopf algebras. The crucial and fairly simple observation here is the following. Given a unital associative algebra \({\cal A}\) with identity element \(1_{\cal A}\), it carries a trivial (left) comultiplication1 \(\Delta_{\ell}:{\cal A}\to{\cal A}\otimes{\cal A}\), \(a\mapsto a\otimes 1_{\cal A}\). Non-trivial comultiplications are then obtained via \(\Delta:=W\Delta_{\ell}W^{-1}\) if \(W\in{\cal A}\otimes{\cal A}\) is an invertible solution of the pentagon equation2 \(W_{\bf 12}\,W_{\bf 13}\,W_{\bf 23}=W_{\bf 23}\,W_{\bf 12}\) in the threefold tensor product of \({\cal A}\), where the indices determine on which two components of it \(W\) acts. In fact, any comultiplication on a finite-dimensional algebra can be expressed in this way [46]. But also in the rigorous framework of infinite-dimensional \(C^{*}\)-algebras such an expression for comultiplication plays a profound role in overcoming technical problems in the duality theory of locally compact groups and for establishing a rigorous setting for quantum groups. Here \(W\) appears as a unitary linear operator under the name "multiplicative unitary" or (generalized) Kac-Takesaki operator (see, in particular, [58, 59, 52, 1, 66, 36, 60]).

Footnote 1: Of course, there is a corresponding result for the “right” trivial comultiplication \(\Delta_{r}:{\cal A}\to{\cal A}\otimes{\cal A}\), \(a\mapsto 1_{\cal A}\otimes a\).

Footnote 2: A somewhat more general result can be found in [8] and involves a “mixed” or “entwining” pentagon equation. Corresponding versions of polygon equations, involving several different maps, have been treated in full generality in [11]. Also see [30], for example.

The pentagon equation, sometimes called "fusion equation", shows up in conformal field theory as a condition for the fusion matrix, which arises from operator product expansion of chiral vertex operators, see [47], for example.
The pentagon equation plays a crucial role as an associativity constraint for the basic functor of a monoidal category, notably because of Mac Lane's coherence theorem [38] (also see [31]). This in turn has roots in Tamari's work on relaxing associativity and Stasheff's work on homotopy associativity [53, 54]. One way to obtain topological invariants of manifolds is via triangulations, i.e., decompositions into simplices. To each simplex one assigns a mathematical object and builds from all of them an object (e.g., a "state sum") assigned to a triangulation. This has to be done in such a way that the latter object is invariant under manipulations of triangulations called Pachner moves or bistellar flips ([48], also see, e.g., [37]). In three dimensions, the simplices are tetrahedra. A Pachner (2,3)-move splits two adjacent tetrahedra into three, and this requires that the objects associated with the tetrahedra, constituting a triangulation, satisfy a pentagon relation. Because of the pentagonal Biedenharn-Elliott identity [4], a Wigner 6-j symbol, or a generalization of it, is thus a choice for the object assigned to a tetrahedron [49, 62, 61].3

Footnote 3: Essentially, (generalized) 6-j symbols arise as follows. If there is a direct sum decomposition \(V_{i}\otimes V_{j}=\bigoplus_{\ell}H^{\ell}_{ij}\otimes V_{\ell}\) of vector spaces, the associator isomorphism \((V_{i}\otimes V_{j})\otimes V_{k}\to V_{i}\otimes(V_{j}\otimes V_{k})\) of a monoidal category induces maps \[\left\{\begin{array}{ccc}i&j&\ell\\ k&m&n\end{array}\right\}:H^{\ell}_{ij}\otimes H^{m}_{\ell k}\to H^{m}_{in}\otimes H^{n}_{jk}\,.\]

In [3] it was shown that the Wheeler-DeWitt equation for three-dimensional general relativity reduces to the pentagon relation. There is a comprehensive literature around the idea of representing Pachner moves of a triangulation in three dimensions by a solution of the pentagon equation (or a pentagon relation), see in particular [22, 7, 32, 2, 57, 45]. In four dimensions, the dual hexagon equation plays a corresponding role [27] (also see Remark 6.2 below), in five dimensions it is the dual heptagon equation [34, 33, 35]. In the latter context it is also worth mentioning that the higher Tamari orders, introduced in [11] and underlying polygon equations, are equivalent [65] to higher Stasheff-Tamari orders [14, 21] on the set of triangulations of a cyclic polytope. A pentagonal relation is also satisfied by the Rogers [50] and quantum dilogarithms4 [17, 16]. Although it does not have the structure of the pentagon equation as considered in this work, there are certain relations, see, e.g., Example 7.20 below.5

Footnote 4: Also the quantum dilogarithm can be understood as a 6-j symbol [22].

Footnote 5: Also Drinfeld associators of quasi-Hopf algebras (see, e.g., [15]) are subject to a certain pentagon equation, but they are elements of a triple tensor product and therefore the corresponding pentagon equation is different from the standard one.

Comparatively little is known so far about higher polygon equations. The dual tetragon, pentagon and hexagon equations appeared in category theory as a 2-, 3-, respectively 4-cocycle condition [55, 56]. This leads us to the following.

**Conjecture 1.1**.: _The dual \((N+2)\)-gon equation is the \(N\)-cocycle condition in the cohomology defined in [55]._

From solutions of polygon equations and their duals, special solutions of simplex equations can be obtained [39, 29, 11, 51, 30].
A class of solutions of odd polygon equations with maps acting on direct sums has recently been obtained in [9]. In the present work we will explore polygon equations in the set-theoretic setting. Sections 2 and 3 recall from [11] the definition of higher Bruhat and higher Tamari orders, and of simplex and polygon equations. Section 4 deals with reductions of polygon equations and contains the most general results of this work. Section 5 presents the first few examples of polygon equations, extending some results obtained in [11]. Section 6 presents the dual versions of these equations. Section 7 deals with the special, but most important, case of \(N\)-gon equations for a single map, acting between Cartesian products of a set \(\mathcal{U}\). Partly directly, or from the general results of Section 4, we derive relations between solutions of a (dual) \(N\)-gon and (dual) \((N+1)\)-gon equation, \(N>2\). Section 8 contains some concluding remarks.

## 2 Basics of higher Bruhat and higher Tamari orders

For a non-empty finite subset \(M\) of \(\mathbb{N}\), and \(n\in\mathbb{N}\), \(1\leq n\leq|M|\) (where \(|M|\) is the cardinality of \(M\)), let \(\binom{M}{n}\) denote the set of \(n\)-element subsets of \(M\). The _packet_ \(P(M)\) of \(M\) is the set of \((|M|-1)\)-element subsets of \(M\). We write \(\overrightarrow{P}(M)\) for \(P(M)\) in lexicographic order, and \(\overleftarrow{P}(M)\) for \(P(M)\) in reverse lexicographic order. Let \(N\in\mathbb{N}\), \(N>1\), and \([N]=\{1,\ldots,N\}\). A _linear order_ (permutation) \(\rho\) of \(\binom{[N]}{n}\), \(n\in\mathbb{N}\), \(n<N\), is called _admissible_ if, for any \(K\in\binom{[N]}{n+1}\), the packet \(P(K)\) is contained in \(\rho\) in lexicographic or in reverse lexicographic order. Let \(A(N,n)\) denote the set of admissible linear orders of \(\binom{[N]}{n}\). An equivalence relation is defined on \(A(N,n)\) by \(\rho\sim\rho^{\prime}\) iff \(\rho\) and \(\rho^{\prime}\) only differ by exchange of two neighboring elements, not both contained in some packet. Then \(A(N,n)/\sim\), supplied with the partial order given via inversions of lexicographically ordered packets, \(\overrightarrow{P}(K)\mapsto\overleftarrow{P}(K)\), is the _higher Bruhat order_ \(B(N,n)\). Next we consider the splitting of a packet, \(P(K)=P_{o}(K)\cup P_{e}(K)\), where \(P_{o}(K)\) (\(P_{e}(K)\)) is the half-packet consisting of elements with odd (even) position in the lexicographically ordered \(P(K)\). We say an element \(J\in P_{o}(K)\) is _blue_ in \(\overrightarrow{P}(K)\) and _red_ in \(\overleftarrow{P}(K)\); an element \(J\in P_{e}(K)\) is _red_ in \(\overrightarrow{P}(K)\) and _blue_ in \(\overleftarrow{P}(K)\). \(J\in P(K)\) is _blue_ (_red_) in \(\rho\in A(N,n)\) if \(J\) is _blue_ (_red_) with respect to all \(K\) for which \(J\in P(K)\) and either \(\overrightarrow{P}(K)\) or \(\overleftarrow{P}(K)\) is a subsequence of \(\rho\). It can happen that \(J\) is _blue_ in \(\rho\in A(N,n)\) with respect to some \(K\) and _red_ with respect to another \(K^{\prime}\). In such a case we color it _green_. By \(\rho^{(b)},\rho^{(r)},\rho^{(g)}\) we denote the blue, red, respectively green subsequence of \(\rho\). It has been shown in [11] that the projections \(B(N,n)\to B^{(c)}(N,n)\), \([\rho]\mapsto[\rho^{(c)}]\), \(c\in\{b,r,g\}\), are well-defined and that \(B^{(c)}(N,n)\) inherits a partial order from \(B(N,n)\). \(T(N,n):=B^{(b)}(N,n)\) are the _higher Tamari orders_6 and \(\tilde{T}(N,n):=B^{(r)}(N,n)\) are called _dual higher Tamari orders_.
The inversion operation in case of \(T(N,n)\) is \(\overrightarrow{P}_{o}(K)\mapsto\overleftarrow{P}_{e}(K)\), \(K\in\binom{[N]}{n+1}\). In case of \(\tilde{T}(N,n)\), it is \(\overrightarrow{P}_{e}(K)\mapsto\overleftarrow{P}_{o}(K)\).

Footnote 6: It has been conjectured in [10, 11] and proved in [65] that the higher Tamari orders are equivalent to the first higher Stasheff–Tamari orders in [14, 21].

## 3 A brief account of simplex and polygon equations

Let \(N>2\). With \(J\in\binom{[N]}{N-2}\) we associate a set \(\mathcal{U}_{J}\). For \(\rho\in A(N,N-2)\), let \(\mathcal{U}_{\rho}\) be the correspondingly ordered Cartesian product of the \(\mathcal{U}_{J}\), \(J\in\rho\). With \(K\in\binom{[N]}{N-1}=P([N])\) we associate a map

\[R_{K}:\mathcal{U}_{\overrightarrow{P}(K)}\longrightarrow\mathcal{U}_{\overleftarrow{P}(K)}\,.\]

The \((N-1)\)_-simplex equation_

\[R_{\overrightarrow{P}([N])}=R_{\overleftarrow{P}([N])} \tag{3.1}\]

may then be regarded as a realization of \(B(N,N-2)\). The expressions on both sides are compositions of maps \(R_{K}\), \(K\in P([N])\), applied on the left hand side in lexicographic, on the right hand side in reverse lexicographic order. Writing \(\overrightarrow{P}([N])=(K_{1},\ldots,K_{N})\), we have \(R_{\overrightarrow{P}([N])}=R_{K_{N}}\cdots R_{K_{1}}\). Hence, as a composition, \(R_{\overrightarrow{P}([N])}\) is actually in reverse lexicographic order, but the maps are applied in lexicographic order. We have to add the following rules in order for (3.1) to make sense.

1. Both sides of (3.1) act on \(\mathcal{U}_{\alpha}\) and map to \(\mathcal{U}_{\omega}\), where \(\alpha\) (\(\omega\)) is \({[N]\choose N-2}\) in lexicographic (reverse lexicographic) order.

2. Each of the maps \(R_{K}\) acts at consecutive positions in the respective multiple Cartesian product of spaces.

3. If \(J,J^{\prime}\in{[N]\choose N-2}\) are such that they do not both belong to \(P(K)\) for any \(K\in{[N]\choose N-1}\), then \[\cdots\times\mathcal{U}_{J}\times\mathcal{U}_{J^{\prime}}\times\cdots\ \boldsymbol{\sim}\ \cdots\times\mathcal{U}_{J^{\prime}}\times\mathcal{U}_{J}\times\cdots\] imposes an equivalence relation on Cartesian products.

Starting with \(\mathcal{U}_{\alpha}\), it may be necessary to use the third rule in order to arrange that \(R_{K_{1}}\), respectively \(R_{K_{N}}\), can be applied, which means that the sets associated with elements of \(P(K_{1})\), respectively \(P(K_{N})\), have to be in lexicographic order and at neighboring positions in the respective multiple Cartesian product of sets. After an application of some \(R_{K}\), it may again be necessary to use the third rule in order to arrange a further application of a map \(R_{K^{\prime}}\), or to achieve the final reverse lexicographic order \(\mathcal{U}_{\omega}\). That this works is a consequence of the underlying structure of higher Bruhat orders [40, 41, 11]. We have to stress that (3.1) is _not_ the form in which simplex equations usually appear in the literature; see [11] for the relation and references.
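To make the packet combinatorics concrete, here is a small illustrative Python helper (our own sketch, not part of [11]) that computes the lexicographically ordered packet of a set and its odd/even half-packets:

```
from itertools import combinations

def packet(K):
    """Lexicographically ordered packet of K: all (|K|-1)-element subsets."""
    K = sorted(K)
    return sorted(combinations(K, len(K) - 1))

def half_packets(K):
    """Split the packet into P_o(K) (odd positions) and P_e(K) (even positions)."""
    P = packet(K)
    return P[0::2], P[1::2]

# For K = [5] = {1,...,5}, the packet lists the complements 5^, 4^, ..., 1^:
P_o, P_e = half_packets(range(1, 6))
print(P_o)  # [(1,2,3,4), (1,2,4,5), (2,3,4,5)]  ~  5^, 3^, 1^
print(P_e)  # [(1,2,3,5), (1,3,4,5)]             ~  4^, 2^
```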
With each \(K\in{[N]\choose N-1}\), we now associate a map

\[T_{K}:\mathcal{U}_{\overrightarrow{P}_{o}(K)}\longrightarrow\mathcal{U}_{\overleftarrow{P}_{e}(K)}\,.\]

Writing \(K=(k_{1},\ldots,k_{N-1})\), with \(k_{i}<k_{i+1}\), \(i=1,\ldots,N-2\), we have

\[\mathcal{U}_{\overrightarrow{P}_{o}(K)} = \mathcal{U}_{K\setminus\{k_{N-1}\}}\times\mathcal{U}_{K\setminus\{k_{N-3}\}}\times\cdots\times\mathcal{U}_{K\setminus\{k_{1+(N\,\mathrm{mod}\,2)}\}}\,,\]
\[\mathcal{U}_{\overleftarrow{P}_{e}(K)} = \mathcal{U}_{K\setminus\{k_{2-(N\,\mathrm{mod}\,2)}\}}\times\cdots\times\mathcal{U}_{K\setminus\{k_{N-4}\}}\times\mathcal{U}_{K\setminus\{k_{N-2}\}}\,.\]

The \(N\)_-gon equation_

\[T_{\overrightarrow{P}_{o}([N])}=T_{\overleftarrow{P}_{e}([N])} \tag{3.2}\]

may be regarded as a realization of \(T(N,N-2)\). It is well defined if we require the following rules [11].

1. Let \(\alpha\) (\(\omega\)) be again \({[N]\choose N-2}\) in lexicographic (reverse lexicographic) order, and let \(\alpha^{(b)}\) and \(\omega^{(b)}\) be the corresponding blue parts. Both sides of (3.2) act on \(\mathcal{U}_{\alpha^{(b)}}\) and map to \(\mathcal{U}_{\omega^{(b)}}\).

2. Each of the maps \(T_{K}\) acts at consecutive positions in the respective multiple Cartesian product of sets.

3. If \(J,J^{\prime}\in{[N]\choose N-2}\) are such that they do not both belong to \(P(K)\) for any \(K\in{[N]\choose N-1}\), then \[\cdots\times\mathcal{U}_{J}\times\mathcal{U}_{J^{\prime}}\times\cdots\ \ \boldsymbol{\sim}\ \cdots\times\mathcal{U}_{J^{\prime}}\times\mathcal{U}_{J}\times\cdots\,.\]

As in the case of simplex equations, in order to apply or work out a polygon equation, we have to check at each step whether a map \(T_{K}\) can be applied directly or whether we first have to use the third rule above in order to achieve a reordering of the respective multiple Cartesian product. In any case, we have to keep track of the numbering of the sets, even if they are identical as sets. It is therefore convenient to realize the above equivalence relation by introducing explicit transposition maps (sometimes called flip or switch maps) in the equations, at the price of ending up with a form of the equation that looks more complicated and apparently loses its universal structure, but is often better suited for applications. Instead of keeping track of the numbering of sets, we then have to keep track of the first position on which a map acts in a multiple Cartesian product. This has been done in [11]. For several polygon equations, we will recall the resulting form in Section 5. In this form, we can best deal with the case of prime interest, where all the sets \(\mathcal{U}_{J}\) are the same and there is only a single map \(T\). If \(N\) is odd, (3.2) can be written as

\[T_{\hat{1}}T_{\hat{3}}\cdots T_{\widehat{N-2}}T_{\widehat{N}}=T_{\widehat{N-1}}T_{\widehat{N-3}}\cdots T_{\hat{2}}\,, \tag{3.3}\]

where \(\hat{k}:=[N]\setminus\{k\}\) (complementary index notation). If \(N\) is even, (3.2) can correspondingly be expressed as

\[T_{\hat{2}}T_{\hat{4}}\cdots T_{\widehat{N-2}}T_{\widehat{N}}=T_{\widehat{N-1}}T_{\widehat{N-3}}\cdots T_{\hat{1}}\,. \tag{3.4}\]
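As an illustration of (3.3) and (3.4), the following sketch (our own, in the spirit of the half-packet helper above) prints the hatted indices of the maps on both sides of the \(N\)-gon equation, listed in the order in which the maps are _applied_; the written compositions in (3.3) and (3.4) are these sequences read from right to left.

```
from itertools import combinations

def ngon_sides(N):
    """Hatted indices of the maps on both sides of the N-gon equation,
    in order of application (cf. (3.3) for odd N and (3.4) for even N)."""
    P = sorted(combinations(range(1, N + 1), N - 1))  # lexicographic packet of [N]
    hat = lambda K: (set(range(1, N + 1)) - set(K)).pop()  # k with K = [N] \ {k}
    lhs = [hat(K) for K in P[0::2]]             # P_o([N]) in lexicographic order
    rhs = [hat(K) for K in reversed(P[1::2])]   # P_e([N]) in reverse lexicographic order
    return lhs, rhs

print(ngon_sides(5))  # ([5, 3, 1], [2, 4])     pentagon: T_1^ T_3^ T_5^ = T_4^ T_2^
print(ngon_sides(6))  # ([6, 4, 2], [1, 3, 5])  hexagon:  T_2^ T_4^ T_6^ = T_5^ T_3^ T_1^
```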
Both sides act on \(\alpha^{(r)}\), which is equal to \(\omega^{(b)}\) totally reversed, and map to \(\omega^{(r)}\), which is equal to \(\alpha^{(b)}\) totally reversed. For odd \(N\), (3.5) is \[\tilde{T}_{\hat{2}}\tilde{T}_{\hat{4}}\cdots\tilde{T}_{\widehat{N-1}}=\tilde{ T}_{\hat{N}}\tilde{T}_{\widehat{N-2}}\cdots\tilde{T}_{\hat{3}}\tilde{T}_{\hat{1} }\,, \tag{3.6}\] which is (3.3) reversed. For even \(N\), we have \[\tilde{T}_{\hat{1}}\tilde{T}_{\hat{3}}\cdots\tilde{T}_{\widehat{N-1}}=\tilde{ T}_{\hat{N}}\tilde{T}_{\widehat{N-2}}\cdots\tilde{T}_{\hat{2}}\,, \tag{3.7}\] which is (3.4) reversed. Simplex equations, and also (dual) polygon equations, are interrelated by a kind of integrability feature, which crucially distinguishes them from similar equations. We refer to [11] for the general structure, but in Section 5 we elaborate this feature for some examples of polygon equations. **Remark 3.1**.: Realizing the equivalence \(\thicksim\) by a transposition map \(\mathcal{P}\), we can think of working in a _symmetric_ monoidal category. Relaxing the symmetry condition, it becomes a braided monoidal category. We will not consider the latter generalization in this work. \(\Box\) ## 4 Reductions of polygon equations For any fixed \(k\in[N+1]\), there is a projection \(\operatorname{pr}_{k}:A(N+1,n+1)\to A(N,n)\), obtained by restricting \(\rho\in A(N+1,n+1)\) to the subsequence consisting only of elements \(K\in\binom{[N+1]}{n+1}\) with \(k\in K\). The set of all these subsequences is in bijection with \(A(N,n)\), simply by deleting \(k\) in each \(K\) and an obvious renumbering. Moreover, the projection is compatible with the equivalence relation \(\sim\) and induces an order-preserving projection \(B(N+1,n+1)\to B(N,n)\). See Remark 2.5 in [11]. But only for \(k\in\{1,N+1\}\), the projection \(\operatorname{pr}_{k}\) is compatible with the \(3\)-color decomposition, see Remark 2.17 in [11]. If \(k=1\), this yields order-preserving projections \(T(N+1,n+1)\to T(N,n)\) and \(\tilde{T}(N+1,n+1)\to\tilde{T}(N,n)\). For \(k=N+1\) we have order-preserving projections \(T(N+1,n+1)\to\tilde{T}(N,n)\) and \(\tilde{T}(N+1,n+1)\to T(N,n)\). In particular, there are thus projections \(T(N+1,N-1)\to T(N,N-2)\) and \(T(N+1,N-1)\to\tilde{T}(N,N-2)\), which then correspond to reductions of the \((N+1)\)-gon to the \(N\)-gon equation, respectively the dual \(N\)-gon equation. In the same way, the projections \(\tilde{T}(N+1,N-1)\to\tilde{T}(N,N-2)\) and \(\tilde{T}(N+1,N-1)\to T(N,N-2)\) lead to reductions of the dual \((N+1)\)-gon to the dual \(N\)-gon equation, respectively the \(N\)-gon equation. ### The first projection This is essentially obtained by dropping those \(T_{K}\), \(K\in\binom{[N+1]}{N}\), where \(1\notin K\), and those sets \(\mathcal{U}_{J}\), where \(1\notin J\). Note that the packet \(P(K)\) has \(N\) elements and, if \(1\in K\), its last member in the lexicographic order is \(K\setminus\{1\}\). Let \(N\) be even and \(K\) such that \(1\in K\). Then \(K\setminus\{1\}\) is the first element of \(\overleftarrow{P}_{e}(K)\). Hence \[T_{K}:\mathcal{U}_{\overleftarrow{P}_{o}(K)}\to\mathcal{U}_{K\setminus\{1\}} \times\mathcal{U}_{\overleftarrow{P}_{e}^{\prime}(K)}\,,\] where \(\overleftarrow{P}_{e}^{\prime}(K)\) is \(\overleftarrow{P}_{e}(K)\) without the element \(K^{\prime}:=K\setminus\{1\}\). 
By disregarding the first component of its codomain, \(T_{K}\) induces a map

\[T_{K^{\prime}}:\mathcal{U}_{\overrightarrow{P}_{o}(K^{\prime})}\to\mathcal{U}_{\overleftarrow{P}_{e}(K^{\prime})}\qquad K^{\prime}\in\binom{\{2,\ldots,N+1\}}{N-1}\,,\]

where, for \(J\in P(K^{\prime})\), we set \(\mathcal{U}_{J}:=\mathcal{U}_{\{1\}\cup J}\). The \((N+1)\)-gon equation

\[T_{\hat{1}}T_{\hat{3}}\cdots T_{\widehat{N-1}}T_{\widehat{N+1}}=T_{\widehat{N}}T_{\widehat{N-2}}\cdots T_{\hat{2}}\]

reduces to

\[T_{\hat{3}}^{\prime}\cdots T_{\widehat{N-1}}^{\prime}T_{\widehat{N+1}}^{\prime}=T_{\widehat{N}}^{\prime}T_{\widehat{N-2}}^{\prime}\cdots T_{\hat{2}}^{\prime}\,.\]

By renaming \(T_{\hat{k}}^{\prime}\) to \(T_{\widehat{k-1}}^{\prime}\), this reads

\[T_{\hat{2}}^{\prime}\cdots T_{\widehat{N-2}}^{\prime}T_{\widehat{N}}^{\prime}=T_{\widehat{N-1}}^{\prime}T_{\widehat{N-3}}^{\prime}\cdots T_{\hat{1}}^{\prime}\,,\]

which has the standard form of the (even) \(N\)-gon equation.

**Theorem 4.1**.: _Let \(N\in\mathbb{N}\) be even. Let maps \(T_{K}\), \(K\in\binom{[N+1]}{N}\), solve the \((N+1)\)-gon equation. For \(K\) with \(1\in K\) let \(T_{K^{\prime}}\), \(K^{\prime}=K\setminus\{1\}\), be obtained from \(T_{K}\) by deleting the first component of its codomain. Then \(\{T_{K^{\prime}}|K^{\prime}\in\binom{\{2,\ldots,N+1\}}{N-1}\}\) solve the \(N\)-gon equation. _

Let \(N\) be odd. Then \(K^{\prime}=K\setminus\{1\}\) is the last element of \(\overrightarrow{P}_{o}(K)\). If

\[T_{K}:\mathcal{U}_{\overrightarrow{P}_{o}^{\prime}(K)}\times\mathcal{U}_{K\setminus\{1\}}\to\mathcal{U}_{\overleftarrow{P}_{e}(K)}\,,\]

where \(\overrightarrow{P}_{o}^{\prime}(K)\) is \(\overrightarrow{P}_{o}(K)\) without its last element, does not depend on the last component of its domain, it induces a map

\[T_{K^{\prime}}^{\prime}:\mathcal{U}_{\overrightarrow{P}_{o}(K^{\prime})}\to\mathcal{U}_{\overleftarrow{P}_{e}(K^{\prime})}\,,\]

again with \(\mathcal{U}_{J}:=\mathcal{U}_{\{1\}\cup J}\) for \(J\in P(K^{\prime})\). The \((N+1)\)-gon equation

\[T_{\hat{2}}T_{\hat{4}}\cdots T_{\widehat{N-1}}T_{\widehat{N+1}}=T_{\widehat{N}}T_{\widehat{N-2}}\cdots T_{\hat{1}}\]

reduces to

\[T^{\prime}_{\hat{2}}T^{\prime}_{\hat{4}}\cdots T^{\prime}_{\widehat{N-1}}T^{\prime}_{\widehat{N+1}}=T^{\prime}_{\widehat{N}}T^{\prime}_{\widehat{N-2}}\cdots T^{\prime}_{\hat{3}}\,.\]

By a shift in the numbering, this is turned into the standard form of the (odd) \(N\)-gon equation,

\[T^{\prime}_{\hat{1}}T^{\prime}_{\hat{3}}\cdots T^{\prime}_{\widehat{N-2}}T^{\prime}_{\widehat{N}}=T^{\prime}_{\widehat{N-1}}T^{\prime}_{\widehat{N-3}}\cdots T^{\prime}_{\hat{2}}\,.\]

**Theorem 4.2**.: _Let \(N\in\mathbb{N}\) be odd, \(N>2\). Let maps \(T_{K}\), \(K\in\binom{[N+1]}{N}\), solve the \((N+1)\)-gon equation. For \(K\) with \(1\in K\), let \(T_{K}\) not depend on the last component of its domain. Let \(T_{K^{\prime}}\), \(K^{\prime}=K\setminus\{1\}\), be obtained from \(T_{K}\) by excluding \(\mathcal{U}_{K^{\prime}}\) from its domain. Then \(\{T_{K^{\prime}}|K^{\prime}\in\binom{\{2,\ldots,N+1\}}{N-1}\}\) solve the \(N\)-gon equation. _

Let us now turn to the _dual_ \((N+1)\)-gon equation and consider the subset of maps \(\tilde{T}_{K}:\mathcal{U}_{\overrightarrow{P}_{e}(K)}\to\mathcal{U}_{\overleftarrow{P}_{o}(K)}\) with \(1\in K\).
If \(N\) is even, then \(K^{\prime}:=K\setminus\{1\}\) is the last element of \(\overrightarrow{P}_{e}(K)\), so that

\[\tilde{T}_{K}:\mathcal{U}_{\overrightarrow{P}_{e}^{\prime}(K)}\times\mathcal{U}_{K\setminus\{1\}}\to\mathcal{U}_{\overleftarrow{P}_{o}(K)}\,,\]

where \(\overrightarrow{P}_{e}^{\prime}(K)\) is \(\overrightarrow{P}_{e}(K)\) without the last element. If \(\tilde{T}_{K}\) does not depend on the last component of its domain, it induces a map

\[\tilde{T}_{K^{\prime}}:\mathcal{U}_{\overrightarrow{P}_{e}(K^{\prime})}\to\mathcal{U}_{\overleftarrow{P}_{o}(K^{\prime})}\qquad K^{\prime}\in\binom{\{2,\ldots,N+1\}}{N-1}\,,\]

where we set \(\mathcal{U}_{J}:=\mathcal{U}_{\{1\}\cup J}\).

**Theorem 4.3**.: _Let \(N\in\mathbb{N}\) be even. Let maps \(\tilde{T}_{K}\), \(K\in\binom{[N+1]}{N}\), solve the dual \((N+1)\)-gon equation. For \(K\) with \(1\in K\), let \(\tilde{T}_{K}\) not depend on the last component of its domain. Let \(\tilde{T}_{K^{\prime}}\), \(K^{\prime}=K\setminus\{1\}\), be obtained from \(\tilde{T}_{K}\) by excluding the last component of its domain. Then \(\{\tilde{T}_{K^{\prime}}|K^{\prime}\in\binom{\{2,\ldots,N+1\}}{N-1}\}\) solve the dual \(N\)-gon equation. _

If \(N\) is odd, then \(K^{\prime}\) is the first element of \(\overleftarrow{P}_{o}(K)\), so that

\[\tilde{T}_{K}:\mathcal{U}_{\overrightarrow{P}_{e}(K)}\to\mathcal{U}_{K\setminus\{1\}}\times\mathcal{U}_{\overleftarrow{P}_{o}^{\prime}(K)}\,,\]

where \(\overleftarrow{P}_{o}^{\prime}(K)\) is \(\overleftarrow{P}_{o}(K)\) without the first element. Now \(\tilde{T}_{K}\) induces a map

\[\tilde{T}_{K^{\prime}}:\mathcal{U}_{\overrightarrow{P}_{e}(K^{\prime})}\to\mathcal{U}_{\overleftarrow{P}_{o}(K^{\prime})}\qquad K^{\prime}\in\binom{\{2,\ldots,N+1\}}{N-1}\,.\]

**Theorem 4.4**.: _Let \(N\in\mathbb{N}\) be odd. Let maps \(\tilde{T}_{K}\), \(K\in\binom{[N+1]}{N}\), solve the dual \((N+1)\)-gon equation. For \(K\) with \(1\in K\) let \(\tilde{T}_{K^{\prime}}\), \(K^{\prime}=K\setminus\{1\}\), be obtained from \(\tilde{T}_{K}\) by excluding the first component of its codomain. Then \(\{\tilde{T}_{K^{\prime}}|K^{\prime}\in\binom{\{2,\ldots,N+1\}}{N-1}\}\) solve the dual \(N\)-gon equation. _

### The second projection

The projection \(T(N+1,N-1)\to\tilde{T}(N,N-2)\) induces a reduction of the \((N+1)\)-gon to the dual \(N\)-gon equation. It is essentially obtained by dropping those \(T_{K}\), \(K\in\binom{[N+1]}{N}\), where \(N+1\notin K\), and those sets \(\mathcal{U}_{J}\), where \(N+1\notin J\). Let now \(K\) be such that \(N+1\in K\). Then \(K^{\prime}:=K\setminus\{N+1\}\) is the first element of \(P(K)\) in lexicographic order, and thus also of \(P_{o}(K)\). Hence, we have maps

\[T_{K}:\mathcal{U}_{K\setminus\{N+1\}}\times\mathcal{U}_{\overrightarrow{P}_{o}^{\prime}(K)}\to\mathcal{U}_{\overleftarrow{P}_{e}(K)}\qquad\quad\forall\ K\in\binom{[N+1]}{N}\,,\ N+1\in K\,,\]

where \(P_{o}^{\prime}(K)\) is \(P_{o}(K)\) without its first element. If these maps do not depend on the first component of their domain, they project to maps

\[\tilde{T}_{K^{\prime}}:\mathcal{U}_{\overrightarrow{P}_{e}(K^{\prime})}\to\mathcal{U}_{\overleftarrow{P}_{o}(K^{\prime})}\qquad\forall\ K^{\prime}\in\binom{[N]}{N-1}\,,\]

where we set \(\mathcal{U}_{J}:=\mathcal{U}_{J\cup\{N+1\}}\) for \(J\in P(K^{\prime})\).
For odd \(N\), the \((N+1)\)-gon equation

\[T_{\hat{2}}T_{\hat{4}}\cdots T_{\widehat{N-1}}T_{\widehat{N+1}}=T_{\widehat{N}}T_{\widehat{N-2}}\cdots T_{\hat{1}}\]

reduces to

\[\tilde{T}_{\hat{2}}\tilde{T}_{\hat{4}}\cdots\tilde{T}_{\widehat{N-1}}=\tilde{T}_{\widehat{N}}\tilde{T}_{\widehat{N-2}}\cdots\tilde{T}_{\hat{3}}\tilde{T}_{\hat{1}}\,,\]

which is the dual \(N\)-gon equation for odd \(N\). For even \(N\), the \((N+1)\)-gon equation

\[T_{\hat{1}}T_{\hat{3}}\cdots T_{\widehat{N-1}}T_{\widehat{N+1}}=T_{\widehat{N}}T_{\widehat{N-2}}\cdots T_{\hat{2}}\]

reduces to

\[\tilde{T}_{\hat{1}}\tilde{T}_{\hat{3}}\cdots\tilde{T}_{\widehat{N-1}}=\tilde{T}_{\widehat{N}}\tilde{T}_{\widehat{N-2}}\cdots\tilde{T}_{\hat{2}}\,,\]

which is the dual \(N\)-gon equation for even \(N\).

**Theorem 4.5**.: _Let \(N\in\mathbb{N}\), \(N>1\). Let maps \(T_{K}\), \(K\in\binom{[N+1]}{N}\), solve the \((N+1)\)-gon equation. For \(K\) with \(N+1\in K\), let \(T_{K}\) not depend on the first component of its domain. Set \(K^{\prime}=K\setminus\{N+1\}\) and let \(\tilde{T}_{K^{\prime}}\) be given by \(T_{K}\) with the first component of its domain excluded. Then \(\{\tilde{T}_{K^{\prime}}\,|\,K^{\prime}\in\binom{[N]}{N-1}\}\) solve the dual \(N\)-gon equation. \(\square\)_

Let us now turn to the corresponding reduction of the _dual_ \((N+1)\)-gon equation and consider maps \(\tilde{T}_{K}:\mathcal{U}_{\overrightarrow{P}_{e}(K)}\to\mathcal{U}_{\overleftarrow{P}_{o}(K)}\) with \(N+1\in K\). Since \(K^{\prime}:=K\setminus\{N+1\}\) is the first element of \(\overrightarrow{P}(K)\), and thus the last element of \(\overleftarrow{P}_{o}(K)\),

\[\tilde{T}_{K}:\mathcal{U}_{\overrightarrow{P}_{e}(K)}\to\mathcal{U}_{\overleftarrow{P}_{o}^{\prime}(K)}\times\mathcal{U}_{K\setminus\{N+1\}}\,,\]

where \(\overleftarrow{P}_{o}^{\prime}(K)\) is \(\overleftarrow{P}_{o}(K)\) without the last element. Hence \(\tilde{T}_{K}\) induces a map

\[T_{K^{\prime}}:\mathcal{U}_{\overrightarrow{P}_{o}(K^{\prime})}\to\mathcal{U}_{\overleftarrow{P}_{e}(K^{\prime})}\,,\]

where we set again \(\mathcal{U}_{J}:=\mathcal{U}_{J\cup\{N+1\}}\) for \(J\in P(K^{\prime})\). The dual \((N+1)\)-gon equation then reduces to the \(N\)-gon equation.

**Theorem 4.6**.: _Let \(N\in\mathbb{N}\), \(N>1\). Let maps \(\tilde{T}_{K}\), \(K\in\binom{[N+1]}{N}\), solve the dual \((N+1)\)-gon equation. For \(K\) with \(N+1\in K\), set \(K^{\prime}=K\setminus\{N+1\}\) and let \(T_{K^{\prime}}\) be given by \(\tilde{T}_{K}\) with the last component of its codomain excluded. Then \(\{T_{K^{\prime}}\,|\,K^{\prime}\in\binom{[N]}{N-1}\}\) solve the \(N\)-gon equation. \(\square\)_

## 5 Examples of polygon equations

In this section we elaborate polygon equations up to the 8-gon equation.

### Trigon equation

This is the equation

\[T_{23}\,T_{12}=T_{13} \tag{5.1}\]

for maps

\[T_{ij}:\mathcal{U}_{i}\to\mathcal{U}_{j}\,,\qquad i<j\,.\]

On the left hand side of (5.1), we mean the _composition_ of two maps. If the sets are the same, \(\mathcal{U}_{i}=\mathcal{U}\), \(i=1,2,3\), and if there is only a single map \(T\), then the trigon equation means that \(T\) is idempotent.

### Tetragon equation

For \(i,j=1,2,3,4\), \(i<j\), let a map \(L_{ij}:\mathcal{U}_{i}\to\mathcal{U}_{j}\) carry a parameter from a set \(\mathcal{U}_{ij}\).
Let us further assume that each of the _local trigon equations_

\[L_{jk}(u_{jk})\,L_{ij}(u_{ij})=L_{ik}(T_{ijk}(u_{ij},u_{jk}))\,,\qquad i<j<k\,,\]

uniquely determines a map

\[T_{ijk}:\mathcal{U}_{ij}\times\mathcal{U}_{jk}\longrightarrow\mathcal{U}_{ik}\,.\]

Since composition of maps is associative, there is a consistency condition (see Fig. 1), which requires that the maps \(T_{ijk}\) satisfy the _tetragon equation_

\[T_{134}\,T_{123}=T_{124}\,T_{234}\,. \tag{5.2}\]

Both sides of this equation act on the lexicographically ordered Cartesian product \(\mathcal{U}_{12}\times\mathcal{U}_{23}\times\mathcal{U}_{34}\) and map to \(\mathcal{U}_{14}\). Let us introduce a boldface "position index" that indicates the first of two neighboring sets on which the respective map acts in a Cartesian product of more than two sets,

\[T_{134}\,T_{123,\mathbf{1}}=T_{124}\,T_{234,\mathbf{2}}\,.\]

Figure 1: From local trigon equations to the tetragon equation. Here, and correspondingly in following figures, we suppress the parameters of the maps \(L\).

These additional indices are redundant, as long as we keep the combinatorial indices and keep track of the numbered sets. Using complementary index notation, where \(\hat{k}\) stands for the complement of \(k\) in \(\{1,2,3,4\}\), the tetragon equation reads

\[T_{\hat{2}}\,T_{\hat{4},\mathbf{1}}=T_{\hat{3}}\,T_{\hat{1},\mathbf{2}}\,.\]

Writing

\[T_{ijk}(a_{ij},a_{jk})=:a_{ij}\bullet_{ijk}a_{jk}\,,\]

with \(a_{ij}\in\mathcal{U}_{ij}\), the equation takes the form

\[(a_{12}\bullet_{123}a_{23})\bullet_{134}a_{34}=a_{12}\bullet_{124}(a_{23}\bullet_{234}a_{34})\;,\]

which is a mixed associativity condition for the (in general different) binary operations \(\bullet_{123}\), \(\bullet_{124}\), \(\bullet_{134}\) and \(\bullet_{234}\).7

Footnote 7: Examples of such associativity relations for different binary operations are provided, for example, by nonsymmetric Poisson algebras (in a setting of vector spaces, with \(\times\) replaced by the corresponding tensor product) [42].

In the simplest case, where all the basic sets are equal and we are dealing with a single map \(T\), we may drop the combinatorial indices, but retain the boldface "position" indices. The tetragon equation is then

\[T\,T_{\mathbf{1}}=T\,T_{\mathbf{2}}\,.\]

Writing

\[T(a,b)=a\cdot b\,,\]

it becomes the associativity relation

\[(a\cdot b)\cdot c=a\cdot(b\cdot c) \tag{5.3}\]

for the binary operation \(\cdot\).

### Pentagon equation

For \(i,j,k=1,\ldots,5\), \(i<j<k\), let a map

\[L_{ijk}:\mathcal{U}_{ij}\times\mathcal{U}_{jk}\longrightarrow\mathcal{U}_{ik}\]

depend on a parameter from a set \(\mathcal{U}_{ijk}\). Let us assume that each of the _local tetragon equations_

\[L_{ikl}(u_{ikl})\,L_{ijk,\mathbf{1}}(u_{ijk})=L_{ijl}(v_{ijl})\,L_{jkl,\mathbf{2}}(v_{jkl})\,,\qquad 1\leq i<j<k<l\leq 5\,,\]

uniquely determines a map

\[T_{ijkl}:\mathcal{U}_{ijk}\times\mathcal{U}_{ikl}\rightarrow\mathcal{U}_{jkl}\times\mathcal{U}_{ijl}\]

via

\[(u_{ijk},u_{ikl})\mapsto(v_{jkl},v_{ijl})\,.\]

Then it follows that the maps \(T_{ijkl}\), \(1\leq i<j<k<l\leq 5\), have to satisfy the pentagon equation, see Fig. 2. Using complementary index notation, the _pentagon equation_ is

\[T_{\hat{1}}\,T_{\hat{3}}\,T_{\hat{5}}=T_{\hat{4}}\,T_{\hat{2}}\,. \tag{5.4}\]

Both sides of this equation act on \(\mathcal{U}_{123}\times\mathcal{U}_{134}\times\mathcal{U}_{145}\) and map to \(\mathcal{U}_{345}\times\mathcal{U}_{235}\times\mathcal{U}_{125}\). Representing the equivalence relation \(\thicksim\) in the diagram in Fig.
2 by a transposition map, \(\mathcal{P}(a,b)=(b,a)\), and reading off on which neighboring positions in a multiple Cartesian product a map acts, we get

\[T_{\hat{1},\mathbf{1}}\,T_{\hat{3},\mathbf{2}}\,T_{\hat{5},\mathbf{1}}=T_{\hat{4},\mathbf{2}}\,\mathcal{P}_{\mathbf{1}}\,T_{\hat{2},\mathbf{2}}\,.\]

In case of identical basic sets, then renamed to \(\mathcal{U}\), and a single map \(T\), the last equation takes the form

\[T_{\mathbf{1}}\,T_{\mathbf{2}}\,T_{\mathbf{1}}=T_{\mathbf{2}}\,\mathcal{P}_{\mathbf{1}}\,T_{\mathbf{2}}\,, \tag{5.5}\]

where the combinatorial indices have been dropped. Now all the information needed is provided by the position indices. The latter is our abbreviation of the following more familiar form of the pentagon equation

\[T_{\mathbf{12}}\,T_{\mathbf{23}}\,T_{\mathbf{12}}=T_{\mathbf{23}}\,\mathcal{P}_{\mathbf{12}}\,T_{\mathbf{23}}\,.\]

Figure 2: From local tetragon equations to the pentagon equation. Here \(\thicksim\) stands for an equivalence, which corresponds to an application of a transposition map \(\mathcal{P}\). Essentially, the left diagram is the pentagonal _Tamari lattice_, in its original form, as displayed in the second figure. Here the action of a map \(T\) corresponds to a right associativity map \((a\cdot b)\cdot c\mapsto a\cdot(b\cdot c)\). If this is an invertible map in a category with a binary operation \(\cdot\), the pentagon relation means that it is a _coherent associativity isomorphism_ in the sense of [38].

Writing

\[T(a,b)=(a*b,a\cdot b)\,, \tag{5.6}\]

the last restricted form of the pentagon equation is equivalent to the conditions (cf. [29, 28])

\[(a*b)*((a\cdot b)*c)=b*c\,,\quad(a*b)\cdot((a\cdot b)*c)=a*(b\cdot c)\,,\quad(a\cdot b)\cdot c=a\cdot(b\cdot c)\,, \tag{5.7}\]

for all \(a,b,c\in\mathcal{U}\). Set-theoretic solutions have been obtained in [67, 1, 23, 29, 24, 25, 20, 26, 5, 6, 44, 43, 30].

**Example 5.1**.: 1. If \(a*b=b\) for all \(a,b\in\mathcal{U}\), (5.7) reduces to the associativity condition for \(\cdot\). If \((\mathcal{U},\cdot)\) is a group and if \(T\) is invertible, then this is the only solution [29, 6]. It underlies one of the Kac-Takesaki operators on a group (see [60], for example). If \(\mathcal{U}\) is a subset of a group \((G,\cdot)\), not containing the identity element, there are more solutions [29]. 2. If \(a\cdot b=a\), the above system reduces to \((a*b)*(a*c)=b*c\). In a group, a solution is given by \(a*b=a^{-1}b\). This underlies another Kac-Takesaki operator on a group (see, e.g., [60, 5]).

In terms of the composition \(\hat{T}:=T\mathcal{P}\), (5.5) takes the form

\[\hat{T}_{\mathbf{12}}\,\hat{T}_{\mathbf{13}}\,\hat{T}_{\mathbf{23}}=\hat{T}_{\mathbf{23}}\,\hat{T}_{\mathbf{12}}\,. \tag{5.8}\]

**Remark 5.2**.: If \(T\) solves (5.5), then \(\hat{T}:=\mathcal{P}T\) satisfies the - relative to (5.8) - _reversed_ pentagon equation

\[\hat{T}_{\mathbf{12}}\,\hat{T}_{\mathbf{23}}=\hat{T}_{\mathbf{23}}\,\hat{T}_{\mathbf{13}}\,\hat{T}_{\mathbf{12}}\,. \tag{5.9}\]

If an invertible map \(\hat{T}\) satisfies (5.8), then its inverse \(\hat{T}^{-1}\) satisfies the above reversed equation. \(\square\)

**Remark 5.3**.: If \(\hat{T}\) is involutive, then it satisfies both (5.8) and (5.9). Such solutions have been explored in [6].
\(\square\)

#### 5.3.1 An example related to incidence geometry

Let \(\mathcal{V}\) be a vector space and \(L(a):\mathcal{V}\times\mathcal{V}\to\mathcal{V}\), \(a\in\mathcal{U}\), be such that the local tetragon equation \[L(b)\,L(a)_{\mathbf{1}}=L(b^{\prime})\,L(a^{\prime})_{\mathbf{2}}\] determines a unique map \(T:\mathcal{U}\times\mathcal{U}\to\mathcal{U}\times\mathcal{U}\) via \((a,b)\mapsto(a^{\prime},b^{\prime})\). Then this map satisfies the pentagon equation (5.5). Writing \(x\circ_{a}y:=L(a)(x,y)\), we can express the above equation as the parameter-dependent associativity condition \[(x\circ_{a}y)\circ_{b}z=x\circ_{b^{\prime}}(y\circ_{a^{\prime}}z)\,.\] An example is given by \(\mathcal{V}=\mathbb{R}^{n}\), \(\mathcal{U}=(0,1)\subset\mathbb{R}\), and \[x\circ_{a}y:=a\,x+(1-a)\,y\,.\] Then we obtain the solution \[T(a,b)=\Big{(}\frac{(1-a)\,b}{1-a\,b}\,,\,a\,b\Big{)} \tag{5.10}\] of the pentagon equation (also see [24]). It also shows up as a map of "polarizations" resulting from the evolution of tree-shaped matrix KP solitons [12]. The above binary operation can be interpreted as expressing the collinearity of the three points \(A\), \(B\) and \(A\circ_{a}B=a\,A+(1-a)\,B\). The generalized associativity condition can then be viewed as a \((6_{2},4_{3})\) configuration [19], which consists of 6 points and 4 lines, where each point is incident with exactly two lines and each line with exactly 3 points. See Fig. 3.

Figure 3: A \((6_{2},4_{3})\) configuration.

A Desargues configuration \((10_{3})\) consists of 10 points and 10 lines, each point (line) incident with 3 lines (points). It contains 5 configurations of type \((6_{2},4_{3})\), which thus constitute a pentagon. Also see [13] for related considerations.

### Hexagon equation

For \(i,j,k,l=1,\ldots,6\), \(i<j<k<l\), let a map \[L_{ijkl}:\mathcal{U}_{ijk}\times\mathcal{U}_{ikl}\longrightarrow\mathcal{U}_{jkl}\times\mathcal{U}_{ijl}\] depend on a parameter from a set \(\mathcal{U}_{ijkl}\). We assume that each of the _local pentagon equations_ \[L_{jklm}(u_{jklm})\,L_{ijlm}(u_{ijlm})\,L_{ijkl}(u_{ijkl})=L_{ijkm}(v_{ijkm})\,L_{iklm}(v_{iklm})\,,\qquad 1\leq i<j<k<l<m\leq 6\,,\] uniquely determines a map \[T_{ijklm}:\mathcal{U}_{ijkl}\times\mathcal{U}_{ijlm}\times\mathcal{U}_{jklm}\rightarrow\mathcal{U}_{iklm}\times\mathcal{U}_{ijkm}\] via \[(u_{ijkl},u_{ijlm},u_{jklm})\mapsto(v_{iklm},v_{ijkm})\,.\] Then the maps \(T_{ijklm}\), \(1\leq i<j<k<l<m\leq 6\), have to satisfy the hexagon equation. See Fig. 4.

Figure 4: From local pentagon equations to the hexagon equation.

Using complementary index notation, the _hexagon equation_ reads \[T_{\hat{2}}\,T_{\hat{4}}\,T_{\hat{6}}=T_{\hat{5}}\,T_{\hat{3}}\,T_{\hat{1}}\,. \tag{5.11}\] Both sides act on \(\mathcal{U}_{1234}\times\mathcal{U}_{1245}\times\mathcal{U}_{1256}\times\mathcal{U}_{2345}\times\mathcal{U}_{2356}\times\mathcal{U}_{3456}\) and map to \(\mathcal{U}_{1456}\times\mathcal{U}_{1346}\times\mathcal{U}_{1236}\). Employing transposition maps, according to the equivalences \(\thicksim\) appearing in Fig. 4, and introducing position indices, the hexagon equation takes the form \[T_{\hat{2},\mathbf{1}}\,\mathcal{P}_{\mathbf{3}}\,T_{\hat{4},\mathbf{2}}\,T_{\hat{6},\mathbf{1}}\,\mathcal{P}_{\mathbf{3}}=T_{\hat{5},\mathbf{2}}\,\mathcal{P}_{\mathbf{1}}\,T_{\hat{3},\mathbf{2}}\,T_{\hat{1},\mathbf{4}}\,.\] If all basic sets are equal and we are dealing with a single map \(T\), we may drop the combinatorial indices and only retain the boldface position indices: \[T_{\mathbf{1}}\,\mathcal{P}_{\mathbf{3}}\,T_{\mathbf{2}}\,T_{\mathbf{1}}\,\mathcal{P}_{\mathbf{3}}=T_{\mathbf{2}}\,\mathcal{P}_{\mathbf{1}}\,T_{\mathbf{2}}\,T_{\mathbf{4}}\,. \tag{5.12}\]

**Remark 5.4**.: Without introduction of an auxiliary structure, in its last form, the hexagon equation, and correspondingly all higher _even_ polygon equations, cannot be written without explicit appearance of transpositions. This is in contrast to the case of the pentagon equation (see (5.8)) and higher _odd_ polygon equations. \(\square\)

Expressing \(T\) in terms of two ternary operations, \[T(a,b,c)=\left(\langle a,b,c\rangle,[a,b,c]\right), \tag{5.13}\] the hexagon equation, acting on \((a,b,c,d,e,f)\in\mathcal{U}^{6}\), is equivalent to the following conditions, \[\langle\langle a,b,d\rangle,\langle[a,b,d],c,e\rangle,f\rangle=\langle b,c,\langle d,e,f\rangle\rangle\,,\] \[[\langle a,b,d\rangle,\langle[a,b,d],c,e\rangle,f]=\langle a,[b,c,\langle d,e,f\rangle],[d,e,f]\rangle\,,\] \[[[a,b,d],c,e]=[a,[b,c,\langle d,e,f\rangle],[d,e,f]]\,. \tag{5.14}\]

### Heptagon equation

Let maps \[L_{ijklm}:\;\mathcal{U}_{ijkl}\times\mathcal{U}_{ijlm}\times\mathcal{U}_{jklm}\rightarrow\mathcal{U}_{iklm}\times\mathcal{U}_{ijkm}\,,\] where \(1\leq i<j<k<l<m\leq 7\), be subject to local hexagon equations \[L_{iklmn}(u_{iklmn})\,L_{ijkmn}(u_{ijkmn})\,L_{ijklm}(u_{ijklm})=L_{ijkln}(v_{ijkln})\,L_{ijlmn}(v_{ijlmn})\,L_{jklmn}(v_{jklmn})\,,\] where \(1\leq i<j<k<l<m<n\leq 7\). If these equations uniquely determine maps \[T_{ijklmn}:\;\mathcal{U}_{ijklm}\times\mathcal{U}_{ijkmn}\times\mathcal{U}_{iklmn}\rightarrow\mathcal{U}_{jklmn}\times\mathcal{U}_{ijlmn}\times\mathcal{U}_{ijkln}\] via \[(u_{ijklm},u_{ijkmn},u_{iklmn})\mapsto(v_{jklmn},v_{ijlmn},v_{ijkln})\,,\] then, elaborating \[L_{14567}L_{13467}L_{12367}L_{13456}L_{12356}L_{12345}=L_{\overline{23}}L_{\overline{25}}L_{\overline{45}}L_{\overline{27}}L_{\overline{47}}L_{\overline{67}}\] in two different ways, using local hexagon equations, we find that the maps \(T_{ijklmn}\) have to satisfy the heptagon equation, which is \[T_{\hat{1}}\,T_{\hat{3}}\,T_{\hat{5}}\,T_{\hat{7}}=T_{\hat{6}}\,T_{\hat{4}}\,T_{\hat{2}}\,,\] in complementary index notation. Introducing position indices and transposition maps, we can express it as \[T_{\hat{1},\mathbf{1}}\,T_{\hat{3},\mathbf{3}}\,\mathcal{P}_{\mathbf{5}}\,\mathcal{P}_{\mathbf{2}}\,T_{\hat{5},\mathbf{3}}\,T_{\hat{7},\mathbf{1}}\,\mathcal{P}_{\mathbf{3}}=\mathcal{P}_{\mathbf{3}}\,T_{\hat{6},\mathbf{4}}\,\mathcal{P}_{\mathbf{3}}\,\mathcal{P}_{\mathbf{2}}\,\mathcal{P}_{\mathbf{1}}\,T_{\hat{4},\mathbf{3}}\,\mathcal{P}_{\mathbf{2}}\,\mathcal{P}_{\mathbf{3}}\,T_{\hat{2},\mathbf{4}}\,.\] Both sides act on \(\mathcal{U}_{12345}\times\mathcal{U}_{12356}\times\mathcal{U}_{12367}\times\mathcal{U}_{13456}\times\mathcal{U}_{13467}\times\mathcal{U}_{14567}\) and map to \(\mathcal{U}_{34567}\times\mathcal{U}_{23567}\times\mathcal{U}_{23457}\times\mathcal{U}_{12567}\times\mathcal{U}_{12457}\times\mathcal{U}_{12347}\). If all basic sets are equal and there is only a single map \(T\), all the information we need is in the position indices, so that the combinatorial indices can be dropped, \[T_{\mathbf{1}}\,T_{\mathbf{3}}\,\mathcal{P}_{\mathbf{5}}\,\mathcal{P}_{\mathbf{2}}\,T_{\mathbf{3}}\,T_{\mathbf{1}}\,\mathcal{P}_{\mathbf{3}}=\mathcal{P}_{\mathbf{3}}\,T_{\mathbf{4}}\,\mathcal{P}_{\mathbf{3}}\,\mathcal{P}_{\mathbf{2}}\,\mathcal{P}_{\mathbf{1}}\,T_{\mathbf{3}}\,\mathcal{P}_{\mathbf{2}}\,\mathcal{P}_{\mathbf{3}}\,T_{\mathbf{4}}\,. \tag{5.15}\]

Expressing \(T\) in terms of three ternary operations,8 \[T(a,b,c)=(\{a,b,c\},\langle a,b,c\rangle,[a,b,c])\,, \tag{5.16}\] the heptagon equation, evaluated on \((a,b,c,d,e,f)\in\mathcal{U}^{6}\), is equivalent to (5.14), supplemented by \[\{\{a,b,d\},\{[a,b,d],c,e\},\{\langle a,b,d\rangle,\langle[a,b,d],c,e\rangle,f\}\}=\{d,e,f\}\,,\] \[\langle\{a,b,d\},\{[a,b,d],c,e\},\{\langle a,b,d\rangle,\langle[a,b,d],c,e\rangle,f\}\rangle=\{b,c,\langle d,e,f\rangle\}\,,\] \[[\{a,b,d\},\{[a,b,d],c,e\},\{\langle a,b,d\rangle,\langle[a,b,d],c,e\rangle,f\}]=\{a,[b,c,\langle d,e,f\rangle],[d,e,f]\}\,. \tag{5.17}\]

Footnote 8: It should be clear from the context when \(\{\,,\,,\,\}\) denotes a ternary operation and not a set.

### Octagon equation

We consider maps \[T_{ijklmpq}:\,\mathcal{U}_{ijklmp}\times\mathcal{U}_{ijklpq}\times\mathcal{U}_{ijlmpq}\times\mathcal{U}_{jklmpq}\rightarrow\mathcal{U}_{iklmpq}\times\mathcal{U}_{ijkmpq}\times\mathcal{U}_{ijklmq}\,,\] where \(i<j<k<l<m<p<q\). The _octagon equation_ is \[T_{\hat{2}}\,T_{\hat{4}}\,T_{\hat{6}}\,T_{\hat{8}}=T_{\hat{7}}\,T_{\hat{5}}\,T_{\hat{3}}\,T_{\hat{1}}\,. \tag{5.18}\] It arises as the consistency condition of a system of local heptagon equations. Using position indices and transposition maps, it can be written as \[T_{\hat{2},\mathbf{1}}\,\mathcal{P}_{\mathbf{4}}\,\mathcal{P}_{\mathbf{5}}\,\mathcal{P}_{\mathbf{6}}\,T_{\hat{4},\mathbf{3}}\,\mathcal{P}_{\mathbf{6}}\,\mathcal{P}_{\mathbf{5}}\,\mathcal{P}_{\mathbf{2}}\,T_{\hat{6},\mathbf{3}}\,\mathcal{P}_{\mathbf{6}}\,T_{\hat{8},\mathbf{1}}\,\mathcal{P}_{\mathbf{4}}\,\mathcal{P}_{\mathbf{5}}\,\mathcal{P}_{\mathbf{6}}\,\mathcal{P}_{\mathbf{3}}=\mathcal{P}_{\mathbf{3}}\,T_{\hat{7},\mathbf{4}}\,\mathcal{P}_{\mathbf{3}}\,\mathcal{P}_{\mathbf{2}}\,\mathcal{P}_{\mathbf{1}}\,T_{\hat{5},\mathbf{3}}\,\mathcal{P}_{\mathbf{6}}\,\mathcal{P}_{\mathbf{2}}\,\mathcal{P}_{\mathbf{3}}\,T_{\hat{3},\mathbf{4}}\,T_{\hat{1},\mathbf{7}}\,. \tag{5.19}\] The two sides act on \(\mathcal{U}_{\overline{78}}\times\mathcal{U}_{\overline{58}}\times\mathcal{U}_{\overline{56}}\times\mathcal{U}_{\overline{38}}\times\mathcal{U}_{\overline{36}}\times\mathcal{U}_{\overline{34}}\times\mathcal{U}_{\overline{18}}\times\mathcal{U}_{\overline{16}}\times\mathcal{U}_{\overline{14}}\times\mathcal{U}_{\overline{12}}\) and map to \(\mathcal{U}_{\overline{23}}\times\mathcal{U}_{\overline{25}}\times\mathcal{U}_{\overline{27}}\times\mathcal{U}_{\overline{45}}\times\mathcal{U}_{\overline{47}}\times\mathcal{U}_{\overline{67}}\).
If all basic sets are equal and we are dealing with a single map \(T\), this reduces to \[T_{\mathbf{1}}\,\mathcal{P}_{\mathbf{4}}\,\mathcal{P}_{\mathbf{5}}\,\mathcal{P}_{\mathbf{6}}\,T_{\mathbf{3}}\,\mathcal{P}_{\mathbf{6}}\,\mathcal{P}_{\mathbf{5}}\,\mathcal{P}_{\mathbf{2}}\,T_{\mathbf{3}}\,\mathcal{P}_{\mathbf{6}}\,T_{\mathbf{1}}\,\mathcal{P}_{\mathbf{4}}\,\mathcal{P}_{\mathbf{5}}\,\mathcal{P}_{\mathbf{6}}\,\mathcal{P}_{\mathbf{3}}=\mathcal{P}_{\mathbf{3}}\,T_{\mathbf{4}}\,\mathcal{P}_{\mathbf{3}}\,\mathcal{P}_{\mathbf{2}}\,\mathcal{P}_{\mathbf{1}}\,T_{\mathbf{3}}\,\mathcal{P}_{\mathbf{6}}\,\mathcal{P}_{\mathbf{2}}\,\mathcal{P}_{\mathbf{3}}\,T_{\mathbf{4}}\,T_{\mathbf{7}}\,.\] Writing \[T(a,b,c,d)=(\{a,b,c,d\},\langle a,b,c,d\rangle,[a,b,c,d])\,,\] with three quaternary operations \(\mathcal{U}^{4}\rightarrow\mathcal{U}\), the octagon equation, acting on \((a,b,c,d,e,f,g,h,k,l)\), results in the following six conditions, \[\{\{a,b,d,g\},\{[a,b,d,g],c,e,h\},\{\langle a,b,d,g\rangle,\langle[a,b,d,g],c,e,h\rangle,f,k\},l\}=\{d,e,f,\{g,h,k,l\}\}\,,\] \[\langle\{a,b,d,g\},\{[a,b,d,g],c,e,h\},\{\langle a,b,d,g\rangle,\langle[a,b,d,g],c,e,h\rangle,f,k\},l\rangle=\{b,c,\langle d,e,f,\{g,h,k,l\}\rangle,\langle g,h,k,l\rangle\}\,,\] \[[\{a,b,d,g\},\{[a,b,d,g],c,e,h\},\{\langle a,b,d,g\rangle,\langle[a,b,d,g],c,e,h\rangle,f,k\},l]=\{a,[b,c,\langle d,e,f,\{g,h,k,l\}\rangle,\langle g,h,k,l\rangle],[d,e,f,\{g,h,k,l\}],[g,h,k,l]\}\,,\] \[\langle\langle a,b,d,g\rangle,\langle[a,b,d,g],c,e,h\rangle,f,k\rangle=\langle b,c,\langle d,e,f,\{g,h,k,l\}\rangle,[g,h,k,l]\rangle\,,\] \[[\langle a,b,d,g\rangle,\langle[a,b,d,g],c,e,h\rangle,f,k]=\langle a,[b,c,\langle d,e,f,\{g,h,k,l\}\rangle,\langle g,h,k,l\rangle],[d,e,f,\{g,h,k,l\}],[g,h,k,l]\rangle\,,\] \[[a,[b,c,\langle d,e,f,\{g,h,k,l\}\rangle,\langle g,h,k,l\rangle],[d,e,f,\{g,h,k,l\}],[g,h,k,l]]=[[a,b,d,g],c,e,h]\,, \tag{5.20}\] for all \(a,b,c,d,e,f,g,h,k,l\in\mathcal{U}\).

## 6 Examples of dual polygon equations

In this section we elaborate dual polygon equations up to the dual 8-gon equation.

### Dual tetragon equation

For \(i,j=1,2,3,4\), \(i<j\), let \(L_{ij}:\mathcal{U}_{i}\to\mathcal{U}_{j}\) carry a parameter from a set \(\mathcal{U}_{ij}\). Let each of the local trigon equations \[L_{ik}(u_{ik})=L_{jk}(v_{jk})\,L_{ij}(v_{ij})\,,\qquad i<j<k\,,\] uniquely determine a map \[\tilde{T}_{ijk}:\mathcal{U}_{ik}\longrightarrow\mathcal{U}_{ij}\times\mathcal{U}_{jk}\,,\quad u_{ik}\mapsto(v_{ij},v_{jk})\,.\] Then the maps \(\tilde{T}_{ijk}\) have to satisfy a consistency condition, which is obtained by reversing the arrows in Fig. 1. This means that the maps \(\tilde{T}_{ijk}\) have to satisfy the dual tetragon equation, \[\tilde{T}_{123,\mathbf{2}}\,\tilde{T}_{134}=\tilde{T}_{234,\mathbf{1}}\,\tilde{T}_{124}\,, \tag{6.1}\] which acts on \(\mathcal{U}_{14}\). If all basic sets are equal and there is only a single map \(\tilde{T}\), (6.1) reduces to \[\tilde{T}_{\mathbf{2}}\,\tilde{T}=\tilde{T}_{\mathbf{1}}\,\tilde{T}\,. \tag{6.2}\] Writing \(\tilde{T}=(\tilde{T}_{1},\tilde{T}_{2})\), this amounts to idempotency and commutativity of the maps \(\tilde{T}_{i}:\mathcal{U}\to\mathcal{U}\), \(i=1,2\).

**Remark 6.1**.: In a framework of vector spaces, with the Cartesian product replaced by the corresponding tensor product, and linear maps, (6.2) means that \(\tilde{T}\) is _coassociative_.
\(\Box\) ### Dual pentagon equation Using complementary index notation, the _dual pentagon equation_ is \[\tilde{T}_{\tilde{5}}\,\tilde{T}_{\tilde{3}}\,\tilde{T}_{\tilde{1}}=\tilde{T} _{\tilde{2}}\,\tilde{T}_{\tilde{4}}\,, \tag{6.3}\] for maps \[\tilde{T}_{ijkl}:\ \mathcal{U}_{ijl}\times\mathcal{U}_{jkl}\to\mathcal{U}_{ikl} \times\mathcal{U}_{ijk}\,,\qquad 1\leq i<j<k<l\leq 5\,.\] Letting it act on \(\mathcal{U}_{125}\times\mathcal{U}_{235}\times\mathcal{U}_{345}\), this equation takes the form \[\tilde{T}_{\tilde{5},\mathbf{2}}\,\tilde{T}_{\tilde{3},\mathbf{1}}\,\tilde{T}_{ \tilde{1},\mathbf{2}}=\tilde{T}_{\tilde{2},\mathbf{1}}\,\mathcal{P}_{\mathbf{2 }}\,\tilde{T}_{\tilde{4},\mathbf{1}}\,.\] If all the basic sets are the same and there is only a single map \(\tilde{T}\), this is simply \[\tilde{T}_{\mathbf{2}}\,\tilde{T}_{\mathbf{1}}\,\tilde{T}_{\mathbf{2}}=\tilde{ T}_{\mathbf{1}}\,\mathcal{P}_{\mathbf{2}}\,\tilde{T}_{\mathbf{1}}\,. \tag{6.4}\] Writing \(\tilde{T}(a,b)=:(a\cdot b,a*b)\), the last equation is equivalent to \[a\cdot(b\cdot c)=(a\cdot b)\cdot c\,,\quad(a*(b\cdot c))\cdot(b*c)=(a\cdot b)* c\,,\quad(a*(b\cdot c))*(b*c)=a*b\,, \tag{6.5}\] for all \(a,b,c\in\mathcal{U}\). ### Dual hexagon equation This is the equation \[\tilde{T}_{\tilde{6}}\,\tilde{T}_{\tilde{4}}\,\tilde{T}_{\tilde{2}}=\tilde{T} _{\tilde{1}}\,\tilde{T}_{\tilde{3}}\,\tilde{T}_{\tilde{5}} \tag{6.6}\] for maps \[\tilde{T}_{ijklm}:\mathcal{U}_{ijkm}\times\mathcal{U}_{iklm}\to\mathcal{U}_{ jklm}\times\mathcal{U}_{ijlm}\times\mathcal{U}_{ijkl}\,,\qquad 1\leq i<j<k<l<m\leq 6\,.\] Introducing position indices and transposition maps, it takes the form \[\tilde{T}_{\tilde{6},\mathbf{1}}\,\tilde{T}_{\tilde{4},\mathbf{2}}\, \mathcal{P}_{\mathbf{3}}\,\tilde{T}_{\tilde{2},\mathbf{1}}=\mathcal{P}_{ \mathbf{3}}\,\tilde{T}_{\tilde{1},\mathbf{4}}\,\tilde{T}_{\tilde{3},\mathbf{2 }}\,\mathcal{P}_{\mathbf{1}}\,\tilde{T}_{\tilde{5},\mathbf{2}}\,. \tag{6.7}\] If all the basic sets are the same and there is only a single map \(\tilde{T}\), this is \[\tilde{T}_{\mathbf{1}}\,\tilde{T}_{\mathbf{2}}\,\mathcal{P}_{ \mathbf{3}}\,\tilde{T}_{\mathbf{1}}=\mathcal{P}_{\mathbf{3}}\,\tilde{T}_{ \mathbf{4}}\,\tilde{T}_{\mathbf{2}}\,\mathcal{P}_{\mathbf{1}}\,\tilde{T}_{ \mathbf{2}}\,, \tag{6.8}\] which already appeared as a 4-cocycle condition in [56]. Writing \[\tilde{T}(a,b)=:(a*b,a\cdot b,a\diamond b)\,,\] (6.8) imposes (5.7) on the first two binary operations, and requires in addition \[(a\diamond(b\cdot c))*(b\diamond c)=(a*b)\diamond((a\cdot b)*c)\,,\] \[(a\diamond(b\cdot c))\cdot(b\diamond c)=(a\cdot b)\diamond c\,,\] \[(a\diamond(b\cdot c))\diamond(b\diamond c)=a\diamond b\,, \tag{6.9}\] for all \(a,b,c\in\mathcal{U}\). 
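The passage from the position-index form of a dual polygon equation to such component conditions is a direct evaluation. Since this step recurs, we spell out the computation once, for the dual pentagon equation (6.4); the derivations of (6.9), and later of (6.12) and (6.15), are analogous. (This worked computation is our addition; maps are composed from right to left.) Acting on \((a,b,c)\in\mathcal{U}^{3}\), with \(\tilde{T}(a,b)=(a\cdot b,a*b)\), \[\tilde{T}_{\mathbf{2}}\,\tilde{T}_{\mathbf{1}}\,\tilde{T}_{\mathbf{2}}(a,b,c)=\tilde{T}_{\mathbf{2}}\,\tilde{T}_{\mathbf{1}}(a,b\cdot c,b*c)=\tilde{T}_{\mathbf{2}}(a\cdot(b\cdot c),a*(b\cdot c),b*c)=\big{(}a\cdot(b\cdot c),(a*(b\cdot c))\cdot(b*c),(a*(b\cdot c))*(b*c)\big{)}\,,\] \[\tilde{T}_{\mathbf{1}}\,\mathcal{P}_{\mathbf{2}}\,\tilde{T}_{\mathbf{1}}(a,b,c)=\tilde{T}_{\mathbf{1}}\,\mathcal{P}_{\mathbf{2}}(a\cdot b,a*b,c)=\tilde{T}_{\mathbf{1}}(a\cdot b,c,a*b)=\big{(}(a\cdot b)\cdot c,(a\cdot b)*c,a*b\big{)}\,.\] Comparing the three components on both sides yields precisely the three conditions (6.5).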
**Remark 6.2**.: In [27] (see (3.8) therein) the dual hexagon equation appeared, as a realization of a Pachner move of type (3,3) in four dimensions, in the form \[(QP)_{\mathbf{1}}Q_{\mathbf{2}}P_{\mathbf{1}}Q_{\mathbf{2}}=P_{\mathbf{3}}(QP)_{\mathbf{4}}Q_{\mathbf{2}}P_{\mathbf{3}}Q_{\mathbf{1}}\,.\] Indeed, setting \(Q=\tilde{T}P\), this reads \[\tilde{T}_{\mathbf{1}}\tilde{T}_{\mathbf{2}}P_{\mathbf{2}}P_{\mathbf{1}}\tilde{T}_{\mathbf{2}}P_{\mathbf{2}}=P_{\mathbf{3}}\tilde{T}_{\mathbf{4}}\tilde{T}_{\mathbf{2}}P_{\mathbf{2}}P_{\mathbf{3}}\tilde{T}_{\mathbf{1}}P_{\mathbf{1}}\,.\] Writing it as \[\tilde{T}_{\mathbf{1}}\tilde{T}_{\mathbf{2}}P_{\mathbf{3}}(P_{\mathbf{3}}P_{\mathbf{2}}P_{\mathbf{1}})\tilde{T}_{\mathbf{2}}P_{\mathbf{2}}=P_{\mathbf{3}}\tilde{T}_{\mathbf{4}}\tilde{T}_{\mathbf{2}}P_{\mathbf{1}}(P_{\mathbf{1}}P_{\mathbf{2}}P_{\mathbf{3}})\tilde{T}_{\mathbf{1}}P_{\mathbf{1}}\,,\] and using the identities \(P_{\mathbf{3}}P_{\mathbf{2}}P_{\mathbf{1}}\tilde{T}_{\mathbf{2}}=\tilde{T}_{\mathbf{1}}P_{\mathbf{2}}P_{\mathbf{1}}\), \(P_{\mathbf{1}}P_{\mathbf{2}}P_{\mathbf{3}}\tilde{T}_{\mathbf{1}}=\tilde{T}_{\mathbf{2}}P_{\mathbf{1}}P_{\mathbf{2}}\), we obtain \[\tilde{T}_{\mathbf{1}}\tilde{T}_{\mathbf{2}}P_{\mathbf{3}}\tilde{T}_{\mathbf{1}}P_{\mathbf{2}}P_{\mathbf{1}}P_{\mathbf{2}}=P_{\mathbf{3}}\tilde{T}_{\mathbf{4}}\tilde{T}_{\mathbf{2}}P_{\mathbf{1}}\tilde{T}_{\mathbf{2}}P_{\mathbf{1}}P_{\mathbf{2}}P_{\mathbf{1}}\,,\] which, by use of the braid equation \(P_{\mathbf{2}}P_{\mathbf{1}}P_{\mathbf{2}}=P_{\mathbf{1}}P_{\mathbf{2}}P_{\mathbf{1}}\), is equivalent to (6.8).

### Dual heptagon equation

The _dual heptagon equation_ is \[\mathcal{P}_{\mathbf{3}}\,\tilde{T}_{\tilde{7},\mathbf{4}}\,\tilde{T}_{\tilde{5},\mathbf{2}}\,\mathcal{P}_{\mathbf{4}}\,\mathcal{P}_{\mathbf{1}}\,\tilde{T}_{\tilde{3},\mathbf{2}}\,\tilde{T}_{\tilde{1},\mathbf{4}}=\tilde{T}_{\tilde{2},\mathbf{1}}\,\mathcal{P}_{\mathbf{3}}\,\mathcal{P}_{\mathbf{4}}\,\tilde{T}_{\tilde{4},\mathbf{2}}\,\mathcal{P}_{\mathbf{5}}\,\mathcal{P}_{\mathbf{4}}\,\mathcal{P}_{\mathbf{3}}\,\tilde{T}_{\tilde{6},\mathbf{1}}\,\mathcal{P}_{\mathbf{3}}\,, \tag{6.10}\] with maps \(\tilde{T}_{ijklmp}:\mathcal{U}_{ijklp}\times\mathcal{U}_{ijlmp}\times\mathcal{U}_{jklmp}\to\mathcal{U}_{iklmp}\times\mathcal{U}_{ijkmp}\times\mathcal{U}_{ijklm}\), \(i<j<k<l<m<p\). Both sides act on \(\mathcal{U}_{12347}\times\mathcal{U}_{12457}\times\mathcal{U}_{12567}\times\mathcal{U}_{23457}\times\mathcal{U}_{23567}\times\mathcal{U}_{34567}\) and map to \(\mathcal{U}_{14567}\times\mathcal{U}_{13467}\times\mathcal{U}_{13456}\times\mathcal{U}_{12367}\times\mathcal{U}_{12356}\times\mathcal{U}_{12345}\). If all basic sets are the same and there is only a single map \(\tilde{T}\), writing \[\tilde{T}(a,b,c)=(\{a,b,c\},\langle a,b,c\rangle,[a,b,c])\,, \tag{6.11}\] (6.10) is equivalent to the following conditions for the three ternary operations, \[\{b,c,\{d,e,f\}\}=\{\{a,b,d\},\{\langle a,b,d\rangle,c,e\},f\}\,,\] \[\{a,\langle b,c,\{d,e,f\}\rangle,\langle d,e,f\rangle\}=\langle\{a,b,d\},\{\langle a,b,d\rangle,c,e\},f\rangle\,,\] \[\langle a,\langle b,c,\{d,e,f\}\rangle,\langle d,e,f\rangle\rangle=\langle\langle a,b,d\rangle,c,e\rangle\,,\] \[\{[a,\langle b,c,\{d,e,f\}\rangle,\langle d,e,f\rangle],[b,c,\{d,e,f\}],[d,e,f]\}=[\{a,b,d\},\{\langle a,b,d\rangle,c,e\},f]\,,\] \[\langle[a,\langle b,c,\{d,e,f\}\rangle,\langle d,e,f\rangle],[b,c,\{d,e,f\}],[d,e,f]\rangle=[\langle a,b,d\rangle,c,e]\,,\] \[[[a,\langle b,c,\{d,e,f\}\rangle,\langle d,e,f\rangle],[b,c,\{d,e,f\}],[d,e,f]]=[a,b,d]\,, \tag{6.12}\] for all \(a,b,c,d,e,f\in\mathcal{U}\).

### Dual octagon equation

The _dual octagon equation_ is \[\mathcal{P}_{\mathbf{4}}\,\mathcal{P}_{\mathbf{7}}\,\mathcal{P}_{\mathbf{5}}\,\mathcal{P}_{\mathbf{6}}\,\tilde{T}_{\tilde{8},\mathbf{7}}\,\mathcal{P}_{\mathbf{3}}\,\tilde{T}_{\tilde{6},\mathbf{4}}\,\mathcal{P}_{\mathbf{6}}\,\mathcal{P}_{\mathbf{3}}\,\mathcal{P}_{\mathbf{2}}\,\tilde{T}_{\tilde{4},\mathbf{3}}\,\mathcal{P}_{\mathbf{1}}\,\mathcal{P}_{\mathbf{2}}\,\mathcal{P}_{\mathbf{3}}\,\tilde{T}_{\tilde{2},\mathbf{4}}=\tilde{T}_{\tilde{1},\mathbf{1}}\,\tilde{T}_{\tilde{3},\mathbf{3}}\,\mathcal{P}_{\mathbf{5}}\,\mathcal{P}_{\mathbf{6}}\,\mathcal{P}_{\mathbf{2}}\,\tilde{T}_{\tilde{5},\mathbf{3}}\,\mathcal{P}_{\mathbf{6}}\,\mathcal{P}_{\mathbf{5}}\,\mathcal{P}_{\mathbf{4}}\,\tilde{T}_{\tilde{7},\mathbf{1}}\,\mathcal{P}_{\mathbf{3}}, \tag{6.13}\] for maps \(\tilde{T}_{ijklmpq}:\,\mathcal{U}_{ijklmq}\times\mathcal{U}_{ijkmpq}\times\mathcal{U}_{iklmpq}\to\mathcal{U}_{jklmpq}\times\mathcal{U}_{ijlmpq}\times\mathcal{U}_{ijklpq}\times\mathcal{U}_{ijklmp}\), where \(i<j<k<l<m<p<q\). Both sides of the equation act on \(\mathcal{U}_{\overline{67}}\times\mathcal{U}_{\overline{47}}\times\mathcal{U}_{\overline{45}}\times\mathcal{U}_{\overline{27}}\times\mathcal{U}_{\overline{25}}\times\mathcal{U}_{\overline{23}}\). If all basic sets are the same and there is only a single map \(\tilde{T}\), writing \[\tilde{T}(a,b,c)=(\{a,b,c\},\langle a,b,c\rangle,[a,b,c],|a,b,c|)\,, \tag{6.14}\] with four ternary operations \(\mathcal{U}^{3}\to\mathcal{U}\), the dual octagon equation, acting on \((a,b,c,d,e,f)\), results in (5.14), (5.17), and the following conditions, \[\{|a,[b,c,\langle d,e,f\rangle],[d,e,f]|,|b,c,\langle d,e,f\rangle|,|d,e,f|\}=|\{a,b,d\},\{[a,b,d],c,e\},\{\langle a,b,d\rangle,\langle[a,b,d],c,e\rangle,f\}|\,,\] \[\langle|a,[b,c,\langle d,e,f\rangle],[d,e,f]|,|b,c,\langle d,e,f\rangle|,|d,e,f|\rangle=|\langle a,b,d\rangle,\langle[a,b,d],c,e\rangle,f|\,,\] \[[|a,[b,c,\langle d,e,f\rangle],[d,e,f]|,|b,c,\langle d,e,f\rangle|,|d,e,f|]=|[a,b,d],c,e|\,,\] \[|\,|a,[b,c,\langle d,e,f\rangle],[d,e,f]|,|b,c,\langle d,e,f\rangle|,|d,e,f|\,|=|a,b,d|\,. \tag{6.15}\]

## 7 Relations between solutions of neighboring polygon equations

By a (dual) polygon map we mean a solution of a (dual) polygon equation. In this section we consider the case where all the basic sets appearing in a multiple Cartesian product, on which a (dual) polygon map acts, are the same set \(\mathcal{U}\). First, we formulate special cases of Theorems in Section 4. Though they are actually corollaries of the latter theorems, because of their relevance they also deserve to be called theorems.

**Theorem 7.1**.: _Let \(\tilde{T}^{(N+1)}\) be a dual \((N+1)\)-gon map and \(T\) this map with the last component of its codomain cut off. Then \(T\) is an \(N\)-gon map. \(\square\)_

**Theorem 7.2**.: _For even \(N\), let \(T^{(N+1)}\) be an \((N+1)\)-gon map and \(T\) this map with the first component of its codomain cut off. Then \(T\) is an \(N\)-gon map. \(\square\)_

**Theorem 7.3**.: _For odd \(N\), let \(\tilde{T}^{(N+1)}\) be a dual \((N+1)\)-gon map and \(\tilde{T}\) this map with the first component of its codomain cut off. Then \(\tilde{T}\) is a dual \(N\)-gon map. \(\square\)_

In the following subsections we provide examples for these results and derive more powerful results for the (dual) polygon equations up to the (dual) octagon equation.

### Degenerate dual \((N+1)\)-gon maps from (dual) \(N\)-gon maps

For any (not necessarily trigon or dual trigon) map \(T:\mathcal{U}\to\mathcal{U}\), \(T^{(4)}(a,b):=T(a)\) and also \(T^{(4)}(a,b):=T(b)\) are (degenerate) tetragon maps. Furthermore, we have the following.
**Proposition 7.4**.: _Let \(\tilde{T}^{(4)}\) be a dual tetragon map. Then_ \[T^{(5)}(a,b):=\tilde{T}^{(4)}(b)\] _is a (degenerate) pentagon map and_ \[\tilde{T}^{(5)}(a,b):=\tilde{T}^{(4)}(a)\] _is a (degenerate) dual pentagon map._ _Proof._ This is quickly verified. \(\square\) **Proposition 7.5**.: _Let \(T^{(5)}\) be a pentagon map. Then_ \[T^{(6)}(a,b,c):=T^{(5)}(a,b)\] _is a (degenerate) hexagon map._ _Proof._ If \(\langle a,b,c\rangle=a*b\) and \([a,b,c]=a\cdot b\), (5.14) becomes (5.7). \(\square\) **Proposition 7.6**.: _Let \(\tilde{T}^{(5)}\) be a dual pentagon map. Then_ \[T^{(6)}(a,b,c):=\tilde{T}^{(5)}(b,c)\] _is a (degenerate) hexagon map._ _Proof._ If \(\langle a,b,c\rangle=b\cdot c\) and \([a,b,c]=b*c\), then (5.14) becomes (6.5). \(\square\) **Proposition 7.7**.: _Let \(\tilde{T}^{(6)}\) be a dual hexagon map. Then_ \[T^{(7)}(a,b,c):=\tilde{T}^{(6)}(b,c)\] _is a (degenerate) heptagon map._ _Proof._ Writing \(T^{(7)}(a,b,c)=(b*c,b\cdot c,b\circ c)\), the first two of equations (5.17) become the first two of (5.7). The last two of (5.17) become the last two of (6.9). The remaining two of equations (5.17) take the form \[(a*b)\diamond((a\cdot b)*c)=a\cdot(b\cdot c)\,,\quad(a\cdot b)\cdot c=(a \diamond(b\cdot c))*(b\diamond c)\,.\] But this system is equivalent to the third of (5.7) and the first of (6.9). \(\square\) **Proposition 7.8**.: _Let \(\tilde{T}^{(6)}\) be a dual hexagon map. Then_ \[\tilde{T}^{(7)}(a,b,c):=\tilde{T}^{(6)}(a,b)\] _is a (degenerate) dual heptagon map._ Proof.: Writing \(\tilde{T}^{(7)}(a,b,c)=(a*b,a\cdot b,a\diamond b)\), (6.12) becomes (6.5) and (6.9). **Proposition 7.9**.: _Let \(T^{(7)}\) be a heptagon map. Then_ \[T^{(8)}(a,b,c,d):=T^{(7)}(a,b,c)\] _is a (degenerate) octagon map._ Proof.: If \(T\) does not depend on the last argument, setting \[\left\{a,b,c,d\right\}=:\left\{a,b,c\right\},\quad\left\langle a,b,c,d\right\rangle =:\left\langle a,b,c\right\rangle,\quad\left[a,b,c,d\right]=:\left[a,b,c\right],\] the first three of conditions (5.20) reduce to (5.17) and the last three to (5.14). The resulting six conditions are those for \(T^{(7)}\) to be a heptagon map. More generally, we obtain the following results. **Theorem 7.10**.: _For \(n\in\mathbb{N}\), \(n>1\), let \(T^{(2n-1)}\) be a \((2n-1)\)-gon map. Then_ \[T^{(2n)}(a_{1},\dots,a_{n}):=T^{(2n-1)}(a_{1},\dots,a_{n-1})\] _is a (degenerate) \(2n\)-gon map._ Proof.: This is a special case of Theorem 4.2. **Theorem 7.11**.: _For \(n\in\mathbb{N}\), \(n>1\), let \(\tilde{T}^{(2n-1)}\) be a dual \((2n-1)\)-gon map. Then_ \[T^{(2n)}(a_{1},\dots,a_{n}):=\tilde{T}^{(2n-1)}(a_{2},\dots,a_{n})\] _is a (degenerate) \(2n\)-gon map._ Proof.: This is a special case of Theorem 4.5. **Theorem 7.12**.: _For \(n\in\mathbb{N}\), \(n>1\), let \(\tilde{T}^{(2n)}\) be a dual \(2n\)-gon map. Then_ \[T^{(2n+1)}(a_{1},\dots,a_{n}):=\tilde{T}^{(2n)}(a_{2},\dots,a_{n})\] _is a (degenerate) \((2n+1)\)-gon map._ Proof.: This is a special case of Theorem 4.5. **Theorem 7.13**.: _For \(n\in\mathbb{N}\), \(n>1\), let \(\tilde{T}^{(2n)}\) be a dual \(2n\)-gon map. Then_ \[\tilde{T}^{(2n+1)}(a_{1},\dots,a_{n}):=\tilde{T}^{(2n)}(a_{1},\dots,a_{n-1})\] _is a (degenerate) dual \((2n+1)\)-gon map._ Proof.: This is a special case of Theorem 4.3. ### Further relations between neighboring polygon maps **Proposition 7.14**.: _Let \(T_{i}^{(3)}\), \(i=1,2\), be trigon maps. Then_ \[\tilde{T}^{(4)}(a)=(T_{1}^{(3)}(a),T_{2}^{(3)}(a))\] _is a dual tetragon map iff the two trigon maps commute._ Proof.: See Section 6.1. 
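The degenerate constructions of the preceding subsection are also straightforward to test numerically. The following is a minimal sketch (our addition, not part of the original text); it checks Proposition 7.5 by verifying the hexagon conditions (5.14) for the ternary operations built from the concrete pentagon map (5.10) on \(\mathcal{U}=(0,1)\). All function names are ours.

```python
import random

# Pentagon map (5.10) on U = (0,1):  T(a,b) = (a*b, a.b)  with
#   a*b = (1-a)b/(1-ab)   and   a.b = ab .
def star(a, b):
    return (1 - a) * b / (1 - a * b)

def dot(a, b):
    return a * b

# Degenerate hexagon map of Proposition 7.5: ternary operations that
# ignore the last argument,  <a,b,c> := a*b  and  [a,b,c] := a.b .
def ang(a, b, c):        # angle-bracket operation
    return star(a, b)

def sq(a, b, c):         # square-bracket operation
    return dot(a, b)

def hexagon_conditions(a, b, c, d, e, f, tol=1e-9):
    """The three hexagon conditions (5.14), as residuals."""
    c1 = ang(ang(a, b, d), ang(sq(a, b, d), c, e), f) - ang(b, c, ang(d, e, f))
    c2 = sq(ang(a, b, d), ang(sq(a, b, d), c, e), f) - ang(a, sq(b, c, ang(d, e, f)), sq(d, e, f))
    c3 = sq(sq(a, b, d), c, e) - sq(a, sq(b, c, ang(d, e, f)), sq(d, e, f))
    return all(abs(x) < tol for x in (c1, c2, c3))

random.seed(0)
assert all(hexagon_conditions(*(random.uniform(0.05, 0.95) for _ in range(6)))
           for _ in range(1000))
print("(5.14) holds for the degenerate hexagon map built from (5.10)")
```

Swapping the two operations so that they ignore the first argument instead yields the analogous check of Proposition 7.6 for a dual pentagon map.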
**Proposition 7.15**.: _For a map \(T^{(5)}:\mathcal{U}\times\mathcal{U}\to\mathcal{U}\times\mathcal{U}\), let us consider_ \[T^{(5)}(a,b)=:(a*b,T^{(4)}(a,b))\,,\] _with a binary operation \(*\). The following conditions are equivalent. (1) \(T^{(5)}\) is a pentagon map. (2) \(T^{(4)}\) is a tetragon map and, with \(a\cdot b:=T^{(4)}(a,b)\), the binary operations \(\cdot\) and \(*\) satisfy_ \[(a*b)*((a\cdot b)*c)=b*c\,,\quad(a*b)\cdot((a\cdot b)*c)=a*(b\cdot c)\,,\] _for all \(a,b,c\in\mathcal{U}\)._ Proof.: (5.7) and (5.3) show that the conditions for \(T^{(5)}\) to be a pentagon map are equivalent to \(T^{(4)}\) being a tetragon map and the two compatibility conditions for the two binary operations. **Corollary 7.16**.: _Let \(T^{(4)}\) be a tetragon map. Then_ \[T^{(5)}(a,b)=(b,T^{(4)}(a,b))\] _is a pentagon map._ Proof.: Setting \(a*b=b\) solves the two compatibility equations in condition (2) of the preceding proposition. **Corollary 7.17**.: _Let \(T^{(4)}\) be a tetragon map and \(u\) a fixed element of \(\mathcal{U}\). Then_ \[T^{(5)}(a,b)=(u,T^{(4)}(a,b))\] _is a pentagon map iff \(T^{(4)}(u,u)=u\)._ Proof.: Setting \(a*b=u\) for all \(a,b\in\mathcal{U}\), solves the first of the two compatibility equations in condition (2) of the preceding proposition and reduces the second to \(u\cdot u=u\). **Proposition 7.18**.: _Let \(T^{(4)}\) be a tetragon map. (1) The map_ \[\tilde{T}^{(5)}(a,b)=(T^{(4)}(a,b),a)\] _is a dual pentagon map. (2) If \(u\) is a fixed element of \(\mathcal{U}\), then_ \[\tilde{T}^{(5)}(a,b)=(T^{(4)}(a,b),u)\] _is a dual pentagon map iff \(T^{(4)}(u,u)=u\)._ Proof.: This is easily verified. **Proposition 7.19**.: _Let_ \[\tilde{T}^{(6)}(a,b)=\left(T^{(5)}(a,b),a\diamond b\right),\] _with a binary operation \(\diamond\). The following conditions are equivalent. (1) \(\tilde{T}^{(6)}\) is a dual hexagon map. (2) \(T^{(5)}\) is a pentagon map and, expressed as in (5.6) (so that (5.7) holds), it satisfies (6.9)._ Proof.: This is easily verified. **Example 7.20**.: Let \(\cdot\) be commutative and \(a\diamond b:=b*a\). Then, as a consequence of (5.7), the additional dual hexagon map conditions (6.9) are reduced to the single condition \[\left((a\cdot b)*c\right)*\left(a*b\right)=\left((b\cdot c)*a\right)*\left(c* b\right).\] This holds, for example, if \(\mathcal{U}=(0,1)\subset\mathbb{R}\) and \(a*b:=(1-a)b/(1-ab)\). Hence, using the pentagon map (5.10), the map \[(a,b)\longmapsto\Big{(}\frac{(1-a)b}{1-ab},ab,\frac{a(1-b)}{1-ab}\Big{)}\] solves the dual hexagon equation, also see [27]. This map actually shows up in an identity for the _Rogers dilogarithm_ function \(L(a)\)[50]. \(S(a)=e^{\lambda\,L(a)}\), with an arbitrary constant \(\lambda\neq 0\), satisfies the pentagon relation9 Footnote 9: If we order the parameters as \((a_{0},\ldots,a_{4}):=(a,1-ab,b,(1-b)/(1-ab),(1-a)/(1-ab))\), they are given by the recursion relation \(a_{n-1}a_{n+1}=1-a_{n}\) (a special \(Y\)-system), which has \(\mathbb{Z}_{5}\) symmetry \(a_{n+5}=a_{n}\)[18, 16, 64]. 
\[S(b)\,S(a)=S\Big{(}\frac{a(1-b)}{1-ab}\Big{)}\,S(ab)\,S\Big{(}\frac{(1-a)b}{1-ab}\Big{)}\,.\] Kashaev called a solution \(\hat{T}:I\to\mathrm{End}(\mathcal{U}\otimes\mathcal{U})\), where \(I\) is the open unit interval \((0,1)\subset\mathbb{R}\) and \(\mathcal{U}\) a vector space, a matrix or operator dilogarithm if it satisfies the local pentagon equation \[\hat{T}_{\mathbf{23}}(a)\,\hat{T}_{\mathbf{12}}(b)=\hat{T}_{\mathbf{12}}\Big{(}\frac{a(1-b)}{1-ab}\Big{)}\,\hat{T}_{\mathbf{13}}(ab)\,\hat{T}_{\mathbf{23}}\Big{(}\frac{(1-a)b}{1-ab}\Big{)}\] [24, 25].

**Corollary 7.21**.: _If \(T^{(5)}\) is a pentagon map, then_ \[\tilde{T}^{(6)}(a,b):=(T^{(5)}(a,b),a)\] _is a dual hexagon map._

Proof.: Setting \(a\diamond b:=a\) solves the three equations (6.9).

**Corollary 7.22**.: _Let \(T^{(5)}\) be a pentagon map and \(u\in\mathcal{U}\) a fixed element. Then_ \[\tilde{T}^{(6)}(a,b):=(T^{(5)}(a,b),u)\] _is a dual hexagon map iff \(T^{(5)}(u,u)=(u,u)\)._

Proof.: Setting \(a\diamond b:=u\) reduces the three equations (6.9) to \(u*u=u\) and \(u\cdot u=u\).

**Proposition 7.23**.: _Let \(\tilde{T}^{(5)}\) be a dual pentagon map. (1) The map_ \[\tilde{T}^{(6)}(a,b)=(b,\tilde{T}^{(5)}(a,b))\] _is a dual hexagon map. (2) If \(u\) is a fixed element of \(\mathcal{U}\), then_ \[\tilde{T}^{(6)}(a,b)=(u,\tilde{T}^{(5)}(a,b))\] _is a dual hexagon map iff \(\tilde{T}^{(5)}(u,u)=(u,u)\)._

Proof.: This is also easily verified using results of Section 6.

**Proposition 7.24**.: _Let_ \[T^{(7)}(a,b,c)=\left(\{a,b,c\},T^{(6)}(a,b,c)\right),\] _with a ternary operation \(\{\,,\,,\,\}\). The following conditions are equivalent. (1) \(T^{(7)}\) is a heptagon map. (2) \(T^{(6)}\) is a hexagon map and, expressed as in (5.13) (so that (5.14) holds), it satisfies the compatibility conditions (5.17) with the above ternary operation._

Proof.: This is an immediate consequence of the last part of Section 5.5.

**Corollary 7.25**.: _If \(T^{(6)}\) is a hexagon map, then_ \[T^{(7)}(a,b,c):=(c,T^{(6)}(a,b,c))\] _is a heptagon map._

Proof.: Using \(\{a,b,c\}=c\) in (5.17) results in identities.

**Corollary 7.26**.: _Let \(T^{(6)}\) be a hexagon map and \(u\in\mathcal{U}\) a fixed element. Then_ \[T^{(7)}(a,b,c):=(u,T^{(6)}(a,b,c))\] _is a heptagon map iff \(T^{(6)}(u,u,u)=(u,u)\)._

Proof.: Using \(\{a,b,c\}=u\) in (5.17) results in \(\langle u,u,u\rangle=u=[u,u,u]\).

**Proposition 7.27**.: _Let \(T^{(6)}\) be a hexagon map. (1) The map_ \[\tilde{T}^{(7)}(a,b,c)=(T^{(6)}(a,b,c),a)\] _is a dual heptagon map. (2) If \(u\) is a fixed element of \(\mathcal{U}\), then_ \[\tilde{T}^{(7)}(a,b,c)=(T^{(6)}(a,b,c),u)\] _is a dual heptagon map iff \(T^{(6)}(u,u,u)=(u,u)\)._

Proof.: (1) Setting \([a,b,c]=a\), the last three equations of (6.12) become identities and the first three are equivalent to (5.14) by a renaming of the ternary operations. (2) With \([a,b,c]=u\), the last three of equations (6.12) become \(T^{(6)}(u,u,u)=(u,u)\).

**Proposition 7.28**.: _Let_ \[\tilde{T}^{(8)}(a,b,c)=\left(T^{(7)}(a,b,c),|a,b,c|\right),\] _with a ternary operation \(|\,,\,,\,|\). The following conditions are equivalent. (1) \(\tilde{T}^{(8)}\) is a dual octagon map. (2) \(T^{(7)}\) is a heptagon map and, expressed as in (5.16), it satisfies (6.15)._

Proof.: This immediately follows from results in Section 6.5.

**Corollary 7.29**.: _If \(T^{(7)}\) is a heptagon map, then_ \[\tilde{T}^{(8)}(a,b,c):=\left(T^{(7)}(a,b,c),a\right)\] _is a dual octagon map._

Proof.: Setting \(|a,b,c|=a\) turns the four equations (6.15) into identities.
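The dual hexagon map of Example 7.20 is also easy to verify numerically. The sketch below (our addition) draws random triples from \(\mathcal{U}=(0,1)\) and checks the pentagon conditions (5.7) together with the additional conditions (6.9) for the operations \(a*b=(1-a)b/(1-ab)\), \(a\cdot b=ab\) and \(a\diamond b=a(1-b)/(1-ab)\).

```python
import random

# Binary operations from Example 7.20 on U = (0,1).
def star(a, b): return (1 - a) * b / (1 - a * b)
def dot(a, b):  return a * b
def dia(a, b):  return a * (1 - b) / (1 - a * b)   # a<>b = b*a

def checks(a, b, c, tol=1e-9):
    eqs = [
        # pentagon conditions (5.7)
        star(star(a, b), star(dot(a, b), c)) - star(b, c),
        dot(star(a, b), star(dot(a, b), c)) - star(a, dot(b, c)),
        dot(dot(a, b), c) - dot(a, dot(b, c)),
        # additional dual hexagon conditions (6.9)
        star(dia(a, dot(b, c)), dia(b, c)) - dia(star(a, b), star(dot(a, b), c)),
        dot(dia(a, dot(b, c)), dia(b, c)) - dia(dot(a, b), c),
        dia(dia(a, dot(b, c)), dia(b, c)) - dia(a, b),
    ]
    return all(abs(x) < tol for x in eqs)

random.seed(1)
assert all(checks(*(random.uniform(0.05, 0.95) for _ in range(3)))
           for _ in range(1000))
print("(a,b) -> ((1-a)b/(1-ab), ab, a(1-b)/(1-ab)) passes (5.7) and (6.9)")
```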
**Corollary 7.30**.: _Let \(T^{(7)}\) be a heptagon map and \(u\in\mathcal{U}\) a fixed element. Then_ \[\tilde{T}^{(8)}(a,b,c):=\left(T^{(7)}(a,b,c),u\right)\] _is a dual octagon map iff \(T^{(7)}(u,u,u)=(u,u,u)\)._ Proof.: Setting \(|a,b,c|=u\) turns the four equations (6.15) into \(\{u,u,u\}=u\), \(\langle u,u,u\rangle=u\) and \([u,u,u]=u\). **Proposition 7.31**.: _Let \(\tilde{T}^{(7)}\) be a dual heptagon map. (1) The map_ \[\tilde{T}^{(8)}(a,b,c)=\left(c,\tilde{T}^{(7)}(a,b,c)\right)\] _is a dual octagon map. (2) If \(u\) is a fixed element of \(\mathcal{U}\), then_ \[\tilde{T}^{(8)}(a,b,c)=\left(u,\tilde{T}^{(7)}(a,b,c)\right)\] _is a dual octagon map iff \(\tilde{T}^{(7)}(u,u,u)=(u,u,u)\)._ Proof.: This can be verified using results of Section 6. Preceding results suggest the following conjectures. **Conjecture 7.32**.: _Let \(T^{(2n)}\) be a \(2n\)-gon map, \(n\in\mathbb{N}\), \(n\geq 2\). Then_ \[T^{(2n+1)}(a_{1},\ldots,a_{n}):=\left(a_{n},T^{(2n)}(a_{1},\ldots,a_{n})\right)\] _is a \((2n+1)\)-gon map. _ **Conjecture 7.33**.: _Let \(T^{(2n)}\) be a \(2n\)-gon map, \(n\in\mathbb{N}\), and \(u\in\mathcal{U}\) a fixed element. Then_ \[T^{(2n+1)}(a_{1},\ldots,a_{n}):=\left(u,T^{(2n)}(a_{1},\ldots,a_{n})\right)\] _is a \((2n+1)\)-gon map iff \(T^{(2n)}(u,\ldots,u)=(u,\ldots,u)\). _ **Conjecture 7.34**.: _Let \(T^{(2n+1)}\) be a \((2n+1)\)-gon map, \(n\in\mathbb{N}\). Then_ \[\tilde{T}^{(2n+2)}(a_{1},\dots,a_{n}):=(T^{(2n+1)}(a_{1},\dots,a_{n}),a_{1})\] _is a dual \((2n+2)\)-gon map. _ **Conjecture 7.35**.: _Let \(T^{(2n+1)}\) be a \((2n+1)\)-gon map, \(n\in\mathbb{N}\), and \(u\in\mathcal{U}\) a fixed element. Then_ \[\tilde{T}^{(2n+2)}(a_{1},\dots,a_{n}):=(T^{(2n+1)}(a_{1},\dots,a_{n}),u)\] _is a dual \((2n+2)\)-gon map iff \(T^{(2n+1)}(u,\dots,u)=(u,\dots,u)\). _ **Conjecture 7.36**.: _Let \(\tilde{T}^{(2n+1)}\) be a dual \((2n+1)\)-gon map, \(n\in\mathbb{N}\). Then_ \[\tilde{T}^{(2n+2)}(a_{1},\dots,a_{n}):=(a_{n},\tilde{T}^{(2n+1)}(a_{1},\dots,a _{n}))\] _is a dual \((2n+2)\)-gon map. _ **Conjecture 7.37**.: _Let \(\tilde{T}^{(2n+1)}\) be a dual \((2n+1)\)-gon map, \(n\in\mathbb{N}\), and \(u\in\mathcal{U}\) a fixed element. Then_ \[\tilde{T}^{(2n+2)}(a_{1},\dots,a_{n}):=(u,\tilde{T}^{(2n+1)}(a_{1},\dots,a_{n }))\] _is a dual \((2n+2)\)-gon map iff \(\tilde{T}^{(2n+1)}(u,\dots,u)=(u,\dots,u)\). _ ## 8 Conclusions The main results of this work concern the structure of solutions of polygon equations. More precisely, we have shown that a solution of a (dual) \(N\)-gon equation is related in surprisingly simple ways to solutions of the (dual) \((N+1)\)-gon and (dual) \((N-1)\)-gon equation. For a chosen polygon equation, the most important case is when all basic sets are equal and there is only a single polygon map. Expressing polygon equations with the help of transposition maps, we can quite easily verify the abovementioned features for the simplest equations of the family. But for general proofs, we had to return to the underlying framework of higher Tamari orders, as developed in [11]. Our results reveal a beautiful structure of the family of polygon equations. Other nice aspects are the integrability feature [11], recalled in some examples in Section 5, and relations with polyhedra [11]. In this work, we concentrated on the set-theoretic setting. 
Many results directly pass over to the framework of vector spaces, tensor products, and linear maps, where the dual tetragon equation becomes the coassociativity condition and the pentagon equation plays one of its most important roles, as mentioned in the introduction. A further exploration of polygon equations, beyond the pentagon equation, in this framework, will be left for a separate work. Also concerning set-theoretic solutions we have only set the stage. It is to be expected that more concrete solutions can be obtained by applying methods that have already been exploited in the case of the pentagon equation. Whereas the (dual) pentagon and dual hexagon equation can be expressed in terms of binary operations, the hexagon and higher polygon equations involve \(n\)-ary operations with \(n>2\). There is a vast literature about such generalizations of products, and this should be helpful in finding solutions of \(N\)-gon equations with \(N>5\). In particular, there are corresponding generalizations of co-, bi- and Hopf algebras, for which higher polygon equations may play a role. Also see [56] ("cocycloids"). We have seen in Section 7.2 that a solution of the dual hexagon equation shows up in a pentagonal relation for the (exponentiated) Rogers dilogarithm. Also higher (dual) polygon equations may play a role in this context [63]. Surely there is much more to be revealed. **Acknowledgment.** Some important insights that led to this work originated from my collaboration with Aristophanes Dimakis, who, sadly, passed away in 2021.
2310.02671
Beyond Stationarity: Convergence Analysis of Stochastic Softmax Policy Gradient Methods
Markov Decision Processes (MDPs) are a formal framework for modeling and solving sequential decision-making problems. In finite-time horizons such problems are relevant for instance for optimal stopping or specific supply chain problems, but also in the training of large language models. In contrast to infinite horizon MDPs optimal policies are not stationary, policies must be learned for every single epoch. In practice all parameters are often trained simultaneously, ignoring the inherent structure suggested by dynamic programming. This paper introduces a combination of dynamic programming and policy gradient called dynamic policy gradient, where the parameters are trained backwards in time. For the tabular softmax parametrisation we carry out the convergence analysis for simultaneous and dynamic policy gradient towards global optima, both in the exact and sampled gradient settings without regularisation. It turns out that the use of dynamic policy gradient training much better exploits the structure of finite- time problems which is reflected in improved convergence bounds.
Sara Klein, Simon Weissmann, Leif Döring
2023-10-04T09:21:01Z
http://arxiv.org/abs/2310.02671v2
# Beyond Stationarity: Convergence Analysis of Stochastic Softmax Policy Gradient Methods ###### Abstract Markov Decision Processes (MDPs) are a formal framework for modeling and solving sequential decision-making problems. In finite-time horizons such problems are relevant for instance for optimal stopping or specific supply chain problems, but also in the training of large language models. In contrast to infinite horizon MDPs optimal policies are not stationary, policies must be learned for every single epoch. In practice all parameters are often trained simultaneously, ignoring the inherent structure suggested by dynamic programming. This paper introduces a combination of dynamic programming and policy gradient called dynamic policy gradient, where the parameters are trained backwards in time. For the tabular softmax parametrisation we carry out the convergence analysis for simultaneous and dynamic policy gradient towards global optima, both in the exact and sampled gradient settings without regularisation. It turns out that the use of dynamic policy gradient training much better exploits the structure of finite-time problems which is reflected in improved convergence bounds. **Keywords:** reinforcement learning, policy gradient, stochastic approximation, finite-time MDP. ## 1 Introduction Policy gradient (PG) methods continue to enjoy great popularity in practice due to their model-free nature and high flexibility. Despite their far-reaching history (Williams, 1992; Sutton et al., 1999; Konda and Tsitsiklis, 1999; Kakade, 2001), there were no proofs for the global convergence of these algorithms for a long time. Nevertheless, they have been very successful in many applications, which is why numerous variants have been developed in the last few decades, whose convergence analysis, if available, was mostly limited to convergence to stationary points (Pirotta et al., 2013; Schulman et al., 2015; Papini et al., 2018; Clavera et al., 2018; Shen et al., 2019; Xu et al., 2020b; Huang et al., 2020; Xu et al., 2020a; Huang et al., 2022). In recent years, notable advancements have been achieved in the convergence analysis towards global optima (Fazel et al., 2018; Agarwal et al., 2021; Mei et al., 2020; Bhandari and Russo, 2021, 2022; Cen et al., 2022; Xiao, 2022; Yuan et al., 2022; Alfano and Rebeschini, 2023). These achievements are partially attributed to the utilisation of (weak) gradient domination or Polyak-Lojasiewicz (PL) inequalities (lower bounds on the gradient) (Polyak, 1963). As examined in Karimi et al. (2016) a PL-inequality and \(\beta\)-smoothness (i.e. \(\beta\)-Lipschitz continuity of the gradient) implies a linear convergence rate for gradient descent methods. In certain cases, only a weaker form of the PL inequality can be derived, which states that it is only possible to lower bound the norm of the gradient instead of the squared norm of the gradient by the distance to the optimum. Despite this limitation, \(\mathcal{O}(1/n)\)-convergence can still be achieved in some instances. This article deals with PG algorithms for finite-time MDPs. Finite-time MDPs differ from discounted infinite-time MDPs in that the optimal policies are not stationary, i.e. depend on the epochs. While a lot of recent theoretical research focused on discounted MDPs with infinite-time horizon not much is known for finite-time MDPs. 
There is a prevailing thought that finite-time MDPs do not require additional scrutiny as they can be transformed into infinite horizon MDPs by adding an additional time-coordinate. Seeing finite-time MDPs this way leads to a training procedure in which parameters for all epochs are trained simultaneously, see for instance Guin and Bhatnagar (2023). While there are practical reasons to go that way, we will see below that ignoring the structure of the problem yields worse convergence bounds. The aim of this article is two-fold. Firstly, we analyse the simultaneous PG algorithm. The analysis for exact gradients goes along arguments of recent articles; the analysis of the stochastic PG case is novel. Secondly, we introduce a new approach to PG for finite-time MDPs. We exploit the dynamic programming structure and view the MDP as a nested sequence of contextual bandits. Essentially, our algorithm performs a sequence of PG algorithms backwards in time with a carefully chosen epoch-dependent number of training steps. We compare the exact and stochastic analysis to the simultaneous approach. There are some recent articles also studying PG of finite-time horizon MDPs from a different perspective, considering fictitious discount algorithms (Guo et al., 2022) or finite-time linear quadratic control problems (Hambly et al., 2021, 2022; Zhang et al., 2021). This article can be seen to extend a series of recent articles from discounted MDPs to finite-time MDPs. In Agarwal et al. (2021), the global asymptotic convergence of PG is demonstrated under tabular softmax parametrisation, and convergence rates are derived using log-barrier regularisation and natural policy gradient. Building upon this work, Mei et al. (2020) showed the first convergence rates for PG using non-uniform PL-inequalities (Mei et al., 2021), specifically for tabular softmax parametrisation. Their convergence rate is fundamentally dependent on the discount factor as \((1-\gamma)^{-6}\). While the results obviously do not immediately translate to non-discounted MDPs with \(\gamma=1\), a careful investigation of their arguments allows us to prove upper bounds involving \(H^{5}\) for the simultaneous PG method, compared to \(H^{3}\) that we obtain for the dynamic PG method. In a nutshell, the advantage of the dynamic PG is simple. Looking at the PG theorem for finite-time MDPs it is clear that earlier epochs should not be trained as long as policies for later epochs are far from optimal. A badly learned \(Q\)-function-to-go leads to badly directed gradients in early epochs. Simultaneous training of all policies thus leads to "useless" training in early epochs. This is covered by our dynamic PG algorithm that optimises policies backwards in time with an increased number of training steps. To illustrate this phenomenon we implemented a simple toy example where the advantage of dynamic PG becomes visible. In Figure 1 one can see 5 simulations of the dynamic PG with different target accuracies (blue curves) plotted against one version of the simultaneous PG with target accuracy 0.1 (dashed magenta curve). The time-horizon is chosen as \(H=5\). More details on the example can be found in Appendix E.

Figure 1: Evolution of the value function during training.

A main further contribution of this article is a stochastic analysis, where we abandon the assumption that the exact gradient is known and focus on the model-free stochastic PG method.
For this type of algorithm, very little is known about convergence to global optima even in the discounted case. Agarwal et al. (2021) discuss the approximate natural policy gradient for log-linear policies, and Fatkhullin et al. (2023) consider Fisher non-degenerate policies. In the tabular case, Xiao (2022) analyses inexact policy mirror descent and Ding et al. (2022) derive complexity bounds for entropy-regularised stochastic PG. They use a well-chosen stopping time which measures the distance to the set of optimal parameters, and simultaneously guarantees convergence to the regularised optimum prior to the occurrence of the stopping time by using a small enough step size and large enough batch size. Similar to this idea, we construct a different stopping time in this work, which allows us to derive complexity bounds for an approximation arbitrarily close to the global optimum that does not require a set of optimal parameters, and this is relevant when considering softmax parametrisation. To the best of our knowledge, the results presented in this paper provide the first convergence analysis for dynamic programming inspired PG under softmax parametrisation in the finite-time MDP setting, both for exact and batch-sampled policy gradients without regularisation.

## 2 Finite-time horizon MDPs and policy gradient methods

A finite-time MDP is defined by a tuple \((\mathcal{H},\mathcal{S},\mathcal{A},r,p)\) with \(\mathcal{H}=\{0,\ldots,H-1\}\) decision epochs, finite state space \(\mathcal{S}=\mathcal{S}_{0}\cup\cdots\cup\mathcal{S}_{H-1}\), finite action space \(\mathcal{A}=\bigcup_{s\in\mathcal{S}}\mathcal{A}_{s}\), a reward function \(r:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}\) and transition function \(p:\mathcal{S}\times\mathcal{A}\rightarrow\Delta(\mathcal{S})\) with \(p(\mathcal{S}_{h+1}|s,a)=1\) for every \(h<H-1\), \(s\in\mathcal{S}_{h}\) and \(a\in\mathcal{A}_{s}\). Here \(\Delta(D)\) denotes the set of all probability measures over a finite set \(D\). Throughout the article \(\pi=(\pi_{h})_{h=0}^{H-1}\) denotes a time-dependent policy, where \(\pi_{h}:\mathcal{S}_{h}\rightarrow\Delta(\mathcal{A})\) is the policy in decision epoch \(h\in\mathcal{H}\) with \(\pi_{h}(\mathcal{A}_{s}|s)=1\) for every \(s\in\mathcal{S}_{h}\). It is well known that, in contrast to discounted infinite-time horizon MDPs, non-stationary policies are needed to optimise finite-time MDPs. An optimal policy at time point \(h\) depends on the time horizon until the end of the problem (see for example Puterman (2005)). The epoch-dependent value functions under policy \(\pi\) are defined by \[V_{h}^{\pi_{(h)}}(\mu_{h}):=\mathbb{E}_{\mu_{h}}^{\pi_{(h)}}\Big{[}\sum_{k=h}^{H-1}r(S_{k},A_{k})\Big{]},\quad h\in\mathcal{H}, \tag{1}\] where \(\mu_{h}\) is an initial distribution, \(\pi_{(h)}=\left(\pi_{k}\right)_{k=h}^{H-1}\) denotes the sub-policy of \(\pi\) from \(h\) to \(H-1\) and \(\mathbb{E}_{\mu_{h}}^{\pi_{(h)}}\) is the expectation under the measure such that \(S_{h}\sim\mu_{h}\), \(A_{k}\sim\pi_{k}(\cdot|S_{k})\) and \(S_{k+1}\sim p(\cdot|S_{k},A_{k})\) for \(h\leq k<H-1\). The target is to find a (time-dependent) policy that maximises the state-value function \(V_{0}\) at time 0. In the following we will discuss two approaches to solve finite-time MDPs with PG:

* An algorithm that is often used in practice, where parametrised policies are trained simultaneously, i.e. the parameters for \(\pi_{0},...,\pi_{H-1}\) are trained at once using the objective \(V_{0}\).
* A new algorithm that trains the parameters sequentially starting at the last epoch. We call this scheme dynamic PG because it combines dynamic programming (backwards induction) and PG.

In fact, one can also consider PG algorithms that train stationary policies (i.e. independent of \(h\)) for finite-time MDPs. However, this violates the intrinsic nature of finite-time MDPs (optimal policies will only be stationary in trivial cases). In order to carry out a complete theoretical analysis, assumptions are required. In this article we will assume that all policies are softmax parametrised, an assumption that has appeared frequently in the past years. It is a first step towards a full understanding and already indicates why PG methods should use the dynamic programming structure inherent in finite-time MDPs.

**Simultaneous Policy Gradient.** Let us start by formulating the simultaneous PG algorithm that is often used in practice. The action spaces may depend on the current state and the number of possible actions in epoch \(h\) is denoted by \(d_{h}=\sum_{s\in\mathcal{S}_{h}}|\mathcal{A}_{s}|\). To perform a PG algorithm all policies \(\pi_{h}\) (or the entire policy \(\pi\)) must be parametrised. While the algorithm does not require a particular policy we will analyse the tabular softmax parametrisation \[\pi^{\theta}(a|s_{h})=\frac{\exp(\theta(s_{h},a))}{\sum_{a^{\prime}}\exp(\theta(s_{h},a^{\prime}))},\quad\theta=\left(\theta(s_{h},a)\right)_{s_{h}\in\mathcal{S}^{[\mathcal{H}]},a\in\mathcal{A}_{s_{h}}}\in\mathbb{R}^{\sum_{h}d_{h}}, \tag{2}\] where the notation \(\mathcal{S}^{[\mathcal{H}]}\) defines the enlarged state space, containing all possible states associated to their epoch (see Remark A.1 for more details). The tabular softmax parametrisation uses a single parameter for each possible state-action pair at all epochs. Other parametrised policies, e.g. neural networks, take states from all epochs, i.e. from the enlarged state space \(\mathcal{S}^{[\mathcal{H}]}\), as input variables. The simultaneous PG algorithm trains all parameters at once and solves the optimisation problem (to maximise the state value function at time \(0\)) by gradient ascent over all parameters (all epochs) simultaneously.

```
Result: Approximate policy \(\hat{\pi}^{*}\approx\pi^{*}\)
initialise \(\theta^{(0)}\in\mathbb{R}^{\sum_{h}d_{h}}\)
Choose fixed step size \(\eta>0\), number of training steps \(N\) and start distribution \(\mu\)
for \(n=0,\ldots,N-1\) do
    \(\theta^{(n+1)}=\theta^{(n)}+\eta\,\nabla_{\theta}V_{0}^{\pi^{\theta}}(\mu)\big{|}_{\theta=\theta^{(n)}}\)
end
Set \(\hat{\pi}^{*}=\pi^{\theta^{(N)}}\)
```
**Algorithm 1** Simultaneous Policy Gradient for finite-time MDPs

Most importantly, the algorithm does not treat epochs differently, the same training effort goes into all epochs. For later use the objective function will be denoted by \[J(\theta,\mu):=V_{0}^{\pi^{\theta}}(\mu)=\mathbb{E}_{\mu}^{\pi^{\theta}}\Big{[}\sum_{h=0}^{H-1}r(S_{h},A_{h})\Big{]} \tag{3}\] Furthermore, let \(\rho_{\mu}^{\pi^{\theta}}(s)=\sum_{h=0}^{H-1}\mathbb{P}_{\mu}^{\pi^{\theta}}(S_{h}=s)\) be the state-visitation measure on \(\mathcal{S}\) and \(d_{\mu}^{\pi^{\theta}}(s)=\frac{1}{H}\rho_{\mu}^{\pi^{\theta}}(s)\) be the normalised state-visitation distribution.
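For concreteness, the following is a minimal, self-contained sketch of Algorithm 1 with exact gradients (our addition; the toy MDP, its dimensions and the training length are invented for illustration, with a constant state space across epochs as in Assumption 3.1 below). The exact gradient combines backward induction for the \(Q\)-values with a forward pass for the state distributions, using the tabular softmax identity \(\partial V_{0}(\mu)/\partial\theta(s_{h},a)=\mathbb{P}(S_{h}=s)\,\pi_{h}(a|s)\,(Q_{h}(s,a)-V_{h}(s))\); the step size mirrors \(\eta=\frac{1}{5H^{2}R^{*}}\) from Theorem 3.2 below.

```python
import numpy as np

# Toy data (hypothetical, for illustration): H epochs, S states, A actions.
rng = np.random.default_rng(0)
H, S, A = 4, 3, 2
P = rng.dirichlet(np.ones(S), size=(H, S, A))   # P[h, s, a] = p(.|s, a) at epoch h
r = rng.uniform(0.0, 1.0, size=(H, S, A))       # rewards in [0, R*]
mu = np.ones(S) / S                             # start distribution

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def exact_gradient(theta):
    """Exact gradient of J(theta, mu) for the tabular softmax parametrisation."""
    pi = softmax(theta)                               # shape (H, S, A)
    Q = np.zeros((H, S, A)); V = np.zeros((H + 1, S))
    for h in range(H - 1, -1, -1):                    # backward induction
        Q[h] = r[h] + P[h] @ V[h + 1]
        V[h] = (pi[h] * Q[h]).sum(axis=-1)
    rho = np.zeros((H, S)); rho[0] = mu               # forward state distributions
    for h in range(H - 1):
        rho[h + 1] = np.einsum('s,sa,sat->t', rho[h], pi[h], P[h])
    return rho[:, :, None] * pi * (Q - V[:H, :, None]), V[0] @ mu

# Algorithm 1: one gradient ascent loop over all parameters simultaneously.
theta = np.zeros((H, S, A))
eta = 1.0 / (5 * H**2 * r.max())                      # step size as in Theorem 3.2
for n in range(5000):
    grad, _ = exact_gradient(theta)
    theta += eta * grad
print("V_0(mu) after simultaneous training:", exact_gradient(theta)[1])
```

Note how the gradient at epoch \(h\) is weighted by \(Q\)-values computed under the current, possibly badly trained, later policies, in line with the heuristic discussed below.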
We denote by \(J^{*}(\mu)=\sup_{\theta}J(\theta,\mu)\) the optimal value of the objective function and note that \(J^{*}(\mu)=V_{0}^{*}(\mu)=\sup_{\pi:\,\mathrm{Policy}}V_{0}^{\pi}(\mu)\) under the tabular softmax parametrisation, as an optimal policy can be approximated arbitrarily well.

**Dynamic Policy Gradient.** First of all, recall that the inherent structure of finite-time MDPs is a backwards induction principle (dynamic programming), see for instance Puterman (2005). To see backwards induction used in learning algorithms we refer for instance to Bertsekas and Tsitsiklis (1996, Sec 6.5). In a way, finite-time MDPs can be viewed as nested contextual bandits. The dynamic PG approach suggested in this article builds upon this intrinsic structure and runs a PG scheme on top. Consider \(H\) different parameters \(\theta_{0},\ldots,\theta_{H-1}\) such that \(\theta_{h}\in\mathbb{R}^{d_{h}}\). A parametric policy \((\pi^{\theta_{h}})_{h=0}^{H-1}\) is defined such that the policy in epoch \(h\) depends only on the parameter \(\theta_{h}\). An example is the tabular softmax parametrisation formulated slightly differently than above. For each decision epoch \(h\in\mathcal{H}\) the tabular softmax parametrisation is given by \[\pi^{\theta_{h}}(a|s)=\frac{\exp(\theta_{h}(s,a))}{\sum_{a^{\prime}\in\mathcal{A}}\exp(\theta_{h}(s,a^{\prime}))},\quad\theta_{h}=(\theta_{h}(s,a))_{s\in\mathcal{S}_{h},a\in\mathcal{A}_{s}}\in\mathbb{R}^{d_{h}}. \tag{4}\] The total dimension of the parameter tensor \((\theta_{0},\ldots,\theta_{H-1})\) equals the one of \(\theta\) from (2) because \(\theta_{h}(s_{h},a)=\theta(s_{h},a)\) for \(s_{h}\in\mathcal{S}_{h}\subset\mathcal{S}^{[\mathcal{H}]}\). The difference is that the epoch dependence is made more explicit in (4). The main idea of this approach is as follows. The dynamic programming perspective suggests to learn policies backwards in time. Thus, we start by training the last parameter vector \(\theta_{H-1}\) on the sub-problem \(V_{H-1}\), a one-step MDP which can be viewed as a contextual bandit. After convergence up to some termination condition, it is known how to act near optimally in the last epoch and one can proceed to train the parameter vectors from previous epochs by exploiting the knowledge of acting near optimally in the future. This is what the proposed dynamic PG algorithm does. A policy is trained up to some termination condition and then used to optimise an epoch earlier.

```
Result: Approximate policy \(\hat{\pi}^{*}\approx\pi^{*}\)
initialise \(\theta^{(0)}=(\theta_{0}^{(0)},\ldots,\theta_{H-1}^{(0)})\in\Theta\)
for \(h=H-1,\ldots,0\) do
    Choose fixed step size \(\eta_{h}\), number of training steps \(N_{h}\) and start distribution \(\mu_{h}\)
    for \(n=0,\ldots,N_{h}-1\) do
        \(\theta_{h}^{(n+1)}=\theta_{h}^{(n)}+\eta_{h}\,\nabla_{\theta_{h}}V_{h}^{(\pi^{\theta_{h}},\hat{\pi}_{(h+1)}^{*})}(\mu_{h})\big{|}_{\theta_{h}=\theta_{h}^{(n)}}\)
    end
    Set \(\hat{\pi}_{h}^{*}=\pi^{\theta_{h}^{(N_{h})}}\)
end
```
**Algorithm 2** Dynamic Policy Gradient for finite-time MDPs

A bit of notation is needed to analyse this approach.
Given any fixed policy \(\tilde{\pi}\), the objective function \(J_{h}\) in epoch \(h\) is defined to be the epoch-\(h\) state value function under the extended policy \((\pi^{\theta_{h}},\tilde{\pi}_{(h+1)}):=(\pi^{\theta_{h}},\tilde{\pi}_{h+1},\ldots,\tilde{\pi}_{H-1})\), \[J_{h}(\theta_{h},\tilde{\pi}_{(h+1)},\mu_{h}):=V_{h}^{(\pi^{\theta_{h}},\tilde{\pi}_{(h+1)})}(\mu_{h})=\mathbb{E}_{\mu_{h}}^{(\pi^{\theta_{h}},\tilde{\pi}_{(h+1)})}\Big{[}\sum_{k=h}^{H-1}r(S_{k},A_{k})\Big{]}. \tag{5}\] While the notation is a bit heavy the intuition behind it is easy to understand. If the policy after epoch \(h\) is already trained (this is \(\tilde{\pi}_{(h+1)}\)) then \(J_{h}\), as a function of \(\theta_{h}\), is the parametrised dependence of the value function when only the policy for epoch \(h\) is changed. Gradient ascent is then used to find a parameter \(\theta_{h}^{*}\) that maximises \(J_{h}(\cdot,\tilde{\pi}_{(h+1)},\delta_{s})\), for all \(s\in\mathcal{S}_{h}\), where \(\delta_{s}\) is the Dirac measure on \(s\). Note that to train \(\theta_{h}\) one chooses \(\tilde{\pi}_{(h+1)}=\hat{\pi}_{(h+1)}^{*}\) in Algorithm 2.

A priori it is not clear if simultaneous or dynamic programming inspired training is more efficient. Dynamic PG has an additional loop but trains fewer parameters at once. We give a detailed analysis for the tabular softmax parametrisation but want to give a heuristic argument why simultaneous training is not favorable. The policy gradient theorem, see Theorem A.5, states that \[\nabla J(\theta,\mu)=\sum_{s_{h}\in\mathcal{S}^{[\mathcal{H}]}}\tilde{\rho}_{\mu}^{\pi^{\theta}}(s_{h})\sum_{a\in\mathcal{A}_{s_{h}}}\pi^{\theta}(a|s_{h})\nabla\log(\pi^{\theta}(a|s_{h}))Q_{h}^{\pi^{\theta}}(s_{h},a),\] involving \(Q\)-values under the current policy1. It implies that the training of policies at earlier epochs is massively influenced by estimation errors of \(Q_{h}^{\pi^{\theta}}\). Reasonable training of optimal decisions is only possible if all later epochs have been trained well, i.e. \(Q_{h}^{\pi^{\theta}}\approx Q_{h}^{*}\). This may lead to inefficiency in earlier epochs when training all epochs simultaneously. It is important to note that the policy gradient formula is independent of the parametrisation. While our precise analysis is only carried out for tabular softmax parametrisations this general heuristic remains valid for all classes of policies.

Footnote 1: See Appendix A, (12) and (13) for the definition of the state-action value function \(Q\) and the enlarged state visitation measure \(\tilde{\rho}\).

_Assumption 2.1_.: Throughout the remaining manuscript we assume that the rewards are bounded in \([0,R^{*}]\), for some \(R^{*}>0\). The positivity is no loss of generality; bounded negative rewards can be shifted using the baseline trick.

In what follows we will always assume the tabular softmax parametrisation and analyse both PG schemes, first under the assumption of exact gradients, then with sampled gradients à la REINFORCE.

## 3 Convergence of Softmax Policy Gradient with exact gradients

In the following, we analyse the convergence behavior of the simultaneous as well as the dynamic approach under the assumption of access to exact gradient computations. The presented convergence analysis in both settings is inspired from the discounted setting considered recently in Agarwal et al. (2021); Mei et al. (2020). The idea is to combine smoothness of the objective function and a (weak) PL-inequality in order to derive a global convergence result.
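The mechanics behind this strategy can be summarised in a few lines (our summary, with schematic constants). For a \(\beta\)-smooth objective, gradient ascent with step size \(\eta=\frac{1}{\beta}\) satisfies the ascent lemma \[J(\theta^{(n+1)},\mu)\geq J(\theta^{(n)},\mu)+\frac{1}{2\beta}\|\nabla J(\theta^{(n)},\mu)\|_{2}^{2}\,.\] If moreover a weak PL-inequality of the form \(\|\nabla J(\theta,\mu)\|_{2}\geq c\,(J^{*}(\mu)-J(\theta,\mu))\) holds along the iterates, the optimality gap \(\delta_{n}:=J^{*}(\mu)-J(\theta^{(n)},\mu)\) satisfies \(\delta_{n+1}\leq\delta_{n}-\frac{c^{2}}{2\beta}\delta_{n}^{2}\), hence \(\frac{1}{\delta_{n}}\geq\frac{1}{\delta_{0}}+\frac{c^{2}}{2\beta}n\) and therefore \(\delta_{n}\leq\frac{2\beta}{c^{2}n}\). The results below instantiate \(\beta\) and \(c\) for the two training schemes; the additional factors appearing in the theorems (distribution mismatch, \(|\mathcal{S}|\), and the constants \(c\) or \(c_{h}\)) enter through the respective PL-inequalities.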
### Simultaneous Policy Gradient To prove convergence in the simultaneous approach we interpret the finite-time MDP as an undiscounted stationary problem with state space \(\mathcal{S}^{[H]}\) and deterministic absorption time \(H\). This MDP is undiscounted but terminates in finite time. Building upon Agarwal et al. (2021); Mei et al. (2020); Yuan et al. (2022) we prove that the objective function defined in (3) is \(\beta\)-smooth with parameter \(\beta=H^{2}R^{*}(2-\frac{1}{|\mathcal{A}|})\) and satisfies a weak PL-inequality of the form \[\|\nabla J(\theta,\mu)\|_{2}\geq\frac{\min_{s_{h}\in\mathcal{S}^{[\mathcal{H}]}} \pi^{\theta}(a^{*}(s_{h})|s_{h})}{\sqrt{|\mathcal{S}^{[\mathcal{H}]}|}}\Big\| \frac{d_{\mu}^{\pi^{*}}}{d_{\mu}^{\pi^{\theta}}}\Big\|_{\infty}^{-1}(J^{*}( \mu)-J(\theta,\mu)).\] Here \(\pi^{*}\) denotes a fixed but arbitrary deterministic optimal policy on the enlarged state space \(\mathcal{S}^{[\mathcal{H}]}\) and \(a^{*}(s_{h})=\operatorname*{argmax}_{a\in\mathcal{A}_{s_{h}}}\pi^{*}(a|s_{h})\) is the action chosen by \(\pi^{*}\) in state \(s_{h}\). The term \[\Big\|\frac{d_{\mu}^{\pi^{*}}}{d_{\mu}^{\pi^{\theta}}}\Big\|_{\infty}:= \max_{s\in\mathcal{S}}\frac{d_{\mu}^{\pi^{*}}(s)}{d_{\mu}^{\pi^{\theta}}(s)} \tag{6}\] is the distribution mismatch coefficient introduced in Agarwal et al. (2021, Def 3.1). Both properties are shown in Appendix B.1. To ensure that the distribution mismatch coefficient can be bounded from below uniformly in \(\theta\) (see also Remark B.4) we make the following assumption. _Assumption 3.1_.: For the simultaneous PG algorithm we assume that the state space is constant over all epochs, i.e. \(\mathcal{S}_{h}=\mathcal{S}\) for all epochs. As already pointed out in Mei et al. (2020), one key challenge in proving global convergence is to bound the term \(\min_{s\in\mathcal{S}}\pi^{\theta}(a_{h}^{*}(s)|s)\), appearing in the gradient ascent updates, from below uniformly in \(\theta\). Techniques introduced in Agarwal et al. (2021) can be extended to the finite-horizon setting to prove asymptotic convergence towards global optima. This can then be used to bound \(c=c(\theta^{(0)})=\inf_{n}\min_{s\in\mathcal{S}}\pi^{\theta^{(n)}}(a_{h}^{*}( s)|s)>0\) (Lemma B.5). Combining smoothness and the gradient domination property results in the following global convergence result. **Theorem 3.2**.: _Under Assumption 3.1, let \(\mu\) be a probability measure such that \(\mu(s)>0\) for all \(s\in\mathcal{S}\), let \(\eta=\frac{1}{5H^{2}R^{*}}\) and consider the sequence \((\theta^{(n)})\) generated by Algorithm 1 with arbitrary initialisation \(\theta^{(0)}\). For \(\epsilon>0\) choose the number of training steps as \(N=\frac{10H^{5}R^{*}|\mathcal{S}|}{c^{2}\epsilon}\Big\|\frac{d_{\mu}^{\pi^{ *}}}{\mu}\Big\|_{\infty}^{2}\). Then it holds that_ \[V_{0}^{*}(\mu)-V_{0}^{\pi^{\theta^{(N)}}}(\mu)\leq\epsilon.\] One can compare this result to Mei et al. (2020, Thm 4) for discounted MDPs. A discounted MDP can be seen as an undiscounted MDP stopped at an independent geometric random variable with mean \((1-\gamma)^{-1}\). Thus, it comes as no surprise that algorithms with deterministic absorption time \(H\) have analogous estimates with \(H\) in place of \((1-\gamma)^{-1}\). See Remark B.6 for a detailed comparison. Furthermore, it is noteworthy that it cannot be proven that \(c\) is independent of \(H\). We omit this dependency when comparing to the discounted case, because the model-dependent constant there could likewise depend on \(\gamma\).
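To get a feeling for the bound, the following throwaway computation plugs assumed constants into the expression for \(N\) in Theorem 3.2; the values of \(c\) and of the distribution mismatch coefficient are purely illustrative, since both are model-dependent and generally unknown.

```python
# Illustrative scaling of N from Theorem 3.2; all constants are assumed.
H, R_star, S, eps = 10, 1.0, 5, 0.1
c, mismatch = 0.1, 2.0        # assumed: c = inf_n min_s pi(a*|s), mismatch coefficient
N = 10 * H**5 * R_star * S / (c**2 * eps) * mismatch**2
print(f"{N:.1e}")             # 2.0e+10 gradient steps -- dominated by the H**5 factor
```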
### Dynamic Policy Gradient We now come to the first main contribution of this work, an improved bound for the convergence of the dynamic PG algorithm. The optimisation objectives are the \(J_{h}\) defined in (5). The structure of the convergence proof is as follows. For each fixed \(h\in\mathcal{H}\) we prove global convergence given that the policy after \(h\) is fixed and denoted by \(\tilde{\pi}\). After having established bounds for each decision epoch, we apply backwards induction to derive complexity bounds on the total error accumulated over all decision epochs. The different smoothness constants of the \(J_{h}\) are then reflected in different numbers of training steps for different epochs. The backwards induction setting can be described as a nested sequence of contextual bandits (one-step MDPs) and can thus be analysed using results from the discounted setting by choosing \(\gamma=0\). Using PG estimates for discounted MDPs (Mei et al., 2020; Yuan et al., 2022) we prove in Appendix B.2 that the objective \(J_{h}\) from (5) is a smooth function in \(\theta_{h}\) with parameter \(\beta_{h}=2(H-h)R^{*}\) and also satisfies a weak PL-inequality of the form \[\|\nabla J_{h}(\theta_{h},\tilde{\pi}_{(h+1)},\mu_{h})\|_{2}\geq\min_{s\in \mathcal{S}_{h}}\pi^{\theta_{h}}(a_{h}^{*}(s)|s)(J_{h}^{*}(\tilde{\pi}_{(h+1)},\mu_{h})-J_{h}(\theta_{h},\tilde{\pi}_{(h+1)},\mu_{h})).\] It is crucial to keep in mind that classical theory from non-convex optimisation tells us that less smooth (large \(\beta\)) functions must be trained with more gradient steps. It follows that the dynamic PG algorithm should spend less training effort on later epochs (which come earlier in the algorithm) and more training effort on earlier epochs (later in the algorithm). In fact, we make use of this observation by applying backwards induction in order to improve the convergence behavior in terms of \(H\) (see Theorem 4.2). The main challenge is again to bound \(\min_{s\in\mathcal{S}}\pi^{\theta_{h}}(a_{h}^{*}(s)|s)\), appearing in the gradient ascent updates of Algorithm 2, from below uniformly in \(\theta_{h}\). In this setting the required asymptotic convergence follows directly from the one-step MDP viewpoint with \(\gamma=0\) obtained in Agarwal et al. (2021, Thm 5), and it holds that \(c_{h}=c_{h}(\theta_{h}^{(0)})=\inf_{n\geq 0}\min_{s\in\mathcal{S}_{h}}\pi^{ \theta_{h}^{(n)}}(a_{h}^{*}(s)|s)>0\) (Lemma B.10). There is another subtle advantage of the backwards induction point of view. The contextual bandit interpretation allows using refinements of estimates for the special case of contextual bandits. A slight generalisation of the work of Mei et al. (2020) for stochastic bandits shows that the unpleasant unknown constants \(c_{h}\) simplify if the PG algorithm is uniformly initialised: **Proposition 3.3**.: _For fixed \(h\in\mathcal{H}\), let \(\mu_{h}\) be a probability measure such that \(\mu_{h}(s)>0\) for all \(s\in\mathcal{S}_{h}\) and let \(0<\eta_{h}\leq\frac{1}{2(H-h)R^{*}}\). Let \(\theta_{h}^{(0)}\in\mathbb{R}^{d_{h}}\) be an initialisation such that the initial policy is the uniform distribution; then \(c_{h}=\frac{1}{|\mathcal{A}|}>0\)._ This property is in sharp contrast to the simultaneous approach, where, to the best of our knowledge, it is not known how to lower bound \(c\) explicitly. Comparing the proofs of \(c>0\) and \(c_{h}>0\), one can see that this advantage comes from the backward inductive approach and is due to the fixed future policies, which do not change during training.
For fixed decision epoch \(h\), combining \(\beta\)-smoothness and the weak PL-inequality yields the following global convergence result for the dynamic PG generated by Algorithm 2. **Lemma 3.4**.: _For fixed \(h\in\mathcal{H}\), let \(\mu_{h}\) be a probability measure such that \(\mu_{h}(s)>0\) for all \(s\in\mathcal{S}_{h}\), let \(\eta_{h}=\frac{1}{2(H-h)R^{*}}\) and consider the sequence \((\theta_{h}^{(n)})\) generated by Algorithm 2 with arbitrary initialisation \(\theta_{h}^{(0)}\) and \(\tilde{\pi}\). For \(\epsilon>0\) choose the number of training steps as \(N_{h}=\frac{4(H-h)R^{*}}{c_{h}^{2}\epsilon}\). Then it holds that_ \[V_{h}^{(\pi_{h}^{*},\tilde{\pi}_{(h+1)})}(\mu_{h})-V_{h}^{(\pi^{\theta_{h}^{(N _{h})}},\tilde{\pi}_{(h+1)})}(\mu_{h})\leq\epsilon.\] _Moreover, if \(\theta_{h}^{(0)}\) initialises the uniform distribution, the constants \(c_{h}\) can be replaced by \(\frac{1}{|\mathcal{A}|}\)._ The error bound depends on the time remaining up to the final time point, meaning intuitively that an optimal policy at earlier time points of the MDP (smaller \(h\)) is harder to achieve and requires a longer learning period than at later time points (\(h\) close to \(H\)). We remark that the assumption on \(\mu_{h}\) is not a sharp restriction and can be met by using a strictly positive start distribution \(\mu\) on \(\mathcal{S}_{0}\) followed by a uniformly distributed policy. Note that assuming a positive start distribution is common in the literature, and Mei et al. (2020) showed the necessity of this assumption. Accumulating errors over time, we can now derive estimates analogous to the simultaneous PG approach. We obtain a linear accumulation, such that an \(\frac{\epsilon}{H}\)-error at each time point \(h\) results in an overall error of \(\epsilon\); this appears naturally from the dynamic programming structure of the algorithm. **Theorem 3.5**.: _For all \(h\in\mathcal{H}\), let \(\mu_{h}\) be probability measures such that \(\mu_{h}(s)>0\) for all \(s\in\mathcal{S}_{h}\), let \(\eta_{h}=\frac{1}{2(H-h)R^{*}}\). For \(\epsilon>0\) choose the number of training steps as \(N_{h}=\frac{4(H-h)HR^{*}}{c_{h}^{2}\epsilon}\big\|\frac{1}{\mu_{h}}\big\|_ {\infty}\). Then for the final policy from Algorithm 2, \(\hat{\pi}^{*}=(\pi^{\theta_{0}^{(N_{0})}},\ldots,\pi^{\theta_{H-1}^{(N_{H-1})}})\), it holds for all \(s\in\mathcal{S}_{0}\) that_ \[V_{0}^{*}(s)-V_{0}^{\hat{\pi}^{*}}(s)\leq\epsilon.\] _If \(\theta_{h}^{(0)}\) initialises the uniform distribution, the constants \(c_{h}\) can be replaced by \(\frac{1}{|\mathcal{A}|}\)._ ### Comparison of the algorithms Comparing Theorem 3.5 to the convergence rate for simultaneous PG in Theorem 3.2, we first highlight that the constant \(c_{h}\) in the dynamic approach can be computed explicitly under uniform initialisation. This has not been established for the simultaneous PG (see Remark B.11); in particular, it cannot be guaranteed that \(c\) is independent of the time horizon. Second, we compare the overall dependence of the number of training steps on the time horizon. In the dynamic approach \(\sum_{h}N_{h}\) scales with \(H^{3}\), in comparison to \(H^{5}\) in the convergence rate for the simultaneous approach. In particular, for large time horizons, the theoretical analysis shows that reaching a given accuracy is more costly when training all parameters simultaneously. In the dynamic PG the powers of \(H\) are due to the smoothness constant, the \(\frac{\epsilon}{H}\)-error which we have to achieve in every epoch, and finally the sum over all epochs.
In comparison, in the simultaneous PG two powers of \(H\) are due to the smoothness constant, another two are due to the distribution mismatch coefficient in the PL-inequality, which we need to bound uniformly in \(\theta\) (see also Remark B.3), and the last power is due to the enlarged state space \(|\mathcal{S}^{[H]}|=|\mathcal{S}|H\). See Appendix E for a toy example visualising that the rate of convergence in both approaches is \(\mathcal{O}(\frac{1}{n})\) and that the constants in the dynamic approach are indeed better than those in the simultaneous approach. ## 4 Convergence Analysis of Stochastic Softmax Policy Gradient In the previous section, we derived global convergence guarantees for solving a finite-time MDP via simultaneous as well as dynamic PG with exact gradient computation. In practical scenarios, however, assuming access to exact gradients is not feasible, since the transition function \(p\) of the underlying MDP is unknown. In the following section, we relax this assumption by replacing the exact gradient with a stochastic approximation. To be more precise, we consider a model-free setting where we are only able to generate trajectories of the finite-time MDP. These trajectories are used to formulate the stochastic PG method for training the parameters in both the simultaneous and the dynamic approach. Although in both approaches we are able to guarantee almost sure asymptotic convergence similar to the exact PG scheme, we are no longer able to control the constants \(c\) and \(c_{h}\) respectively along trajectories of the stochastic PG scheme, due to the randomness in our iterations. Therefore, the derived lower bound in the weak PL-inequality may degenerate in general. In order to derive complexity bounds in the stochastic scenario, we make use of the crucial property that \(c\) (and \(c_{h}\) respectively) remain strictly positive along the trajectory of the exact PG scheme. To do so, we introduce the stopping times \(\tau\) and \(\tau_{h}\), stopping the scheme when the stochastic PG trajectory is too far away from the exact PG trajectory (under the same initialisation). Hence, conditioning on \(\{\tau\geq n\}\) (and \(\{\tau_{h}\geq n\}\) respectively) forces the stochastic PG to remain close to the exact PG scheme and hence guarantees non-degenerate weak PL-inequalities. The structure of the proof in the stochastic setting is then two-fold: * We derive a rate of convergence of the stochastic PG scheme under a non-degenerate weak PL-inequality on the event \(\{\tau\geq n\}\). Since we consider a constant step size, the batch size needs to be increased sufficiently fast to control the variance occurring through the stochastic approximation scheme. See Lemma D.4 and Lemma D.8. * We introduce a second rule for increasing the batch size, depending on a tolerance \(\delta>0\), leading to \(\mathbb{P}(\tau\leq n)<\delta\). This means that one forces the stochastic PG to remain close to the exact PG with high probability. See Lemma D.5 and Lemma D.9. A similar proof strategy has been introduced in Ding et al. (2022) for proving convergence of entropy-regularised stochastic PG in the discounted case. However, we emphasise that entropy regularisation yields a stronger form of PL-inequality, so that the results cannot be transferred straightforwardly. We again first discuss the simultaneous approach, followed by the dynamic approach.
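To fix ideas before the formal definitions, the following sketch shows how such a trajectory-based Monte Carlo gradient is assembled in the simultaneous case; it mirrors the estimator (7) defined next. The environment sampler `env_step` and its interface are assumptions for illustration, not part of the paper.

```python
import numpy as np

def softmax_probs(theta_s):
    z = np.exp(theta_s - theta_s.max())
    return z / z.sum()

def pg_estimate(theta, env_step, mu, H, K, rng):
    """Monte Carlo estimate of the simultaneous gradient, cf. (7).
    theta: (S, A) parameters over the enlarged state space; env_step(s, a, rng)
    returns (next_state, reward) and is an assumed model-free sampler."""
    grad = np.zeros_like(theta)
    S, A = theta.shape
    for _ in range(K):
        s = rng.choice(S, p=mu)
        traj = []
        for _ in range(H):
            pi = softmax_probs(theta[s])
            a = rng.choice(A, p=pi)
            s_next, rew = env_step(s, a, rng)
            traj.append((s, a, rew, pi))
            s = s_next
        R = 0.0                      # reward-to-go, accumulated backwards in time
        for s_h, a_h, rew, pi in reversed(traj):
            R += rew
            score = -pi.copy()       # grad of log softmax: 1{a'=a} - pi(a'|s)
            score[a_h] += 1.0
            grad[s_h] += score * R
    return grad / K
```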
Simultaneous stochastic policy gradient estimator: Consider \(K\) trajectories \((s^{i}_{h},a^{i}_{h})_{h=0}^{H-1}\), for \(i=1,\ldots,K\), generated by \(s^{i}_{0}\sim\mu\), \(a^{i}_{h}\sim\pi^{\theta}(\cdot|s^{i}_{h})\) and \(s^{i}_{h}\sim p(\cdot|s^{i}_{h-1},a^{i}_{h-1})\) for \(0<h<H\). The gradient estimator is defined by \[\widehat{\nabla}J^{K}(\theta,\mu)=\frac{1}{K}\sum_{i=1}^{K}\sum_{h=0}^{H-1} \nabla\log(\pi^{\theta}(a^{i}_{h}|s^{i}_{h}))\hat{R}^{i}_{h}, \tag{7}\] where \(\hat{R}^{i}_{h}=\sum_{k=h}^{H-1}r(s^{i}_{k},a^{i}_{k})\) is an unbiased estimator of the \(h\)-state-action value function in \((s^{i}_{h},a^{i}_{h})\) under the policy \(\pi^{\theta}\). This gradient estimator is unbiased and has bounded variance (Lemma D.1). The stochastic PG updates for training the softmax parameter are then given by \[\bar{\theta}^{(n+1)}=\bar{\theta}^{(n)}+\eta\widehat{\nabla}J^{K}(\bar{\theta }^{(n)},\mu). \tag{8}\] Our main result for the simultaneous stochastic PG scheme is given as follows. **Theorem 4.1**.: _Under Assumption 3.1, let \(\mu\) be a probability measure such that \(\mu(s)>0\) for all \(s\in\mathcal{S}\). Consider the final policy using Algorithm 1 with stochastic updates from (8) denoted by \(\hat{\pi}^{*}=\pi^{\bar{\theta}^{(N)}}\). Moreover, for any \(\delta,\epsilon>0\) assume that the number of training steps satisfies \(N\geq\left(\frac{21|\mathcal{S}|H^{5}R^{*}}{\epsilon\delta c^{2}}\right)^{2} \left\|\frac{d^{\pi^{*}}_{\mu}}{\mu}\right\|_{\infty}^{4}\), let \(\eta=\frac{1}{5H^{2}R^{*}\sqrt{N}}\) and \(K\geq\frac{10\max\{R^{*},1\}^{2}N^{3}}{c^{2}\delta^{2}}\). Then it holds true that_ \[\mathbb{P}\big{(}V_{0}^{*}(\mu)-V_{0}^{\hat{\pi}^{*}}(\mu)\geq\epsilon\big{)} \leq\delta\,.\] Dynamic stochastic policy gradient estimator: For fixed \(h\) consider \(K_{h}\) trajectories \((s^{i}_{k},a^{i}_{k})_{k=h}^{H-1}\), for \(i=1,\ldots,K_{h}\), generated by \(s^{i}_{h}\sim\mu_{h}\), \(a^{i}_{h}\sim\pi^{\theta}(\cdot|s^{i}_{h})\), \(s^{i}_{k}\sim p(\cdot|s^{i}_{k-1},a^{i}_{k-1})\) and \(a^{i}_{k}\sim\tilde{\pi}_{k}(\cdot|s^{i}_{k})\) for \(h<k<H\). The estimator is defined by \[\widehat{\nabla}J^{K}_{h}(\theta,\tilde{\pi}_{(h+1)},\mu_{h})=\frac{1}{K_{h}} \sum_{i=1}^{K_{h}}\nabla\log(\pi^{\theta}(a^{i}_{h}|s^{i}_{h}))\hat{R}^{i}_{h}, \tag{9}\] where \(\hat{R}^{i}_{h}=\sum_{k=h}^{H-1}r(s^{i}_{k},a^{i}_{k})\) is an unbiased estimator of the \(h\)-state-action value function in \((s^{i}_{h},a^{i}_{h})\) under the extended policy \((\pi^{\theta},\tilde{\pi}_{(h+1)})\). The stochastic PG updates for training the parameter \(\theta_{h}\) are then given by \[\bar{\theta}^{(n+1)}_{h}=\bar{\theta}^{(n)}_{h}+\eta_{h}\widehat{\nabla}J^{K_ {h}}_{h}(\bar{\theta}^{(n)}_{h},\tilde{\pi}_{(h+1)},\mu_{h}). \tag{10}\] Our main result for the dynamic stochastic PG scheme is given as follows. **Theorem 4.2**.: _For all \(h\in\mathcal{H}\), let \(\mu_{h}\) be probability measures such that \(\mu_{h}(s)>0\) for all \(h\in\mathcal{H}\), \(s\in\mathcal{S}_{h}\). Consider the final policy using Algorithm 2 with stochastic updates from (10) denoted by \(\hat{\pi}^{*}=(\pi^{\bar{\theta}^{(N_{0})}_{0}},\ldots,\pi^{\bar{\theta}^{(N_ {H-1})}_{H-1}})\). Moreover, for any \(\delta,\epsilon>0\) assume that the numbers of training steps satisfy \(N_{h}\geq\Big{(}\frac{12(H-h)R^{*}H^{2}\big{\|}\frac{1}{\mu_{h}}\big{\|}_{ \infty}}{\delta c_{h}^{2}\epsilon}\Big{)}^{2}\), let \(\eta_{h}=\frac{1}{2(H-h)R^{*}\sqrt{N_{h}}}\) and \(K_{h}\geq\frac{5N_{h}^{3}H^{2}}{c_{h}^{2}\delta^{2}}\).
Then it holds true that_ \[\mathbb{P}\Big{(}\exists s\in\mathcal{S}_{0}:V_{0}^{*}(s)-V_{0}^{\hat{\pi}^ {*}}(s)\geq\epsilon\Big{)}\leq\delta.\] Note that the proof of Theorem 4.2 is again split into a convergence guarantee for each fixed decision epoch \(h\in\mathcal{H}\) (Lemma D.10), followed by a backward induction controlling the overall error with high probability. Comparison. In both scenarios the derived complexity bounds for the stochastic PG use a very large batch size and a small step size. It should be noted that the choices of step size and batch size are closely connected, and both strongly depend on the number of training steps \(N\). Specifically, as \(N\) increases, the batch size increases, while the step size tends to decrease to prevent exceeding the stopping time with high probability. However, it is possible to increase the batch size even further and simultaneously benefit from choosing a larger step size, or vice versa. An advantage of the dynamic approach is that \(c_{h}\) is explicitly known under uniform initialisation. Hence, the complexity bounds for the dynamic approach result in a practicable algorithm, while \(c\) is unknown and possibly arbitrarily small for the simultaneous approach. Finally, we also compare the complexity with respect to the time horizon. For the simultaneous approach the number of training steps scales with \(H^{10}\) and the batch size with \(H^{30}\), while in the dynamic approach the overall number of training steps scales with \(H^{7}\) and the batch size with \(H^{20}\). We are aware that these bounds are far from tight and of little practical relevance. Nevertheless, they highlight once more the advantage of the dynamic approach in comparison to the simultaneous approach and show (the non-trivial fact) that the algorithms can be made to converge without knowledge of exact gradients. ## 5 Conclusion and Future Work In this paper, we have presented a convergence analysis of two PG methods for undiscounted MDPs with finite-time horizon in the tabular parametrisation. Assuming exact gradients, we have obtained an \(\mathcal{O}(1/n)\)-convergence rate for both approaches, where the behavior with respect to the time horizon and the model-dependent constant \(c\) is better in the dynamic approach than in the simultaneous approach. In the model-free setting we have derived complexity bounds to approximate the error to global optima with high probability using stochastic PG. It would be desirable to derive tighter bounds, using for example adaptive step sizes or variance reduction methods that lead to more realistic batch sizes. Similar to many recent results, the presented analysis relies on the tabular parametrisation. However, the heuristic from the policy gradient theorem does not, and the dynamic programming perspective suggests that parameters should be trained backwards in time. It would be interesting future work to see how this theoretical insight can be implemented in lower-dimensional parametrisations using for instance neural networks.
2303.02749
Revisiting the Noise Model of Stochastic Gradient Descent
The stochastic gradient noise (SGN) is a significant factor in the success of stochastic gradient descent (SGD). Following the central limit theorem, SGN was initially modeled as Gaussian, and lately, it has been suggested that stochastic gradient noise is better characterized using $S\alpha S$ L\'evy distribution. This claim was allegedly refuted and rebounded to the previously suggested Gaussian noise model. This paper presents solid, detailed empirical evidence that SGN is heavy-tailed and better depicted by the $S\alpha S$ distribution. Furthermore, we argue that different parameters in a deep neural network (DNN) hold distinct SGN characteristics throughout training. To more accurately approximate the dynamics of SGD near a local minimum, we construct a novel framework in $\mathbb{R}^N$, based on L\'evy-driven stochastic differential equation (SDE), where one-dimensional L\'evy processes model each parameter in the DNN. Next, we show that SGN jump intensity (frequency and amplitude) depends on the learning rate decay mechanism (LRdecay); furthermore, we demonstrate empirically that the LRdecay effect may stem from the reduction of the SGN and not the decrease in the step size. Based on our analysis, we examine the mean escape time, trapping probability, and more properties of DNNs near local minima. Finally, we prove that the training process will likely exit from the basin in the direction of parameters with heavier tail SGN. We will share our code for reproducibility.
Barak Battash, Ofir Lindenbaum
2023-03-05T18:55:12Z
http://arxiv.org/abs/2303.02749v1
# Revisiting the Noise Model of Stochastic Gradient Descent ###### Abstract The stochastic gradient noise (SGN) is a significant factor in the success of stochastic gradient descent (SGD). Following the central limit theorem, SGN was initially modeled as Gaussian, and lately, it has been suggested that stochastic gradient noise is better characterized using \(S\alpha S\) Levy distribution. This claim was allegedly refuted and rebounded to the previously suggested Gaussian noise model. This paper presents solid, detailed empirical evidence that SGN is heavy-tailed and better depicted by the \(S\alpha S\) distribution. Furthermore, we argue that different parameters in a deep neural network (DNN) hold distinct SGN characteristics throughout training. To more accurately approximate the dynamics of SGD near a local minimum, we construct a novel framework in \(\mathbb{R}^{N}\), based on Levy-driven stochastic differential equation (SDE), where one-dimensional Levy processes model each parameter in the DNN. Next, we show that SGN jump intensity (frequency and amplitude) depends on the learning rate decay mechanism (LRdecay); furthermore, we demonstrate empirically that the LRdecay effect may stem from the reduction of the SGN and not the decrease in the step size. Based on our analysis, we examine the mean escape time, trapping probability, and more properties of DNNs near local minima. Finally, we prove that the training process will likely exit from the basin in the direction of parameters with heavier tail SGN. We will share our code for reproducibility. Under the classical Gaussian model, the covariance of the SGN for a mini-batch of size \(B\) drawn from a dataset of \(D\) samples takes the form \[\frac{1}{B}\left[\frac{1}{D}\sum_{j=1}^{D}\nabla U^{(j)}(W_{t})\nabla U^{(j)}(W_{t} )^{T}-\nabla U(W_{t})\nabla U(W_{t})^{T}\right].\] Recently, [68] showed the importance of modeling the SGN as an anisotropic noise. Precisely, they show that an anisotropic noise model improves the approximation of the dynamics of SGD. In [56], the authors argue that SGN obeys an \(\mathcal{S}\alpha\mathcal{S}\) Levy distribution due to SGN's heavy-tailed nature. An \(\mathcal{S}\alpha\mathcal{S}\) Levy process is described by a single parameter \(\alpha\), also named the "stability parameter," and has a distinctive property: large discontinuous jumps. The escape behaviour of a Levy-driven SDE therefore does not depend on the height of the potential barrier; on the contrary, it directly depends on the horizontal distance to the domain's boundary. This implies that the process can escape from narrow minima, no matter how deep they are, and will stay longer in wide minima. In this work, we claim that the noise of distinct parameters in the DNN distributes differently, and we further argue that it is crucial to incorporate this discrepancy into the SGN model. Hence, we model the training process as Levy-driven stochastic differential equations (SDEs) in \(\mathbb{R}^{N}\), where the noise of each parameter \(i\) is characterized by its own \(\alpha_{i}\); this formulation helps us investigate the properties and influence of each parameter on the training process. Another critical aspect of NN optimization is the learning rate. Bengio [4] argued that the learning rate is "the single most important hyper-parameter" in training DNNs; we aim to understand the interplay between learning rate decay (LRdecay) and the properties of the SGN.
Therefore, we examine the effect of the learning rate scheduler on the training process. We argue that decreasing the learning rate improves the properties of the optimization due to attenuation of the noise, and not merely due to the reduction of the step size; we support this claim with theoretical and experimental evidence. Our contributions can be summarized as follows: * Demonstrate empirically that the SGN of each parameter in a deep neural network is better characterized by the \(S\alpha S\) distribution. * Provide experimental evidence which strongly indicates that different parametric distributions characterize the noise of distinct parameters. * Propose a novel dynamical system in \(\mathbb{R}^{N}\) consisting of \(N\) one-dimensional Levy processes with \(\alpha_{i}\)-stable components. * Using our framework, we present an approximation of the mean escape time, the probability of escaping the local minimum via a specific parameter, and additional properties of the training process near local minima. * We prove that parameters with low \(\alpha_{i}\) are associated with a high probability of aiding the training process to exit from the local minimum, and show empirical evidence. ## 2 Related Work Stochastic optimization has been demonstrated effective for several applications, including generative modeling [36], support recovery [39, 40], clustering, and many more. The study of stochastic dynamics of systems with small random perturbations is a well-established field, first modeling the perturbations as Gaussian [16, 31], later replaced by Levy noise with discontinuous trajectories [24, 27, 26, 7]. Characterizing the noise as Levy perturbations has attracted interest in the context of extreme-events modeling, such as in climate [12], physics [6] and finance [55]. **Remark** We note that a symmetric \(\alpha\)-stable distribution (\(S\alpha S\) or Levy \(S\alpha S\)) is a heavy-tailed distribution, parameterized by \(\alpha\), the stability parameter, where a smaller \(\alpha\) leads to a heavier tail (i.e., extreme events are more frequent and of larger amplitude), and vice versa. Modeling SGD using differential equations is a deep-rooted method. Li et al. [38] used an SDE to approximate SGD and focused on momentum and adaptive parameter tuning schemes to study the dynamical properties of stochastic optimization. Mandt and Blei [41] employed a similar procedure to derive an SDE approximation for SGD to study the influence of the value of the learning rate. Li et al. [37] showed that an SDE can approximate SGD in a first-order weak approximation. Early works in the field approximated SGD by Langevin dynamics with isotropic diffusion coefficients [54, 51, 65]; later, more accurate modeling was suggested [43, 68, 47] using an anisotropic noise covariance matrix. Lately, it has been argued [56] that SGN is better characterized by \(S\alpha S\) noise, with experimental and theoretical justifications. This model was allegedly refuted by [63], claiming that the experiments performed by [56] are inaccurate, since the noise calculation was done across parameters and not across mini-batches. The literature on Euler approximations of Levy-driven SDEs is sparser than that for Brownian-motion SDEs, but it is still intensely investigated; for more details about the convergence of Euler approximations for Levy discretizations, see [46, 50, 7]. Learning rate decay is an essential technique in training DNNs, investigated first for gradient descent (GD) by [34]. Kleinberg et al.
[30] showed that SGD is equivalent to the convolution of the loss surface, with the learning rate serving as the conceptual kernel size of the convolution. Hence spurious local minima can be smoothed out, and the later decay of the learning rate helps the network converge around the local minimum. You et al. [64] suggested that learning rate decay improves the ability to learn complex separation patterns. ## 3 Framework In our analysis, we consider a DNN with \(\bar{\mathbf{L}}\) layers and a total of \(N\) weights (parameters); the domain \(\mathcal{G}\) is the local environment of a minimum. Our framework considers an \(N\)-dimensional dynamic system, representing the update rule of SGD as a Levy-driven stochastic differential equation. In contrast to previous works [67, 56], our framework does not assume that SGN distributes the same for every parameter \(l\) in the DNN. Thus, the SGN of each parameter is characterized by its own \(\alpha_{l}\). The governing SDE that depicts the dynamics of SGD inside the domain \(\mathcal{G}\) at time \(t\) is as follows: \[W_{t}=w-\int_{0}^{t}\nabla U(W_{p})\,dp+\sum_{l=1}^{N}s_{t}^{\frac{\alpha_{l}-1 }{\alpha_{l}}}\epsilon\mathbf{1}^{T}\lambda_{l}(t)r_{l}L_{t}^{l}, \tag{3}\] where \(W_{t}\) is the process that depicts the evolution of the DNN weights at time \(t\). \(L_{t}^{l}\in\mathbb{R}\) is a mean-zero \(S\alpha S\) Levy process with stability parameter \(\alpha_{l}\). \(\lambda_{l}(t)\in\mathbb{R}^{N}\) is the \(l\)-th row of the noise covariance matrix, and \(\mathbf{1}\in\mathbb{R}^{N}\) is a vector of ones whose purpose is to sum the entries of that row. \(r_{l}\in\mathbb{R}^{N}\) is a unit vector with \(|\langle r_{i},r_{j}\rangle|\neq 1\) for \(i\neq j\); we take \(r_{i}\) to be a one-hot vector. \(s_{t}\) represents the learning rate scheduler, and \(w\) are the initial weights. **Remark**\(L_{t}^{l}\) can be decomposed into a small-jump part \(\xi_{t}^{l}\) and an independent part with large jumps \(\psi_{t}^{l}\), i.e. \(L_{t}^{l}=\xi_{t}^{l}+\psi_{t}^{l}\); more information on \(S\alpha S\) processes appears in Appendix A.3. Let \(\sigma_{\mathcal{G}}=\inf\{t\geq 0:W_{t}\notin\mathcal{G}\}\) denote the first exit time from \(\mathcal{G}\). \(\tau_{k}^{l}\) denotes the arrival time of the \(k\)-th large jump of parameter \(l\), driven by the process \(\psi^{l}\), where we define \(\tau_{0}=0\). The interval between large jumps is denoted by \(S_{k}^{l}=\tau_{k}^{l}-\tau_{k-1}^{l}\) and is exponentially distributed with mean \(\beta_{l}(t)^{-1}\), while \(\tau_{k}^{l}\) is gamma distributed, \(Gamma(k,\beta_{l}(t))\), where \(\beta_{l}(t)\) is the jump intensity, defined in Sec. 3.2. We define the arrival time of the \(k\)-th jump of all parameters combined as \(\tau_{k}^{*}\); for \(k\geq 1\) we can write \[\tau_{k}^{*}\triangleq\bigwedge_{\tau_{j}^{l}>\tau_{k-1}^{*}}\tau_{j}^{l}, \tag{4}\] and accordingly \(S_{k}^{*}=\tau_{k}^{*}-\tau_{k-1}^{*}\). Jump heights are denoted by \(J_{k}^{l}=\psi_{\tau_{k}^{l}}^{l}-\psi_{\tau_{k-1}^{l}}^{l}\). We define \(\alpha_{\nu}\) as the average \(\alpha\) over the entire DNN; this will help us describe the global properties of our network. Finally, we define a measure of horizontal distance from the domain boundary using \(d_{t}^{+}\) and \(d_{t}^{-}\); we present a rigorous formulation of our assumptions in Appendix E.
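As a toy illustration of (3) (not the paper's setup), the following simulates two parameters in a quadratic stand-in basin, each driven by its own one-dimensional \(S\alpha S\) process with the Euler-type increment scaling \(\epsilon\,\eta^{1/\alpha_{l}}\) for a step of size \(\eta\); the stability indices, noise scale and potential are assumed values for illustration.

```python
# Toy simulation (diagonal noise, two parameters) of Levy-driven dynamics like (3).
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(0)
alphas = np.array([1.3, 1.9])          # assumed per-parameter stability indices
eps, eta, T = 0.1, 1e-2, 5000          # assumed noise scale, step size, horizon

def grad_U(w):                         # quadratic basin as a stand-in potential
    return w

w = np.zeros(2)
path = np.empty((T, 2))
for t in range(T):
    xi = np.array([levy_stable.rvs(a, 0.0, random_state=rng) for a in alphas])
    w = w - eta * grad_U(w) + eps * eta ** (1.0 / alphas) * xi
    path[t] = w
# the alpha=1.3 coordinate exhibits rare large jumps; alpha=1.9 is nearly diffusive
```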
We define two additional processes to better understand the dynamics inside the basin (between the large jumps). The deterministic process, denoted \(Y_{t}\), is affected by the drift alone, without any perturbations. This process starts within the domain and does not escape the domain as time proceeds. The drift forces this process towards the stable point \(W^{*}\), i.e., the local minimum of the basin, as \(t\rightarrow\infty\); furthermore, the process converges to the stable point exponentially fast and is defined for \(t>0\) and \(w\in\mathcal{G}\) by: \[Y_{t}=w-\int_{0}^{t}\nabla U(Y_{s})\,ds. \tag{5}\] The following lemma shows how fast \(Y_{t}\) converges to the local minimum from any starting point \(w\) inside the domain. **Lemma 3.1**.: _For all \(w\in\mathcal{G}\), with \(\tilde{U}=U(w)-U(W^{*})\), the process \(Y_{t}\) converges to the minimum \(W^{*}\) exponentially fast:_ \[\|Y_{t}-W^{*}\|^{2}\leq\frac{2\tilde{U}}{\mu}e^{-2\mu t}. \tag{6}\] _The complete proof appears in Appendix B.6._ The small jumps process, \(Z_{t}\), is composed of the deterministic process \(Y_{t}\) and a stochastic process with infinitely many small jumps, denoted \(\xi_{t}\) (see more details in Appendix A.3). \(Z_{t}\) describes the system's dynamics in the intervals between the large jumps; hence we add an index \(k\) that represents the index of the jump; for instance, \(Z_{t,k}\) represents the process at time \(t\) between jump \(k\) and jump \(k+1\). Due to the strong Markov property, \(\xi_{t+\tau}^{l}-\xi_{\tau}^{l},t\geq 0\) is also a Levy process with the same law as \(\xi^{l}\). Hence, for \(t\geq 0\) and \(k\geq 0\): \[\xi_{t,k}^{l}=\xi_{t+\tau_{k-1}}^{l}-\xi_{\tau_{k-1}}^{l}. \tag{7}\] The full small jumps process, for all \(t\in[0,S_{k}]\), is defined as: \[Z_{t,k}=w-\int_{0}^{t}\nabla U(Y_{s})ds+\sum_{l=1}^{N}s_{t}^{\frac{\alpha_{l}- 1}{\alpha_{l}}}\epsilon\mathbf{1}^{T}\lambda_{l}(t)r_{l}\xi_{t,k}^{l}. \tag{8}\] In the following proposition, we estimate the deviation in the \(l\)-th parameter between the SDE solution driven by the process of the small jumps, \(Z_{t,k}^{l}\), and the deterministic trajectory. **Proposition 3.2**.: _Let \(T_{\epsilon}>0\) be exponentially distributed with parameter \(\beta_{l}\), let \(w\in\mathcal{G}\), and let \(\tilde{\theta}_{l}\triangleq-\rho(1-\alpha_{l})+2-2\theta_{l}\) with \(\theta_{l}\in(0,\frac{2-\alpha_{l}}{4})\); then the following holds:_ \[P\left(\sup_{t\in[0,T_{\epsilon}]}|Z_{t,k}^{l}(w)-Y_{t,k}^{l}(w)|\geq c\bar{ \epsilon}^{\theta_{l}}\right)\leq C_{\theta_{l}}\bar{\epsilon}^{\tilde{\theta}_{ l}}, \tag{9}\] where \(C_{\theta_{l}}>0\) and \(c>0\) are constants; recall that \(\bar{\epsilon}_{l}=s_{t}^{\frac{\alpha_{l}-1}{\alpha_{l}}}\epsilon_{l}\). Precisely, Proposition 3.2 describes the distance between the deterministic process \(Y_{t,k}\) and the process of small jumps \(Z_{t,k}\) at a time \(t\) in the interval after jump \(k\) and before jump \(k+1\). It indicates that between large jumps the two processes are close to each other with high probability. The complete proof appears in Appendix B.3. Let us present additional notation: \(H(\cdot)\) and \(\nabla U\) are the Hessian and the gradient of the objective function. To denote different mini-batches, we use the subscript \(d\). That is, \(H_{d}(\cdot)\) and \(\nabla U_{d}(W^{*})\) are the Hessian and gradient of the \(d\)-th mini-batch. To represent different parameters, as before, we use the subscript \(l\); for example, \(\nabla u_{d,l}\) is the gradient of the \(l\)-th parameter after a forward pass over mini-batch \(d\).
Furthermore, \(h_{l,j}\) represents the entry in the \(l\)-th row and \(j\)-th column of \(H(W^{*})\), which is the Hessian after a forward pass over the entire dataset \(D\), i.e., the Hessian when performing standard gradient descent. Next, we turn our attention to another property of the process of the small jumps \(Z^{l}_{t,k}\), which will help us understand the noise covariance matrix. Using a stochastic asymptotic expansion, we can approximate \(Z^{l}_{t,k}\) by the deterministic process plus a first-order correction. **Lemma 3.3**.: _For a general scheduler \(s_{t}\), \(\rho\in(0,1)\), \(\forall w_{l},w_{j}\in\mathcal{G}\), starting point after a big jump at time \(\tau_{k}^{*}+p\) where \(p\to 0\), and \(A_{lj}(t)\triangleq\bar{\epsilon}_{l}w_{j}e^{-h_{jj}t}\mu_{\xi}^{l}(2t+\frac{1 }{h_{ll}}(1-e^{-h_{ll}t}))\), for \(t\in[0,S_{k}^{*})\) the following holds:_ \[\mathbb{E}[Z^{l}_{t,k}Z^{j}_{t,k}]=w_{l}w_{j}e^{-(h_{ll}+h_{jj})t}+A_{jl}(t)+A _{lj}(t)+\mathcal{O}(\epsilon^{2}), \tag{10}\] where \(\mu_{\xi}^{l}=2t\left[\frac{\xi-\rho(1-\alpha_{l})-1}{1-\alpha_{l}}\right]\) and \(\bar{\epsilon}_{l}=s_{t}^{\frac{\alpha_{l}-1}{\alpha_{l}}}\epsilon_{l}\). Lemma 3.3 depicts the joint dynamics of two parameters in the intervals between the large jumps; this allows us to accurately express the covariance matrix of the noise. The complete derivation of this result appears in Appendix B.4. ### Noise covariance matrix The noise covariance matrix plays a vital role in modeling the training process; in this subsection, we derive an expression for it based on the stochastic processes presented above. The following approximation is obtained using a stochastic Taylor expansion near the minimum \(W^{*}\). **Proposition 3.4**.: _Let us define \(\bar{u}_{lj}=\nabla u_{l}\nabla u_{j}\), \(\tilde{h}_{l,m,p,j}:=\frac{1}{B}\sum_{b=1}^{B}h_{b,l,m}h_{b,p,j}\), \(h_{l,m,p,j}:=h_{l,m}h_{p,j}\) and \(\bar{h}_{l,m,p,j}:=\tilde{h}_{l,m,p,j}-h_{l,m,p,j}\); then for any \(t\in[0,S_{k}^{*})\), the sum of the \(l\)-th row of the covariance matrix satisfies:_ \[\mathbf{1}^{T}\lambda_{l}^{k}(W_{t})=\frac{1}{BD}\sum_{j=1}^{N}\bar{u}_{lj}+ \frac{1}{B}\sum_{j,m,p=1}^{N}\bar{h}_{l,m,p,j}(w_{m}w_{p}e^{-(h_{mm}+h_{pp})t}+ A_{mp}(t)+A_{pm}(t))+\mathcal{O}(\bar{\epsilon}^{2}), \tag{11}\] where \(A_{mp}(t)\) and \(A_{pm}(t)\) are defined in Lemma 3.3. We note that \(h_{l,m,p,j}\) and \(\tilde{h}_{l,m,p,j}\) represent the interaction of two terms in the Hessian matrix when performing GD and SGD, respectively, and \(\bar{h}_{l,m,p,j}\) is the difference between them. The proof of the proposition appears in Appendix B.5. ### Jump Intensity Let \(\beta_{l}(t)\) denote the jump intensity of the compound Poisson process \(\psi^{l}\) of large jumps; \(\beta_{l}(t)\) simultaneously governs the jump frequency and size. Jumps are distributed according to the law \(\beta_{l}(t)^{-1}\nu_{\eta}\), and the jump intensity is given by: \[\beta_{l}(t)=\nu_{\eta_{l}}(\mathbb{R})=\int_{\mathbb{R}\setminus[-O,O]}\nu_{l}(dy)= \frac{2}{\alpha_{l}}s_{t}^{\rho(\alpha_{l}-1)}\epsilon_{l}^{\rho\alpha_{l}}, \tag{12}\] where the integration boundary is \(O\triangleq\epsilon^{-\rho}s_{t}^{-\rho\frac{\alpha_{l}-1}{\alpha_{l}}}\). This boundary is time-dependent due to the learning rate scheduler; since the scheduler decreases the size and frequency of the large jumps, the jump intensity is not stationary.
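A quick numerical reading of (12), with assumed values for \(\alpha_{l}\), \(\epsilon_{l}\) and \(\rho\), shows how a learning rate drop attenuates the large-jump intensity:

```python
import numpy as np
# Illustration of (12): the jump intensity under a step-decay schedule s_t.
alpha, eps, rho = 1.5, 0.1, 0.9   # assumed constants for illustration only
beta = lambda s: (2.0 / alpha) * s ** (rho * (alpha - 1.0)) * eps ** (rho * alpha)
for s in (0.1, 0.01, 0.001):      # successive step-decay values of the schedule
    print(s, beta(s))             # beta shrinks with s (for alpha > 1)
```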
Hence, changing the learning rate during training enables us to increase or decrease the frequency and amplitude of the jumps. The jump intensity of the entire DNN is defined as \(\beta_{S}(t)\triangleq\sum_{l=1}^{N}\beta_{l}(t)\). The probability of escaping the local minimum in the first jump, from a single-parameter perspective, is expressed by: \[P(s_{t}\epsilon\mathbf{1}^{T}\lambda_{l}(t)J_{1}^{l}\notin[d_{l}^{-},d_{l}^{+} ])=\frac{m_{l}(t)\Phi_{l}s_{t}^{\alpha_{l}-1}}{\beta_{l}(t)}, \tag{13}\] where \(m_{l}(t)=\frac{\mathbf{1}^{T}\lambda_{l}(t)\epsilon_{l}^{\alpha_{l}}}{\alpha_{l}}\), and \(\Phi_{l}=(-d_{l}^{-})^{-\alpha_{l}}+(d_{l}^{+})^{-\alpha_{l}}\). ## 4 Theorems In the following section, we provide a theoretical analysis of SGD dynamics during the training of DNNs. Our analysis is based on two pieces of empirical evidence demonstrated in this work: the first is that SGN is indeed heavy-tailed; the second is that each parameter in the DNN's training process has a different stability parameter \(\alpha\), which drastically affects the noise properties. \begin{table} \begin{tabular}{c c c c} \hline Model & Gauss & \(S\alpha S\) Const \(\alpha\) & \(S\alpha S\) \\ \hline ResNet18 & \(1.39\pm 0.41\) & \(1.55\pm 0.71\) & \(\mathbf{0.65}\pm 0.27\) \\ \hline ResNet34 & \(1.58\pm 0.73\) & \(2.31\pm 1.16\) & \(\mathbf{1.15}\pm 0.74\) \\ \hline ResNet50 & \(1.42\pm 0.73\) & \(1.47\pm 0.98\) & \(\mathbf{0.99}\pm 0.61\) \\ \hline \end{tabular} \end{table} Table 1: The fitting error between SGN and the \(S\alpha S\)/Gaussian distributions, averaged over \(150\) randomly sampled parameters for three different CNNs trained on the CINIC10 dataset with a batch size of 400. Sum of Squares Error (SSE) is used to evaluate the fitting error of each distribution; "Gauss" represents the Gaussian distribution. Our results demonstrate that \(S\alpha S\) better depicts SGN. Values in the table were multiplied by 10 to simplify the exposition. Our work will assume that the training process can exit the domain only at times that coincide with large jumps. This assumption is based on a few observations: first, the deterministic process \(Y_{t}\), initialized at any point \(w\in\mathcal{G}_{\delta}\), converges to the local minimum of the domain by the positive invariance of the process (see the assumptions in Appendix E); second, \(Y_{t}\) converges to the minimum much faster than the average temporal gap between the large jumps; third, using Lemma 3.1 we conclude that the small jumps are less likely to help the process escape from the local minimum. Next, we show evidence for the second observation mentioned above. The relaxation time \(T_{R}^{l}\) is the time for the deterministic process \(Y_{t}^{l}\), starting from any arbitrary \(w\in\mathcal{G}\), to reach an \(\bar{\epsilon}_{l}^{\zeta}\)-neighbourhood of the attractor. For some \(C_{1}>0\), the relaxation time is \[T_{R}^{l}=\max\left\{\int_{d_{l}^{-}}^{-\bar{\epsilon}_{l}^{\zeta}}\frac{dy}{- U^{\prime}(y)_{l}},\int_{\bar{\epsilon}_{l}^{\zeta}}^{d_{l}^{+}}\frac{dy}{ U^{\prime}(y)_{l}}\right\}\leq C_{1}|\ln\bar{\epsilon}_{l}|. \tag{14}\] Now, let us calculate the expectation of \(S_{k}^{*}=\tau_{k}^{*}-\tau_{k-1}^{*}\), i.e. the interval between the large jumps: \[\mathbb{E}[S_{k}^{l}]=\mathbb{E}[\tau_{k}^{l}-\tau_{k-1}^{l}]=\beta_{l}^{-1}= \frac{\alpha_{l}}{2}\bar{\epsilon}_{l}^{\,-\rho\alpha_{l}}. \tag{15}\]
Since \(\bar{\epsilon}\in(0,1)\), and usually even \(\bar{\epsilon}\ll 1\), it is easy to see that \(\mathbb{E}[S_{k}^{l}]\gg T_{R}\); thus we can assume that the process \(W_{t}\) is in a neighborhood of the local minimum right before each large jump. This means that it is highly improbable that two large jumps occur before the training process returns to a neighborhood of the local minimum. Using the information described above, we analyze the escape time for the exponential scheduler and for the multi-step scheduler; extending our framework to more LRdecay schemes is straightforward. Let us define a constant that will be used for the remainder of the paper, \(A_{l,\nu}\triangleq(1-\bar{m}_{\nu}\bar{\beta}_{\nu}^{-1}\Phi_{\nu})(1-\bar{ \beta}_{l}\bar{\beta}_{l}^{-1})\); for the next theorem we denote \(C_{l,\nu,p}\triangleq\frac{2+(\gamma-1)(\alpha_{l}-1+\rho(\alpha_{l}-\alpha _{\nu}))}{1+(\gamma-1)(\alpha_{l}-1)}\), where \(C_{l,\nu,p}\) depends on \(\alpha_{l}\), \(\gamma\), and on the difference \(\alpha_{l}-\alpha_{\nu}\). The following theorem describes the approximate mean escape time for the exponential scheduler: **Theorem 4.1**.: _Given \(C_{l,\nu,p}\) and \(A_{l,\nu}\), let \(s_{t}\) be an exponential scheduler \(s_{t}=t^{\gamma-1}\); then the mean transition time from the domain \(\mathcal{G}\) satisfies:_ \[\mathbb{E}[\sigma_{\mathcal{G}}]\leq\,\sum_{l=0}^{N}A_{l,\nu}^{-1}\frac{\beta _{l}(\bar{m}_{l}\Phi_{l})^{1-C_{l,\nu,p}}}{\beta_{S}(1+(\gamma-1)(\alpha_{l}-1 ))}\Gamma\left(C_{l,\nu,p}\right),\] where \(\Gamma\) is the gamma function, \(\bar{m}_{l}=\frac{\bar{\lambda}_{l}^{\alpha_{l}}\epsilon_{l}^{\alpha_{l}}}{ \alpha_{l}}\) and \(\bar{\beta}_{l}=\frac{2\bar{\epsilon}_{l}^{\rho\alpha_{l}}}{\alpha_{l}}\) is the time-independent jump intensity. For the full proof, see Appendix B.1. It can be observed from Thm. 4.1 that as \(\gamma\) decreases, i.e., with faster learning rate decay, the mean transition time increases. Interestingly, when \(\alpha_{l}\to 2\) (nearly Gaussian) and \(\gamma\to 0\), the mean escape time goes to infinity, which means that the training process is trapped inside the basin. **Corollary 4.2**.: _Using Thm. 4.1, if the cooling rate is negligible, i.e. \(\gamma\to 1\), the mean transition time satisfies:_ \[\mathbb{E}[\sigma_{\mathcal{G}}]\leq\sum_{l=0}^{N}A_{l,\nu}^{-1}\frac{1}{\beta _{S}1^{T}\bar{\lambda}_{l}\epsilon^{\alpha_{l}(1-\rho)}\Phi_{l}}. \tag{16}\] The framework presented in this work enables us to understand in which direction \(r_{i}\) the training process is most likely to exit the basin \(\mathcal{G}\), i.e., which parameter is most liable to help the process escape; this is a crucial feature for understanding the training process. Figure 1: Histograms of the stochastic gradient noise for a single parameter in ResNet34: (left) layer 1; (right) layer 2. The plots qualitatively show that SGN is far from a Normal distribution and exhibits a heavy-tailed nature.
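As a sanity check on the escape-time behaviour behind Theorem 4.1 and Corollary 4.2, the following toy Monte Carlo experiment (ours, not the paper's) runs one-dimensional noisy gradient descent in the stand-in basin \(U(w)=w^{2}/2\) with exit boundary \(|w|>d\), driven by \(S\alpha S\) noise with the Euler scaling \(\eta^{1/\alpha}\); heavier tails (smaller \(\alpha\)) should escape markedly sooner. All constants are assumed.

```python
import numpy as np
from scipy.stats import levy_stable

def mean_escape_time(alpha, eta=1e-3, d=1.0, n_runs=50, t_max=50_000, seed=0):
    """Mean number of steps until |w| > d (censored at t_max). Can take minutes,
    since alpha-stable variates are drawn one step at a time."""
    rng = np.random.default_rng(seed)
    times = []
    for _ in range(n_runs):
        w, t = 0.0, 0
        while abs(w) <= d and t < t_max:
            xi = levy_stable.rvs(alpha, 0.0, random_state=rng)
            w += -eta * w + eta ** (1.0 / alpha) * xi
            t += 1
        times.append(t)
    return float(np.mean(times))

for alpha in (1.2, 1.5, 1.9):
    print(alpha, mean_escape_time(alpha))   # escape time grows steeply with alpha
```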
The following theorems are presented for the exponential scheduler but can be extended to any scheduler. **Theorem 4.3**.: _Let \(s_{t}\) be an exponential scheduler \(s_{t}=t^{\gamma-1}\) and \(C_{l}\triangleq\frac{(\gamma-1)(\alpha_{l}-1+\rho(2\alpha_{l}-\alpha_{\nu}- \alpha_{l}))+2}{(\gamma-1)(\alpha_{l}-1)+1}\). For \(\delta\in(0,\delta_{0})\), the probability of the training process exiting the basin through the \(i\)-th parameter satisfies:_ \[P(W_{\sigma}\in\Omega_{i}^{+}(\delta))\leq\sum_{l=0}^{N}A_{l, \nu}^{-1}\frac{\bar{m}_{i}\Phi_{i}}{\bar{\beta}_{i}}(d_{i}^{+})^{-\alpha_{l}} \frac{\beta_{l}^{2}(\bar{m}_{l}\Phi_{l})^{-C_{l}}}{\beta_{S}((\gamma-1)(\alpha_{l}-1)+1)}\Gamma\left(C_{l}\right). \tag{17}\] Let us focus on the terms that describe the \(i\)-th parameter: \[P(W_{\sigma}\in\Omega_{i}^{+}(\delta))\leq\frac{\bar{m}_{i}}{\bar{\beta}_{i}} (d_{i}^{+})^{-\alpha_{i}}\sum_{l=0}^{N}\tilde{C}_{l}, \tag{18}\] where \(\tilde{C}_{l}\) encapsulates all the terms that do not depend on \(i\). When considering SGN as Levy noise, we see that the training process needs only polynomial time to escape a basin. The following result helps us assess the escape ratio of two parameters. **Corollary 4.4**.: _The ratio of the probabilities of exiting the local minimum through two different DNN parameters is:_ \[\frac{P(W_{\sigma}\in\Omega_{l}^{+}(\delta))}{P(W_{\sigma}\in \Omega_{j}^{+}(\delta))}\leq\frac{1^{T}\lambda_{l}^{\alpha_{l}}}{1^{T}\lambda_ {j}^{\alpha_{j}}}\epsilon^{(\alpha_{l}-\alpha_{j})(1-\rho)}\frac{(d_{l}^{+})^{-\alpha _{l}}}{(d_{j}^{+})^{-\alpha_{j}}}. \tag{19}\] Let us remind the reader that \(d_{i}^{+}\) is a function of the horizontal distance from the domain's edge. Therefore, the first conclusion is that the larger \(d_{l}^{+}\) is, the lower the probability of exiting in the \(l\)-th direction. However, the dominant term is \(\epsilon^{(\alpha_{l}-\alpha_{j})(1-\rho)}\); combining both factors, parameters with lower \(\alpha\) have a higher chance of being on the escape path. It can also be seen from the definition of \(\beta_{l}\) that parameters with lower \(\alpha\) jump earlier and contribute larger jump intensities. We can conclude by writing: \[\frac{P(W_{\sigma}\in\Omega_{l}^{+}(\delta))}{P(W_{\sigma}\in \Omega_{j}^{+}(\delta))}\propto\epsilon^{\Delta_{l,j}}, \tag{20}\] where \(\Delta_{l,j}=\alpha_{l}-\alpha_{j}\). The next theorem evaluates the probability of exiting the basin after time \(u\). **Theorem 4.5**.: _Let \(s_{t}=t^{\gamma-1}\), where \(\gamma\) is the cooling rate; let us denote two constants that express the effect of the scheduler: \(\gamma_{l}\triangleq 1+(\gamma-1)(\alpha_{l}-1)\) and \(\kappa\triangleq\frac{1+(\gamma-1)(\alpha_{l}-1+\rho(\alpha_{l}-\alpha_{\nu} ))}{\gamma_{l}}\). For \(u>0\):_ \[P(\sigma>u)\leq\sum_{l=0}^{N}A_{l,\nu}^{-1}\frac{\bar{\beta}_{l} \bar{m}_{l}\Phi_{l}}{\bar{\beta}_{S}\gamma_{l}(\bar{m}_{l}\Phi_{l})^{\kappa}} \Gamma\left(\kappa,\bar{m}_{l}\Phi_{l}u^{\gamma_{l}}\right). \tag{21}\] We now show in a corollary that the probability of exiting a basin after \(u\) iterations decays exponentially with respect to \(u\), \(\bar{m}_{l}\), and \(\Phi_{l}\). **Corollary 4.6**.: _Using Thm. 4.5, for \(\gamma\to 1\):_ \[P(\sigma>u)\leq\sum_{l=0}^{N}A_{l,\nu}^{-1}\frac{\bar{\beta}_{l}}{\bar{\beta} _{S}}e^{-\bar{m}_{l}\Phi_{l}u}. \tag{22}\] The value \(\Phi_{l}\) describes the horizontal width of the basin, and \(\bar{m}_{l}\) is a function of the learning rate and the noise covariance matrix. Our proof appears in Appendix C.4.
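The escape-direction prediction of Theorem 4.3 and Corollary 4.4 can likewise be probed with a toy simulation (ours, with assumed constants): in a symmetric two-dimensional basin, the coordinate driven by heavier-tailed noise (smaller \(\alpha\)) should dominate the exit direction.

```python
import numpy as np
from scipy.stats import levy_stable

def exit_axis(alphas, eta=1e-3, d=0.5, rng=None):
    """Noisy GD in U(w) = ||w||^2 / 2 until some |w_i| exceeds d; return that i."""
    alphas = np.asarray(alphas)
    w = np.zeros(2)
    while np.all(np.abs(w) <= d):
        xi = np.array([levy_stable.rvs(a, 0.0, random_state=rng) for a in alphas])
        w = w - eta * w + eta ** (1.0 / alphas) * xi
    return int(np.argmax(np.abs(w)))

rng = np.random.default_rng(1)
exits = [exit_axis((1.4, 1.9), rng=rng) for _ in range(100)]
print(np.mean(np.array(exits) == 0))   # typically well above 1/2: the alpha=1.4
                                       # axis dominates the escape direction
```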
## 5 Experiments This section presents the core experimental results supporting our analysis; additional experiments can be found in the Appendix. All the experiments were conducted using SGD without momentum and weight decay. Stochastic gradient noise distribution. We empirically show that SGN is better characterized using the \(S\alpha S\) Levy distribution. Unlike previous works [56, 67, 63], we use numeric results to demonstrate the heavy-tailed nature of SGN. Our methodology follows [63], calculating the noise of each parameter separately using multiple mini-batches, as opposed to [56], which calculated the noise of multiple parameters on one mini-batch and averaged over all parameters and batches to characterize the distribution of SGN. In [63], the authors estimate SGN on a DNN with randomly initialized weights; we, on the other hand, estimate the properties of SGN on a pre-trained DNN. Specifically, since we want to estimate the escape time, we reason that a pre-trained DNN better characterizes this property. We examine the SGN of three ResNet variants and a Bert-base architecture. The ResNets were examined using the CINIC10 dataset [10], and Bert's SGN was examined using the CoLA [60] dataset. The complete technical details appear in Appendix A.2. \begin{table} \begin{tabular}{c c c c c} Model & BS & Gauss & \(S\alpha S\) Const \(\alpha\) & \(S\alpha S\) \\ \hline Bert & 8 & \(2.15\pm 0.64\) & \(1.98\pm 0.88\) & \(\textbf{0.71}\pm 0.33\) \\ \hline Bert & 32 & \(0.37\pm 0.33\) & \(0.36\pm 0.19\) & \(\textbf{0.18}\pm 0.12\) \\ \hline \end{tabular} \end{table} Table 2: The fitting errors, computed by averaging over \(150\) randomly sampled parameters from the BERT [11] base model trained on the CoLA dataset. Sum of Squares Error (SSE) is used to evaluate the fitting error of each distribution; Gauss represents the Gaussian distribution. We show qualitative and quantitative evidence for SGN's heavy-tailed nature. The qualitative results in Fig. 1 depict histograms of the SGN, which show its heavy-tailed nature; more visualizations for NNs trained on the CIFAR100 [32] dataset are available in Appendix G.2. Furthermore, Fig. 8 shows the \(\alpha_{i}\) values of randomly sampled parameters. In this figure, if the noise of the sampled parameters were Gaussian, we would expect all the blobs to concentrate around \(\alpha=2\) (since at this value \(S\alpha S\) boils down to a Gaussian distribution). The quantitative results report the fitting error of the empirical distribution of SGN against three distributions: (1) Gaussian [63], (2) \(S\alpha S\) with constant \(\alpha\) [56], and (3) \(S\alpha S\) with multiple \(\alpha_{i}\) values (ours). The fitting errors for ResNets on CINIC10 [10] are shown in Tab. 1, and for Bert in Tab. 2; the results show strong evidence that SGN is best explained by the \(S\alpha S\) distribution. Different parameters hold different noise distributions? This experiment shows that distinct DNN parameters lead to different SGN during training. We randomly sampled 100 parameters from five different DNNs, calculated the SGN, and estimated \(\alpha_{i}\) for each parameter; Fig. 8 depicts the results for the five DNNs. We observe that different parameters have noise that distributes differently during training, and that the estimated \(\alpha_{i}\) values are spread over a wide range.
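The per-parameter fits behind Tables 1-2 and Fig. 8 can be sketched as follows; this is a minimal version with an assumed pipeline, where `noise` would be collected, for one weight, as the difference between its mini-batch gradient and its full-batch gradient across many mini-batches, following the methodology described above.

```python
import numpy as np
from scipy.stats import levy_stable, norm

def fit_errors(noise, bins=100):
    """noise: 1-D array of one weight's gradient deviations across mini-batches.
    Returns (alpha_hat, SSE of the stable fit, SSE of the Gaussian fit)."""
    alpha, beta, loc, scale = levy_stable.fit(noise)   # can be slow on large arrays
    mu, sigma = norm.fit(noise)
    hist, edges = np.histogram(noise, bins=bins, density=True)
    mids = 0.5 * (edges[1:] + edges[:-1])
    sse_stable = np.sum((hist - levy_stable.pdf(mids, alpha, beta, loc, scale)) ** 2)
    sse_gauss = np.sum((hist - norm.pdf(mids, mu, sigma)) ** 2)
    return alpha, sse_stable, sse_gauss
```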
These observations imply that building a framework that treats the DNN as one homogeneous system is insufficient; each parameter in the DNN has its own characteristics, and we should account for this when modeling the noise. Models were trained as detailed in Appendix A.2. Mean escape time. The following experiment validates Theorem 4.1. We trained a three-layer neural network with ReLU activation on the "BreastW", "Satellite" and "Cardio" datasets [14]. We first train the model using SGD with multiple learning rates and a batch size of 256 until reaching a local minimum (see the discussion in Appendix A.4). After reaching the critical point, we decrease the mini-batch size and try to escape the critical minimum; Fig. 2 shows the escape time using different learning rates. The escape time is measured by the number of iterations, averaged over \(100\) seeds. We fit the empirical results to two theories, ours and [63], both fitted with the same number of free parameters. The results in Fig. 2 show the mean escape time using a batch size of 32; one can observe that the empirical results are better explained by our theory on all three datasets examined. Our method shows limitations for small batch sizes: as depicted in Appendix G.3, our theory overshoots when predicting the mean escape time on the Satellite dataset, while being competitive on Cardio and better on BreastW. Figure 2: The mean escape time of SGD on BreastW (left), Cardio (middle), and Satellite (right) datasets. The plots show the fit based on two methods, ours and [63], using a batch size of 32. Each dot represents the mean escape time for a sweep of learning rates; for each learning rate, the dot is an average over \(100\) random seeds. We observe that the empirical results are better explained by our theory for all three datasets examined. Probability of escaping after time u. The following experiment validates Thm. 4.5. We trained a three-layer neural network with ReLU activation on the Speech and Cardio datasets [14] using SGD with a learning rate of 0.05 and a batch size of 128 until convergence to a local minimum. We measure the time to escape the local minimum over 1000 seeds and plot the probability of exiting as a function of time in Fig. 3. These results demonstrate that our theoretical predictions coincide with the empirical evidence. Figure 3: The x-axis represents the number of iterations; the y-axis represents the probability of exiting the basin. We train the same model for 1000 runs and, in each run, record the iteration at which the process escapes the basin. The left plot shows results on the Cardio dataset with different mini-batch sizes, and the right plot shows the same on the Speech dataset. The exponential decay predicted by our theorem (lines) coincides with the empirical results (dots). Learning rate decay. The heavy-tailed behaviour of SGN may prevent the training process from converging to a critical point due to the large-jump process; hence, reducing the frequency and size of the large jumps may be crucial for good convergence. This paragraph aims to demonstrate that the effectiveness of LRdecay may be due to the attenuation of SGN. We show two experiments. First, we trained ResNet110 [21] on CIFAR100 [32]; at epoch \(280\) the learning rate is decreased by a factor of \(10\). Fig. 4 shows that the learning rate decay results in a lower noise amplitude and less variance. In the second experiment, a ResNet20 [21] is trained in three different variations for 90 epochs: the first variation had LRdecay at epochs 30 and 60, the second had a batch-size increase at epochs 30 and 60, and the third was trained with the same learning rate and batch size for the entire training process. The first two cases (i.e., LRdecay and batch increase) show almost identical results, reaching top-1 scores of 66.7 and 66.4 on the validation set; in contrast, the third led to worse performance, reaching a top-1 score of 53.
[58] performed similar experiments to show the similarity between decreasing the learning rate and increasing the batch size; however, their purpose was to suggest a method for improving training speed without degrading the results. LRdecay decreases the step size and the noise amplitude; increasing the batch size, on the other hand, only decreases the noise amplitude. Combining the results of the two experiments above, we may carefully deduce that the main effect of LRdecay is reducing the fluctuations in the gradient update phase, and not decreasing the step size (the step size being the movement of the deterministic process in the direction of the negative gradient). SGN amplitude reduction makes it easier for the training process to localize in the current promising domain. Escaping Axis. In this section, we demonstrate that the optimization process is more likely to escape along the axis with lower \(\alpha_{i}\). We use a 2D Ackley function; the escape process starts at the global minimum \(\vec{0}\). We apply Gradient Descent with added \(S\alpha S\) noise (\(S\alpha S(\alpha_{x_{1}})\), \(S\alpha S(\alpha_{x_{2}})\)), where \(\alpha_{1}=\alpha_{2}-\Delta\), a learning rate of \(1e-4\), and no momentum or weight decay. Once the optimization process passes some predefined radius, we check which coordinate is larger in magnitude. Fig. 9 shows the probability of exiting along \(x_{1}\), based on 1000 different seeds. This result implies that as the gap \(\Delta\) between the \(\alpha_{i}\) values increases, the axis with the smaller value of \(\alpha\) is more likely to be the axis through which the optimization process escapes. ## 6 Conclusions Our experiments corroborate that the \(S\alpha S\) distribution better characterizes SGN, both qualitatively and quantitatively. Furthermore, we show that distinct parameters are better characterized by different distribution parameters \(\alpha_{i}\). Based on these experiments, we constructed a framework in \(\mathbb{R}^{N}\) consisting of \(N\) one-dimensional Levy processes with \(\alpha_{i}\)-stable components. This framework enables us to better understand the nature of DNN training with SGD, such as the escape properties from different local minima and the effects of the learning rate scheduler and of individual parameters. We also presented experiments supporting the claim that a significant feature of LR schedulers comes from reducing the fluctuations of the SGN. Finally, we show that parameters in the DNN whose noise distributes with low \(\alpha_{i}\) have a unique role in the training process, helping it escape local minima. Limitations and Future Research. The presented framework is valid once the training process is near a local minimum; how training behaves in other regimes, for example at the beginning of training, is beyond the scope of this work. Further, how \(\alpha\) evolves in time is still unclear and demands future research. It is also unclear why different parameters hold different SGN distributions, and what the roles of each parameter in the optimization process are.
2306.11855
A Model-free Closeness-of-influence Test for Features in Supervised Learning
Understanding the effect of a feature vector $x \in \mathbb{R}^d$ on the response value (label) $y \in \mathbb{R}$ is the cornerstone of many statistical learning problems. Ideally, one would like to understand how a set of collected features combine together and influence the response value, but this problem is notoriously difficult, due to the high dimensionality of the data and the limited number of labeled data points, among other reasons. In this work, we take a new perspective on this problem, and we study the question of assessing the difference of influence that two given features have on the response value. We first propose a notion of closeness for the influence of features, and show that our definition recovers the familiar notion of the magnitude of coefficients in the parametric model. We then propose a novel method to test for the closeness of influence in general model-free supervised learning problems. Our proposed test can be used with a finite number of samples with control on the type I error rate, regardless of the ground-truth conditional law $\mathcal{L}(Y |X)$. We analyze the power of our test for two general learning problems, i) linear regression, and ii) binary classification under mixture of Gaussian models, and show that under a proper choice of score function, an internal component of our test, with a sufficient number of samples the test achieves full statistical power. We evaluate our findings through extensive numerical simulations; specifically, we adopt the datamodel framework (Ilyas, et al., 2022) for the CIFAR-10 dataset to identify pairs of training samples with different influence on the trained model via arbitrary black-box training mechanisms.
Mohammad Mehrabi, Ryan A. Rossi
2023-06-20T19:20:18Z
http://arxiv.org/abs/2306.11855v1
# A Model-free Closeness-of-influence Test for Features in Supervised Learning

###### Abstract

Understanding the effect of a feature vector \(x\in\mathbb{R}^{d}\) on the response value (label) \(y\in\mathbb{R}\) is the cornerstone of many statistical learning problems. Ideally, one would like to understand how a set of collected features combine together and influence the response value, but this problem is notoriously difficult, due to the high dimensionality of the data and the limited number of labeled data points, among other reasons. In this work, we take a new perspective on this problem, and we study the question of assessing the difference of influence that two given features have on the response value. We first propose a notion of closeness for the influence of features, and show that our definition recovers the familiar notion of the magnitude of coefficients in the parametric model. We then propose a novel method to test for the closeness of influence in general model-free supervised learning problems. Our proposed test can be used with a finite number of samples with control on the type I error rate, regardless of the ground-truth conditional law \(\mathcal{L}(Y|X)\). We analyze the power of our test for two general learning problems, i) linear regression, and ii) binary classification under mixture of Gaussian models, and show that under a proper choice of score function, an internal component of our test, with a sufficient number of samples the test achieves full statistical power. We evaluate our findings through extensive numerical simulations; specifically, we adopt the datamodel framework (Ilyas, et al., 2022) for the CIFAR-10 dataset to identify pairs of training samples with different influence on the trained model via arbitrary black-box training mechanisms.

## 1 Introduction

In a classic supervised learning problem, we are given a dataset of \(n\) iid data points \(\{(x_{i},y_{i})\}_{i=1:n}\) with feature vectors \(x\in\mathbb{R}^{d}\) and response value (label) \(y\in\mathbb{R}\). From the inferential point of view, understanding the influence of each individual feature \(i\in\{1,\ldots,d\}\) on \(y\) is of paramount importance. Considering a parametric family of distributions for \(\mathcal{L}(Y|X)\) is among the most studied techniques for this problem. In this setting, the influence of each feature can be seen through its corresponding coefficient value in the parametric model. However, such methods can result in spurious statistical findings, mainly due to model misspecification, where the ground-truth data-generating law \(\mathcal{L}(Y|X)\) does not belong to the considered parametric family in the first place. A natural remedy for this problem is to relax the parametric family assumption, removing concerns about model misspecification. Besides the difficulties introduced by the new model-free structure of the problem, we need a new notion to capture the influence of features, as there is no longer a coefficient vector as in the class of parametric models. In this paper, we follow the model-free structure, but take a new perspective on the generic problem of investigating the influence of features on the response value. In particular, as a first step towards this notoriously hard question under no parametric distributional assumption whatsoever, we are specifically interested in assessing the closeness of the influence of features.
To this end, we posit the following fundamental question:

_(*) In a general model-free supervised learning problem, for two given features, is it possible to assess the closeness of their influence on the response value (label) in a statistically sound way?_

In this paper, we answer question (*) affirmatively. We characterize a notion of closeness for the influence of features on \(y\) under the general model-free framework. We show that this notion aligns well with the usual expectations in parametric models, where small differences in the coefficient values imply close influence on the response value. We then cast the closeness-of-influence question as a hypothesis testing problem, and show that we can control the associated type I error rate with a finite number of samples.

### Motivation Behind Question (*)

Beyond the inferential nature of Question (*), which helps to better understand the data-generating process of the data at hand, being able to answer this question has a myriad of applications for other classic machine learning tasks. In fact, inspired by the recent advancements in interpretable machine learning systems, it is desirable to strike a balance between model flexibility in capturing the ground-truth law \(\mathcal{L}(Y|X)\) and using a small number of explanatory variables. For this goal, feature aggregation has been used to distill a large amount of feature information into a smaller number of features. In several parametric settings, features with equal coefficients are naturally grouped together; e.g., in linear regression the new feature \(x_{1}+x_{2}\) is used rather than \((x_{1},x_{2})\) when \(x_{1}\) and \(x_{2}\) have equal regression coefficients (Yan and Bien, 2021). In addition, identifying features with similar influence on the response value can be used for tree-based aggregation schemes (Shao et al., 2021; Bien et al., 2021; Wilms and Bien, 2022). This is of paramount importance in learning problems involving rare features, such as the count of microbial species (Bien et al., 2021). Moreover, in many learning problems, an honest, comprehensive assessment of the behavior of \(Y\) with respect to a certain attribute \(A\) is desired. This can be used to assess the performance of a model with respect to a sensitive attribute (fair machine learning), or to check whether two different treatments (different values of \(A\)) have close influence on potential outcomes.

### Related Work

In machine learning, the problem of identifying a group of features that have the largest influence on the response value is often formulated as variable selection. With a strong parametric assumption, the conditional law \(\mathcal{L}(Y|X)\) is assumed to belong to a known class of parametric models, such as linear regression. For variable selection in the linear regression setting, the LASSO (Tibshirani, 1996) and the Dantzig selector (Candes and Tao, 2007) are the most widely used. There are several other works on variable selection in the linear regression setting whose output solutions satisfy certain structures, such as (Bogdan et al., 2015; Tibshirani et al., 2005). There has also been a complementary line of work in recent years from the model-X perspective (Candes et al., 2018). In this setting, in contrast to the classical setup, in which a strong parametric assumption is placed on the conditional law, the focus shifts to the feature distribution \(X\), with extensive knowledge assumed on the distribution of the features.
This setting arises naturally in many learning problems. For example, we can get access to distributional information on features in learning scenarios where the sampling mechanism can be controlled, e.g., in the datamodel framework (Ilyas et al., 2022) and gene knockout experiments (Peters et al., 2016; Cong et al., 2013). Other settings include problems where an abundance of unlabeled data points is available (unsupervised learning). Another related line of work is to estimate and perform statistical inference on certain statistical model parameters. Specifically, during the past few years, there have been several works (Javanmard and Montanari, 2014; Van de Geer et al., 2014; Deshpande et al., 2019; Fei and Li, 2021) on inferential tasks for low-dimensional components of model parameters in high-dimensional \((d>n)\) settings of linear and generalized linear models. Another complementary line of work is the conditional independence testing problem \(X_{j}\perp\!\!\!\perp Y|X_{-j}\), which tests whether a certain feature \(X_{j}\) is independent of the response value \(Y\) while controlling for the effect of the other features. This problem has been studied in several recent works for both parametric (Crawford et al., 2018; Belloni et al., 2014) and model-X frameworks (Candes et al., 2018; Javanmard and Mehrabi, 2021; Liu et al., 2022; Shaer and Romano, 2022; Berrett et al., 2020). A couple of points are worth mentioning regarding the scope of our paper.

1. _(Feature selection methods)_ Although Question (*) has a completely different nature from well-studied variable selection techniques, whose goal is to remove redundant features, an assessment tool for (*) can be beneficial for post-processing feature selection methods as well. Specifically, we expect two redundant features to have close (zero) influence on the response value; therefore our closeness-of-influence test can be used to sift through the set of redundant features and potentially improve the statistical power of the baseline feature selection methods.

2. _(Regression models)_ We would like to emphasize that although fitting any class of regression models yields an estimated coefficient vector, comparing the magnitudes of the coefficient values to answer Question (*) is not statistically sound and may result in invalid findings, mainly due to model misspecification. Despite such inaccuracies of fitted regression models, our proposed closeness-of-influence test works under no parametric assumption on the conditional law.

3. _(Hardness of non-parametric settings)_ The finite-sample guarantee on the type-I error rate of our test does not come for free. Specifically, this guarantee holds when certain partial knowledge of the feature distribution \(\mathcal{L}(X)\) is available. This setup is often referred to as the model-X framework (Candes et al., 2018), where, contrary to classical statistical setups, the conditional law \(\mathcal{L}(Y|X)\) is arbitrary, while an adequate amount of information on the feature distribution \(\mathcal{L}(X)\) is known. Such requirements on the feature distribution distinguish the scope of our work from completely non-parametric problems.

### Summary of contributions and organization

In this work, we propose a novel method to test the closeness of influence of a given pair of features on the response value.
Here is the organization of the three major parts of the paper:

* In Section 2, we propose the notion of symmetric influence and formulate question (*) as a tolerance hypothesis testing problem. We then introduce the main algorithm to construct the test statistic and the decision rule. We later show that the type-I error is controlled for a finite number of data points.
* In Section 3, for two specific learning problems, 1) the linear regression setup and 2) binary classification under a mixture of Gaussians, we analyze the statistical power of our proposed method. Our analysis reveals guidelines on the choice of the score function that is needed for our procedure.
* In Section 5, we combine our closeness-of-influence test with datamodels (Ilyas et al., 2022) to study the influence of training samples on the trained black-box model. We consider the CIFAR-10 dataset and identify several pairs of training samples with different influence on the output models.

Finally, we empirically evaluate the performance of our method in several numerical experiments; we show that our method always controls the type-I error with a finite number of data points, while achieving high statistical power. We end the paper by providing concluding remarks and interesting avenues for further research.

### Notation

For a random variable \(X\), we let \(\mathcal{L}(X)\) denote the probability density function of \(X\). For two density functions \(p,q\), let \(d_{\mathsf{TV}}(p,q)\) denote the total variation distance. We use \(\Phi(t)\) and \(\varphi(t)\) respectively for the cdf and pdf of the standard normal distribution. For an integer \(n\), let \([n]=\{1,\dots,n\}\), and for a vector \(x\in\mathbb{R}^{d}\) and integers \(i,j\in[d]\), let \(x_{\mathsf{swap}(i,j)}\) be the vector obtained by swapping the coordinates \(i\) and \(j\) of \(x\). We let \(\mathsf{N}(\mu,\Sigma)\) denote the probability density function of a multivariate normal distribution with mean \(\mu\) and covariance matrix \(\Sigma\).

## 2 Problem Formulation

We are interested in investigating whether two given features \(i,j\) have close influence on the response value \(y\). Specifically, in the case of the linear regression setting \(\mathcal{L}(Y|X)=\mathsf{N}(X^{\mathsf{T}}\theta,\sigma^{2})\), two features \(i\) and \(j\) have an equal effect on the response variable \(y\) if the model parameter \(\theta\) has equal coordinates \(i\) and \(j\). In this parametric problem, the close-influence analysis can be formulated as the following hypothesis testing problem:

\[H_{0}:|\theta_{i}-\theta_{j}|\leq\tau\,,\quad H_{A}:|\theta_{i}-\theta_{j}|>\tau\,.\]

In practice, the considered parametric model may not hold, and due to model misspecification, the reported results are not statistically sound and accurate. Our primary focus is to extend the definition of close influence of features on the response value to a broader class of supervised learning problems, ideally with no parametric assumption on \(\mathcal{L}(Y|X)\) (model-free). To this end, we first propose the notion of _symmetric influence_.

**Definition 2.1** (Symmetric influence).: _We say that two features \(i,j\in[d]\) have a symmetric influence on the response value \(y\) if the conditional law \(p_{Y|X}\) does not change once features \(i\) and \(j\) are swapped in \(x\).
More precisely, if \(\mathcal{L}(Y|X)=\mathcal{L}(Y|X_{\mathsf{swap}(i,j)})\), where \(X_{\mathsf{swap}(i,j)}\) is obtained by swapping coordinates \(i\) and \(j\) in \(X\)._

While perfect alignment between the density functions \(p_{Y|X}\) and \(p_{Y|X_{\mathsf{swap}(i,j)}}\) is interpreted as equal influence, it is natural to interpret a small (but nonzero) average distance between these two density functions as close influence of features \(i,j\) on the response value. Inspired by this observation, we cast the problem of closeness-of-influence testing as a tolerance hypothesis testing problem (1). Before further analyzing this extended definition, we show for two simple examples that the symmetric influence definition recovers the familiar equal-effect notion in parametric problems. It is worth noting that this result can be generalized to a broader class of parametric models.

**Proposition 2.2**.: _Consider the logistic model \(\mathbb{P}(Y=1|X=x)=\frac{1}{1+\exp(-x^{\mathsf{T}}\theta)}\). In this model, features \(i\) and \(j\) have symmetric influence on \(y\) if and only if \(\theta_{i}=\theta_{j}\). In addition, for the linear regression setting \(y=x^{\mathsf{T}}\theta+\varepsilon\) with \(\varepsilon\sim\mathsf{N}(0,\sigma^{2})\), features \(i\) and \(j\) have symmetric influence on \(y\) if and only if \(\theta_{i}=\theta_{j}\)._

We refer to Appendix A for the proofs of all propositions and theorems.

### Closeness-of-influence testing

Inspired by the definition of symmetric influence given in Definition 2.1, we formulate the problem of testing the closeness of the influence of two features \(i,j\) on \(y\) as follows:

\[\mathcal{H}_{0}:\ \mathbb{E}\left[d_{\mathsf{TV}}(p_{Y|X},p_{Y|X_{\mathsf{swap}(i,j)}})\right]\leq\tau\,,\qquad\mathcal{H}_{A}:\ \mathbb{E}\left[d_{\mathsf{TV}}(p_{Y|X},p_{Y|X_{\mathsf{swap}(i,j)}})\right]>\tau\,. \tag{1}\]

This hypothesis testing problem allows for general non-negative \(\tau\) values. We can test for symmetric influence by simply selecting \(\tau=0\); in this case, we must have \(p_{Y|X}=p_{Y|X_{\text{swap}(i,j)}}\) almost surely (with respect to some measure on \(\mathcal{X}\)). For a better understanding of the main quantities on the left-hand side of (1), it is worth noting that \(p_{Y|X_{\text{swap}(i,j)}}(y|x)=p_{Y|X}(y|x_{\text{swap}(i,j)})\) and the quantity of interest can be written as

\[\mathbb{E}\left[d_{\text{TV}}(p_{Y|X},p_{Y|X_{\text{swap}(i,j)}})\right]=\frac{1}{2}\int\Big{|}p_{Y|X}(y|x)-p_{Y|X}(y|x_{\text{swap}(i,j)})\Big{|}p_{X}(x)\mathrm{d}y\mathrm{d}x\,.\]

We next move to the formal process for constructing the test statistic of this hypothesis testing problem.

**Test statistic**. We first provide high-level intuition behind the test statistic used for testing (1). In a nutshell, for two i.i.d. data points \((x^{(1)},y^{(1)})\) and \((x^{(2)},y^{(2)})\), if the density function \(p_{Y|X}\) is close to \(p_{Y|X_{\text{swap}(i,j)}}\), then for an arbitrary score function applied to \((x^{(1)},y^{(1)})\) and \((x^{(2)}_{\text{swap}(i,j)},y^{(2)})\), each value is larger than the other with equal chance (50\(\%\)). This observation is subtle, though: since we intervene in the features of the second data point (by swapping its coordinates), the feature distribution shifts, so the distributions of \((x^{(1)},y^{(1)})\) and \((x^{(2)}_{\text{swap}(i,j)},y^{(2)})\) are not equal. This implies that we must also control for such distributional shifts in the features.
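To make this intuition concrete, the following is a minimal Python sketch of the swap-based statistic and the thresholding rule that are formalized in Algorithm 1 and decision rule (2) below. This is our own illustration, not the authors' code: the synthetic linear-model data, the residual score function, and all constants are assumptions made for the example (isotropic Gaussian features are chosen so that \(\tau_{X}=0\), as shown in Proposition 2.5 below).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: the test itself is model-free, but we generate from a
# linear model so that the ground truth is known.
n, d = 2000, 10
theta_star = np.repeat([1.0, 2.0, 3.0, 4.0, 5.0], 2)
X = rng.standard_normal((n, d))          # isotropic Gaussian features: tau_X = 0
y = X @ theta_star + rng.standard_normal(n)

def closeness_test(X, y, i, j, score, tau=0.0, tau_X=0.0, alpha=0.1):
    """Test statistic U_n of Algorithm 1 and decision rule (2)."""
    n = len(y) - len(y) % 2              # if n is odd, drop one sample
    half = n // 2
    X2 = X[half:n].copy()
    X2[:, [i, j]] = X2[:, [j, i]]        # swap coordinates i and j in the second half
    U = np.mean(score(X[:half], y[:half]) >= score(X2, y[half:n]))
    reject = abs(U - 0.5) >= tau + tau_X + np.sqrt(np.log(2 / alpha) / n)
    return U, reject

theta_hat = theta_star + 0.5 * rng.standard_normal(d)  # imperfect model estimate
score = lambda X, y: np.abs(y - X @ theta_hat)         # T(x, y) = |y - x^T theta_hat|

print(closeness_test(X, y, 0, 1, score))  # equal coefficients: should rarely reject
print(closeness_test(X, y, 0, 9, score))  # very different coefficients: should reject
```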
The formal process for constructing the test statistic \(U_{n}\) is given in Algorithm 1. We then present the decision rule for the hypothesis testing problem (1).

**Input:** \(n\) data points \(\{(x^{(m)},y^{(m)})\}_{m=1:n}\) with \((x,y)\in\mathbb{R}^{d}\times\mathbb{R}\) (for \(n\) even; if \(n\) is odd, remove one sample), two features \(i,j\in\{1,2,...,d\}\), and a score function \(T:\mathcal{X}\times\mathcal{Y}\to\mathbb{R}\).

**Output:** A test statistic \(U_{n}\). For \(1\leq m\leq\frac{n}{2}\) define

\[\tilde{x}^{(m)}=x^{(m+\frac{n}{2})}_{\text{swap}(i,j)}\,,\quad\tilde{y}^{(m)}=y^{(m+\frac{n}{2})}\,.\]

Define the test statistic \(U_{n}\):

\[U_{n}=\frac{2}{n}\sum_{m=1:\frac{n}{2}}\mathbb{I}\left(T\big{(}x^{(m)},y^{(m)}\big{)}\geq T\big{(}\tilde{x}^{(m)},\tilde{y}^{(m)}\big{)}\right)\,.\]

**Decision rule**. For the dataset \((\mathbf{X},\mathbf{Y})\) of size \(n\) and the test statistic \(U_{n}\) as per Algorithm 1, at significance level \(\alpha\) consider the following decision rule:

\[\psi_{n}(\mathbf{X},\mathbf{Y})=\mathbb{I}\left(\Big{|}U_{n}-\frac{1}{2}\Big{|}\geq\tau+\tau_{X}+\sqrt{\frac{\log(2/\alpha)}{n}}\right)\,, \tag{2}\]

with \(\tau_{X}\) being an upper bound on the total variation distance between the original feature distribution and the distribution obtained by swapping coordinates \(i,j\). More precisely, for two independent feature vectors \(X^{(1)},X^{(2)}\), let \(\tau_{X}\) be such that \(\tau_{X}\geq d_{\text{TV}}\left(\mathcal{L}(X^{(1)}),\mathcal{L}(X^{(2)}_{\text{swap}(i,j)})\right)\). In several learning problems where the features have a certain symmetric structure, the quantity \(\tau_{X}\) is zero, for instance when the features are multivariate Gaussian with an isotropic covariance matrix. More on this can be found in Section 2.2.

**Size of the test**. In this section, we show that the decision rule (2) controls the type I error with a finite number of samples. More precisely, we show that the probability of falsely rejecting the null hypothesis in (1) can always be controlled so that it does not exceed a predetermined significance level \(\alpha\).

**Theorem 2.3**.: _Under the null hypothesis in (1), the decision rule (2) has type-I error smaller than \(\alpha\). More precisely,_

\[\mathbb{P}_{\mathcal{H}_{0}}(\psi(\mathbf{X},\mathbf{Y})=1)\leq\alpha\,.\]

Based on the decision rule (2), we can construct p-values for the hypothesis testing problem (1). The next proposition gives such a formulation.

**Proposition 2.4**.: _Consider_

\[p=\begin{cases}1\,,&|U_{n}-1/2|\leq\tau+\tau_{X}\,,\\ 1\wedge\eta_{n}(U_{n},\tau,\tau_{X})\,,&\text{otherwise}\,,\end{cases} \tag{3}\]

_with the function \(\eta_{n}(u,\tau_{1},\tau_{2})\) defined as_

\[\eta_{n}(u,\tau_{1},\tau_{2})=2\exp\left(-n\bigg{(}\Big{|}u-\frac{1}{2}\Big{|}-\tau_{1}-\tau_{2}\bigg{)}^{2}\right)\,.\]

_In this case, the p-value \(p\) is super-uniform. More precisely, under the null hypothesis in (1), for every \(\alpha\in[0,1]\) we have_

\[\mathbb{P}(p\leq\alpha)\leq\alpha\,.\]

### Effect of the feature swap on the feature distribution

From the formulation of the decision rule given in (2), it can be seen that an upper bound on the total variation distance between the density functions of \(X^{(1)}\) and \(X^{(2)}_{\text{swap}(i,j)}\) is required. This quantity shows up as \(\tau_{X}\) in (2). Regarding this change in the distribution of \(X\), two points are worth mentioning. First, in several classes of learning problems the feature vectors follow a symmetric structure which renders the quantity \(\tau_{X}\) zero.
For instance, this holds when the features have an isotropic Gaussian distribution (Proposition 2.5) or in the datamodel sampling scheme (Ilyas et al., 2022); the formal statement is given in Proposition 2.6. Secondly, the value of \(\tau_{X}\) can be computed when an adequate amount of information on the distribution of \(X\) is available, the so-called model-X framework (Candes et al., 2018). We would also like to emphasize that we do not need direct access to the entire density function \(p_{X}\); an upper bound on the quantity \(d_{\mathsf{TV}}(\mathcal{L}(X^{(1)}),\mathcal{L}(X^{(2)}_{\mathsf{swap}(i,j)}))\) is sufficient. In the next proposition, for the case where the features follow a general multivariate Gaussian distribution \(\mathsf{N}(\mu,\Sigma)\), we provide a valid closed-form value for \(\tau_{X}\).

**Proposition 2.5**.: _Consider a multivariate Gaussian distribution with mean vector \(\mu\in\mathbb{R}^{d}\) and covariance matrix \(\Sigma\in\mathbb{R}^{d\times d}\). For two features \(i\) and \(j\), the following holds:_

\[d_{\mathsf{TV}}\left(\mathcal{L}(X^{(1)}),\mathcal{L}(X^{(2)}_{\mathsf{swap}(i,j)})\right)\leq\frac{1}{2}\Big{[}\mathsf{tr}\big{(}-I_{d}+P_{ij}\Sigma^{-1}P_{ij}\Sigma\big{)}+(\mu-P_{ij}\mu)^{\mathsf{T}}\Sigma^{-1}(\mu-P_{ij}\mu)\Big{]}^{1/2}\,, \tag{4}\]

_where \(P_{ij}\) is the permutation matrix that swaps coordinates \(i\) and \(j\). More precisely, for every \(x\in\mathbb{R}^{d}\) we have \(P_{ij}x=x_{\mathsf{swap}(i,j)}\)._

It is easy to observe that in the case of an isotropic Gaussian distribution with zero mean, we can choose \(\tau_{X}=0\). More concretely, when \(\mu=0\) and \(\Sigma=\sigma^{2}I\), Proposition 2.5 yields \(\tau_{X}=0\). We next consider a setting with binary feature vectors that arises naturally in datamodels (Ilyas et al., 2022) and will be used later in the experiments of Section 5.

**Proposition 2.6**.: _Consider a learning problem with a binary feature vector \(x\in\{0,1\}^{d}\). For a positive integer \(m\), we suppose that \(x\) is sampled uniformly at random from the space \(S_{m}=\{x\in\{0,1\}^{d}:\sum x_{i}=m\}\). This means that the output sample has binary entries with exactly \(m\) non-zero coordinates. Then, in this setting, for two independent feature vectors \(x^{(1)},x^{(2)}\), the following holds:_

\[d_{\mathsf{TV}}\left(\mathcal{L}\big{(}X^{(1)}\big{)},\mathcal{L}\big{(}X^{(2)}_{\mathsf{swap}(i,j)}\big{)}\right)=0\,.\]

## 3 Power Analysis

In this section, we provide a power analysis for our method. For a fixed score function \(T:\mathcal{X}\times\mathcal{Y}\to\mathbb{R}\) and two i.i.d. data points \((x^{(1)},y^{(1)})\) and \((x^{(2)},y^{(2)})\), consider the following cumulative distribution functions:

\[F_{T}(t)=\mathbb{P}\left(T(X^{(1)},Y^{(1)})\leq t\right)\,,\qquad G_{T}(t)=\mathbb{P}\left(T(X^{(2)}_{\mathsf{swap}(i,j)},Y^{(2)})\leq t\right)\,.\]

In the next theorem, we show that the power of our test depends on the average deviation of the function \(F_{T}\circ G_{T}^{-1}\) from the identity mapping on the interval \([0,1]\).

**Theorem 3.1**.: _Consider the hypothesis testing problem (1) at significance level \(\alpha\) with \(n\) data points \((\mathbf{X},\mathbf{Y})\)._
_In addition, suppose that the score function \(T:\mathcal{X}\times\mathcal{Y}\to\mathbb{R}\) satisfies the following condition for some \(\beta\in(0,1)\):_

\[\left|\int_{0}^{1}(F_{T}(G_{T}^{-1}(u))-u)\mathrm{d}u\right|\geq\rho_{n}(\alpha,\beta,\tau)+\tau_{X}\,,\]

_with \(\rho_{n}(\alpha,\beta,\tau)=2\exp(-n\beta^{2})+\sqrt{\frac{\log(2/\alpha)}{n}}+\tau\). In this case, the decision rule (2) used with the score function \(T\) has type II error not exceeding \(\beta\). More precisely, \(\mathbb{P}\left(\Psi_{n}(\mathbf{X},\mathbf{Y})=1\right)\geq 1-\beta\,.\)_

The function \(F_{T}\circ G_{T}^{-1}\) is called the _ordinal dominance curve_ (ODC) (Hsieh and Turnbull, 1996; Bamber, 1975). The ODC is the population counterpart of the PP plot. A direct consequence of the above theorem is that if the ODC has a larger distance from the identity map \(i(u)=u\), then it is easier for our test to flag smaller gaps between the influences of features. We next focus on two learning problems: 1) the linear regression setting, and 2) binary classification under Gaussian mixture models. For each problem, we use Theorem 3.1 and provide lower bounds on the statistical power of our closeness-of-influence test.

**Linear regression setup.** In this setting, we suppose that \(y=x^{\mathsf{T}}\theta^{*}+\varepsilon\) for \(\varepsilon\sim\mathsf{N}(0,\sigma^{2})\) and feature vectors drawn iid from a multivariate normal distribution \(\mathsf{N}(0,I_{d})\). Since the features are isotropic Gaussian with zero mean, by an application of Proposition 2.5 we know that \(\tau_{X}\) is zero. In the next theorem, we provide a power guarantee for the hypothesis testing problem (1) with \(n\) data points and the score function \(T(x,y)=|y-x^{\mathsf{T}}\widehat{\theta}|\) for some model estimate \(\widehat{\theta}\). We show that in this example, the power of the test depends strongly on the value \(|\theta_{i}^{*}-\theta_{j}^{*}|\) and the quality of the model estimate \(\widehat{\theta}\). Indeed, the higher the contrast between the coefficient values \(\theta_{i}^{*}\) and \(\theta_{j}^{*}\), the easier it is for our test to reject the null hypothesis.

**Theorem 3.2**.: _Under the linear regression setting \(y=x^{\mathsf{T}}\theta^{*}+\varepsilon\) with \(\varepsilon\sim\mathsf{N}(0,\sigma^{2})\) and feature vectors coming from a normal population \(x\sim\mathsf{N}(0,I_{d})\), consider the hypothesis testing problem (1) for features \(i\) and \(j\) with \(\tau\in(0,1)\). We run Algorithm 1 at significance level \(\alpha\) with the score function \(T(x,y)=|y-x^{\mathsf{T}}\widehat{\theta}|\) for a model estimate \(\widehat{\theta}\in\mathbb{R}^{d}\). For \(\beta\in(0,1)\) such that \(\tan(\frac{\pi}{2}\rho_{n}(\alpha,\beta,\tau))\leq\frac{1}{2}\), suppose that the following condition holds:_

\[|\theta_{i}^{*}-\theta_{j}^{*}|\geq\frac{2\tan(\frac{\pi}{2}\rho_{n}(\alpha,\beta,\tau))}{1-2\tan(\frac{\pi}{2}\rho_{n}(\alpha,\beta,\tau))}\cdot\frac{\sigma^{2}+\|\widehat{\theta}-\theta^{*}\|_{2}^{2}}{|\widehat{\theta}_{i}-\widehat{\theta}_{j}|}\,,\]

_for \(\rho_{n}(\alpha,\beta,\tau)\) as per Theorem 3.1. Then, the type II error is bounded by \(\beta\). More precisely, we have \(\mathbb{P}(\Psi_{n}(\mathbf{X},\mathbf{Y})=1)\geq 1-\beta\,.\)_

We refer to the Appendix for the proof of Theorem 3.2. It can be seen that the right-hand side of the above expression can be decomposed into two major parts.
The first part involves the problem parameters, such as the number of samples \(n\) and the error tolerance values \(\alpha\) and \(\beta\). For a moderately large number of samples \(n\) and a small tolerance value \(\tau\), this quantity can get sufficiently small. On the other hand, the magnitude of the second part depends strongly on the quality of the model estimate \(\widehat{\theta}\) and the inherent noise level \(\sigma^{2}\) of the problem, which indicates how structured the learning problem is. Another interesting observation concerns \(|\widehat{\theta}_{i}-\widehat{\theta}_{j}|\): small values of this quantity render the problem of discovering deviations from symmetric influence harder. This conforms to our expectation, given that in the extreme scenario \(\widehat{\theta}_{i}=\widehat{\theta}_{j}\) it is impossible for the score function to discern \(\theta_{i}^{*}\) and \(\theta_{j}^{*}\), because of the additive nature of the considered score function.

**Binary classification**. In this section, we provide a power analysis of our method for a binary classification setting. Specifically, we consider binary classification under a mixture of Gaussians model. More precisely, in this case the data-generating process is given by

\[y=\begin{cases}+1\,,&\text{w.p. }q\,,\\ -1\,,&\text{w.p. }1-q\,,\end{cases}\qquad x\sim\mathsf{N}(y\mu,I_{d})\,. \tag{5}\]

We consider the influence testing problem (1) with \(\tau=0\). In the next theorem, we provide a lower bound on the statistical power of our method used under this learning setup.

**Theorem 3.3**.: _Under the binary classification setup (5), consider the hypothesis testing problem (1) for \(\tau=0\). We run Algorithm 1 with the score function \(T(x,y)=yx^{\mathsf{T}}\widehat{\theta}\) at significance level \(\alpha\), and suppose that for some nonnegative value \(\beta\) the following holds:_

\[|\mu_{i}-\mu_{j}|\geq\Phi^{-1}\left(\frac{1}{2}+\rho_{n}(\alpha,\beta,0)\right)\frac{\sqrt{2}\|\widehat{\theta}\|_{2}}{|\widehat{\theta}_{i}-\widehat{\theta}_{j}|}\,,\]

_where \(\rho_{n}(\alpha,\beta,\tau)\) is given as per Theorem 3.1. Then the type-II error in this case is bounded by \(\beta\). More concretely, we have \(\mathbb{P}(\Psi_{n}(\mathbf{X},\mathbf{Y})=1)\geq 1-\beta\,.\)_

It is important to note that in this particular setting, the features do not follow a Gaussian distribution with zero mean. Instead, they are sampled from a mixture of Gaussian distributions with means \(\mu\) and \(-\mu\). The reason why \(\tau_{X}=0\) can be utilized is not immediately obvious. However, we demonstrate that when testing for \(\tau=0\), under the null hypothesis it is necessary for \(\mu_{i}\) to be equal to \(\mu_{j}\), in which case the distribution of features remains unchanged when the coordinates \(i\) and \(j\) are swapped. As a result, we can employ \(\tau_{X}=0\) in this scenario. This argument is elaborated upon further in the proof of Theorem 3.3. From the above expression it can be observed that for a sufficiently large number of data points \(n\) and a small value of \(\tau\), the value \(\Phi^{-1}(1/2+\rho_{n})\) gets smaller and converges to zero. In addition, it can be inferred that an ideal model estimate \(\widehat{\theta}\) must have a small norm and a high contrast between the values \(\widehat{\theta}_{i}\) and \(\widehat{\theta}_{j}\). An interesting observation concerns the role of the other coordinates of \(\widehat{\theta}\).
In fact, it can be realized that for the choice of the score function \(T(x,y)=yx^{\mathsf{T}}\widehat{\theta}\), the support of the model estimate \(\widehat{\theta}\) should be a subset of the two features \(i\) and \(j\), since this decreases \(\|\widehat{\theta}\|_{2}\) and increases the value of \(|\widehat{\theta}_{i}-\widehat{\theta}_{j}|\).

## 4 Experiments

In this section, we evaluate the performance of our proposed method for identifying symmetric influence across features. We start with the isotropic Gaussian model for the feature vectors. More precisely, we consider \(x\sim\mathsf{N}(0,I_{d})\) with \(d=10\). In this case, we have \(\tau_{X}=0\) and we consider the hypothesis testing problem (1) for \(\tau=0\) (symmetric influence).

**Size of the test.** We start by examining the size of our proposed method. To this end, we consider the conditional law \(y|x\sim\mathsf{N}(x^{\mathsf{T}}Sx,1)\), for a positive semi-definite matrix \(S\) with coordinate \((i,j)\) given by \(S_{i,j}=1+\mathbb{I}(i=j)\). The conditional mean of \(y|x\) is a quadratic form, and it is easy to observe that in this case, for every two features \(i,j\in\{1,\dots,10\}\), we have \(x^{\mathsf{T}}Sx=x_{\mathsf{swap}(i,j)}^{\mathsf{T}}Sx_{\mathsf{swap}(i,j)}\), and therefore the null hypothesis holds. We test for the symmetric influence of each pair of features (\(\binom{10}{2}=45\) tests). We run our method with the score function \(T(x,y)=|y-\widehat{\theta}^{\mathsf{T}}x|\) with \(\widehat{\theta}\sim\mathsf{N}(0,I_{d})\). The estimate \(\widehat{\theta}\) is fixed across all \(45\) tests. We suppose that we have access to \(1000\) data points, and we consider three different significance levels, \(\alpha=0.1,0.15,\) and \(0.2\). The results of this experiment can be seen in Figure 1(b), where the reported numbers (rejection rates) are averaged over \(1000\) independent experiments. It can be observed that in this case, for all three significance levels, the rejection rates are smaller than \(\alpha\), and therefore the size of the test is controlled.

**Power analysis.** The linear regression setting is considered, in which \(y|x\sim\mathsf{N}(x^{\mathsf{T}}\theta^{*},1)\), for \(\theta^{*}\in\mathbb{R}^{d}\) with \(d=10\). We consider the following pattern for the signal strength: \(\theta_{1}^{*}=\theta_{2}^{*}=1\), \(\theta_{3}^{*}=\theta_{4}^{*}=2\), \(\theta_{5}^{*}=\theta_{6}^{*}=3\), \(\theta_{7}^{*}=\theta_{8}^{*}=4\), \(\theta_{9}^{*}=\theta_{10}^{*}=5\). In this example, the pairs of features \(\mathcal{I}=\{(1,2),(3,4),(5,6),(7,8),(9,10)\}\) have symmetric influence, and for any other pair the null hypothesis in (1) must be rejected. We use the score function \(T(x,y)=|y-x^{\mathsf{T}}\widehat{\theta}|\) at significance level \(\alpha=0.1\) for three different choices of \(\widehat{\theta}\). We draw \(\widehat{\theta}\sim\mathsf{N}(\theta_{0},\sigma^{2}I_{d})\) for three different values \(\sigma=1,2,\) and \(3\); a smaller value of \(\sigma\) implies a better estimate of \(\theta_{0}\). The average rejection rates are depicted in Figure 1(a), where each \(10\times 10\) square corresponds to a different \(\sigma\) value (three plots in total). Specifically, the \((i,j)\)-th cell in each plot denotes the average rejection rate of the symmetric influence hypothesis for features \(i\) and \(j\). The rejection rates are obtained by averaging over \(1000\) independent experiments; a minimal simulation sketch of this experiment is given below.
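As referenced above, here is a minimal self-contained simulation sketch of this power experiment. It is our own illustration: the signal pattern and score function follow the text, but the number of repetitions is reduced (200 instead of 1000) so that it runs quickly, and we take \(\theta_{0}=\theta^{*}\), which is our assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, alpha, reps = 1000, 10, 0.1, 200
theta_star = np.repeat([1.0, 2.0, 3.0, 4.0, 5.0], 2)  # pattern from the text

def rejection_rate(i, j, sigma):
    # i, j are 0-based, so (0, 1) corresponds to the feature pair (1, 2) in the text.
    count = 0
    for _ in range(reps):
        X = rng.standard_normal((n, d))                     # isotropic: tau_X = 0
        y = X @ theta_star + rng.standard_normal(n)
        theta_hat = theta_star + sigma * rng.standard_normal(d)
        half = n // 2
        X2 = X[half:].copy()
        X2[:, [i, j]] = X2[:, [j, i]]                       # swap coordinates i and j
        s1 = np.abs(y[:half] - X[:half] @ theta_hat)
        s2 = np.abs(y[half:] - X2 @ theta_hat)
        U = np.mean(s1 >= s2)
        count += abs(U - 0.5) >= np.sqrt(np.log(2 / alpha) / n)  # tau = tau_X = 0
    return count / reps

for sigma in (1, 2, 3):
    print(f"sigma={sigma}: size for pair (1,2) = {rejection_rate(0, 1, sigma):.3f}, "
          f"power for pair (1,10) = {rejection_rate(0, 9, sigma):.3f}")
```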
First, it can be inferred that for pairs belonging to the set \(\mathcal{I}\) the rejection rate is always smaller than the significance level \(\alpha=0.1\), so the size of the test is controlled. In addition, by decreasing the \(\sigma\) value (moving from right to left), the test achieves higher power (more dark blue regions). This is consistent with our prior expectation that the statistical power of our method depends on the quality of the score function \(T\) and the model estimate \(\widehat{\theta}\); see Theorem 3.2. Regarding the statistical power of our method, it can be observed that within each plot, pairs with higher contrast between the coefficient magnitudes have higher statistical power. For instance, the pair of features \((1,10)\) with coefficient values \(\theta_{1}^{*}=1\), \(\theta_{10}^{*}=5\) has rejection rates of \(0.987,0.768,0.543\) (for \(\sigma=1,2,3\), respectively), while the pair of features \((6,8)\) with coefficient values \(\theta_{6}^{*}=3,\theta_{8}^{*}=4\) has rejection rates of \(0.294,0.097,0.055\) (for \(\sigma=1,2,3\), respectively).

## 5 Influence of Training Data on Output Model

In this section, we combine our closeness-of-influence test with the datamodel framework (Ilyas et al., 2022) to analyze the influence of training samples on the evaluations of the trained model on certain target examples. We first provide a brief overview of datamodels and then describe the experimental setup.

### Datamodels

For training samples \(\mathcal{D}^{\mathsf{train}}=\{(x_{i},y_{i})\}_{i=1:N}\), consider a class of learning algorithms \(\mathcal{A}\), where by class we mean a training mechanism (potentially randomized), such as training a fixed architecture of deep neural networks via gradient descent and a fixed random initialization scheme. In datamodels (Ilyas et al., 2022), a new learning problem is considered, where the feature vectors \(S\) are binary 0-1 vectors of size \(N\) with a fraction \(\gamma\in(0,1)\) of one entries, selected uniformly at random. Here \(S\) is an indicator vector for the participation of the \(N\) data points of \(\mathcal{D}^{\mathsf{train}}\) in the training mechanism, i.e., \(S_{i}=1\) if and only if the \(i\)-th sample of \(\mathcal{D}^{\mathsf{train}}\) is used for training via \(\mathcal{A}\). For a fixed target example \(x\), the response value is the evaluation (described later) of the output model (trained with the samples indicated in \(S\)) on \(x\), denoted by \(f_{\mathcal{A}}(x;S)\). This random sampling of data points from \(\mathcal{D}^{\mathsf{train}}\) is repeated \(m\) times; therefore, the data for the new learning problem are \(\{(S_{i},f_{\mathcal{A}}(x,S_{i}))\}_{i=1:m}\). The ultimate goal of datamodels is to learn the mapping \(S\to f_{\mathcal{A}}(x,S)\) via surrogate modeling with a class of much less complex models. In the seminal work of (Ilyas et al., 2022), it is shown that linear regression with an \(\ell_{1}\) penalty (LASSO (Tibshirani, 1996)) performs surprisingly well in learning the highly complex mapping \(S\to f_{\mathcal{A}}(x,S)\).

### Motivation

We are specifically interested in analyzing the influence of different pairs of training samples on a variety of test targets, and in discovering pairs of training samples that, with high certainty, influence the test target differently.

Figure 1: Average Rejection Rates for Different Settings
We use the score function \((f_{\mathcal{A}}(x;S)-S^{\mathsf{T}}\widehat{\theta})^{2}\) for our closeness-of-influence test, where \(\widehat{\theta}\) is the learned datamodel. We adopt this score function mainly due to the promising performance of linear surrogate models in (Ilyas et al., 2022) in capturing the dependency between \(S\) and \(f_{\mathcal{A}}(x;S)\). In addition, the described sampling scheme in datamodels satisfies the symmetric structure as per Proposition 2.6 (so \(\tau_{X}=0\)). We would like to emphasize that, despite the empirical success of datamodels, interpreting training samples via their coefficient magnitudes in the obtained linear datamodel \(\widehat{\theta}\) is _not_ statistically sound. Here we approach this problem through the lens of hypothesis testing and output p-values, to convey the level of confidence in our findings.

### Experimental Setups and Results

We consider the CIFAR-10 dataset (Krizhevsky et al., 2009), which has \(N=50000\) training samples along with \(10000\) test data points and 10 classes1. We consider \(\gamma=0.5\) (the fraction of ones in the \(S_{i}\) samples), and follow the same heuristic provided for \(f_{\mathcal{A}}(x;S)\) in (Ilyas et al., 2022), which is the correct-class margin, defined as the logit value of the true class minus the highest logit value among the incorrect classes. We use the datamodel data given in [https://github.com/MadryLab/datamodels-data](https://github.com/MadryLab/datamodels-data). The provided data contain \(310k\) samplings, where for each target example \(x\) (in the test data) the datamodel parameter \(\widehat{\theta}\in\mathbb{R}^{N}\) is estimated via the first \(300k\) samples (\(10000\) datamodels \(\widehat{\theta}\) in total, one for each test point). We use the additional \(10k\) samples to run our closeness-of-influence test with the linear score function \((f_{\mathcal{A}}(x;S)-S^{\mathsf{T}}\widehat{\theta})^{2}\). Now, for each pair of training samples and a specific target test example, we can test for their closeness of influence. In the first experiment, for each two classes (which can be the same) we choose two pictures as the training pair (randomly from the two classes), and for the target sample we select randomly from the class of dog pictures. For each two classes, we repeat this process \(20\) times, run our test for (1) with \(\tau=0\), and report all p-values (\(2000\) in total). After running the Benjamini-Yekutieli procedure (Benjamini and Yekutieli, 2001) (with the log-factor correction to control for dependency among p-values), we find three statistically significant results at \(\alpha=0.2\), with p-value \(=5\times 10^{-5}\) for all three discoveries. Surprisingly, all three findings correspond to the same test image; the pictures of the training pairs and the test image can be seen in Figure 2. It can be observed that in all findings one of the reported images is visually closer to the target image. This conforms well with the rejection of the null hypothesis in (1), which states that the two training images have equal influence on the target sample. We refer to Appendix B for the remaining experiments.

Footnote 1: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck

## 6 Concluding Remarks

In this paper, we proposed a novel method to test the closeness of influence of a given pair of features on the response value. This procedure makes no assumption on the conditional law between the response value and the features (\(\mathcal{L}(Y|X)\)).
We first proposed a notion called "symmetric influence" that generalizes the familiar concept of equal coefficients in parametric models. This notion characterizes the sensitivity of the conditional law with respect to swapping the features. We then formulated the closeness-of-influence testing problem as a tolerance hypothesis testing problem and provided theoretical guarantees on the type-I error rate. We then analyzed the statistical power of our method for a general score function \(T\), and showed that for two specific learning problems, i) linear regression settings and ii) binary classification under a mixture of Gaussian models, a certain choice of score function achieves full statistical power. Finally, we adopted the datamodel framework and used our closeness-of-influence test to find training samples that have different influence on the trained model. Several interesting avenues for future research are in order. In particular, extending this framework to multiple testing (testing multiple pairs) while still achieving valid statistical results would be of interest. This can be done by applying generic multiple testing frameworks (similar to the Benjamini-Yekutieli procedure used in Section 5) to the obtained p-values, but a method crafted for this setting could be more powerful. In addition, extending this framework to study the influence of a group of features (more than two) can be of great interest.

Figure 2: Summary of discoveries on the CIFAR-10 dataset via datamodels used with our closeness-of-influence test. For each pair of the 10 classes (which can be equal), we choose random samples from the training data along with a random target image from the dog pictures in the test data, and we repeat this process 20 times. After running the Benjamini-Yekutieli procedure on the output p-values (2000 in total) at \(\alpha=0.2\), three significant results are reported. The images of these findings are plotted above, with their associated p-values. This implies that with high certainty the images in each pair influence the target example differently.
2302.12673
The Wigner function of a semiconfined harmonic oscillator model with a position-dependent effective mass
We propose a phase-space representation concept in terms of the Wigner function for a quantum harmonic oscillator model that exhibits the semiconfinement effect through its mass varying with position. The new method is used to compute the Wigner distribution function exactly for such a semiconfined quantum system. This method suppresses the divergence of the integrand in the definition of the quantum distribution function and leads to the computation of its analytical expressions for the stationary states of the semiconfined oscillator model. For this quantum system, both the presence and the absence of an applied external homogeneous field are studied. The obtained exact expressions of the Wigner distribution function are expressed through the Bessel function of the first kind and Laguerre polynomials. Furthermore, some special cases and limits are discussed in detail.
S. M. Nagiyev, A. M. Jafarova, E. I. Jafarov
2023-02-24T14:54:32Z
http://arxiv.org/abs/2302.12673v5
The Wigner function of a semiconfined harmonic oscillator model with a position-dependent effective mass

###### Abstract

We develop a phase-space representation concept in terms of the Wigner function for a quantum harmonic oscillator model that exhibits the semiconfinement effect through its mass varying with position. The new method is applied to the analytical computation of the Wigner distribution function for such a semiconfined quantum system. The method allows for suppression of the divergence of the integrand in the definition of the quantum distribution function and leads to the computation of its analytical expressions for the stationary states of the semiconfined oscillator model. Both the presence and the absence of an applied external homogeneous field are studied for this quantum system. The obtained exact expressions of the Wigner distribution function are expressed through the Bessel function of the first kind and Laguerre polynomials. Further, some special cases and limits are discussed in detail.

**Keywords:** Position-dependent effective mass, Wigner function, semiconfinement effect, Bessel function of the first kind, Laguerre polynomials, Exact expression

## 1 Introduction

The concept of phase space can be considered the best tool for describing the dynamics of a mechanical system. The phase space of any classical mechanical system contains all possible values of its position and momentum, as well as its time evolution through certain phase-space trajectories. The quantum world of similar dynamical systems is drastically different and more complicated. Within the quantum approach, one deals with the probabilistic description of sub-micron-sized physical systems through their non-commuting position and momentum operators. The joint distribution of momentum and position for quantum mechanical systems therefore needs a new mathematical tool that can be as informative as in the case of classical mechanical systems. The Wigner distribution function is that powerful mathematical tool [1]. It allows us to describe the quantum systems under study using the language of classical physics [2]. At present, there exist many papers dealing with the computation of the Wigner function of various constant-mass quantum harmonic oscillator models [3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13]. A few papers discussing the phase-space behavior of oscillator-like quantum systems with a position-dependent mass also exist [14, 15, 16]. In [17], we computed the simplest Gaussian-smoothed Wigner function for the oscillator model with a position-dependent effective mass exhibiting the semiconfinement effect [18, 19]. That simplest definition of the Gaussian-smoothed Wigner function of the joint quasiprobability of momentum and position, also called the Husimi function, is well known, too [20]. The main reason for computing the Husimi function instead of the Wigner function itself was also briefly discussed in [17]. The main problem was related to mathematics: during the computation of the analytical expression of the Wigner function [1], one observes that the integrand of its integral definition simply diverges, and this fact makes it impossible to perform further calculations. However, this was not the case for the Husimi function. The Gaussian smoothing applied to the integrand of the Husimi function definition suppressed that divergence and allowed further analytical computations.
Such divergences commonly appear during computations of quantum distribution functions. For example, [6] succeeds in computing the exact expression of the Wigner function of the one-dimensional parabose oscillator, but not its Husimi function, due to the fact that the momentum and position operators commute in a non-canonical manner. On the other hand, [5] succeeds in computing the exact expressions of both the Wigner and Husimi functions of the \(q\)-deformed harmonic oscillator. However, the reality is that the phase-space description of the quantum harmonic oscillator cannot simply diverge if one applies any confinement to it. We therefore started to think that there is some 'missing brick' in the physics definition of the mathematical tool being used, and one needs to find that brick and put it in its empty cell. We managed to solve the problem successfully and now report on the final product of this solution, which is the analytical expression of the Wigner function for the semiconfined oscillator model under consideration. Our paper is structured as follows. In Section 2, basic information about the Wigner function definition is briefly reviewed, and its well-known analytical expressions for the case of the nonrelativistic canonical quantum harmonic oscillator with and without an applied external homogeneous field are presented. Section 3 is devoted to the computation of the Wigner function for the oscillator model with a position-dependent effective mass exhibiting a semiconfinement effect. These computations are also performed for both cases, without and with the applied external homogeneous field. Section 4 contains detailed discussions of the obtained analytical expressions, their contour plots, limit relations, and a brief conclusion.

## 2 The Wigner joint quasiprobability distribution of the position and momentum

As we noted in the introduction, the Wigner function plays an exceptional role in the description of any quantum system within the phase space of momentum and position, in close analogy with classical physics approaches. Its general definition for the pure stationary quantum states \(\left|n\right\rangle\), under the assumption that the momentum operator \(\hat{p}\) and the position operator \(\hat{x}\) of the one-dimensional quantum system under consideration do not commute, can be written as follows [21]:

\[W_{n}\left(p,x\right)=\frac{1}{4\pi^{2}}\int\int\left\langle n\right|e^{i\left(\lambda\hat{p}+\mu\hat{x}\right)}\left|n\right\rangle e^{-i\left(\lambda p+\mu x\right)}d\mu d\lambda. \tag{2.1}\]

The \(\lambda\) and \(\mu\) appearing in this definition act as real variables generally associated with the values of the momentum and position of the quantum system itself. Definition (2.1) simplifies considerably if one takes into account the canonical commutation relation between the momentum and position operators of the one-dimensional quantum system, \(\left[\hat{p},\hat{x}\right]=-i\hbar\). Then, (2.1) reduces to the well-known definition of the Wigner function empirically introduced in [1] as a method allowing one to compute the quantum corrections to the thermodynamic equilibrium state of the physical system under consideration.
That definition of the Wigner distribution function is the following integral over a combination of shifted wavefunctions:

\[W_{n}\left(p,x\right)=\frac{1}{2\pi\hbar}\int\psi_{n}^{\ast}\left(x-\frac{1}{2}x^{\prime}\right)\psi_{n}\left(x+\frac{1}{2}x^{\prime}\right)e^{-i\frac{px^{\prime}}{\hbar}}dx^{\prime}. \tag{2.2}\]

Here, \(\psi_{n}\left(x\right)\) are the orthonormalized wavefunctions of the stationary states of the quantum system under consideration in the configuration representation. A similar definition of the Wigner function can also easily be written down via the momentum-representation wavefunction of the quantum system. The general definition of the Wigner function, (2.1) or (2.2), also imposes the boundedness restriction \(\left|W_{n}\left(p,x\right)\right|\leq\left(\pi\hbar\right)^{-1}\) on it. This behavior reflects that the function can take both positive and negative values. Therefore, the function is called a joint quasiprobability distribution function of momentum \(p\) and position \(x\). However, the function defined through (2.1) or (2.2) is strictly positive if the wavefunctions \(\psi\left(x\right)\) are strictly of Gaussian form. A well-known example of such behavior is the ground-state Wigner function of the non-relativistic quantum harmonic oscillator. The analytical expression for an arbitrary stationary state \(n\) of this quantum system under the action of the external homogeneous field \(V^{ext}\left(x\right)=gx\) can be computed exactly via the following orthonormalized wavefunctions of the stationary states:

\[\psi_{Nn}^{g}\left(x\right)=C_{Nn}e^{-\frac{\lambda_{0}^{2}}{2}\left(x+x_{0}\right)^{2}}H_{n}\left(\lambda_{0}\left(x+x_{0}\right)\right),\qquad n=0,1,2,\ldots. \tag{2.3}\]

Here, \(H_{n}\left(x\right)\) is the Hermite polynomial, defined via the \({}_{2}F_{0}\) hypergeometric functions [22]. Additionally, the following notations are introduced:

\[\lambda_{0}=\sqrt{\frac{m_{0}\omega}{\hbar}},\ \ \ x_{0}=\frac{g}{m_{0}\omega^{2}},\ \ \ C_{Nn}=\frac{C_{N0}}{\sqrt{2^{n}n!}},\ \ \ C_{N0}=\sqrt[4]{\frac{\lambda_{0}^{2}}{\pi}}.\]

The wavefunctions (2.3) satisfy an orthogonality relation on the region \(\left(-\infty,+\infty\right)\). Therefore, the Wigner distribution function of the non-relativistic quantum harmonic oscillator under the action of the external homogeneous field, computed via (2.3), has the following analytical expression:

\[W_{Nn}^{g}\left(p,x\right)=\frac{\left(-1\right)^{n}}{\pi\hbar}e^{-\frac{2}{\hbar\omega}\left(\frac{p^{2}}{2m_{0}}+\frac{m_{0}\omega^{2}}{2}x^{2}+gx+\frac{g^{2}}{2m_{0}\omega^{2}}\right)}L_{n}\left(\frac{4}{\hbar\omega}\left(\frac{p^{2}}{2m_{0}}+\frac{m_{0}\omega^{2}}{2}x^{2}+gx+\frac{g^{2}}{2m_{0}\omega^{2}}\right)\right). \tag{2.4}\]

Here, \(L_{n}\left(x\right)\) is the Laguerre polynomial, defined via the \({}_{1}F_{1}\) hypergeometric functions [22]. The analytical expression of the non-relativistic quantum harmonic oscillator Wigner distribution function without any applied external field (\(g=0\)), being a special case of eq. (2.4), is also well known [21]:

\[W_{Nn}^{0}\left(p,x\right)=\frac{\left(-1\right)^{n}}{\pi\hbar}e^{-\frac{2}{\hbar\omega}\left(\frac{p^{2}}{2m_{0}}+\frac{m_{0}\omega^{2}}{2}x^{2}\right)}L_{n}\left(\frac{4}{\hbar\omega}\left(\frac{p^{2}}{2m_{0}}+\frac{m_{0}\omega^{2}}{2}x^{2}\right)\right).
\tag{2.5}\]

It can also be obtained by substituting the following analytical expression of the wavefunctions of the stationary states of the non-relativistic quantum harmonic oscillator [23]:

\[\psi_{Nn}^{0}\left(x\right)=\frac{1}{\sqrt{2^{n}n!}}\left(\frac{\lambda_{0}^{2}}{\pi}\right)^{\frac{1}{4}}e^{-\frac{\lambda_{0}^{2}}{2}x^{2}}H_{n}\left(\lambda_{0}x\right). \tag{2.6}\]

Since the ground states of both families of stationary-state wavefunctions of the non-relativistic quantum harmonic oscillator, with and without the action of the external homogeneous field, (2.3) and (2.6), are of Gaussian form, both ground-state Wigner functions \(W_{N0}^{g}\left(p,x\right)\) and \(W_{N0}^{0}\left(p,x\right)\), extracted from (2.4) and (2.5) for \(n=0\),

\[W_{N0}^{g}\left(p,x\right)=\frac{1}{\pi\hbar}e^{-\frac{2}{\hbar\omega}\left(\frac{p^{2}}{2m_{0}}+\frac{m_{0}\omega^{2}}{2}x^{2}+gx+\frac{g^{2}}{2m_{0}\omega^{2}}\right)}, \tag{2.7}\]

\[W_{N0}^{0}\left(p,x\right)=\frac{1}{\pi\hbar}e^{-\frac{2}{\hbar\omega}\left(\frac{p^{2}}{2m_{0}}+\frac{m_{0}\omega^{2}}{2}x^{2}\right)}, \tag{2.8}\]

are strictly positive.

## 3 Computation of the Wigner function of a semiconfined harmonic oscillator model

The previous section dealt with the well-known one-dimensional non-relativistic canonical quantum harmonic oscillator model in phase space. [18] introduced an exactly solvable model of the one-dimensional non-relativistic canonical quantum harmonic oscillator with the following effective mass varying with position:

\[M\left(x\right)=\left\{\begin{array}{ll}\frac{am_{0}}{a+x},&\mbox{for }-a<x<+\infty\\ +\infty,&\mbox{for }x\leq-a\end{array}\right.\qquad(a>0). \tag{3.1}\]

This model is semiconfined, i.e. its wavefunctions of the stationary states vanish both at the position \(x=-a\) and as \(x\rightarrow+\infty\), yet the energy spectrum corresponding to such behavior of the wavefunctions completely overlaps with the energy spectrum of the standard non-relativistic canonical quantum harmonic oscillator. Its wavefunctions of the stationary states are expressed through the generalized Laguerre polynomials as follows:

\[\psi_{n}\left(x\right)\equiv\psi_{n}^{SC}\left(x\right)=C_{n}^{SC}\left(x+a\right)^{\lambda_{0}^{2}a^{2}}e^{-\lambda_{0}^{2}a\left(x+a\right)}L_{n}^{\left(2\lambda_{0}^{2}a^{2}\right)}\left(2\lambda_{0}^{2}a\left(x+a\right)\right), \tag{3.2}\]

where the normalization factor equals

\[C_{n}^{SC}=\left(-1\right)^{n}\left(2\lambda_{0}^{2}a\right)^{\lambda_{0}^{2}a^{2}+\frac{1}{2}}\sqrt{\frac{n!}{\Gamma\left(n+2\lambda_{0}^{2}a^{2}+1\right)}}. \tag{3.3}\]

Next, this model was also generalized to the case of an applied external homogeneous field [19], and the following analytical expression of the wavefunctions of the stationary states in terms of the generalized Laguerre polynomials has been obtained:

\[\psi_{n}\left(x\right)\equiv\psi_{n}^{gSC}\left(x\right)=C_{n}^{gSC}\left(x+a\right)^{\lambda_{0}^{2}a^{2}}e^{-\lambda_{0}^{2}ag_{0}\left(x+a\right)}L_{n}^{\left(2\lambda_{0}^{2}a^{2}\right)}\left(2\lambda_{0}^{2}ag_{0}\left(x+a\right)\right), \tag{3.4}\]

where

\[C_{n}^{gSC}=g_{0}^{\lambda_{0}^{2}a^{2}+\frac{1}{2}}C_{n}^{SC}, \tag{3.5}\]

with the normalization factor \(C_{n}^{SC}\) the same as in (3.3), and the parameter \(g_{0}\) defined as

\[g_{0}=\sqrt{1+2\frac{x_{0}}{a}}.\]

In the absence of the external field, corresponding to \(g=0\) (\(x_{0}=0\)), the parameter \(g_{0}\) defined above simply equals one.
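As a quick numerical sanity check of the semiconfined wavefunctions (3.2), the following sketch (ours, with illustrative values \(\hbar=m_{0}=\omega=1\) and \(a=1.5\), which are assumptions and not values from the paper) verifies their orthonormality on \((-a,+\infty)\), stated formally below.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import genlaguerre, gamma

hbar = m0 = omega = 1.0        # illustrative units (our assumption)
a = 1.5
lam2 = m0 * omega / hbar       # lambda_0^2
s = lam2 * a**2                # lambda_0^2 a^2; the Laguerre parameter is 2s

def psi_sc(n, x):
    """Semiconfined wavefunction (3.2), defined for x > -a (field-free case)."""
    C = ((-1)**n * (2 * lam2 * a)**(s + 0.5)
         * np.sqrt(gamma(n + 1) / gamma(n + 2 * s + 1)))   # C_n^SC from (3.3)
    t = 2 * lam2 * a * (x + a)
    return C * (x + a)**s * np.exp(-t / 2) * genlaguerre(n, 2 * s)(t)

# Overlap integrals should be approximately delta_{mn}.
for m in range(3):
    for n in range(m, 3):
        val, _ = quad(lambda x: psi_sc(m, x) * psi_sc(n, x), -a, np.inf)
        print(m, n, round(val, 8))
```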
Then, the wavefunction (3.4) reduces to the wavefunction (3.2). One needs to note that both wavefunctions (3.4) and (3.2) satisfy the following orthogonality relation: \[\int_{-a}^{+\infty}\psi_{m}(x)\psi_{n}(x)\,dx=\delta_{mn},\] which can be deduced from the known orthogonality relation of the generalized Laguerre polynomials [22]. In [17], we already computed the simplest realization of Gaussian smoothing of the Wigner function for the oscillator model with a position-dependent effective mass exhibiting a semiconfinement effect [18, 19]. However, our initial goal was the computation of the exact expression of the Wigner function itself, without any Gaussian smoothing. First of all, we took into account that the oscillator model with a position-dependent effective mass exhibiting the semiconfinement effect is constructed within the non-relativistic canonical approach. Therefore, the use of the Wigner function definition (2.2) instead of the more general definition (2.1) was sufficient. Next, it was necessary to take into account that the wavefunctions of the stationary states (3.2) and (3.4) vanish both at the position \(x=-a\) and as \(x\rightarrow+\infty\). Therefore, the integral in the definition of the Wigner function (2.2) should respect integration limits corresponding to the region from \(x=-a\) to \(x\rightarrow+\infty\). At that point, we observed that the integrand of eq.(2.2) simply diverges, and this fact made it impossible to perform further calculations. However, Gaussian smoothing of eq.(2.2) suppressed that divergence and allowed us to perform the computations of the simplest Gaussian smoothed Wigner function (or Husimi function) for this semiconfined model and obtain the exact expression of the phase-space function in terms of the parabolic cylinder function [17]. But the phase-space description of the quantum harmonic oscillator must exist from a physics viewpoint, and confinement as an effect cannot make it divergent. The analytical expression of the Husimi function for the same model that we managed to compute was evidence for this statement. A somewhat deeper analysis resolves the divergence problem of the integrand.
We start our computation directly from the semiconfined oscillator model generalized to the case of the applied external homogeneous field and substitute its wavefunctions of the stationary states (3.4) into the definition of the Wigner function (2.2): \[W_{n}^{g}\left(p,x\right) = \frac{\left(C_{n}^{gSC}\right)^{2}}{2\pi\hbar}e^{-2g_{0}\lambda_{0}^{2}a\left(x+a\right)}\] \[\times \int e^{-i\frac{p}{\hbar}x^{\prime}}\left[\left(x+a\right)^{2}-\frac{x^{\prime 2}}{4}\right]^{\lambda_{0}^{2}a^{2}}L_{n}^{\left(2\lambda_{0}^{2}a^{2}\right)}\left(2g_{0}\lambda_{0}^{2}a\left(x-\frac{x^{\prime}}{2}+a\right)\right)L_{n}^{\left(2\lambda_{0}^{2}a^{2}\right)}\left(2g_{0}\lambda_{0}^{2}a\left(x+\frac{x^{\prime}}{2}+a\right)\right)dx^{\prime}. \tag{3.6}\] This expression becomes more compact if one applies the change of variable \(y=x^{\prime}/2\): \[W_{n}^{g}\left(p,x\right) = \frac{\left(C_{n}^{gSC}\right)^{2}}{\pi\hbar}e^{-2g_{0}\lambda_{0}^{2}a\left(x+a\right)}\] \[\times \int e^{-2i\frac{p}{\hbar}y}\left[\left(x+a\right)^{2}-y^{2}\right]^{\lambda_{0}^{2}a^{2}}L_{n}^{\left(2\lambda_{0}^{2}a^{2}\right)}\left(2g_{0}\lambda_{0}^{2}a\left(x+a-y\right)\right)L_{n}^{\left(2\lambda_{0}^{2}a^{2}\right)}\left(2g_{0}\lambda_{0}^{2}a\left(x+a+y\right)\right)dy. \tag{3.7}\] As we noted above, if one defines the limits of the integral from \(-a/2\) to \(+\infty\), then the integral diverges. However, one needs to take into account that if the wavefunction \(\psi_{n}^{gSC}\left(x\right)\) defined through expression (3.4) vanishes at the finite value \(x=-a\), then both factors \(\psi_{n}^{gSC}\left(x+y\right)\) and \(\psi_{n}^{gSC}\left(x-y\right)\) forming the integrand of the Wigner function also have to vanish as soon as their arguments reach \(-a\). This is a main difference between the Wigner function and any of its Gaussian smoothed analogues; it does not exhibit itself if one deals with the phase space of a quantum system defined within the whole region \(\left(-\infty,+\infty\right)\) (i.e. a quantum system whose wavefunctions vanish at the \(\pm\infty\) values of the position and momentum). Taking this property into account, one obtains that the integration limits are \(-\left(x+a\right)\leq y\leq x+a\). Therefore, the modified version of the Wigner function (3.7) is as follows: \[W_{n}^{g}\left(p,x\right) = \frac{\left(C_{n}^{gSC}\right)^{2}}{\pi\hbar}e^{-2g_{0}\lambda_{0}^{2}a\left(x+a\right)}\] \[\times \int\limits_{-\left(x+a\right)}^{x+a}e^{-2i\frac{p}{\hbar}y}\left[\left(x+a\right)^{2}-y^{2}\right]^{\lambda_{0}^{2}a^{2}}L_{n}^{\left(2\lambda_{0}^{2}a^{2}\right)}\left(2g_{0}\lambda_{0}^{2}a\left(x+a-y\right)\right)L_{n}^{\left(2\lambda_{0}^{2}a^{2}\right)}\left(2g_{0}\lambda_{0}^{2}a\left(x+a+y\right)\right)dy. \tag{3.8}\] Now, the integrand does not diverge under these integration limits and the above integral is analytically computable.
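Both the divergence of the naively extended integrand and its cure can be seen numerically. The following short Python sketch is our own illustration (not part of the derivation); it assumes \(m_{0}=\omega=\hbar=1\), \(g=0\) (so \(g_{0}=1\)), \(a=2\) and the ground state \(n=0\). The analytic continuation of the integrand of (3.7) grows polynomially in \(y\), whereas the true wavefunction product vanishes outside \(|y|\leq x+a\), so the finite-limit integral (3.8) coincides with the integral of the zero-extended wavefunctions over the whole real line.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

# m0 = omega = hbar = 1 (lambda_0 = 1), no external field (g_0 = 1), ground state n = 0
a = 2.0
nu = a ** 2                                           # lambda_0^2 a^2 = 4
c0sq = (2 * a) ** (2 * nu + 1) / gamma(2 * nu + 1)    # (C_0^SC)^2 from (3.3)

def psi0(x):
    # ground state of (3.2) for n = 0 (the Laguerre polynomial equals 1),
    # extended by zero behind the wall at x = -a
    return 0.0 if x <= -a else np.sqrt(c0sq) * (x + a) ** nu * np.exp(-a * (x + a))

x, p = 0.5, 0.8

# (i) the analytic continuation of the (3.7) integrand grows like y^{2 nu} ...
naive = lambda y: ((x + a) ** 2 - y ** 2) ** nu
print([naive(y) for y in (5.0, 50.0, 500.0)])         # polynomial blow-up

# (ii) ... while the true wavefunction product is supported on |y| <= x + a,
# so the finite-limit integral (3.8) equals the zero-extended one over all y
f = lambda y: psi0(x - y) * psi0(x + y) * np.cos(2 * p * y)
print(quad(f, -100, 100, points=[-(x + a), x + a])[0],
      quad(f, -(x + a), x + a)[0])                    # identical values
```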
Further, we slightly change the variable \(y\) to \(t\) as follows: \[t=\frac{y}{x+a}.\] Its substitution into (3.8) yields: \[W_{n}^{g}\left(p,x\right) = \frac{\left(C_{n}^{gSC}\right)^{2}}{\pi\hbar}\left(x+a\right)^{2\lambda_{0}^{2}a^{2}+1}e^{-2g_{0}\lambda_{0}^{2}a\left(x+a\right)}\] \[\times \int\limits_{-1}^{1}e^{-2i\frac{p}{\hbar}\left(x+a\right)t}\left(1-t^{2}\right)^{\lambda_{0}^{2}a^{2}}L_{n}^{\left(2\lambda_{0}^{2}a^{2}\right)}\left(2g_{0}\lambda_{0}^{2}a\left(x+a\right)\left(1-t\right)\right)L_{n}^{\left(2\lambda_{0}^{2}a^{2}\right)}\left(2g_{0}\lambda_{0}^{2}a\left(x+a\right)\left(1+t\right)\right)dt. \tag{3.9}\] First of all, it is convenient to analyze the ground state distribution function. Therefore, one considers the value \(n=0\). Then, eq.(3.9) simplifies as follows: \[W_{0}^{g}\left(p,x\right)=\frac{\left(C_{0}^{gSC}\right)^{2}}{\pi\hbar}\left(x+a\right)^{2\lambda_{0}^{2}a^{2}+1}e^{-2g_{0}\lambda_{0}^{2}a\left(x+a\right)}\int\limits_{-1}^{1}e^{-2i\frac{p}{\hbar}\left(x+a\right)t}\left(1-t^{2}\right)^{\lambda_{0}^{2}a^{2}}dt. \tag{3.10}\] The integral appearing here can be computed exactly. To this end, one first replaces the exponential function in the integrand with its Maclaurin expansion: \[e^{-2i\frac{p}{\hbar}\left(x+a\right)t}=\sum\limits_{k=0}^{\infty}\frac{\left(-2i\frac{p}{\hbar}\left(x+a\right)\right)^{k}}{k!}t^{k}.\] One obtains that \[W_{0}^{g}\left(p,x\right)=\frac{\left(C_{0}^{gSC}\right)^{2}}{\pi\hbar}\left(x+a\right)^{2\lambda_{0}^{2}a^{2}+1}e^{-2g_{0}\lambda_{0}^{2}a\left(x+a\right)}\sum\limits_{k=0}^{\infty}\frac{\left(-2i\frac{p}{\hbar}\left(x+a\right)\right)^{k}}{k!}\int\limits_{-1}^{1}\left(1-t^{2}\right)^{\lambda_{0}^{2}a^{2}}t^{k}dt. \tag{3.11}\] The integral appearing in (3.11) can be computed exactly in terms of the Gamma functions, yielding: \[\int\limits_{-1}^{1}\left(1-t^{2}\right)^{\lambda_{0}^{2}a^{2}}t^{k}dt=\frac{1}{2}\left(1+\left(-1\right)^{k}\right)\frac{\Gamma\left(\frac{k+1}{2}\right)\Gamma\left(\lambda_{0}^{2}a^{2}+1\right)}{\Gamma\left(\lambda_{0}^{2}a^{2}+\frac{k+3}{2}\right)}.\] Its substitution into (3.11) eliminates the odd terms of the expansion over \(k\) due to the factor \(\left(1+\left(-1\right)^{k}\right)\). Such a reduction simplifies the ground state Wigner distribution function as follows: \[W_{0}^{g}\left(p,x\right)=\frac{\left(C_{0}^{gSC}\right)^{2}}{\pi\hbar}\Gamma\left(\lambda_{0}^{2}a^{2}+1\right)\left(x+a\right)^{2\lambda_{0}^{2}a^{2}+1}e^{-2g_{0}\lambda_{0}^{2}a\left(x+a\right)}\sum\limits_{m=0}^{\infty}\frac{\left(-2i\frac{p}{\hbar}\left(x+a\right)\right)^{2m}}{\left(2m\right)!}\frac{\Gamma\left(m+\frac{1}{2}\right)}{\Gamma\left(\lambda_{0}^{2}a^{2}+m+\frac{3}{2}\right)}.
\tag{3.12}\] The ratio of the two Gamma functions appearing in the above expansion can be reexpressed as follows: \[\frac{\Gamma\left(m+\frac{1}{2}\right)}{\Gamma\left(\lambda_{0}^{2}a^{2}+m+\frac{3}{2}\right)}=\frac{\sqrt{\pi}}{\Gamma\left(\lambda_{0}^{2}a^{2}+\frac{3}{2}\right)}\frac{\left(\frac{1}{2}\right)_{m}}{\left(\lambda_{0}^{2}a^{2}+\frac{3}{2}\right)_{m}}.\] Its substitution into (3.12) yields: \[W_{0}^{g}\left(p,x\right)=\frac{\left(C_{0}^{gSC}\right)^{2}}{\sqrt{\pi}\hbar}\frac{\Gamma\left(\lambda_{0}^{2}a^{2}+1\right)}{\Gamma\left(\lambda_{0}^{2}a^{2}+\frac{3}{2}\right)}\left(x+a\right)^{2\lambda_{0}^{2}a^{2}+1}e^{-2g_{0}\lambda_{0}^{2}a\left(x+a\right)}\sum\limits_{m=0}^{\infty}\frac{\left(\frac{1}{2}\right)_{m}}{\left(\lambda_{0}^{2}a^{2}+\frac{3}{2}\right)_{m}}\frac{\left(-2i\frac{p}{\hbar}\left(x+a\right)\right)^{2m}}{\left(2m\right)!}. \tag{3.13}\] Now, taking into account that \[\left(C_{0}^{gSC}\right)^{2}=\frac{\left(2g_{0}\lambda_{0}^{2}a\right)^{2\lambda_{0}^{2}a^{2}+1}}{\Gamma\left(2\lambda_{0}^{2}a^{2}+1\right)},\] one needs to apply here the following well-known relation for the Gamma functions \[\Gamma\left(2z\right)=\frac{2^{2z-1}}{\sqrt{\pi}}\Gamma\left(z\right)\Gamma\left(z+\frac{1}{2}\right),\] and the even number factorials: \[\left(2m\right)!=2^{2m}\left(\frac{1}{2}\right)_{m}m!.\] As a result of their use, one obtains the following analytical expression for the ground state Wigner distribution function in terms of the \({}_{0}F_{1}\) hypergeometric function: \[W_{0}^{g}\left(p,x\right)=\frac{2}{\hbar}\frac{\left(g_{0}\lambda_{0}^{2}a\right)^{2\lambda_{0}^{2}a^{2}+1}}{\Gamma\left(\lambda_{0}^{2}a^{2}+\frac{3}{2}\right)\Gamma\left(\lambda_{0}^{2}a^{2}+\frac{1}{2}\right)}\left(x+a\right)^{2\lambda_{0}^{2}a^{2}+1}e^{-2g_{0}\lambda_{0}^{2}a\left(x+a\right)}\,{}_{0}F_{1}\left(\begin{array}{c}-\\ \lambda_{0}^{2}a^{2}+\frac{3}{2}\end{array};-\frac{p^{2}}{\hbar^{2}}\left(x+a\right)^{2}\right). \tag{3.14}\] Now, taking into account the following analytical expression for the Bessel functions of the first kind: \[J_{\alpha}\left(z\right)=\frac{\left(\frac{z}{2}\right)^{\alpha}}{\Gamma\left(\alpha+1\right)}\,{}_{0}F_{1}\left(\begin{array}{c}-\\ \alpha+1\end{array};-\frac{z^{2}}{4}\right),\] one obtains that \[W_{0}^{g}\left(p,x\right)=2\hbar^{\lambda_{0}^{2}a^{2}-\frac{1}{2}}\frac{\left(g_{0}\lambda_{0}^{2}a\right)^{2\lambda_{0}^{2}a^{2}+1}}{\Gamma\left(\lambda_{0}^{2}a^{2}+\frac{1}{2}\right)}e^{-2g_{0}\lambda_{0}^{2}a\left(x+a\right)}\left(\frac{x+a}{p}\right)^{\lambda_{0}^{2}a^{2}+\frac{1}{2}}J_{\lambda_{0}^{2}a^{2}+\frac{1}{2}}\left(2\frac{p}{\hbar}\left(x+a\right)\right). \tag{3.15}\] This result can also be obtained via direct use of the following known table integral [24, eq.(2.3.5.3)] in terms of the Gamma function \(\Gamma\left(\beta\right)\) and the Bessel function of the first kind \(J_{\alpha}\left(z\right)\): \[\int\limits_{-a}^{a}\left(a^{2}-x^{2}\right)^{\beta-1}e^{i\lambda x}dx=\sqrt{\pi}\Gamma\left(\beta\right)\left(\frac{2a}{\lambda}\right)^{\beta-\frac{1}{2}}J_{\beta-\frac{1}{2}}\left(a\lambda\right),\quad a>0,\;\Re\left(\beta\right)>0. \tag{3.16}\] It is interesting to note that the external field does not exhibit itself through the Bessel function of the first kind. This is because it no longer appears in the integrand.
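As an independent verification of the closed form (3.15), one can compare it with a direct numerical evaluation of the integral representation (3.10). The following Python sketch is our own check (not part of the paper), with \(m_{0}=\omega=\hbar=1\), \(a=2\) and \(g=1\) (so \(x_{0}=1\) and \(g_{0}=\sqrt{2}\)):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, jv

m0 = omega = hbar = 1.0
a, g = 2.0, 1.0
lam0 = np.sqrt(m0 * omega / hbar)
nu = lam0 ** 2 * a ** 2                  # lambda_0^2 a^2
x0 = g / (m0 * omega ** 2)
g0 = np.sqrt(1.0 + 2.0 * x0 / a)

def w0_integral(p, x):
    # ground-state integral representation (3.10); the integrand is even in t,
    # so the oscillatory exponential reduces to a cosine
    c0sq = (2 * g0 * lam0 ** 2 * a) ** (2 * nu + 1) / gamma(2 * nu + 1)
    pref = c0sq / (np.pi * hbar) * (x + a) ** (2 * nu + 1) \
        * np.exp(-2 * g0 * lam0 ** 2 * a * (x + a))
    val, _ = quad(lambda t: (1 - t * t) ** nu * np.cos(2 * p * (x + a) * t / hbar), -1, 1)
    return pref * val

def w0_closed(p, x):
    # closed form (3.15) in terms of the Bessel function of the first kind
    pref = 2 * hbar ** (nu - 0.5) * (g0 * lam0 ** 2 * a) ** (2 * nu + 1) / gamma(nu + 0.5)
    return pref * np.exp(-2 * g0 * lam0 ** 2 * a * (x + a)) \
        * ((x + a) / p) ** (nu + 0.5) * jv(nu + 0.5, 2 * p * (x + a) / hbar)

for p, x in [(0.7, 0.3), (1.2, 1.0), (2.0, -1.0)]:
    print(w0_integral(p, x), w0_closed(p, x))   # the columns agree, as (3.16) dictates
```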
Therefore, for the case of the absence of the external field (\(g=0\)), the parameter \(g_{0}\) equals one, and the Wigner function of the ground state (3.15) slightly simplifies as follows: \[W_{0}^{0}\left(p,x\right)=2\hbar^{\lambda_{0}^{2}a^{2}-\frac{1}{2}}\frac{\left(\lambda_{0}^{2}a\right)^{2\lambda_{0}^{2}a^{2}+1}}{\Gamma\left(\lambda_{0}^{2}a^{2}+\frac{1}{2}\right)}e^{-2\lambda_{0}^{2}a\left(x+a\right)}\left(\frac{x+a}{p}\right)^{\lambda_{0}^{2}a^{2}+\frac{1}{2}}J_{\lambda_{0}^{2}a^{2}+\frac{1}{2}}\left(2\frac{p}{\hbar}\left(x+a\right)\right). \tag{3.17}\] Since the Wigner function of the ground state is computed exactly in terms of the Bessel function of the first kind, one can try to compute its analytical expression for arbitrary excited states \(n\). Therefore, one needs to go back to the expression (3.9). Its integrand mainly consists of the product of two Laguerre polynomials with different arguments. One applies there the following known finite summation formula for such products [25]: \[L_{n}^{\left(\alpha\right)}\left(x\right)L_{n}^{\left(\alpha\right)}\left(y\right)=\frac{\Gamma\left(n+\alpha+1\right)}{n!}\sum\limits_{k=0}^{n}\frac{\left(xy\right)^{k}}{k!\Gamma\left(k+\alpha+1\right)}L_{n-k}^{\left(\alpha+2k\right)}\left(x+y\right). \tag{3.18}\] Its substitution into (3.9) yields: \[W_{n}^{g}\left(p,x\right) = \frac{\left(C_{n}^{gSC}\right)^{2}}{\pi\hbar}e^{-2g_{0}\lambda_{0}^{2}a\left(x+a\right)}\left(x+a\right)^{2\lambda_{0}^{2}a^{2}+1}\frac{\Gamma\left(n+2\lambda_{0}^{2}a^{2}+1\right)}{n!}\] \[\times \int\limits_{-1}^{1}e^{-2i\frac{p}{\hbar}\left(x+a\right)t}\left(1-t^{2}\right)^{\lambda_{0}^{2}a^{2}}\sum\limits_{k=0}^{n}\frac{\left(2g_{0}\lambda_{0}^{2}a\left(x+a\right)\right)^{2k}}{k!\Gamma\left(k+2\lambda_{0}^{2}a^{2}+1\right)}\left(1-t^{2}\right)^{k}L_{n-k}^{\left(2\lambda_{0}^{2}a^{2}+2k\right)}\left(4g_{0}\lambda_{0}^{2}a\left(x+a\right)\right)dt. \tag{3.19}\] Next, one interchanges the integral and the finite summation, which changes eq.(3.19) as follows: \[W_{n}^{g}\left(p,x\right) = \frac{\left(C_{n}^{gSC}\right)^{2}}{\pi\hbar}e^{-2g_{0}\lambda_{0}^{2}a\left(x+a\right)}\left(x+a\right)^{2\lambda_{0}^{2}a^{2}+1}\frac{\Gamma\left(n+2\lambda_{0}^{2}a^{2}+1\right)}{n!}\] \[\times \sum\limits_{k=0}^{n}\frac{\left(2g_{0}\lambda_{0}^{2}a\left(x+a\right)\right)^{2k}}{k!\Gamma\left(k+2\lambda_{0}^{2}a^{2}+1\right)}L_{n-k}^{\left(2\lambda_{0}^{2}a^{2}+2k\right)}\left(4g_{0}\lambda_{0}^{2}a\left(x+a\right)\right)\int\limits_{-1}^{1}e^{-2i\frac{p}{\hbar}\left(x+a\right)t}\left(1-t^{2}\right)^{\lambda_{0}^{2}a^{2}+k}dt. \tag{3.20}\] Finally, one can again successfully apply the table integral (3.16), which yields an exact expression for the Wigner function of the arbitrary stationary states of the semiconfined quantum harmonic oscillator in the presence of the homogeneous external field: \[W_{n}^{g}\left(p,x\right) = 2\hbar^{\lambda_{0}^{2}a^{2}-\frac{1}{2}}\frac{\left(g_{0}\lambda_{0}^{2}a\right)^{2\lambda_{0}^{2}a^{2}+1}}{\Gamma\left(\lambda_{0}^{2}a^{2}+\frac{1}{2}\right)}e^{-2g_{0}\lambda_{0}^{2}a\left(x+a\right)}\left(\frac{x+a}{p}\right)^{\lambda_{0}^{2}a^{2}+\frac{1}{2}}\] \[\times \sum\limits_{k=0}^{n}\frac{\left(2g_{0}\lambda_{0}^{2}a\right)^{2k}}{k!}\frac{\left(\lambda_{0}^{2}a^{2}+1\right)_{k}}{\left(2\lambda_{0}^{2}a^{2}+1\right)_{k}}\left(\hbar\frac{x+a}{p}\right)^{k}J_{\lambda_{0}^{2}a^{2}+k+\frac{1}{2}}\left(2\frac{p}{\hbar}\left(x+a\right)\right)L_{n-k}^{\left(2\lambda_{0}^{2}a^{2}+2k\right)}\left(4g_{0}\lambda_{0}^{2}a\left(x+a\right)\right). \tag{3.21}\] The absence of the
external field again slightly simplifies (3.21), since \(g=0\) implies \(g_{0}=1\): \[W_{n}^{0}\left(p,x\right) = 2\hbar^{\lambda_{0}^{2}a^{2}-\frac{1}{2}}\frac{\left(\lambda_{0}^{2}a\right)^{2\lambda_{0}^{2}a^{2}+1}}{\Gamma\left(\lambda_{0}^{2}a^{2}+\frac{1}{2}\right)}e^{-2\lambda_{0}^{2}a\left(x+a\right)}\left(\frac{x+a}{p}\right)^{\lambda_{0}^{2}a^{2}+\frac{1}{2}}\] \[\times \sum\limits_{k=0}^{n}\frac{\left(2\lambda_{0}^{2}a\right)^{2k}}{k!}\frac{\left(\lambda_{0}^{2}a^{2}+1\right)_{k}}{\left(2\lambda_{0}^{2}a^{2}+1\right)_{k}}\left(\hbar\frac{x+a}{p}\right)^{k}J_{\lambda_{0}^{2}a^{2}+k+\frac{1}{2}}\left(2\frac{p}{\hbar}\left(x+a\right)\right)L_{n-k}^{\left(2\lambda_{0}^{2}a^{2}+2k\right)}\left(4\lambda_{0}^{2}a\left(x+a\right)\right). \tag{3.22}\] We obtained an exact expression of the Wigner function of the semiconfined quantum harmonic oscillator under the action of the external homogeneous field. In the next section, its main properties as well as the behavior of the semiconfined quantum harmonic oscillator model in the phase space will be briefly discussed. Figure 1: A comparative plot of the semiconfined quantum harmonic oscillator Wigner function (3.15) of the ground state (\(n=0\)) without an external field (\(g=0\), left plots) and with an external field (\(g=2,\ 4\); middle and right plots). Upper plots correspond to the confinement parameter \(a=2\), whereas middle plots correspond to \(a=4\) and lower plots correspond to \(a\rightarrow\infty\) (\(m_{0}=\omega=\hbar=1\)). ## 4 Discussions In the previous section, we computed exactly an analytical expression of the Wigner quasiprobability distribution function of the semiconfined quantum harmonic oscillator described by the wavefunctions of its stationary states (3.4). We considered two cases: the presence of the external homogeneous field and its absence (the special case \(g=0\)). Therefore, we obtained two expressions of the Wigner function for arbitrary \(n\) states: the Wigner quasiprobability distribution function under the action of the external field (3.21) and in the case of the absence of such a field (3.22). Graphical visualization of these analytical expressions is the best tool for a deeper analysis of their behavior and of possible 'hidden' differences from the canonical harmonic oscillator Wigner function expressions (2.4) and (2.5). In fig.1, we restrict the depictions to the ground state Wigner functions (3.15) and (3.17). These functions are strictly positive and, being simpler cases of the Wigner functions (3.21) and (3.22) of arbitrary \(n\) states, they are sufficient for a qualitative analysis of the phase space features of the semiconfined oscillator model. Nine plots are presented, depicting the dependence of the ground state Wigner function on the parameters \(a\) and \(g\). The values \(a=2,4\) and \(a\rightarrow\infty\) as well as \(g=0,2,4\) are considered. In the case of \(a=2\), one observes a squeezed Gaussian distribution around the equilibrium position-momentum value, because the position is semiconfined whereas the momentum is not. The semiconfinement of the position values makes the position distribution non-symmetrical, but it does not disturb the symmetrical distribution behavior of the momentum, as can be seen from the upper left plot. The external field applied to the semiconfined quantum system under study (see the upper middle and right plots) acts completely differently from the canonical harmonic oscillator case. This happens because there is an infinitely high wall located at a negative position value.
It simply reduces the distribution of positive position values and preserves the symmetric behavior, but extends the probability to both positive and negative values of the momentum. Greater values of the semiconfinement parameter \(a\) simply recover the symmetric Gaussian behavior of the position in the phase space. Such behavior is a weak signature of the correct extension of the non-relativistic canonical harmonic oscillator phase space in terms of its Wigner function. Finally, the value \(a\rightarrow\infty\) corresponding to the lower plots completely recovers the non-relativistic canonical harmonic oscillator, whose Wigner function is the analytical expression (2.4). As one observes from fig.1, there should be a correct limit from (3.15) to (2.7). We have to use the Stirling approximation of the Gamma function \[\Gamma\left(z+1\right)\cong\sqrt{2\pi z}e^{z\ln z-z},\quad|z|\rightarrow\infty,\] the following asymptotics of the Bessel function of the first kind \[J_{\alpha}\left(z\right)\cong\frac{1}{\sqrt{2\pi}\sqrt[4]{\alpha^{2}-z^{2}}}\exp\left(\sqrt{\alpha^{2}-z^{2}}-\alpha\,\mathrm{arccosh}\,\frac{\alpha}{z}\right),\;\alpha>z>0,\quad\alpha\rightarrow\infty,\] and the following expansions (\(z\ll 1\)): \[\frac{1}{z+1} \approx 1-z+z^{2}-\cdots,\] \[\frac{1}{\sqrt{z+1}} \approx 1-\frac{z}{2}+\frac{3}{8}z^{2}-\cdots,\] \[\sqrt{z+1} \approx 1+\frac{z}{2}-\frac{z^{2}}{8}+\cdots,\] \[\ln\left(z+1\right) \approx z-\frac{z^{2}}{2}+\cdots.\] Substitution of the above-listed approximations, expansions, and asymptotics into (3.15) and further straightforward, lengthy computations lead to the complete recovery of the Wigner function (2.7) in the limit \(a\rightarrow\infty\). Let us prove the correctness of the above-noted limit recovery in more detail. First of all, one analyses the correct recovery of the ground state Wigner function (2.7) from the obtained ground state expression (3.15). One writes down the following expansions: \[g_{0}^{2\lambda_{0}^{2}a^{2}+1} \approx e^{-2\lambda_{0}^{2}x_{0}\left(x_{0}-a\right)},\] \[e^{-2g_{0}\lambda_{0}^{2}a\left(x+a\right)} \approx e^{-2\lambda_{0}^{2}a\left(x+x_{0}+a\right)+\lambda_{0}^{2}x_{0}\left(x_{0}-2x\right)},\] \[\left(m_{0}\omega\frac{x+a}{p}\right)^{\lambda_{0}^{2}a^{2}+\frac{1}{2}} \approx \left(\frac{m_{0}\omega a}{p}\right)^{\lambda_{0}^{2}a^{2}+\frac{1}{2}}e^{-\frac{1}{2}\lambda_{0}^{2}x\left(x-2a\right)},\] as well as the following approximation and asymptotics: \[\Gamma\left(\lambda_{0}^{2}a^{2}+\frac{1}{2}\right) \cong \sqrt{2\pi}\left(\lambda_{0}a\right)^{2\lambda_{0}^{2}a^{2}}e^{-\lambda_{0}^{2}a^{2}},\] \[J_{\lambda_{0}^{2}a^{2}+\frac{1}{2}}\left(\frac{2p}{\hbar}\left(x+a\right)\right) \cong \frac{1}{\sqrt{2\pi}\lambda_{0}a}e^{\lambda_{0}^{2}a^{2}-\frac{p^{2}}{m_{0}\omega\hbar}-\frac{1}{2}\lambda_{0}^{2}x\left(x-2a\right)-\left(\lambda_{0}^{2}a^{2}+\frac{1}{2}\right)\ln\left(2\lambda_{0}^{2}a^{2}\right)+\left(\lambda_{0}^{2}a^{2}+\frac{1}{2}\right)\ln\left(\frac{2pa}{\hbar}\right)}.\] Their substitution into (3.15), with further use of the natural logarithm expansion in the limit \(a\rightarrow\infty\), yields: \[\lim_{a\rightarrow\infty}W_{0}^{g}\left(p,x\right)=W_{N0}^{g}\left(p,x\right). \tag{4.1}\]
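The ground state limit (4.1) can also be observed numerically. The sketch below is our own illustration (again with \(m_{0}=\omega=\hbar=1\) and \(g=0\), so it targets the field-free pair (3.17) and (2.8)); it evaluates the logarithm of (3.17), since logarithms tame the huge prefactors at large \(a\), and shows the values approaching the canonical result as the confinement parameter \(a\) grows.

```python
import numpy as np
from scipy.special import gammaln, jv

def log_w0_semiconfined(p, x, a):
    # log of (3.17) with m0 = omega = hbar = 1 (lambda_0 = 1, g_0 = 1);
    # working in logs avoids overflow of the a^{2a^2+1} / Gamma(...) prefactor
    nu = a ** 2                                   # lambda_0^2 a^2
    return (np.log(2.0) + (2 * nu + 1) * np.log(a) - gammaln(nu + 0.5)
            - 2 * a * (x + a) + (nu + 0.5) * np.log((x + a) / p)
            + np.log(jv(nu + 0.5, 2 * p * (x + a))))

p, x = 0.5, 0.5
for a in [2.0, 4.0, 8.0, 16.0]:            # much larger a underflows jv in float64
    print(a, np.exp(log_w0_semiconfined(p, x, a)))
print('canonical limit (2.8):', np.exp(-(p ** 2 + x ** 2)) / np.pi)   # ~0.1933
```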
We prove the existence of the correct limit for the excited states through the finite-difference operator action version of the Wigner function. There exists the following finite-difference operator action version of the Wigner function (2.4): \[W_{Nn}^{g}\left(p,x\right)=\frac{1}{2^{n}n!}H_{n}\left(\lambda_{0}\left(x+x_{0}-\frac{i\hbar}{2}\frac{\partial}{\partial p}\right)\right)H_{n}\left(\lambda_{0}\left(x+x_{0}+\frac{i\hbar}{2}\frac{\partial}{\partial p}\right)\right)W_{N0}^{g}\left(p,x\right). \tag{4.2}\] Then, one can rewrite (3.21) also in its following finite-difference operator action version: \[W_{n}^{g}\left(p,x\right)=\frac{n!}{\left(2\lambda_{0}^{2}a^{2}+1\right)_{n}}L_{n}^{\left(2\lambda_{0}^{2}a^{2}\right)}\left(2\lambda_{0}^{2}g_{0}a\left(x+a-\frac{i\hbar}{2}\frac{\partial}{\partial p}\right)\right)L_{n}^{\left(2\lambda_{0}^{2}a^{2}\right)}\left(2\lambda_{0}^{2}g_{0}a\left(x+a+\frac{i\hbar}{2}\frac{\partial}{\partial p}\right)\right)W_{0}^{g}\left(p,x\right). \tag{4.3}\] Now, taking into account the following known direct limit from the Laguerre to the Hermite polynomials [22]: \[\lim_{\alpha\rightarrow\infty}\left(\frac{2}{\alpha}\right)^{\frac{n}{2}}L_{n}^{\left(\alpha\right)}\left(\alpha+\sqrt{2\alpha}z\right)=\frac{\left(-1\right)^{n}}{n!}H_{n}\left(z\right),\] one can easily show that the following correct limit from (4.3) to (4.2) also holds: \[\lim_{a\rightarrow\infty}W_{n}^{g}\left(p,x\right)=W_{Nn}^{g}\left(p,x\right). \tag{4.4}\] We conclude that the phase space representation of the semiconfined harmonic oscillator model with a position-dependent effective mass is constructed in terms of the Wigner function of the joint quasiprobability of momentum and position. We have found the exact expression of the joint distribution function for the stationary states of the oscillator model under consideration in terms of the Bessel function of the first kind. It has the correct limit to the Wigner function of the non-relativistic canonical harmonic oscillator. [27] also generalizes the semiconfined harmonic oscillator model with a position-dependent effective mass introduced in [18] and extends the model to the case of the so-called semiconfined shifted oscillator. Despite the extension, the wavefunctions of this family of semiconfined harmonic oscillator potentials preserve their general behavior in terms of the Laguerre polynomials. Therefore, the method developed here for the computation of the exact expression of the Wigner function can be further applied to semiconfined shifted oscillator potentials, too. ## Acknowledgement This work was supported by the Azerbaijan Science Foundation -- Grant Nr **AEF-MCG-2022-1(42)-12/01/1-M-01**.
2308.08670
Improved Approximation Bounds for Minimum Weight Cycle in the CONGEST Model
Minimum Weight Cycle (MWC) is the problem of finding a simple cycle of minimum weight in a graph $G=(V,E)$. This is a fundamental graph problem with classical sequential algorithms that run in $\tilde{O}(n^3)$ and $\tilde{O}(mn)$ time where $n=|V|$ and $m=|E|$. In recent years this problem has received significant attention in the context of fine-grained sequential complexity as well as in the design of faster sequential approximation algorithms, though not much is known in the distributed CONGEST model. We present sublinear-round approximation algorithms for computing MWC in directed graphs, and weighted graphs. Our algorithms use a variety of techniques in non-trivial ways, such as in our approximate directed unweighted MWC algorithm that efficiently computes BFS from all vertices restricted to certain implicitly computed neighborhoods in sublinear rounds, and in our weighted approximation algorithms that use unweighted MWC algorithms on scaled graphs combined with a fast and streamlined method for computing multiple source approximate SSSP. We present $\tilde{\Omega}(\sqrt{n})$ lower bounds for arbitrary constant factor approximation of MWC in directed graphs and undirected weighted graphs.
Vignesh Manoharan, Vijaya Ramachandran
2023-08-16T20:41:53Z
http://arxiv.org/abs/2308.08670v3
# Improved Approximation Bounds for Minimum Weight Cycle in the CONGEST Model+ ###### Abstract Minimum Weight Cycle (MWC) is the problem of finding a simple cycle of minimum weight in a graph \(G=(V,E)\). This is a fundamental graph problem with classical sequential algorithms that run in \(\tilde{O}(n^{3})\) and \(\tilde{O}(mn)\) time1 where \(n=|V|\) and \(m=|E|\). In recent years this problem has received significant attention in the context of hardness through fine-grained sequential complexity [37, 3] as well as in the design of faster sequential approximation algorithms [24, 25, 12]. Footnote 1: We use \(\tilde{O}\), \(\tilde{\Omega}\) and \(\tilde{\Theta}\) to absorb \(polylog(n)\) factors. For computing minimum weight cycle in the distributed CONGEST model, near-linear in \(n\) lower and upper bounds on round complexity are known for directed graphs (weighted and unweighted), and for undirected weighted graphs [29]; these lower bounds also apply to any \((2-\epsilon)\)-approximation algorithm. This paper focuses on round complexity bounds for approximating MWC in the CONGEST model: For coarse approximations we show that for any constant \(\alpha>1\), computing an \(\alpha\)-approximation of MWC requires \(\Omega(\frac{\sqrt{n}}{\log n})\) rounds on weighted undirected graphs and on directed graphs, even if unweighted. We complement these lower bounds with a sublinear \(\tilde{O}(n^{2/3}+D)\)-round algorithm to compute a \((2+\epsilon)\)-approximation of undirected weighted MWC. We also give a \(\tilde{O}(n^{4/5}+D)\)-round algorithm to compute \(2\)-approximate directed unweighted MWC and \((2+\epsilon)\)-approximate directed weighted MWC. To obtain the sublinear round bounds of our approximation algorithms, we design an efficient algorithm for computing \((1+\epsilon)\)-approximate shortest paths from \(k\) sources in directed and weighted graphs, which may be of independent interest for other CONGEST problems. We present an algorithm that runs in \(\tilde{O}(\sqrt{nk}+D)\) rounds if \(k\geq n^{1/3}\) and \(\tilde{O}(\sqrt{nk}+k^{2/5}n^{2/5+o(1)}D^{2/5}+D)\) rounds if \(k<n^{1/3}\), and this round complexity smoothly interpolates between the best known upper bounds for (approximate or exact) SSSP [9, 10] when \(k=1\) and APSP [8] when \(k=n\). ## 1 Introduction We present algorithms and lower bounds to compute a minimum weight cycle in the distributed CONGEST model. Given a graph \(G=(V,E)\) with a non-negative weight \(w(e)\) on each edge \(e\in E\), the minimum weight cycle problem (MWC) asks for a cycle of minimum weight in \(G\). An \(\alpha\)-approximation algorithm (\(\alpha>1\)) for MWC must find a cycle whose weight is within an \(\alpha\) multiplicative factor of the true MWC. Let \(|V|=n\) and \(|E|=m\). MWC is a fundamental and well-studied problem in the sequential context on both directed and undirected graphs, both weighted and unweighted. In the unweighted undirected case the MWC size is called the _girth_. MWC has classical sequential algorithms running in \(\tilde{O}(n^{3})\) time and \(\tilde{O}(mn)\) time. There are also sequential fine-grained hardness results: MWC is in the \(n^{3}\) class [37] and in the \(mn\) class [3] for hardness in graph path problems. In the distributed CONGEST model, a near-linear in \(n\) upper bound on rounds for exact computation of MWC was given in [29] for weighted undirected graphs and for directed graphs (weighted and unweighted) by using a \(\tilde{O}(n)\) APSP algorithm [8] as a subroutine.
This was matched by a nearly optimal \(\tilde{\Omega}(n)\) lower bound [29] for exact computation of MWC in these graph classes, and the lower bound even applies to any \((2-\epsilon)\)-approximation algorithm for MWC for any constant \(\epsilon>0\). In this paper, we present new upper and lower bounds on approximating MWC in the CONGEST model. First, we give an approximation lower bound for arbitrarily large constant approximation factors \(\alpha>1\): a \(\tilde{\Omega}(\sqrt{n})\) lower bound for computing an \(\alpha\)-approximate MWC in directed (weighted and unweighted) graphs and undirected weighted graphs. We complement these results with sublinear upper bounds, with a \(2\)-approximation algorithm for directed unweighted graphs running in \(\tilde{O}(n^{4/5}+D)\) rounds. For weighted graphs, we show \((2+\epsilon)\)-approximation algorithms running in \(\tilde{O}(n^{2/3}+D)\) rounds for undirected graphs and \(\tilde{O}(n^{4/5}+D)\) rounds for directed graphs. These algorithms use a multiple source approximate SSSP procedure, for which we provide an algorithm that is significantly more efficient than repeating the best known approximate SSSP algorithm, and which has applications for other CONGEST problems. ### Our Results and Techniques All of our lower bounds hold for randomized algorithms, and the algorithms we present are also randomized. Our results are for the CONGEST model of distributed computing, defined below. **CONGEST Model.** In the CONGEST model [31], a communication network is represented by a graph \(G=(V,E)\) where nodes model processors and edges model bounded-bandwidth communication links between processors. Each node has a unique identifier in \(\{0,1,\ldots,n-1\}\) where \(n=|V|\), and each node only knows the identifiers of itself and its neighbors in the network. Each node has infinite computational power. The nodes perform computation in synchronous rounds, where each node can send a message of up to \(\Theta(\log n)\) bits to each neighbor and can receive the messages sent to it by its neighbors. The complexity of an algorithm is measured by the number of rounds until the algorithm terminates, as a function of \(n\) and \(D\), the undirected diameter of \(G\). We consider algorithms on both weighted and unweighted graphs \(G\) in this paper, where in the weighted case each edge has a weight assignment \(w:E(G)\rightarrow\{0,1,\ldots,W\}\) and the weight of an edge is known to the vertices incident to it. In the weighted case, \(O(\log n+\log W)\) bits can be sent to and received by each node in a round, i.e., a constant number of words. The graph \(G\) can be directed or undirected. Following the convention for CONGEST algorithms (e.g., [13, 17, 4, 5]), the communication links are always bi-directional and unweighted. In our algorithms, we frequently use the well known broadcast and convergecast CONGEST operations [31]: Broadcasting \(M\) messages in total to all nodes, where each message could originate from any node, can be done in \(O(M+D)\) rounds. In the convergecast operation, each node holds an \(O(\log n)\)-bit value and we want to compute an associative operation (such as minimum or maximum) over all values. This can be done in \(O(D)\) rounds using a broadcast tree and performing the associative operation at each step up the tree. Since some of our algorithms involve cases where we consider a different virtual graph \(G^{\prime}\) on the network \(G\), we denote the cost of a convergecast operation as \(R_{cast}(G)\).
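For concreteness, the following minimal Python sketch (our own illustration, not taken from the paper) simulates a convergecast of the minimum over a rooted BFS tree in the synchronous round-by-round style of the model; the round count equals the tree height, which is at most \(D\).

```python
def convergecast_min(tree_children, root, values):
    """Simulate a convergecast of `min` up a rooted BFS tree.

    tree_children: dict node -> list of its children in the BFS tree
    values: dict node -> local O(log n)-bit value
    Returns (global_min, number_of_rounds)."""
    # build parent pointers and depths with a BFS over the tree
    parent, depth, frontier = {}, {root: 0}, [root]
    while frontier:
        nxt = []
        for u in frontier:
            for c in tree_children.get(u, []):
                parent[c], depth[c] = u, depth[u] + 1
                nxt.append(c)
        frontier = nxt
    height = max(depth.values())
    best = dict(values)
    # one synchronous round per tree level, deepest level first; each node
    # sends a single aggregated value to its parent, so rounds = height <= D
    for level in range(height, 0, -1):
        for u, d in depth.items():
            if d == level:
                best[parent[u]] = min(best[parent[u]], best[u])
    return best[root], height

# toy example: a path-shaped tree of 4 nodes rooted at 0
children = {0: [1], 1: [2], 2: [3]}
print(convergecast_min(children, 0, {0: 7, 1: 3, 2: 9, 3: 5}))  # -> (3, 3)
```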
We now give an overview of our results. #### 1.1.1 Undirected Graphs. For undirected weighted MWC, near-linear lower and upper bounds were given in [29], and the lower bound extends to \((2-\epsilon)\)-approximation algorithms for constant \(\epsilon>0\). We present an \(\alpha\)-approximation lower bound of \(\tilde{\Omega}(\sqrt{n})\) for any constant \(\alpha>1\). We present a \((2+\epsilon)\)-approximation algorithm for undirected weighted MWC taking \(\tilde{O}(n^{2/3}+D)\) rounds, which is sublinear when \(D\) is sublinear in \(n\), in contrast to the near-linear lower bound for \((2-\epsilon)\)-approximation even in constant diameter graphs. For undirected unweighted MWC, a nearly optimal \((2-\frac{1}{g})\)-approximation algorithm taking \(\tilde{O}(\sqrt{n}+D)\) rounds was given in [29] (\(g\) is the MWC value). **Theorem 1**.: _In an undirected weighted graph \(G=(V,E)\) in the CONGEST model with undirected unweighted diameter \(D\), for any constant \(\epsilon>0\):_ (a) _We can compute a \((2+\epsilon)\)-approximation of MWC in \(\tilde{O}(n^{2/3}+D)\) rounds._ (b) _Computing an \(\alpha\)-approximation of MWC, for any constant \(\alpha>1\), requires \(\Omega(\frac{\sqrt{n}}{\log n})\) rounds, even on graphs with diameter \(D=\Theta(\log n)\)._ Our approximation upper bound for undirected weighted MWC is in Section 3.2, and our lower bound is presented in Section 2.2. A key ingredient in our approximation algorithms for MWC is the problem of multiple source approximate SSSP, where we want to compute \((1+\epsilon)\)-approximate SSSP (for constant \(\epsilon>0\)) from \(k\) sources efficiently. We present an algorithm with a tradeoff between the number of sources \(k\) and the round complexity of approximate \(k\)-source SSSP for directed and undirected weighted graphs, which is considerably faster than repeating the current best approximate SSSP algorithm \(k\) times (see Section 4). \begin{table} \begin{tabular}{|c||c|c||c|c|} \hline **Problem** & **Lower Bound** & **Ref.** & **Upper Bound** & **Ref.** \\ \hline \hline _Undirected_ & \((2-\epsilon),\Omega\left(\frac{n}{\log n}\right)\) & [29] & \(1,\tilde{O}(n)\) & Folklore \\ _weighted MWC_ & & & \((2+\epsilon),\tilde{O}(n^{2/3}+D)\) & Thm 1.a \\ & \(\alpha,\Omega\left(\frac{\sqrt{n}}{\log n}\right)\) & Thm 1.b & & \\ \hline _Undirected_ & & & \(1,O(n)\) & Folklore \\ _unweighted MWC_ & \((2-\epsilon),\Omega\left(\frac{\sqrt{n}}{\log n}\right)\) & [18] & \((2-\frac{1}{g}),\tilde{O}(\sqrt{ng}+D)\) & [32] \\ _(Girth)_ & & & \((2-\frac{1}{g}),\tilde{O}(\sqrt{n}+D)\) & [29] \\ \hline _Directed MWC_ & \((2-\epsilon),\Omega\left(\frac{n}{\log n}\right)\) & [29] & \(1,\tilde{O}(n)\) & Folklore \\ _weighted/unweighted_ & & & \(2,\tilde{O}(n^{4/5}+D)\) (unweighted) & Thm 2.a \\ & \(\alpha,\Omega\left(\frac{\sqrt{n}}{\log n}\right)\) & Thm 2.c & \((2+\epsilon),\tilde{O}(n^{4/5}+D)\) (weighted) & Thm 2.b \\ \hline \end{tabular} \end{table} Table 1: Minimum Weight Cycle Results for CONGEST. \(n=|V|\), \(D\) is the undirected diameter of \(G\). ‘Folklore’ is used for an algorithm readily derived from a \(\tilde{O}(n)\)-round APSP algorithm [8, 21, 28]. Some approximation results hold for approximation ratio \(\alpha\) or \((1+\epsilon)\), where \(\alpha>1\) is an arbitrarily large constant, and \(\epsilon>0\) is an arbitrarily small constant. All our upper and lower bounds for directed graphs are for randomized algorithms.
In our undirected MWC algorithm we use the specific case of \(k=n^{1/3}\), and we give a streamlined algorithm taking \(\tilde{O}(n^{2/3}+D)\) rounds for this case in Section 3.1. Scaling has been used in CONGEST distributed SSSP algorithms [30] to compute approximate SSSP by repeated hop-limited BFS computations in scaled unweighted graphs. Here we use scaling to compute approximate weighted MWC from scaled unweighted graphs on which we apply the \((2-\frac{1}{g})\)-approximate undirected unweighted MWC algorithm [29]. Our scaling technique allows us to use hop-limited versions of unweighted MWC approximation algorithms in a black-box manner to compute MWC of short hop length in weighted graphs, both directed and undirected. We present a detailed version of the \(\tilde{O}(\sqrt{n}+D)\)-round unweighted MWC algorithm from [29] in Appendix A with the necessary modifications for completeness. The other component of the algorithms is the use of our improved \(n^{1/3}\)-source approximate SSSP from randomly sampled vertices to handle long hop cycles. The scaling introduces an additional multiplicative error, causing our algorithms to give \((2+\epsilon)\)-approximations instead of the 2-approximations obtained in the unweighted case. #### 1.1.2 Directed Graphs. For exact computation of minimum weight cycle in directed graphs, strong near-linear lower bounds were given in [29] for both the unweighted and weighted cases. This lower bound extends to a \((2-\epsilon)\)-approximation of MWC, but no lower bounds were known for approximation ratios larger than 2. We address this by proving a \(\tilde{\Omega}(\sqrt{n})\) lower bound for \(\alpha\)-approximation, for any constant \(\alpha>1\). This lower bound is obtained by adapting an undirected graph construction in [36, 16] that was used to prove lower bounds for MST and other graph problems. For computing an \(\alpha\)-approximation of directed MWC for \(\alpha\geq 2\), there is a gap between our \(\tilde{\Omega}(\sqrt{n})\) lower bound and the previously known linear exact upper bound [29]. We make partial progress on resolving this problem by showing a \(\tilde{O}(n^{4/5}+D)\)-round algorithm for 2-approximation of directed unweighted MWC. This bound on rounds is sublinear when \(D\) is sublinear in \(n\), and such an algorithm would not be possible for a \((2-\epsilon)\)-approximation algorithm as the linear lower bound of [29] holds even for constant diameter graphs. **Theorem 2**.: _In a directed graph \(G=(V,E)\) in the CONGEST model with undirected unweighted diameter \(D\), for any constant \(\epsilon>0\):_ (a) _We can compute a 2-approximation of unweighted MWC in \(\tilde{O}(n^{4/5}+D)\) rounds._ (b) _We can compute a \((2+\epsilon)\)-approximation of weighted MWC in \(\tilde{O}(n^{4/5}+D)\) rounds._ (c) _Computing an \(\alpha\)-approximation of directed MWC (weighted or unweighted), for any constant \(\alpha>1\), requires \(\Omega(\frac{\sqrt{n}}{\log n})\) rounds, even on graphs with diameter \(D=\Theta(\log n)\)._ Our approximation upper bounds are presented in Section 3.3 (directed unweighted MWC) and Section 3.3.1 (directed weighted MWC), and our lower bound is presented in Section 2.1. Our directed MWC algorithms use \(k\)-source directed SSSP as a subroutine, for which we give an algorithm in Section 3.1.
For the unweighted case, we present a \(\tilde{O}(\sqrt{nk}+D)\)-round algorithm for \(k\)-source exact directed BFS in unweighted graphs when \(k\geq n^{1/3}\), and for the weighted case we present a \(k\)-source \((1+\epsilon)\)-approximate SSSP algorithm with the same round complexity. We use techniques from the sequential \(\tilde{O}(m\sqrt{n})\) time algorithm for 2-approximation of directed weighted MWC [12], which is much faster than the best sequential exact directed MWC algorithm taking \(\tilde{O}(mn)\) time. We compute an MWC candidate among long cycles using directed BFS from sampled sources. For short cycles, we incorporate a pruning technique presented in [12] and use pipelined BFS from all sources, separately handling 'high-traffic' nodes. We obtain a \((2+\epsilon)\)-approximation algorithm for directed weighted MWC with the same round complexity using scaling techniques combined with the unweighted algorithm. #### 1.1.3 Approximate \(k\)-source SSSP. A key subroutine used in our approximation algorithms for MWC in directed and undirected weighted graphs is the computation of breadth-first search (BFS) or approximate SSSP from \(k\) sources. If the CONGEST network on \(n\) nodes has undirected diameter \(D\), then undirected \(k\)-source BFS can be computed in \(O(k+D)\) rounds [28, 20], which is optimal for the entire range of \(1\leq k\leq n\). Adapting this algorithm to directed graphs gives us an \(O(k+D_{dir})\) round algorithm for directed BFS, where \(D_{dir}\) is the directed hop diameter of the network. Since \(D_{dir}\) could be as large as \(n\) even when the undirected diameter is small, using the best known SSSP algorithm [9], which takes \(\tilde{O}(n^{2/5+o(1)}D^{2/5}+\sqrt{n}+D)\) rounds, is better when \(D_{dir}\) is large and \(D\) is small. For \(k\) sources, if we run the best known SSSP algorithm from each source, we cannot pipeline the computation as in undirected BFS since it can have high congestion on some edges. Repeating the exact or approximate SSSP algorithm [10, 9] for each source in sequence gives a complexity of \(\tilde{O}(k\cdot(n^{2/5+o(1)}D^{2/5}+\sqrt{n}+D))\). This method degrades as \(k\) gets larger, and becomes inferior to using the \(\tilde{O}(n)\) round algorithm for APSP. For general \(k\), we give a \(k\)-source approximate SSSP algorithm in Section 4 with the following round complexity. **Theorem 3**.: _We can compute \((1+\epsilon)\)-approximate weighted SSSP from \(k\) sources in directed or undirected weighted graphs with round complexity, for any constant \(\epsilon>0\):_ \[\begin{cases}\tilde{O}(\sqrt{nk}+D)&;k\geq n^{1/3}\\ \tilde{O}(\sqrt{nk}+k^{2/5}n^{2/5+o(1)}D^{2/5}+D)&;k<n^{1/3}\end{cases}\] _Additionally, for the case where \(k\geq n^{1/3}\) and the graph is unweighted, exact directed BFS can be computed in \(\tilde{O}(\sqrt{nk}+D)\) rounds._ Our result is significantly faster than simply repeating the SSSP algorithm \(k\) times, and smoothly interpolates between the round complexities of the best known SSSP algorithm [10, 9] for \(k=1\) and the best known APSP algorithm [8] for \(k=n\). In our undirected MWC algorithm we use the specific case of \(k=n^{1/3}\), and we show a simpler algorithm taking \(\tilde{O}(n^{2/3}+D)\) rounds in Section 3.1. The problem of computing a \((1+\epsilon)\)-approximation of replacement paths in directed weighted graphs in the CONGEST model was considered in [29]. Using our improved \(k\)-source SSSP algorithm, we can improve the round complexity of the approximate replacement path algorithm.
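Ignoring polylogarithmic and \(n^{o(1)}\) factors, the tradeoff of Theorem 3 can be tabulated directly. The toy Python helper below (our own illustration, purely arithmetic and not a simulation of the algorithm) evaluates the dominant terms of the two branches and makes the interpolation between the SSSP bound at \(k=1\) and the APSP bound at \(k=n\) concrete.

```python
def ksssp_rounds(n, k, D):
    """Dominant terms of the Theorem 3 round bound, with polylog and n^{o(1)}
    factors dropped; illustrative arithmetic only."""
    if k >= n ** (1.0 / 3.0):
        return (n * k) ** 0.5 + D
    return (n * k) ** 0.5 + (k ** 2 * n ** 2 * D ** 2) ** 0.2 + D  # k^{2/5} n^{2/5} D^{2/5}

n, D = 10 ** 6, 10
print(ksssp_rounds(n, 1, D))        # ~ SSSP bound:  sqrt(n) + n^{2/5} D^{2/5} + D
print(ksssp_rounds(n, n, D))        # ~ APSP bound:  n
print(ksssp_rounds(n, 10 ** 4, D))  # interpolates smoothly in between
```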
#### 1.1.4 Significance of our Results MWC is a fundamental graph problem, and well-studied in the sequential context. The \(\tilde{O}(n^{3})\) and \(\tilde{O}(mn)\) time sequential algorithms for MWC have stood the test of time, and the problem plays a central role in sequential fine-grained complexity for the \(n^{3}\)[37] and \(mn\)[3] classes for path problems in graphs. These fine-grained complexity classes contain important graph problems, some of which have been studied in the CONGEST model: All Pairs Shortest Paths (APSP) [8], Radius and Eccentricities [1, 5], Betweenness Centrality [20], Replacement Paths (RP) and Second Simple Shortest Path (2-SiSP) [29]. Comparing approximation results for problems in the \(mn\) class in the sequential and distributed setting, we have a nuanced situation where for some problems the CONGEST lower bounds match sequential hardness, while for others the results diverge. For all these problems, the sequential hardness results are conditional on fine-grained complexity assumptions, while the CONGEST lower bounds are unconditional, typically based on communication complexity arguments. The problems for which we have similar sequential and distributed hardness include the following: \((1+\epsilon)\)-approximation of replacement paths, where we have algorithms in the sequential [7] and distributed settings that beat the lower bounds for exact computation; Radius and Eccentricities, with hardness results [1, 2] for \((\frac{3}{2}-\epsilon)\)-approximation and \((\frac{5}{3}-\epsilon)\)-approximation respectively in both settings. Directed MWC follows this pattern: computing a \((2-\epsilon)\)-approximation is hard in both settings [12, 29] while we have better algorithms to compute \(2\)-approximations, \(\tilde{O}(m\sqrt{n})\) in sequential [12] and \(\tilde{O}(n^{4/5}+D)\) in CONGEST (Theorem 2.b). On the other hand, some CONGEST lower bounds are stronger than their counterparts in sequential fine-grained hardness: for APSP we have subcubic approximation algorithms in the sequential setting [38, 6] while any constant approximation of APSP in the CONGEST model requires \(\tilde{\Omega}(n)\) rounds [30]. For undirected weighted MWC, we similarly have a stronger lower bound in the CONGEST model than in sequential fine-grained hardness: a lower bound of \(\tilde{\Omega}(n)\) for \((2-\epsilon)\)-approximation [29] in CONGEST while \((\frac{4}{3}-\epsilon)\)-approximation is a hard problem in the sequential \(n^{3}\) complexity class [37]. These lower bounds are tight given that we show sublinear round algorithms for \((2+\epsilon)\)-approximation (Theorem 1.a) and a sequential \(\frac{4}{3}\)-approximation can be computed in \(O(n^{2})\) time [34]. ### Prior Work **Sequential Minimum Weight Cycle.** The problem of computing a minimum weight cycle in a given graph has been extensively studied in the sequential setting, for directed and undirected, weighted and unweighted graphs. It can be solved by computing All Pairs Shortest Paths (APSP) in the given graph in \(O(n^{3})\) time and in \(\tilde{O}(mn)\) time. The hardness of computing MWC in the fine-grained setting was shown by [37] for the \(O(n^{3})\)-class and by [3] for the \(O(mn)\) class. Fast approximation algorithms for computing MWC have been studied: [12] gives an \(\tilde{O}(n^{2})\) time and an \(\tilde{O}(m\sqrt{n})\) time algorithm for computing a \(2\)-approximation of directed MWC.
For undirected unweighted graphs, [22] computes MWC up to additive error \(1\) in \(O(n^{2})\) time, and a more general multiplicative \(\alpha\)-approximation can be computed in \(\tilde{O}(n^{1+1/\alpha})\) time [25]. For undirected weighted graphs, [34] computes a \(\frac{4}{3}\)-approximation in \(\tilde{O}(n^{2})\) time and [24] computes a general \(\frac{4}{3}\alpha\)-approximation in \(\tilde{O}(n^{1+1/\alpha})\) time. **Distributed Minimum Weight Cycle.** Lower and upper bounds for exact computation of MWC in the CONGEST model were given in [29], with near-linear in \(n\) round complexity bounds for directed graphs and undirected weighted graphs. These results for exact MWC are nearly optimal up to polylog factors. The lower bounds also apply to a \((2-\epsilon)\)-approximation of MWC, but no bounds were known for coarser approximation. **Distributed Girth.** Minimum weight cycle in undirected unweighted graphs, i.e., girth, has been studied in the distributed CONGEST model. An \(O(n)\) algorithm for computing girth was given in [21], and a \(\tilde{\Omega}(\sqrt{n})\) lower bound for computing girth was given in [18] which even applies to any \((2-\epsilon)\)-approximation algorithm. An approximation algorithm that nearly matches this lower bound was given in [29], improving on a result of [32], with a \(\tilde{O}(\sqrt{n}+D)\)-round algorithm for a \((2-\frac{1}{g})\)-approximation (where \(g\) is the girth). For exact computation of girth, the gap between lower and upper bounds has been a longstanding open problem. **Fixed-Length Cycle Detection.** A problem closely related to girth is undirected \(q\)-cycle detection, where we want to check if a graph has a cycle of a certain length \(q\). Lower bounds for \(q\)-cycle detection are given in [14], with an \(\Omega(n)\) lower bound for odd \(q\geq 5\) and sublinear lower bounds for even \(q\geq 4\). The case of \(q=3\), or triangle detection, has been studied extensively [23, 11]. There is a \(\tilde{O}(n^{1/3})\) algorithm for triangle enumeration [11] and this result also applies to directed graphs [33]. For even \(q\geq 4\), sublinear round algorithms are given in [15]. For directed \(q\)-cycle detection (\(q\geq 4\)), a tight linear lower bound was given in [29]. **CONGEST results for APSP and related problems.** The CONGEST round complexity of APSP [30] has been extensively studied, with nearly optimal upper and lower bounds of \(\tilde{O}(n)\)[8] and \(\Omega(\frac{n}{\log n})\)[30] respectively. Upper and lower bounds for some related problems that have sequential \(O(n^{3})\) and \(O(mn)\) algorithms have been studied in the CONGEST model, such as diameter [1, 5], replacement paths and second simple shortest paths [29], radius and eccentricities [1, 5], and betweenness centrality [20]. **CONGEST results for SSSP.** Our algorithms use distributed SSSP as a basic building block, and the CONGEST round complexity of both exact and approximate SSSP has been extensively researched [16, 30, 17, 13, 10, 9]. For exact or \((1+\epsilon)\)-approximate SSSP, the best known upper and lower bounds are \(\tilde{O}(n^{2/5+o(1)}D^{2/5}+\sqrt{n}+D)\)[9] and \(\Omega(\sqrt{n}+D)\)[16, 36] respectively. ## 2 Lower Bounds Strong near-linear lower bounds in the CONGEST model are known [29] for the round complexity of computing minimum weight cycle in directed graphs, and in undirected weighted graphs. In undirected unweighted graphs, where MWC is known as girth, \(\tilde{\Omega}(\sqrt{n})\) lower bounds are known [18].
All these lower bounds also apply to \((2-\epsilon)\)-approximation of MWC. We address the case of \(\alpha\)-approximation for larger constants \(\alpha\) in this section. We establish \(\tilde{\Omega}(\sqrt{n})\) lower bounds for _any_ constant factor approximation algorithms for MWC in directed (weighted or unweighted) and undirected weighted graphs. The lower bounds hold even for graphs with undirected diameter \(D=\Theta(\log n)\). As seen in Section 3, we have sublinear algorithms for \(\alpha\geq 2\), so linear lower bounds are not possible. ### 2.1 Directed Minimum Weight Cycle We show a \(\Omega\left(\frac{\sqrt{n}}{\log n}\right)\) lower bound for any constant \(\alpha\)-approximation of directed MWC, both weighted and unweighted. Our technique is to adapt a general lower bound construction for undirected graphs in [36] to directed MWC. This lower bound graph has been used to show lower bounds for MST and other graph problems [36, 16]. Proof of Theorem 2.c.: We use the construction in Figure 1. The graph \(G\) is constructed as a \(p\)-ary balanced tree with leaves \(u_{0},u_{1},\ldots,u_{L}\) of height \(d=\log_{p}L\). This tree is connected to \(g\) paths of length \(L\), with path \(i\) being made of new vertices \(\langle v_{0}^{i},v_{1}^{i},\ldots v_{L}^{i}\rangle\). Let \(S_{a},S_{b}\) be an instance of Set Disjointness on \(g\) bits. If bit \(i\) of \(S_{a}\) is \(1\), the edge \((u_{0},v_{0}^{i})\) is present in the graph, otherwise the edge is removed. Similarly, if bit \(i\) of \(S_{b}\) is \(1\), the edge \((u_{L},v_{L}^{i})\) is present in the graph and is removed otherwise. Using the parameters \(g=Lp\log_{p}L\), \(p=\Theta(\log n)\), Theorem 4.1 of [36] gives a lower bound of \(\Omega\left(\frac{\sqrt{n}}{\log n}\right)\) rounds for any algorithm computing disjointness in this construction. This graph has undirected diameter \(2d+2\) since any two vertices can be connected through paths in the tree, so the lower bounds hold even for graphs of diameter \(\Theta(\log n)\). We now show that \(G\) has a directed cycle if and only if \(S_{a},S_{b}\) are not disjoint. All edges in the tree are directed towards the leaves except for edges on the path from \(u_{L}\) to the root. Each of the \(g\) paths of length \(L\) is directed from \(v_{0}^{i}\) to \(v_{L}^{i}\). The edges from leaves to path vertices are directed from \(v_{j}^{i}\) to \(u_{j}\), except for the first leaf where edges are directed from \(u_{0}\) to \(v_{0}^{i}\); see Figure 1 for details. We have directed the edges such that any cycle in the graph must involve a path from the root to some \(u_{j}\) along with a path from \(u_{j}\) to \(u_{L}\). The only such path is through \(u_{0}\) and along one of the \(v^{i}\) paths. Such a path from \(u_{0}\) to \(u_{L}\) through \(v^{i}\) exists if and only if both \(S_{a}[i]\) and \(S_{b}[i]\) are \(1\), which means the sets are not disjoint. Since \(G\) has a directed cycle if and only if \(S_{a}\) and \(S_{b}\) are not disjoint, the set disjointness lower bound gives us a \(\Omega\left(\frac{\sqrt{n}}{\log n}\right)\) lower bound for an \(\alpha\)-approximation algorithm for directed MWC, for any \(\alpha>1\). ### 2.2 Undirected Weighted Minimum Weight Cycle We also adapt the constant approximation lower bound for directed graphs to undirected weighted MWC. Proof of Theorem 1.b.: We modify the directed MWC construction and make all edges undirected, shown in Figure 2.
To force the minimum weight cycle to use the edges corresponding to \(S_{a}\) and \(S_{b}\), we use weights. Set all edge weights to one, except the edges \((u_{1},v_{1}^{i}),(u_{2},v_{2}^{i}),\cdots,\)\((u_{L-1},v_{L-1}^{i})\) for all \(1\leq i\leq g\), which are set to weight \(\alpha n\). This means that any cycle involving one of these high weight edges has weight at least \(\alpha n+1\). The only possible cycles using exclusively weight \(1\) edges use the path from the root to \(u_{0}\) and \(u_{L}\) along with one of the paths \(v^{i}\); such a cycle exists if and only if \(S_{a}[i]\) and \(S_{b}[i]\) are both \(1\). So, if the sets are not disjoint, there is a weighted cycle of weight \(<n\), and otherwise any cycle has weight at least \(\alpha n+1\). This means any \(\alpha\)-approximation algorithm for weighted undirected MWC can determine if \(S_{a},S_{b}\) are disjoint, and hence we get a \(\Omega\left(\frac{\sqrt{n}}{\log n}\right)\) round lower bound, using the result of [36]. ## 3 Algorithms Our approximation algorithms for directed and undirected weighted MWC use an approximate multiple source SSSP (or exact directed BFS) algorithm as a subroutine. For \(k=n^{1/3}\)-source SSSP, we describe a \(\tilde{O}(n^{2/3}+D)\)-round algorithm in Section 3.1. The general \(k\)-source approximate SSSP problem has other applications, and we present a CONGEST algorithm for it later in Section 4. In Section 3.2, we present a \(\tilde{O}(n^{2/3}+D)\)-round algorithm for \((2+\epsilon)\)-approximation of MWC in undirected weighted graphs. In Section 3.3, for unweighted directed graphs we present a \(\tilde{O}(n^{4/5}+D)\)-round 2-approximation algorithm and for weighted directed MWC we present a \((2+\epsilon)\)-approximation algorithm with the same round complexity. ### 3.1 Multiple Source SSSP from \(n^{1/3}\) sources For unweighted graphs, we show how to compute \(n^{1/3}\)-source directed BFS in \(\tilde{O}(n^{2/3}+D)\) rounds. To compute \((1+\epsilon)\)-approximate SSSP from \(k=n^{1/3}\) sources in a directed or undirected weighted graph, we show an algorithm that runs in \(\tilde{O}(n^{2/3}+D)\) rounds. Note that naively repeating the best known single source algorithm [10] would give us a \(\tilde{O}(n^{5/6}+n^{1/3}D+n^{0.73}D^{2/5})\) round algorithm, and our algorithm is substantially faster. We use this result in our undirected weighted MWC approximation algorithm in Section 3.2, and a generalized version for \(k\geq n^{1/3}\) in the directed MWC approximation algorithm in Section 3.3. Algorithm 1 describes our \(n^{1/3}\)-source exact directed BFS algorithm. The algorithm randomly samples a vertex set \(S\) of size \(\tilde{\Theta}(n^{1/3})\) and computes a skeleton graph on these vertices in lines 2-4. Given a directed graph \(G=(V,E)\) and a subset of vertices \(S\subseteq V\), we define a skeleton graph [30] \(G^{\prime}=(S,E^{\prime})\) as a virtual graph on \(S\), with directed edges \(E^{\prime}\subseteq S\times S\). Each vertex in \(S\) must know its incoming and outgoing edges in \(E^{\prime}\), but an edge need not be a single link in the underlying network. We will construct such a skeleton graph on the set of sampled vertices \(S\), and add directed edges corresponding to hop-limited shortest paths between sampled vertices in the original graph \(G\) -- these directed edges have weight equal to the shortest path distance. The edges of the skeleton graph are determined using an \(h\)-hop directed BFS (\(h=n^{2/3}\)) from each sampled vertex.
These edges, of which there are at most \(|S|^{2}=O(n^{2/3})\), are then broadcast to all vertices so that all pairs shortest paths in the skeleton graph can be computed locally at each vertex in lines 5-6. We then perform an \(h\)-hop BFS from each of the \(k=n^{1/3}\) sources to determine distances from each source to each sampled vertex. These distances are then propagated from each sampled vertex to vertices within \(h\)-hop distance through BFS trees in line 9. This allows us to compute exact distances in the unweighted case. For the weighted case, we use \(O(\log n)\) BFS computations on scaled graphs to compute approximate hop-limited shortest paths, following the framework of [30]. **Lemma 4**.: _We can compute exact directed BFS from \(k=n^{1/3}\) sources in \(\tilde{O}(n^{2/3}+D)\) rounds._ Proof.: We describe our directed BFS algorithm in Algorithm 1. _Correctness:_ We show that our algorithm correctly computes distances so that each vertex \(v\in V\) knows \(d(u,v)\) for each source \(u\in U\). In line 7, we compute \(h\)-hop directed BFS, \(h=n^{2/3}\), from each of the \(k=n^{1/3}\) sources, so all vertices within \(h\) hops from a source have the correct distance. For vertices further away, we use distances from sampled vertices to correctly compute the distance from the source. Consider a vertex \(v\) such that a shortest path to source \(u\) has more than \(h\) hops. By our choice of sampling probability, w.h.p. in \(n\), there is a sampled vertex \(s\in S\) on this shortest path at most \(h\) hops from \(v\). If the shortest path from \(u\) to \(s\) has less than \(h\) hops, then \(s\) knows the distance \(d(u,s)\) through the \(h\)-hop directed BFS in line 7 and propagates this distance to \(v\) in line 9. Otherwise, if the \(u\)-\(s\) shortest path has more than \(h\) hops, then w.h.p. in \(n\) there is another sampled vertex \(s^{\prime}\in S\) that is on this shortest path at most \(h\) hops from \(u\). Thus, \(s^{\prime}\) knows the distance \(d(u,s^{\prime})\) in line 7 and broadcasts this distance. Note that after lines 3-6 all distances between sampled vertices are known and specifically \(d(s^{\prime},s)\) is computed at \(s\). Now, vertex \(s\) can compute the distance \(d(u,s)=\min_{s^{\prime}\in S}d(u,s^{\prime})+d(s^{\prime},s)\). Thus, after line 8 each \(s\in S\) knows \(d(u,s)\) for each \(u\in U\). Now, since we had chosen \(s\) such that the \(s\)-\(v\) shortest path has at most \(h\) hops, the distance \(d(u,s)\) is propagated to \(v\) in line 9 and the distance \(d(u,v)=\min_{s\in S}d(u,s)+d(s,v)\) is correctly computed. _Round Complexity:_ We use the well-known result that directed BFS from \(k\) sources restricted to \(h\) hops can be done in \(O(k+h)\) rounds [28, 20]; this is used in lines 3 and 7. We also use standard results [31] for broadcasting \(M\) messages in \(O(M+D)\) rounds in lines 5,7. For line 9 we used randomized scheduling [19] to pipeline the computation from all \(|S|\) sampled vertices. We have total congestion \(O(k|S|)\) (maximum number of messages through a single edge) since for each \(s\in S\), propagating \(k\) distances contributes \(O(k)\) congestion, and we have dilation \(O(h)\) as we propagate up to \(h\) hops. So we can perform line 9 in \(\tilde{O}(h+k|S|)=\tilde{O}(n^{2/3})\) rounds. The total round complexity of our algorithm is \(\tilde{O}(n^{2/3}+D)\) rounds.
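The distance-combination logic in the proof of Lemma 4 can be prototyped centrally. The Python sketch below is our own illustration: it runs the hop-limited BFS computations of Algorithm 1 sequentially instead of in the pipelined CONGEST schedule, but stitches the distances together exactly as argued above (skeleton APSP, then source-to-skeleton distances, then propagation).

```python
import random
from collections import deque

def hop_bfs(adj, src, h):
    """Hop-limited directed BFS: returns {v: d(src, v)} for all v within h hops."""
    dist, q = {src: 0}, deque([src])
    while q:
        u = q.popleft()
        if dist[u] == h:
            continue
        for v in adj.get(u, []):
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def multi_source_bfs(adj, nodes, sources, h, skeleton_size):
    """Centralized mirror of Algorithm 1: d(u, v) for every source u, node v."""
    INF = float('inf')
    S = random.sample(sorted(nodes), min(skeleton_size, len(nodes)))
    from_S = {s: hop_bfs(adj, s, h) for s in S}        # h-hop BFS from skeleton
    from_U = {u: hop_bfs(adj, u, h) for u in sources}  # h-hop BFS from sources
    # skeleton APSP over the |S|^2 broadcast edges (Floyd-Warshall stands in
    # for each vertex's local computation)
    dS = {(s1, s2): from_S[s1].get(s2, INF) for s1 in S for s2 in S}
    for m in S:
        for s1 in S:
            for s2 in S:
                if dS[s1, m] + dS[m, s2] < dS[s1, s2]:
                    dS[s1, s2] = dS[s1, m] + dS[m, s2]
    # d(u, s) = min(h-hop d(u, s), min over s' of h-hop d(u, s') + d(s', s))
    dUS = {(u, s): min([from_U[u].get(s, INF)] +
                       [from_U[u].get(s2, INF) + dS[s2, s] for s2 in S])
           for u in sources for s in S}
    # propagation: d(u, v) = min(h-hop d(u, v), min over s of d(u, s) + h-hop d(s, v))
    dist = {(u, v): from_U[u].get(v, INF) for u in sources for v in nodes}
    for s in S:
        for v, dsv in from_S[s].items():
            for u in sources:
                if dUS[u, s] + dsv < dist[u, v]:
                    dist[u, v] = dUS[u, s] + dsv
    return dist
```

With \(h=n^{2/3}\) and \(|S|=\tilde{\Theta}(n^{1/3})\), the sampling argument in the proof guarantees w.h.p. that these three phases recover exact distances.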
To compute multiple source approximate SSSP in weighted graphs, we use the following result from [30], which computes hop-bounded multiple source SSSP and will replace the hop-bounded BFS computations.

**Fact 1** (Theorem 3.6 of [30]).: _There is an algorithm that computes \((1+\epsilon)\)-approximate \(h\)-hop \(k\)-source shortest paths in a weighted (directed or undirected) graph \(G=(V,E)\) in \(\tilde{O}(k+h+D)\) rounds, where \(D\) is the undirected diameter of \(G\)._

**Lemma 5**.: _We can compute \((1+\epsilon)\)-approximate SSSP from \(k=n^{1/3}\) sources in \(\tilde{O}(n^{2/3}+D)\) rounds in directed and undirected weighted graphs._

Proof.: We extend the multiple source directed BFS algorithm described in Algorithm 1 to approximate multiple source SSSP by using scaling. The main ingredient in our approximation algorithm is computing \((1+\epsilon)\)-approximate \(h\)-hop SSSP from \(k\) sources in \(\tilde{O}(h+k+D)\) rounds, using the algorithm of [30] stated in Fact 1. Now, we replace the \(\frac{n}{|S|}\)-hop directed BFS computations in lines 3,7,9 of Algorithm 1 by approximate \(\frac{n}{|S|}\)-hop SSSP. With this change, the distances computed in lines 4,6,8,9 are \((1+\epsilon)\)-approximations of the exact distances (after rescaling \(\epsilon\) by a constant factor to account for composing approximations), and thus our final output is \((1+\epsilon)\)-approximate shortest path distances. These changes to Algorithm 1 only increase the round complexity by a factor \(O(\frac{\log n}{\epsilon})\), so the round complexity of our \(n^{1/3}\)-source \((1+\epsilon)\)-approximate SSSP is \(\tilde{O}(n^{2/3}+D)\) in both directed and undirected weighted graphs.

The same idea can be extended to \(k\geq n^{1/3}\) sources to get a \(\tilde{O}(\sqrt{nk}+D)\) algorithm for exact \(k\)-source directed BFS and \((1+\epsilon)\)-approximate \(k\)-source directed SSSP, further discussed in Section 4. We set parameters \(h=\sqrt{nk},|S|=\sqrt{n/k}\) in Algorithm 1 to get round complexity \(\tilde{O}(\sqrt{nk}+D+\frac{n}{k})\), and our assumption \(k\geq n^{1/3}\) implies \(\frac{n}{k}\leq\sqrt{nk}\), which gives our desired result. In Section 4 we also discuss the case of \(k<n^{1/3}\) sources, where we need other techniques such as distributed hopset construction -- but in this case even our directed BFS algorithm only computes \((1+\epsilon)\)-approximate distances; we discuss this issue in Section 5.

### Approximate Undirected Weighted MWC

In this section, we consider the weighted version of MWC in undirected graphs, and we present a \((2+\epsilon)\)-approximation algorithm that takes \(\tilde{O}(n^{2/3}+D)\) rounds, proving Theorem 1.a. This upper bound is sublinear when \(D\) is sublinear and should be contrasted with the known \(\tilde{\Omega}(n)\) lower bound for \((2-\epsilon)\)-approximation [29]. The 2-approximation algorithm for undirected unweighted MWC [29] taking \(O(\sqrt{n}+D)\) rounds is used with modifications in our weighted algorithm, and we describe this modified unweighted algorithm in detail in Corollary 10 in Appendix A.

Our algorithm for undirected weighted MWC deals with long cycles (hop length \(\geq n^{2/3}\)) and short cycles separately. For long cycles, we sample \(\tilde{\Theta}(n^{1/3})\) vertices so that w.h.p. in \(n\) there is at least one sampled vertex on the cycle, and we compute the minimum weight cycle through each sampled vertex. We do this in \(\tilde{O}(n^{2/3}+D)\) rounds using the \(n^{1/3}\)-source SSSP algorithm of Algorithm 1 with the set of sampled vertices as sources.
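Before turning to short cycles, here is a quick numeric sanity check of the sampling argument (a standalone illustration; the constant \(c=3\) is our own choice): sampling each vertex independently with probability \(p=c\ln n/h\) gives \(\tilde{\Theta}(n/h)\) sampled vertices in expectation, while any fixed set of \(h\) vertices -- in particular, \(h\) consecutive vertices of a long cycle -- avoids all samples with probability \((1-p)^{h}\leq e^{-ph}=n^{-c}\).

```
import math

def avoid_probability(n, h, c):
    # P[a fixed set of h vertices contains no sampled vertex]
    p = min(1.0, c * math.log(n) / h)
    return (1 - p) ** h

n = 10 ** 6
h = round(n ** (2 / 3))                          # long cycles have >= h hops
print(n * 3 * math.log(n) / h)                   # expected sample count ~4145, i.e. O~(n^{1/3})
print(avoid_probability(n, h, 3))                # ~9.2e-19
print(avoid_probability(n, h, 3) <= n ** (-3))   # True: the cycle is hit w.h.p.
```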
For short cycles (hop length \(<n^{2/3}\)), we use scaling along with the \(2\)-approximate undirected unweighted MWC algorithm [29]. We run a hop-limited version of the unweighted algorithm on a scaled weighted graph, simulating an edge of weight \(w\) by an unweighted path of \(w\) unit weight edges. We state the result for hop-limited undirected unweighted MWC below, which is proven in Corollary 10.

**Fact 2** (Appendix A).: _Given an undirected unweighted graph \(G=(V,E)\), with underlying network \(G^{*}\), we can compute a \((2-(1/g))\)-approximation of the minimum weight cycle among all cycles of at most \(h\) hops (\(g\) is the \(h\)-hop limited MWC value) in \(\tilde{O}(\sqrt{n}+h+R_{cast}(G^{*}))\) rounds, where \(R_{cast}(G^{*})\) is the round complexity of a convergecast operation._

The scaling technique used here was used in the context of computing approximate shortest paths in [30]. The following theorem from [30] allows us to compute bounded hop weighted shortest paths using repeated bounded hop unweighted BFS computations on scaled graphs.

**Fact 3** (Theorem 3.3 of [30]).: _Given a (directed or undirected) weighted graph \(G=(V,E)\) and parameter \(h\), construct a scaled weighted graph \(G^{i}=(V,E)\), for each integer \(1\leq i\leq\log_{(1+\epsilon)}(hW)\) (\(W\) is the maximum edge weight in \(G\)), with the weight of edge \(e\in E\) changed from \(w(e)\) to \(\left\lceil\frac{2hw(e)}{\epsilon 2^{i}}\right\rceil\). Let \(d_{h}(x,y)\) denote \(h\)-hop limited shortest path distances in \(G\), and \(d^{i}(x,y)\) denote distances in \(G^{i}\). Let the distance estimate \(\tilde{d}_{h}(x,y)\) be defined as follows_

\[\tilde{d}_{h}(x,y)=\min\left\{\frac{\epsilon 2^{i}}{2h}d^{i}(x,y)\;\bigg{|}\;i:d^{i}(x,y)\leq\left(1+\frac{2}{\epsilon}\right)h\right\}\]

_Then, \(\tilde{d}_{h}(x,y)\) is a \((1+\epsilon)\)-approximation of \(d_{h}(x,y)\), i.e., \(d_{h}(x,y)\leq\tilde{d}_{h}(x,y)\leq(1+\epsilon)d_{h}(x,y)\)._

We simulate the weighted graph \(G^{i}\) on the network \(G\), with each node treating an edge with weight \(w\) as a path of \(w\) unweighted edges. So, in a BFS computation the message will be transmitted to a neighbor along a weight-\(w\) edge after \(w\) BFS steps. In each graph \(G^{i}\), the computation performed by the undirected unweighted MWC algorithm will be restricted to \(\left(1+\frac{2}{\epsilon}\right)\cdot h=h^{*}\) hops. Graph \(G^{i}\) may have some large edge weights, and such edges may not be traversed within \(h^{*}\) rounds. But we use the fact that there is some \(i^{*}\) for which a given shortest path \(P\) of \(h\) hops in the original graph \(G\) is approximated by an unweighted shortest path of at most \(h^{*}\) hops in \(G^{i^{*}}\) -- this \(i^{*}\) is in fact \(\left\lceil\log w(P)\right\rceil\), as the proof in [30] indicates.

Proof of Theorem 1.a.: Algorithm 2 computes a \((2+\epsilon)\)-approximation of MWC in \(\tilde{O}(n^{2/3}+D)\) rounds.

_Correctness:_ The sampled set \(S\) has size \(\tilde{O}(n/h)\) and w.h.p. in \(n\), at least one sampled vertex is on any cycle of hop length \(\geq h\). If the MWC \(C\) has hop length more than \(h\), let \(w\in S\) be a sampled vertex on \(C\). In line 3, the shortest path from \(w\) to each \(v\in V\) is computed, and a \((1+\epsilon)\)-approximation of the weight of \(C\) is computed in line 6. Lines 8-10 compute a \((2+2\epsilon)\)-approximation of the minimum weight cycle if it has hop length \(<h\) in \(G\), and we can set \(\epsilon^{\prime}=2\epsilon\) to get a \((2+\epsilon^{\prime})\)-approximation.
To prove this, we use Fact 3, which implies that the minimum of the \(h^{*}\)-hop limited unweighted MWC in \(G^{i}\) over all \(i\) is a \((1+\epsilon)\)-approximation of the \(h\)-hop limited weighted MWC in \(G\). Since the undirected unweighted MWC algorithm computes a \(2\)-approximation, if \(G\) has a cycle \(C\) of hop length at most \(h\), then we compute a \(2(1+\epsilon)\)-approximation of the weight of \(C\) in line 9. Thus, at some iteration \(i\), \(M\) is correctly updated with this weight in line 10.

_Round Complexity:_ Computing the \((1+\epsilon)\)-approximate SSSP from \(\tilde{\Theta}(n/h)=\tilde{\Theta}(n^{1/3})\) sources in line 3 takes \(\tilde{O}(n^{2/3}+D)\) rounds using Algorithm 1. The communication in line 4 sends \(O(|S|)\) words, taking \(\tilde{O}(n^{1/3})\) rounds. Lines 8-10 take \(\tilde{O}(\sqrt{n}+h+D)\) rounds by Corollary 10 -- note that \(R_{cast}=O(D)\), where \(D\) is the undirected diameter of \(G\), regardless of the diameter of the scaled graph \(G^{i}\). The convergecast operation in line 11 takes \(O(D)\) rounds [31]. The total round complexity is \(\tilde{O}(n^{2/3}+D)\).

### Approximate Directed Minimum Weight Cycle

We present our CONGEST algorithm for 2-approximation of directed unweighted MWC in Algorithm 3, which takes \(\tilde{O}(n^{4/5}+D)\) rounds. This is sublinear in \(n\) when \(D\) is \(o(n)\), in contrast to our linear lower bound for exact (or \((2-\epsilon)\)-approximate) computation even for constant \(D\). Our algorithm computes long cycles of hop length \(\geq h=n^{3/5}\) by sampling \(\tilde{\Theta}(n/h)=\tilde{\Theta}(n^{2/5})\) vertices and computing BFS with them as sources using the multiple source BFS algorithm (lines 2-4). To handle shorter cycles (\(<h\) hops), we try to compute a BFS from each vertex \(v\) restricted to \(h\) hops and also restricted to a neighborhood \(P(v)\) (to be defined later) that contains all undiscovered short minimum weight cycles. The construction of \(P(v)\) is inspired by the sequential algorithm of [12], which uses the following result.

**Fact 4** (Lemma 5.1 of [12]).: _In a directed weighted graph \(G\), let \(C\) be a minimum weight cycle containing vertices \(v\) and \(u\). For any vertex \(w\), if \(d(u,w)+2d(v,u)\geq d(w,u)+2d(v,w)\), then a minimum weight cycle containing \(w\) and \(v\) has weight at most \(2w(C)\)._

For each \(v\), we construct a subset \(R(v)\) of the sampled vertices such that the set \(P(v)=\{u\in V\mid\forall t\in R(v),d(u,t)+2d(v,u)\leq d(t,u)+2d(v,t)\}\) has size at most \(\frac{n}{|S|}=\tilde{O}(n^{3/5})\) w.h.p. in \(n\). With this construction, any MWC is either completely contained in some \(P(v)\), or there is a \(w\in R(v)\) such that a cycle through \(w\) is a valid 2-approximation by Fact 4. We compute short cycles in \(G\) by computing an \(h\)-hop BFS restricted to the vertex set \(P(v)\). For the BFS computation from source \(v\), along with BFS messages we transmit the set \(R(v)\) and the distances \(d(v,t)\) for \(t\in R(v)\), so that membership in \(P(v)\) can be tested before the BFS message is transmitted. There is a congestion issue with this computation, as it is possible that some vertex \(u\) belongs to a large number of sets \(P(v)\), and \(u\) would need to handle up to \(n\) messages, one for each such \(v\). We circumvent this by marking 'high-traffic' vertices (\(Z(u)\gets 1\) in lines 22, 24) with more than \(\rho=O(n^{4/5})\) messages to process.
We separately compute \(h\)-hop BFS with these high-traffic vertices as sources (line 28) to compute short cycles through them, and we use the bound on \(|P(v)|\) to bound the number of high-traffic vertices by \(\tilde{O}(n^{4/5})\). We compute short cycles that do not contain any such high-traffic vertices by simultaneously scheduling the BFS computations with random delays [19, 27]. Finally, we perform a convergecast minimum operation (line 29) over all cycles found so far to compute a 2-approximation of MWC.

**Input:** Directed unweighted graph \(G=(V,E)\)

**Output:** \(M\), \(2\)-approximation of weight of a MWC in \(G\)

```
1:  Let \(h=n^{3/5}\), \(\rho=n^{4/5}\). Set \(M_{v}=\infty\) for all \(v\in V\); this will track the minimum weight cycle through \(v\) found so far.
2:  Construct set \(S\) by sampling each vertex \(v\in G\) with probability \(\Theta(\frac{1}{h}\cdot\log^{2}n)\). W.h.p. in \(n\), \(|S|=\Theta(n^{2/5}\cdot\log^{2}n)\).
3:  Perform multiple source directed BFS (Algorithm 4) with \(S\) as the set of sources, computing distances \(d(s,v)\) for \(v\in V\). This is done in \(\tilde{O}(\sqrt{n|S|}+D)\) rounds.
4:  Compute cycles through \(s\in S\): For each edge \((v,s)\), \(M_{v}\leftarrow\min(M_{v},w(v,s)+d(s,v))\).
5:  Broadcast all pairs distances between sampled vertices: Each \(t\in S\) broadcasts \(d(s,t)\) for all \(s\in S\). There are at most \(|S|^{2}\) such distances, which takes \(O(|S|^{2}+D)\) rounds.
6:  Each vertex \(v\) sends its distance information to and from sampled vertices to its neighbors: \(v\) sends \(\{(d(v,s),d(s,v))\mid s\in S\}\) to each neighbor \(u\) in \(O(|S|)\) rounds.
7:  Partition \(S\) into \(\beta=\log n\) sets \(S_{1},\ldots,S_{\beta}\) of size \(\Theta(n^{2/5}\cdot\log n)\).
8:  for each vertex \(v\in G\) do {Local computation at \(v\)}
9:    \(R(v)\leftarrow\emptyset\)
10:   for \(i=1\ldots\beta\) do
11:     Let \(T(v)=\{s\in S_{i}\mid\forall t\in R(v),d(s,t)+2d(v,s)\leq d(t,s)+2d(v,t)\}\).
12:     If \(T(v)\) is not empty, select a random vertex \(s^{*}\in T(v)\) and add it to \(R(v)\).
13: for each vertex \(v\in V\) do {Local computation at \(v\)}
14:   Choose initial delay \(\delta_{v}\in\{1,\ldots,\rho\}\) uniformly at random for the BFS rooted at \(v\). \(Z(v)\gets 0\), denoting whether \(v\) is a high-traffic vertex.
15:   Construct message \(Q(v)=(R(v),\{d(v,t)\mid t\in R(v)\})\) to be sent along the BFS rooted at \(v\). \(Q(v)\) contains \(O(\log n)\) words (\(|R(v)|\leq\beta=\log n\)) and can be sent in \(O(\log n)\) rounds.
16: for \(r=1\ldots(h+\rho)\) do
17:   {Each iteration is a phase \(r\) where each vertex sends at most \(\log n\) BFS messages along out edges. A BFS message is of the form \(Q^{r}(w,u)=(Q(w),d(w,u))\) for up to \(\log n\) different \(w\). Since each \(Q^{r}(w,u)\) has \(O(\log n)\) words, a single phase takes \(O(\log^{2}n)\) CONGEST rounds.}
18:   for each vertex \(v\in G\) do
19:     if \(r=\delta_{v}\) then {Initial message for BFS rooted at \(v\), sent at the appropriate phase}
20:       For each outgoing neighbor \(u\), if \(\forall t\in R(v),d(u,t)+2d(v,u)\leq d(t,u)+2d(v,t)\), send message \((Q(v),d(v,u)=1)\) to \(u\).
21:     {Process and propagate messages from other sources \(w\). In order to use randomized scheduling, we restrict the congestion at a single node to \(\rho=n^{4/5}\) and mark high-traffic vertices (\(Z(v)\gets 1\)) exceeding this congestion. High-traffic vertices are processed separately in line 28.}
22:     Receive at most \(\log n\) messages \(Q^{r-1}(w,v)=(Q(w),d^{*}(w,v))\) from each incoming neighbor. If more than \(\log n\) messages are received through a single incoming edge, terminate propagation and set \(Z(v)\gets 1\).
23:     If message \((Q(w),d^{*}(w,v))\) is not the first message received for source \(w\), discard it. Otherwise set \(d(w,v)=d^{*}(w,v)\). Let \(W^{r}(v)\) denote the remaining set of sources \(w\) with first time messages in the current phase \(r\).
24:     If \(|W^{r}(v)|>\log n\), terminate propagation and set \(Z(v)\gets 1\).
25:     For each \(w\in W^{r}(v)\) and for each outgoing neighbor \(u\), set the estimate \(d^{*}(w,u)=d(w,v)+1\). If \(\forall t\in R(w),d(u,t)+2d^{*}(w,u)\leq d(t,u)+2d(w,t)\), send message \(Q^{r}(w,u)=(Q(w),d^{*}(w,u))\) to \(u\). Note that \(R(w),d(w,t)\) are known to \(v\) from \(Q(w)\), and \(d(u,t),d(t,u)\) are known from line 6.
26: for each vertex \(x\in V\) do {Local computation at \(x\)}
27:   If \(x\) has message \((Q(v),d(v,x))\) and edge \((x,v)\) exists, \(M_{x}\leftarrow\min(M_{x},d(v,x)+w(x,v))\).
28: Let \(Z=\{v\in V\mid Z(v)=1\}\). Perform directed \(h\)-hop BFS with sources \(Z\) in \(O(|Z|+h)\) rounds. For each \(v\in Z\) and edge \((x,v)\), set \(M_{x}\leftarrow\min(M_{x},d(v,x)+w(x,v))\).
29: Return \(M\leftarrow\min_{v\in V}M_{v}\), computed by a convergecast operation [31] in \(O(D)\) rounds.
```

Proof of Theorem 2.a.: We argue that Algorithm 3 correctly computes a \(2\)-approximation of the MWC in a given directed unweighted graph \(G=(V,E)\) in \(\tilde{O}(n^{4/5}+D)\) rounds.

_Correctness:_ Let \(C\) be the MWC of \(G\) with weight \(w(C)\). Whenever we update \(M_{v}\) for any \(v\in V\), we use a shortest path from \(x\) to \(v\) along with an edge \((v,x)\), which means we only record weights of valid directed cycles. Now, consider the following cases for \(C\):

**Case 1: \(w(C)\geq h\):** In this case \(C\) contains at least \(h\) vertices, and hence w.h.p. in \(n\), \(C\) contains a sampled vertex in \(S\) by our choice of sampling probability. If \(s\in S\) is on \(C\), then the computation in line 4 exactly computes \(w(C)\).

For the following cases, define \(P(v)=\{u\in V\mid\forall t\in R(v),d(u,t)+2d(v,u)\leq d(t,u)+2d(v,t)\}\). Let \(v\) refer to an arbitrary vertex on \(C\).

**Case 2: \(w(C)<h\) and \(C\) extends outside \(P(v)\):** If there is some \(v\in C\) such that \(C\) contains a vertex \(u\not\in P(v)\), then we have \(d(u,t)+2d(v,u)>d(t,u)+2d(v,t)\) for some \(t\in R(v)\). By Fact 4, this means that a minimum weight cycle containing \(t\) and \(v\) has weight at most \(2w(C)\), since \(C\) is a minimum weight cycle containing \(v\) and \(u\). Since \(R(v)\subseteq S\), \(t\) is a sampled vertex and hence \(M_{t}\leq 2w(C)\) by the computation in line 4. Thus, we compute a \(2\)-approximation of \(C\).

**Case 3: \(w(C)<h\), \(C\) is contained in \(P(v)\) and \(\exists u\in C,Z(u)=1\):** In this case, \(u\in Z\) in line 28, and the minimum weight cycle through \(u\) is computed. Thus, \(w(C)\) is computed exactly.

**Case 4: \(w(C)<h\), \(C\) is contained in \(P(v)\) and \(\forall u\in C,Z(u)=0\):** If \(Z(u)=0\), then \(u\) receives messages from fewer than \(\rho\) sources, and \(u\) never terminates its execution in line 22. Let \(z\) be the vertex on \(C\) with maximum distance from \(v\), i.e., the cycle \(C\) consists of a shortest path from \(v\) to \(z\) and an edge \((z,v)\). All vertices on the \(v\)-\(z\) shortest path that is part of \(C\) are not high-traffic vertices, and forward all BFS messages, including ones with source \(v\).
So, \(z\) receives the message \((Q(v),d(v,z))\) from the BFS rooted at \(v\); in this case \(M_{z}\leq d(v,z)+w(z,v)=w(C)\) and the weight of \(C\) is exactly computed.

_Round complexity:_ We choose our sampling probability such that \(|S|=\tilde{\Theta}(n/h)=\tilde{\Theta}(n^{2/5})\), so the multiple source SSSP in line 3 takes time \(\tilde{O}(\sqrt{n|S|}+D)=\tilde{O}(n^{7/10}+D)\). In line 5, we broadcast \(|S|^{2}\) values, taking \(O(|S|^{2}+D)=\tilde{O}(n^{4/5}+D)\) rounds. The computation of \(R(v)\) (line 12) is done locally at \(v\), using the distances \(d(v,t)\) known to \(v\) and the distances \(d(s,t)\) obtained from the broadcast.

To pipeline the \(h\)-hop BFS rooted at each \(v\), we use scheduling with randomized delays [19] in lines 13-25. The \(h\)-hop BFS from a single \(v\) takes \(h\) rounds and sends at most one message through each edge. Note that each message of the BFS is of the form \(Q^{r}(v,w)=(Q(v),d(v,w))\) as in line 15. \(Q(v)\) has at most \(\beta=\log n\) words and can be sent across an edge in \(O(\log n)\) rounds. We organize the computation into \((h+\rho)\) phases, with each phase running for \(O(\log^{2}n)\) rounds in which we process \(O(\log n)\) BFS messages. Choose a random delay \(\delta_{v}\in\{1,2,\ldots,\rho\}\) for the BFS rooted at \(v\); then \(v\) sends its initial message at phase \(\delta_{v}\). If a vertex \(u\) receives more than \(\log n\) messages through a single edge in any round, \(u\) terminates its execution in line 22. We will prove that w.h.p. in \(n\) this happens only when \(u\) receives messages from more than \(\rho\) sources, in which case \(u\) can terminate and set \(Z(u)=1\).

If \(u\) receives messages from at most \(\rho\) sources, then each incoming edge to \(u\) receives at most \(\rho\) messages, since a single BFS sends at most one message through an edge. Let this edge receive messages from \(v_{1},v_{2},\ldots,v_{\gamma}\) for \(\gamma\leq\rho\), and let the distance from \(v_{i}\) to \(u\) be \(h_{i}\). Then, the BFS message from \(v_{i}\) is received at \(u\) at phase \(h_{i}+\delta_{v_{i}}\). For a fixed phase \(r\), the message from \(v_{i}\) is sent to \(u\) at phase \(r\) iff \(r=h_{i}+\delta_{v_{i}}\), which happens with probability \(\frac{1}{\rho}\) since \(\delta_{v_{i}}\) is chosen uniformly at random. Using a Chernoff bound, we can show that w.h.p. at most \(\log n\) of the \(\gamma\) messages are sent at phase \(r\), and these can be communicated in \(\log n\) rounds. For outgoing messages, a node \(u\) may have received first time messages from sources \(v_{1},\ldots,v_{\gamma}\). If \(\gamma\geq\rho\), we set \(Z(u)=1\) and terminate (line 24); otherwise we can repeat the same argument above to argue that at most \(\log n\) messages are to be sent out in a single phase. We also have \(h_{i}\leq h\) and \(\delta_{v_{i}}\leq\rho\), so all messages are sent within \(h+\rho\) phases, which gives us round complexity \(\tilde{O}(h+\rho)\).

The \(h\)-hop limited BFS for a single source takes at most \(O(h\log n)\) rounds, since we have \(h\) BFS rounds of sending a message \(Q(v)\), which requires \(O(\log n)\) rounds to communicate. Thus, the dilation is \(O(h\log n)\). The maximum congestion is \(\rho\), since if a vertex \(u\) needs to receive or send more than \(\rho\) messages it sets \(Z(u)=1\) and terminates its execution. So, we can complete lines 16-25 in \(\tilde{O}(h+\rho)=\tilde{O}(n^{4/5})\) rounds. Finally, we need to bound \(|Z|\).
We first argue that \(P(v)\) has size at most \(\frac{n}{|S|}=\tilde{\Theta}(n^{3/5})\) w.h.p. in \(n\) (adapting Lemma 6.2 of [12]): when we add a vertex \(t\) to \(R(v)\) in line 12, we expect \(t\) to cover cycles through half the remaining uncovered vertices, since the condition we check in line 11 is symmetric. At any iteration \(i\), if the number of uncovered vertices before line 11 is larger than \(n^{3/5}\), then with high probability there is some vertex in \(S_{i}\) (which has size \(\Theta(n^{2/5}\log n)\)) that is also not covered and is added to \(R(v)\) in line 12, reducing the remaining number of uncovered vertices by half. So, the probability that the number of uncovered vertices \(P(v)\) remains larger than \(n^{3/5}\) after \(\log n\) such steps is polynomially small.

Note that \(Z(u)=1\) only if \(u\) receives messages from more than \(\rho\) sources \(v\). A message is sent to \(u\) from source \(v\) by some neighbor \(x\) in line 25 only if the membership condition holds for the estimate \(d^{*}(v,u)\geq d(v,u)\), which only happens if \(u\in P(v)\). Define \(P^{-1}(u)=\{v\in V\mid u\in P(v)\}\); then \(Z(u)=1\) only if \(|P^{-1}(u)|\geq\rho\). To bound the number of such \(u\), we use the fact that \(\sum_{u\in V}|P^{-1}(u)|=\sum_{v\in V}|P(v)|\leq n\cdot\tilde{\Theta}(n^{3/5})\). We also have \(\sum_{u\in V}|P^{-1}(u)|\geq\sum_{u\in V,Z(u)=1}|P^{-1}(u)|\geq|Z|\cdot\rho\), and hence \(|Z|\leq\frac{n\cdot\tilde{\Theta}(n^{3/5})}{n^{4/5}}=\tilde{O}(n^{4/5})\). Now, the \(h\)-hop directed BFS in line 28 from \(|Z|\) sources takes \(O(|Z|+h)=\tilde{O}(n^{4/5}+D)\) rounds.

#### 3.3.1 Approximate Directed Weighted MWC

The framework from the undirected weighted MWC algorithm (Section 3.2) can be used in order to compute a \((2+\epsilon)\)-approximation of directed weighted MWC. The broad idea is to use sampling to handle long hop-length cycles, and to use the unweighted MWC algorithm on scaled graphs to handle short hop-length cycles.

Proof of Theorem 2.b.: We prove that a \((2+\epsilon)\)-approximation of directed weighted MWC can be computed in \(\tilde{O}(n^{4/5}+D)\) rounds. The undirected weighted MWC algorithm, Algorithm 2, uses \(h\)-hop restricted undirected unweighted MWC to compute short cycles. For our directed weighted MWC approximation algorithm, we replace this by an \(h=n^{2/3}\)-hop restricted version of Algorithm 3 for 2-approximate directed unweighted MWC, which will be run on scaled directed graphs where weighted edges are replaced by unweighted paths. We will restrict the BFS computations done by the unweighted algorithm to \(h\) hops (in cases where the BFS would have extended further), and the broadcast operations cost \(O(D)\) overhead, where \(D\) is the undirected diameter of the original weighted graph instead of the scaled graph. With these modifications, we can compute a 2-approximation of directed unweighted MWC of a scaled graph restricted to \(h\) hops in \(\tilde{O}(n^{4/5}+h+D)\) rounds. For more details, see the hop-limited modifications for the undirected version in Corollary 10. Now, we modify line 3 of Algorithm 2 to compute long cycles using the directed version of the approximate multiple source SSSP algorithm (Lemma 5). Since scaling introduces an additional \((1+\epsilon)\)-approximation factor, we compute a \((2+\epsilon)\)-approximation of directed weighted MWC. The round complexity with these modifications is \(\tilde{O}(n^{4/5}+D)\), dominated by the hop-limited directed unweighted MWC computation.
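Both weighted reductions above (Theorem 1.a and Theorem 2.b) rest on the scaled-graph estimate of Fact 3. As a sanity check of that estimator, here is a minimal centralized Python sketch (our own illustration: exact Floyd-Warshall stands in for the hop-limited BFS computations on the subdivided graphs, and we use \(\lceil\log_{2}(hW)\rceil\) scaling levels, which suffice because every relevant path has weight at most \(hW\)):

```
import math
from itertools import product

def floyd_warshall(n, wts):
    # wts: {(u, v): weight} for a directed graph on vertices 0..n-1.
    d = [[math.inf] * n for _ in range(n)]
    for i in range(n):
        d[i][i] = 0
    for (u, v), w in wts.items():
        d[u][v] = min(d[u][v], w)
    for m, i, j in product(range(n), repeat=3):
        d[i][j] = min(d[i][j], d[i][m] + d[m][j])
    return d

def scaled_estimate(n, wts, h, eps):
    # Fact 3 estimator: on the scaled graph G^i the weight of edge e becomes
    # ceil(2*h*w(e) / (eps * 2^i)); scaled distances within the hop budget
    # (1 + 2/eps)*h are rescaled by eps*2^i/(2h), and the minimum over all
    # levels i gives a (1+eps)-approximation of the h-hop-limited distances.
    W = max(wts.values())
    est = [[math.inf] * n for _ in range(n)]
    for i in range(1, math.ceil(math.log2(h * W)) + 1):
        scaled = {e: math.ceil(2 * h * w / (eps * 2 ** i)) for e, w in wts.items()}
        d_i = floyd_warshall(n, scaled)
        for x, y in product(range(n), repeat=2):
            if d_i[x][y] <= (1 + 2 / eps) * h:
                est[x][y] = min(est[x][y], eps * 2 ** i / (2 * h) * d_i[x][y])
    return est
```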
## 4 General Approximate Multiple Source SSSP

We give a general algorithm for computing \((1+\epsilon)\)-approximate SSSP from \(k\) sources in directed and undirected graphs, weighted or unweighted. Algorithm 4 runs in \(\tilde{O}(\sqrt{nk}+D)\) rounds if \(k\geq n^{1/3}\) (recovering the result from Section 3.1) and in \(\tilde{O}(\sqrt{nk}+k^{2/5}n^{2/5+o(1)}D^{2/5}+D)\) rounds if \(k<n^{1/3}\). Our algorithm is significantly faster than naively repeating the best exact or approximate SSSP algorithm [10, 9] \(k\) times, which would take \(\tilde{O}(k\cdot(n^{2/5+o(1)}D^{2/5}+\sqrt{n}+D))\) rounds. This result is of independent interest and may have applications to other CONGEST algorithms. One such application is for computing approximate replacement paths in directed weighted graphs, which we describe in Section 4.1.

Our algorithm uses tools from [10, 17]. We extend the algorithm for approximate directed weighted SSSP in [10] to \(k\) sources, and this single source algorithm has the following general steps from [17]: hop-limited BFS is computed using a set of sampled vertices as sources. A directed weighted skeleton graph \(G^{\prime}\) is constructed on these sampled vertices by placing a directed edge between each pair of vertices with a directed path of at most \(h\) hops (see Section 3.1). On this skeleton graph we use a construction from [10] for an \((h,\epsilon)\)-hopset, which is a set of edges added to the graph such that the \(h\)-hop limited distances on this augmented graph are a \((1+\epsilon)\)-approximation of the true shortest path distances. We compute an appropriate \((h,\epsilon)\)-hopset on the skeleton graph on sampled vertices, and compute approximate shortest paths via SSSP in the skeleton graph, using broadcasts for each BFS step. For our \(k\)-source algorithm, we exploit the fact that the hopset applies to all \(k\) sources, and hence we can efficiently compute approximate SSSP in the skeleton graph for \(k\) sources.

The algorithm computes approximate \(k\)-source shortest path distances in the skeleton graph, which gives us distances between sources and sampled vertices. We then propagate these distances from sampled vertices to vertices within a certain hop length in order to compute distances between sources and all vertices. For our skeleton graph shortest path algorithm, we first assume all edges in the skeleton graph are unweighted and directed, and show how to compute \(k\)-source directed BFS (Lemma 6.A). Then, we use a scaling technique (Fact 1, [30]) to extend the algorithm to \((1+\epsilon)\)-approximate shortest paths in the weighted skeleton graph (Lemma 6.B).

In the following lemma, we let \(U\) be the set of sources, with \(|U|=k\). In our algorithm, we will compute hop-limited BFS from the given set of sources \(U\), and we add one directed edge from each source to each sampled vertex, corresponding to a hop-limited directed shortest path.

**Lemma 6**.: _Let \(G^{\prime}=(S,E^{\prime})\) be a directed weighted skeleton graph on directed \(G=(V,E)\) (\(S\subseteq V\) and \(E^{\prime}\subseteq S\times S\)), where the underlying CONGEST network of \(G\) has undirected diameter \(D\) and \(W\) is the maximum weight of an edge in \(G\). Let \(U\subseteq V\) be the set of sources, with \(|U|=k\)._

1. _We can compute exact_ \(k\)_-source_ \(h\)_-hop directed BFS on the skeleton graph_ \(G^{\prime}\) _in_ \(O\left(k\cdot|S|+h\cdot D\right)\) _rounds._
2. _Given an_ \((h,\frac{\epsilon}{2})\)_-hopset (for any constant_ \(\epsilon>0\)_) for the skeleton graph_ \(G^{\prime}\)_, we can compute_ \((1+\epsilon)\)_-approximate_ \(k\)_-source weighted SSSP on the skeleton graph in_ \(\tilde{O}\left((k|S|+hD)\cdot\log W\right)\) _rounds._

Proof.: **[A.]** Assuming the skeleton graph is unweighted, we will compute distances \(d(u,s)\) (which will be known at node \(s\)) for each \(u\in U,s\in S\). We assume that along with the skeleton graph, directed edges \((u,s)\) corresponding to \(u\)-\(s\) directed paths in \(G\) have been computed at node \(s\in S\) if they exist. Since we cannot communicate directly across edges of the skeleton graph (they correspond to hop-limited paths in the network), we will use broadcasts on the network \(G\) to propagate BFS messages.

Consider an \(h\)-hop BFS from one source \(u\in U\). Each BFS message that an intermediate vertex \(v\) in the skeleton graph sends will be of the form \((u,v,d)\), where \(u\) is the source of the BFS and \(d\) is the shortest \(u\)-\(v\) distance. At the first step of the BFS, \(u\) broadcasts the message \((u,u,0)\). All vertices in \(S\) know their incoming edges in the skeleton graph \(G^{\prime}\), and the (unvisited) vertices \(v\) with incoming edges \((u,v)\) will update their distance from \(u\) to be \(1\). All such vertices will now broadcast the message \((u,v,1)\), and so on. If \(N_{G^{\prime}}(u,t)\) is the set of vertices at distance \(t\) from \(u\) in \(G^{\prime}\), all the vertices in \(N_{G^{\prime}}(u,t)\) broadcast BFS messages at step \(t\) of the BFS.

Now, we consider all \(k\) sources \(u_{1},\ldots,u_{k}\in U\). At a given step \(t\), for each \(i\) let \(N_{G^{\prime}}(u_{i},t)\) be the set of vertices broadcasting BFS messages for source \(u_{i}\). To communicate these BFS messages using a broadcast, assume that there is a broadcast tree that has been computed in the network \(G\) with diameter \(D\) [31]. The total number of messages to be broadcast in this step is \(M_{t}=\sum_{i=1}^{k}|N_{G^{\prime}}(u_{i},t)|\), and we perform this broadcast by pipelining through the broadcast tree in \(O(M_{t}+D)\) rounds. Now, we have at most \(h\) steps in the BFS (since we only compute BFS up to \(h\) hops in \(G^{\prime}\)), and for a single source \(u_{i}\), \(\sum_{t=1}^{h}|N_{G^{\prime}}(u_{i},t)|\) cannot exceed \(|S|\), the number of vertices in \(G^{\prime}\). Using this, we bound \(\sum_{t=1}^{h}(M_{t}+D)\) by \(k|S|+hD\), and our round complexity for \(k\)-source \(h\)-hop BFS is \(O(k|S|+hD)\) rounds.

**[B.]** We use the scaling technique of [30] as stated in Fact 1, which allows us to compute bounded hop approximate shortest paths efficiently. We replace the \(h\)-hop BFS used in the unweighted algorithm by an \(h\)-hop \(k\)-source approximate SSSP algorithm on the skeleton graph, which uses \(O(\log n)\) computations of \(O(\frac{h}{\epsilon})\)-hop BFS. This gives us a round complexity of \(\tilde{O}\left((k|S|+hD)\cdot\log W\right)\) for computing \((1+\frac{\epsilon}{2})\)-approximate SSSP in the skeleton graph augmented by the hopset. With the guarantee of the \((h,\frac{\epsilon}{2})\)-hopset, the \(h\)-hop bounded approximate SSSP computations are \((1+\epsilon)\)-approximations of the shortest path distances in the skeleton graph.
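To illustrate the accounting in part **[A.]**, here is a minimal Python sketch (our own simulation, not CONGEST code) that runs the \(k\) skeleton-graph BFS computations in lockstep and tallies the pipelined broadcast cost \(\sum_{t}(M_{t}+D)\); since each skeleton vertex joins at most one frontier per source, \(\sum_{t}M_{t}\leq k|S|\), so the tally is at most \(k|S|+hD\):

```
def skeleton_bfs_cost(adj, sources, h, D):
    # adj: {v: [u, ...]} out-neighbors in the skeleton graph G'.
    # Run one BFS per source, all synchronized step by step, and charge
    # M_t + D rounds for broadcasting the M_t frontier messages of step t.
    dist = {src: {src: 0} for src in sources}
    frontiers = {src: [src] for src in sources}
    total = 0
    for step in range(1, h + 1):
        m_t = sum(len(f) for f in frontiers.values())
        if m_t == 0:
            break
        total += m_t + D
        for src in sources:
            nxt = []
            for v in frontiers[src]:
                for u in adj.get(v, []):
                    if u not in dist[src]:
                        dist[src][u] = step
                        nxt.append(u)
            frontiers[src] = nxt
    return dist, total
```

Now, we present our algorithm for computing \(k\)-source approximate SSSP in directed (unweighted or weighted) graphs and undirected weighted graphs.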
The round complexity of our algorithm matches the best known (exact or approximate) SSSP algorithm [10, 9] for \(k=1\) and the best known APSP algorithm [8] for \(k=n\), and smoothly transitions between these complexities for intermediate \(k\). Our algorithm is sublinear whenever both \(k\) and \(D\) are sublinear.

As in our \(n^{1/3}\)-source algorithm, we sample a set of vertices and compute a skeleton graph on these vertices in lines 2-5. The size of the sampled set \(|S|\) is decided based on the number of sources \(k\) and the undirected diameter of the graph \(D\), as shown in Note 1. Now, we compute an \((h,\epsilon)\)-hopset using the distributed algorithm of [10] for \(h\) chosen according to Note 1. This allows us to compute \(h\)-hop shortest path distances in the skeleton graph with hopset edges included in line 7, which gives shortest path distances between all pairs of sampled vertices. We then follow the same procedure as in Algorithm 1 to compute distances between sampled vertices and sources (line 4), and propagate these distances to all vertices of the graph (line 9).

_Note 1_.: The proof of Theorem 3 establishes the following setting of parameters \(|S|\) and \(h\) for Algorithm 4 based on the input \(k\) and \(D\):

\[|S|=\begin{cases}\sqrt{\frac{n}{k}}&;k\geq n^{1/3}\text{ or }\\ &k<n^{1/3},D<n^{1/4}k^{3/4}\\ \frac{n^{3/5}}{D^{2/5}}&;k<n^{1/3},n^{1/4}k^{3/4}<D<n^{2/3}\\ n^{1/3}&;k<n^{1/3},n^{2/3}<D\end{cases}\quad h=\begin{cases}1&;k\geq n^{1/3}\\ \frac{n^{1/4}}{k^{1/4}}&;k<n^{1/3},D<n^{1/4}k^{3/4}\\ \frac{n^{2/5}}{D^{3/5}}&;k<n^{1/3},n^{1/4}k^{3/4}<D<n^{2/3}\\ 1&;k<n^{1/3},n^{2/3}<D\end{cases}\]

Proof of Theorem 3.: We present our algorithm for approximate shortest paths in directed unweighted graphs in Algorithm 4. The correctness follows from arguments similar to Lemma 4. The main difference is how we compute SSSP in the skeleton graph -- we use a hopset construction in this algorithm, while we used edge broadcasts in the \(n^{1/3}\)-source SSSP algorithm.

We now analyze the number of rounds. The algorithm samples a vertex set \(S\) (whose size is a parameter to be fixed later) and performs BFS computations restricted to \(\frac{n}{|S|}\) hops. On the skeleton graph built on these sampled vertices, we construct a \((1+\epsilon)\)-approximate hopset with hop bound \(h\) (this parameter is fixed later). We use the approximate hopset algorithm of [10], which constructs a \(\left(\frac{|S|^{1/2+o(1)}}{\rho},\epsilon\right)\)-hopset in \(O(|S|\frac{\rho^{2}}{\epsilon^{2}}\log W+\frac{|S|^{1/2+o(1)}D}{\rho\epsilon}\log W)\) rounds. We denote the hop bound by \(h=\frac{|S|^{1/2}}{\rho}\), and get a round complexity of \(O\left(\left(|S|\cdot\frac{|S|}{h^{2}}+h^{1+o(1)}D\right)\cdot\frac{\log W}{\epsilon^{2}}\right)\). Computing \(k\)-source approximate SSSP after this requires \(\tilde{O}(k|S|+h^{1+o(1)}D)\) rounds using Lemma 6. The total round complexity of the algorithm is \(\tilde{O}(\frac{n}{|S|}+\frac{|S|^{2}}{h^{2}}+h^{1+o(1)}D+k|S|)\).

We set the parameters \(|S|\) and \(h\) based on the values of \(D\) and \(k\) as shown in Note 1. Note that for any setting of the parameter \(|S|\), the quantity \(\frac{n}{|S|}+k|S|\) is at least \(\sqrt{nk}\) by the AM-GM inequality. For \(k\geq n^{1/3}\), the parameter setting \(|S|=\sqrt{\frac{n}{k}},h=1\) achieves round complexity \(\tilde{O}(\sqrt{nk}+D+\frac{n}{k})\), which is \(\tilde{O}(\sqrt{nk}+D)\) since \(\frac{n}{k}\leq\sqrt{nk}\) for \(k\geq n^{1/3}\). We also note that \(D\) is a lower bound for any \(k\)-source SSSP algorithm.
Hence, this round complexity is the best we can achieve for any parameter choice for \(k\geq n^{1/3}\). Note that \(h=1\) essentially means all pairs shortest paths are computed at all vertices, recovering the algorithm in Section 3.1. For \(k<n^{1/3}\), we need to set parameters based on \(D\) to minimize the round complexity \(\tilde{O}(\frac{n}{|S|}+\frac{|S|^{2}}{h^{2}}+h^{1+o(1)}D+k|S|)\), with the constraints \(1\leq|S|\leq n,1\leq h\leq\sqrt{|S|}\).

1. When \(D\) is small (\(D\) is \(o(n^{1/4}k^{3/4})\)), the term \(\frac{|S|^{2}}{h^{2}}+h^{1+o(1)}D\) is minimized by setting \(h\) to its maximum value \(\sqrt{|S|}\). With this choice of \(h\), the round complexity is \(\tilde{O}(\frac{n}{|S|}+\sqrt{|S|}D+k|S|)\), which is minimized by setting \(|S|=\sqrt{\frac{n}{k}}\), assuming \(\sqrt{|S|}D\) is small compared to the other terms. For \(D<n^{1/4}k^{3/4}\), we have \(\frac{n^{1/4}}{k^{1/4}}D<\sqrt{nk}\), and the total round complexity is \(\tilde{O}(\sqrt{nk})\).
2. When \(D\) is large (\(D\) is \(\Omega(n^{2/3})\)), we minimize the term \(h^{1+o(1)}D\) by setting \(h=1\), and the resulting round complexity expression is \(\tilde{O}(\frac{n}{|S|}+|S|^{2}+D+k|S|)\). Since we are only concerned with the case \(k<n^{1/3}\), we minimize the expression \(\frac{n}{|S|}+|S|^{2}\) by setting \(|S|=n^{1/3}\), giving round complexity \(\tilde{O}(n^{2/3}+D+kn^{1/3})\). For the case \(k<n^{1/3},D>n^{2/3}\), this is \(\tilde{O}(D)\).
3. For intermediate \(D\), i.e., \(n^{1/4}k^{3/4}<D<n^{2/3}\), we balance the terms \(\frac{|S|^{2}}{h^{2}}\), \(\frac{n}{|S|}\) and \(h^{1+o(1)}D\), giving the parameters \(|S|=\frac{n^{3/5}}{D^{2/5}},h=\frac{n^{2/5}}{D^{3/5}}\), and the round complexity expression is \(\tilde{O}(n^{2/5+o(1)}D^{2/5}+k\frac{n^{3/5}}{D^{2/5}})\). For \(D>n^{1/4}k^{3/4}\), we have \(k\frac{n^{3/5}}{D^{2/5}}\leq kn^{3/5}D^{2/5}\cdot\frac{1}{D^{4/5}}\leq kn^{3/5}D^{2/5}\frac{1}{n^{1/5}k^{3/5}}=k^{2/5}n^{2/5}D^{2/5}\). So, we can rewrite our round complexity as \(\tilde{O}(n^{2/5+o(1)}D^{2/5}+k^{2/5}n^{2/5}D^{2/5})\).

Our final round complexity is \(\tilde{O}(\sqrt{nk}+D)\) if \(k\geq n^{1/3}\) and \(\tilde{O}(\sqrt{nk}+k^{2/5}n^{2/5+o(1)}D^{2/5}+D)\) if \(k<n^{1/3}\). This round complexity is sublinear whenever \(k\) and \(D\) are sublinear, and beats the \(\tilde{O}(n)\)-round APSP algorithm for \(k=o(n)\).

**Weighted Graphs**: We extend the multiple source directed BFS algorithm described in Algorithm 1 to approximate multiple source SSSP by using scaling. We use the algorithm of [30] stated in Fact 1 to compute \((1+\epsilon)\)-approximate \(h\)-hop SSSP from \(k\) sources in \(\tilde{O}(h+k+D)\) rounds, and the approximate SSSP algorithm for the skeleton graph described in Lemma 6.B. Thus, the BFS computations in lines 3,4 each take \(\tilde{O}(|S|+\frac{n}{|S|})\) rounds, increasing the round complexity by a factor \(O(\frac{\log n}{\epsilon})\). The distances \(d(s,s^{\prime})\), \(d(u,s)\) computed in lines 4,7 are still \((1+\epsilon)\)-approximations of the shortest path distances, and hence the final \(d(u,v)\) distances are also \((1+\epsilon)\)-approximations. Thus, we compute \(k\)-source \((1+\epsilon)\)-approximate SSSP with the same round complexity up to \(\log\) factors.

### Application to Approximate Replacement Paths

As an application of our \(k\)-source approximate SSSP algorithm, we improve the \((1+\epsilon)\)-approximate replacement paths (RP) algorithm given in [29] for directed weighted graphs.
In the replacement paths (RP) problem, we are given a graph \(G\), two vertices \(s,t\), and a shortest path \(P_{st}\), and we need to compute the shortest path distance between \(s\) and \(t\) in \(G-\{e\}\) (\(G\) with edge \(e\) removed) for each edge \(e\in P_{st}\). The hop length of the shortest path \(P_{st}\) is an important parameter for RP, denoted \(h_{st}\). The algorithm in [29] for \((1+\epsilon)\)-approximate directed weighted RP takes \(\tilde{O}(\min(n^{2/3}+\sqrt{nh_{st}}+D,h_{st}\cdot(\sqrt{n}+n^{2/5+o(1)}D^{2/5}+D)))\) rounds.

In the approximate RP algorithm (Theorem 1.c of [29]), detours are computed between the nodes on the input shortest path \(P_{st}\) (with hop length \(h_{st}\)) -- here detours are shortest paths in the graph with the path \(P_{st}\) removed. For the case when \(h_{st}\) is small, the approximate detour distances can be efficiently computed using an \(h_{st}\)-source approximate SSSP algorithm run on \(G-P_{st}\). This gives the following improvement to the result in [29].

**Lemma 7**.: _We can compute \((1+\epsilon)\)-approximate replacement paths in a directed weighted graph in \(\tilde{O}(\min(n^{2/3},h_{st}^{2/5}\cdot n^{2/5+o(1)}D^{2/5})+\sqrt{nh_{st}}+D)\) rounds, where \(h_{st}\) is the hop length of the input shortest path._

This result is asymptotically faster than the result of [29] when \(h_{st}\) is \(o(n^{1/3})\) and \(D\) is \(o(n^{2/3})\).

## 5 Conclusion and Open Problems

We have presented several CONGEST upper and lower bounds for computing MWC in directed and undirected graphs, both weighted and unweighted. While many of our results are close to optimal, here are some topics for further research.

* For arbitrary constant approximation of MWC, we show a non-trivial lower bound of \(\tilde{\Omega}(\sqrt{n})\) in directed and undirected weighted graphs. Complementing these lower bound results, we have made progress on the upper bound with 2-approximation algorithms (\((2+\epsilon)\) for weighted graphs) that run in sublinear rounds (when \(D\) is \(o(n)\)), beating the linear lower bound for \((2-\epsilon)\)-approximation. Whether we can bridge this gap for larger approximation ratios, or provide a tradeoff between round complexity and approximation quality, is a topic for further research. Note that for \((2-\epsilon)\)-approximation of MWC, we have shown nearly optimal results in all cases: We gave near-linear lower bounds for directed graphs and for undirected weighted graphs, and these are matched by the \(\tilde{O}(n)\)-round algorithms for the exact case. For undirected unweighted MWC, there is a \((2-\frac{1}{g})\)-approximation algorithm (\(g\) is the value of MWC) with round complexity \(\tilde{O}(\sqrt{n}+D)\) [29], which almost matches the known \(\tilde{\Omega}(\sqrt{n})\) lower bound for \((2-\epsilon)\)-approximation [18].
* Our approximation algorithms for weighted MWC (directed and undirected) are based on scaling techniques, which introduce an additional multiplicative error causing our algorithms to give \((2+\epsilon)\)-approximation instead of the 2-approximation obtained in the unweighted case. A similar phenomenon occurs in the parallel exact SSSP algorithm [26], where unweighted BFS on scaled graphs is used to compute an approximate distance estimate which also satisfies a certain triangle inequality. This is followed by another scaling procedure to compute exact SSSP by repeated approximation.
Can we develop an analogous procedure for MWC to improve our \((2+\epsilon)\)-approximation algorithm for the weighted case to a 2-approximation algorithm?
* We have presented a general \((1+\epsilon)\)-approximate \(k\)-source SSSP algorithm that is significantly faster than repeating the best approximate SSSP algorithm from \(k\) sources. In directed unweighted graphs, we present an algorithm for \(k\)-source exact directed BFS when \(k\geq n^{1/3}\). While there have been recent techniques to obtain exact SSSP algorithms from approximate SSSP algorithms [9, 35, 26], extending them to \(k\) sources seems difficult. These techniques involve distance computations on graphs with weights modified depending on the source, and we can no longer construct a single hopset that simultaneously works for all \(k\) sources. Providing an exact \(k\)-source SSSP algorithm, for weighted graphs and general \(k\), that matches the round complexity of our \(k\)-source approximate SSSP algorithm is a topic for further research.
* For exact computation of MWC, we present close to linear upper and lower bounds for directed graphs (both weighted and unweighted) and for undirected weighted graphs. The only case where we don't have tight bounds is undirected unweighted MWC (girth), whose complexity in the CONGEST model has remained an open problem, with linear-round algorithms and a \(\tilde{\Omega}(\sqrt{n})\) lower bound. This is a topic worth investigating further.

**Acknowledgement.** We thank the referees for their detailed comments.
2310.12462
Unmasking Transformers: A Theoretical Approach to Data Recovery via Attention Weights
In the realm of deep learning, transformers have emerged as a dominant architecture, particularly in natural language processing tasks. However, with their widespread adoption, concerns regarding the security and privacy of the data processed by these models have arisen. In this paper, we address a pivotal question: Can the data fed into transformers be recovered using their attention weights and outputs? We introduce a theoretical framework to tackle this problem. Specifically, we present an algorithm that aims to recover the input data $X \in \mathbb{R}^{d \times n}$ from given attention weights $W = QK^\top \in \mathbb{R}^{d \times d}$ and output $B \in \mathbb{R}^{n \times n}$ by minimizing the loss function $L(X)$. This loss function captures the discrepancy between the expected output and the actual output of the transformer. Our findings have significant implications for the Localized Layer-wise Mechanism (LLM), suggesting potential vulnerabilities in the model's design from a security and privacy perspective. This work underscores the importance of understanding and safeguarding the internal workings of transformers to ensure the confidentiality of processed data.
Yichuan Deng, Zhao Song, Shenghao Xie, Chiwun Yang
2023-10-19T04:41:01Z
http://arxiv.org/abs/2310.12462v1
# Unmasking Transformers: A Theoretical Approach to Data Recovery via Attention Weights

###### Abstract

In the realm of deep learning, transformers have emerged as a dominant architecture, particularly in natural language processing tasks. However, with their widespread adoption, concerns regarding the security and privacy of the data processed by these models have arisen. In this paper, we address a pivotal question: Can the data fed into transformers be recovered using their attention weights and outputs? We introduce a theoretical framework to tackle this problem. Specifically, we present an algorithm that aims to recover the input data \(X\in\mathbb{R}^{d\times n}\) from given attention weights \(W=QK^{\top}\in\mathbb{R}^{d\times d}\) and output \(B\in\mathbb{R}^{n\times n}\) by minimizing the loss function \(L(X)\). This loss function captures the discrepancy between the expected output and the actual output of the transformer. Our findings have significant implications for the Localized Layer-wise Mechanism (LLM), suggesting potential vulnerabilities in the model's design from a security and privacy perspective. This work underscores the importance of understanding and safeguarding the internal workings of transformers to ensure the confidentiality of processed data.

## 1 Introduction

In the intricate and constantly evolving domain of deep learning, the transformer architecture has emerged as a game-changing innovation [14]. This novel architecture has propelled the state-of-the-art performance in a myriad of tasks, and its potency lies in the underlying mechanism known as the "attention mechanism." The essence of this mechanism can be distilled into its unique interaction between three distinct matrices: the **Query** (\(Q\)), the **Key** (\(K\)), and the **Value** (\(V\)), where the **Query** matrix (\(Q\)) represents the questions or the aspects we are interested in, the **Key** matrix (\(K\)) denotes the elements against which these questions are compared or matched, and the **Value** matrix (\(V\)) encapsulates the information we want to retrieve based on the comparisons. These matrices are not just mere multidimensional arrays; they play vital roles in encoding, comparing, and extracting pertinent information from the data. Given this context, the attention mechanism can be captured mathematically as follows:

**Definition 1.1** (Attention matrix computation).: _Let \(Q,K\in\mathbb{R}^{n\times d}\) be two matrices that respectively represent the query and key. Similarly, for a matrix \(V\in\mathbb{R}^{n\times d}\) denoting the value, the attention matrix is defined as_

\[\mathrm{Att}(Q,K,V):=D^{-1}AV,\]

_where the two matrices \(A\in\mathbb{R}^{n\times n}\) and \(D\in\mathbb{R}^{n\times n}\) are defined as:_

\[A:=\exp(QK^{\top})\ \text{ and }\ D:=\mathrm{diag}(A\mathbf{1}_{n}).\]

Here, the matrix \(A\) represents the relationship scores between the query and key, and \(D\) provides normalization, ensuring that the attention weights sum to one. The computation hence deftly combines these relationships with the value matrix to output the final attended representation. In practical large-scale language models [11, 12], there may be multiple levels of attention computation.
For such multi-level architectures, the feed-forward computation can be represented as

\[\underbrace{X_{\ell+1}^{\top}}_{n\times d}\leftarrow\underbrace{D(X_{\ell})^{-1}\exp(X_{\ell}^{\top}Q_{\ell}K_{\ell}X_{\ell})}_{n\times n}\underbrace{X_{\ell}^{\top}}_{n\times d}\underbrace{V_{\ell}}_{d\times d}\]

where \(X_{\ell}\) is the input of the \(\ell\)-th layer, \(X_{\ell+1}\) is the output of the \(\ell\)-th layer, and \(Q_{\ell},K_{\ell},V_{\ell}\) are the attention weights in the \(\ell\)-th layer.

This architecture has particularly played a pivotal role in driving progress across various sub-disciplines of natural language processing (NLP). It has profoundly influenced sectors such as machine translation [13, 14], sentiment analysis [15, 16], language modeling [17], and even the generation of creative text [11, 12]. This trajectory of influence is most prominently embodied by the creation and widespread adoption of Large Language Models (LLMs) like GPT [10] and BERT [13]. These models, along with their successive versions, e.g., GPT-2 [14], GPT-3 [15], PaLM [13], OPT [16], are hallmarks in the field due to their staggering number of parameters and complex architectural designs. These LLMs have achieved unparalleled performance levels, setting new standards in machine understanding and automated text generation [11, 12]. Moreover, their emergence has acted as a catalyst for rethinking what algorithms are capable of, spurring new lines of inquiry and scrutiny within both academic and industrial circles [17]. As these LLMs find broader application across an array of sectors, gaining a thorough understanding of their intricate internal mechanisms is evolving from a topic of scholarly interest into a crucial requirement for their effective and responsible deployment.

Yet, the very complexity and architectural sophistication that propel the success of transformers come with a host of consequential challenges, making their effective and responsible usage nontrivial. Prominent among these challenges is the overarching imperative of ensuring data security and privacy [14, 15, 16]. Within the corridors of the research community, an increasingly pertinent question is emerging regarding the inherent vulnerabilities of these architectures. Specifically, _is it possible to recover the input data by analyzing the attention weights and model outputs?_ To put it in mathematical terms, given a language model represented as \(Y=f(W;X)\), if one has access to the output \(Y\) and the attention weights \(W\), is it possible to mathematically invert the model to obtain the original input data \(X\)?

Addressing this line of inquiry extends far beyond the realm of academic speculation; it has direct and significant implications for practical, real-world applications. This is especially true when these transformer models interact with data that is either sensitive in nature, like personal health records [13], or proprietary, as in the financial sector [15]. With the broader deployment of Large Language Models (LLMs) into environments that adhere to stringent data confidentiality regulations, the mandate for achieving absolute data security becomes unequivocally critical. In this work, we aim to delve deeply into this paramount issue, striving to offer a nuanced understanding of these potential vulnerabilities while suggesting pathways for ensuring safety in the development, training, and utilization of transformer technologies.
In this study, we address a distinct problem that differs from the conventional task of finding optimal weights for a given input and output. Specifically, we assume that the weights are already known, and our objective is to invert the input to recover the original data. The key focus of our investigation lies in identifying the conditions under which successful inversion of the original input is feasible. This problem holds significant relevance in the context of addressing security concerns associated with attention networks. To provide a formal definition of our training objective for data recovery, we optimize a specific criterion that enables effective inversion of the input. By formulating and solving this objective, we aim to gain valuable insights into the security implications and vulnerabilities of attention networks.

**Definition 1.2** (Regression model).: _Given the attention weights \(W=KQ^{\top}\in\mathbb{R}^{d\times d}\), \(V\in\mathbb{R}^{d\times d}\), and output \(B\in\mathbb{R}^{n\times d}\), the goal is to find \(X\in\mathbb{R}^{d\times n}\) minimizing_

\[L(X):=\|\underbrace{D(X)^{-1}\exp(X^{\top}WX)}_{n\times n}\underbrace{X^{\top}}_{n\times d}\underbrace{V}_{d\times d}-\underbrace{B}_{n\times d}\|_{F}^{2}\]

_where_

* \(D(X)=\operatorname{diag}(\exp(X^{\top}WX)\mathbf{1}_{n})\in\mathbb{R}^{n\times n}\)

In order to establish an understanding of attacks on the above model, we present our main result in the following section.

### Our Result

We state our result as follows:

**Theorem 1.3** (Informal version of Theorem J.1).: _Given a model with several layers of attention, for each layer we have parameters \(Q\in\mathbb{R}^{d\times d},K\in\mathbb{R}^{d\times d},V\in\mathbb{R}^{d\times d}\). We denote \(W:=KQ^{\top}\). Given a desired output \(B\in\mathbb{R}^{n\times d}\), we can denote the training data input_

\[X^{*}=\arg\min_{X}\|D(X)^{-1}\exp(X^{\top}WX)X^{\top}V-B\|_{F}^{2}+L_{\mathrm{reg}}\]

_Next, we choose a good initial point \(X_{0}\) that is close enough to \(X^{*}\). Assume that there exists a scalar \(R>1\) such that \(\|W\|_{F}\leq R\), \(\|V\|_{F}\leq R\), and \(|b_{i,j}|\leq R\), where \(b_{i,j}\) denotes the \((i,j)\)-th entry of \(B\), for all \(i\in[n],j\in[d]\)._

_Then, for any accuracy parameter \(\epsilon\in(0,0.1)\) and failure probability \(\delta\in(0,0.1)\), an algorithm based on the Newton method can be employed to recover the initial data. This algorithm guarantees that within \(T=O(\log(\|X_{0}-X^{*}\|_{F}/\epsilon))\) iterations, it outputs a matrix \(\widetilde{X}\in\mathbb{R}^{d\times n}\) satisfying \(\|\widetilde{X}-X^{*}\|_{F}\leq\epsilon\) with probability at least \(1-\delta\)._
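To make the recovery objective concrete, here is a minimal numpy sketch of the forward map and loss of Definition 1.2 (the toy dimensions, random weights, and the plain finite-difference gradient descent loop are our own illustration; the guarantees of Theorem 1.3 are for a Newton-type method, not for this loop):

```
import numpy as np

def attention_output(X, W, V):
    # Forward map of Definition 1.2: D(X)^{-1} exp(X^T W X) X^T V,
    # where D(X) = diag(exp(X^T W X) 1_n); X is d x n, W and V are d x d.
    A = np.exp(X.T @ W @ X)                              # n x n, entrywise exp
    return (A / A.sum(axis=1, keepdims=True)) @ X.T @ V  # n x d

def loss(X, W, V, B):
    # L(X) = || D(X)^{-1} exp(X^T W X) X^T V - B ||_F^2
    return np.linalg.norm(attention_output(X, W, V) - B, "fro") ** 2

rng = np.random.default_rng(0)
d, n = 4, 6
W, V = rng.normal(size=(d, d)) / d, rng.normal(size=(d, d)) / d
X_star = rng.normal(size=(d, n))              # the hidden training input
B = attention_output(X_star, W, V)            # the attacker observes W, V, B
X = X_star + 0.01 * rng.normal(size=(d, n))   # a good initial point near X*

print("initial loss:", loss(X, W, V, B))
for _ in range(100):                          # toy first-order recovery loop
    g, fd = np.zeros_like(X), 1e-5
    for i in range(d):
        for j in range(n):
            E = np.zeros_like(X)
            E[i, j] = fd
            g[i, j] = (loss(X + E, W, V, B) - loss(X - E, W, V, B)) / (2 * fd)
    X -= 0.05 * g
print("final loss:", loss(X, W, V, B))        # should shrink toward zero
```

Roadmap. We arrange the rest of our paper as follows. In Section 2 we present some works related to our topic. In Section 3 we provide preliminaries for our work. In Section 4, we state an overview of our techniques, summarizing the method we use to recover data via attention weights. We conclude our work and propose some future directions in Section 5.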
## 2 Related Works

**Attention Computation Theory.** Following the rise of LLMs, numerous studies have emerged on the theory of attention computation.

**Security concerns about LLMs.** Amid LLM advancements, concerns about misuse have arisen [14, 15, 16, 17, 18, 19, 20, 21]. However, the current methods fall short in guaranteeing comprehensive privacy for language models, recommending training only on text intended to be public. [14] reveals that the vulnerability of large language models to privacy attacks is significantly tied to data duplication in training sets, emphasizing that deduplicating this data greatly boosts their resistance to such breaches.
[15] devised a way to watermark LLM output without compromising quality or accessing LLM internals. Meanwhile, [16] introduced near access-freeness (NAF), ensuring that generative models, such as transformers and image diffusion models, do not mimic copyrighted content by more than _k_ bits.

Inverting the neural network.Originating from the explosion of deep learning, there has been a series of works focused on inverting neural networks [13, 1, 15, 14]. [15] surveys various techniques for neural network inversion, which involves finding input values that produce desired outputs, and highlights its applications in query-based learning, sonar performance analysis, power system security assessment, control, and codebook vector generation. [1] presents a method for inverting trained neural networks by formulating the problem as a mathematical programming task, enabling various network inversions and enhancing generalization performance. [16] explores the reconstruction of image representations, including CNNs, to assess the extent to which it is possible to recreate the original image, revealing that certain layers in CNNs retain accurate visual information with varying degrees of geometric and photometric invariance. [15] presents a novel generative model-inversion attack method that can effectively reverse deep neural networks, particularly in the context of face image reconstruction, and explores the connection between a model's predictive ability and its vulnerability to such attacks, while noting the limitations of using differential privacy for defense.

Attacking the Neural Networks.During the development of artificial intelligence, there have been many works on attacking neural networks.
Optimization for LLMs.Simultaneously, the field is seeing innovations in optimization algorithms tailored for LLMs. Techniques like block gradient estimators have been employed for huge-scale optimization problems, significantly reducing computational complexity [10]. Unique approaches like Direct Preference Optimization bypass the need for reward models, fine-tuning LLMs based on human preference data [23]. Additionally, advancements in second-order optimizers have relaxed the conventional Lipschitz Hessian assumptions, providing more flexibility in convergence proofs [11]. There is also a series of works on understanding fine-tuning [13, 14, 15, 16]. Collectively, these theoretical contributions are refining our understanding and optimization of LLMs, even as they introduce new techniques to address challenges such as non-guaranteed Hessian Lipschitz conditions.
Optimization and Convergence of Deep Neural Networks.Prior research has extensively studied the optimization and convergence behavior of deep neural networks.

## 3 Preliminary

Notations.For any positive integer \(n\), we use \([n]\) to denote \(\{1,2,\cdots,n\}\). For any matrix \(A\in\mathbb{R}^{n\times d}\), we define
\(\|A\|_{F}:=(\sum_{i=1}^{n}\sum_{j=1}^{d}A_{i,j}^{2})^{1/2}\). For vectors \(a,b\in\mathbb{R}^{n}\), we use \(\langle a,b\rangle\) to denote \(\sum_{i=1}^{n}a_{i}b_{i}\).

### Model Inversion Attack

A model inversion attack is a type of adversarial attack in which a malicious user attempts to recover the private dataset used to train a supervised machine learning model. The goal of a model inversion attack is to generate realistic and diverse samples that accurately describe each class in the private dataset. The attacker typically has access to the trained model and can use it to make predictions on input data. By carefully crafting input data and observing the model's predictions, the attacker can infer information about the training data.

Model inversion attacks can be a significant privacy concern, as they can potentially reveal sensitive information about individuals or organizations. These attacks exploit vulnerabilities in the model's behavior and can be used to extract information that was not intended to be disclosed.

Model inversion attacks can be formulated as an optimization problem. Given the output \(Y\), the model function \(f_{\theta}\) with parameters \(\theta\), and the loss function \(\mathcal{L}\), the objective of a model inversion attack is to find an input \(X^{*}\) that minimizes the loss between the model's prediction \(f_{\theta}(X)\) and the target output \(Y\). Mathematically, this can be expressed as:

\[X^{*}=\arg\min_{X}\mathcal{L}(f_{\theta}(X),Y)\]

Assuming the loss function \(\mathcal{L}(f_{\theta}(X),Y)\) is convex with respect to \(X\), we can employ the following procedure for the model inversion attack:

1. Initialize an input \(X\).
2. Compute the gradient \(\nabla_{X}\mathcal{L}(f_{\theta}(X),Y)\).
3. Update \(X\) using a learning rate \(\eta\): \(X=X-\eta\nabla_{X}\mathcal{L}(f_{\theta}(X),Y)\).

This iterative process aims to find an input \(X\) that minimizes the loss between the model's prediction and the target output. By updating \(X\) in the direction opposite to the gradient, the attack can converge to an input that generates a prediction close to the desired output, thereby inverting the model (a code sketch is given below).

In this work, we focus on attention models, which is natural given the explosive development of LLMs. In this case, the parameters \(\theta\) of our model consist of \(\{Q,K,V\}\). Throughout this paper, to avoid abusing notation, we use \(B=Y\) to denote the ground-truth label.

### Regression Problem Inspired by Attention Computation

In this paper, we extend the prior work of [14] and focus on the training process of the attention mechanism in the context of the Transformer model. We decompose the training procedure into a regression form based on the insights provided by [13]. Specifically, we investigate the training process for a specific layer, denoted as the \(l\)-th layer, and consider the case of single-headed attention. In this setting, we have an input matrix \(X\in\mathbb{R}^{d\times n}\) and a target matrix \(B\in\mathbb{R}^{n\times d}\). Let \(Q\in\mathbb{R}^{d\times d},K\in\mathbb{R}^{d\times d},V\in\mathbb{R}^{d\times d}\) be the trained weights of the attention architecture. The objective of the training process in the Transformer model is to minimize the loss function by utilizing back-propagation.
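Before instantiating the loss for attention, the following is a minimal sketch of the three-step inversion procedure from Section 3.1, written for any differentiable model \(f_{\theta}\). The helper name `model_inversion`, the learning rate, and the step count are illustrative assumptions, not prescriptions from the paper.

```python
import torch

def model_inversion(f_theta, Y, x_shape, lr=1e-2, steps=1000):
    """Recover an input X whose prediction f_theta(X) matches Y."""
    X = torch.randn(x_shape, requires_grad=True)      # step 1: initialize X
    for _ in range(steps):
        loss = ((f_theta(X) - Y) ** 2).sum()          # L(f_theta(X), Y)
        grad, = torch.autograd.grad(loss, X)          # step 2: gradient wrt X
        with torch.no_grad():
            X -= lr * grad                            # step 3: descent update
    return X.detach()
```

In the attention setting below, \(f_{\theta}(X)=D(X)^{-1}\exp(X^{\top}WX)X^{\top}V\) with \(\theta=\{Q,K,V\}\) and \(Y=B\).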
The loss function, denoted as \(L(X)\), is defined as follows:

\[L(X)=\|D^{-1}\exp(X^{\top}WX)X^{\top}V-B\|_{F}^{2},\]

where \(W:=KQ^{\top}\), \(D:=\operatorname{diag}(\exp(X^{\top}WX)\mathbf{1}_{n})\), and each row of \(D^{-1}\exp(X^{\top}WX)\) corresponds to a softmax function. The goal of minimizing this loss function is to align the predicted output, obtained by applying the attention mechanism, with the target matrix \(B\).

## 4 Recovering Data via Attention Weights

In this section, we propose our theoretical method to recover the training data from trained transformer weights and outputs. We justify the method by proving that the Hessian of our training objective is Lipschitz-continuous and, with suitable regularization, positive definite. In Section 4.1, we provide a detailed description of our approach. In Section 4.2, we describe the decomposition of the Hessian. In Section 4.3, we show that the Hessian of the training objective is Lipschitz-continuous. In Section 4.4, we show that the Hessian of the training objective is positive definite.

### Training Objective of Attention Inversion Attack

In this study, we propose a novel technique for inverting the attention weights of a transformer model using Hessian decomposition. Our aim is to find the input \(X\in\mathbb{R}^{d\times n}\) that minimizes the Frobenius norm of the difference between \(D(X)^{-1}\exp(X^{\top}WX)X^{\top}V\) and \(B\), where \(W=KQ^{\top}\in\mathbb{R}^{d\times d}\) represents the attention weights, \(B\in\mathbb{R}^{n\times d}\) is the desired output, and \(D(X)=\operatorname{diag}(\exp(X^{\top}WX)\mathbf{1}_{n})\in\mathbb{R}^{n\times n}\) is a diagonal matrix. To achieve this, we introduce an algorithm that minimizes the loss function \(L(X)\), defined as follows:

\[L(X):=\|D(X)^{-1}\exp(X^{\top}WX)X^{\top}V-B\|_{F}^{2}+L_{\text{reg}}, \tag{1}\]

where \(V\in\mathbb{R}^{d\times d}\) is the matrix of values, and \(L_{\text{reg}}\) is a regularization term (see Definition 4.6). This loss function quantifies the discrepancy between the expected output and the actual output of the transformer.

In our approach, we leverage Hessian decomposition to efficiently compute the Hessian matrix and apply a second-order method to approximate the optimal input \(X\). By utilizing the Hessian, we can gain insight into the curvature of the loss function and improve the efficiency of optimization. This approach enables us to efficiently find an approximate solution for the input \(X\) that minimizes the loss function, thereby inverting the attention weights of the transformer model. By integrating Hessian decomposition and second-order optimization techniques ([13, 14, 15, 16, 17, 18, 19]), our proposed algorithm provides a promising approach for addressing the challenging task of inverting attention weights in transformer models.

Due to the complexity of the loss function (Eq. (1)), directly computing its Hessian is challenging. To simplify the computation, we introduce several notations (see Figure 2 for a visualization):

Exponential Function: \[u(X)_{i}:=\exp(X^{\top}WX_{*,i})\]
Sum of Softmax: \[\alpha(X)_{i}:=\langle u(X)_{i},\mathbf{1}_{n}\rangle\]
Softmax Probability: \[f(X)_{i}:=\alpha(X)_{i}^{-1}u(X)_{i}\]

Using these terms, together with \(h(X)_{j}:=X^{\top}V_{*,j}\) and \(c(X)_{i,j}:=\langle f(X)_{i},h(X)_{j}\rangle-b_{i,j}\) (Definitions B.7 and B.8), we can express the loss function \(L(X)\) as a sum over all entries:

\[L(X)=\sum_{i=1}^{n}\sum_{j=1}^{d}(c(X)_{i,j})^{2}\]

This allows us to break down the computation into several steps. Specifically, we start by computing the gradients of the predefined terms.
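As a sanity check on this decomposition, the following NumPy sketch assembles \(L(X)\) from \(u\), \(\alpha\), \(f\), \(h\) and \(c\) exactly as defined above. The helper name `loss_via_c` is illustrative, and the column-wise convention follows Definitions B.4–B.8.

```python
import numpy as np

def loss_via_c(X, W, V, B):
    """Assemble L(X) = sum_{i,j} c(X)_{i,j}^2 from the intermediate
    quantities u, alpha, f, h of Section 4.1."""
    E = np.exp(X.T @ W @ X)   # column i is u(X)_i = exp(X^T W X_{*,i})
    alpha = E.sum(axis=0)     # alpha(X)_i = <u(X)_i, 1_n>
    F = E / alpha             # column i is f(X)_i = u(X)_i / alpha(X)_i
    H = X.T @ V               # column j is h(X)_j = X^T V_{*,j}
    C = F.T @ H - B           # c(X)_{i,j} = <f(X)_i, h(X)_j> - b_{i,j}
    return (C ** 2).sum()
```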
Given two integers \(i_{0}\in[n]\) and \(j_{0}\in[d]\), we consider the scalar entry \(c(X)_{i_{0},j_{0}}\) of \(c(X)\). Additionally, we let \(i_{1}\in[n]\) and \(j_{1}\in[d]\) be two other integers, and use \(x_{i_{1},j_{1}}\) to denote the entry of \(X\) in the \(i_{1}\)-th column and \(j_{1}\)-th row (see Definition B.3). We can now express \(\frac{\mathrm{d}c(X)_{i_{0},j_{0}}}{\mathrm{d}x_{i_{1},j_{1}}}\) (the gradient of \(c(X)_{i_{0},j_{0}}\)) in two cases:

* _Case 1:_ The situation when \(i_{0}=i_{1}\).
* _Case 2:_ The situation when \(i_{0}\neq i_{1}\).

By decomposing the Hessian into several cases (see Section F for details), we can calculate the final Hessian. Similar to the approach used when computing the gradients, we introduce two additional integers \(i_{2}\in[n]\) and \(j_{2}\in[d]\). The Hessian entries can then be expressed as \(\frac{\mathrm{d}^{2}c(X)_{i_{0},j_{0}}}{\mathrm{d}x_{i_{1},j_{1}}\mathrm{d}x_{i_{2},j_{2}}}\). We can further break down the computation into four cases to handle different scenarios:

* _Case 1:_ The situation when \(i_{0}=i_{1}=i_{2}\).
* _Case 2:_ The situation when \(i_{0}=i_{1}\neq i_{2}\).
* _Case 3:_ The situation when \(i_{0}\neq i_{1}\), \(i_{0}\neq i_{2}\) and \(i_{1}=i_{2}\).
* _Case 4:_ The situation when \(i_{0}\neq i_{1}\), \(i_{0}\neq i_{2}\) and \(i_{1}\neq i_{2}\).

Figure 2: Visualization of Notations We Defined

It is worth mentioning that the remaining case, \(i_{0}\neq i_{1}\) and \(i_{0}=i_{2}\), is equivalent by symmetry to the case \(i_{0}=i_{1}\neq i_{2}\). By considering these four cases, we can calculate the Hessian for each element in \(X\). This allows us to gain further insight into the curvature of the loss function and optimize the parameters more effectively.

### Hessian Decomposition

By considering the different cases of the Hessian, we have the following decomposition.

**Definition 4.1** (Hessian of functions of matrix).: _We define the Hessian of \(c(X)_{i_{0},j_{0}}\) by considering its Hessian with respect to \(x=\operatorname{vec}(X)\). This means that \(\nabla^{2}c(X)_{i_{0},j_{0}}\) is an \(nd\times nd\) matrix whose \((i_{1},j_{1}),(i_{2},j_{2})\)-th entry is \(\frac{\mathrm{d}^{2}c(X)_{i_{0},j_{0}}}{\mathrm{d}x_{i_{1},j_{1}}\mathrm{d}x_{i_{2},j_{2}}}\)._

**Definition 4.2** (Hessian split).: _We split the Hessian of \(c(X)_{i_{0},j_{0}}\) into the following cases_

* \(i_{0}=i_{1}=i_{2}\) _:_ \(H_{1}^{(i_{1},i_{2})}\)
* \(i_{0}=i_{1}\)_,_ \(i_{0}\neq i_{2}\) _:_ \(H_{2}^{(i_{1},i_{2})}\)
* \(i_{0}\neq i_{1}\)_,_ \(i_{0}=i_{2}\) _:_ \(H_{3}^{(i_{1},i_{2})}\)
* \(i_{0}\neq i_{1}\)_,_ \(i_{0}\neq i_{2}\)_,_ \(i_{1}=i_{2}\) _:_ \(H_{4}^{(i_{1},i_{2})}\)
* \(i_{0}\neq i_{1}\)_,_ \(i_{0}\neq i_{2}\)_,_ \(i_{1}\neq i_{2}\) _:_ \(H_{5}^{(i_{1},i_{2})}\)

_In the above, \(H_{k}^{(i_{1},i_{2})}\) is a \(d\times d\) matrix whose \((j_{1},j_{2})\)-th entry is \(\frac{\mathrm{d}^{2}c(X)_{i_{0},j_{0}}}{\mathrm{d}x_{i_{1},j_{1}}\mathrm{d}x_{i_{2},j_{2}}}\)._
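The case split of Definition 4.2 can be made concrete in a few lines of code. The following sketch (the helper `hessian_case` is illustrative, not part of the formal development) maps each index triple \((i_{0},i_{1},i_{2})\) to its block type and confirms that the five cases cover all triples:

```python
from itertools import product

def hessian_case(i0, i1, i2):
    """Map an index triple to the block type H1-H5 of Definition 4.2."""
    if i0 == i1 and i0 == i2:
        return 'H1'   # i0 = i1 = i2
    if i0 == i1:
        return 'H2'   # i0 = i1, i0 != i2
    if i0 == i2:
        return 'H3'   # i0 != i1, i0 = i2
    if i1 == i2:
        return 'H4'   # i0 != i1, i0 != i2, i1 = i2
    return 'H5'       # all three indices distinct

# the five cases partition all index triples over [n]
n = 5
assert all(hessian_case(*t) in {'H1', 'H2', 'H3', 'H4', 'H5'}
           for t in product(range(n), repeat=3))
```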
Utilizing the above definitions, we split the Hessian into an \(n\times n\) block partition whose \((i_{1},i_{2})\)-th block is the corresponding \(H_{k}^{(i_{1},i_{2})}\).

**Definition 4.3**.: _We define \(\nabla^{2}c(X)_{i_{0},j_{0}}\) to be as follows_

\[\left[\begin{array}{ccccccccc}H_{4}^{(1,1)}&H_{5}^{(1,2)}&H_{5}^{(1,3)}&\cdots&H_{5}^{(1,i_{0}-1)}&H_{3}^{(1,i_{0})}&H_{5}^{(1,i_{0}+1)}&\cdots&H_{5}^{(1,n)}\\ H_{5}^{(2,1)}&H_{4}^{(2,2)}&H_{5}^{(2,3)}&\cdots&H_{5}^{(2,i_{0}-1)}&H_{3}^{(2,i_{0})}&H_{5}^{(2,i_{0}+1)}&\cdots&H_{5}^{(2,n)}\\ H_{5}^{(3,1)}&H_{5}^{(3,2)}&H_{4}^{(3,3)}&\cdots&H_{5}^{(3,i_{0}-1)}&H_{3}^{(3,i_{0})}&H_{5}^{(3,i_{0}+1)}&\cdots&H_{5}^{(3,n)}\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\vdots&\ddots&\vdots\\ H_{2}^{(i_{0},1)}&H_{2}^{(i_{0},2)}&H_{2}^{(i_{0},3)}&\cdots&H_{2}^{(i_{0},i_{0}-1)}&H_{1}^{(i_{0},i_{0})}&H_{2}^{(i_{0},i_{0}+1)}&\cdots&H_{2}^{(i_{0},n)}\\ H_{5}^{(i_{0}+1,1)}&H_{5}^{(i_{0}+1,2)}&H_{5}^{(i_{0}+1,3)}&\cdots&H_{5}^{(i_{0}+1,i_{0}-1)}&H_{3}^{(i_{0}+1,i_{0})}&H_{4}^{(i_{0}+1,i_{0}+1)}&\cdots&H_{5}^{(i_{0}+1,n)}\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\vdots&\ddots&\vdots\\ H_{5}^{(n,1)}&H_{5}^{(n,2)}&H_{5}^{(n,3)}&\cdots&H_{5}^{(n,i_{0}-1)}&H_{3}^{(n,i_{0})}&H_{5}^{(n,i_{0}+1)}&\cdots&H_{4}^{(n,n)}\end{array}\right]\]

### Hessian of \(L(X)\) is Lipschitz-continuous

We present our findings establishing the Lipschitz continuity of the Hessian of \(L(X)\), a highly desirable property in optimization. This property signifies that the second derivatives of \(L(X)\) change smoothly within a defined range. Leveraging the Lipschitz property enables us to employ gradient-based methods with guaranteed convergence rates and enhanced stability. Consequently, our results validate the feasibility of utilizing the proposed training objective to achieve convergence in the model inversion attack. This finding holds significant promise for the development of efficient and effective optimization strategies in this context.

**Lemma 4.4** (informal version of Lemma H.11).: _Under the following conditions_

* _Assumption G.1 (bounded parameter) holds_
* _Let_ \(c(X)_{i_{0},j_{0}}\) _be defined as Definition B.8_

_for \(X,Y\in\mathbb{R}^{d\times n}\), we have_

\[\|\nabla^{2}L(X)-\nabla^{2}L(Y)\|\leq O(n^{3.5}d^{3.5}R^{10})\|X-Y\|_{F}\]

### Hessian of \(L(X)\) is Positive Definite

Having computed the Hessian of \(L(X)\), we now show that it is positive definite under proper regularization. Therefore, we can apply a modified Newton's method to approach the optimal solution.

**Lemma 4.5** (PSD bounds for \(\nabla^{2}L(X)\)).: _Under the following conditions,_

* _Let_ \(L(X)\) _be defined as in Definition B.9_
* _Let Assumption G.1 (bounded parameter) be satisfied_

_we have_

\[\nabla^{2}L(X)\succeq-O(ndR^{8})\cdot\mathbf{I}_{nd}\]

Therefore, we define the regularization term as follows to obtain the PSD guarantee.

**Definition 4.6** (Regularization).: _Let \(\gamma=O(ndR^{8})\). We define_

\[L_{\mathrm{reg}}(X):=\gamma\cdot\|\operatorname{vec}(X)\|_{2}^{2}\]

With the above properties of the loss function, we obtain the convergence result in Theorem 1.3.

## 5 Conclusion and Future Discussion

In this study, we have presented a theoretical approach for inverting input data using weights and outputs. Our investigation delved into the mathematical frameworks that underpin the attention mechanism, with the aim of determining whether knowledge of attention weights and model outputs could enable the reconstruction of sensitive information from the input data.
The insights gained from this research are intended to deepen our understanding and facilitate the development of more secure and robust transformer models. By doing so, we strive to foster responsible and ethical advancements in the field of deep learning. This work lays the groundwork for future research and development aimed at fortifying transformer technologies against potential threats and vulnerabilities. Our ultimate goal is to enhance the safety and effectiveness of these groundbreaking models across a wide range of applications. By addressing potential risks and ensuring the integrity of sensitive information, we aim to create a more secure and trustworthy environment for the deployment of transformer models.

Roadmap.We arrange the appendix as follows. In Section A, we provide several preliminary notations. In Section B we provide details of computing the gradients. In Section C and Section D we provide details of computing the Hessian for two cases. In Section E we show how to split the Hessian matrix. In Section F we combine the preceding results and compute the Hessian for the loss function. In Section G we bound the basic functions to be used later. In Section H we provide proof of the Lipschitz property of the loss function. We provide our final result in Section J.

## Appendix A Notations

We use \(\mathbb{R}\) to denote the real numbers. We use \(A\in\mathbb{R}^{n\times d}\) to denote an \(n\times d\) matrix where each entry is a real number. For any positive integer \(n\), we use \([n]\) to denote \(\{1,2,\cdots,n\}\). For a matrix \(A\in\mathbb{R}^{n\times d}\), we use \(a_{i,j}\) to denote the entry of \(A\) in the \(i\)-th row and \(j\)-th column, for each \(i\in[n]\), \(j\in[d]\). We use \(A_{i,j}\in\mathbb{R}^{n\times d}\) to denote a matrix whose entries all equal \(0\) except for \(a_{i,j}\). We use \(\mathbf{1}_{n}\) to denote the length-\(n\) vector whose entries are all ones. For a vector \(w\in\mathbb{R}^{n}\), we use \(\operatorname{diag}(w)\in\mathbb{R}^{n\times n}\) to denote the diagonal matrix with \((\operatorname{diag}(w))_{i,i}=w_{i}\) and all off-diagonal entries zero. Let \(D\in\mathbb{R}^{n\times n}\) be a diagonal matrix; we use \(D^{-1}\in\mathbb{R}^{n\times n}\) to denote the diagonal matrix whose \(i\)-th diagonal entry is \(D_{i,i}^{-1}\) and whose off-diagonal entries are all zero. Given two vectors \(a,b\in\mathbb{R}^{n}\), we use \((a\circ b)\in\mathbb{R}^{n}\) to denote the length-\(n\) vector whose \(i\)-th entry is \(a_{i}b_{i}\). For a matrix \(A\in\mathbb{R}^{n\times d}\), we use \(A^{\top}\in\mathbb{R}^{d\times n}\) to denote the transpose of \(A\). For a vector \(x\in\mathbb{R}^{n}\), we use \(\exp(x)\in\mathbb{R}^{n}\) to denote the length-\(n\) vector with \(\exp(x)_{i}=\exp(x_{i})\) for all \(i\in[n]\). For a matrix \(X\in\mathbb{R}^{n\times n}\), we use \(\exp(X)\in\mathbb{R}^{n\times n}\) to denote the matrix with \(\exp(X)_{i,j}=\exp(X_{i,j})\). For any matrix \(A\in\mathbb{R}^{n\times d}\), we define \(\|A\|_{F}:=(\sum_{i=1}^{n}\sum_{j=1}^{d}A_{i,j}^{2})^{1/2}\). For vectors \(a,b\in\mathbb{R}^{n}\), we use \(\langle a,b\rangle\) to denote \(\sum_{i=1}^{n}a_{i}b_{i}\).

## Appendix B Gradients

Here in this section, we provide the analysis for the gradient computation. In Section B.1 we state some facts to be used. In Section B.2 we provide some definitions. In Sections B.3, B.4, B.5, B.6, B.7, B.8 and B.9 we compute the gradients of the respective terms defined there.
Finally, in Section B.10, we compute the gradient for \(L(X)\).

### Facts

**Fact B.1** (Basic algebra).: _We have_

* \(\langle u,v\rangle=\langle v,u\rangle=u^{\top}v=v^{\top}u\)_._
* \(\langle u\circ v,w\rangle=\langle u\circ v\circ w,\mathbf{1}_{n}\rangle\)
* \(u^{\top}(v\circ w)=u^{\top}\operatorname{diag}(v)w\)

**Fact B.2** (Basic calculus rules).: _We have_

* \(\frac{\mathrm{d}\langle f(x),g(x)\rangle}{\mathrm{d}t}=\langle\frac{\mathrm{d}f(x)}{\mathrm{d}t},g(x)\rangle+\langle f(x),\frac{\mathrm{d}g(x)}{\mathrm{d}t}\rangle\) _(here_ \(t\) _can be any variable)_
* \(\frac{\mathrm{d}y^{z}}{\mathrm{d}x}=z\cdot y^{z-1}\frac{\mathrm{d}y}{\mathrm{d}x}\)
* \(u\cdot v=v\cdot u\)
* \(\frac{\mathrm{d}x}{\mathrm{d}x_{j}}=e_{j}\)_, where_ \(e_{j}\) _is the vector whose_ \(j\)_-th entry is_ \(1\) _and which is zero everywhere else._
* _Let_ \(x\in\mathbb{R}^{d}\)_, and let_ \(y\in\mathbb{R}\) _be independent of_ \(x\)_; then_ \(\frac{\mathrm{d}x}{\mathrm{d}y}=\mathbf{0}_{d}\)_._
* _Let_ \(f(x),g(x)\in\mathbb{R}\)_; then_ \(\frac{\mathrm{d}(f(x)g(x))}{\mathrm{d}t}=\frac{\mathrm{d}f(x)}{\mathrm{d}t}g(x)+f(x)\frac{\mathrm{d}g(x)}{\mathrm{d}t}\)
* _Let_ \(x\in\mathbb{R}\)_; then_ \(\frac{\mathrm{d}}{\mathrm{d}x}\exp(x)=\exp(x)\)
* _Let_ \(f(x)\in\mathbb{R}^{n}\)_; then_ \(\frac{\mathrm{d}\exp(f(x))}{\mathrm{d}t}=\exp(f(x))\circ\frac{\mathrm{d}f(x)}{\mathrm{d}t}\)

### Definitions

**Definition B.3** (Simplified notations).: _We have the following definitions_

* _We use_ \(u(X)_{i_{0},i_{1}}\) _to denote the_ \(i_{1}\)_-th entry of_ \(u(X)_{i_{0}}\)_._
* _We use_ \(f(X)_{i_{0},i_{1}}\) _to denote the_ \(i_{1}\)_-th entry of_ \(f(X)_{i_{0}}\)_._
* _We define_ \(W_{j_{1},*}\) _to denote the_ \(j_{1}\)_-th row of_ \(W\)_. (In the proofs, we treat_ \(W_{j_{1},*}\) _as a column vector.)_
* _We define_ \(W_{*,j_{1}}\) _to denote the_ \(j_{1}\)_-th column of_ \(W\)_._
* _We define_ \(w_{j_{1},j_{0}}\) _to denote the scalar equal to the entry in the_ \(j_{1}\)_-th row and_ \(j_{0}\)_-th column of_ \(W\)_._
* _We define_ \(V_{*,j_{1}}\) _to denote the_ \(j_{1}\)_-th column of_ \(V\)_._
* _We define_ \(v_{j_{1},j_{0}}\) _to denote the scalar equal to the entry in the_ \(j_{1}\)_-th row and_ \(j_{0}\)_-th column of_ \(V\)_._
* _We define_ \(X_{*,i_{0}}\) _to denote the_ \(i_{0}\)_-th column of_ \(X\)_._
* _We define_ \(x_{i_{1},j_{1}}\) _to denote the scalar equal to the entry in the_ \(i_{1}\)_-th column and_ \(j_{1}\)_-th row of_ \(X\)_._

**Definition B.4** (Exponential function \(u\)).: _If the following conditions hold_

* _Let_ \(X\in\mathbb{R}^{d\times n}\)
* _Let_ \(W\in\mathbb{R}^{d\times d}\)

_then for each \(i_{0}\in[n]\), we define \(u(X)_{i_{0}}\in\mathbb{R}^{n}\) as follows_

\[u(X)_{i_{0}}=\exp(X^{\top}WX_{*,i_{0}})\]

**Definition B.5** (Sum function of softmax \(\alpha\)).: _If the following conditions hold_

* _Let_ \(X\in\mathbb{R}^{d\times n}\)
* _Let_ \(u(X)_{i_{0}}\) _be defined as Definition B.4_

_then we define \(\alpha(X)_{i_{0}}\in\mathbb{R}\) for all \(i_{0}\in[n]\) as follows_

\[\alpha(X)_{i_{0}}=\langle u(X)_{i_{0}},\mathbf{1}_{n}\rangle\]

**Definition B.6** (Softmax probability function \(f\)).: _If the following conditions hold_

* _Let_ \(X\in\mathbb{R}^{d\times n}\)
* _Let_ \(u(X)_{i_{0}}\) _be defined as Definition B.4_
* _Let_ \(\alpha(X)_{i_{0}}\) _be defined as Definition B.5_

_then we define \(f(X)_{i_{0}}\in\mathbb{R}^{n}\) for each \(i_{0}\in[n]\) as follows_

\[f(X)_{i_{0}}:=\alpha(X)_{i_{0}}^{-1}u(X)_{i_{0}}\]

**Definition B.7** (Value function \(h\)).: _If the following conditions hold_
* _Let_ \(X\in\mathbb{R}^{d\times n}\)
* _Let_ \(V\in\mathbb{R}^{d\times d}\)

_then we define \(h(X)_{j_{0}}\in\mathbb{R}^{n}\) for each \(j_{0}\in[d]\) as follows_

\[h(X)_{j_{0}}:=X^{\top}V_{*,j_{0}}\]

**Definition B.8** (One-unit loss function \(c\)).: _If the following conditions hold_

* _Let_ \(f(X)_{i_{0}}\) _be defined as Definition B.6_
* _Let_ \(h(X)_{j_{0}}\) _be defined as Definition B.7_

_then we define \(c(X)\in\mathbb{R}^{n\times d}\) as follows_

\[c(X)_{i_{0},j_{0}}:=\langle f(X)_{i_{0}},h(X)_{j_{0}}\rangle-b_{i_{0},j_{0}},\quad\forall i_{0}\in[n],j_{0}\in[d]\]

**Definition B.9** (Overall function \(L\)).: _If the following conditions hold_

* _Let_ \(c(X)_{i_{0},j_{0}}\) _be defined as Definition B.8_

_then we define \(L(X)\in\mathbb{R}\) as follows_

\[L(X):=\sum_{i_{0}=1}^{n}\sum_{j_{0}=1}^{d}(c(X)_{i_{0},j_{0}})^{2}\]

### Gradient for each column of \(X^{\top}WX_{*,i_{0}}\)

**Lemma B.10**.: _We have_

* **Part 1.** _Let_ \(i_{0}=i_{1}\in[n]\)_,_ \(j_{1}\in[d]\)
\[\underbrace{\frac{\mathrm{d}X^{\top}WX_{*,i_{0}}}{\mathrm{d}x_{i_{1},j_{1}}}}_{n\times 1}=\underbrace{e_{i_{0}}}_{n\times 1}\cdot\underbrace{\langle W_{j_{1},*},X_{*,i_{0}}\rangle}_{\mathrm{scalar}}+\underbrace{X^{\top}}_{n\times d}\underbrace{W_{*,j_{1}}}_{d\times 1}\]
* **Part 2.** _Let_ \(i_{0}\neq i_{1}\in[n]\)_,_ \(j_{1}\in[d]\)
\[\underbrace{\frac{\mathrm{d}X^{\top}WX_{*,i_{0}}}{\mathrm{d}x_{i_{1},j_{1}}}}_{n\times 1}=\underbrace{e_{i_{1}}}_{n\times 1}\cdot\underbrace{\langle W_{j_{1},*},X_{*,i_{0}}\rangle}_{\mathrm{scalar}}\]

Proof.: **Proof of Part 1.**

\[\frac{\mathrm{d}X^{\top}WX_{*,i_{0}}}{\mathrm{d}x_{i_{1},j_{1}}}=\frac{\mathrm{d}X^{\top}}{\mathrm{d}x_{i_{1},j_{1}}}WX_{*,i_{0}}+X^{\top}W\frac{\mathrm{d}X_{*,i_{0}}}{\mathrm{d}x_{i_{1},j_{1}}}=e_{i_{1}}e_{j_{1}}^{\top}WX_{*,i_{0}}+X^{\top}We_{j_{1}}=e_{i_{1}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle+X^{\top}W_{*,j_{1}}=e_{i_{0}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle+X^{\top}W_{*,j_{1}}\]

where the 1st step follows from Fact B.2, the 2nd step follows from the simple derivative rule, the 3rd step is simple algebra, and the 4th step is because \(i_{0}=i_{1}\).

**Proof of Part 2.**

\[\frac{\mathrm{d}X^{\top}WX_{*,i_{0}}}{\mathrm{d}x_{i_{1},j_{1}}}=\frac{\mathrm{d}X^{\top}}{\mathrm{d}x_{i_{1},j_{1}}}WX_{*,i_{0}}+X^{\top}W\frac{\mathrm{d}X_{*,i_{0}}}{\mathrm{d}x_{i_{1},j_{1}}}=e_{i_{1}}e_{j_{1}}^{\top}WX_{*,i_{0}}+X^{\top}W\mathbf{0}_{d}=e_{i_{1}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle\]

where the 1st step follows from Fact B.2, the 2nd step follows from the simple derivative rule (\(\frac{\mathrm{d}X_{*,i_{0}}}{\mathrm{d}x_{i_{1},j_{1}}}=\mathbf{0}_{d}\) since \(i_{0}\neq i_{1}\)), and the 3rd step is simple algebra.
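Lemma B.10 can be checked numerically. The sketch below (variable names are illustrative) compares the Part 1 formula against a one-sided finite difference, using the convention of Definition B.3 that \(x_{i_{1},j_{1}}\) is the entry in the \(i_{1}\)-th column and \(j_{1}\)-th row of \(X\):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, eps = 3, 5, 1e-6
X, W = rng.normal(size=(d, n)), rng.normal(size=(d, d))
i0, i1, j1 = 2, 2, 1                  # Part 1: i0 = i1

def g(X):
    return X.T @ W @ X[:, i0]         # the vector X^T W X_{*,i0}

# analytic gradient from Lemma B.10, Part 1
e_i0 = np.eye(n)[:, i0]
analytic = e_i0 * (W[j1, :] @ X[:, i0]) + X.T @ W[:, j1]

# finite-difference gradient w.r.t. x_{i1,j1} (row j1, column i1 of X)
Xp = X.copy(); Xp[j1, i1] += eps
numeric = (g(Xp) - g(X)) / eps
assert np.allclose(analytic, numeric, atol=1e-4)
```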
### Gradient for \(u(X)_{i_{0}}\)

**Lemma B.11**.: _Under the following conditions_

* _Let_ \(u(X)_{i_{0}}\) _be defined as Definition B.4_

_we have_

* **Part 1.** _For each_ \(i_{0}=i_{1}\in[n]\)_,_ \(j_{1}\in[d]\)
\[\underbrace{\frac{\mathrm{d}u(X)_{i_{0}}}{\mathrm{d}x_{i_{1},j_{1}}}}_{n\times 1}=u(X)_{i_{0}}\circ(e_{i_{0}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle+X^{\top}W_{*,j_{1}})\]
* **Part 2.** _For each_ \(i_{0}\neq i_{1}\in[n]\)_,_ \(j_{1}\in[d]\)
\[\underbrace{\frac{\mathrm{d}u(X)_{i_{0}}}{\mathrm{d}x_{i_{1},j_{1}}}}_{n\times 1}=\underbrace{u(X)_{i_{0}}}_{n\times 1}\circ(e_{i_{1}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle)\]

Proof.: **Proof of Part 1.**

\[\frac{\mathrm{d}u(X)_{i_{0}}}{\mathrm{d}x_{i_{1},j_{1}}}=\frac{\mathrm{d}\exp(X^{\top}WX_{*,i_{0}})}{\mathrm{d}x_{i_{1},j_{1}}}=\exp(X^{\top}WX_{*,i_{0}})\circ\frac{\mathrm{d}X^{\top}WX_{*,i_{0}}}{\mathrm{d}x_{i_{1},j_{1}}}=\underbrace{u(X)_{i_{0}}}_{n\times 1}\circ(\underbrace{e_{i_{0}}}_{n\times 1}\cdot\underbrace{\langle W_{j_{1},*},X_{*,i_{0}}\rangle}_{\mathrm{scalar}}+\underbrace{X^{\top}W_{*,j_{1}}}_{n\times 1})\]

where the 1st step follows from the definition of \(u(X)_{i_{0}}\) (see Definition B.4), the 2nd step follows from Fact B.2, and the 3rd step follows from Lemma B.10.

**Proof of Part 2.**

\[\frac{\mathrm{d}u(X)_{i_{0}}}{\mathrm{d}x_{i_{1},j_{1}}}=\frac{\mathrm{d}\exp(X^{\top}WX_{*,i_{0}})}{\mathrm{d}x_{i_{1},j_{1}}}=\exp(X^{\top}WX_{*,i_{0}})\circ\frac{\mathrm{d}X^{\top}WX_{*,i_{0}}}{\mathrm{d}x_{i_{1},j_{1}}}=\underbrace{u(X)_{i_{0}}}_{n\times 1}\circ(\underbrace{e_{i_{1}}}_{n\times 1}\cdot\underbrace{\langle W_{j_{1},*},X_{*,i_{0}}\rangle}_{\mathrm{scalar}})\]

where the 1st step follows from the definition of \(u(X)_{i_{0}}\) (see Definition B.4), the 2nd step follows from Fact B.2, and the 3rd step follows from Lemma B.10.
### Gradient Computation for \(\alpha(X)_{i_{0}}\)

**Lemma B.12** (A generalization of Lemma 5.6 in [10]).: _If the following conditions hold_

* _Let_ \(\alpha(X)_{i_{0}}\) _be defined as Definition B.5_

_then we have_

* **Part 1.** _For each_ \(i_{0}=i_{1}\in[n]\)_,_ \(j_{1}\in[d]\)
\[\underbrace{\frac{\mathrm{d}\alpha(X)_{i_{0}}}{\mathrm{d}x_{i_{1},j_{1}}}}_{\mathrm{scalar}}=u(X)_{i_{0},i_{0}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle+\langle u(X)_{i_{0}},X^{\top}W_{*,j_{1}}\rangle\]
* **Part 2.** _For each_ \(i_{0}\neq i_{1}\in[n]\)_,_ \(j_{1}\in[d]\)
\[\underbrace{\frac{\mathrm{d}\alpha(X)_{i_{0}}}{\mathrm{d}x_{i_{1},j_{1}}}}_{\mathrm{scalar}}=u(X)_{i_{0},i_{1}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle\]

Proof.: **Proof of Part 1.**

\[\frac{\mathrm{d}\alpha(X)_{i_{0}}}{\mathrm{d}x_{i_{1},j_{1}}}=\frac{\mathrm{d}\langle u(X)_{i_{0}},\mathbf{1}_{n}\rangle}{\mathrm{d}x_{i_{1},j_{1}}}=\langle\frac{\mathrm{d}u(X)_{i_{0}}}{\mathrm{d}x_{i_{1},j_{1}}},\mathbf{1}_{n}\rangle=\langle u(X)_{i_{0}}\circ(e_{i_{0}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle+X^{\top}W_{*,j_{1}}),\mathbf{1}_{n}\rangle=\langle u(X)_{i_{0}}\circ e_{i_{0}},\mathbf{1}_{n}\rangle\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle+\langle u(X)_{i_{0}}\circ(X^{\top}W_{*,j_{1}}),\mathbf{1}_{n}\rangle=\langle u(X)_{i_{0}},e_{i_{0}}\rangle\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle+\langle u(X)_{i_{0}},X^{\top}W_{*,j_{1}}\rangle=u(X)_{i_{0},i_{0}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle+\langle u(X)_{i_{0}},X^{\top}W_{*,j_{1}}\rangle\]

where the 1st step follows from the definition of \(\alpha(X)_{i_{0}}\) (see Definition B.5), the 2nd step follows from Fact B.2, the 3rd step follows from Lemma B.11, the 4th step is a rearrangement, the 5th step is derived from Fact B.1, and the last step is by the definition of \(u(X)_{i_{0},i_{0}}\) (see Definition B.3).

**Proof of Part 2.**

\[\frac{\mathrm{d}\alpha(X)_{i_{0}}}{\mathrm{d}x_{i_{1},j_{1}}}=\frac{\mathrm{d}\langle u(X)_{i_{0}},\mathbf{1}_{n}\rangle}{\mathrm{d}x_{i_{1},j_{1}}}=\langle\frac{\mathrm{d}u(X)_{i_{0}}}{\mathrm{d}x_{i_{1},j_{1}}},\mathbf{1}_{n}\rangle=\langle u(X)_{i_{0}}\circ(e_{i_{1}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle),\mathbf{1}_{n}\rangle=u(X)_{i_{0},i_{1}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle\]

where the 1st step follows from the definition of \(\alpha(X)_{i_{0}}\) (see Definition B.5), the 2nd step follows from Fact B.2, the 3rd step follows from Lemma B.11, and the 4th step follows from Fact B.1.
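Lemma B.12 (Part 1) admits the same style of numerical sanity check; in the sketch below (names illustrative), the input is scaled down so that the exponentials stay well-conditioned:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n, eps = 3, 5, 1e-6
X = rng.normal(size=(d, n)) * 0.1     # small entries keep exp() tame
W = rng.normal(size=(d, d))
i0 = i1 = 2; j1 = 0                   # Part 1: i0 = i1

def alpha(X):                         # alpha(X)_{i0} = <u(X)_{i0}, 1_n>
    return np.exp(X.T @ W @ X[:, i0]).sum()

u = np.exp(X.T @ W @ X[:, i0])
# Lemma B.12, Part 1: u_{i0,i0} * <W_{j1,*}, X_{*,i0}> + <u_{i0}, X^T W_{*,j1}>
analytic = u[i0] * (W[j1, :] @ X[:, i0]) + u @ (X.T @ W[:, j1])

Xp = X.copy(); Xp[j1, i1] += eps
numeric = (alpha(Xp) - alpha(X)) / eps
assert np.isclose(analytic, numeric, rtol=1e-3, atol=1e-6)
```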
### Gradient Computation for \(\alpha(X)_{i_{0}}^{-1}\)

**Lemma B.13** (A generalization of Lemma 5.6 in [10]).: _If the following conditions hold_

* _Let_ \(\alpha(X)_{i_{0}}\) _be defined as Definition B.5_

_we have_

* **Part 1.** _For_ \(i_{0}=i_{1}\in[n]\)_,_ \(j_{1}\in[d]\)
\[\underbrace{\frac{\mathrm{d}\alpha(X)_{i_{0}}^{-1}}{\mathrm{d}x_{i_{1},j_{1}}}}_{\mathrm{scalar}}=-\alpha(X)_{i_{0}}^{-1}\cdot(f(X)_{i_{0},i_{0}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle+\langle f(X)_{i_{0}},X^{\top}W_{*,j_{1}}\rangle)\]
* **Part 2.** _For_ \(i_{0}\neq i_{1}\in[n]\)_,_ \(j_{1}\in[d]\)
\[\underbrace{\frac{\mathrm{d}\alpha(X)_{i_{0}}^{-1}}{\mathrm{d}x_{i_{1},j_{1}}}}_{\mathrm{scalar}}=-\alpha(X)_{i_{0}}^{-1}\cdot f(X)_{i_{0},i_{1}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle\]

Proof.: **Proof of Part 1.**

\[\frac{\mathrm{d}\alpha(X)_{i_{0}}^{-1}}{\mathrm{d}x_{i_{1},j_{1}}}=-1\cdot(\alpha(X)_{i_{0}})^{-2}\cdot\frac{\mathrm{d}\alpha(X)_{i_{0}}}{\mathrm{d}x_{i_{1},j_{1}}}=-(\alpha(X)_{i_{0}})^{-2}\cdot(u(X)_{i_{0},i_{0}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle+\langle u(X)_{i_{0}},X^{\top}W_{*,j_{1}}\rangle)=-\alpha(X)_{i_{0}}^{-1}\cdot(f(X)_{i_{0},i_{0}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle+\langle f(X)_{i_{0}},X^{\top}W_{*,j_{1}}\rangle)\]

where the 1st step follows from Fact B.2, the 2nd step follows from Lemma B.12, and the 3rd step uses \(f(X)_{i_{0}}=\alpha(X)_{i_{0}}^{-1}u(X)_{i_{0}}\) (Definition B.6).

**Proof of Part 2.**

\[\frac{\mathrm{d}\alpha(X)_{i_{0}}^{-1}}{\mathrm{d}x_{i_{1},j_{1}}}=-1\cdot(\alpha(X)_{i_{0}})^{-2}\cdot\frac{\mathrm{d}\alpha(X)_{i_{0}}}{\mathrm{d}x_{i_{1},j_{1}}}=-(\alpha(X)_{i_{0}})^{-2}\cdot u(X)_{i_{0},i_{1}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle=-\alpha(X)_{i_{0}}^{-1}\cdot f(X)_{i_{0},i_{1}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle\]

where the 1st step follows from Fact B.2, the 2nd step follows from Lemma B.12, and the 3rd step uses Definition B.6.
### Gradient for \(f(X)_{i_{0}}\)

**Lemma B.14**.: _If the following conditions hold_

* _Let_ \(f(X)_{i_{0}}\) _be defined as Definition B.6_

_Then, we have_
* **Part 1.** _For all_ \(i_{0}=i_{1}\in[n]\)_,_ \(j_{1}\in[d]\)
\[\underbrace{\frac{\mathrm{d}f(X)_{i_{0}}}{\mathrm{d}x_{i_{1},j_{1}}}}_{n\times 1}=-\underbrace{f(X)_{i_{0}}}_{n\times 1}\cdot\underbrace{(f(X)_{i_{0},i_{0}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle+\langle f(X)_{i_{0}},X^{\top}W_{*,j_{1}}\rangle)}_{\mathrm{scalar}}+\underbrace{f(X)_{i_{0}}\circ(e_{i_{0}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle+X^{\top}W_{*,j_{1}})}_{n\times 1}\]
* **Part 2.** _For all_ \(i_{0}\neq i_{1}\in[n]\)_,_ \(j_{1}\in[d]\)
\[\underbrace{\frac{\mathrm{d}f(X)_{i_{0}}}{\mathrm{d}x_{i_{1},j_{1}}}}_{n\times 1}=-\underbrace{f(X)_{i_{0}}}_{n\times 1}\cdot\underbrace{f(X)_{i_{0},i_{1}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle}_{\mathrm{scalar}}+\underbrace{f(X)_{i_{0}}\circ(e_{i_{1}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle)}_{n\times 1}\]

Proof.: **Proof of Part 1.**

\[\frac{\mathrm{d}f(X)_{i_{0}}}{\mathrm{d}x_{i_{1},j_{1}}}=\frac{\mathrm{d}\alpha(X)_{i_{0}}^{-1}u(X)_{i_{0}}}{\mathrm{d}x_{i_{1},j_{1}}}=u(X)_{i_{0}}\cdot\frac{\mathrm{d}\alpha(X)_{i_{0}}^{-1}}{\mathrm{d}x_{i_{1},j_{1}}}+\alpha(X)_{i_{0}}^{-1}\cdot\frac{\mathrm{d}u(X)_{i_{0}}}{\mathrm{d}x_{i_{1},j_{1}}}\]
\[=-u(X)_{i_{0}}\cdot(\alpha(X)_{i_{0}})^{-1}\cdot(f(X)_{i_{0},i_{0}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle+\langle f(X)_{i_{0}},X^{\top}W_{*,j_{1}}\rangle)+\alpha(X)_{i_{0}}^{-1}\cdot(u(X)_{i_{0}}\circ(e_{i_{0}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle+X^{\top}W_{*,j_{1}}))\]
\[=-f(X)_{i_{0}}\cdot(f(X)_{i_{0},i_{0}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle+\langle f(X)_{i_{0}},X^{\top}W_{*,j_{1}}\rangle)+f(X)_{i_{0}}\circ(e_{i_{0}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle+X^{\top}W_{*,j_{1}})\]

where the 1st step follows from the definition of \(f(X)_{i_{0}}\) (see Definition B.6), the 2nd step follows from Fact B.2, the 3rd step follows from Lemma B.13 and Lemma B.11, and the last step again uses Definition B.6.
**Proof of Part 2.**

\[\frac{\mathrm{d}f(X)_{i_{0}}}{\mathrm{d}x_{i_{1},j_{1}}}=u(X)_{i_{0}}\cdot\frac{\mathrm{d}\alpha(X)_{i_{0}}^{-1}}{\mathrm{d}x_{i_{1},j_{1}}}+\alpha(X)_{i_{0}}^{-1}\cdot\frac{\mathrm{d}u(X)_{i_{0}}}{\mathrm{d}x_{i_{1},j_{1}}}\]
\[=-u(X)_{i_{0}}\cdot(\alpha(X)_{i_{0}})^{-2}\cdot u(X)_{i_{0},i_{1}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle+\alpha(X)_{i_{0}}^{-1}\cdot(u(X)_{i_{0}}\circ(e_{i_{1}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle))\]
\[=-f(X)_{i_{0}}\cdot f(X)_{i_{0},i_{1}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle+f(X)_{i_{0}}\circ(e_{i_{1}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle)\]

where the 1st step follows from the definition of \(f(X)_{i_{0}}\) (see Definition B.6) together with Fact B.2, the 2nd step follows from Lemma B.13 and Lemma B.11, and the last step again uses Definition B.6.

### Gradient for \(h(X)_{j_{0}}\)

**Lemma B.15**.: _If the following conditions hold_

* _Let_ \(h(X)_{j_{0}}\) _be defined as Definition B.7_

_Then, for all \(i_{1}\in[n]\), \(j_{0},j_{1}\in[d]\), we have_

\[\underbrace{\frac{\mathrm{d}h(X)_{j_{0}}}{\mathrm{d}x_{i_{1},j_{1}}}}_{n\times 1}=e_{i_{1}}\cdot v_{j_{1},j_{0}}\]

Proof.:

\[\frac{\mathrm{d}h(X)_{j_{0}}}{\mathrm{d}x_{i_{1},j_{1}}}=\frac{\mathrm{d}X^{\top}V_{*,j_{0}}}{\mathrm{d}x_{i_{1},j_{1}}}=\frac{\mathrm{d}X^{\top}}{\mathrm{d}x_{i_{1},j_{1}}}\cdot V_{*,j_{0}}=e_{i_{1}}\cdot e_{j_{1}}^{\top}\cdot V_{*,j_{0}}=e_{i_{1}}\cdot v_{j_{1},j_{0}}\]

where the first step is by the definition of \(h(X)_{j_{0}}\) (see Definition B.7), the 2nd and 3rd steps are by the differentiation rules, and the 4th step is simple algebra.
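The softmax gradient of Lemma B.14 is easy to get wrong, so a numerical check is worthwhile. The sketch below (names illustrative) verifies Part 2 against a finite difference:

```python
import numpy as np

rng = np.random.default_rng(2)
d, n, eps = 3, 5, 1e-6
X = rng.normal(size=(d, n)) * 0.1
W = rng.normal(size=(d, d))
i0, i1, j1 = 1, 3, 2                  # Part 2: i0 != i1

def f(X):                             # softmax probability f(X)_{i0}
    u = np.exp(X.T @ W @ X[:, i0])
    return u / u.sum()

fX = f(X)
w = W[j1, :] @ X[:, i0]               # w = <W_{j1,*}, X_{*,i0}>
e_i1 = np.eye(n)[:, i1]
# Lemma B.14, Part 2: -f * f_{i0,i1} * w + f o (e_{i1} * w)
analytic = -fX * fX[i1] * w + fX * (e_i1 * w)

Xp = X.copy(); Xp[j1, i1] += eps
numeric = (f(Xp) - f(X)) / eps
assert np.allclose(analytic, numeric, atol=1e-4)
```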
### Gradient for \(c(X)_{i_{0},j_{0}}\)

**Lemma B.16**.: _If the following conditions hold_

* _Let_ \(c(X)_{i_{0},j_{0}}\) _be defined as Definition B.8_
* _Let_ \(s(X)_{i_{0},j_{0}}:=\langle f(X)_{i_{0}},h(X)_{j_{0}}\rangle\)

_Then, we have_

* **Part 1.** _For all_ \(i_{0}=i_{1}\in[n]\)_,_ \(j_{0},j_{1}\in[d]\)
\[\frac{\mathrm{d}c(X)_{i_{0},j_{0}}}{\mathrm{d}x_{i_{1},j_{1}}}=C_{1}(X)+C_{2}(X)+C_{3}(X)+C_{4}(X)+C_{5}(X)\]
_where we have the definitions:_
  * \(C_{1}(X):=-s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{0}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle\)
  * \(C_{2}(X):=-s(X)_{i_{0},j_{0}}\cdot\langle f(X)_{i_{0}},X^{\top}W_{*,j_{1}}\rangle\)
  * \(C_{3}(X):=f(X)_{i_{0},i_{0}}\cdot h(X)_{j_{0},i_{0}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle\)
  * \(C_{4}(X):=\langle f(X)_{i_{0}}\circ(X^{\top}W_{*,j_{1}}),h(X)_{j_{0}}\rangle\)
  * \(C_{5}(X):=f(X)_{i_{0},i_{0}}\cdot v_{j_{1},j_{0}}\)
* **Part 2.** _For all_ \(i_{0}\neq i_{1}\in[n]\)_,_ \(j_{0},j_{1}\in[d]\)
\[\frac{\mathrm{d}c(X)_{i_{0},j_{0}}}{\mathrm{d}x_{i_{1},j_{1}}}=C_{6}(X)+C_{7}(X)+C_{8}(X)\]
_where we have the definitions:_
  * \(C_{6}(X):=-s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{1}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle\) _(corresponding to_ \(C_{1}(X)\)_)_
  * \(C_{7}(X):=f(X)_{i_{0},i_{1}}\cdot h(X)_{j_{0},i_{1}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle\) _(corresponding to_ \(C_{3}(X)\)_)_
  * \(C_{8}(X):=f(X)_{i_{0},i_{1}}\cdot v_{j_{1},j_{0}}\) _(corresponding to_ \(C_{5}(X)\)_)_

Proof.: **Proof of Part 1.**

\[\frac{\mathrm{d}c(X)_{i_{0},j_{0}}}{\mathrm{d}x_{i_{1},j_{1}}}=\frac{\mathrm{d}(\langle f(X)_{i_{0}},h(X)_{j_{0}}\rangle-b_{i_{0},j_{0}})}{\mathrm{d}x_{i_{1},j_{1}}}=\frac{\mathrm{d}\langle f(X)_{i_{0}},h(X)_{j_{0}}\rangle}{\mathrm{d}x_{i_{1},j_{1}}}=\langle\frac{\mathrm{d}f(X)_{i_{0}}}{\mathrm{d}x_{i_{1},j_{1}}},h(X)_{j_{0}}\rangle+\langle f(X)_{i_{0}},\frac{\mathrm{d}h(X)_{j_{0}}}{\mathrm{d}x_{i_{1},j_{1}}}\rangle\]
\[=\langle-f(X)_{i_{0}}\cdot(f(X)_{i_{0},i_{0}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle+\langle f(X)_{i_{0}},X^{\top}W_{*,j_{1}}\rangle)+f(X)_{i_{0}}\circ(e_{i_{0}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle+X^{\top}W_{*,j_{1}}),h(X)_{j_{0}}\rangle+\langle f(X)_{i_{0}},e_{i_{0}}\cdot v_{j_{1},j_{0}}\rangle\]
\[=-s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{0}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle-s(X)_{i_{0},j_{0}}\cdot\langle f(X)_{i_{0}},X^{\top}W_{*,j_{1}}\rangle+f(X)_{i_{0},i_{0}}\cdot h(X)_{j_{0},i_{0}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle+\langle f(X)_{i_{0}}\circ(X^{\top}W_{*,j_{1}}),h(X)_{j_{0}}\rangle+f(X)_{i_{0},i_{0}}\cdot v_{j_{1},j_{0}}\]
\[:=C_{1}(X)+C_{2}(X)+C_{3}(X)+C_{4}(X)+C_{5}(X)\]

**Proof of Part 2.**

\[\frac{\mathrm{d}c(X)_{i_{0},j_{0}}}{\mathrm{d}x_{i_{1},j_{1}}}=\langle\frac{\mathrm{d}f(X)_{i_{0}}}{\mathrm{d}x_{i_{1},j_{1}}},h(X)_{j_{0}}\rangle+\langle f(X)_{i_{0}},\frac{\mathrm{d}h(X)_{j_{0}}}{\mathrm{d}x_{i_{1},j_{1}}}\rangle\]
\[=\langle-f(X)_{i_{0}}\cdot f(X)_{i_{0},i_{1}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle+f(X)_{i_{0}}\circ(e_{i_{1}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle),h(X)_{j_{0}}\rangle+\langle f(X)_{i_{0}},e_{i_{1}}\cdot v_{j_{1},j_{0}}\rangle\]
\[=-s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{1}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle+f(X)_{i_{0},i_{1}}\cdot h(X)_{j_{0},i_{1}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle+f(X)_{i_{0},i_{1}}\cdot v_{j_{1},j_{0}}\]
\[:=C_{6}(X)+C_{7}(X)+C_{8}(X)\]

where, in both parts, the first step is by the definition of \(c(X)_{i_{0},j_{0}}\) (see Definition B.8), the 2nd step is because \(b_{i_{0},j_{0}}\) is independent of \(X\), the 3rd step is by Fact B.2, the 4th step uses Lemma B.14 and Lemma B.15, and the remaining steps are rearrangements of terms.
### Gradient for \(L(X)\)

**Lemma B.17**.: _If the following holds_

* _Let_ \(L(X)\) _be defined as Definition B.9_

_then for \(i_{1}\in[n]\), \(j_{1}\in[d]\), we have_

\[\frac{\mathrm{d}L(X)}{\mathrm{d}x_{i_{1},j_{1}}}=2\sum_{i_{0}=1}^{n}\sum_{j_{0}=1}^{d}c(X)_{i_{0},j_{0}}\cdot\frac{\mathrm{d}c(X)_{i_{0},j_{0}}}{\mathrm{d}x_{i_{1},j_{1}}}\]

Proof.: The result follows directly from the chain rule.

## Appendix C Hessian case 1: \(i_{0}=i_{1}\)

In this section, we provide the Hessian analysis for the first case. In Sections C.1 through C.8, we calculate the derivatives of several important terms. In Sections C.9, C.10, C.11, C.12 and C.13 we calculate the derivatives of \(C_{1},C_{2},C_{3},C_{4}\) and \(C_{5}\), respectively. Finally, in Section C.14 we calculate \(\frac{\mathrm{d}^{2}c(X)_{i_{0},j_{0}}}{\mathrm{d}x_{i_{1},j_{1}}\mathrm{d}x_{i_{2},j_{2}}}\).

Now, we list some simplified notations which will be used in the following sections.

**Definition C.1**.: _We have the following definitions to simplify the expressions._

* \(s(X)_{i,j}:=\langle f(X)_{i},h(X)_{j}\rangle\)
* \(w(X)_{i,j}:=\langle W_{j,*},X_{*,i}\rangle\)
* \(z(X)_{i,j}:=\langle f(X)_{i},X^{\top}W_{*,j}\rangle\)
* \(z(X)_{i}:=WX\cdot f(X)_{i}\)
* \(w(X)_{i,*}:=WX_{*,i}\)

### Derivative of Scalar Function \(w(X)_{i_{0},j_{1}}\)

**Lemma C.2**.: _We have_

* **Part 1.** _For_ \(i_{0}=i_{1}=i_{2}\in[n]\)_,_ \(j_{1},j_{2}\in[d]\)
\[\frac{\mathrm{d}w(X)_{i_{0},j_{1}}}{\mathrm{d}x_{i_{2},j_{2}}}=w_{j_{1},j_{2}}\]
* **Part 2.** _For_ \(i_{0}=i_{1}\neq i_{2}\in[n]\)_,_ \(j_{1},j_{2}\in[d]\)
\[\frac{\mathrm{d}w(X)_{i_{0},j_{1}}}{\mathrm{d}x_{i_{2},j_{2}}}=0\]

Proof.: **Proof of Part 1.**
\[\frac{\mathrm{d}w(X)_{i_{0},j_{1}}}{\mathrm{d}x_{i_{2},j_{2}}}=\langle W_{j_{1},*},\frac{\mathrm{d}X_{*,i_{0}}}{\mathrm{d}x_{i_{2},j_{2}}}\rangle=\langle W_{j_{1},*},e_{j_{2}}\rangle=w_{j_{1},j_{2}}\]
where the first and 2nd steps are by Fact B.2, and the 3rd step is simple algebra.

**Proof of Part 2.**
\[\frac{\mathrm{d}w(X)_{i_{0},j_{1}}}{\mathrm{d}x_{i_{2},j_{2}}}=\langle W_{j_{1},*},\frac{\mathrm{d}X_{*,i_{0}}}{\mathrm{d}x_{i_{2},j_{2}}}\rangle=\langle W_{j_{1},*},\mathbf{0}_{d}\rangle=0\]
where the first step is by Fact B.2, and the 2nd step is because \(i_{0}\neq i_{2}\).

### Derivative of Vector Function \(X^{\top}W_{*,j_{1}}\)

**Lemma C.3**.: _We have_

* **Part 1.** _For_ \(i_{0}=i_{1}=i_{2}\in[n]\)_,_ \(j_{1},j_{2}\in[d]\)
\[\frac{\mathrm{d}X^{\top}W_{*,j_{1}}}{\mathrm{d}x_{i_{2},j_{2}}}=e_{i_{0}}\cdot w_{j_{2},j_{1}}\]
* **Part 2.** _For_ \(i_{0}=i_{1}\neq i_{2}\in[n]\)_,_ \(j_{1},j_{2}\in[d]\)
\[\frac{\mathrm{d}X^{\top}W_{*,j_{1}}}{\mathrm{d}x_{i_{2},j_{2}}}=e_{i_{2}}\cdot w_{j_{2},j_{1}}\]

Proof.: **Proof of Part 1.**
\[\frac{\mathrm{d}X^{\top}W_{*,j_{1}}}{\mathrm{d}x_{i_{2},j_{2}}}=\frac{\mathrm{d}X^{\top}}{\mathrm{d}x_{i_{2},j_{2}}}\cdot W_{*,j_{1}}=e_{i_{2}}e_{j_{2}}^{\top}\cdot W_{*,j_{1}}=e_{i_{2}}\cdot w_{j_{2},j_{1}}=e_{i_{0}}\cdot w_{j_{2},j_{1}}\]
where the first and 2nd steps are by Fact B.2, the 3rd step is simple algebra, and the 4th step holds since \(i_{0}=i_{2}\).
**Proof of Part 2.**
\[\frac{\mathrm{d}X^{\top}W_{*,j_{1}}}{\mathrm{d}x_{i_{2},j_{2}}}=\frac{\mathrm{d}X^{\top}}{\mathrm{d}x_{i_{2},j_{2}}}\cdot W_{*,j_{1}}=e_{i_{2}}e_{j_{2}}^{\top}\cdot W_{*,j_{1}}=e_{i_{2}}\cdot w_{j_{2},j_{1}}\]
where the first and 2nd steps are by Fact B.2, and the 3rd step is simple algebra.

### Derivative of Scalar Function \(f(X)_{i_{0},i_{0}}\)

**Lemma C.4**.: _If the following holds:_

* _Let_ \(f(X)_{i_{0}}\) _be defined as Definition B.6_

_We have_

* **Part 1.** _For_ \(i_{0}=i_{2}\in[n]\)_,_ \(j_{2}\in[d]\)
\[\frac{\mathrm{d}f(X)_{i_{0},i_{0}}}{\mathrm{d}x_{i_{2},j_{2}}}=-f(X)_{i_{0},i_{0}}\cdot(f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{2}}+\langle f(X)_{i_{0}},X^{\top}W_{*,j_{2}}\rangle)+f(X)_{i_{0},i_{0}}\cdot\langle W_{j_{2},*}+W_{*,j_{2}},X_{*,i_{0}}\rangle\]
* **Part 2.** _For_ \(i_{0}\neq i_{2}\in[n]\)_,_ \(j_{2}\in[d]\)
\[\frac{\mathrm{d}f(X)_{i_{0},i_{0}}}{\mathrm{d}x_{i_{2},j_{2}}}=-f(X)_{i_{0},i_{0}}\cdot f(X)_{i_{0},i_{2}}\cdot w(X)_{i_{0},j_{2}}\]

Proof.: **Proof of Part 1.**
\[\frac{\mathrm{d}f(X)_{i_{0},i_{0}}}{\mathrm{d}x_{i_{2},j_{2}}}=\big{(}-(\alpha(X)_{i_{0}})^{-1}\cdot f(X)_{i_{0}}\cdot(u(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{2}}+\langle u(X)_{i_{0}},X^{\top}W_{*,j_{2}}\rangle)+f(X)_{i_{0}}\circ(e_{i_{0}}\cdot w(X)_{i_{0},j_{2}}+X^{\top}W_{*,j_{2}})\big{)}_{i_{0}}\]
\[=-(\alpha(X)_{i_{0}})^{-1}\cdot f(X)_{i_{0},i_{0}}\cdot(u(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{2}}+\langle u(X)_{i_{0}},X^{\top}W_{*,j_{2}}\rangle)+f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{2}}+f(X)_{i_{0},i_{0}}\cdot\langle W_{*,j_{2}},X_{*,i_{0}}\rangle\]
\[=-f(X)_{i_{0},i_{0}}\cdot(f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{2}}+\langle f(X)_{i_{0}},X^{\top}W_{*,j_{2}}\rangle)+f(X)_{i_{0},i_{0}}\cdot\langle W_{j_{2},*}+W_{*,j_{2}},X_{*,i_{0}}\rangle\]
where the first step uses Lemma B.14 for \(i_{0}=i_{2}\), the 2nd step takes the \(i_{0}\)-th entry of each vector, and the last step uses the definition of \(f(X)_{i_{0}}\) (see Definition B.6) together with \(w(X)_{i_{0},j_{2}}=\langle W_{j_{2},*},X_{*,i_{0}}\rangle\).

**Proof of Part 2.**
\[\frac{\mathrm{d}f(X)_{i_{0},i_{0}}}{\mathrm{d}x_{i_{2},j_{2}}}=\big{(}-(\alpha(X)_{i_{0}})^{-1}\cdot f(X)_{i_{0}}\cdot u(X)_{i_{0},i_{2}}\cdot w(X)_{i_{0},j_{2}}+f(X)_{i_{0}}\circ(e_{i_{2}}\cdot w(X)_{i_{0},j_{2}})\big{)}_{i_{0}}\]
\[=-(\alpha(X)_{i_{0}})^{-1}\cdot f(X)_{i_{0},i_{0}}\cdot u(X)_{i_{0},i_{2}}\cdot w(X)_{i_{0},j_{2}}\]
\[=-f(X)_{i_{0},i_{0}}\cdot f(X)_{i_{0},i_{2}}\cdot w(X)_{i_{0},j_{2}}\]
where the first step uses Lemma B.14 for \(i_{0}\neq i_{2}\), the 2nd step takes the \(i_{0}\)-th entry (the term \(f(X)_{i_{0}}\circ(e_{i_{2}}\cdot w(X)_{i_{0},j_{2}})\) vanishes at entry \(i_{0}\) since \(i_{0}\neq i_{2}\)), and the last step is by the definition of \(f(X)_{i_{0}}\) (see Definition B.6).
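As with the earlier lemmas, Lemma C.4 can be verified against a finite difference; the sketch below (names illustrative) checks Part 2:

```python
import numpy as np

rng = np.random.default_rng(3)
d, n, eps = 3, 5, 1e-6
X = rng.normal(size=(d, n)) * 0.1
W = rng.normal(size=(d, d))
i0, i2, j2 = 1, 4, 0                  # Part 2: i0 != i2

def f_i0_i0(X):                       # diagonal softmax entry f(X)_{i0,i0}
    u = np.exp(X.T @ W @ X[:, i0])
    return u[i0] / u.sum()

u = np.exp(X.T @ W @ X[:, i0]); fX = u / u.sum()
w = W[j2, :] @ X[:, i0]               # w(X)_{i0,j2}
analytic = -fX[i0] * fX[i2] * w       # Lemma C.4, Part 2

Xp = X.copy(); Xp[j2, i2] += eps
numeric = (f_i0_i0(Xp) - f_i0_i0(X)) / eps
assert np.isclose(analytic, numeric, rtol=1e-3, atol=1e-6)
```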
### Derivative of Scalar Function \(h(X)_{j_{0},i_{0}}\)

**Lemma C.5**.: _If the following holds:_

* _Let_ \(h(X)_{j_{0}}\) _be defined as Definition_ B.7

_We have_

* **Part 1** _For_ \(i_{0}=i_{2}\in[n]\)_,_ \(j_{1},j_{2}\in[d]\)

\[\frac{\mathrm{d}h(X)_{j_{0},i_{0}}}{\mathrm{d}x_{i_{2},j_{2}}}=v_{j_{2},j_{0}}\]

* **Part 2** _For_ \(i_{0}\neq i_{2}\in[n]\)_,_ \(j_{1},j_{2}\in[d]\)

\[\frac{\mathrm{d}h(X)_{j_{0},i_{0}}}{\mathrm{d}x_{i_{2},j_{2}}}=0\]

Proof.: **Proof of Part 1**

\[\frac{\mathrm{d}h(X)_{j_{0},i_{0}}}{\mathrm{d}x_{i_{2},j_{2}}} =(e_{i_{2}}\cdot v_{j_{2},j_{0}})_{i_{0}}\]
\[=v_{j_{2},j_{0}}\]

where the first step is by Lemma B.15, the 2nd step is because \(i_{0}=i_{2}\).

**Proof of Part 2**

\[\frac{\mathrm{d}h(X)_{j_{0},i_{0}}}{\mathrm{d}x_{i_{2},j_{2}}} =(e_{i_{2}}\cdot v_{j_{2},j_{0}})_{i_{0}}\]
\[=0\]

where the first step is by Lemma B.15, the 2nd step is because \(i_{0}\neq i_{2}\).

### Derivative of Scalar Function \(z(X)_{i_{0},j_{1}}\)

**Lemma C.6**.: _If the following holds:_

* _Let_ \(f(X)_{i_{0}}\) _be defined as Definition_ B.6
* _Let_ \(z(X)_{i_{0},j_{1}}:=\langle f(X)_{i_{0}},X^{\top}W_{*,j_{1}}\rangle\)
* _Let_ \(w(X)_{i_{0},j_{1}}=\langle W_{j_{1},*},X_{*,i_{0}}\rangle\)

_We have_

* **Part 1** _For_ \(i_{0}=i_{1}=i_{2}\in[n]\)_,_ \(j_{1},j_{2}\in[d]\)

\[\frac{\mathrm{d}z(X)_{i_{0},j_{1}}}{\mathrm{d}x_{i_{2},j_{2}}}\]
\[= -z(X)_{i_{0},j_{1}}\cdot f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{2}}\]
\[-z(X)_{i_{0},j_{1}}\cdot z(X)_{i_{0},j_{2}}\]
\[+f(X)_{i_{0},i_{0}}\cdot\langle W_{*,j_{1}},X_{*,i_{0}}\rangle\cdot w(X)_{i_{0},j_{2}}\]
\[+\langle f(X)_{i_{0}}\circ X^{\top}W_{*,j_{2}},X^{\top}W_{*,j_{1}}\rangle\]
\[+f(X)_{i_{0},i_{0}}\cdot w_{j_{2},j_{1}}\]

* **Part 2** _For \(i_{0}=i_{1}\neq i_{2}\in[n]\), \(j_{1},j_{2}\in[d]\)_

\[\frac{\mathrm{d}\langle f(X)_{i_{0}},X^{\top}W_{*,j_{1}}\rangle}{\mathrm{d}x_{i_{2},j_{2}}}\]
\[= -z(X)_{i_{0},j_{1}}\cdot f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{2}}\]
\[+f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{2}}\cdot\langle W_{*,j_{1}},X_{*,i_{0}}\rangle\]
\[+f(X)_{i_{0},i_{0}}\cdot w_{j_{2},j_{1}}\]

Proof.: **Proof of Part 1**

\[\frac{\mathrm{d}\langle f(X)_{i_{0}},X^{\top}W_{*,j_{1}}\rangle}{\mathrm{d}x_{i_{2},j_{2}}}\]
\[= \langle\frac{\mathrm{d}f(X)_{i_{0}}}{\mathrm{d}x_{i_{2},j_{2}}},X^{\top}W_{*,j_{1}}\rangle+\langle f(X)_{i_{0}},\frac{\mathrm{d}X^{\top}W_{*,j_{1}}}{\mathrm{d}x_{i_{2},j_{2}}}\rangle\]
\[= \langle\frac{\mathrm{d}f(X)_{i_{0}}}{\mathrm{d}x_{i_{2},j_{2}}},X^{\top}W_{*,j_{1}}\rangle+\langle f(X)_{i_{0}},e_{i_{0}}\cdot w_{j_{2},j_{1}}\rangle\]
\[= \langle\frac{\mathrm{d}f(X)_{i_{0}}}{\mathrm{d}x_{i_{2},j_{2}}},X^{\top}W_{*,j_{1}}\rangle+f(X)_{i_{0},i_{0}}\cdot w_{j_{2},j_{1}}\]
\[= \langle-(\alpha(X)_{i_{0}})^{-1}\cdot f(X)_{i_{0}}\cdot(u(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{2}}+\langle u(X)_{i_{0}},X^{\top}W_{*,j_{2}}\rangle)\]
\[+f(X)_{i_{0}}\circ(e_{i_{0}}\cdot w(X)_{i_{0},j_{2}}+X^{\top}W_{*,j_{2}}),X^{\top}W_{*,j_{1}}\rangle+f(X)_{i_{0},i_{0}}\cdot w_{j_{2},j_{1}}\]
\[= \langle-f(X)_{i_{0}}\cdot(f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{2}}+\langle f(X)_{i_{0}},X^{\top}W_{*,j_{2}}\rangle)\]
\[+f(X)_{i_{0}}\circ(e_{i_{0}}\cdot w(X)_{i_{0},j_{2}}+X^{\top}W_{*,j_{2}}),X^{\top}W_{*,j_{1}}\rangle+f(X)_{i_{0},i_{0}}\cdot w_{j_{2},j_{1}}\]
\[= -z(X)_{i_{0},j_{1}}\cdot f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{2}}\]
\[-z(X)_{i_{0},j_{1}}\cdot z(X)_{i_{0},j_{2}}\]
\[+f(X)_{i_{0},i_{0}}\cdot\langle W_{*,j_{1}},X_{*,i_{0}}\rangle\cdot w(X)_{i_{0},j_{2}}\]
\[+\langle f(X)_{i_{0}}\circ
X^{\top}W_{*,j_{2}},X^{\top}W_{*,j_{1}}\rangle\] \[+f(X)_{i_{0},i_{0}}\cdot w_{j_{2},j_{1}}\] where the 1st step is by Fact B.2, the 2nd step uses Lemma C.3, the 3rd step is taking the \(i_{0}\)-th entry of \(f(X)_{i_{0}}\), the 4th step uses Lemma B.14, the 5th step is by the definition of \(f(X)_{i_{0}}\) (see Definition B.6). **Proof of Part 2** \[\frac{\mathrm{d}\langle f(X)_{i_{0}},X^{\top}W_{*,j_{1}}\rangle}{ \mathrm{d}x_{i_{2},j_{2}}}\] \[= \langle\frac{\mathrm{d}f(X)_{i_{0}}}{\mathrm{d}x_{i_{2},j_{2}}}, X^{\top}W_{*,j_{1}}\rangle+\langle f(X)_{i_{0}},\frac{\mathrm{d}X^{\top}W_{*,j_{1}}} {\mathrm{d}x_{i_{2},j_{2}}}\rangle\] \[= \langle\frac{\mathrm{d}f(X)_{i_{0}}}{\mathrm{d}x_{i_{2},j_{2}}}, X^{\top}W_{*,j_{1}}\rangle+\langle f(X)_{i_{0}},e_{i_{2}}\cdot w_{j_{2},j_{1}}\rangle\] \[= \langle\frac{\mathrm{d}f(X)_{i_{0}}}{\mathrm{d}x_{i_{2},j_{2}}}, X^{\top}W_{*,j_{1}}\rangle+f(X)_{i_{0},i_{2}}\cdot w_{j_{2},j_{1}}\] \[= \langle-(\alpha(X)_{i_{0}})^{-1}\cdot f(X)_{i_{0}}\cdot u(X)_{i_{0 },i_{0}}\cdot w(X)_{i_{0},j_{2}}\] \[+f(X)_{i_{0}}\circ(e_{i_{0}}\cdot w(X)_{i_{0},j_{2}}),X^{\top}W_{*,j_{1}}\rangle+f(X)_{i_{0},i_{0}}\cdot w_{j_{2},j_{1}}\] \[= (-f(X)_{i_{0},i_{0}}\cdot f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{2}}\] \[+f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{2}},X^{\top}W_{*,j_{1}})+f(X )_{i_{0},i_{0}}\cdot w_{j_{2},j_{1}}\] \[= -z(X)_{i_{0},j_{1}}\cdot f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{2}}\] \[+f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{2}}\cdot\langle W_{*,j_{1 }},X_{*,i_{0}}\rangle\] \[+f(X)_{i_{0},i_{0}}\cdot w_{j_{2},j_{1}}\] where the 1st step is by Fact B.2, the 2nd step uses Lemma C.3, the 3rd step is taking the \(i_{0}\)-th entry of \(f(X)_{i_{0}}\), the 4th step uses Lemma B.14, the last step is by the definition of \(f(X)_{i_{0}}\) (see Definition B.6). 
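The same kind of finite-difference check can be run for Part 1 of Lemma C.6 (the \(i_{0}=i_{1}=i_{2}\) case), under the same assumed forms for \(f\), \(z\) and \(w\) as in the previous sketch.

```python
import numpy as np

# Check of Lemma C.6, Part 1. Assumptions as before:
# f(X)_{i0} = softmax(X^T W X_{*,i0}), z(X)_{i0,j} = <f(X)_{i0}, X^T W_{*,j}>,
# w(X)_{i0,j} = <W_{j,*}, X_{*,i0}>, w_{j,k} = W_{j,k}.
rng = np.random.default_rng(1)
n, d = 6, 3
X = rng.standard_normal((d, n)) * 0.1
W = rng.standard_normal((d, d))

def f_vec(X, i0):
    u = np.exp(X.T @ (W @ X[:, i0]))
    return u / u.sum()

def z_scal(X, i0, j):
    return f_vec(X, i0) @ (X.T @ W[:, j])

i0, j1, j2 = 2, 0, 1
eps = 1e-6
Xp, Xm = X.copy(), X.copy()
Xp[j2, i0] += eps                       # perturb x_{i2,j2} with i2 = i0
Xm[j2, i0] -= eps
fd = (z_scal(Xp, i0, j1) - z_scal(Xm, i0, j1)) / (2 * eps)

fx = f_vec(X, i0)
w2 = W[j2, :] @ X[:, i0]                # w(X)_{i0,j2}
analytic = (-z_scal(X, i0, j1) * fx[i0] * w2
            - z_scal(X, i0, j1) * z_scal(X, i0, j2)
            + fx[i0] * (W[:, j1] @ X[:, i0]) * w2
            + (fx * (X.T @ W[:, j2])) @ (X.T @ W[:, j1])
            + fx[i0] * W[j2, j1])
assert abs(fd - analytic) < 1e-6, (fd, analytic)
print("Lemma C.6, Part 1 agrees with finite differences")
```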
### Derivative of Scalar Function \(f(X)_{i_{0},i_{0}}\cdot h(X)_{j_{0},i_{0}}\)

**Lemma C.7**.: _If the following holds:_

* _Let_ \(f(X)_{i_{0}}\) _be defined as Definition_ B.6
* _Let_ \(h(X)_{j_{0}}\) _be defined as Definition_ B.7

_We have_

* **Part 1** _For_ \(i_{0}=i_{1}=i_{2}\in[n]\)_,_ \(j_{1},j_{2}\in[d]\)

\[\frac{\mathrm{d}f(X)_{i_{0},i_{0}}\cdot h(X)_{j_{0},i_{0}}}{\mathrm{d}x_{i_{2},j_{2}}}\]
\[= (-f(X)_{i_{0},i_{0}}\cdot(f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{2}}+\langle f(X)_{i_{0}},X^{\top}W_{*,j_{2}}\rangle)\]
\[+f(X)_{i_{0},i_{0}}\cdot\langle W_{j_{2},*}+W_{*,j_{2}},X_{*,i_{0}}\rangle)\cdot h(X)_{j_{0},i_{0}}+f(X)_{i_{0},i_{0}}\cdot v_{j_{2},j_{0}}\]

* **Part 2** _For_ \(i_{0}=i_{1}\neq i_{2}\in[n]\)_,_ \(j_{1},j_{2}\in[d]\)

\[\frac{\mathrm{d}f(X)_{i_{0},i_{0}}\cdot h(X)_{j_{0},i_{0}}}{\mathrm{d}x_{i_{2},j_{2}}}=-f(X)_{i_{0},i_{0}}\cdot f(X)_{i_{0},i_{2}}\cdot w(X)_{i_{0},j_{2}}\cdot h(X)_{j_{0},i_{0}}\]

Proof.: **Proof of Part 1**

\[\frac{\mathrm{d}f(X)_{i_{0},i_{0}}\cdot h(X)_{j_{0},i_{0}}}{\mathrm{d}x_{i_{2},j_{2}}}\]
\[= \frac{\mathrm{d}f(X)_{i_{0},i_{0}}}{\mathrm{d}x_{i_{2},j_{2}}}\cdot h(X)_{j_{0},i_{0}}+f(X)_{i_{0},i_{0}}\cdot\frac{\mathrm{d}h(X)_{j_{0},i_{0}}}{\mathrm{d}x_{i_{2},j_{2}}}\]
\[= \frac{\mathrm{d}f(X)_{i_{0},i_{0}}}{\mathrm{d}x_{i_{2},j_{2}}}\cdot h(X)_{j_{0},i_{0}}+f(X)_{i_{0},i_{0}}\cdot v_{j_{2},j_{0}}\]
\[= (-(\alpha(X)_{i_{0}})^{-1}\cdot f(X)_{i_{0},i_{0}}\cdot(u(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{2}}+\langle u(X)_{i_{0}},X^{\top}W_{*,j_{2}}\rangle)\]
\[+f(X)_{i_{0},i_{0}}\cdot\langle W_{j_{2},*}+W_{*,j_{2}},X_{*,i_{0}}\rangle)\cdot h(X)_{j_{0},i_{0}}+f(X)_{i_{0},i_{0}}\cdot v_{j_{2},j_{0}}\]
\[= (-f(X)_{i_{0},i_{0}}\cdot(f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{2}}+\langle f(X)_{i_{0}},X^{\top}W_{*,j_{2}}\rangle)\]
\[+f(X)_{i_{0},i_{0}}\cdot\langle W_{j_{2},*}+W_{*,j_{2}},X_{*,i_{0}}\rangle)\cdot h(X)_{j_{0},i_{0}}+f(X)_{i_{0},i_{0}}\cdot v_{j_{2},j_{0}}\]

where the first step is by Fact B.2, the 2nd step calls Lemma C.5, the 3rd step uses Lemma C.4, the last step is by the definition of \(f(X)_{i_{0}}\) (see Definition B.6).

**Proof of Part 2**

\[\frac{\mathrm{d}f(X)_{i_{0},i_{0}}\cdot h(X)_{j_{0},i_{0}}}{\mathrm{d}x_{i_{2},j_{2}}}\]
\[= \frac{\mathrm{d}f(X)_{i_{0},i_{0}}}{\mathrm{d}x_{i_{2},j_{2}}}\cdot h(X)_{j_{0},i_{0}}+f(X)_{i_{0},i_{0}}\cdot\frac{\mathrm{d}h(X)_{j_{0},i_{0}}}{\mathrm{d}x_{i_{2},j_{2}}}\]
\[= -(\alpha(X)_{i_{0}})^{-1}\cdot f(X)_{i_{0},i_{0}}\cdot u(X)_{i_{0},i_{2}}\cdot w(X)_{i_{0},j_{2}}\cdot h(X)_{j_{0},i_{0}}\]
\[= -f(X)_{i_{0},i_{0}}\cdot f(X)_{i_{0},i_{2}}\cdot w(X)_{i_{0},j_{2}}\cdot h(X)_{j_{0},i_{0}}\]

where the first step is by Fact B.2, the 2nd step calls Lemma C.5, the 3rd step uses Lemma C.4, the last step is by the definition of \(f(X)_{i_{0}}\) (see Definition B.6).
### Derivative of Scalar Function \(f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{1}}\)

**Lemma C.8**.: _If the following holds:_

* _Let_ \(f(X)_{i_{0}}\) _be defined as Definition_ B.6

_We have_

* **Part 1** _For_ \(i_{0}=i_{1}=i_{2}\in[n]\)_,_ \(j_{1},j_{2}\in[d]\)

\[\frac{\mathrm{d}f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{1}}}{\mathrm{d}x_{i_{2},j_{2}}}\]
\[= (-f(X)_{i_{0},i_{0}}\cdot(f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{2}}+\langle f(X)_{i_{0}},X^{\top}W_{*,j_{2}}\rangle)\]
\[+f(X)_{i_{0},i_{0}}\cdot\langle W_{j_{2},*}+W_{*,j_{2}},X_{*,i_{0}}\rangle)\cdot w(X)_{i_{0},j_{1}}+f(X)_{i_{0},i_{0}}\cdot w_{j_{1},j_{2}}\]

* **Part 2** _For_ \(i_{0}=i_{1}\neq i_{2}\in[n]\)_,_ \(j_{1},j_{2}\in[d]\)

\[\frac{\mathrm{d}f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{1}}}{\mathrm{d}x_{i_{2},j_{2}}}=-f(X)_{i_{0},i_{0}}\cdot f(X)_{i_{0},i_{2}}\cdot w(X)_{i_{0},j_{2}}\cdot w(X)_{i_{0},j_{1}}\]

Proof.: **Proof of Part 1**

\[\frac{\mathrm{d}f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{1}}}{\mathrm{d}x_{i_{2},j_{2}}}\]
\[= \frac{\mathrm{d}f(X)_{i_{0},i_{0}}}{\mathrm{d}x_{i_{2},j_{2}}}\cdot w(X)_{i_{0},j_{1}}+f(X)_{i_{0},i_{0}}\cdot\frac{\mathrm{d}w(X)_{i_{0},j_{1}}}{\mathrm{d}x_{i_{2},j_{2}}}\]
\[= \frac{\mathrm{d}f(X)_{i_{0},i_{0}}}{\mathrm{d}x_{i_{2},j_{2}}}\cdot w(X)_{i_{0},j_{1}}+f(X)_{i_{0},i_{0}}\cdot w_{j_{1},j_{2}}\]
\[= (-(\alpha(X)_{i_{0}})^{-1}\cdot f(X)_{i_{0},i_{0}}\cdot(u(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{2}}+\langle u(X)_{i_{0}},X^{\top}W_{*,j_{2}}\rangle)\]
\[+f(X)_{i_{0},i_{0}}\cdot\langle W_{j_{2},*}+W_{*,j_{2}},X_{*,i_{0}}\rangle)\cdot w(X)_{i_{0},j_{1}}+f(X)_{i_{0},i_{0}}\cdot w_{j_{1},j_{2}}\]
\[= (-f(X)_{i_{0},i_{0}}\cdot(f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{2}}+\langle f(X)_{i_{0}},X^{\top}W_{*,j_{2}}\rangle)\]
\[+f(X)_{i_{0},i_{0}}\cdot\langle W_{j_{2},*}+W_{*,j_{2}},X_{*,i_{0}}\rangle)\cdot w(X)_{i_{0},j_{1}}+f(X)_{i_{0},i_{0}}\cdot w_{j_{1},j_{2}}\]

where the first step is by Fact B.2, the 2nd step calls Lemma C.2, the 3rd step uses Lemma C.4, the last step is by the definition of \(f(X)_{i_{0}}\) (see Definition B.6).

**Proof of Part 2**

\[\frac{\mathrm{d}f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{1}}}{\mathrm{d}x_{i_{2},j_{2}}}\]
\[=\frac{\mathrm{d}f(X)_{i_{0},i_{0}}}{\mathrm{d}x_{i_{2},j_{2}}}\cdot w(X)_{i_{0},j_{1}}+f(X)_{i_{0},i_{0}}\cdot\frac{\mathrm{d}w(X)_{i_{0},j_{1}}}{\mathrm{d}x_{i_{2},j_{2}}}\]
\[=\frac{\mathrm{d}f(X)_{i_{0},i_{0}}}{\mathrm{d}x_{i_{2},j_{2}}}\cdot w(X)_{i_{0},j_{1}}\]
\[=\,-\left(\alpha(X)_{i_{0}}\right)^{-1}\cdot f(X)_{i_{0},i_{0}}\cdot u(X)_{i_{0},i_{2}}\cdot w(X)_{i_{0},j_{2}}\cdot w(X)_{i_{0},j_{1}}\]
\[=\,-f(X)_{i_{0},i_{0}}\cdot f(X)_{i_{0},i_{2}}\cdot w(X)_{i_{0},j_{2}}\cdot w(X)_{i_{0},j_{1}}\]

where the first step is by Fact B.2, the 2nd step calls Lemma C.2, the 3rd step uses Lemma C.4, the last step is by the definition of \(f(X)_{i_{0}}\) (see Definition B.6).
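The following sketch checks Part 1 of Lemma C.8 numerically; in particular it confirms the leading minus sign on the first bracketed factor. Same assumptions as above: \(f(X)_{i_{0}}=\mathrm{softmax}(X^{\top}WX_{*,i_{0}})\) and \(w_{j,k}=W_{j,k}\).

```python
import numpy as np

# Check of Lemma C.8, Part 1, for the product f(X)_{i0,i0} * w(X)_{i0,j1}.
rng = np.random.default_rng(2)
n, d = 4, 5
X = rng.standard_normal((d, n)) * 0.1
W = rng.standard_normal((d, d))

def f_vec(X, i0):
    u = np.exp(X.T @ (W @ X[:, i0]))
    return u / u.sum()

def g(X, i0, j1):                        # f(X)_{i0,i0} * w(X)_{i0,j1}
    return f_vec(X, i0)[i0] * (W[j1, :] @ X[:, i0])

i0, j1, j2 = 0, 1, 3
eps = 1e-6
Xp, Xm = X.copy(), X.copy()
Xp[j2, i0] += eps
Xm[j2, i0] -= eps
fd = (g(Xp, i0, j1) - g(Xm, i0, j1)) / (2 * eps)

fx = f_vec(X, i0)
w1 = W[j1, :] @ X[:, i0]                 # w(X)_{i0,j1}
w2 = W[j2, :] @ X[:, i0]                 # w(X)_{i0,j2}
z2 = fx @ (X.T @ W[:, j2])               # z(X)_{i0,j2}
analytic = ((-fx[i0] * (fx[i0] * w2 + z2)
             + fx[i0] * ((W[j2, :] + W[:, j2]) @ X[:, i0])) * w1
            + fx[i0] * W[j1, j2])
assert abs(fd - analytic) < 1e-6, (fd, analytic)
print("Lemma C.8, Part 1 agrees with finite differences")
```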
### Derivative of Vector Function \(f(X)_{i_{0}}\circ(X^{\top}W_{*,j_{1}})\) **Lemma C.9**.: _If the following holds:_ * _Let_ \(f(X)_{i_{0}}\) _be defined as Definition_ B.6__ _We have_ * **Part 1** _For_ \(i_{0}=i_{1}=i_{2}\in[n]\)_,_ \(j_{1},j_{2}\in[d]\)__ \[\frac{\mathrm{d}f(X)_{i_{0}}\circ(X^{\top}W_{*,j_{1}})}{\mathrm{ d}x_{i_{2},j_{2}}}\] \[=\,(-f(X)_{i_{0}}\cdot(f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{2}}+ \langle f(X)_{i_{0}},X^{\top}W_{*,j_{2}}\rangle)\] \[\quad+f(X)_{i_{0}}\circ(e_{i_{0}}\cdot w(X)_{i_{0},j_{2}}+X^{ \top}W_{*,j_{2}}))\circ(X^{\top}W_{*,j_{1}})+f(X)_{i_{0}}\circ(e_{i_{0}}\cdot w _{j_{2},j_{1}})\] * **Part 2** _For_ \(i_{0}=i_{1}\neq i_{2}\in[n]\)_,_ \(j_{1},j_{2}\in[d]\)__ \[\frac{\mathrm{d}f(X)_{i_{0}}\circ(X^{\top}W_{*,j_{1}})}{\mathrm{ d}x_{i_{2},j_{2}}}\] \[=\,(-f(X)_{i_{0}}\cdot f(X)_{i_{0},i_{2}}\cdot w(X)_{i_{0},j_{2}}\] \[\quad+f(X)_{i_{0}}\circ(e_{i_{2}}\cdot w(X)_{i_{0},j_{2}}))\circ( X^{\top}W_{*,j_{1}})+f(X)_{i_{0}}\circ(e_{i_{2}}\cdot w_{j_{2},j_{1}})\] Proof.: **Proof of Part 1** \[\frac{\mathrm{d}f(X)_{i_{0}}\circ(X^{\top}W_{*,j_{1}})}{\mathrm{ d}x_{i_{2},j_{2}}}\] \[=\,\frac{\mathrm{d}f(X)_{i_{0}}}{\mathrm{d}x_{i_{2},j_{2}}}\circ( X^{\top}W_{*,j_{1}})+f(X)_{i_{0}}\circ\frac{\mathrm{d}X^{\top}W_{*,j_{1}}}{ \mathrm{d}x_{i_{2},j_{2}}}\] \[=\,\frac{\mathrm{d}f(X)_{i_{0}}}{\mathrm{d}x_{i_{2},j_{2}}}\circ( X^{\top}W_{*,j_{1}})+f(X)_{i_{0}}\circ(e_{i_{0}}\cdot w_{j_{2},j_{1}})\] \[=\,(-(\alpha(X)_{i_{0}})^{-1}\cdot f(X)_{i_{0}}\cdot(u(X)_{i_{0},i _{0}}\cdot w(X)_{i_{0},j_{2}}+\langle u(X)_{i_{0}},X^{\top}W_{*,j_{2}}\rangle)\] \[\quad+f(X)_{i_{0}}\circ(e_{i_{0}}\cdot w(X)_{i_{0},j_{2}}+X^{ \top}W_{*,j_{2}}))\circ(X^{\top}W_{*,j_{1}})+f(X)_{i_{0}}\circ(e_{i_{0}}\cdot w _{j_{2},j_{1}})\] \[=\,(-f(X)_{i_{0}}\cdot(f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{2}} +\langle f(X)_{i_{0}},X^{\top}W_{*,j_{2}}\rangle)\] \[+f(X)_{i_{0}}\circ(e_{i_{0}}\cdot w(X)_{i_{0},j_{2}}+X^{\top}W_{*,j_{2}}))\circ(X^{ \top}W_{*,j_{1}})+f(X)_{i_{0}}\circ(e_{i_{0}}\cdot w_{j_{2},j_{1}})\] where the 1st step is by Fact B.2, the 2nd step uses Lemma C.3, the 3rd step uses Lemma B.14, the last step is by the definition of \(f(X)_{i_{0}}\) (see Definition B.6). **Proof of Part 2** \[\frac{\mathrm{d}f(X)_{i_{0}}\circ(X^{\top}W_{*,j_{1}})}{\mathrm{d}x _{i_{2},j_{2}}}\] \[= \frac{\mathrm{d}f(X)_{i_{0}}}{\mathrm{d}x_{i_{2},j_{2}}}\circ(X^{ \top}W_{*,j_{1}})+f(X)_{i_{0}}\circ\frac{\mathrm{d}X^{\top}W_{*,j_{1}}}{\mathrm{ d}x_{i_{2},j_{2}}}\] \[= \frac{\mathrm{d}f(X)_{i_{0}}}{\mathrm{d}x_{i_{2},j_{2}}}\circ(X^{ \top}W_{*,j_{1}})+f(X)_{i_{0}}\circ(e_{i_{2}}\cdot w_{j_{2},j_{1}})\] \[= -\left((\alpha(X)_{i_{0}})^{-1}\cdot f(X)_{i_{0}}\cdot u(X)_{i_{0 },i_{2}}\cdot w(X)_{i_{0},j_{2}}\right.\] \[+f(X)_{i_{0}}\circ(e_{i_{2}}\cdot w(X)_{i_{0},j_{2}}))\circ(X^{ \top}W_{*,j_{1}})+f(X)_{i_{0}}\circ(e_{i_{2}}\cdot w_{j_{2},j_{1}})\] \[= (-f(X)_{i_{0}}\cdot f(X)_{i_{0},i_{2}}\cdot w(X)_{i_{0},j_{2}})\] \[+f(X)_{i_{0}}\circ(e_{i_{2}}\cdot w(X)_{i_{0},j_{2}}))\circ(X^{ \top}W_{*,j_{1}})+f(X)_{i_{0}}\circ(e_{i_{2}}\cdot w_{j_{2},j_{1}})\] where the 1st step is by Fact B.2, the 2nd step uses Lemma C.3, the 3rd step uses Lemma B.14, the last step is by the definition of \(f(X)_{i_{0}}\) (see Definition B.6). 
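Lemma C.9 is vector-valued, so the check below compares all \(n\) entries of the claimed derivative of \(f(X)_{i_{0}}\circ(X^{\top}W_{*,j_{1}})\) in Part 2 (the case \(i_{0}=i_{1}\neq i_{2}\)) against finite differences, under the same assumed form of \(f\).

```python
import numpy as np

# Entrywise check of Lemma C.9, Part 2 (i0 = i1 != i2).
rng = np.random.default_rng(3)
n, d = 5, 4
X = rng.standard_normal((d, n)) * 0.1
W = rng.standard_normal((d, d))

def f_vec(X, i0):
    u = np.exp(X.T @ (W @ X[:, i0]))
    return u / u.sum()

def g_vec(X, i0, j1):                    # f(X)_{i0} o (X^T W_{*,j1}) in R^n
    return f_vec(X, i0) * (X.T @ W[:, j1])

i0, i2, j1, j2 = 1, 3, 0, 2
eps = 1e-6
Xp, Xm = X.copy(), X.copy()
Xp[j2, i2] += eps
Xm[j2, i2] -= eps
fd = (g_vec(Xp, i0, j1) - g_vec(Xm, i0, j1)) / (2 * eps)

fx = f_vec(X, i0)
w2 = W[j2, :] @ X[:, i0]                 # w(X)_{i0,j2}
e_i2 = np.eye(n)[i2]
analytic = ((-fx * fx[i2] * w2 + fx * (e_i2 * w2)) * (X.T @ W[:, j1])
            + fx * (e_i2 * W[j2, j1]))
assert np.allclose(fd, analytic, atol=1e-6)
print("Lemma C.9, Part 2 agrees with finite differences")
```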
### Derivative of \(C_{1}(x)\)

**Lemma C.10**.: _If the following holds:_

* _Let_ \(C_{1}(X)\in\mathbb{R}\) _be defined as in Lemma_ B.16
* _Let_ \(z(X)_{i_{0},j_{1}}:=\langle f(X)_{i_{0}},X^{\top}W_{*,j_{1}}\rangle\)
* _Let_ \(w(X)_{i_{0},j_{1}}=\langle W_{j_{1},*},X_{*,i_{0}}\rangle\)

_We have_

* **Part 1** _For_ \(i_{0}=i_{1}=i_{2}\in[n]\)_,_ \(j_{1},j_{2}\in[d]\)

\[\frac{\mathrm{d}C_{1}(X)}{\mathrm{d}x_{i_{2},j_{2}}}\]

\begin{table} \begin{tabular}{|l|l|l|l|} \hline **ID** & **Term** & **Symmetric?** & **Table Name** \\ \hline 1 & \(+2s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{0}}^{2}\cdot w(X)_{i_{0},j_{1}}\cdot w(X)_{i_{0},j_{2}}\) & Yes & N/A \\ \hline 2 & \(-f(X)_{i_{0},i_{0}}^{2}\cdot h(X)_{j_{0},i_{0}}\cdot w(X)_{i_{0},j_{2}}\cdot w(X)_{i_{0},j_{1}}\) & Yes & N/A \\ \hline 3 & \(-f(X)_{i_{0},i_{0}}\cdot\langle f(X)_{i_{0}}\circ(X^{\top}W_{*,j_{2}}),h(X)_{j_{0}}\rangle\cdot w(X)_{i_{0},j_{1}}\) & No & Table 4: 1 \\ \hline 4 & \(-f(X)_{i_{0},i_{0}}^{2}\cdot v_{j_{2},j_{0}}\cdot w(X)_{i_{0},j_{1}}\) & No & Table 5: 1 \\ \hline 5 & \(-s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{2}}\cdot w(X)_{i_{0},j_{1}}\) & Yes & N/A \\ \hline 6 & \(-s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{0}}\cdot\langle W_{*,j_{2}},X_{*,i_{0}}\rangle\cdot w(X)_{i_{0},j_{1}}\) & No & Table 2: 7 \\ \hline 7 & \(-s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{0}}\cdot w_{j_{1},j_{2}}\) & No & Table 2: 9 \\ \hline 8 & \(2f(X)_{i_{0},i_{0}}\cdot s(X)_{i_{0},j_{0}}\cdot z(X)_{i_{0},j_{2}}\cdot w(X)_{i_{0},j_{1}}\) & No & Table 2: 1 \\ \hline \end{tabular} \end{table} Table 1: \(C_{1}\) Part 1 Summary

\[= +2s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{0}}^{2}\cdot w(X)_{i_{0},j_{2}}\cdot w(X)_{i_{0},j_{1}}\]
\[+2f(X)_{i_{0},i_{0}}\cdot s(X)_{i_{0},j_{0}}\cdot z(X)_{i_{0},j_{2}}\cdot w(X)_{i_{0},j_{1}}\]
\[-f(X)_{i_{0},i_{0}}^{2}\cdot h(X)_{j_{0},i_{0}}\cdot w(X)_{i_{0},j_{2}}\cdot w(X)_{i_{0},j_{1}}\]
\[-f(X)_{i_{0},i_{0}}\cdot\langle f(X)_{i_{0}}\circ(X^{\top}W_{*,j_{2}}),h(X)_{j_{0}}\rangle\cdot w(X)_{i_{0},j_{1}}\]
\[-f(X)_{i_{0},i_{0}}^{2}\cdot v_{j_{2},j_{0}}\cdot w(X)_{i_{0},j_{1}}\]
\[-s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{2}}\cdot w(X)_{i_{0},j_{1}}\]
\[-s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{0}}\cdot\langle W_{*,j_{2}},X_{*,i_{0}}\rangle\cdot w(X)_{i_{0},j_{1}}\]
\[-s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{0}}\cdot w_{j_{1},j_{2}}\]

* **Part 2** _For_ \(i_{0}=i_{1}\neq i_{2}\in[n]\)_,_ \(j_{1},j_{2}\in[d]\)

\[\frac{\mathrm{d}C_{1}(X)}{\mathrm{d}x_{i_{2},j_{2}}}\]
\[= s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{2}}\cdot w(X)_{i_{0},j_{2}}\cdot f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{1}}\]
\[-f(X)_{i_{0},i_{2}}\cdot h(X)_{j_{0},i_{2}}\cdot w(X)_{i_{0},j_{2}}\cdot f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{1}}\]
\[-f(X)_{i_{0},i_{2}}\cdot v_{j_{2},j_{0}}\cdot f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{1}}\]
\[+s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{0}}\cdot f(X)_{i_{0},i_{2}}\cdot w(X)_{i_{0},j_{2}}\cdot w(X)_{i_{0},j_{1}}\]

Proof.: **Proof of Part 1**

\[\frac{\mathrm{d}C_{1}(X)}{\mathrm{d}x_{i_{2},j_{2}}}\]
\[= \frac{\mathrm{d}-s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{1}}}{\mathrm{d}x_{i_{2},j_{2}}}\]
\[= -\frac{\mathrm{d}s(X)_{i_{0},j_{0}}}{\mathrm{d}x_{i_{2},j_{2}}}\cdot f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{1}}\]
\[-s(X)_{i_{0},j_{0}}\cdot\frac{\mathrm{d}f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{1}}}{\mathrm{d}x_{i_{2},j_{2}}}\]
\[= -\frac{\mathrm{d}s(X)_{i_{0},j_{0}}}{\mathrm{d}x_{i_{2},j_{2}}}\cdot f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{1}}\]
\[-s(X)_{i_{0},j_{0}}\cdot((-(\alpha(X)_{i_{0}})^{-1}\cdot f(X)_{i_{0},i_{0}}\cdot(u(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{2}}+\langle u(X)_{i_{0}},X^{\top}W_{*,j_{2}}\rangle)\]
\[+f(X)_{i_{0},i_{0}}\cdot\langle W_{j_{2},*}+W_{*,j_{2}},X_{*,i_{0}}\rangle)\cdot w(X)_{i_{0},j_{1}}+f(X)_{i_{0},i_{0}}\cdot w_{j_{1},j_{2}})\]
\[= -(-s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{2}}-s(X)_{i_{0},j_{0}}\cdot\langle f(X)_{i_{0}},X^{\top}W_{*,j_{2}}\rangle\]
\[+f(X)_{i_{0},i_{0}}\cdot h(X)_{j_{0},i_{0}}\cdot w(X)_{i_{0},j_{2}}\]
\[+\langle f(X)_{i_{0}}\circ(X^{\top}W_{*,j_{2}}),h(X)_{j_{0}}\rangle+f(X)_{i_{0},i_{2}}\cdot v_{j_{2},j_{0}})\cdot f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{1}}\]
\[-s(X)_{i_{0},j_{0}}\cdot((-f(X)_{i_{0},i_{0}}\cdot(f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{2}}+\langle f(X)_{i_{0}},X^{\top}W_{*,j_{2}}\rangle)\]
\[+f(X)_{i_{0},i_{0}}\cdot\langle W_{j_{2},*}+W_{*,j_{2}},X_{*,i_{0}}\rangle)\cdot w(X)_{i_{0},j_{1}}+f(X)_{i_{0},i_{0}}\cdot w_{j_{1},j_{2}})\]
\[= 2s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{0}}^{2}\cdot w(X)_{i_{0},j_{2}}\cdot w(X)_{i_{0},j_{1}}\]
\[+2s(X)_{i_{0},j_{0}}\cdot z(X)_{i_{0},j_{2}}\cdot f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{1}}\]
\[-f(X)_{i_{0},i_{0}}^{2}\cdot h(X)_{j_{0},i_{0}}\cdot w(X)_{i_{0},j_{2}}\cdot w(X)_{i_{0},j_{1}}\]
\[-f(X)_{i_{0},i_{0}}\cdot\langle f(X)_{i_{0}}\circ(X^{\top}W_{*,j_{2}}),h(X)_{j_{0}}\rangle\cdot w(X)_{i_{0},j_{1}}\]
\[-f(X)_{i_{0},i_{0}}^{2}\cdot v_{j_{2},j_{0}}\cdot w(X)_{i_{0},j_{1}}\]
\[-s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{0}}\cdot\langle W_{j_{2},*}+W_{*,j_{2}},X_{*,i_{0}}\rangle\cdot w(X)_{i_{0},j_{1}}\]
\[-s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{0}}\cdot w_{j_{1},j_{2}}\]

where the first step is by definition of \(C_{1}(X)\) (see Lemma B.16), the 2nd step is by Fact B.2, the 3rd step is by Lemma C.8, the 4th step is because of Lemma B.16, the 5th step is a rearrangement.
**Proof of Part 2** \[\frac{\mathrm{d}C_{1}(X)}{\mathrm{d}x_{i_{2},j_{2}}}\] \[= \frac{\mathrm{d}-s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{0}}\cdot w (X)_{i_{0},j_{1}}}{\mathrm{d}x_{i_{2},j_{2}}}\] \[= -\frac{\mathrm{d}s(X)_{i_{0},j_{0}}}{\mathrm{d}x_{i_{2},j_{2}}} \cdot f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{1}}\] \[-s(X)_{i_{0},j_{0}}\cdot\frac{\mathrm{d}f(X)_{i_{0},i_{0}}\cdot w (X)_{i_{0},j_{1}}}{\mathrm{d}x_{i_{2},j_{2}}}\] \[= -\frac{\mathrm{d}s(X)_{i_{0},j_{0}}}{\mathrm{d}x_{i_{2},j_{2}}} \cdot f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{1}}\] \[+s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{0}}\cdot f(X)_{i_{0},i_{2} }\cdot w(X)_{i_{0},j_{2}}\cdot w(X)_{i_{0},j_{1}}\] \[= -(-s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{2}}\cdot w(X)_{i_{0},j_ {2}}+f(X)_{i_{0},i_{2}}\cdot h(X)_{j_{0},i_{2}}\cdot w(X)_{i_{0},j_{2}}\] \[+f(X)_{i_{0},i_{2}}\cdot v_{j_{2},j_{0}})\cdot f(X)_{i_{0},i_{0}} \cdot w(X)_{i_{0},j_{1}}\] \[+s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{0}}\cdot f(X)_{i_{0},i_{2 }}\cdot w(X)_{i_{0},j_{2}}\cdot w(X)_{i_{0},j_{1}}\] \[= s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{2}}\cdot w(X)_{i_{0},j_{2 }}\cdot f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{1}}\] \[-f(X)_{i_{0},i_{2}}\cdot h(X)_{j_{0},i_{2}}\cdot w(X)_{i_{0},j_{2 }}\cdot f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{1}}\] \[-f(X)_{i_{0},i_{2}}\cdot v_{j_{2},j_{0}}\cdot f(X)_{i_{0},i_{0}} \cdot w(X)_{i_{0},j_{1}}\] \[+s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{0}}\cdot f(X)_{i_{0},i_{2 }}\cdot w(X)_{i_{0},j_{2}}\cdot w(X)_{i_{0},j_{1}}\] where the first step is by definition of \(C_{1}(X)\) (see Lemma B.16), the 2nd step is by Fact B.2, the 3rd step is by Lemma C.8, the 4th step is because Lemma B.16, the 5th step is a rearrangement. ### Derivative of \(C_{2}(x)\) **Lemma C.11**.: _If the following holds:_ * _Let_ \(C_{2}(X)\) _be defined as in Lemma_ B.16__ * _We define_ \(z(X)_{i_{0},j_{1}}:=\langle f(X)_{i_{0}},X^{\top}W_{*,j_{1}}\rangle\)_._ _We have_ * **Part 1** _For_ \(i_{0}=i_{1}=i_{2}\in[n]\)_,_ \(j_{1},j_{2}\in[d]\)__ \[\frac{\mathrm{d}C_{2}(X)}{\mathrm{d}x_{i_{2},j_{2}}}\] \[= +2s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_ {2}}\cdot z(X)_{i_{0},j_{1}}\] \[+s(X)_{i_{0},j_{0}}\cdot z(X)_{i_{0},j_{2}}\cdot z(X)_{i_{0},j_{1}}\] \[-f(X)_{i_{0},i_{0}}\cdot h(X)_{j_{0},i_{0}}\cdot w(X)_{i_{0},j_{2}} \cdot z(X)_{i_{0},j_{1}}\] \[-\langle f(X)_{i_{0}}\circ(X^{\top}W_{*,j_{2}}),h(X)_{j_{0}} \rangle\cdot z(X)_{i_{0},j_{1}}\] \[-f(X)_{i_{0},i_{0}}\cdot v_{j_{2},j_{0}}\cdot z(X)_{i_{0},j_{1}}\] \[+s(X)_{i_{0},j_{0}}\cdot z(X)_{i_{0},j_{1}}\cdot f(X)_{i_{0},i_{0 }}\cdot z(X)_{i_{0},j_{2}}\] \[-s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{0}}\cdot\langle W_{*,j_{1 }},X_{*,i_{0}}\rangle\cdot w(X)_{i_{0},j_{2}}\] \[-s(X)_{i_{0},j_{0}}\cdot\langle f(X)_{i_{0}}\circ(X^{\top}W_{*,j_ {2}}),X^{\top}W_{*,j_{1}}\rangle\] \[-s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{0}}\cdot w_{j_{2},j_{1}}\] * **Part 2**_For_ \(i_{0}=i_{1}\neq i_{2}\in[n]\)_,_ \(j_{1},j_{2}\in[d]\)__ \[\frac{\mathrm{d}C_{2}(X)}{\mathrm{d}x_{i_{2},j_{2}}}\] \[= +s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{2}}\cdot w(X)_{i_{0},j_{2 }}\cdot z(X)_{i_{0},j_{1}}\] \[-f(X)_{i_{0},i_{2}}\cdot h(X)_{j_{0},i_{2}}\cdot w(X)_{i_{0},j_{2 }}\cdot z(X)_{i_{0},j_{1}}\] \[-f(X)_{i_{0},i_{2}}\cdot v_{j_{2},j_{0}}\cdot z(X)_{i_{0},j_{1}}\] \[+s(X)_{i_{0},j_{0}}\cdot(f(X)_{i_{0}},X^{\top}W_{*,j_{1}})\cdot f (X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{2}}\] \[-s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{0}}\cdot\langle W_{*,j_{1 }},X_{*,i_{0}}\rangle\cdot w(X)_{i_{0},j_{2}}\] \[-s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{0}}\cdot 
w_{j_{2},j_{1}}\] Proof.: **Proof of Part 1** \[\frac{\mathrm{d}-C_{2}(X)}{\mathrm{d}x_{i_{2},j_{2}}}\] \[= \frac{\mathrm{d}s(X)_{i_{0},j_{0}}\cdot z(X)_{i_{0},j_{1}}}{ \mathrm{d}x_{i_{2},j_{2}}}\] \[= \frac{\mathrm{d}s(X)_{i_{0},j_{0}}}{\mathrm{d}x_{i_{2},j_{2}}}\cdot z (X)_{i_{0},j_{1}}+s(X)_{i_{0},j_{0}}\cdot\frac{\mathrm{d}z(X)_{i_{0},j_{1}}}{ \mathrm{d}x_{i_{2},j_{2}}}\] \[= \frac{\mathrm{d}s(X)_{i_{0},j_{0}}}{\mathrm{d}x_{i_{2},j_{2}}} \cdot z(X)_{i_{0},j_{1}}\] \begin{table} \begin{tabular}{|l|l|l|l|} \hline **ID** & **Term** & **Symmetric Terms** & **Table Name** \\ \hline 1 & \(2s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{2}}\cdot z(X)_{i_ {0},j_{1}}\) & No & Table 1: 9 \\ \hline 2 & \(s(X)_{i_{0},j_{0}}\cdot z(X)_{i_{0},j_{2}}\cdot z(X)_{i_{0},j_{1}}\) & Yes & N/A \\ \hline 3 & \(-f(X)_{i_{0},i_{0}}\cdot h(X)_{j_{0},i_{0}}\cdot w(X)_{i_{0},j_{2}}\cdot z(X)_{i_ {0},j_{1}}\) & No & Table 3: 3 \\ \hline 4 & \(-\langle f(X)_{i_{0}}\circ(X^{\top}W_{*,j_{2}}),h(X)_{j_{0}}\rangle\cdot z(X)_{i _{0},j_{1}}\) & No & Table 4: 2 \\ \hline 5 & \(-f(X)_{i_{0},i_{0}}\cdot v_{j_{2},j_{0}}\cdot z(X)_{i_{0},j_{1}}\) & No & Table 5: 2 \\ \hline 6 & \(+s(X)_{i_{0},j_{0}}\cdot z(X)_{i_{0},j_{1}}\cdot f(X)_{i_{0},i_{0}}\cdot z(X)_{i _{0},j_{2}}\) & Yes & N/A \\ \hline 7 & \(-s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{0}}\cdot\langle W_{*,j_{1}},X_{*,i_{0}} \rangle\cdot w(X)_{i_{0},j_{2}}\) & No & Table 1: 6 \\ \hline 8 & \(-s(X)_{i_{0},j_{0}}\cdot\langle f(X)_{i_{0}}\circ(X^{\top}W_{*,j_{2}}),X^{ \top}W_{*,j_{1}}\rangle\) & Yes & N/A \\ \hline 9 & \(-s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{0}}\cdot w_{j_{2},j_{1}}\) & No & Table 1: 7 \\ \hline \end{tabular} \end{table} Table 2: \(C_{2}\) Part 1 Summary \[+s(X)_{i_{0},j_{0}}\cdot(\langle-(\alpha(X)_{i_{0}})^{-1}\cdot f(X)_{i_{0}} \cdot(u(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{2}}+\langle u(X)_{i_{0}},X^{\top}W_{ *,j_{2}}\rangle)\] \[+f(X)_{i_{0}}\circ(e_{i_{0}}\cdot w(X)_{i_{0},j_{2}}+X^{\top}W_{*, j_{2}}),X^{\top}W_{*,j_{1}})+f(X)_{i_{0},i_{0}}\cdot w_{j_{2},j_{1}})\] \[= (-s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{2 }}-s(X)_{i_{0},j_{0}}\cdot\langle f(X)_{i_{0}},X^{\top}W_{*,j_{2}}\rangle\] \[+f(X)_{i_{0},i_{0}}\cdot h(X)_{j_{0},i_{0}}\cdot w(X)_{i_{0},j_{2 }}\] \[+\langle f(X)_{i_{0}}\circ(X^{\top}W_{*,j_{2}}),h(X)_{j_{0}} \rangle+f(X)_{i_{0},i_{2}}\cdot v_{j_{2},j_{0}})\cdot z(X)_{i_{0},j_{1}}\] \[+s(X)_{i_{0},j_{0}}\cdot(\langle-(\alpha(X)_{i_{0}})^{-1}\cdot f (X)_{i_{0}}\cdot(u(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{2}}+\langle u(X)_{i_{0} },X^{\top}W_{*,j_{2}}\rangle)\] \[+f(X)_{i_{0}}\circ(e_{i_{0}}\cdot w(X)_{i_{0},j_{2}}+X^{\top}W_{ *,j_{2}}),X^{\top}W_{*,j_{1}})+f(X)_{i_{0},i_{0}}\cdot w_{j_{2},j_{1}})\] \[= -s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{2 }}\cdot z(X)_{i_{0},j_{1}}\] \[-s(X)_{i_{0},j_{0}}\cdot z(X)_{i_{0},j_{2}}\cdot z(X)_{i_{0},j_{1}}\] \[+f(X)_{i_{0},i_{0}}\cdot h(X)_{j_{0},i_{0}}\cdot w(X)_{i_{0},j_{2 }}\cdot z(X)_{i_{0},j_{1}}\] \[+\langle f(X)_{i_{0}}\circ(X^{\top}W_{*,j_{2}}),h(X)_{j_{0}} \rangle\cdot z(X)_{i_{0},j_{1}}\] \[+f(X)_{i_{0},i_{2}}\cdot v_{j_{2},j_{0}}\cdot z(X)_{i_{0},j_{1}}\] \[-s(X)_{i_{0},j_{0}}\cdot\langle f(X)_{i_{0}},X^{\top}W_{*,j_{1}} \rangle\cdot f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{2}}\] \[-s(X)_{i_{0},j_{0}}\cdot\langle f(X)_{i_{0}},X^{\top}W_{*,j_{1}} \rangle\cdot f(X)_{i_{0},i_{0}}\cdot\langle f(X)_{i_{0}},X^{\top}W_{*,j_{2}}\rangle\] \[+s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{0}}\cdot\langle W_{*,j_{1 }},X_{*,i_{0}}\rangle\cdot 
w(X)_{i_{0},j_{2}}\] \[+s(X)_{i_{0},j_{0}}\cdot\langle f(X)_{i_{0}}\circ(X^{\top}W_{*,j _{2}}),X^{\top}W_{*,j_{1}}\rangle\] \[+s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{0}}\cdot w_{j_{2},j_{1}}\] where the first step is by definition of \(C_{2}(X)\) (see Lemma B.16), the 2nd step is by Fact B.2, the 3rd step is by Lemma C.6, the 4th step is because Lemma B.16, the 5th step is a rearrangement. **Proof of Part 2** \[\frac{\mathrm{d}-C_{2}(X)}{\mathrm{d}x_{i_{2},j_{2}}}\] \[= \frac{\mathrm{d}s(X)_{i_{0},j_{0}}}{\mathrm{d}x_{i_{2},j_{2}}}\cdot \langle f(X)_{i_{0}},X^{\top}W_{*,j_{1}}\rangle\] \[= \frac{\mathrm{d}s(X)_{i_{0},j_{0}}}{\mathrm{d}x_{i_{2},j_{2}}}\cdot z (X)_{i_{0},j_{1}}+s(X)_{i_{0},j_{0}}\cdot\frac{\mathrm{d}\langle f(X)_{i_{0}}, X^{\top}W_{*,j_{1}}\rangle}{\mathrm{d}x_{i_{2},j_{2}}}\] \[= \frac{\mathrm{d}s(X)_{i_{0},j_{0}}}{\mathrm{d}x_{i_{2},j_{2}}}\cdot z (X)_{i_{0},j_{1}}\] \[+s(X)_{i_{0},j_{0}}\cdot(\langle-(\alpha(X)_{i_{0}})^{-1}\cdot f(X) _{i_{0}}\cdot u(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{2}}\] \[+f(X)_{i_{0}}\circ(e_{i_{0}}\cdot w(X)_{i_{0},j_{2}}),X^{\top}W_{*, j_{1}}\rangle+f(X)_{i_{0},i_{0}}\cdot w_{j_{2},j_{1}})\] \[= (-s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{2}}\cdot w(X)_{i_{0},j_{2 }}+f(X)_{i_{0},i_{2}}\cdot h(X)_{j_{0},i_{2}}\cdot w(X)_{i_{0},j_{2}}\] \[+f(X)_{i_{0},i_{2}}\cdot v_{j_{2},j_{0}})\cdot z(X)_{i_{0},j_{1}}\] \[+s(X)_{i_{0},j_{0}}\cdot(\langle-(\alpha(X)_{i_{0}})^{-1}\cdot f(X) _{i_{0}}\cdot u(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{2}}\] \[+f(X)_{i_{0}}\circ(e_{i_{0}}\cdot w(X)_{i_{0},j_{2}}),X^{\top}W_{*, j_{1}}\rangle+f(X)_{i_{0},i_{0}}\cdot w_{j_{2},j_{1}})\] \[= -s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{2}}\cdot w(X)_{i_{0},j_{2 }}\cdot z(X)_{i_{0},j_{1}}\] \[+f(X)_{i_{0},i_{2}}\cdot h(X)_{j_{0},i_{2}}\cdot w(X)_{i_{0},j_{2 }}\cdot z(X)_{i_{0},j_{1}}\] \[+f(X)_{i_{0},i_{2}}\cdot v_{j_{2},j_{0}}\cdot z(X)_{i_{0},j_{1}}\] \[-s(X)_{i_{0},j_{0}}\cdot\langle f(X)_{i_{0}},X^{\top}W_{*,j_{1}} \rangle\cdot f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{2}}\] \[+s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{0}}\cdot\langle W_{*,j_{1}},X_{*,i_{0}}\rangle\cdot w(X)_{i_{0},j_{2}}\] \[+s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{0}}\cdot w_{j_{2},j_{1}}\] where the first step is by definition of \(C_{2}(X)\) (see Lemma B.16), the 2nd step is by Fact B.2, the 3rd step is by Lemma C.6, the 4th step is because Lemma B.16, the 5th step is a rearrangement. 
### Derivative of \(C_{3}(x)\) **Lemma C.12**.: _If the following holds:_ * _Let_ \(C_{3}(X)\) _be defined as in Lemma_ B.16__ _We have_ * **Part 1** _For_ \(i_{0}=i_{1}=i_{2}\in[n]\)_,_ \(j_{1},j_{2}\in[d]\)__ \[\frac{\mathrm{d}C_{3}(X)}{\mathrm{d}x_{i_{2},j_{2}}}\] \[= -f(X)_{i_{0},i_{0}}^{2}\cdot h(X)_{j_{0},i_{0}}\cdot w(X)_{i_{0},j _{2}}\cdot w(X)_{i_{0},j_{1}}\] \[-f(X)_{i_{0},i_{0}}\cdot z(X)_{i_{0},j_{2}}\cdot h(X)_{j_{0},i_{0 }}\cdot w(X)_{i_{0},j_{1}}\] \[+f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{2}}\cdot h(X)_{j_{0},i_{0 }}\cdot w(X)_{i_{0},j_{1}}\] \[+f(X)_{i_{0},i_{0}}\cdot\left\langle W_{*,j_{2}},X_{*,i_{0}} \right\rangle\cdot h(X)_{j_{0},i_{0}}\cdot w(X)_{i_{0},j_{1}}\] \[+f(X)_{i_{0},i_{0}}\cdot v_{j_{2},j_{0}}\cdot w(X)_{i_{0},j_{1}}\] \[+f(X)_{i_{0},i_{0}}\cdot h(X)_{i_{0},i_{0}}\cdot w_{j_{1},j_{2}}\] * **Part 2** _For_ \(i_{0}=i_{1}\neq i_{2}\in[n]\)_,_ \(j_{1},j_{2}\in[d]\)__ \[\frac{\mathrm{d}C_{3}(X)}{\mathrm{d}x_{i_{2},j_{2}}}\] \[= -f(X)_{i_{0},i_{0}}\cdot f(X)_{i_{0},i_{2}}\cdot w(X)_{i_{0},j_{2 }}\cdot h(X)_{j_{0},i_{0}}\cdot w(X)_{i_{0},j_{1}}\] Proof.: **Proof of Part 1** \[\frac{\mathrm{d}C_{3}(X)}{\mathrm{d}x_{i_{2},j_{2}}}\] \[= \frac{\mathrm{d}f(X)_{i_{0},i_{0}}\cdot h(X)_{i_{0},i_{0}}\cdot w (X)_{i_{0},j_{1}}}{\mathrm{d}x_{i_{2},j_{2}}}\] \begin{table} \begin{tabular}{|l|l|l|l|} \hline ID & Term & Symmetric Terms & Table Name \\ \hline 1 & \(-f(X)_{i_{0},i_{0}}^{2}\cdot h(X)_{j_{0},i_{0}}\cdot w(X)_{i_{0},j_{2}}\cdot w( X)_{i_{0},j_{1}}\) & Yes & N/A \\ \hline 2 & \(f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{2}}\cdot h(X)_{j_{0},i_{0}}\cdot w(X)_{i_ {0},j_{1}}\) & Yes & N/A \\ \hline 3 & \(-f(X)_{i_{0},i_{0}}\cdot z(X)_{i_{0},j_{2}}\cdot h(X)_{j_{0},i_{0}}\cdot w(X)_{i _{0},j_{1}}\) & No & Table 2: 3 \\ \hline 4 & \(f(X)_{i_{0},i_{0}}\cdot\left\langle W_{*,j_{2}},X_{*,i_{0}}\right\rangle\cdot h (X)_{j_{0},i_{0}}\cdot w(X)_{i_{0},j_{1}}\) & No & Table 4: 3 \\ \hline 5 & \(f(X)_{i_{0},i_{0}}\cdot v_{j_{2},j_{0}}\cdot w(X)_{i_{0},j_{1}}\) & No & Table 5: 3 \\ \hline 6 & \(f(X)_{i_{0},i_{0}}\cdot h(X)_{i_{0},i_{0}}\cdot w_{j_{1},j_{2}}\) & No & Table 4: 5 \\ \hline \end{tabular} \end{table} Table 3: \(C_{3}\) Part 1 Summary \[= \frac{\mathrm{d}f(X)_{i_{0},i_{0}}\cdot h(X)_{i_{0},i_{0}}}{ \mathrm{d}x_{i_{2},j_{2}}}\cdot w(X)_{i_{0},j_{1}}+f(X)_{i_{0},i_{0}}\cdot h(X)_{ i_{0},i_{0}}\cdot w(X)_{i_{0},j_{1}}\] \[= \frac{\mathrm{d}f(X)_{i_{0},i_{0}}\cdot h(X)_{i_{0},i_{0}}}{ \mathrm{d}x_{i_{2},j_{2}}}\cdot w(X)_{i_{0},j_{1}}+f(X)_{i_{0},i_{0}}\cdot h(X) _{i_{0},i_{0}}\cdot w_{j_{1},j_{2}}\] \[= ((-f(X)_{i_{0},i_{0}}\cdot(f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{ 2}}+\langle f(X)_{i_{0}},X^{\top}W_{*,j_{2}}\rangle)\] \[+f(X)_{i_{0},i_{0}}\cdot\langle W_{j_{2},*}+W_{*,j_{2}},X_{*,i_{0} }\rangle)\cdot h(X)_{j_{0},i_{0}}+f(X)_{i_{0},i_{0}}\cdot v_{j_{2},j_{0}}) \cdot w(X)_{i_{0},j_{1}}\] \[+f(X)_{i_{0},i_{0}}\cdot h(X)_{i_{0},i_{0}}\cdot w_{j_{1},j_{2}}\] \[= -f(X)_{i_{0},i_{0}}^{2}\cdot h(X)_{j_{0},i_{0}}\cdot w(X)_{i_{0}, j_{2}}\cdot w(X)_{i_{0},j_{1}}\] \[-f(X)_{i_{0},i_{0}}\cdot Z(X)_{i_{0},j_{2}}\cdot h(X)_{j_{0},i_{0 }}\cdot w(X)_{i_{0},j_{1}}\] \[+f(X)_{i_{0},i_{0}}\cdot\langle W_{j_{2},*}+W_{*,j_{2}},X_{*,i_{0 }}\rangle\cdot h(X)_{j_{0},i_{0}}\cdot w(X)_{i_{0},j_{1}}\] \[+f(X)_{i_{0},i_{0}}\cdot v_{j_{2},j_{0}}\cdot w(X)_{i_{0},j_{1}}\] \[+f(X)_{i_{0},i_{0}}\cdot h(X)_{i_{0},i_{0}}\cdot w_{j_{1},j_{2}}\] where the first step is by definition of \(C_{3}(X)\) (see Lemma B.16), the 2nd step is by Fact B.2, the 3rd step is by Lemma C.2, the 4th step is because Lemma C.7, 
the 5th step is a rearrangement. **Proof of Part 2** \[\frac{\mathrm{d}C_{3}(X)}{\mathrm{d}x_{i_{2},j_{2}}}\] \[= \frac{\mathrm{d}f(X)_{i_{0},i_{0}}\cdot h(X)_{i_{0},i_{0}}\cdot w (X)_{i_{0},j_{1}}}{\mathrm{d}x_{i_{2},j_{2}}}\] \[= \frac{\mathrm{d}f(X)_{i_{0},i_{0}}\cdot h(X)_{i_{0},i_{0}}}{ \mathrm{d}x_{i_{2},j_{2}}}\cdot w(X)_{i_{0},j_{1}}+f(X)_{i_{0},i_{0}}\cdot h(X )_{i_{0},i_{0}}\cdot\frac{\mathrm{d}w(X)_{i_{0},j_{1}}}{\mathrm{d}x_{i_{2},j_ {2}}}\] \[= \frac{\mathrm{d}f(X)_{i_{0},i_{0}}\cdot h(X)_{i_{0},i_{0}}}{ \mathrm{d}x_{i_{2},j_{2}}}\cdot w(X)_{i_{0},j_{1}}\] \[= -f(X)_{i_{0},i_{0}}\cdot f(X)_{i_{0},i_{2}}\cdot w(X)_{i_{0},j_{2 }}\cdot h(X)_{j_{0},i_{0}}\cdot w(X)_{i_{0},j_{1}}\] where the first step is by definition of \(C_{3}(X)\) (see Lemma B.16), the 2nd step is by Fact B.2, the 3rd step is by Lemma C.2, the 4th step is because Lemma C.7, the 5th step is a rearrangement. ### Derivative of \(C_{4}(x)\) **Lemma C.13**.: _If the following holds:_ \begin{table} \begin{tabular}{|l|l|l|l|} \hline ID & Term & Symmetric? & Table Name \\ \hline 1 & \(-\langle f(X)_{i_{0}}\circ(X^{\top}W_{*,j_{1}}),h(X)_{j_{0}}\rangle\cdot f(X)_{i _{0},i_{0}}\cdot w(X)_{i_{0},j_{2}}\) & No & Table 1: 3 \\ \hline 2 & \(-\langle f(X)_{i_{0}}\circ(X^{\top}W_{*,j_{1}}),h(X)_{j_{0}}\rangle\cdot Z(X)_{i _{0},j_{2}}\) & No & Table 2: 4 \\ \hline 3 & \(f(X)_{i_{0},i_{0}}\cdot h(X)_{j_{0},i_{0}}\cdot\langle W_{*,j_{1}},X_{*,i_{0} }\rangle\cdot w(X)_{i_{0},j_{2}}\) & No & Table 3: 4 \\ \hline 4 & \(\langle f(X)_{i_{0}}\circ(X^{\top}W_{*,j_{2}})\circ(X^{\top}W_{*,j_{1}}),h(X)_{j _{0}}\rangle\) & Yes & N/A \\ \hline 5 & \(f(X)_{i_{0},i_{0}}\cdot h(X)_{j_{0},i_{0}}\cdot w_{j_{2},j_{1}}\) & No & Table 3: 6 \\ \hline 6 & \(f(X)_{i_{0},i_{0}}\cdot\langle W_{*,j_{1}},X_{*,i_{0}}\rangle\cdot v_{j_{2},j_{0}}\) & No & Table 5:4 \\ \hline \end{tabular} \end{table} Table 4: \(C_{4}\) Part 1 Summary * _Let_ \(C_{4}(X)\) _be defined as in Lemma_ B.16__ _We have_ * **Part 1** _For_ \(i_{0}=i_{1}=i_{2}\in[n]\)_,_ \(j_{1},j_{2}\in[d]\)__ \[\frac{\mathrm{d}C_{4}(X)}{\mathrm{d}x_{i_{2},j_{2}}}\] \[= -\langle f(X)_{i_{0}}\circ(X^{\top}W_{*,j_{1}}),h(X)_{j_{0}} \rangle\cdot f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{2}}\] \[-\langle f(X)_{i_{0}}\circ(X^{\top}W_{*,j_{1}}),h(X)_{j_{0}} \rangle\cdot Z(X)_{i_{0},j_{2}}\] \[+f(X)_{i_{0},i_{0}}\cdot h(X)_{j_{0},i_{0}}\cdot\langle W_{*,j_{1 }},X_{*,i_{0}}\rangle\cdot w(X)_{i_{0},j_{2}}\] \[+\langle f(X)_{i_{0}}\circ(X^{\top}W_{*,j_{2}})\circ(X^{\top}W_{ *,j_{1}}),h(X)_{j_{0}}\rangle\] \[+f(X)_{i_{0},i_{0}}\cdot h(X)_{j_{0},i_{0}}\cdot w_{j_{2},j_{1}}\] \[+f(X)_{i_{0},i_{0}}\cdot\langle W_{*,j_{1}},X_{*,i_{0}}\rangle \cdot v_{j_{2},j_{0}}\] * **Part 2** _For_ \(i_{0}=i_{1}\neq i_{2}\in[n]\)_,_ \(j_{1},j_{2}\in[d]\)__ \[\frac{\mathrm{d}C_{4}(X)}{\mathrm{d}x_{i_{2},j_{2}}}\] \[= -\langle f(X)_{i_{0}}\circ(X^{\top}W_{*,j_{1}}),h(X)_{j_{0}} \rangle\cdot f(X)_{i_{0},i_{2}}\cdot w(X)_{i_{0},j_{2}}\] \[+f(X)_{i_{0},i_{2}}\cdot h(X)_{j_{0},i_{2}}\cdot\langle W_{*,j_{1 }},X_{*,i_{2}}\rangle\cdot w(X)_{i_{0},j_{2}}\] \[+f(X)_{i_{0},i_{2}}\cdot h(X)_{j_{0},i_{2}}\cdot w_{j_{2},j_{1}}\] \[+f(X)_{i_{0},i_{2}}\cdot\langle W_{*,j_{1}},X_{*,i_{2}}\rangle \cdot v_{j_{2},j_{0}}\] Proof.: **Proof of Part 1** \[\frac{\mathrm{d}C_{4}(X)}{\mathrm{d}x_{i_{2},j_{2}}}\] \[= \frac{\mathrm{d}\langle f(X)_{i_{0}}\circ(X^{\top}W_{*,j_{1}}),h (X)_{j_{0}}\rangle}{\mathrm{d}x_{i_{2},j_{2}}}\] \[= \langle\frac{\mathrm{d}\langle f(X)_{i_{0}}\circ(X^{\top}W_{*,j_{ 1}})}{\mathrm{d}x_{i_{2},j_{2}}},h(X)_{j_{0}}\rangle+\langle 
f(X)_{i_{0}}\circ(X^{\top}W_{*,j_{1}}),\frac{\mathrm{d}h(X)_{j_{0}}}{\mathrm{d}x_{i_{2},j_{2}}}\rangle\]
\[= \langle\frac{\mathrm{d}f(X)_{i_{0}}\circ(X^{\top}W_{*,j_{1}})}{\mathrm{d}x_{i_{2},j_{2}}},h(X)_{j_{0}}\rangle+\langle f(X)_{i_{0}}\circ(X^{\top}W_{*,j_{1}}),e_{i_{2}}\cdot v_{j_{2},j_{0}}\rangle\]
\[= \langle(-f(X)_{i_{0}}\cdot(f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{2}}+\langle f(X)_{i_{0}},X^{\top}W_{*,j_{2}}\rangle)\]
\[+f(X)_{i_{0}}\circ(e_{i_{0}}\cdot w(X)_{i_{0},j_{2}}+X^{\top}W_{*,j_{2}}))\circ(X^{\top}W_{*,j_{1}})+f(X)_{i_{0}}\circ(e_{i_{0}}\cdot w_{j_{2},j_{1}}),h(X)_{j_{0}}\rangle\]
\[+\langle f(X)_{i_{0}}\circ(X^{\top}W_{*,j_{1}}),e_{i_{0}}\cdot v_{j_{2},j_{0}}\rangle\]
\[= -\langle f(X)_{i_{0}}\circ(X^{\top}W_{*,j_{1}}),h(X)_{j_{0}}\rangle\cdot f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{2}}\]
\[-\langle f(X)_{i_{0}}\circ(X^{\top}W_{*,j_{1}}),h(X)_{j_{0}}\rangle\cdot\langle f(X)_{i_{0}},X^{\top}W_{*,j_{2}}\rangle\]
\[+f(X)_{i_{0},i_{0}}\cdot h(X)_{j_{0},i_{0}}\cdot\langle W_{*,j_{1}},X_{*,i_{0}}\rangle\cdot w(X)_{i_{0},j_{2}}\]
\[+\langle f(X)_{i_{0}}\circ(X^{\top}W_{*,j_{2}})\circ(X^{\top}W_{*,j_{1}}),h(X)_{j_{0}}\rangle\]
\[+f(X)_{i_{0},i_{0}}\cdot h(X)_{j_{0},i_{0}}\cdot w_{j_{2},j_{1}}\]
\[+f(X)_{i_{0},i_{0}}\cdot\langle W_{*,j_{1}},X_{*,i_{0}}\rangle\cdot v_{j_{2},j_{0}}\]

where the first step is by definition of \(C_{4}(X)\) (see Lemma B.16), the 2nd step is by Fact B.2, the 3rd step is by Lemma B.15, the 4th step is because Lemma C.9, the 5th step is a rearrangement.

**Proof of Part 2**

\[\frac{\mathrm{d}C_{4}(X)}{\mathrm{d}x_{i_{2},j_{2}}}\]
\[=\frac{\mathrm{d}\langle f(X)_{i_{0}}\circ(X^{\top}W_{*,j_{1}}),h(X)_{j_{0}}\rangle}{\mathrm{d}x_{i_{2},j_{2}}}\]
\[=\langle\frac{\mathrm{d}f(X)_{i_{0}}\circ(X^{\top}W_{*,j_{1}})}{\mathrm{d}x_{i_{2},j_{2}}},h(X)_{j_{0}}\rangle+\langle f(X)_{i_{0}}\circ(X^{\top}W_{*,j_{1}}),\frac{\mathrm{d}h(X)_{j_{0}}}{\mathrm{d}x_{i_{2},j_{2}}}\rangle\]
\[=\langle\frac{\mathrm{d}f(X)_{i_{0}}\circ(X^{\top}W_{*,j_{1}})}{\mathrm{d}x_{i_{2},j_{2}}},h(X)_{j_{0}}\rangle+\langle f(X)_{i_{0}}\circ(X^{\top}W_{*,j_{1}}),e_{i_{2}}\cdot v_{j_{2},j_{0}}\rangle\]
\[=\langle(-f(X)_{i_{0}}\cdot f(X)_{i_{0},i_{2}}\cdot w(X)_{i_{0},j_{2}}\]
\[\quad+f(X)_{i_{0}}\circ(e_{i_{2}}\cdot w(X)_{i_{0},j_{2}}))\circ(X^{\top}W_{*,j_{1}})+f(X)_{i_{0}}\circ(e_{i_{2}}\cdot w_{j_{2},j_{1}}),h(X)_{j_{0}}\rangle\]
\[\quad+\langle f(X)_{i_{0}}\circ(X^{\top}W_{*,j_{1}}),e_{i_{2}}\cdot v_{j_{2},j_{0}}\rangle\]
\[=\,-\,\langle f(X)_{i_{0}}\circ(X^{\top}W_{*,j_{1}}),h(X)_{j_{0}}\rangle\cdot f(X)_{i_{0},i_{2}}\cdot w(X)_{i_{0},j_{2}}\]
\[\quad+f(X)_{i_{0},i_{2}}\cdot h(X)_{j_{0},i_{2}}\cdot\langle W_{*,j_{1}},X_{*,i_{2}}\rangle\cdot w(X)_{i_{0},j_{2}}\]
\[\quad+f(X)_{i_{0},i_{2}}\cdot h(X)_{j_{0},i_{2}}\cdot w_{j_{2},j_{1}}\]
\[\quad+f(X)_{i_{0},i_{2}}\cdot\langle W_{*,j_{1}},X_{*,i_{2}}\rangle\cdot v_{j_{2},j_{0}}\]

where the first step is by definition of \(C_{4}(X)\) (see Lemma B.16), the 2nd step is by Fact B.2, the 3rd step is by Lemma B.15, the 4th step is because Lemma C.9, the 5th step is a rearrangement.
### Derivative of \(C_{5}(x)\)

**Lemma C.14**.: _If the following holds:_

* _Let_ \(C_{5}(X)\) _be defined as in Lemma_ B.16

_We have_

* **Part 1** _For_ \(i_{0}=i_{1}=i_{2}\in[n]\)_,_ \(j_{1},j_{2}\in[d]\)

\[\frac{\mathrm{d}C_{5}(X)}{\mathrm{d}x_{i_{2},j_{2}}}=\,-\,f(X)_{i_{0},i_{0}}^{2}\cdot w(X)_{i_{0},j_{2}}\cdot v_{j_{1},j_{0}}\]

\begin{table} \begin{tabular}{|l|l|l|} \hline Term & Symmetric Terms & Table Name \\ \hline \(-f(X)^{2}_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{2}}\cdot v_{j_{1},j_{0}}\) & No & \(C_{1}(X):4\) \\ \hline \(-f(X)_{i_{0},i_{0}}\cdot z(X)_{i_{0},j_{2}}\cdot v_{j_{1},j_{0}}\) & No & Table 2: 5 \\ \hline \(f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{2}}\cdot v_{j_{1},j_{0}}\) & No & Table 3:5 \\ \hline \(f(X)_{i_{0},i_{0}}\cdot\langle W_{*,j_{2}},X_{*,i_{0}}\rangle\cdot v_{j_{1},j_{0}}\) & No & Table 4: 6 \\ \hline \end{tabular} \end{table} Table 5: \(C_{5}\) Part 1 Summary

\[-f(X)_{i_{0},i_{0}}\cdot z(X)_{i_{0},j_{2}}\cdot v_{j_{1},j_{0}}\]
\[+f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{2}}\cdot v_{j_{1},j_{0}}\]
\[+f(X)_{i_{0},i_{0}}\cdot\langle W_{*,j_{2}},X_{*,i_{0}}\rangle\cdot v_{j_{1},j_{0}}\]

* **Part 2** _For_ \(i_{0}=i_{1}\neq i_{2}\in[n]\)_,_ \(j_{1},j_{2}\in[d]\)

\[\frac{\mathrm{d}C_{5}(X)}{\mathrm{d}x_{i_{2},j_{2}}}=\ -f(X)_{i_{0},i_{0}}\cdot f(X)_{i_{0},i_{2}}\cdot w(X)_{i_{0},j_{2}}\cdot v_{j_{1},j_{0}}\]

Proof.: **Proof of Part 1**

\[\frac{\mathrm{d}C_{5}(X)}{\mathrm{d}x_{i_{2},j_{2}}}\]
\[= \frac{\mathrm{d}f(X)_{i_{0},i_{0}}\cdot v_{j_{1},j_{0}}}{\mathrm{d}x_{i_{2},j_{2}}}\]
\[= \frac{\mathrm{d}f(X)_{i_{0},i_{0}}}{\mathrm{d}x_{i_{2},j_{2}}}\cdot v_{j_{1},j_{0}}\]
\[= (-f(X)_{i_{0},i_{0}}\cdot(f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{2}}+\langle f(X)_{i_{0}},X^{\top}W_{*,j_{2}}\rangle)\]
\[+f(X)_{i_{0},i_{0}}\cdot\langle W_{j_{2},*}+W_{*,j_{2}},X_{*,i_{0}}\rangle)\cdot v_{j_{1},j_{0}}\]
\[= -f(X)_{i_{0},i_{0}}^{2}\cdot w(X)_{i_{0},j_{2}}\cdot v_{j_{1},j_{0}}\]
\[-f(X)_{i_{0},i_{0}}\cdot\langle f(X)_{i_{0}},X^{\top}W_{*,j_{2}}\rangle\cdot v_{j_{1},j_{0}}\]
\[+f(X)_{i_{0},i_{0}}\cdot\langle W_{j_{2},*}+W_{*,j_{2}},X_{*,i_{0}}\rangle\cdot v_{j_{1},j_{0}}\]

where the first step is by definition of \(C_{5}(X)\) (see Lemma B.16), the 2nd step is by Fact B.2, the 3rd step is by Lemma C.4, the 4th step is a rearrangement.

**Proof of Part 2**

\[\frac{\mathrm{d}C_{5}(X)}{\mathrm{d}x_{i_{2},j_{2}}}\]
\[= \frac{\mathrm{d}f(X)_{i_{0},i_{0}}\cdot v_{j_{1},j_{0}}}{\mathrm{d}x_{i_{2},j_{2}}}\]
\[= \frac{\mathrm{d}f(X)_{i_{0},i_{0}}}{\mathrm{d}x_{i_{2},j_{2}}}\cdot v_{j_{1},j_{0}}\]
\[= -f(X)_{i_{0},i_{0}}\cdot f(X)_{i_{0},i_{2}}\cdot w(X)_{i_{0},j_{2}}\cdot v_{j_{1},j_{0}}\]

where the first step is by definition of \(C_{5}(X)\) (see Lemma B.16), the 2nd step is by Fact B.2, the 3rd step is by Lemma C.4.
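At this point all five terms \(C_{1},\ldots,C_{5}\) have been differentiated. As a cross-check, the sketch below reads their closed forms off the proofs of Sections C.9-C.13 (the forms come from Lemma B.16, which this part of the appendix does not restate) and verifies that they sum to the finite-difference gradient of \(c(X)_{i_{0},j_{0}}\) in \(x_{i_{1},j_{1}}\) for \(i_{0}=i_{1}\). It additionally assumes \(h(X)_{j_{0}}=X^{\top}V_{*,j_{0}}\) (Definition B.7) and \(c(X)_{i_{0},j_{0}}=\langle f(X)_{i_{0}},h(X)_{j_{0}}\rangle-b_{i_{0},j_{0}}\) (Definition B.8); both are inferred, not quoted.

```python
import numpy as np

# Check that dc/dx_{i1,j1} = C1 + ... + C5 for i0 = i1, with
# C1 = -s*f*w, C2 = -s*z, C3 = f*h*w, C4 = <f o (X^T W_{*,j1}), h>, C5 = f*v,
# as read off the proofs of Sections C.9-C.13 (assumed forms of Lemma B.16).
rng = np.random.default_rng(4)
n, d = 5, 4
X = rng.standard_normal((d, n)) * 0.1
W = rng.standard_normal((d, d))
V = rng.standard_normal((d, d))
b = rng.standard_normal((n, d))          # b_{i0,j0}, independent of X

def f_vec(X, i0):
    u = np.exp(X.T @ (W @ X[:, i0]))
    return u / u.sum()

def c_scal(X, i0, j0):
    return f_vec(X, i0) @ (X.T @ V[:, j0]) - b[i0, j0]

i0, j0, j1 = 2, 1, 3                     # i1 = i0 (Hessian case 1 setting)
eps = 1e-6
Xp, Xm = X.copy(), X.copy()
Xp[j1, i0] += eps
Xm[j1, i0] -= eps
fd = (c_scal(Xp, i0, j0) - c_scal(Xm, i0, j0)) / (2 * eps)

fx = f_vec(X, i0)
h = X.T @ V[:, j0]                       # h(X)_{j0}
s = fx @ h                               # s(X)_{i0,j0}
w1 = W[j1, :] @ X[:, i0]                 # w(X)_{i0,j1}
z1 = fx @ (X.T @ W[:, j1])               # z(X)_{i0,j1}
C1 = -s * fx[i0] * w1
C2 = -s * z1
C3 = fx[i0] * h[i0] * w1
C4 = (fx * (X.T @ W[:, j1])) @ h
C5 = fx[i0] * V[j1, j0]                  # v_{j1,j0}
assert abs(fd - (C1 + C2 + C3 + C4 + C5)) < 1e-6
print("dc/dx_{i1,j1} = C1 + ... + C5 for i0 = i1")
```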
### Derivative of \(\frac{\mathrm{d}c(X)_{i_{0},j_{0}}}{\mathrm{d}x_{i_{1},j_{1}}}\)

**Lemma C.15**.: _If the following holds:_

* _Let_ \(c(X)_{i_{0},j_{0}}\) _be defined as in Definition_ B.8

_We have_

* **Part 1** _For_ \(i_{0}=i_{1}=i_{2}\in[n]\)_,_ \(j_{1},j_{2}\in[d]\)

\[\frac{\mathrm{d}^{2}c(X)_{i_{0},j_{0}}}{\mathrm{d}x_{i_{1},j_{1}}\mathrm{d}x_{i_{2},j_{2}}}=\sum_{i=1}^{21}D_{i}(X)\]

_where we have the following definitions_

\[D_{1}(X) := 2s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{0}}^{2}\cdot w(X)_{i_{0},j_{2}}\cdot w(X)_{i_{0},j_{1}}\]
\[D_{2}(X) := 2f(X)_{i_{0},i_{0}}\cdot s(X)_{i_{0},j_{0}}\cdot z(X)_{i_{0},j_{2}}\cdot w(X)_{i_{0},j_{1}}+2f(X)_{i_{0},i_{0}}\cdot s(X)_{i_{0},j_{0}}\cdot z(X)_{i_{0},j_{1}}\cdot w(X)_{i_{0},j_{2}}\]
\[D_{3}(X) := -f(X)_{i_{0},i_{0}}^{2}\cdot h(X)_{j_{0},i_{0}}\cdot w(X)_{i_{0},j_{2}}\cdot w(X)_{i_{0},j_{1}}\]
\[D_{4}(X) := -f(X)_{i_{0},i_{0}}\cdot\langle f(X)_{i_{0}}\circ(X^{\top}W_{*,j_{2}}),h(X)_{j_{0}}\rangle\cdot w(X)_{i_{0},j_{1}}-f(X)_{i_{0},i_{0}}\cdot\langle f(X)_{i_{0}}\circ(X^{\top}W_{*,j_{1}}),h(X)_{j_{0}}\rangle\cdot w(X)_{i_{0},j_{2}}\]
\[D_{5}(X) := -f(X)_{i_{0},i_{0}}^{2}\cdot v_{j_{2},j_{0}}\cdot w(X)_{i_{0},j_{1}}-f(X)_{i_{0},i_{0}}^{2}\cdot v_{j_{1},j_{0}}\cdot w(X)_{i_{0},j_{2}}\]
\[D_{6}(X) := -s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{2}}\cdot w(X)_{i_{0},j_{1}}\]
\[D_{7}(X) := -s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{0}}\cdot\langle W_{*,j_{2}},X_{*,i_{0}}\rangle\cdot w(X)_{i_{0},j_{1}}-s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{0}}\cdot\langle W_{*,j_{1}},X_{*,i_{0}}\rangle\cdot w(X)_{i_{0},j_{2}}\]
\[D_{8}(X) := -s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{0}}\cdot w_{j_{1},j_{2}}-s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{0}}\cdot w_{j_{2},j_{1}}\]
\[D_{9}(X) := s(X)_{i_{0},j_{0}}\cdot z(X)_{i_{0},j_{2}}\cdot z(X)_{i_{0},j_{1}}\]
\[D_{10}(X) := -f(X)_{i_{0},i_{0}}\cdot h(X)_{j_{0},i_{0}}\cdot w(X)_{i_{0},j_{2}}\cdot z(X)_{i_{0},j_{1}}-f(X)_{i_{0},i_{0}}\cdot h(X)_{j_{0},i_{0}}\cdot w(X)_{i_{0},j_{1}}\cdot z(X)_{i_{0},j_{2}}\]
\[D_{11}(X) := -\langle f(X)_{i_{0}}\circ(X^{\top}W_{*,j_{2}}),h(X)_{j_{0}}\rangle\cdot z(X)_{i_{0},j_{1}}-\langle f(X)_{i_{0}}\circ(X^{\top}W_{*,j_{1}}),h(X)_{j_{0}}\rangle\cdot z(X)_{i_{0},j_{2}}\]
\[D_{12}(X) := -f(X)_{i_{0},i_{0}}\cdot v_{j_{2},j_{0}}\cdot z(X)_{i_{0},j_{1}}-f(X)_{i_{0},i_{0}}\cdot v_{j_{1},j_{0}}\cdot z(X)_{i_{0},j_{2}}\]
\[D_{13}(X) := s(X)_{i_{0},j_{0}}\cdot z(X)_{i_{0},j_{1}}\cdot f(X)_{i_{0},i_{0}}\cdot z(X)_{i_{0},j_{2}}\]
\[D_{14}(X) := -s(X)_{i_{0},j_{0}}\cdot\langle f(X)_{i_{0}}\circ(X^{\top}W_{*,j_{2}}),X^{\top}W_{*,j_{1}}\rangle\]
\[D_{15}(X) := -f(X)_{i_{0},i_{0}}^{2}\cdot h(X)_{j_{0},i_{0}}\cdot w(X)_{i_{0},j_{2}}\cdot w(X)_{i_{0},j_{1}}\]
\[D_{16}(X) := f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{2}}\cdot h(X)_{j_{0},i_{0}}\cdot w(X)_{i_{0},j_{1}}\]
\[D_{17}(X) := f(X)_{i_{0},i_{0}}\cdot\langle W_{*,j_{2}},X_{*,i_{0}}\rangle\cdot h(X)_{j_{0},i_{0}}\cdot w(X)_{i_{0},j_{1}}+f(X)_{i_{0},i_{0}}\cdot\langle W_{*,j_{1}},X_{*,i_{0}}\rangle\cdot h(X)_{j_{0},i_{0}}\cdot w(X)_{i_{0},j_{2}}\]
\[D_{18}(X) := f(X)_{i_{0},i_{0}}\cdot v_{j_{2},j_{0}}\cdot w(X)_{i_{0},j_{1}}+f(X)_{i_{0},i_{0}}\cdot v_{j_{1},j_{0}}\cdot w(X)_{i_{0},j_{2}}\]
\[D_{19}(X) := f(X)_{i_{0},i_{0}}\cdot h(X)_{j_{0},i_{0}}\cdot w_{j_{1},j_{2}}+f(X)_{i_{0},i_{0}}\cdot h(X)_{j_{0},i_{0}}\cdot w_{j_{2},j_{1}}\]
\[D_{20}(X) := \langle f(X)_{i_{0}}\circ(X^{\top}W_{*,j_{2}})\circ(X^{\top}W_{*,j_{1}}),h(X)_{j_{0}}\rangle\]
\[D_{21}(X) := f(X)_{i_{0},i_{0}}\cdot\langle W_{*,j_{2}},X_{*,i_{0}}\rangle\cdot v_{j_{1},j_{0}}+f(X)_{i_{0},i_{0}}\cdot\langle W_{*,j_{1}},X_{*,i_{0}}\rangle\cdot v_{j_{2},j_{0}}\]

* **Part 2** _For_ \(i_{0}=i_{1}\neq i_{2}\in[n]\)_,_ \(j_{1},j_{2}\in[d]\)

\[\frac{\mathrm{d}^{2}c(X)_{i_{0},j_{0}}}{\mathrm{d}x_{i_{1},j_{1}}\mathrm{d}x_{i_{2},j_{2}}}=\sum_{i=1}^{15}E_{i}(X)\]

_where we have the following definitions_

\[E_{1}(X) := 2s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{2}}\cdot w(X)_{i_{0},j_{2}}\cdot f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{1}}\]
\[E_{2}(X) := -2f(X)_{i_{0},i_{2}}\cdot h(X)_{j_{0},i_{2}}\cdot w(X)_{i_{0},j_{2}}\cdot f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{1}}\]
\[E_{3}(X) := -f(X)_{i_{0},i_{2}}\cdot v_{j_{2},j_{0}}\cdot f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{1}}\]
\[E_{4}(X) := s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{2}}\cdot w(X)_{i_{0},j_{2}}\cdot z(X)_{i_{0},j_{1}}\]
\[E_{5}(X) := -f(X)_{i_{0},i_{2}}\cdot h(X)_{j_{0},i_{2}}\cdot w(X)_{i_{0},j_{2}}\cdot z(X)_{i_{0},j_{1}}\]
\[E_{6}(X) := -f(X)_{i_{0},i_{2}}\cdot v_{j_{2},j_{0}}\cdot z(X)_{i_{0},j_{1}}\]
\[E_{7}(X) := s(X)_{i_{0},j_{0}}\cdot\langle f(X)_{i_{0}},X^{\top}W_{*,j_{1}}\rangle\cdot f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{2}}\]
\[E_{8}(X) := -s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{0}}\cdot\langle W_{*,j_{1}},X_{*,i_{0}}\rangle\cdot w(X)_{i_{0},j_{2}}\]
\[E_{9}(X) := -s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{0}}\cdot w_{j_{2},j_{1}}\]
\[E_{10}(X) := -f(X)_{i_{0},i_{0}}\cdot f(X)_{i_{0},i_{2}}\cdot w(X)_{i_{0},j_{2}}\cdot h(X)_{j_{0},i_{0}}\cdot w(X)_{i_{0},j_{1}}\]
\[E_{11}(X) := -\langle f(X)_{i_{0}}\circ(X^{\top}W_{*,j_{1}}),h(X)_{j_{0}}\rangle\cdot f(X)_{i_{0},i_{2}}\cdot w(X)_{i_{0},j_{2}}\]
\[E_{12}(X) := f(X)_{i_{0},i_{2}}\cdot h(X)_{j_{0},i_{2}}\cdot\langle W_{*,j_{1}},X_{*,i_{2}}\rangle\cdot w(X)_{i_{0},j_{2}}\]
\[E_{13}(X) := f(X)_{i_{0},i_{2}}\cdot h(X)_{j_{0},i_{2}}\cdot w_{j_{2},j_{1}}\]
\[E_{14}(X) := f(X)_{i_{0},i_{2}}\cdot\langle W_{*,j_{1}},X_{*,i_{2}}\rangle\cdot v_{j_{2},j_{0}}\]
\[E_{15}(X) := -f(X)_{i_{0},i_{0}}\cdot f(X)_{i_{0},i_{2}}\cdot w(X)_{i_{0},j_{2}}\cdot v_{j_{1},j_{0}}\]

Proof.: The proof is a combination of the derivatives of the \(C_{i}(X)\) in this section. Notice that the symmetry for **Part 1** is verified by the tables in this section.

## Appendix D Hessian case 2: \(i_{0}\neq i_{1}\)

In this section, we focus on the second case of the Hessian. In Sections D.1, D.2, D.3, D.4 and D.5, we calculate the derivatives of some important terms. In Sections D.6, D.7 and D.8, we calculate the derivatives of \(C_{6}\), \(C_{7}\) and \(C_{8}\), respectively. And in Section D.9, we calculate the derivative of \(\frac{\mathrm{d}c(X)_{i_{0},j_{0}}}{\mathrm{d}x_{i_{1},j_{1}}}\).
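As a weak but cheap smoke test of the bookkeeping in the "Symmetric?" columns of Tables 1-5 and of the symmetry remark after Lemma C.15, one can confirm numerically that the Hessian of \(c(X)_{i_{0},j_{0}}\) is symmetric under swapping \((i_{1},j_{1})\) and \((i_{2},j_{2})\). The Hessian below is obtained by finite-differencing a finite-difference gradient, so the symmetry is not built in by construction; assumptions as in the earlier sketches.

```python
import numpy as np

# Hessian-symmetry smoke test for c(X)_{i0,j0} under the assumed forms
# f(X)_{i0} = softmax(X^T W X_{*,i0}) and h(X)_{j0} = X^T V_{*,j0}.
rng = np.random.default_rng(5)
n, d = 3, 3
X = rng.standard_normal((d, n)) * 0.1
W = rng.standard_normal((d, d))
V = rng.standard_normal((d, d))
i0, j0 = 1, 2

def c_scal(X):
    u = np.exp(X.T @ (W @ X[:, i0]))
    f = u / u.sum()                      # f(X)_{i0}
    return f @ (X.T @ V[:, j0])          # <f(X)_{i0}, h(X)_{j0}>

def grad(X, h=1e-5):
    g = np.zeros((n, d))                 # g[i, j] = dc / dx_{i,j}
    for i in range(n):
        for j in range(d):
            Xp, Xm = X.copy(), X.copy()
            Xp[j, i] += h
            Xm[j, i] -= h
            g[i, j] = (c_scal(Xp) - c_scal(Xm)) / (2 * h)
    return g.reshape(-1)                 # flat index a = i * d + j

eps = 1e-4
H = np.zeros((n * d, n * d))
for a in range(n * d):
    i1, j1 = divmod(a, d)
    Xp, Xm = X.copy(), X.copy()
    Xp[j1, i1] += eps
    Xm[j1, i1] -= eps
    H[a] = (grad(Xp) - grad(Xm)) / (2 * eps)
assert np.allclose(H, H.T, atol=1e-4)
print("max asymmetry:", np.abs(H - H.T).max())
```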
### Derivative of scalar function \(f(X)_{i_{0},i_{1}}\)

**Lemma D.1**.: _If the following holds:_

* _Let_ \(f(X)_{i_{0}}\) _be defined as Definition_ B.6
* _For_ \(i_{0}\neq i_{2}\in[n]\)_,_ \(j_{1},j_{2}\in[d]\)

_We have_

* **Part 1.** _For_ \(i_{0}\neq i_{2},i_{1}=i_{2}\in[n]\)_,_ \(j_{1},j_{2}\in[d]\)

\[\frac{\mathrm{d}f(X)_{i_{0},i_{1}}}{\mathrm{d}x_{i_{2},j_{2}}}= -f(X)_{i_{0},i_{1}}\cdot f(X)_{i_{0},i_{2}}\cdot w(X)_{i_{0},j_{2}}+f(X)_{i_{0},i_{1}}\cdot w(X)_{i_{0},j_{2}}\]

* **Part 2.** _For_ \(i_{0}\neq i_{2},i_{1}\neq i_{2}\in[n]\)_,_ \(j_{1},j_{2}\in[d]\)

\[\frac{\mathrm{d}f(X)_{i_{0},i_{1}}}{\mathrm{d}x_{i_{2},j_{2}}}= -f(X)_{i_{0},i_{1}}\cdot f(X)_{i_{0},i_{2}}\cdot w(X)_{i_{0},j_{2}}\]

Proof.: **Proof of Part 1**

\[\frac{\mathrm{d}f(X)_{i_{0},i_{1}}}{\mathrm{d}x_{i_{2},j_{2}}} = (-(\alpha(X)_{i_{0}})^{-1}\cdot f(X)_{i_{0}}\cdot u(X)_{i_{0},i_{2}}\cdot\langle W_{j_{2},*},X_{*,i_{0}}\rangle\]
\[+f(X)_{i_{0}}\circ(e_{i_{1}}\cdot\langle W_{j_{2},*},X_{*,i_{0}}\rangle))_{i_{1}}\]
\[= -(\alpha(X)_{i_{0}})^{-1}\cdot f(X)_{i_{0},i_{1}}\cdot u(X)_{i_{0},i_{2}}\cdot\langle W_{j_{2},*},X_{*,i_{0}}\rangle\]
\[+f(X)_{i_{0},i_{1}}\cdot\langle W_{j_{2},*},X_{*,i_{0}}\rangle\]
\[= -f(X)_{i_{0},i_{1}}\cdot f(X)_{i_{0},i_{2}}\cdot w(X)_{i_{0},j_{2}}\]
\[+f(X)_{i_{0},i_{1}}\cdot w(X)_{i_{0},j_{2}}\]

where the first step follows from Part 2 of Lemma B.14, the second step follows from simple algebra, the last step follows from Definition B.6.

**Proof of Part 2**

\[\frac{\mathrm{d}f(X)_{i_{0},i_{1}}}{\mathrm{d}x_{i_{2},j_{2}}} = (-(\alpha(X)_{i_{0}})^{-1}\cdot f(X)_{i_{0}}\cdot u(X)_{i_{0},i_{2}}\cdot\langle W_{j_{2},*},X_{*,i_{0}}\rangle\]
\[+f(X)_{i_{0}}\circ(e_{i_{2}}\cdot\langle W_{j_{2},*},X_{*,i_{0}}\rangle))_{i_{1}}\]
\[= -(\alpha(X)_{i_{0}})^{-1}\cdot f(X)_{i_{0},i_{1}}\cdot u(X)_{i_{0},i_{2}}\cdot\langle W_{j_{2},*},X_{*,i_{0}}\rangle\]
\[= -f(X)_{i_{0},i_{1}}\cdot f(X)_{i_{0},i_{2}}\cdot w(X)_{i_{0},j_{2}}\]

where the first step follows from Part 2 of Lemma B.14, the second step follows from simple algebra, the last step follows from Definition B.6.

### Derivative of scalar function \(h(X)_{j_{0},i_{1}}\)

**Lemma D.2**.: _If the following holds:_

* _Let_ \(h(X)_{j_{0}}\) _be defined as Definition_ B.7
* _For_ \(i_{0}\neq i_{2}\in[n]\)_,_ \(j_{1},j_{2}\in[d]\)

_We have_

* **Part 1.** _For_ \(i_{0}\neq i_{2},i_{1}=i_{2}\in[n]\)_,_ \(j_{1},j_{2}\in[d]\)

\[\frac{\mathrm{d}h(X)_{j_{0},i_{1}}}{\mathrm{d}x_{i_{2},j_{2}}}=v_{j_{2},j_{0}}\]

* **Part 2.** _For_ \(i_{0}\neq i_{2},i_{1}\neq i_{2}\in[n]\)_,_ \(j_{1},j_{2}\in[d]\)

\[\frac{\mathrm{d}h(X)_{j_{0},i_{1}}}{\mathrm{d}x_{i_{2},j_{2}}}=0\]

Proof.: **Proof of Part 1.**

\[\frac{\mathrm{d}h(X)_{j_{0},i_{1}}}{\mathrm{d}x_{i_{2},j_{2}}} = (e_{i_{2}}\cdot v_{j_{2},j_{0}})_{i_{1}}\]
\[= v_{j_{2},j_{0}}\]

where the first step follows from Lemma B.15, the second step follows from \(i_{1}=i_{2}\).

**Proof of Part 2.**

\[\frac{\mathrm{d}h(X)_{j_{0},i_{1}}}{\mathrm{d}x_{i_{2},j_{2}}} =(e_{i_{2}}\cdot v_{j_{2},j_{0}})_{i_{1}}\]
\[=0\]

where the first step follows from Lemma B.15, the second step follows from \(i_{1}\neq i_{2}\).
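Lemma D.1 is the off-diagonal analogue of Lemma C.4 and admits the same kind of finite-difference check for both parts, under the assumed softmax form of \(f\) from the earlier sketches.

```python
import numpy as np

# Check of Lemma D.1 (off-diagonal softmax entries, i0 != i2).
rng = np.random.default_rng(6)
n, d = 6, 3
X = rng.standard_normal((d, n)) * 0.1
W = rng.standard_normal((d, d))

def f_vec(X, i0):
    u = np.exp(X.T @ (W @ X[:, i0]))
    return u / u.sum()

i0, j2 = 0, 1
eps = 1e-6
fx = f_vec(X, i0)
for i1, i2 in ((2, 2), (2, 4)):          # Part 1: i1 == i2; Part 2: i1 != i2
    Xp, Xm = X.copy(), X.copy()
    Xp[j2, i2] += eps
    Xm[j2, i2] -= eps
    fd = (f_vec(Xp, i0)[i1] - f_vec(Xm, i0)[i1]) / (2 * eps)
    w2 = W[j2, :] @ X[:, i0]             # w(X)_{i0,j2}
    analytic = -fx[i1] * fx[i2] * w2
    if i1 == i2:
        analytic += fx[i1] * w2          # the extra term in Part 1
    assert abs(fd - analytic) < 1e-6, (i1, i2, fd, analytic)
print("Lemma D.1 agrees with finite differences")
```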
### Derivative of scalar function \(\langle f(X)_{i_{0}},h(X)_{j_{0}}\rangle\) **Lemma D.3**.: _If the following holds:_ * _Let_ \(f(X)_{i_{0}}\) _be defined as Definition_ B.6__ * _Let_ \(h(X)_{j_{0}}\) _be defined as Definition_ B.7__ * _For_ \(i_{0}\neq i_{2}\in[n]\)_,_ \(j_{1},j_{2}\in[d]\)__ _We have_ \[\frac{\mathrm{d}\langle f(X)_{i_{0}},h(X)_{j_{0}}\rangle}{ \mathrm{d}x_{i_{2},j_{2}}} =\langle-f(X)_{i_{0}}\cdot f(X)_{i_{0},i_{2}}\cdot\langle W_{j_{2 },*},X_{*,i_{0}}\rangle\] \[\quad+f(X)_{i_{0}}\circ(e_{i_{2}}\cdot\langle W_{j_{2},*},X_{*,i_ {0}}\rangle),h(X)_{j_{0}}\rangle+f(X)_{i_{0},i_{2}}\cdot v_{j_{2},j_{0}}\] Proof.: \[\frac{\mathrm{d}\langle f(X)_{i_{0}},h(X)_{j_{0}}\rangle}{ \mathrm{d}x_{i_{2},j_{2}}} =\langle\frac{\mathrm{d}f(X)_{i_{0}}}{\mathrm{d}x_{i_{2},j_{2}}},h(X)_{j_{0}}\rangle+\langle f(X)_{i_{0}},\frac{\mathrm{d}h(X)_{j_{0}}}{ \mathrm{d}x_{i_{2},j_{2}}}\rangle\] \[=\langle-(\alpha(X)_{i_{0}})^{-1}\cdot f(X)_{i_{0}}\cdot u(X)_{i_ {0},i_{2}}\cdot\langle W_{j_{2},*},X_{*,i_{0}}\rangle\] \[\quad+f(X)_{i_{0}}\circ(e_{i_{2}}\cdot\langle W_{j_{2},*},X_{*,i_ {0}}\rangle),h(X)_{j_{0}}\rangle+\langle f(X)_{i_{0}},\frac{\mathrm{d}h(X)_{j _{0}}}{\mathrm{d}x_{i_{2},j_{2}}}\rangle\] \[=\langle-f(X)_{i_{0}}\cdot f(X)_{i_{0},i_{2}}\cdot\langle W_{j_{2 },*},X_{*,i_{0}}\rangle\] \[\quad+f(X)_{i_{0}}\circ(e_{i_{2}}\cdot\langle W_{j_{2},*},X_{*,i_ {0}}\rangle),h(X)_{j_{0}}\rangle+\langle f(X)_{i_{0}},\frac{\mathrm{d}h(X)_{j _{0}}}{\mathrm{d}x_{i_{2},j_{2}}}\rangle\] \[=\langle-f(X)_{i_{0}}\cdot f(X)_{i_{0},i_{2}}\cdot\langle W_{j_{2 },*},X_{*,i_{0}}\rangle\] \[\quad+f(X)_{i_{0}}\circ(e_{i_{2}}\cdot\langle W_{j_{2},*},X_{*,i_ {0}}\rangle),h(X)_{j_{0}}\rangle+f(X)_{i_{0},i_{2}}\cdot v_{j_{2},j_{0}}\] where the first step follows from simple differential rule, the second step follows from Lemma B.14, the third step follows from simple algebra and Definition B.6, the fourth step follows from Lemma B.15, the last step follows from simple algebra. 
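A check of Lemma D.3, with its inner product expanded entrywise (the \(e_{i_{2}}\) terms pick out the \(i_{2}\)-th coordinates), again assuming \(h(X)_{j_{0}}=X^{\top}V_{*,j_{0}}\) and \(v_{j,k}=V_{j,k}\); the expanded form used below is \(-s\cdot f_{i_{0},i_{2}}\cdot w+f_{i_{0},i_{2}}\cdot h_{j_{0},i_{2}}\cdot w+f_{i_{0},i_{2}}\cdot v_{j_{2},j_{0}}\).

```python
import numpy as np

# Check of Lemma D.3 (i0 != i2), with the stated inner product expanded.
rng = np.random.default_rng(7)
n, d = 5, 4
X = rng.standard_normal((d, n)) * 0.1
W = rng.standard_normal((d, d))
V = rng.standard_normal((d, d))

def f_vec(X, i0):
    u = np.exp(X.T @ (W @ X[:, i0]))
    return u / u.sum()

def s_scal(X, i0, j0):                   # s(X)_{i0,j0} = <f(X)_{i0}, h(X)_{j0}>
    return f_vec(X, i0) @ (X.T @ V[:, j0])

i0, i2, j0, j2 = 1, 3, 2, 0
eps = 1e-6
Xp, Xm = X.copy(), X.copy()
Xp[j2, i2] += eps
Xm[j2, i2] -= eps
fd = (s_scal(Xp, i0, j0) - s_scal(Xm, i0, j0)) / (2 * eps)

fx = f_vec(X, i0)
h = X.T @ V[:, j0]                       # h(X)_{j0}
w2 = W[j2, :] @ X[:, i0]                 # <W_{j2,*}, X_{*,i0}>
analytic = (-s_scal(X, i0, j0) * fx[i2] * w2
            + fx[i2] * h[i2] * w2
            + fx[i2] * V[j2, j0])
assert abs(fd - analytic) < 1e-6, (fd, analytic)
print("Lemma D.3 agrees with finite differences")
```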
### Derivative of scalar function \(f(X)_{i_{0},i_{1}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle\)

**Lemma D.4**.: _If the following holds:_

* _Let_ \(f(X)_{i_{0}}\) _be defined as Definition_ B.6
* _For_ \(i_{0}\neq i_{2}\in[n]\)_,_ \(j_{1},j_{2}\in[d]\)

_We have_

* **Part 1.** _For_ \(i_{0}\neq i_{2},i_{1}=i_{2}\in[n]\)_,_ \(j_{1},j_{2}\in[d]\)

\[\frac{\mathrm{d}f(X)_{i_{0},i_{1}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle}{\mathrm{d}x_{i_{2},j_{2}}}\]
\[= \ (-f(X)_{i_{0},i_{2}}\cdot f(X)_{i_{0},i_{1}}+f(X)_{i_{0},i_{1}})\cdot\langle W_{j_{2},*},X_{*,i_{0}}\rangle\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle\]

* **Part 2.** _For_ \(i_{0}\neq i_{2},i_{1}\neq i_{2}\in[n]\)_,_ \(j_{1},j_{2}\in[d]\)

\[\frac{\mathrm{d}f(X)_{i_{0},i_{1}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle}{\mathrm{d}x_{i_{2},j_{2}}}\]
\[= \ -f(X)_{i_{0},i_{2}}\cdot f(X)_{i_{0},i_{1}}\cdot\langle W_{j_{2},*},X_{*,i_{0}}\rangle\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle\]

Proof.: **Proof of Part 1**

\[\frac{\mathrm{d}f(X)_{i_{0},i_{1}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle}{\mathrm{d}x_{i_{2},j_{2}}}\]
\[= \ \frac{\mathrm{d}f(X)_{i_{0},i_{1}}}{\mathrm{d}x_{i_{2},j_{2}}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle+\frac{\mathrm{d}\langle W_{j_{1},*},X_{*,i_{0}}\rangle}{\mathrm{d}x_{i_{2},j_{2}}}\cdot f(X)_{i_{0},i_{1}}\]
\[= \ (-f(X)_{i_{0},i_{2}}\cdot f(X)_{i_{0},i_{1}}+f(X)_{i_{0},i_{1}})\cdot\langle W_{j_{2},*},X_{*,i_{0}}\rangle\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle\]
\[\ +\frac{\mathrm{d}\langle W_{j_{1},*},X_{*,i_{0}}\rangle}{\mathrm{d}x_{i_{2},j_{2}}}\cdot f(X)_{i_{0},i_{1}}\]
\[= \ (-f(X)_{i_{0},i_{2}}\cdot f(X)_{i_{0},i_{1}}+f(X)_{i_{0},i_{1}})\cdot\langle W_{j_{2},*},X_{*,i_{0}}\rangle\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle+0\cdot f(X)_{i_{0},i_{1}}\]
\[= \ (-f(X)_{i_{0},i_{2}}\cdot f(X)_{i_{0},i_{1}}+f(X)_{i_{0},i_{1}})\cdot\langle W_{j_{2},*},X_{*,i_{0}}\rangle\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle\]

where the first step follows from simple differential rule, the second step follows from Lemma D.1, the third step follows from \(i_{0}\neq i_{2}\), the last step follows from simple algebra.

**Proof of Part 2**

\[\frac{\mathrm{d}f(X)_{i_{0},i_{1}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle}{\mathrm{d}x_{i_{2},j_{2}}}\]
\[= \ \frac{\mathrm{d}f(X)_{i_{0},i_{1}}}{\mathrm{d}x_{i_{2},j_{2}}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle+\frac{\mathrm{d}\langle W_{j_{1},*},X_{*,i_{0}}\rangle}{\mathrm{d}x_{i_{2},j_{2}}}\cdot f(X)_{i_{0},i_{1}}\]
\[= \ -f(X)_{i_{0},i_{2}}\cdot f(X)_{i_{0},i_{1}}\cdot\langle W_{j_{2},*},X_{*,i_{0}}\rangle\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle\]
\[\ +\frac{\mathrm{d}\langle W_{j_{1},*},X_{*,i_{0}}\rangle}{\mathrm{d}x_{i_{2},j_{2}}}\cdot f(X)_{i_{0},i_{1}}\]
\[= \ -f(X)_{i_{0},i_{2}}\cdot f(X)_{i_{0},i_{1}}\cdot\langle W_{j_{2},*},X_{*,i_{0}}\rangle\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle+0\cdot f(X)_{i_{0},i_{1}}\]
\[= \ -f(X)_{i_{0},i_{2}}\cdot f(X)_{i_{0},i_{1}}\cdot\langle W_{j_{2},*},X_{*,i_{0}}\rangle\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle\]

where the first step follows from simple differential rule, the second step follows from Lemma D.1, the third step follows from \(i_{0}\neq i_{2}\), the last step follows from simple algebra.
### Derivative of scalar function \(f(X)_{i_{0},i_{1}}\cdot h(X)_{j_{0},i_{1}}\)

**Lemma D.5**.: _If the following holds:_

* _Let_ \(f(X)_{i_{0}}\) _be defined as Definition_ B.6
* _Let_ \(h(X)_{j_{0}}\) _be defined as Definition_ B.7

_We have_

* **Part 1** _For_ \(i_{0}\neq i_{2},i_{1}=i_{2}\in[n]\)_,_ \(j_{1},j_{2}\in[d]\)

\[\frac{\mathrm{d}f(X)_{i_{0},i_{1}}\cdot h(X)_{j_{0},i_{1}}}{\mathrm{d}x_{i_{2},j_{2}}}\]
\[= (-f(X)_{i_{0},i_{2}}\cdot f(X)_{i_{0},i_{1}}+f(X)_{i_{0},i_{1}})\cdot\langle W_{j_{2},*},X_{*,i_{0}}\rangle\cdot h(X)_{j_{0},i_{1}}\]
\[+v_{j_{2},j_{0}}\cdot f(X)_{i_{0},i_{1}}\]

* **Part 2** _For_ \(i_{0}\neq i_{2},i_{1}\neq i_{2}\in[n]\)_,_ \(j_{1},j_{2}\in[d]\)

\[\frac{\mathrm{d}f(X)_{i_{0},i_{1}}\cdot h(X)_{j_{0},i_{1}}}{\mathrm{d}x_{i_{2},j_{2}}}\]
\[= -f(X)_{i_{0},i_{2}}\cdot f(X)_{i_{0},i_{1}}\cdot\langle W_{j_{2},*},X_{*,i_{0}}\rangle\cdot h(X)_{j_{0},i_{1}}\]

Proof.: **Proof of Part 1.**

\[\frac{\mathrm{d}f(X)_{i_{0},i_{1}}\cdot h(X)_{j_{0},i_{1}}}{\mathrm{d}x_{i_{2},j_{2}}} = \frac{\mathrm{d}f(X)_{i_{0},i_{1}}}{\mathrm{d}x_{i_{2},j_{2}}}\cdot h(X)_{j_{0},i_{1}}+\frac{\mathrm{d}h(X)_{j_{0},i_{1}}}{\mathrm{d}x_{i_{2},j_{2}}}\cdot f(X)_{i_{0},i_{1}}\]
\[= (-f(X)_{i_{0},i_{2}}\cdot f(X)_{i_{0},i_{1}}+f(X)_{i_{0},i_{1}})\cdot\langle W_{j_{2},*},X_{*,i_{0}}\rangle\cdot h(X)_{j_{0},i_{1}}\]
\[+\frac{\mathrm{d}h(X)_{j_{0},i_{1}}}{\mathrm{d}x_{i_{2},j_{2}}}\cdot f(X)_{i_{0},i_{1}}\]
\[= (-f(X)_{i_{0},i_{2}}\cdot f(X)_{i_{0},i_{1}}+f(X)_{i_{0},i_{1}})\cdot\langle W_{j_{2},*},X_{*,i_{0}}\rangle\cdot h(X)_{j_{0},i_{1}}\]
\[+v_{j_{2},j_{0}}\cdot f(X)_{i_{0},i_{1}}\]

where the first step follows from simple differential rule, the second step follows from Lemma D.1, the third step follows from Part 1 of Lemma D.2.

**Proof of Part 2.**

\[\frac{\mathrm{d}f(X)_{i_{0},i_{1}}\cdot h(X)_{j_{0},i_{1}}}{\mathrm{d}x_{i_{2},j_{2}}} = \frac{\mathrm{d}f(X)_{i_{0},i_{1}}}{\mathrm{d}x_{i_{2},j_{2}}}\cdot h(X)_{j_{0},i_{1}}+\frac{\mathrm{d}h(X)_{j_{0},i_{1}}}{\mathrm{d}x_{i_{2},j_{2}}}\cdot f(X)_{i_{0},i_{1}}\]
\[= -f(X)_{i_{0},i_{2}}\cdot f(X)_{i_{0},i_{1}}\cdot\langle W_{j_{2},*},X_{*,i_{0}}\rangle\cdot h(X)_{j_{0},i_{1}}\]
\[+\frac{\mathrm{d}h(X)_{j_{0},i_{1}}}{\mathrm{d}x_{i_{2},j_{2}}}\cdot f(X)_{i_{0},i_{1}}\]
\[= -f(X)_{i_{0},i_{2}}\cdot f(X)_{i_{0},i_{1}}\cdot\langle W_{j_{2},*},X_{*,i_{0}}\rangle\cdot h(X)_{j_{0},i_{1}}\]

where the first step follows from simple differential rule, the second step follows from Lemma D.1, the third step follows from Part 2 of Lemma D.2.
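Finally for this group of preparatory lemmas, a check of Part 1 of Lemma D.5 (the case \(i_{0}\neq i_{2}\), \(i_{1}=i_{2}\)), under the same assumed forms for \(f\) and \(h\) as in the previous sketches.

```python
import numpy as np

# Check of Lemma D.5, Part 1, for the product f(X)_{i0,i1} * h(X)_{j0,i1}.
rng = np.random.default_rng(8)
n, d = 6, 3
X = rng.standard_normal((d, n)) * 0.1
W = rng.standard_normal((d, d))
V = rng.standard_normal((d, d))

def f_vec(X, i0):
    u = np.exp(X.T @ (W @ X[:, i0]))
    return u / u.sum()

def g(X, i0, i1, j0):                    # f(X)_{i0,i1} * h(X)_{j0,i1}
    return f_vec(X, i0)[i1] * (X[:, i1] @ V[:, j0])

i0, j0, j2 = 0, 2, 1
i1 = i2 = 4                              # Part 1: i1 = i2, both != i0
eps = 1e-6
Xp, Xm = X.copy(), X.copy()
Xp[j2, i2] += eps
Xm[j2, i2] -= eps
fd = (g(Xp, i0, i1, j0) - g(Xm, i0, i1, j0)) / (2 * eps)

fx = f_vec(X, i0)
w2 = W[j2, :] @ X[:, i0]                 # <W_{j2,*}, X_{*,i0}>
h_i1 = X[:, i1] @ V[:, j0]               # h(X)_{j0,i1}
analytic = (-fx[i2] * fx[i1] + fx[i1]) * w2 * h_i1 + V[j2, j0] * fx[i1]
assert abs(fd - analytic) < 1e-6, (fd, analytic)
print("Lemma D.5, Part 1 agrees with finite differences")
```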
### Derivative of \(C_{6}(x)\) **Lemma D.6**.: _If the following holds:_ * _Let_ \(C_{6}(X)\in\mathbb{R}\) _be defined as in Lemma_ B.16__ * _For_ \(i_{0}\neq i_{2}\in[n]\)_,_ \(j_{1},j_{2}\in[d]\)__ _We have_ * **Part 1** _For_ \(i_{0}\neq i_{2},i_{1}=i_{2}\in[n]\)_,_ \(j_{1},j_{2}\in[d]\)__ \[\frac{\mathrm{d}C_{6}(X)}{\mathrm{d}x_{i_{2},j_{2}}}\] \[= -\left(\langle-f(X)_{i_{0}}\cdot f(X)_{i_{0},i_{2}}\cdot\langle W_ {j_{2},*},X_{*,i_{0}}\rangle\right.\] \[+f(X)_{i_{0}}\circ\left(e_{i_{1}}\cdot\langle W_{j_{2},*},X_{*,i_ {0}}\rangle\right),h(X)_{j_{0}})+f(X)_{i_{0},i_{2}}\cdot v_{j_{2},j_{0}})\cdot f (X)_{i_{0},i_{1}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle\] \[+(-\langle f(X)_{i_{0}},h(X)_{j_{0}}\rangle)\cdot(-f(X)_{i_{0},i_ {2}}f(X)_{i_{0},i_{1}}+f(X)_{i_{0},i_{1}})\cdot\langle W_{j_{2},*},X_{*,i_{0}} \rangle\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle\] * **Part 2** _For_ \(i_{0}\neq i_{2},i_{1}\neq i_{2}\in[n]\)_,_ \(j_{1},j_{2}\in[d]\)__ \[\frac{\mathrm{d}C_{6}(X)}{\mathrm{d}x_{i_{2},j_{2}}}\] \[= -\left(\langle-f(X)_{i_{0}}\cdot f(X)_{i_{0},i_{2}}\cdot\langle W _{j_{2},*},X_{*,i_{0}}\rangle\right.\] \[+f(X)_{i_{0}}\circ\left(e_{i_{2}}\cdot\langle W_{j_{2},*},X_{*,i_ {0}}\rangle\right),h(X)_{j_{0}})+f(X)_{i_{0},i_{2}}\cdot v_{j_{2},j_{0}})\cdot f (X)_{i_{0},i_{1}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle\] \[+\langle f(X)_{i_{0}},h(X)_{j_{0}}\rangle\cdot f(X)_{i_{0},i_{2} }\cdot f(X)_{i_{0},i_{1}}\cdot\langle W_{j_{2},*},X_{*,i_{0}}\rangle\cdot \langle W_{j_{1},*},X_{*,i_{0}}\rangle\] Proof.: **Proof of Part 1** \[\frac{\mathrm{d}C_{6}(X)}{\mathrm{d}x_{i_{2},j_{2}}}\] \[= \frac{\mathrm{d}}{\mathrm{d}x_{i_{2},j_{2}}}(-\langle f(X)_{i_{0 }},h(X)_{j_{0}}\rangle\cdot f(X)_{i_{0},i_{1}}\cdot\langle W_{j_{1},*},X_{*,i_ {0}}\rangle)\] \[= \frac{\mathrm{d}}{\mathrm{d}x_{i_{2},j_{2}}}(-\langle f(X)_{i_{0 }},h(X)_{j_{0}}\rangle)\cdot f(X)_{i_{0},i_{1}}\cdot\langle W_{j_{1},*},X_{*, i_{0}}\rangle\] \[+(-\langle f(X)_{i_{0}},h(X)_{j_{0}}\rangle)\cdot\frac{\mathrm{d }}{\mathrm{d}x_{i_{2},j_{2}}}(f(X)_{i_{0},i_{1}}\cdot\langle W_{j_{1},*},X_{*, i_{0}}\rangle)\] \[= \frac{\mathrm{d}}{\mathrm{d}x_{i_{2},j_{2}}}(-\langle f(X)_{i_{0 }},h(X)_{j_{0}}\rangle)\cdot f(X)_{i_{0},i_{1}}\cdot\langle W_{j_{1},*},X_{*, i_{0}}\rangle\] \[+(-\langle f(X)_{i_{0}},h(X)_{j_{0}}\rangle)\cdot(-f(X)_{i_{0},i_ {2}}f(X)_{i_{0},i_{1}}+f(X)_{i_{0},i_{1}})\cdot\langle W_{j_{2},*},X_{*,i_{0}} \rangle\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle\] \[= -\left(\langle-f(X)_{i_{0}}\cdot f(X)_{i_{0},i_{2}}\cdot\langle W _{j_{2},*},X_{*,i_{0}}\rangle\right.\] \[+f(X)_{i_{0}}\circ\left(e_{i_{1}}\cdot\langle W_{j_{2},*},X_{*,i_ {0}}\rangle\right),h(X)_{j_{0}})+f(X)_{i_{0},i_{2}}\cdot v_{j_{2},j_{0}})\cdot f (X)_{i_{0},i_{1}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle\] \[+(-\langle f(X)_{i_{0}},h(X)_{j_{0}}\rangle)\cdot(-f(X)_{i_{0},i_ {2}}f(X)_{i_{0},i_{1}}+f(X)_{i_{0},i_{1}})\cdot\langle W_{j_{2},*},X_{*,i_{0}} \rangle\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle\] where the first step follows from Lemma B.16, the second step follows from simple differential rule, the third step follows from Lemma D.4, last step follows from Lemma D.3. 
**Proof of Part 2** \[\frac{\mathrm{d}C_{6}(X)}{\mathrm{d}x_{i_{2},j_{2}}}\] \[= \frac{\mathrm{d}}{\mathrm{d}x_{i_{2},j_{2}}}(-\langle f(X)_{i_{0 }},h(X)_{j_{0}}\rangle\cdot f(X)_{i_{0},i_{1}}\cdot\langle W_{j_{1},*},X_{*,i_ {0}}\rangle)\] \[= \frac{\mathrm{d}}{\mathrm{d}x_{i_{2},j_{2}}}(-\langle f(X)_{i_{0 }},h(X)_{j_{0}}\rangle)\cdot f(X)_{i_{0},i_{1}}\cdot\langle W_{j_{1},*},X_{*,i_ {0}}\rangle\] \[+(-\langle f(X)_{i_{0}},h(X)_{j_{0}}\rangle)\cdot\frac{\mathrm{d }}{\mathrm{d}x_{i_{2},j_{2}}}(f(X)_{i_{0},i_{1}}\cdot\langle W_{j_{1},*},X_{*,i_ {0}}\rangle)\] \[= \frac{\mathrm{d}}{\mathrm{d}x_{i_{2},j_{2}}}(-\langle f(X)_{i_{0}},h( X)_{j_{0}}\rangle)\cdot f(X)_{i_{0},i_{1}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle\] \[+\langle f(X)_{i_{0}},h(X)_{j_{0}}\rangle)\cdot f(X)_{i_{0},i_{2}} \cdot f(X)_{i_{0},i_{1}}\cdot\langle W_{j_{2},*},X_{*,i_{0}}\rangle\cdot \langle W_{j_{1},*},X_{*,i_{0}}\rangle\] \[= -\langle(-f(X)_{i_{0}}\cdot f(X)_{i_{0},i_{2}}\cdot\langle W_{j_{ 2},*},X_{*,i_{0}}\rangle\] \[+f(X)_{i_{0}}\circ(e_{i_{2}}\cdot\langle W_{j_{2},*},X_{*,i_{0}} \rangle),h(X)_{j_{0}}\rangle+f(X)_{i_{0},i_{2}}\cdot v_{j_{2},j_{0}}\rangle \cdot f(X)_{i_{0},i_{1}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle\] \[+\langle f(X)_{i_{0}},h(X)_{j_{0}}\rangle\cdot f(X)_{i_{0},i_{2}} \cdot f(X)_{i_{0},i_{1}}\cdot\langle W_{j_{2},*},X_{*,i_{0}}\rangle\cdot \langle W_{j_{1},*},X_{*,i_{0}}\rangle\] where the first step follows from Lemma B.16, the second step follows from simple differential rule, the third step follows from Lemma D.4, last step follows from Lemma D.3. ### Derivative of \(C_{7}(x)\) **Lemma D.7**.: _If the following holds:_ * _Let_ \(C_{7}(X)\in\mathbb{R}\) _be defined as in Lemma_ B.16__ _We have_ * **Part 1.** _For_ \(i_{0}\neq i_{2},i_{1}=i_{2}\in[n]\)_,_ \(j_{1},j_{2}\in[d]\)__ \[= (-f(X)_{i_{0},i_{2}}+1)\cdot f(X)_{i_{0},i_{1}}\cdot\langle W_{j_ {2},*},X_{*,i_{0}}\rangle\cdot h(X)_{j_{0},i_{1}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle\] \[+v_{j_{2},j_{0}}\cdot f(X)_{i_{0},i_{1}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle\] * **Part 2.** _For_ \(i_{0}\neq i_{2},i_{1}\neq i_{2}\in[n]\)_,_ \(j_{1},j_{2}\in[d]\)__ \[\frac{\mathrm{d}C_{7}(X)}{\mathrm{d}x_{i_{2},j_{2}}}\] \[= -f(X)_{i_{0},i_{2}}\cdot f(X)_{i_{0},i_{1}}\cdot\langle W_{j_{2},*},X_{*,i_{0}}\rangle\cdot h(X)_{j_{0},i_{1}}\cdot\langle W_{j_{1},*},X_{*,i_ {0}}\rangle\] Proof.: **Proof of Part 1.** \[\frac{\mathrm{d}C_{7}(X)}{\mathrm{d}x_{i_{2},j_{2}}}\] \[= \frac{\mathrm{d}}{\mathrm{d}x_{i_{2},j_{2}}}(f(X)_{i_{0},i_{1}} \cdot h(X)_{j_{0},i_{1}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle)\] \[= \frac{\mathrm{d}}{\mathrm{d}x_{i_{2},j_{2}}}(f(X)_{i_{0},i_{1}} \cdot h(X)_{j_{0},i_{1}})\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle+f(X)_{i_ {0},i_{1}}\cdot h(X)_{j_{0},i_{1}}\cdot\frac{\mathrm{d}}{\mathrm{d}x_{i_{2},j _{2}}}(\langle W_{j_{1},*},X_{*,i_{0}}\rangle)\] \[= (-f(X)_{i_{0},i_{2}}+1)\cdot f(X)_{i_{0},i_{1}}\cdot\langle W_{j_ {2},*},X_{*,i_{0}}\rangle\cdot h(X)_{j_{0},i_{1}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle\] \[+v_{j_{2},j_{0}}\cdot f(X)_{i_{0},i_{1}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle\] \[+f(X)_{i_{0},i_{1}}\cdot h(X)_{j_{0},i_{1}}\cdot\frac{\mathrm{d}}{ \mathrm{d}x_{i_{2},j_{2}}}(\langle W_{j_{1},*},X_{*,i_{0}}\rangle)\] \[= (-f(X)_{i_{0},i_{2}}+1)\cdot f(X)_{i_{0},i_{1}}\cdot\langle W_{j_ {2},*},X_{*,i_{0}}\rangle\cdot h(X)_{j_{0},i_{1}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle\] \[+v_{j_{2},j_{0}}\cdot f(X)_{i_{0},i_{1}}\cdot\langle W_{j_{1},*},X _{*,i_{0}}\rangle\] where the first step follows from Lemma 
B.16, the second step follows from differential rule, the third step follows from Part 1 of Lemma D.3, the fourth step follows from \(i_{0}\neq i_{2}\). **Proof of Part 2.** \[\frac{\mathrm{d}C_{7}(X)}{\mathrm{d}x_{i_{2},j_{2}}}\] \[= \frac{\mathrm{d}}{\mathrm{d}x_{i_{2},j_{2}}}(f(X)_{i_{0},i_{1}} \cdot h(X)_{j_{0},i_{1}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle)\] \[= \frac{\mathrm{d}}{\mathrm{d}x_{i_{2},j_{2}}}(f(X)_{i_{0},i_{1}} \cdot h(X)_{j_{0},i_{1}})\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle+f(X)_{i_{ 0},i_{1}}\cdot h(X)_{j_{0},i_{1}}\cdot\frac{\mathrm{d}}{\mathrm{d}x_{i_{2},j_{ 2}}}(\langle W_{j_{1},*},X_{*,i_{0}}\rangle)\] \[= -f(X)_{i_{0},i_{2}}f(X)_{i_{0},i_{1}}\cdot\langle W_{j_{2},*},X_{ *,i_{0}}\rangle\cdot h(X)_{j_{0},i_{1}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle\] \[+f(X)_{i_{0},i_{1}}\cdot h(X)_{j_{0},i_{1}}\cdot\frac{\mathrm{d} }{\mathrm{d}x_{i_{2},j_{2}}}(\langle W_{j_{1},*},X_{*,i_{0}}\rangle)\] \[= -f(X)_{i_{0},i_{2}}f(X)_{i_{0},i_{1}}\cdot\langle W_{j_{2},*},X_{ *,i_{0}}\rangle\cdot h(X)_{j_{0},i_{1}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle\] \[+f(X)_{i_{0},i_{1}}\cdot h(X)_{j_{0},i_{1}}\cdot\mathbf{0}_{d}\] \[= -f(X)_{i_{0},i_{2}}\cdot f(X)_{i_{0},i_{1}}\cdot\langle W_{j_{2}, *},X_{*,i_{0}}\rangle\cdot h(X)_{j_{0},i_{1}}\cdot\langle W_{j_{1},*},X_{*,i_{ 0}}\rangle\] where the first step follows from Lemma B.16, the second step follows from differential rule, the third step follows from Part 2 of Lemma D.3, the fourth step follows from \(i_{0}\neq i_{2}\), the last step follows from simple algebra. ### Derivative of \(C_{8}(x)\) **Lemma D.8**.: _If the following holds:_ * _Let_ \(C_{8}(X)\in\mathbb{R}\) _be defined as in Lemma_ B.16__ * _For_ \(i_{0}\neq i_{2}\in[n]\)_,_ \(j_{1},j_{2}\in[d]\)__ _We have_ * **Part 1.** _For_ \(i_{0}\neq i_{2},i_{1}=i_{2}\in[n]\)_,_ \(j_{1},j_{2}\in[d]\)__ \[\frac{\mathrm{d}C_{8}(X)}{\mathrm{d}x_{i_{2},j_{2}}}\] \[= -f(X)_{i_{0},i_{2}}f(X)_{i_{0},i_{1}}+f(X)_{i_{0},i_{1}})\cdot \langle W_{j_{2},*},X_{*,i_{0}}\rangle\cdot v_{j_{1},j_{0}}\] * **Part 2.** _For_ \(i_{0}\neq i_{2},i_{1}\neq i_{2}\in[n]\)_,_ \(j_{1},j_{2}\in[d]\)__ \[\frac{\mathrm{d}C_{8}(X)}{\mathrm{d}x_{i_{2},j_{2}}}\] \[= -f(X)_{i_{0},i_{2}}\cdot f(X)_{i_{0},i_{1}}\cdot\langle W_{j_{2}, *},X_{*,i_{0}}\rangle\cdot v_{j_{1},j_{0}}\] Proof.: **Proof of Part 1** \[\frac{\mathrm{d}C_{8}(X)}{\mathrm{d}x_{i_{2},j_{2}}} =\frac{\mathrm{d}}{\mathrm{d}x_{i_{2},j_{2}}}f(X)_{i_{0},i_{1}} \cdot v_{j_{1},j_{0}}\] \[=(-f(X)_{i_{0},i_{2}}f(X)_{i_{0},i_{1}}+f(X)_{i_{0},i_{1}})\cdot \langle W_{j_{2},*},X_{*,i_{0}}\rangle\cdot v_{j_{1},j_{0}}\] where the first step follows from Lemma B.16, the second step follows from differential rule and Lemma D.1. **Proof of Part 2** \[\frac{\mathrm{d}C_{8}(X)}{\mathrm{d}x_{i_{2},j_{2}}} = \frac{\mathrm{d}}{\mathrm{d}x_{i_{2},j_{2}}}f(X)_{i_{0},i_{1}}\cdot v _{j_{1},j_{0}}\] \[= -f(X)_{i_{0},i_{2}}\cdot f(X)_{i_{0},i_{1}}\cdot\langle W_{j_{2}, *},X_{*,i_{0}}\rangle\cdot v_{j_{1},j_{0}}\] where the first step follows from Lemma B.16, the second step follows from differential rule and Lemma D.1. 
### Derivative of \(\frac{\mathrm{d}c(X)_{i_{0},j_{1}}}{\mathrm{d}x_{i_{1},j_{1}}}\) **Lemma D.9**.: _If the following holds:_ * _Let_ \(c(X)_{i_{0},j_{1}}\in\mathbb{R}\) _be defined as in Lemma_ B.16 _and Definition_ B.8__ _We have_ * **Part 1** _For_ \(i_{0}\neq i_{2},i_{1}=i_{2}\in[n]\)_,_ \(j_{1},j_{2}\in[d]\)__ \[\frac{\mathrm{d}c(X)}{\mathrm{d}x_{i_{1},j_{1}},\mathrm{d}x_{i_{2},j_{2}}}= \sum_{i=1}^{6}F_{i}(X)\] _where we have following definitions_ \[F_{1}(X)= \ 2s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{1}}^{2}\cdot w(X)_{i_{0 },j_{2}}\cdot w(X)_{i_{0},j_{1}}\] \[F_{2}(X)= \ -f(X)_{i_{0},i_{1}}^{2}\cdot h(X)_{j_{0},i_{1}}\cdot w(X)_{i_{0 },j_{2}}\cdot w(X)_{i_{0},j_{1}}\] \[F_{3}(X)= \ -f(X)_{i_{0},i_{1}}^{2}\cdot v_{j_{2},j_{0}}\cdot w(X)_{i_{0},j_ {1}}-f(X)_{i_{0},i_{1}}^{2}\cdot v_{j_{1},j_{0}}\cdot w(X)_{i_{0},j_{2}}\] \[F_{4}(X)= \ -s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{1}}\cdot w(X)_{i_{0},j_ {1}}\cdot w(X)_{i_{0},j_{2}}\] \[F_{5}(X)= \ f(X)_{i_{0},i_{1}}\cdot w(X)_{i_{0},j_{1}}\cdot w(X)_{i_{0},j_ {2}}\cdot h(X)_{j_{0},i_{1}}\] \[F_{6}(X)= \ v_{j_{2},j_{0}}\cdot f(X)_{i_{0},i_{1}}\cdot w(X)_{i_{0},j_{1}} +v_{j_{1},j_{0}}\cdot f(X)_{i_{0},i_{1}}\cdot w(X)_{i_{0},j_{2}}\] * **Part 2** _For_ \(i_{0}\neq i_{2},i_{1}\neq i_{2}\in[n]\)_,_ \(j_{1},j_{2}\in[d]\)__ \[\frac{\mathrm{d}c(X)}{\mathrm{d}x_{i_{1},j_{1}},\mathrm{d}x_{i_{2},j_{2}}}= \sum_{i=1}^{3}G_{i}(X)\] _where we have following definitions_ \[G_{1}(X)= \ 2s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{1}}\cdot f(X)_{i_{0},i_ {2}}\cdot w(X)_{i_{0},j_{2}}\cdot w(X)_{i_{0},j_{1}}\] \[G_{2}(X)= \ -f(X)_{i_{0},i_{1}}\cdot f(X)_{i_{0},i_{2}}\cdot w(X)_{i_{0},j_ {2}}\cdot w(X)_{i_{0},j_{1}}\cdot(h(X)_{j_{0},i_{2}}+h(X)_{j_{0},i_{1}})\] \[G_{3}(X)= \ -f(X)_{i_{0},i_{1}}\cdot f(X)_{i_{0},i_{2}}\cdot(v_{j_{2},j_{0}} \cdot w(X)_{i_{0},j_{1}}+v_{j_{1},j_{0}}\cdot w(X)_{i_{0},j_{2}})\] Proof.: **Proof of Part 1.** \[\frac{\mathrm{d}c(X)_{i_{0},j_{0}}}{\mathrm{d}x_{i_{1},j_{1}},\mathrm{d}x_{i_{2 },j_{2}}}\] \[= \frac{\mathrm{d}C_{6}}{\mathrm{d}x_{i_{2},j_{2}}}+\frac{\mathrm{d}C_{7 }}{\mathrm{d}x_{i_{2},j_{2}}}+\frac{\mathrm{d}C_{8}}{\mathrm{d}x_{i_{2},j_{2}}}\] \[= -\left((-f(X)_{i_{0}}\cdot f(X)_{i_{0},i_{1}}\cdot\langle W_{j_{2 },*},X_{*,i_{0}}\rangle+f(X)_{i_{0}}\circ(e_{i_{1}}\cdot\langle W_{j_{2},*},X_ {*,i_{0}}\rangle),h(X)_{j_{0}}\right)\] \[+f(X)_{i_{0},i_{1}}\cdot v_{j_{2},j_{0}}\right)\cdot f(X)_{i_{0}, i_{1}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle\] \[+(-\langle f(X)_{i_{0},h}(X)_{j_{0}}\rangle)\cdot(-f(X)_{i_{0},i_ {1}}^{2}+f(X)_{i_{0},i_{1}})\cdot\langle W_{j_{2},*},X_{*,i_{0}}\rangle\cdot \langle W_{j_{1},*},X_{*,i_{0}}\rangle\] \[(-f(X)_{i_{0},i_{2}}+1)\cdot f(X)_{i_{0},i_{1}}\cdot\langle W_{j_ {2},*},X_{*,i_{0}}\rangle\cdot h(X)_{j_{0},i_{1}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle\] \[+v_{j_{2},j_{0}}\cdot f(X)_{i_{0},i_{1}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle\] \[+(-f(X)_{i_{0},i_{1}}^{2}+f(X)_{i_{0},i_{1}})\cdot\langle W_{j_{ 2},*},X_{*,i_{0}}\rangle\cdot v_{j_{1},j_{0}}\] \[= 2s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{1}}\cdot w(X)_{i_{0},j_{ 2}}\cdot w(X)_{i_{0},j_{1}}\] \[-2f(X)_{i_{0},i_{1}}^{2}\cdot h(X)_{j_{0},i_{1}}\cdot w(X)_{i_{0},j_{2}}\cdot w(X)_{i_{0},j_{1}}\] \[-f(X)_{i_{0},i_{1}}^{2}\cdot v_{j_{2},j_{0}}\cdot w(X)_{i_{0},j_{ 1}}-f(X)_{i_{0},i_{1}}^{2}\cdot v_{j_{1},j_{0}}\cdot w(X)_{i_{0},j_{2}}\] \[-s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{1}}\cdot w(X)_{i_{0},j_{ 1}}\cdot w(X)_{i_{0},j_{2}}\] \[+f(X)_{i_{0},i_{1}}\cdot w(X)_{i_{0},j_{1}}\cdot w(X)_{i_{0},j_{ 2}}\cdot h(X)_{j_{0},i_{1}}\] 
\[+v_{j_{2},j_{0}}\cdot f(X)_{i_{0},i_{1}}\cdot w(X)_{i_{0},j_{1}}+ v_{j_{1},j_{0}}\cdot f(X)_{i_{0},i_{1}}\cdot w(X)_{i_{0},j_{2}}\] where the first step follows from Lemma B.16, the second step follows from previous results in this section, the last step is a rearrangement. **Proof of Part 2.** \[\frac{\mathrm{d}c(X)_{i_{0},j_{0}}}{\mathrm{d}x_{i_{1},j_{1}}, \mathrm{d}x_{i_{2},j_{2}}}\] \[= \frac{\mathrm{d}C_{6}}{\mathrm{d}x_{i_{2},j_{2}}}+\frac{\mathrm{d }C_{7}}{\mathrm{d}x_{i_{2},j_{2}}}+\frac{\mathrm{d}C_{8}}{\mathrm{d}x_{i_{2},j _{2}}}\] \[= -\left(\langle-f(X)_{i_{0}}\cdot f(X)_{i_{0},i_{2}}\cdot\langle W _{j_{2},*},X_{*,i_{0}}\rangle\right.\] \[+f(X)_{i_{0}}\circ(e_{i_{2}}\cdot\langle W_{j_{2},*},X_{*,i_{0}} \rangle),h(X)_{j_{0}}\rangle+f(X)_{i_{0},i_{2}}\cdot v_{j_{2},j_{0}})\cdot f(X )_{i_{0},i_{1}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle\] \[+\langle f(X)_{i_{0}},h(X)_{j_{0}}\rangle\cdot f(X)_{i_{0},i_{2}} \cdot f(X)_{i_{0},i_{1}}\cdot\langle W_{j_{2},*},X_{*,i_{0}}\rangle\cdot \langle W_{j_{1},*},X_{*,i_{0}}\rangle\] \[-f(X)_{i_{0},i_{2}}\cdot f(X)_{i_{0},i_{1}}\cdot\langle W_{j_{2},* },X_{*,i_{0}}\rangle\cdot h(X)_{j_{0},i_{1}}\cdot\langle W_{j_{1},*},X_{*,i_{0}}\rangle\] \[-f(X)_{i_{0},i_{2}}\cdot f(X)_{i_{0},i_{1}}\cdot\langle W_{j_{2},* },X_{*,i_{0}}\rangle\cdot v_{j_{1},j_{0}}\] \[= 2s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{1}}\cdot f(X)_{i_{0},i_{ 2}}\cdot w(X)_{i_{0},j_{2}}\cdot w(X)_{i_{0},j_{1}}\] \[-f(X)_{i_{0},i_{1}}\cdot f(X)_{i_{0},i_{2}}\cdot h(X)_{j_{0},i_{2} }\cdot w(X)_{i_{0},j_{2}}\cdot w(X)_{i_{0},j_{1}}\] \[-f(X)_{i_{0},i_{1}}\cdot f(X)_{i_{0},i_{2}}\cdot w(X)_{i_{0},j_{1} }\cdot w(X)_{i_{0},j_{2}}\cdot h(X)_{j_{0},i_{1}}\] \[-f(X)_{i_{0},i_{1}}\cdot f(X)_{i_{0},i_{2}}\cdot v_{j_{2},j_{0}} \cdot w(X)_{i_{0},j_{1}}-f(X)_{i_{0},i_{1}}\cdot f(X)_{i_{0},i_{2}}\cdot v_{j_{1},j_ {0}}\cdot w(X)_{i_{0},j_{2}}\] where the first step follows from Lemma B.16, the second step follows from Lemma D.6, the third step follows from Part 2 of Lemma D.7, the last step follows from Lemma D.8. Notice that, by our construction, **Part 1** should be symmetric w.r.t. \(j_{1},j_{2}\), **Part 2** should be symmetric w.r.t. \(i_{1},i_{2}\), which are all satisfied. ## Appendix E Hessian Reformulation In this section, we provide a reformulation of Hessian formula, which simplifies our calculation and analysis. In Section E.1 we show the way we split the Hessian. In Section E.2 we show the decomposition when \(i_{0}=i_{1}=i_{2}\). ### Hessian split **Definition E.1** (Hessian of functions of matrix).: _We define the Hessian of \(c(X)_{i_{0},j_{0}}\) by considering its Hessian with respect to \(x=\operatorname{vec}(X)\). 
This means that, \(\nabla^{2}c(X)_{i_{0},j_{0}}\) is a \(nd\times nd\) matrix with its \((i_{1}\cdot j_{1},i_{2}\cdot j_{2})\)-th entry being_ \[\frac{\operatorname{d}\!c(X)_{i_{0},j_{0}}}{\operatorname{d}\!x_{i_{1},j_{2}} x_{i_{2},j_{2}}}\] **Definition E.2** (Hessian split).: _We split the hessian of \(c(X)_{i_{0},j_{0}}\) into following cases_ * _Part 1:_ \(i_{0}=i_{1}=i_{2}\) _:_ \(H_{1}^{(i_{1},i_{2})}\)__ * _Part 2:_ \(i_{0}=i_{1}\)_,_ \(i_{0}\neq i_{2}\) _:_ \(H_{2}^{(i_{1},i_{2})}\)__ * _Part 3:_ \(i_{0}\neq i_{1}\)_,_ \(i_{0}=i_{2}\) _:_ \(H_{3}^{(i_{1},i_{2})}\)__ * _Part 4:_ \(i_{0}\neq i_{1}\)_,_ \(i_{0}\neq i_{2}\)_,_ \(i_{1}=i_{2}\)_:_ \(H_{4}^{(i_{1},i_{2})}\)__ * _Part 5:_ \(i_{0}\neq i_{1}\)_,_ \(i_{0}\neq i_{2}\)_,_ \(i_{1}\neq i_{2}\)_:_ \(H_{5}^{(i_{1},i_{2})}\)__ _In above, \(H_{i}^{(i_{1},i_{2})}\) is a \(d\times d\) matrix with its \(j_{1},j_{2}\)-th entry being_ \[\frac{\operatorname{d}\!c(X)_{i_{0},j_{0}}}{\operatorname{d}\!x_{i_{1},j_{2}} x_{i_{2},j_{2}}}\] Utilizing above definitions, we split the Hessian to a \(n\times n\) partition with its \(i_{1},i_{2}\)-th component being \(H_{i}(i_{1},i_{2})\) based on above definition. **Definition E.3**.: _We define \(\nabla^{2}c(X)_{i_{0},j_{0}}\) to be as following_ \[\left[\begin{array}{cccccccc}H_{4}^{(1,1)}&H_{5}^{(1,2)}&H_{5}^{(1,3)}& \cdots&H_{5}^{(1,i_{0}-1)}&H_{3}^{(1,i_{0})}&H_{5}^{(1,i_{0}+1)}&\cdots&H_{5}^{ (1,n)}\\ H_{5}^{(2,1)}&H_{4}^{(2,2)}&H_{5}^{(2,3)}&\cdots&H_{5}^{(2,i_{0}-1)}&H_{3}^{(2, i_{0})}&H_{5}^{(2,i_{0}+1)}&\cdots&H_{5}^{(2,n)}\\ H_{5}^{(3,1)}&H_{5}^{(3,2)}&H_{4}^{(3,3)}&\cdots&H_{5}^{(3,i_{0}-1)}&H_{3}^{(3, i_{0})}&H_{5}^{(3,i_{0}+1)}&\cdots&H_{5}^{(3,n)}\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\vdots&\ddots&\vdots\\ H_{2}^{(i_{0},1)}&H_{2}^{(i_{0},2)}&H_{2}^{(i_{0},3)}&\cdots&H_{2}^{(i_{0},i_{0 }-1)}&H_{1}^{(i_{0},i_{0})}&H_{2}^{(i_{0},i_{0}+1)}&\cdots&H_{2}^{(i_{0},n)}\\ H_{5}^{(i_{0}+1,1)}&H_{5}^{(i_{0}+1,2)}&H_{5}^{(i_{0}+1,3)}&\cdots&H_{5}^{(i_{0 }+1,i_{0}-1)}&H_{3}^{(i_{0}+1,i_{0})}&H_{4}^{(i_{0}+1,i_{0}+1)}&\cdots&H_{5}^{( i_{0}+1,n)}\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\vdots&\ddots&\vdots\\ H_{5}^{(n,1)}&H_{5}^{(n,2)}&H_{5}^{(n,3)}&\cdots&H_{5}^{(n,i_{0}-1)}&H_{3}^{(n,i_{0 })}&H_{5}^{(n,i_{0}+1)}&\cdots&H_{4}^{(n,n)}\end{array}\right]\] ### Decomposition Hessian : Part 1 **Lemma E.4** (Helpful lemma).: _Under following conditions_ * _Let_ \(z(X)_{i_{0}}:=W^{\top}X\cdot f(X)_{i_{0}}\)__ * _Let_ \(w(X)_{i_{0},*}:=WX_{*,i_{0}}\)__ _we have_ * _Part 1:_ \(w(X)_{i_{0},j_{1}}=e_{j_{1}}^{\top}\cdot w(X)_{i_{0},*}\)__ * _Part 2:_ \(z(X)_{i_{0},j_{1}}=e_{j_{1}}^{\top}\cdot z(X)_{i_{0}}\)__ Proof.: **Proof of Part 1** \[w(X)_{i_{0},j_{1}} =\langle W_{j_{1},*},X_{*,i_{0}}\rangle\] \[=W_{j_{1},*}^{\top}X_{*,i_{0}}\] \[=e_{j_{1}}^{\top}\cdot WX_{*,i_{0}}\] \[=e_{j_{1}}^{\top}\cdot w(X)_{i_{0},*}\] where the first step is by the definition of \(w(X)_{i_{0},j_{1}}\) the 2nd and 3rd step are from linear algebra facts, the 4th step is by the definition of \(w(X)_{i_{0},*}\). **Proof of Part 2** \[z(X)_{i_{0},j_{1}} =\langle f(X)i_{0},X^{\top}W_{*,j_{1}}\rangle\] \[=(X^{\top}W_{*,j_{1}})^{\top}f(X)_{i_{0}}\] \[=W_{*,j_{1}}^{\top}X\cdot f(X)_{i_{0}}\] \[=e_{j_{1}}^{\top}\cdot W^{\top}X\cdot f(X)_{i_{0}}\] \[=e_{j_{1}}^{\top}\cdot z(X)_{i_{0}}\] where the first step is by the definition of \(w(X)_{i_{0},j_{1}}\) the 2nd, 3rd, and the 4th step are from linear algebra facts, the 5th step is by the definition of \(w(X)_{i_{0},*}\). 
**Lemma E.5**.: _Under following conditions_ * _Let_ \(D_{i}(X)\) _be defined as Lemma_ C.15__ * _Let_ \(z(X)_{i_{0}}:=W^{\top}X\cdot f(X)_{i_{0}}\)__ * _Let_ \(w(X)_{i_{0},*}:=WX_{*,i_{0}}\)__ _we have_ \[D_{1}(X) =e_{j_{1}}^{\top}\cdot w(X)_{i_{0},*}\cdot 2s(X)_{i_{0},j_{0}} \cdot f(X)_{i_{0},i_{0}}^{2}\cdot w(X)_{i_{0},*}^{\top}\cdot e_{j_{2}}\] \[D_{2}(X) =e_{j_{1}}^{\top}\cdot(w(X)_{i_{0},*}\cdot 2f(X)_{i_{0},i_{0}} \cdot s(X)_{i_{0},j_{0}}\cdot z(X)_{i_{0}}^{\top}\] \[\quad+z(X)_{i_{0}}\cdot 2f(X)_{i_{0},i_{0}}\cdot s(X)_{i_{0},j_{0} }\cdot w(X)_{i_{0},*}^{\top})\cdot e_{j_{2}}\] \[D_{3}(X) =-\,e_{j_{1}}^{\top}\cdot w(X)_{i_{0},*}\cdot f(X)_{i_{0},i_{0} }^{2}\cdot h(X)_{j_{0},i_{0}}\cdot w(X)_{i_{0},*}^{\top}\cdot e_{j_{2}}\] \[D_{4}(X) =-\,e_{j_{1}}^{\top}\cdot W^{\top}\cdot f(X)_{i_{0},i_{0}}\cdot X \cdot\operatorname{diag}(f(X)_{i_{0}})\cdot h(X)_{j_{0}}\cdot w(X)_{i_{0},*}^ {\top}\cdot e_{j_{2}}\] \[\quad-\,e_{j_{1}}^{\top}\cdot w(X)_{i_{0},*}\cdot f(X)_{i_{0},i_ {0}}\cdot h(X)_{j_{0}}^{\top}\cdot\operatorname{diag}(f(X)_{i_{0}})\cdot X ^{\top}\cdot W\cdot e_{j_{2}}\] \[D_{5}(X) =-\,e_{j_{1}}^{\top}\cdot(w(X)_{i_{0},*}\cdot f(X)_{i_{0},i_{0} }^{2}\cdot V_{*,j_{0}}^{\top}+V_{*,j_{0}}\cdot f(X)_{i_{0},i_{0}}^{2}\cdot w(X )_{i_{0},*}^{\top})\cdot e_{j_{2}}\] \[D_{6}(X) =-\,e_{j_{1}}^{\top}\cdot w(X)_{i_{0},*}\cdot s(X)_{i_{0},j_{0}} \cdot f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},*}^{\top}\cdot e_{j_{2}}\] \[D_{7}(X) =-\,e_{j_{1}}^{\top}\cdot w(X)_{i_{0},*}\cdot s(X)_{i_{0},j_{0} }\cdot f(X)_{i_{0},i_{0}}\cdot X_{*,i_{0}}^{\top}\cdot W\cdot e_{j_{2}}\] \[\quad-\,e_{j_{1}}^{\top}\cdot W^{\top}\cdot X_{*,i_{0}}\cdot s(X )_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},*}^{\top}\cdot e_{j_{2}}\] \[D_{8}(X) =e_{j_{1}}^{\top}\cdot s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{0} }\cdot(W^{\top}-W)\cdot e_{j_{2}}\] \[D_{9}(X) =e_{j_{1}}^{\top}\cdot z(X)_{i_{0}}\cdot s(X)_{i_{0},j_{0}}\cdot z (X)_{i_{0}}^{\top}\cdot e_{j_{2}}\] \[D_{10}(X) =-\,e_{j_{1}}^{\top}\cdot(z(X)_{i_{0}}\cdot f(X)_{i_{0},i_{0}} \cdot h(X)_{j_{0},i_{0}}\cdot w(X)_{i_{0},*}^{\top}\] \[+w(X)_{i_{0},*}\cdot f(X)_{i_{0},i_{0}}\cdot h(X)_{j_{0},i_{0}}\cdot z (X)_{i_{0}}^{\top})\cdot e_{j_{2}}\] \[D_{11}(X)= -e_{j_{1}}^{\top}\cdot(z(X)_{i_{0}}\cdot h(X)_{j_{0}}^{\top}\cdot \operatorname{diag}(f(X)_{i_{0}})\cdot X^{\top}\cdot W\] \[+W^{\top}\cdot X\cdot\operatorname{diag}(f(X)_{i_{0}})\cdot h(X) _{j_{0}}\cdot z(X)_{i_{0}}^{\top})\cdot e_{j_{2}}\] \[D_{12}(X)= -e_{j_{1}}^{\top}\cdot(z(X)_{i_{0}}\cdot f(X)_{i_{0},i_{0}}\cdot V _{*,j_{0}}^{\top}+V_{*,j_{0}}\cdot f(X)_{i_{0},i_{0}}\cdot z(X)_{i_{0}}^{\top}) \cdot e_{j_{2}}\] \[D_{13}(X)= \ e_{j_{1}}^{\top}\cdot z(X)_{i_{0}}\cdot s(X)_{i_{0},j_{0}}\cdot f (X)_{i_{0},i_{0}}\cdot z(X)_{i_{0}}^{\top}\cdot e_{j_{2}}\] \[D_{14}(X)= -e_{j_{1}}^{\top}\cdot W^{\top}\cdot X\cdot s(X)_{i_{0},j_{0}} \cdot\operatorname{diag}(f(X)_{i_{0}})\cdot X^{\top}\cdot W\cdot e_{j_{2}}\] \[D_{15}(X)= -e_{j_{1}}^{\top}\cdot w(X)_{i_{0},*}\cdot f(X)_{i_{0},i_{0}}^{2} \cdot h(X)_{j_{0},i_{0}}\cdot w(X)_{i_{0},*}^{\top}\cdot e_{j_{2}}\] \[D_{16}(X)= \ e_{j_{1}}^{\top}\cdot w(X)_{i_{0},*}\cdot f(X)_{i_{0},i_{0}} \cdot h(X)_{j_{0},i_{0}}\cdot w(X)_{i_{0},*}^{\top}\cdot e_{j_{2}}\] \[D_{17}(X)= \ e_{j_{1}}^{\top}\cdot(w(X)_{i_{0},*}\cdot f(X)_{i_{0},i_{0}} \cdot X_{*,i_{0}}^{\top}\cdot h(X)_{j_{0},i_{0}}\cdot W\] \[+W^{\top}\cdot X_{*,i_{0}}\cdot f(X)_{i_{0},i_{0}}\cdot h(X)_{j_{ 0},i_{0}}\cdot w(X)_{i_{0}})\cdot e_{j_{2}}\] \[D_{18}(X)= \ e_{j_{1}}^{\top}\cdot(w(X)_{i_{0},*}f(X)_{i_{0},i_{0}}\cdot V 
_{j_{2},*}^{\top}+V_{j_{1},*}^{\top}\cdot f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},*}^{\top})\cdot e_{j_{2}}\] \[D_{19}(X)= \ e_{j_{1}}^{\top}\cdot f(X)_{i_{0},i_{0}}\cdot h(X)_{i_{0},i_{0} }\cdot(W+W^{\top})\cdot e_{j_{2}}\] \[D_{20}(X):= \ e_{j_{1}}^{\top}\cdot W^{\top}\cdot X\cdot\operatorname{diag} (f(X)_{i_{0}})\cdot\operatorname{diag}(h(X)_{j_{0}})\cdot X^{\top}\cdot W \cdot e_{j_{2}}\] \[D_{21}(X):= \ e_{j_{1}}^{\top}\cdot(W^{\top}\cdot X_{*,i_{0}}\cdot f(X)_{i_{ 0},i_{0}}\cdot V_{*,j_{0}}^{\top}+V_{*,j_{0}}\cdot f(X)_{i_{0},i_{0}}\cdot X_{*,i_{0}}^{\top}\cdot W)\cdot e_{j_{2}}\] Proof.: This lemma is followed by Lemma E.4 and linear algebra facts. Based on above auxiliary lemma, we have following definition. **Definition E.6**.: _Under following conditions_ * _Let_ \(z(X)_{i_{0}}:=W^{\top}X\cdot f(X)_{i_{0}}\)__ * _Let_ \(w(X)_{i_{0},*}:=WX_{*,i_{0}}\)__ _We present the_ **Case 1** _component of Hessian_ \(c(X)_{i_{0},j_{0}}\) _to be_ \[H_{1}^{(i_{0},i_{0})}(X):=B(X)\] _where we have_ \[B(X):= \ \sum_{i=1}^{21}B_{i}(X)\] \[B_{1}(X):= \ w(X)_{i_{0},*}\cdot 2s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{0}}^{2} \cdot w(X)_{i_{0},*}^{\top}\] \[B_{2}(X):= \ w(X)_{i_{0},*}\cdot 2f(X)_{i_{0},i_{0}}\cdot s(X)_{i_{0},j_{0}} \cdot z(X)_{i_{0}}^{\top}\] \[+z(X)_{i_{0}}\cdot 2f(X)_{i_{0},i_{0}}\cdot s(X)_{i_{0},j_{0}} \cdot w(X)_{i_{0},*}^{\top}\] \[B_{3}(X):= \ -w(X)_{i_{0},*}\cdot f(X)_{i_{0},i_{0}}^{2}\cdot h(X)_{j_{0},i_{0 }}\cdot w(X)_{i_{0},*}^{\top}\] \[B_{4}(X):= \ -W^{\top}\cdot f(X)_{i_{0},i_{0}}\cdot X\cdot\operatorname{diag} (f(X)_{i_{0}})\cdot h(X)_{j_{0}}\cdot w(X)_{i_{0},*}^{\top}\] \[-w(X)_{i_{0},*}\cdot f(X)_{i_{0},i_{0}}\cdot h(X)_{j_{0}}^{\top} \cdot\operatorname{diag}(f(X)_{i_{0}})\cdot X^{\top}\cdot W\] \[B_{5}(X):= \ -w(X)_{i_{0},*}\cdot f(X)_{i_{0},i_{0}}^{2}\cdot V_{*,j_{0}}^{\top} -V_{*,j_{0}}\cdot f(X)_{i_{0},i_{0}}^{2}\cdot w(X)_{i_{0},*}^{\top}\] \[B_{6}(X):= \ -w(X)_{i_{0},*}\cdot s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{0}} \cdot w(X)_{i_{0},*}^{\top}\] \[B_{7}(X):= \ -w(X)_{i_{0},*}\cdot s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{0}} \cdot X_{*,i_{0}}^{\top}\cdot W\] \[\begin{split} B_{10}(X)&:=\,-\,\sum_{i=1}^{\infty}\sum_{j=1}^ {\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{ \infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{ \infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{ \infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{ \infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{ \infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{ \infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{ \infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{ \infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{ \infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{ \infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{ \infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{ \infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{ \infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{ \infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{ \infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{ \infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{ 
\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{ \infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{ \infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{ \infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{ \infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{ \infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{ \infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{ \infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{ \infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{ \infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{ \infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{ \infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{ \infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{ \infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{ \infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{ \infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{ \infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{ \infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{ \infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{ \infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{ \infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{ \infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{ \infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{ \infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{ \infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{ \infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{ \infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{ \infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{ \infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{ \infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{ \infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{ \infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{ \infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{ \infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{ \infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{ \infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty} \sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty} \sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty} \sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{ \infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{ \infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{ \infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty} \sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1}^{\infty}\sum_{j=1} \[E_{11}(X) =\ -\,e_{j_{1}}^{\top}\cdot W^{\top}\cdot X\cdot\operatorname{diag} (f(X)_{i_{0}})\cdot h(X)_{j_{0}}\cdot f(X)_{i_{0},i_{2}}\cdot w(X)_{i_{0},*}^{ \top}\cdot e_{j_{2}}\] \[E_{12}(X) =\ e_{j_{1}}^{\top}\cdot 
W^{\top}\cdot X_{*,i_{2}}\cdot f(X)_{i_{0 },i_{2}}\cdot h(X)_{j_{0},i_{2}}\cdot w(X)_{i_{0},*}^{\top}\cdot e_{j_{2}}\] \[E_{13}(X) =\ e_{j_{1}}^{\top}\cdot W^{\top}\cdot f(X)_{i_{0},i_{2}}\cdot h( X)_{j_{0},i_{2}}\cdot e_{j_{2}}\] \[E_{14}(X) =\ e_{j_{1}}^{\top}\cdot W^{\top}\cdot X_{*,i_{2}}\cdot f(X)_{i_{ 0},i_{2}}\cdot V_{*,j_{0}}^{\top}\cdot e_{j_{2}}\] \[E_{15}(X) =\ -\,e_{j_{1}}^{\top}\cdot V_{*,j_{0}}\cdot f(X)_{i_{0},i_{0}} \cdot f(X)_{i_{0},i_{2}}\cdot w(X)_{i_{0},*}^{\top}\cdot e_{j_{2}}\] Proof.: This lemma is followed by Lemma E.4 and linear algebra facts. Based on above auxiliary lemma, we have following definition. **Definition E.8**.: _Under following conditions_ * _Let_ \(z(X)_{i_{0}}:=W^{\top}X\cdot f(X)_{i_{0}}\)__ * _Let_ \(w(X)_{i_{0},*}:=WX_{*,i_{0}}\)__ _We present the_ **Case 2** _component of Hessian_ \(c(X)_{i_{0},j_{0}}\) _to be_ \[H_{2}^{(i_{0},i_{2})}(X):=J(X)\] _where we have_ \[J(X) :=\ \sum_{i=1}^{15}J_{i}(X)\] \[J_{1}(X) :=\ w(X)_{i_{0},*}\cdot 2s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{2} }\cdot f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},*}^{\top}\] \[J_{2}(X) :=-\,w(X)_{i_{0},*}\cdot 2f(X)_{i_{0},i_{2}}\cdot h(X)_{j_{0},i_{2} }\cdot f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},*}^{\top}\] \[J_{3}(X) :=\ -\,w(X)_{i_{0},*}\cdot f(X)_{i_{0},i_{2}}\cdot f(X)_{i_{0},i_ {0}}\cdot V_{*,j_{0}}^{\top}\] \[J_{4}(X) :=\,z(X)_{i_{0}}\cdot s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{2} }\cdot w(X)_{i_{0},*}^{\top}\] \[J_{5}(X) :=\ -\,z(X)_{i_{0}}\cdot f(X)_{i_{0},i_{2}}\cdot h(X)_{j_{0},i_ {2}}\cdot w(X)_{i_{0},*}^{\top}\] \[J_{6}(X) :=\ -\,z(X)_{i_{0}}\cdot f(X)_{i_{0},i_{2}}\cdot V_{*,j_{0}}^{\top}\] \[J_{7}(X) :=\,z(X)_{i_{0}}\cdot s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{0}} \cdot w(X)_{i_{0},*}^{\top}\] \[J_{8}(X) :=\ -\,w(X)_{i_{0},*}\cdot s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_ {0}}\cdot w(X)_{i_{0},*}^{\top}\] \[J_{9}(X) :=\ -\,W^{\top}\cdot s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{0}}\] \[J_{10}(X) :=\ -\,w(X)_{i_{0},*}\cdot f(X)_{i_{0},i_{0}}\cdot f(X)_{i_{0},i_ {2}}\cdot h(X)_{j_{0},i_{0}}\cdot w(X)_{i_{0},*}^{\top}\] \[J_{11}(X) :=\ -\,W^{\top}\cdot X\cdot\operatorname{diag}(f(X)_{i_{0}}) \cdot h(X)_{j_{0}}\cdot f(X)_{i_{0},i_{2}}\cdot w(X)_{i_{0},*}^{\top}\] \[J_{12}(X) :=\ W^{\top}\cdot X_{*,i_{2}}\cdot f(X)_{i_{0},i_{2}}\cdot h(X)_{ j_{0},i_{2}}\cdot w(X)_{i_{0},*}^{\top}\] \[J_{13}(X) :=\ W^{\top}f(X)_{i_{0},i_{2}}\cdot h(X)_{j_{0},i_{2}}\] \[J_{14}(X) :=\ W^{\top}\cdot X_{*,i_{2}}\cdot f(X)_{i_{0},i_{2}}\cdot V_{*,j _{0}}^{\top}\] \[J_{15}(X) :=\ -\,V_{*,j_{0}}\cdot f(X)_{i_{0},i_{0}}\cdot f(X)_{i_{0},i_ {2}}\cdot w(X)_{i_{0},*}^{\top}\] Next, we define the third case by the symmetricity of Hessian. 
**Definition E.9**.: _We present the_ **Case 3** _component of Hessian \(c(X)_{i_{0},j_{0}}\) to be_ \[H_{3}^{(i,i_{0})}(X):=H_{2}^{(i_{0},i)}(X)\] ### Decomposition Hessian : Part 4 **Lemma E.10**.: _Under following conditions_ * _Let_ \(F_{i}(X)\) _be defined as Lemma_ D.9__ * _Let_ \(z(X)_{i_{0}}:=W^{\top}X\cdot f(X)_{i_{0}}\)__ * _Let_ \(w(X)_{i_{0},*}:=WX_{*,i_{0}}\)__ _we have_ \[F_{1}(X)= \ e_{j_{1}}^{\top}\cdot w(X)_{i_{0},*}\cdot 2s(X)_{i_{0},j_{0}} \cdot f(X)_{i_{0},i_{1}}^{2}\cdot w(X)_{i_{0},*}^{\top}\cdot e_{j_{2}}\] \[F_{2}(X)= \ -e_{j_{1}}^{\top}\cdot w(X)_{i_{0},*}\cdot f(X)_{i_{0},i_{1}}^{2 }\cdot h(X)_{j_{0},i_{1}}\cdot w(X)_{i_{0},*}^{\top}\cdot e_{j_{2}}\] \[F_{3}(X)= \ -e_{j_{1}}^{\top}\cdot(w(X)_{i_{0},*}\cdot f(X)_{i_{0},i_{1}}^{2 }\cdot V_{*,j_{0}}^{\top}+V_{*,j_{0}}\cdot f(X)_{i_{0},i_{1}}^{2}\cdot w(X)_{i _{0},*}^{\top})\cdot e_{j_{2}}\] \[F_{4}(X)= \ -e_{j_{1}}^{\top}\cdot w(X)_{i_{0},*}\cdot s(X)_{i_{0},j_{0}} \cdot f(X)_{i_{0},i_{1}}\cdot w(X)_{i_{0},*}^{\top}\cdot e_{j_{2}}\] \[F_{5}(X)= \ e_{j_{1}}^{\top}\cdot w(X)_{i_{0},*}\cdot f(X)_{i_{0},i_{1}} \cdot h(X)_{j_{0},i_{1}}\cdot w(X)_{i_{0},*}^{\top}\cdot e_{j_{2}}\] \[F_{6}(X)= \ e_{j_{1}}^{\top}\cdot(w(X)_{i_{0},*}\cdot f(X)_{i_{0},i_{1}} \cdot V_{*,j_{0}}^{\top}+V_{*,j_{0}}\cdot f(X)_{i_{0},i_{1}}\cdot w(X)_{i_{0},*}^{\top})\cdot e_{j_{2}}\] Proof.: This lemma is followed by Lemma E.4 and linear algebra facts. Based on above auxiliary lemma, we have following definition. **Definition E.11**.: _Under following conditions_ * _Let_ \(z(X)_{i_{0}}:=W^{\top}X\cdot f(X)_{i_{0}}\)__ * _Let_ \(w(X)_{i_{0},*}:=WX_{*,i_{0}}\)__ _We present the_ **Case 4** _component of Hessian_ \(c(X)_{i_{0},j_{0}}\) _to be_ \[H_{4}^{(i_{1},i_{1})}(X):=K(X)\] _where we have_ \[K(X):= \ \sum_{i=1}^{6}K_{i}(X)\] \[K_{1}(X):= \ w(X)_{i_{0},*}\cdot 2s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{1}}^{2 }\cdot w(X)_{i_{0},*}^{\top}\] \[K_{2}(X):= \ -w(X)_{i_{0},*}\cdot f(X)_{i_{0},i_{1}}^{2}\cdot h(X)_{j_{0},i_{1 }}\cdot w(X)_{i_{0},*}^{\top}\] \[K_{3}(X):= \ -w(X)_{i_{0},*}\cdot f(X)_{i_{0},i_{1}}^{2}\cdot V_{*,j_{0}}^{ \top}-V_{*,j_{0}}\cdot f(X)_{i_{0},i_{1}}^{2}\cdot w(X)_{i_{0},*}^{\top}\] \[K_{4}(X):= \ -w(X)_{i_{0},*}\cdot s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{1 }}\cdot w(X)_{i_{0},*}^{\top}\] \[K_{5}(X):= \ w(X)_{i_{0},*}\cdot f(X)_{i_{0},i_{1}}\cdot h(X)_{j_{0},i_{1}} \cdot w(X)_{i_{0},*}^{\top}\] \[K_{6}(X):= \ w(X)_{i_{0},*}\cdot f(X)_{i_{0},i_{1}}\cdot V_{*,j_{0}}^{\top}+ V_{*,j_{0}}\cdot f(X)_{i_{0},i_{1}}\cdot w(X)_{i_{0},*}^{\top}\] ### Decomposition Hessian : Part 5 **Lemma E.12**.: _Under following conditions_ * _Let_ \(G_{i}(X)\) _be defined as Lemma_ D.9__ * _Let_ \(z(X)_{i_{0}}:=W^{\top}X\cdot f(X)_{i_{0}}\)__ * _Let_ \(w(X)_{i_{0},*}:=WX_{*,i_{0}}\)__ _we have_ \[G_{1}(X) =e_{j_{1}}^{\top}\cdot w(X)_{i_{0},*}\cdot 2s(X)_{i_{0},j_{0}} \cdot f(X)_{i_{0},i_{1}}\cdot f(X)_{i_{0},i_{2}}\cdot w(X)_{i_{0},*}^{\top} \cdot e_{j_{2}}\] \[G_{2}(X) =\ -e_{j_{1}}^{\top}\cdot w(X)_{i_{0},*}\cdot f(X)_{i_{0},i_{1}} \cdot f(X)_{i_{0},i_{2}}\cdot(h(X)_{j_{0},i_{2}}+h(X)_{j_{0},i_{1}})\cdot w(X)_ {i_{0},*}^{\top}\cdot e_{j_{2}}\] \[G_{3}(X) =\ -e_{j_{1}}^{\top}\cdot f(X)_{i_{0},i_{1}}\cdot f(X)_{i_{0},i_{2 }}\cdot(w(X)_{i_{0},*}\cdot V_{*,j_{0}}^{\top}+V_{*,j_{0}}\cdot w(X)_{*,j_{2}}) \cdot e_{j_{2}}\] Proof.: This lemma is followed by Lemma E.4 and linear algebra facts. Based on above auxiliary lemma, we have following definition. 
**Definition E.13**.: _Under following conditions_ * _Let_ \(z(X)_{i_{0}}:=W^{\top}X\cdot f(X)_{i_{0}}\)__ * _Let_ \(w(X)_{i_{0},*}:=WX_{*,i_{0}}\)__ _We present the_ **Case 5** _component of Hessian_ \(c(X)_{i_{0},j_{0}}\) _to be_ \[H_{5}^{(i_{1},i_{2})}(X):=N(X)\] _where we have_ \[N(X) :=\ \sum_{i=1}^{3}N_{i}(X)\] \[N_{1}(X) :=\ w(X)_{i_{0},*}\cdot 2s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{1}} \cdot f(X)_{i_{0},i_{2}}\cdot w(X)_{i_{0},*}^{\top}\] \[N_{2}(X) :=\ -w(X)_{i_{0},*}\cdot f(X)_{i_{0},i_{1}}\cdot f(X)_{i_{0},i_{2 }}\cdot(h(X)_{j_{0},i_{2}}+h(X)_{j_{0},i_{1}})\cdot w(X)_{i_{0},*}^{\top}\] \[N_{3}(X) :=\ -f(X)_{i_{0},i_{1}}\cdot f(X)_{i_{0},i_{2}}\cdot(w(X)_{i_{0},*} \cdot V_{*,j_{0}}^{\top}+V_{*,j_{0}}\cdot w(X)_{*,j_{2}}^{\top})\] ## Appendix F Hessian of loss function In this section, we provide the Hessian of our loss function. **Lemma F.1** (A single entry).: _Under following conditions_ * _Let_ \(L(X)\) _be defined as Definition_ B.9__ _we have_ \[\frac{\mathrm{d}L(X)}{\mathrm{d}x_{i_{1},j_{1}}x_{i_{2},j_{2}}}= \sum_{i_{0}=1}^{n}\sum_{j_{0}=1}^{d}\frac{\mathrm{d}c(X)_{i_{0},j_{0}}}{ \mathrm{d}x_{i_{1},j_{1}}}\cdot\frac{\mathrm{d}c(X)_{i_{0},j_{0}}}{\mathrm{d}x _{i_{1},j_{2}}}+c(X)_{i_{0},j_{0}}\cdot\frac{\mathrm{d}c(X)_{i_{0},j_{0}}}{ \mathrm{d}x_{i_{1},j_{1}}x_{i_{2},j_{2}}}\] Proof.: **Proof of Part 1:**\(i_{1}=i_{2}\) \[\frac{\mathrm{d}L(X)}{\mathrm{d}x_{i_{1},j_{1}}x_{i_{2},j_{2}}} =\frac{\mathrm{d}}{\mathrm{d}x_{i_{2},j_{2}}}(\sum_{i_{0}=1}^{n} \sum_{j_{0}=1}^{d}c(X)_{i_{0},j_{0}}\cdot\frac{\mathrm{d}c(X)_{i_{0},j_{0}}}{ \mathrm{d}x_{i_{1},j_{1}}})\] \[=\sum_{i_{0}=1}^{n}\sum_{j_{0}=1}^{d}\frac{\mathrm{d}c(X)_{i_{0},j _{0}}}{\mathrm{d}x_{i_{1},j_{1}}}\cdot\frac{\mathrm{d}c(X)_{i_{0},j_{0}}}{ \mathrm{d}x_{i_{2},j_{2}}}+c(X)_{i_{0},j_{0}}\cdot\frac{\mathrm{d}c(X)_{i_{0},j _{0}}}{\mathrm{d}x_{i_{1},j_{1}}x_{i_{2},j_{2}}}\] where the first step is given by chain rule, and the 2nd step are given by product rule. **Lemma F.2** (Matrix Representation of Hessian).: _Under following conditions_ * _Let_ \(c(X)_{i_{0},j_{0}}\) _be defined as Definition_ B.8__ * _Let_ \(L(X)\) _be defined as Definition_ B.9__ _we have_ \[\nabla^{2}L(X)=\sum_{i_{0}=1}^{n}\sum_{j_{0}=1}^{d}\nabla c(X)_{i_{0},j_{0}} \cdot\nabla c(X)_{i_{0},j_{0}}^{\top}+c(X)_{i_{0},j_{0}}\cdot\nabla^{2}c(X)_{i _{0},j_{0}}\] Proof.: This is directly given by the single-entry representation in Lemma F.1. ## Appendix G Bounds for basic functions In this section, we prove the upper bound for each function, with following assumption about the domain of parameters. In Section G.1 we bound the basic terms. In Section G.2 we bound the gradient of \(f(X)_{i_{0}}\). 
In Section G.3 we bound the gradient of \(c(X)_{i_{0},j_{0}}\) **Assumption G.1** (Bounded parameters).: _Let \(W,V,X,B\) be defined as in Section B.2,_ * _Let_ \(R\) _be some fixed constant satisfies_ \(R>1\)__ * _We have_ \(\|W\|\leq R\)_,_ \(\|V\|\leq R\)_,_ \(\|X\|\leq R\) _where_ \(\|\cdot\|\) _is the matrix spectral norm_ * _We have_ \(b_{i_{0},j_{0}}\leq R^{2}\)__ ### Bounds for basic functions **Lemma G.2**.: _Under Assumption G.1, for all \(i_{0}\in[n],j_{0}\in[d]\), we have following bounds:_ * _Part 1_ \[\|f(X)_{i_{0}}\|_{2}\leq 1\] * _Part 2_ \[\|h(X)_{i_{0}}\|_{2}\leq R^{2}\] * _Part 3_ \[|c(X)_{i_{0},j_{0}}|\leq 2R^{2}\] * _Part 4_ \[\|x^{\top}W_{*,j_{0}}\|_{2}\leq R^{2}\] * _Part 5_ \[|w(X)_{i_{0},j_{0}}|\leq R^{2}\] * _Part 6_ \[|z(X)_{i_{0},j_{0}}|\leq R^{2}\] * _Part 7_ \[|s(X)_{i_{0},j_{0}}|\leq R^{2}\] Proof.: **Proof of Part 1** The proof is similar to [13], and hence is omitted here. **Proof of Part 2** \[\|h(X)_{j_{0}}\|_{2} =\|X^{\top}V_{*,j_{0}}\|_{2}\] \[\leq\|V\|\cdot\|X\|\] \[\leq R^{2}\] where the first step is by Definition B.7, the 2nd step is by basic algebra, the 3rd follows by Assumption G.1. **Proof of Part 3** \[|c(X)_{i_{0},j_{0}}| =|\langle f(X)_{i_{0}},h(X)_{j_{0}}\rangle-b_{i_{0},j_{0}}|\] \[\leq|\langle f(X)_{i_{0}},h(X)_{j_{0}}\rangle|+|b_{i_{0},j_{0}}|\] \[\leq\|f(X)_{i_{0}}\|_{2}\cdot\|h(X)_{j_{0}}\|_{2}+|b_{i_{0},j_{0}}|\] \[\leq 2R^{2}\] where the first step is by Definition B.8, the 2nd step uses triangle inequality, the 3rd step uses Cauchy-Schwartz inequality, the 4th step is by Assumption G.1 and **Part 2**. **Proof of Part 4** \[\|x^{\top}W_{*,j_{0}}\|_{2} \leq\|x\|\cdot\|W\|\] \[\leq R^{2}\] where the first step is by basic algebra, the second is by Assumption G.1. **Proof of Part 5** \[|w(X)_{i_{0},j_{0}}| =|\langle W_{j_{0},*},X_{*,j_{0}}|\] \[\leq\|W_{j_{0},*}\|_{2}\cdot\|X_{*,j_{0}}\|_{2}\] \[\leq R^{2}\] where the first step is by the definition of \(w(X)_{i_{0},j_{0}}\), the 2nd step is Cauchy-Schwartz inequality, the 3rd step is by Assumption G.1. **Proof of Part 6** \[|z(X)_{i_{0},j_{0}}| =|\langle f(X)_{i_{0}},X^{\top}W_{*,j_{0}}\rangle|\] \[\leq\|f(X)_{i_{0}}\|_{2}\cdot\|X\|\cdot\|W_{*,j_{0}}\|\] \[\leq R^{2}\] where the first step is by the definition of \(z(X)_{i_{0},j_{0}}\), the 2nd step is Cauchy-Schwartz inequality, the 3rd step is by Assumption G.1. **Proof of Part 7** \[|s(X)_{i_{0},j_{0}}| =|\langle f(X)_{i_{0}},h(X)_{j_{0}}\rangle|\] \[\leq\|f(X)_{i_{0}}\|_{2}\cdot\|h(X)_{j_{0}}\|_{2}\] \[\leq R^{2}\] where the first step is by the definition of \(s(X)_{i_{0},j_{0}}\), the 2nd step is Cauchy-Schwartz inequality, the 3rd step is by **Part 1** and **Part 2**. 
### Bounds for gradient of \(f(X)_{i_{0}}\) **Lemma G.3**.: _Under following conditions_ * _Let_ \(f(X)_{i_{0}}\) _be defined as Definition_ B.6__ * _Assumption_ G.1 _holds_ * _We use_ \(\nabla f(X)_{i_{0}}\) _to define a matrix that its_ \((j_{0},i_{1}\cdot j_{1})\)_-th entry is_ \[\frac{\mathrm{d}f(X)_{i_{0},j_{0}}}{\mathrm{d}x_{i_{1},j_{1}}}\] _i.e., its_ \((i_{1}\cdot j_{1})\)_-th column is_ \[\frac{\mathrm{d}f(X)_{i_{0}}}{\mathrm{d}x_{i_{1},j_{1}}}\] _Then we have:_ * _Part 1: for all_ \(i_{0},i_{1}\in[n],j_{1}\in[d]\)_,_ \[\|\frac{\mathrm{d}f(X)_{i_{0}}}{\mathrm{d}x_{i_{1},j_{1}}}\|_{2}\leq 4R^{2}\] * _Part 2:_ \[\|\nabla f(X)_{i_{0}}\|_{F}\leq 4\sqrt{nd}R^{2}\] Proof.: **Proof of Part 1** \[|\frac{\mathrm{d}f(X)_{i_{0}}}{\mathrm{d}x_{i_{1},j_{1}}}| =|-f(X)_{i_{0}}\cdot(f(X)_{i_{0},i_{0}}\cdot\langle W_{j_{1},*}, X_{*,i_{0}}\rangle+\langle f(X)_{i_{0}},X^{\top}W_{*,j_{1}}\rangle)\] \[\quad+f(X)_{i_{0}}\circ(e_{i_{0}}\cdot\langle W_{j_{1},*},X_{*,i_ {0}}\rangle+X^{\top}W_{*,j_{1}}\rangle|\] \[\leq |f(X)_{i_{0}}\|_{2}^{2}\cdot\|h(X)_{j_{0}}\|_{2}\cdot|w(X)_{i_{0},j_{0 }}|+\|f(X)_{i_{0}}\|_{2}\cdot\|h(X)_{j_{0}}\|_{2}\cdot|z(X)_{i_{0},j_{1}}|\] \[+\|f(X)_{i_{0}}\|_{2}\cdot\|h(X)_{j_{0}}\|_{2}\cdot|w(X)_{i_{0},j_ {0}}|+\|f(X)_{i_{0}}\|_{2}\cdot\|h(X)_{j_{0}}\|_{2}\cdot|z(X)_{i_{0},j_{1}}|\] \[+\|f(X)_{i_{0}}\|_{2}\cdot\|h(X)_{j_{0}}\|_{2}\cdot|w(X)_{i_{0},j_ {0}}|\] \[+\|f(X)_{i_{0}}\|_{2}\cdot\|X\|\cdot\|h(X)_{j_{0}}\|_{2}+\|f(X)_{i_ {0}}\|_{2}\cdot\|V\|\] \[\leq \ R^{4}+R^{4}+R^{4}+R^{2}\] \[\leq \ 5R^{4}\] where the first step is by Lemma B.16, the 2nd step is by triangle inequality, the 3rd step is by Fact B.1, the 4th step is by Lemma G.2, the 5th step holds by \(R>1\). **Proof of Part 2** \[\|\nabla c(X)_{i_{0},j_{0}}\|_{2} =(\sum_{i_{1}=1}^{n}\sum_{j_{1}=1}^{d}\|\frac{\mathrm{d}c(X)_{i_{ 0},j_{0}}}{\mathrm{d}x_{i_{1},j_{1}}}\|_{2}^{2})^{\frac{1}{2}}\] \[\leq(\sum_{i_{1}=1}^{n}\sum_{j_{1}=1}^{d}25R^{8})^{\frac{1}{2}}\] \[=5\sqrt{nd}R^{4}\] where the first step is by the definition of \(\nabla f(X)_{i_{0}}\), the 2nd step is by **Part 1**. ### Bounds for Hessian of \(c(X)_{i_{0},j_{0}}\) **Lemma G.5**.: _Under following conditions_ * _Let_ \(c(X)_{i_{0},j_{0}}\) _be defined as Definition_ B.8__ * _Assumption_ G.1 _(Bounded parameter) holds_ * _Let_ \(B_{i}(X)\) _be defined as in Definition_ E.6__ _we have_ * _Part 1: For all_ \(i_{0}=i_{1}=i_{2}\in[n]\)_, we have_ \[\|H_{1}(X)^{(i_{0},i_{0})}\|\leq 23R^{6}+R^{5}+12R^{3}\] * _Part 2: For all_ \(i_{0}=i_{1}\neq i_{2}\in[n]\)_, we have_ \[\|H_{2}(X)^{(i_{0},i_{2})}\|\leq 11R^{6}+6R^{3}\] * _Part 3: For all_ \(i_{0}=i_{2}\neq i_{1}\in[n]\)_, we have_ \[\|H_{3}(X)^{(i_{1},i_{0})}\|\leq 11R^{6}+6R^{3}\] * _Part 4: For all_ \(i_{0}\neq i_{1}=i_{2}\in[n]\)_, we have_ \[\|H_{4}(X)^{(i_{1},i_{1})}\|\leq 5R^{6}+4R^{3}\] * _Part 5: For all_ \(i_{0}\neq i_{1},i_{0}\neq i_{2},i_{1}\neq i_{2}\in[n]\)_, we have_ \[\|H_{5}(X)^{(i_{1},i_{2})}\|\leq 4R^{6}+2R^{3}\] Proof.: The proof is similar to Lemma G.4 and hence omit. ## Appendix H Lipschitz of Hessian In Section H.1 we provide tools and facts. In Sections H.2, H.3, H.4, H.7, H.6, H.7 and H.8 we provide proof of lipschitz property of several important terms. And finally in Section H.9 we provide proof for Lipschitz property of Hessian of \(L(X)\). ### Facts and Tools In this section, we introduce 2 tools for effectively calculate the Lipschitz for Hessian. 
**Fact H.1** (Mean value theorem for vector function, Fact 34 in [10]).: _Under following conditions,_ * _Let_ \(x,y\in C\subset\mathbb{R}^{n}\) _where_ \(C\) _is an open convex domain_ * _Let_ \(g(x):C\to\mathbb{R}^{n}\) _be a differentiable vector function on_ \(C\)__ * _Let_ \(\|g^{\prime}(a)\|_{F}\leq M\) _for all_ \(a\in C\)_, where_ \(g^{\prime}(a)\) _denotes a matrix which its_ \((i,j)\)_-th term is_ \(\frac{\mathrm{d}g(a)_{j}}{\mathrm{d}a_{i}}\) _then we have_ \[\|g(y)-g(x)\|_{2}\leq M\|y-x\|_{2}\] **Fact H.2** (Lipschitz for product of functions).: _Under following conditions_ * _Let_ \(\{f_{i}(x)\}_{i=1}^{n}\) _be a sequence of function with same domain and range_ * _For each_ \(i\in[n]\) _we have_ * \(f_{i}(x)\) _is bounded:_ \(\forall x,\|f_{i}(x)\|\leq M_{i}\) _with_ \(M_{i}\geq 1\)__ * \(f_{i}(x)\) _is Lipschitz continuous:_ \(\forall x,y,\|f_{i}(x)-f_{i}(y)\|\leq L_{i}\|x-y\|\)__ _Then we have_ \[\|\prod_{i=1}^{n}f_{i}(x)-\prod_{i=1}^{n}f_{i}(y)\|\leq 2^{n-1} \cdot\max_{i\in[n]}\{L_{i}\}\cdot(\prod_{i=1}^{n}M_{i})\cdot\|x-y\|\] Proof.: We prove it by mathematical induction. The case that \(i=1\) obviously. Now assume the case holds for \(i=k\). Consider \(i=k+1\), we have. \[\|\prod_{i=1}^{k+1}f_{i}(x)-\prod_{i=1}^{k+1}f_{i}(y)\|\] \[\leq\|\prod_{i=1}^{k+1}f_{i}(x)-f_{k+1}(x)\cdot\prod_{i=1}^{k}f_{i }(y)\|+\|f_{k+1}(x)\cdot\prod_{i=1}^{k}f_{i}(y)-\prod_{i=1}^{k+1}f_{i}(y)\|\] \[\leq\|f_{k+1}(x)\|\cdot\|\prod_{i=1}^{k}f_{i}(x)-\prod_{i=1}^{k}f _{i}(y)\|+\|f_{k+1}(x)-f_{k+1}(y)\|\cdot\|\prod_{i=1}^{k}f_{i}(y)-\prod_{i=1}^ {k}f_{i}(y)\|\] \[\leq M_{k+1}\cdot\|\prod_{i=1}^{k}f_{i}(x)-\prod_{i=1}^{k}f_{i}(y )\|+(\prod_{i=1}^{k}M_{i})\cdot\|f_{k+1}(x)-f_{k+1}(y)\|\] \[\leq 2^{k-1}(\prod_{i=1}^{k+1}M_{i})\cdot\max_{i\in[k]}\{L_{i}\}\|x -y\|+(\prod_{i=1}^{k}M_{i})\cdot\|f_{k+1}(x)-f_{k+1}(y)\|\] \[\leq 2^{k-1}(\prod_{i=1}^{k+1}M_{i})\cdot\max_{i\in[k]}\{L_{i}\}\|x -y\|+(\prod_{i=1}^{k}M_{i})\cdot L_{k+1}\|x-y\|\] \[\leq 2^{k-1}(\prod_{i=1}^{k+1}M_{i})\cdot\max_{i\in[k]}\{L_{i}\}\|x -y\|+(\prod_{i=1}^{k+1}M_{i})\cdot L_{k+1}\|x-y\|\] \[\leq 2^{k}(\prod_{i=1}^{k+1}M_{i})\cdot\max_{i\in[k+1]}\{L_{i}\}\|x-y\|\] where the first step is by triangle inequality, the 2nd step is by property of norm, the 3rd step is by upper bound of functions, the 4th step is by induction hypothesis, the 5th step is by Lipschitz of \(f_{k+1}(x)\), the 6th step is by \(M_{k+1}\geq 1\), the 7th step is a rearrangement. Since the claim holds for \(i=k+1\), we prove the desired result. ### Lipschitz for \(f(X)_{i_{0}}\) **Definition H.3** (Notation of norm).: _For writing efficiency, we use \(\|X-Y\|\) to denote \(\|\operatorname{vec}(X)-\operatorname{vec}(Y)\|_{2}\), which is equivalent to \(\|X-Y\|_{F}\)._ **Lemma H.4**.: _Under following conditions_ * _Assumption_ G.1 _holds_ * _Let_ \(f(X)_{i_{0}}\) _be defined as Definition_ B.6__ _For \(X,Y\in\mathbb{R}^{d\times n}\), we have_ \[\|f(X)_{i_{0}}-f(Y)_{i_{0}}\|_{2} \leq 4\sqrt{nd}R^{2}\cdot\|X-Y\|\] Proof.: \[\|f(X)_{i_{0}}-f(Y)_{i_{0}}\|_{2} \leq\|\nabla f(X)_{i_{0}}\|_{F}\cdot\|X-Y\|\] \[\leq 4\sqrt{nd}R^{2}\cdot\|X-Y\|\] where the first step is given by Mean Value Theorem (Lemma H.1) and the 2nd step is due to upper bound for gradient of \(f(X)_{i_{0}}\) (Lemma G.3). 
### Lipschitz for \(c(X)_{i_{0},j_{0}}\) **Lemma H.5**.: _Under following conditions_ * _Assumption_ G.1 _holds_ * _Let_ \(c(X)_{i_{0},j_{0}}\) _be defined as Definition_ B.8__ _For \(X,Y\in\mathbb{R}^{d\times n}\), we have_ \[|c(X)_{i_{0},j_{0}}-c(Y)_{i_{0},j_{0}}|\leq 5\sqrt{nd}R^{4}\cdot\|X-Y\|\] Proof.: \[|c(X)_{i_{0},j_{0}}-c(Y)_{i_{0},j_{0}}| \leq\|\nabla c(X)_{i_{0},j_{0}}\|_{2}\cdot\|X-Y\|\] \[\leq 5\sqrt{nd}R^{4}\cdot\|X-Y\|\] where the first step is given by Mean Value Theorem (Lemma H.1) and the 2nd step is due to upper bound for gradient of \(c(X)_{i_{0},j_{0}}\) (Lemma G.4). ### Lipschitz for \(h(X)_{j_{0}}\) **Lemma H.6**.: _Under following conditions_ * _Assumption_ G.1 _holds_ * _Let_ \(h(X)_{j_{0}}\) _be defined as Definition_ B.7__ _For \(X,Y\in\mathbb{R}^{d\times n}\), we have_ \[\|h(X)_{j_{0}}-h(Y)_{j_{0}}\|_{2}\leq R\|X-Y\|\] Proof.: \[\|h(X)_{j_{0}}-h(Y)_{j_{0}}\| =\|V_{*,j_{0}}\|_{2}\cdot\|X-Y\|\] \[\leq R\cdot\|X-Y\|\] where the first step is from the definition of \(h(X)_{j_{0}}\) (see Definition B.7), the 2nd step is by Assumption G.1. ### Lipschitz for \(w(X)_{i_{0},j_{0}}\) **Lemma H.7**.: _Under following conditions_ * _Assumption_ G.1 _holds_ _For \(X,Y\in\mathbb{R}^{d\times n}\), we have_ \[|w(X)_{i_{0},j_{0}}-w(Y)_{i_{0},j_{0}}|\leq R\|X-Y\|\] Proof.: \[|w(X)_{i_{0},j_{0}}-w(Y)_{i_{0},j_{0}}| =|\langle W_{j_{0},*},X_{*,i_{0}}-Y_{*,i_{0}}\rangle|\] \[\leq\|W_{j_{0},*}\|_{2}\cdot\|X-Y\|\] \[\leq R\cdot\|X-Y\|\] where the first step is from the definition of \(w(X)_{i_{0},j_{0}}\), the 2nd step is by Fact B.1, the 3rd step holds since Assumption G.1. ### Lipschitz for \(z(X)_{i_{0},j_{0}}\) **Lemma H.8**.: _Under following conditions_ * _Assumption_ G.1 _holds_ _For \(X,Y\in\mathbb{R}^{d\times n}\), we have_ \[|z(X)_{i_{0},j_{0}}-z(Y)_{i_{0},j_{0}}| \leq 5\sqrt{nd}R^{4}\cdot\|X-Y\|\] Proof.: \[|z(X)_{i_{0},j_{0}}-z(Y)_{i_{0},j_{0}}| =|\langle f(X)_{i_{0}},X^{\top}W_{*,j_{0}}\rangle-\langle f(Y)_{i _{0}},Y^{\top}W_{*,j_{0}}\rangle|\] \[\leq|\langle f(X)_{i_{0}},X^{\top}W_{*,j_{0}}\rangle-\langle f(X) _{i_{0}},Y^{\top}W_{*,j_{0}}\rangle|\] \[\leq\|f(X)_{i_{0}}\|_{2}\cdot\|X-Y\|\cdot\|W_{*,j_{0}}\|_{2}+\|f(X)_{i_{0} }-f(Y)_{i_{0}}\|\cdot\|Y\|\cdot\|W_{*,j_{0}}\|\] \[\leq R\cdot\|X-Y\|+R^{2}\|f(X)_{i_{0}}-f(Y)_{i_{0}}\|\] \[\leq 5\sqrt{nd}R^{4}\cdot\|X-Y\|\] where the first step is from the definition of \(w(X)_{i_{0},j_{0}}\), the 2nd step is by Fact B.1, the 3rd step holds since Assumption G.1, the 4th step uses Lemma H.4. ### Lipschitz for first order derivative of \(c(X)_{i_{0},j_{0}}\) **Lemma H.9**.: _Under following conditions_ * _Assumption G.1 holds_ * _Let_ \(c(X)_{i_{0},j_{0}}\) _be defined as Definition_ B.8__ _For \(X,Y\in\mathbb{R}^{d\times n}\), we have_ \[|\frac{c(X)_{i_{0},j_{0}}}{\mathrm{d}x_{i_{1},j_{1}}}-\frac{c(Y)_{i_{0},j_{0} }}{\mathrm{d}y_{i_{1},j_{1}}}|\leq O(\sqrt{nd}R^{6})\cdot\|X-Y\|\] Proof.: Recall \(C_{i}(X)\) defined in Lemma B.16. The Lipschitz constant of \(\frac{c(X)_{i_{0},j_{0}}}{\mathrm{d}x_{i_{1},j_{1}}}\) is bounded the summation of that of \(C_{i}(X)\). We only present the proof for Lipschitz for \(C_{1}(X)\) here. Notice that \[C_{1}(X):=-s(X)_{i_{0},j_{0}}\cdot f(X)_{i_{0},i_{0}}\cdot w(X)_{i_{0},j_{1}}\] By upper bound and lipschitz constant for basic functions, we have * \(|s(X)_{i_{0},j_{0}}|\leq R^{2}\) * \(|f(X)_{i_{0},i_{0}}|\leq 1\) * \(|w(X)_{i_{0},j_{1}}|\leq R^{2}\) * \(\max_{f\in\{s(X)_{i_{0},j_{0}},f(X)_{i_{0},i_{0}},w(X)_{i_{0},j_{1}}\}}\{\text {Lipschitz}(f)\}=4\sqrt{nd}R^{2}\) * \(n=3\) By Fact H.2. 
\[|C_{1}(X)-C_{1}(Y)| \leq 2^{n-1}\cdot\max_{i\in[n]}\{L_{i}\}\cdot(\prod_{i=1}^{n}M_{i}) \cdot\|X-Y\|\] \[= 4\cdot 4\sqrt{nd}R^{2}\cdot R^{4}\cdot\|X-Y\|\] \[= 16\sqrt{nd}R^{6}\cdot\|X-Y\|\] ### Lipschitz for second order derivative of \(c(X)_{i_{0},j_{0}}\) **Lemma H.10**.: _Under following conditions_ * _Assumption_ G.1 _holds_ * _Let_ \(c(X)_{i_{0},j_{0}}\) _be defined as Definition_ B.8__ _For \(X,Y\in\mathbb{R}^{d\times n}\), we have_ \[|\frac{c(X)_{i_{0},j_{0}}}{\mathrm{d}x_{i_{1},j_{1}}x_{i_{2},j_{2}}}-\frac{c(Y )_{i_{0},j_{0}}}{\mathrm{d}y_{i_{1},j_{1}}y_{i_{2},j_{2}}}|\leq O(\sqrt{nd}R^{ 8})\cdot\|X-Y\|\] Proof.: The proof is similar to Lemma H.9 and hence omit. Notice that the upper bound for \(\frac{c(X)_{i_{0},j_{0}}}{\mathrm{d}x_{i_{1},j_{1}}x_{i_{2},j_{2}}}\) is given by Lemma G.5. ### Lipschitz for Hessian of \(L(x)\) **Lemma H.11**.: _Under following conditions_ * _Assumption_ G.1 _holds_ * _Let_ \(c(X)_{i_{0},j_{0}}\) _be defined as Definition_ B.8__ _For \(X,Y\in\mathbb{R}^{d\times n}\), we have_ \[\|\nabla^{2}L(X)-\nabla^{2}L(Y)\|\leq O(n^{3.5}d^{3.5}R^{10})\cdot\|X-Y\|\] Proof.: Recall that \[\frac{\mathrm{d}L(X)}{\mathrm{d}x_{i_{1},j_{1}}x_{i_{2},j_{2}}} =\sum_{i_{0}=1}^{n}\sum_{j_{0}=1}^{d}\frac{\mathrm{d}c(X)_{i_{0}, j_{0}}}{\mathrm{d}x_{i_{1},j_{1}}}\cdot\frac{\mathrm{d}c(X)_{i_{0},j_{0}}}{ \mathrm{d}x_{i_{2},j_{2}}}+c(X)_{i_{0},j_{0}}\cdot\frac{\mathrm{d}c(X)_{i_{0},j_{0}}}{\mathrm{d}x_{i_{1},j_{1}}x_{i_{2},j_{2}}}\] \[=\sum_{i_{0}=1}^{n}\sum_{j_{0}=1}^{d}U_{1}(X)+U_{2}(X)\] For the first item \(U_{1}(X)\), we have \[|U_{1}(X)-U_{1}(Y)| =|\frac{\mathrm{d}c(X)_{i_{0},j_{0}}}{\mathrm{d}x_{i_{1},j_{1}}} \cdot\frac{\mathrm{d}c(X)_{i_{0},j_{0}}}{\mathrm{d}x_{i_{1},j_{2}}}-\frac{ \mathrm{d}c(Y)_{i_{0},j_{0}}}{\mathrm{d}x_{i_{1},j_{1}}}\cdot\frac{\mathrm{d}c (Y)_{i_{0},j_{0}}}{\mathrm{d}y_{i_{1},j_{2}}}|\] \[\leq|\frac{\mathrm{d}c(X)_{i_{0},j_{0}}}{\mathrm{d}x_{i_{1},j_{1} }}|\cdot|\frac{\mathrm{d}c(X)_{i_{0},j_{0}}}{\mathrm{d}x_{i_{1},j_{2}}}-\frac{ \mathrm{d}c(Y)_{i_{0},j_{0}}}{\mathrm{d}y_{i_{2},j_{2}}}|\] \[\quad+|\frac{\mathrm{d}c(X)_{i_{0},j_{0}}}{\mathrm{d}x_{i_{1},j_{ 1}}}\cdot\frac{-\mathrm{d}c(Y)_{i_{0},j_{0}}}{\mathrm{d}y_{i_{1},j_{1}}}|\cdot |\frac{\mathrm{d}c(Y)_{i_{0},j_{0}}}{\mathrm{d}y_{i_{2},j_{2}}}|\] \[\leq 10R^{4}\cdot|\frac{\mathrm{d}c(X)_{i_{0},j_{0}}}{\mathrm{d}x_{i_{ 1},j_{1}}}\cdot\frac{-\mathrm{d}c(Y)_{i_{0},j_{0}}}{\mathrm{d}y_{i_{1},j_{1}}}|\] \[\leq O(\sqrt{nd}R^{10})\cdot\|X-Y\|\] where the 2nd step is by triangle inequality, the 3rd step is by Lemma G.4, the 4th step uses Lemma H.9. 
For the 2nd item \(U_{2}(X)\), we have \[|U_{2}(X)-U_{2}(Y)| =|c(X)_{i_{0},j_{0}}\cdot\frac{\mathrm{d}c(X)_{i_{0},j_{0}}}{\mathrm{ d}x_{i_{1},j_{1}}x_{i_{2},j_{2}}}-c(Y)_{i_{0},j_{0}}\cdot\frac{\mathrm{d}c(Y)_{i_{0},j_{0}}}{\mathrm{d}y_{i_{1},j_{1}}y_{i_{2},j_{2}}}|\] \[\leq|c(X)_{i_{0},j_{0}}|\cdot|\frac{\mathrm{d}c(X)_{i_{0},j_{0}}}{ \mathrm{d}x_{i_{1},j_{1}}x_{i_{2},j_{2}}}-\frac{\mathrm{d}c(Y)_{i_{0},j_{0}}}{ \mathrm{d}y_{i_{1},j_{1}}y_{i_{2},j_{2}}}|\] \[\quad+|c(X)_{i_{0},j_{0}}-c(Y)_{i_{0},j_{0}}|\cdot|\frac{\mathrm{ d}c(Y)_{i_{0},j_{0}}}{\mathrm{d}y_{i_{1},j_{1}}y_{i_{2},j_{2}}}|\] \[\leq 2R^{2}\cdot|\frac{\mathrm{d}c(X)_{i_{0},j_{0}}}{\mathrm{d}x_{i _{1},j_{1}}x_{i_{2},j_{2}}}-\frac{\mathrm{d}c(Y)_{i_{0},j_{0}}}{\mathrm{d}y_{ i_{1},j_{1}}y_{i_{2},j_{2}}}|\] \[\quad+|c(X)_{i_{0},j_{0}}-c(Y)_{i_{0},j_{0}}|\cdot|\frac{\mathrm{ d}c(Y)_{i_{0},j_{0}}}{\mathrm{d}y_{i_{1},j_{1}}y_{i_{2},j_{2}}}|\] \[\leq 2R^{2}\cdot|\frac{\mathrm{d}c(X)_{i_{0},j_{0}}}{\mathrm{d}x_{i _{1},j_{1}}x_{i_{2},j_{2}}}-\frac{\mathrm{d}c(Y)_{i_{0},j_{0}}}{\mathrm{d}y_{ i_{1},j_{1}}y_{i_{2},j_{2}}}|+5\sqrt{nd}R^{4}\cdot\|X-Y\|\cdot|\frac{\mathrm{d}c(Y)_{i_{0},j_{0}}}{\mathrm{d}y_{i_{1},j_{1}}y_{i_{2},j_{2}}}|\] \[\leq O(\sqrt{nd}R^{10})\cdot\|X-Y\|+5\sqrt{nd}R^{4}\cdot\|X-Y\| \cdot|\frac{\mathrm{d}c(Y)_{i_{0},j_{0}}}{\mathrm{d}y_{i_{1},j_{1}}y_{i_{2},j _{2}}}|\] \[\leq O(\sqrt{nd}R^{10})\cdot\|X-Y\|\] where the 2nd step is by triangle inequality, the 3rd step uses Lemma G.2, the 4th step uses Lemma H.5, the 5th step uses Lemma H.10, the last step uses Lemma G.5. Combining the above 2 items, we have \[|\frac{\mathrm{d}L(X)}{\mathrm{d}x_{i_{1},j_{1}}x_{i_{2},j_{2}}}-\frac{ \mathrm{d}L(Y)}{\mathrm{d}y_{i_{1},j_{1}}y_{i_{2},j_{2}}}|\leq O(n^{1.5}d^{1.5 }R^{10})\cdot\|X-Y\|\] Then, we have \[\|\nabla^{2}L(X)-\nabla^{2}L(Y)\| \leq\|\nabla^{2}L(X)-\nabla^{2}L(Y)\|_{F}\] \[\leq n^{2}d^{2}\cdot O(n^{1.5}d^{1.5}R^{10}\|X-Y\|\] \[=O(n^{3.5}d^{3.5}R^{10})\cdot\|X-Y\|\] where the 1st step is by matrix calculus, the 2nd is by the lipschitz for each entry of \(\nabla^{2}L(X)\). ## Appendix I Strongly Convexity In this section, we provide proof for PSD bounds for the Hessian of Loss function. ### PSD bounds for Hessian of \(c(X)_{i_{0},j_{0}}\) **Lemma I.1** (PSD bounds for \(\nabla^{2}c(X)_{i_{0},j_{0}}\)).: _Under following conditions,_ * _Let_ \(c_{i_{0},j_{0}}\) _be defined as in Definition_ B.8__ * _Let Assumption_ G.1 _be satisfied_ _For all \(i_{0}\in[n],j_{0}\in[d]\), we have_ \[-36R^{6}\cdot\mathbf{I}_{nd}\preceq\nabla^{2}c(X)_{i_{0},j_{0}}\preceq 36R^{6} \cdot\mathbf{I}_{nd}\] Proof.: We prove this statement by the definition of PSD. Let \(p\in\mathbb{R}^{n\times d}\) be a vector. Let \(i\in[n]\), we use \(p_{i}\in\mathbb{R}^{d}\) to denote the vector formed by the \((i-1)\cdot n+1\)-th term to the \(i\cdot n\)-th term of vector \(p\). 
Then, we have

\[|p^{\top}\nabla^{2}c(X)_{i_{0},j_{0}}p|= \ |p_{i_{0}}^{\top}H_{1}(X)^{i_{0},i_{0}}p_{i_{0}}+\sum_{i\in[n]\setminus\{i_{0}\}}p_{i_{0}}^{\top}H_{2}(X)^{(i_{0},i)}p_{i}+\sum_{i\in[n]\setminus\{i_{0}\}}p_{i}^{\top}H_{3}(X)^{(i,i_{0})}p_{i_{0}}+\sum_{i\in[n]\setminus\{i_{0}\}}p_{i}^{\top}H_{4}(X)^{(i,i)}p_{i}+\sum_{i_{1}\in[n]\setminus\{i_{0}\}}\sum_{i_{2}\in[n]\setminus\{i_{0}\}}p_{i_{1}}^{\top}H_{5}(X)^{(i_{1},i_{2})}p_{i_{2}}|\]
\[\leq \ \max_{i\in[5]}\|H_{i}(X)\|\cdot\sum_{i_{1}\in[n]}\sum_{i_{2}\in[n]}p_{i_{1}}^{\top}p_{i_{2}}\leq \ \max_{i\in[5]}\|H_{i}(X)\|\cdot p^{\top}p\leq \ 36R^{6}\cdot p^{\top}p\]

where the 1st step is by the formulation of \(\nabla^{2}c(X)_{i_{0},j_{0}}\) (see Definition E.3), the 2nd and 3rd steps are from simple algebra, and the 4th step uses Lemma G.5.

### PSD bounds for Hessian of loss

**Lemma I.2** (PSD bound for \(\nabla^{2}L(X)\)).: _Under the following conditions,_

* _Let_ \(L(X)\) _be defined as in Definition_ B.9_
* _Let Assumption_ G.1 _be satisfied_

_we have_

\[\nabla^{2}L(X)\succeq-O(ndR^{8})\cdot\mathbf{I}_{nd}\]

Proof.: Recall in Lemma F.2, we have

\[\nabla^{2}L(X)=\sum_{i_{0}=1}^{n}\sum_{j_{0}=1}^{d}\nabla c(X)_{i_{0},j_{0}}\cdot\nabla c(X)_{i_{0},j_{0}}^{\top}+c(X)_{i_{0},j_{0}}\cdot\nabla^{2}c(X)_{i_{0},j_{0}} \tag{2}\]

Notice that the first term is PSD, so we may drop it. By Lemma G.2, we have

\[|c(X)_{i_{0},j_{0}}|\leq 2R^{2}\]

Therefore, we have

\[c(X)_{i_{0},j_{0}}\cdot\nabla^{2}c(X)_{i_{0},j_{0}}\succeq \ -72R^{8}\cdot\mathbf{I}_{nd}\]
\[i.e.,\nabla^{2}L(X)\succeq \ -72ndR^{8}\cdot\mathbf{I}_{nd}\]

where the first line is by Lemma I.1 and the 2nd line is given by Eq. (2).

## Appendix J Final Result

**Theorem J.1** (Formal of Theorem 1.3, Main Result).: _We assume our model satisfies the following conditions_

* _Bounded parameters: there exists_ \(R>1\) _such that_
  * \(\|W\|_{F}\leq R\)_,_ \(\|V\|_{F}\leq R\)_
  * \(\|X\|_{F}\leq R\)_
  * \(\forall i\in[n],j\in[d],|b_{i,j}|\leq R\) _where_ \(b_{i,j}\) _denotes the_ \(i,j\)_-th entry of_ \(B\)_
* _Regularization: we consider the following problem:_ \[\min_{X\in\mathbb{R}^{d\times n}}\|D(X)^{-1}\exp(X^{\top}WX)X^{\top}V-B\|_{F}^{2}+\gamma\cdot\|\operatorname{vec}(X)\|_{2}^{2}\]
* _Good initial point: We choose an initial point_ \(X_{0}\) _such that_ \(M\cdot\|X_{0}-X^{*}\|_{F}\leq 0.1l\)_, where_ \(M=O(n^{3}d^{3}R^{10})\)_

_Then, for any accuracy parameter \(\epsilon\in(0,0.1)\) and failure probability \(\delta\in(0,0.1)\), an algorithm based on Newton's method can be employed to recover the initial data. This algorithm guarantees that within \(T=O(\log(\|X_{0}-X^{*}\|_{F}/\epsilon))\) iterations, it outputs a matrix \(\widetilde{X}\in\mathbb{R}^{d\times n}\) satisfying \(\|\widetilde{X}-X^{*}\|_{F}\leq\epsilon\) with probability at least \(1-\delta\). The running time of each iteration is \(\operatorname{poly}(n,d)\)._

Proof.: Choosing \(\gamma\geq O(ndR^{8})\), by Lemma I.2 we have the positive-definiteness of the Hessian. By Lemma H.11, we have the Lipschitz property of the Hessian. Since \(M\) is bounded (by the condition of the theorem), the convergence then follows from the iterative shrinking lemma (see, e.g., Lemma 6.9 in [10]).
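To make Theorem J.1 concrete, here is a minimal numerical sketch of the kind of Newton iteration it refers to. This is an illustration under stated assumptions, not the paper's implementation: the tiny sizes \(n,d\), the random \(W,V,B\), the finite-difference derivatives and the small solver jitter are all choices made purely for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, gamma = 3, 2, 10.0          # tiny instance; gamma plays the role of the regularizer

W = rng.normal(size=(d, d))
V = rng.normal(size=(d, d))
B = rng.normal(size=(n, d))

def loss(x):
    # L(X) = ||D(X)^{-1} exp(X^T W X) X^T V - B||_F^2 + gamma * ||vec(X)||_2^2
    X = x.reshape(d, n)
    A = np.exp(X.T @ W @ X)                              # n x n matrix exp(X^T W X)
    C = (A / A.sum(axis=1, keepdims=True)) @ (X.T @ V)   # row normalization is D(X)^{-1}
    return np.sum((C - B) ** 2) + gamma * np.sum(x ** 2)

def grad_hess(x, h=1e-4):
    # central finite differences; adequate for nd = 6 unknowns
    m = x.size
    g, H = np.zeros(m), np.zeros((m, m))
    for i in range(m):
        ei = np.zeros(m); ei[i] = h
        g[i] = (loss(x + ei) - loss(x - ei)) / (2 * h)
        for j in range(m):
            ej = np.zeros(m); ej[j] = h
            H[i, j] = (loss(x + ei + ej) - loss(x + ei - ej)
                       - loss(x - ei + ej) + loss(x - ei - ej)) / (4 * h * h)
    return g, H

x = 0.05 * rng.normal(size=n * d)   # stand-in for the 'good initial point'
for t in range(8):
    g, H = grad_hess(x)
    x = x - np.linalg.solve(H + 1e-6 * np.eye(x.size), g)  # guarded Newton step
    print(t, loss(x))               # loss values settle quickly for this instance
```

With \(\gamma\) large enough (the theorem asks for \(\gamma\geq O(ndR^{8})\)), the Hessian is positive definite by Lemma I.2, so the Newton step is well defined near \(X^{*}\) and the iteration contracts at the rate the theorem states.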
2303.06248
Second And Third-Order Structure Functions Of An 'Engineered' Random Field And Emergence Of The Kolmogorov 4/5 And 2/3-Scaling Laws Of Turbulence
The 4/5 and 2/3 laws of turbulence can emerge from a theory of 'engineered' random vector fields $\mathcal{X}_{i}(x,t) =X_{i}(x,t)+\tfrac{\theta}{\sqrt{d(d+2)}} X_{i}(x,t)\psi(x)$ existing within $\mathbf{D}\subset\mathbf{R}^{d}$. Here, $X_{i}(x,t)$ is a smooth deterministic vector field obeying a nonlinear PDE for all $(x,t)\in\mathbf{D}\times\mathbf{R}^{+}$, and $\theta$ is a small parameter. The field $\psi(x)$ is a regulated and differentiable Gaussian random field with expectation $\mathbb{E}[\psi(x)]=0$, but having an antisymmetric covariance kernel $\mathscr{K}(x,y)=\mathbb{E}[\psi(x)\psi(y)]=f(x,y)K(\|x-y\|;\lambda)$ with $f(x,y)=-f(y,x)=1,f(x,x)=f(y,y)=0$ and with $K(\|x-y\|;\lambda)$ a standard stationary symmetric kernel. For $0\le\ell\le \lambda<L$ with $X_{i}(x,t)=X_{i}=(0,0,X)$ and $\theta=1$ then for $d=3$, the third-order structure function is \begin{align} S_{3}[\ell]=\mathbb{E}\left[|\mathcal{X}_{i}(x+\ell,t)-\mathcal{X}(x,t)|^{3}\right]=-\frac{4}{5}\|X_{i}\|^{3}=-\frac{4}{5}X^{3}\nonumber \end{align} and $S_{2}[\ell]=CX^{2}$. The classical 4/5 and 2/3-scaling laws then emerge if one identifies the random field $\mathcal{X}_{i}(x,t)$ with a turbulent fluid flow $\mathcal{U}_{i}(x,t)$ or velocity, with mean flow $\mathbb{E}[\mathcal{U}_{i}(x,t)]=U_{i}(x,t)=U_{i}$ being a trivial solution of Burger's equation. Assuming constant dissipation rate $\epsilon$, small constant viscosity $\nu$, corresponding to high Reynolds number, and the standard energy balance law, then for a range $\eta\le\ell\ll \lambda<L$ \begin{align} S_{3}[\ell]=\mathbb{E}\left[|\mathcal{U}_{i}(x+\ell,t)-\mathcal{U}(x,t)|^{3}\right]=-\frac{4}{5}\epsilon\ell\nonumber \end{align} where $\eta=(\nu^{3/4}\epsilon)^{-1/4}$. For the second-order structure function, the 2/3-law emerges as $S_{2}[\ell]=C\epsilon^{2/3}\ell^{2/3}$.
Steven D Miller
2023-03-10T23:51:23Z
http://arxiv.org/abs/2303.06248v1
Second and third-order structure functions of an 'engineered' random field and emergence of the Kolmogorov 4/5 and 2/3-scaling laws of turbulence

###### Abstract.

The \(4/5\) and \(2/3\) laws of turbulence can emerge from a theory of 'engineered' random vector fields \(\mathcal{X}_{i}(x,t)=X_{i}(x,t)+\frac{\theta}{\sqrt{d(d+2)}}X_{i}(x,t)\psi(x)\) existing within \(\mathbf{D}\subset\mathbf{R}^{d}\). Here, \(X_{i}(x,t)\) is a smooth deterministic vector field obeying a nonlinear PDE for all \((x,t)\in\mathbf{D}\times\mathbf{R}^{+}\), and \(\theta\) is a small parameter. The field \(\psi(x)\) is a regulated and differentiable Gaussian random field with expectation \(\mathbb{E}[\psi(x)]=0\), but having an antisymmetric covariance kernel \(\mathscr{K}(x,y)=\mathbb{E}[\psi(x)\psi(y)]=f(x,y)K(\|x-y\|;\lambda)\) with \(f(x,y)=-f(y,x)=1,f(x,x)=f(y,y)=0\) and with \(K(\|x-y\|;\lambda)\) a standard stationary symmetric kernel. For \(0\leq\ell\leq\lambda<L\) with \(X_{i}(x,t)=X_{i}=(0,0,X)\) and \(\theta=1\), then for \(d=3\) the third-order structure function is

\[S_{3}[\ell]=\mathbb{E}\left[|\mathcal{X}_{i}(x+\ell,t)-\mathcal{X}_{i}(x,t)|^{3}\right]=-\frac{4}{5}\|X_{i}\|^{3}=-\frac{4}{5}X^{3}\]

and \(S_{2}[\ell]=CX^{2}\). The classical \(4/5\) and \(2/3\)-scaling laws then emerge if one identifies the random field \(\mathcal{X}_{i}(x,t)\) with a turbulent fluid flow or velocity \(\mathcal{U}_{i}(x,t)\), with mean flow \(\mathbb{E}[\mathcal{U}_{i}(x,t)]=U_{i}(x,t)=U_{i}\) being a trivial solution of the Burgers equation. Assuming a constant dissipation rate \(\epsilon\), a small constant viscosity \(\nu\), corresponding to high Reynolds number, and the standard energy balance law, then for a range \(\eta\leq\ell\ll\lambda<L\)

\[S_{3}[\ell]=\mathbb{E}\left[|\mathcal{U}_{i}(x+\ell,t)-\mathcal{U}_{i}(x,t)|^{3}\right]=-\frac{4}{5}\epsilon\ell\]

where \(\eta=(\nu^{3/4}\epsilon)^{-1/4}\). For the second-order structure function, the \(2/3\)-law emerges as \(S_{2}[\ell]=C\epsilon^{2/3}\ell^{2/3}\).

## Contents

* 1 INTRODUCTION: THE 4/5 AND 2/3 SCALING LAWS OF TURBULENCE
* 2 RANDOM SCALAR FIELDS AND 'ENGINEERED' RANDOM VECTOR FIELDS IN A DOMAIN \(\mathbf{D}\subset\mathbf{R}^{d}\)
* 2.1 Random vector fields engineered from random scalar fields
* 2.2 Structure functions
* 2.3 Emergence of a \(4/5\)-law for the 3rd-order structure function
* 2.4 Expression for the 2nd-order structure function
* 3 APPLICATION TO FLUID MECHANICS AND THE EMERGENCE OF THE CLASSICAL KOLMOGOROV SCALING LAWS OF TURBULENCE
* 3.1 Basic results from fluid mechanics
* 3.2 Emergence of the classical \(4/5\) and \(2/3\) scaling laws
* 4 CONCLUSION

## 1. Introduction: The \(4/5\) and \(2/3\) scaling laws of turbulence

In fluid mechanics, the \(4/5\) and \(2/3\)-laws and the law of finite energy dissipation are very important and well-established foundational results of modern turbulence theory, and there is by now a very considerable volume of literature devoted to them, in both theory and experiment, and to turbulence in general [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57], and references therein. However, there is no mathematically rigorous and fully deductive theory or derivation which begins with the Navier-Stokes equations and derives these laws exactly from first principles.
In this note, it is shown how the structure or exact form of both laws can emerge naturally from a theory of specifically 'engineered' forms of random vector fields existing within a d-dimensional domain. The classical 4/5 and 2/3 scaling laws then follow naturally if one identifies these random vector fields with a turbulent fluid flow or velocity, assuming the fluid has a constant dissipation rate \(\epsilon\) and a constant (very small) viscosity \(\nu\) and obeys the standard energy balance law. Suppose an incompressible fluid with very small viscosity \(\gtrapprox 0\) and velocity \(U(x,t)\), evolving by the Burgers or Euler equations from some initial data and with suitable boundary conditions, flows within \(\mathbf{D}\subset\mathbf{R}^{d}\). If the fluid velocity is decomposed into 'mean' and 'fluctuating' contributions, then \(U(x,t)=\overline{U(x,t)}+\widetilde{U(x,t)}\), with \(\left\langle U(x,t)\right\rangle=\overline{U(x,t)}\), \(\left\langle\widetilde{U(x,t)}\right\rangle=0\) and \(\left\langle U(x,t)U(y,t)\right\rangle\neq 0\). Here \(\left\langle\bullet\right\rangle\) is a suitable average such as a time or ensemble average. Then at very high, but not infinite, Reynolds number \(\mathscr{R}\gg 0\), all of the small-scale statistical properties are assumed to be uniquely and universally determined by the length scale \(\ell\) and the mean dissipation rate \(\epsilon\) (per unit mass). Despite its conjectural status from the perspective of mathematical rigour, with some heuristic assumptions on statistical properties (homogeneity, isotropy), Kolmogorov **[1-5]** made the following key predictions about the structure of turbulent velocity fields for incompressible viscous fluids at high Reynolds number, namely that for \(d=3\), the following 4/5 and 2/3-scaling laws hold over an inertial range

\[S_{3}[\ell] =\left\langle\left|U(x+\ell,t)-U(x,t)\right|^{3}\right\rangle=-\frac{4}{5}\epsilon\ell \tag{1.1}\]
\[S_{2}[\ell] =\left\langle\left|U(x+\ell,t)-U(x,t)\right|^{2}\right\rangle=C_{2}\epsilon^{2/3}\ell^{2/3} \tag{1.2}\]

where \(C_{2}\) is some constant. These should hold in the limit of large Reynolds number and small scales \(\ell\sim 0\). In particular, the 4/5-law is an exact result. In Fourier space, the 2/3-law becomes the \(5/3\)-law for the energy spectrum. Here, \(\ell\) is within the so-called _inertial range_ of length scales

\[\eta\leq\ell\leq L \tag{1.3}\]

The length \(\eta=(\nu^{3/4}\epsilon)^{-1/4}\), known as the Kolmogorov scale, represents a small-scale dissipative cutoff or the size of the smallest eddies, and the integral scale L represents the size of the largest eddy in the flow, which cannot exceed the dimensions of the domain. At the Kolmogorov scale, viscosity dominates and the kinetic energy is dissipated into heat. There is a 'cascade' process whereby energy is transferred from the largest scales/eddies down to the Kolmogorov scale. The objects \(S_{p}[\ell]\) are the pth-order (longitudinal) structure functions and have the generic form

\[S_{p}[\ell]=\left\langle\left|U(x+\ell,t)-U(x,t)\right|^{p}\right\rangle=C_{p}\big{(}\epsilon\ell\big{)}^{\zeta_{p}} \tag{1.4}\]

Although highly cited, the work still remains mathematically incomplete and essentially heuristic. But the central results have remained robust over the decades and are at least acceptably correct within the confines of the underlying assumptions, and supported by strong experimental evidence, at least under specific conditions.
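As a concrete illustration of how the structure functions (1.4) are estimated in practice, the following minimal Python sketch averages powers of velocity increments over a periodic 1-D record. The random-phase synthetic signal, grid sizes and spectral slope are assumptions made purely for the example; no experimental data is involved.

```python
import numpy as np

rng = np.random.default_rng(1)
N, L = 2 ** 14, 1.0
# Toy surrogate velocity record: random-phase Fourier synthesis with a
# k^{-5/3}-type energy spectrum, standing in for a measured turbulent signal.
k = np.fft.rfftfreq(N, d=L / N)
amp = np.zeros_like(k)
amp[1:] = k[1:] ** (-5.0 / 6.0)          # |amp|^2 ~ k^{-5/3}
u = np.fft.irfft(amp * np.exp(2j * np.pi * rng.random(k.size)), n=N)

def S(p, shift):
    du = np.roll(u, -shift) - u          # increment u(x + l) - u(x)
    return np.mean(du ** p)              # spatial average standing in for <...>

for shift in (4, 16, 64, 256):
    ell = shift * L / N
    print(f"l = {ell:.4f}   S2 = {S(2, shift):.3e}   S3 = {S(3, shift):.3e}")
```

For this Gaussian surrogate one sees \(S_{2}\sim\ell^{2/3}\) by construction, while \(S_{3}\) fluctuates around zero since odd moments of a Gaussian signal vanish; a genuinely turbulent record instead gives the negative \(S_{3}\) of (1.1). This is one reason the 4/5-law is regarded as a probe of non-Gaussian statistics.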
But it is also well known that real fluids in general do not conform exactly to the Kolmogorov predictions. Intermittency, and non-uniformity of the velocity's 'roughness' and energy dissipation rate, result in deviations of the scaling exponents from a purely linear behavior **[6-10]**. Experiments do however indicate that for p near three, the formula approximately holds, with \(\zeta_{2}=\frac{2}{3}+[0.03,0.06]\) and \(\zeta_{3}\sim 1\). For example, in the flow past a sphere a value of \(\zeta_{2}\sim 0.701\) is reported in **[11]** and \(\zeta_{2}\sim 0.71\) in **[12]**. Some recent high-resolution numerical simulations report \(\zeta_{2}\sim 0.725\). Although there are slight variations, these results all conform to \(\zeta_{2}\gtrapprox\frac{2}{3}\) and \(\zeta_{3}\lessapprox 1\). Kolmogorov also improved the 2/3-law in 1962 to account for intermittency **[3]**. It is clear that fluid mechanics continues to be very challenging, both physically and computationally, and from the perspective of mathematical rigour. Many of the issues discussed by Von Neumann in his well-known review paper remain relevant [26]. There remains opportunity (and an ongoing need) to apply new and established mathematical tools and methods to the problem of developed turbulence: these should include stochastic PDE, stochastic and statistical/random geometry, and random fields. A central issue within fully developed turbulence is how to define and calculate Reynolds stresses, structure functions and velocity correlations. Established methods are mostly heuristic, and it is very difficult to rigorously define or mathematically formalise the required spatial, temporal or ensemble averages \(\left\langle\ \bullet\ \right\rangle\) in a useful manner. Rigorously defining statistical averages in conventional statistical hydrodynamics is fraught with technical difficulties and limitations, as well as having a limited scope of physical applicability. But a key insight of Kolmogorov's work is that turbulent flows seem to be essentially random fields. In this paper, we consider a mathematical construction of spatio-temporal random vector fields within a closed Euclidean domain \(\mathbf{D}\subset\mathbf{R}^{d}\). The classical 4/5 and 2/3-scaling laws then emerge for \(d=3\) if one identifies the random field with an 'engineered' turbulent fluid flow or velocity, with mean flow \(U_{i}(x,t)=U_{i}\) being a trivial solution of the Burgers equation, and if one assumes a constant dissipation rate \(\epsilon\), a small constant viscosity \(\nu\sim 0\) (corresponding to high Reynolds number), and the standard energy balance law. These scaling laws hold for a range \(\eta\leq\ell\ll\lambda<L\), where \(\lambda\) is a correlation length.

## 2. Random Scalar Fields and 'Engineered' Random Vector Fields in a Domain \(\mathbf{D}\subset\mathbf{R}^{d}\)

In this section, the 3rd-order and 2nd-order structure functions of a random vector field are calculated, without any reference to fluid mechanics. Later, it will be assumed that the noise or random fluctuation in fully developed turbulence is a generic noise determined by general theorems in probability theory, stochastic analysis, and random fields or functions. Classical random fields or functions correspond naturally to structures, and properties of systems, that vary randomly in time and/or space.
They have found many useful applications in mathematics and applied science: in the statistical theory of turbulence, in geoscience, machine learning and data science, medical science, engineering, imaging, computer graphics, statistical mechanics and statistics, biology and cosmology [58]. Gaussian random fields (GRFs) are of special significance as they are more mathematically tractable and can occur spontaneously in systems with a large number of degrees of freedom via the central limit theorem. A GRF is defined with respect to a probability space/triplet as follows:

**Definition 2.1**.: _(**Formal definition of Gaussian random fields**) Let \((\Omega,\mathscr{F},\mathbb{P})\) be a probability space. Here \(\mathbb{P}\) is a function such that \(\mathbb{P}:\mathscr{F}\to[0,1]\), so that for all \(\mathcal{B}\in\mathscr{F}\) there is an associated probability \(\mathbb{P}(\mathcal{B})\), and the measure is a probability measure when \(\mathbb{P}(\Omega)=1\). Let \(x\in\mathbf{D}\subset\mathbf{R}^{n}\) be Euclidean coordinates, and let \(\psi(x;\omega)\) be a random scalar function that depends on the coordinates \(x\in\mathbf{D}\subset\mathbf{R}^{n}\) and also on \(\omega\in\Omega\). Given any pair \((x,\omega)\) there \(\exists\) a map \(\mathfrak{M}:\mathbf{R}^{n}\times\Omega\to\mathbf{R}\) such that \(\mathfrak{M}:(\omega,x)\longrightarrow\psi(x;\omega)\), so that \(\psi(x;\omega)\) is a **random variable or field** on \(\mathbf{D}\subset\mathbf{R}^{n}\) with respect to the probability space \((\Omega,\mathscr{F},\mathbb{P})\). A random field is then essentially a family of random variables \(\{\psi(x;\omega)\}\) defined with respect to the space \((\Omega,\mathscr{F},\mathbb{P})\) and \(\mathbf{R}^{n}\). The fields can also include a time variable \(t\in\mathbf{R}^{+}\), so that given any triplet \((x,t,\omega)\) there is a mapping \(\mathfrak{M}:\mathbf{R}\times\mathbf{R}^{n}\times\Omega\to\mathbf{R}\) such that \(\mathfrak{M}:(x,t,\omega)\hookrightarrow\psi(x,t;\omega)\) is a **spatio-temporal random field**. Normally, the field will be expressed in the form \(\psi(x,t)\) or \(\psi(x)\), with \(\omega\) dropped. From here, only spatial fields \(\psi(x)\) will be considered. The random field \(\psi(x)\) will have the following bounds and continuity properties [REFs]: \(\mathbb{P}[\sup_{x\in\mathbf{D}}|\psi(x)|\ <\ \infty]\ =1\) and \(\mathbb{P}[\lim_{x\to y}\big{|}\psi(x)-\psi(y)\big{|}=0,\ \forall\ (x,y)\in\mathbf{D}]=1\)._

**Lemma 2.2**.: _The random field is, at the least, mean-square differentiable in that [56], [62], [64]_

\[\nabla_{j}\psi(x)=\frac{\partial}{\partial x_{j}}\psi(x)=\lim_{\ell\to 0}\big{\{}\psi(x+|\ell|\mathbf{e}_{j})-\psi(x)\big{\}}|\ell|^{-1} \tag{2.1}\]

_where \(\mathbf{e}_{j}\) is a unit vector in the \(j^{th}\) direction. For a Gaussian field, sufficient conditions for differentiability can be given in terms of the covariance or correlation function, which must be regulated at \(x=y\). The derivatives of the field \(\nabla_{i}\psi(x),\nabla_{i}\nabla_{j}\psi(x)\) exist at least up to 2nd order, as do line, surface and volume integrals \(\int_{\mathbf{D}}\!\psi(x)d\mu(x)\).
The derivatives or integrals of a random field are also random fields._

**Definition 2.3**.: _The stochastic expectation \(\mathbb{E}[\bullet]\) and binary correlation with respect to the space \((\Omega,\mathscr{F},\mathbb{P})\) are defined as follows, with \((\omega,\vartheta)\in\Omega\)_

\[\mathbb{E}[\bullet]=\int_{\Omega}\bullet\ d\mathbb{P}[\omega],\ \mathbb{E}[\bullet\times\bullet]=\iint_{\Omega}\bullet\times\bullet\ d\mathbb{P}[\omega]d\mathbb{P}[\vartheta] \tag{2.2}\]

_For Gaussian random fields \(\bullet=\psi(x)\) only the binary correlation or covariance is required, so that_

\[\mathbb{E}[\psi(x)]=\int_{\Omega}\!\psi(x;\omega)\ d\mathbb{P}[\omega]=0\]
\[\mathbb{E}[\psi(x)\psi(y)]=\iint_{\Omega}\!\psi(x;\omega)\psi(y;\vartheta)\ d\mathbb{P}[\omega]d\mathbb{P}[\vartheta]=K(\|x-y\|;\lambda) \tag{2.3}\]

_which is regulated at \(x=y\) for all \((x,y)\in\mathbf{D}\) if \(\mathbb{E}[\psi(x)\psi(x)]\leq\beta<\infty\)._

**Definition 2.4**.: _Two random fields \(\psi(x),\psi(y)\) defined for any \((x,y)\in\mathbf{D}\) are correlated if_

\[\mathbb{E}[\psi(x)\psi(y)]\neq 0\]

_and uncorrelated if_

\[\mathbb{E}[\psi(x)\psi(y)]=0 \tag{2.4}\]

**Definition 2.5**.: _The covariance function of a zero-centred Gaussian random field is_

\[cov\left(\psi(x),\psi(y)\right)=\mathbb{E}[\psi(x)\psi(y)]-\mathbb{E}[\psi(x)]\mathbb{E}[\psi(y)]=\mathbb{E}[\psi(x)\psi(y)]=K(\|x-y\|;\lambda) \tag{2.5}\]

_so that the binary correlation and the covariance are equivalent. Here \(\lambda\) is the correlation length. The GRF is isotropic if \(K(\|x-y\|;\lambda)=K(\|y-x\|;\lambda)\) depends only on the separation \(\|x-y\|\), and is stationary if \(K(\|(x+\delta x)-(y+\delta y)\|;\lambda)=K(\|x-y\|;\lambda)\). Hence, the 2-point function \(K(\|x-y\|;\lambda)\) is translationally and rotationally invariant in \(\mathbf{R}^{d}\) for all \(\delta x>0\) and \(\delta y>0\)._

Typical covariances for Gaussian random fields \(\psi(x)\) are the rational quadratic form

\[\mathbb{E}[\psi(x)\psi(y)]=K(\|x-y\|;\lambda)=\beta\left(1+\frac{\|x-y\|^{2}}{2\alpha\lambda^{2}}\right)^{-\alpha} \tag{2.6}\]

where \(\lambda\) is the correlation length and \(\alpha\) is the 'scale-mixing' parameter. Another commonly used covariance kernel is the Gaussian

\[\mathbb{E}[\psi(x)\psi(y)]=K(\|x-y\|;\lambda)=\beta\exp\left(-\frac{|x-y|^{2}}{\lambda^{2}}\right) \tag{2.7}\]

In both cases, in the limit that \(\lambda\to 0\), the noise reduces to a white-in-space noise which is delta correlated

\[\mathbb{E}[\psi(x)\psi(y)]\longrightarrow\mathbb{E}[\mathcal{W}(x)\mathcal{W}(y)]=C\delta^{3}(x-y) \tag{2.8}\]

and the 2nd-order moment blows up in that \(\mathbb{E}[|\psi(x)|^{2}]=\infty\). The random field or noise \(\psi(x)\) is differentiable because the field is regulated in that

\[\mathbb{E}[\psi(x)\psi(x)]=K(0;\lambda)=\beta<\infty \tag{2.9}\]

Gaussian random fields also have a Fourier representation in \(k\)-space.

**Definition 2.6**.: _If \(\mathfrak{F}:\mathbf{R}\rightarrow\mathbf{K}\) is a Fourier transform then a generic random Gaussian scalar field \(\psi(x)\) is said to be **harmonisable** if_

\[\psi(x)=\int_{\mathbb{K}^{3}}\exp(ik_{i}x^{i})\psi(k)d^{3}k \tag{2.10}\]

_Let \(\psi(x)\) be an arbitrary harmonisable Gaussian random scalar field existing for all \(x\in\mathbf{R}^{3}\).
Given the basic Fourier representation of the binary correlation_

\[\mathbb{E}[\psi(x)\psi(y)]=\int_{\mathbb{K}^{3}}\!d^{3}k\Phi(k)\exp(ik_{i}(x-y)^{i}) \tag{2.11}\]

_where \(\Phi(k)\) is a spectral function, then for \(x=y\), \(\mathbb{E}\left[\psi(x)\ \psi(x)\right]=\int_{\mathbb{K}^{3}}\Phi(k)d^{3}k\)._

For \(\Phi(k)=1\), for example, one recovers an unregulated white noise with

\[\mathbb{E}[\psi(x)\psi(y)]=\int_{\mathbb{K}^{3}}\!d^{3}k\exp(ik_{i}(x-y)^{i})=\mathbb{E}[\mathcal{W}(x)\ \mathcal{W}(y)]=C\delta^{3}(x-y) \tag{2.12}\]

and for \(\Phi(k)=\frac{\beta}{k^{2}}\exp\left(-\frac{1}{4}\lambda^{2}k^{2}\right)\), one recovers the kernel (2.7). It is possible to construct or 'engineer' new kernels from the standard kernels. (This kernel will be applied later.)

**Proposition 2.7**.: _Let \(\psi:\mathbf{D}\times\Omega\rightarrow\mathbf{R}\) be a GRF existing for all \(x\in\mathbf{D}\). Let \(f:\mathbf{D}\times\mathbf{D}\rightarrow\{-1,0,1\}\) be an antisymmetric function such that \(f(x,y)=-f(y,x)=1\) and \(f(y,x)=-1\), with \(f(x,x)=f(y,y)=0\). Given any standard stationary and isotropic kernel \(K(\|x-y\|;\lambda)\) with correlation length \(\lambda\), an antisymmetric kernel is then_

\[\mathscr{K}(\|x-y\|;\lambda)=\mathbb{E}[\psi(x)\psi(y)]=f(x,y)K(\|x-y\|;\lambda) \tag{2.13}\]

_The random fields then commute for any pair \((x,y)\in\mathbf{D}\), so that_

\[\llbracket\psi(x),\psi(y)\rrbracket=\psi(x)\psi(y)-\psi(y)\psi(x)=0 \tag{2.14}\]

_But the expectations do not commute_

\[\mathbb{E}\big{[}[\![\psi(x),\psi(y)]\!]\big{]}=\mathbb{E}[\psi(x)\psi(y)]-\mathbb{E}[\psi(y)\psi(x)]=2\mathbb{E}[\psi(x)\psi(y)] \tag{2.15}\]

Correlations involving the first derivative \(\nabla_{i}\psi(x)\) vanish.

**Lemma 2.8**.:

\[\mathbb{E}[\psi(x)\nabla_{i}\psi(x)]=0\] (2.16)
\[\mathbb{E}[\nabla_{i}\psi(x)\nabla_{j}\psi(x)]=0\] (2.17)

Proof.: Let \(\nabla_{i}^{(x)}=\frac{\partial}{\partial x^{i}}\) so that \(\nabla_{i}^{(x)}\psi(y)=0\). Then

\[\nabla_{i}^{(x)}\mathbb{E}[\psi(x)\psi(y)]=\mathbb{E}[\nabla_{i}^{(x)}\psi(x)\psi(y)]=\nabla_{i}f(x,y)K(\|x-y\|;\lambda)+f(x,y)\nabla_{i}^{(x)}K(\|x-y\|;\lambda)=f(x,y)\nabla_{i}^{(x)}K(\|x-y\|;\lambda)=f(x,y)d\left(\frac{\|x-y\|}{\alpha\lambda^{2}}\right)\left(1+\frac{\|x-y\|^{2}}{2\alpha\lambda^{2}}\right)^{-\alpha-1} \tag{2.18}\]

since \(\nabla_{i}x^{i}=d\). Taking the limit \(y\to x\) gives

\[\mathbb{E}[\psi(x)\nabla_{i}\psi(x)]=\lim_{y\to x}\mathbb{E}[\nabla_{i}^{(x)}\psi(x)\psi(y)]=\lim_{y\to x}f(x,y)d\left(\frac{\|x-y\|}{\alpha\lambda^{2}}\right)\left(1+\frac{\|x-y\|^{2}}{2\alpha\lambda^{2}}\right)^{-\alpha-1}=0 \tag{2.19}\]

Taking the derivative again then leads to (2.17).

### Random vector fields engineered from random scalar fields

Given the random (scalar) field \(\psi(x)\) and a smooth deterministic spatio-temporal vector field \(X_{i}(x,t)\), a new random vector field can be defined as follows.

**Proposition 2.9**.: _Let \(X_{i}:\ \mathbf{D}\times\mathbf{R}^{+}\to\mathbf{R}^{d}\) be a smooth deterministic vector field existing for all \((x,t)\in\mathbf{D}\times\mathbf{R}^{+}\).
By smooth, we mean that the first and second derivatives \(\nabla_{j}X_{i}(x,t),\Delta X_{i}(x,t)\) exist, and deterministic is taken to mean that the field \(X_{i}(x,t)\) evolves from initial data \(X_{i}(x,0)=X_{i}^{\theta}(x)\) by some non-linear PDE such that_

\[\partial_{t}X_{i}(x,t)+\mathscr{D}_{N}\big{[}\nabla,\Delta,X_{i}(x,t)\big{]}X_{i}(x,t)\equiv\partial_{t}X_{i}(x,t)+\mathscr{D}_{N}X_{i}(x,t)=0 \tag{2.20}\]

_where \(\mathscr{D}_{N}=\mathscr{D}_{N}[\nabla,\Delta,X_{i}]\) is a nonlinear differential operator. A trivial solution is \(X_{i}(x,t)=X_{i}\). The Gaussian field \(\psi(x)\) has the properties_

\[\mathbb{E}[\psi(x)]=0 \tag{2.21}\]
\[\mathbb{E}[\psi(x)\psi(x)]=0\] (2.22)
\[\mathbb{E}[\nabla_{i}\psi(x)]=0\] (2.23)
\[\mathbb{E}[\Delta\psi(x)]=0\] (2.24)
\[\mathscr{K}(\|x-y\|;\lambda)=\mathbb{E}[\psi(x)\psi(y)]=f(x,y)K(\|x-y\|;\lambda) \tag{2.25}\]

_Then a random vector field \(\mathcal{X}_{i}(x,t)\) in d dimensions can be defined by a 'mixing' ansatz_

\[\mathcal{X}_{i}(x,t)=X_{i}(x,t)+\frac{\theta}{\sqrt{d(d+2)}}X_{i}(x,t)\psi(x) \tag{2.26}\]

_so that the expected value is_

\[\mathbb{E}[\mathcal{X}_{i}(x,t)]=X_{i}(x,t)+\frac{\theta}{\sqrt{d(d+2)}}X_{i}(x,t)\mathbb{E}[\psi(x)]=X_{i}(x,t) \tag{2.27}\]

_where \(\theta>0\) is a small real parameter._

The random vector field then satisfies a stochastically averaged nonlinear SPDE. First, using (2.20),

\[\partial_{t}\mathcal{X}_{i}(x,t)+\mathscr{D}_{N}\mathcal{X}_{i}(x,t)=\partial_{t}X_{i}(x,t)+\mathscr{D}_{N}X_{i}(x,t)+\partial_{t}\left\{\frac{\theta}{\sqrt{d(d+2)}}X_{i}(x,t)\psi(x)\right\}+\mathscr{D}_{N}\left\{\frac{\theta}{\sqrt{d(d+2)}}X_{i}(x,t)\psi(x)\right\}=\partial_{t}\left\{\frac{\theta}{\sqrt{d(d+2)}}X_{i}(x,t)\psi(x)\right\}+\mathscr{D}_{N}\left\{\frac{\theta}{\sqrt{d(d+2)}}X_{i}(x,t)\psi(x)\right\} \tag{2.28}\]

The stochastically averaged SPDE is then

\[\mathbb{E}[\partial_{t}\mathcal{X}_{i}(x,t)+\mathscr{D}_{N}\mathcal{X}_{i}(x,t)]=\partial_{t}X_{i}(x,t)+\mathscr{D}_{N}X_{i}(x,t)+\mathbb{E}\left[\mathscr{D}_{N}\left\{\frac{\theta}{\sqrt{d(d+2)}}X_{i}(x,t)\psi(x)\right\}\right] \tag{2.29}\]

New terms may arise in general upon taking the expectation, since the underlying PDE is nonlinear.

### Structure functions

The structure functions for the vector field \(\mathcal{X}_{i}(x,t)\) are now defined. In the Kolmogorov turbulence theory, the third-order structure function in 3 dimensions leads to the famous 4/5 scaling law, and the second-order structure function to the 2/3-law **[1-5]**. It will be shown how the same mathematical form arises from the structure functions of the random field \(\mathcal{X}_{i}(x,t)\). The second-order structure function is also equivalent to the square of the canonical metric for random fields.

**Definition 2.10**.: _Given the GRVF \(\mathcal{X}_{i}(x,t)\) for all \((x,t)\in\mathbf{D}\times\mathbf{R}^{+}\), the **canonical metric** is defined as [58]_

\[d_{2}(x,y|t)\equiv d_{2}(x,y)=\sqrt{\mathbb{E}\left[\big{|}\mathcal{X}_{i}(y,t)-\mathcal{X}_{i}(x,t)\big{|}^{2}\right]} \tag{2.30}\]

_The **structure functions**\(S[\mathcal{X}]\) of \(\mathcal{X}_{i}(x,t)\) are then equivalent to the square of the canonical metric_

\[S_{2}[\|x-y\|]\equiv d_{2}^{2}(x,y)=\mathbb{E}\left[\big{|}\mathcal{X}_{i}(y,t)-\mathcal{X}_{i}(x,t)\big{|}^{2}\right] \tag{2.31}\]

_If \(S_{2}[\mathcal{X}]\) obeys a scaling law over some range of length scales \(\ell=|y-x|\leq L\) then one expects_

\[S_{2}[\mathcal{X}]=\mathbb{E}\left[\big{|}\mathcal{X}_{i}(y,t)-\mathcal{X}_{i}(x,t)\big{|}^{2}\right]\sim C|y-x|^{\zeta}=C\ell^{\zeta} \tag{2.32}\]

_where \(\zeta>0\) is some (fractional) power.
If \(y=x+\ell\) with \(\ell\ll L\) and \(\mathrm{Vol}(\mathbf{D})\sim L^{3}\), then_

\[S_{2}[\ell]\equiv d_{2}^{2}(x,x+\ell)=\mathbb{E}\left[\big{|}\mathcal{X}_{i}(x+\ell,t)-\mathcal{X}_{i}(x,t)\big{|}^{2}\right]=\mathbb{E}\left[\big{|}\mathcal{X}_{i}(x+\ell,t)\big{|}^{2}\right]-2\mathbb{E}\left[\mathcal{X}_{i}(x+\ell,t)\mathcal{X}_{i}(x,t)\right]+\mathbb{E}\left[\big{|}\mathcal{X}_{i}(x,t)\big{|}^{2}\right] \tag{2.33}\]

_If \(S_{2}[\ell]\) obeys a scaling law then one expects_

\[S_{2}[\ell]=\mathbb{E}\left[\big{|}\mathcal{X}_{i}(x+\ell,t)-\mathcal{X}_{i}(x,t)\big{|}^{2}\right]\sim C\ell^{\zeta} \tag{2.34}\]

Figure 1. Separation between the random fields at two points in \(\mathbf{D}\)

**Corollary 2.11**.: _If \(\boldsymbol{\ell}=0\) or \(x=y\) then_

\[S_{2}[\ell=0]=d_{2}^{2}(x,x)=0 \tag{2.35}\]

It will be convenient always to assume that \(S_{2}[\mathcal{X}]\) is continuous, so that

\[\lim_{y\to x}d_{2}^{2}(x,y)=\lim_{y\to x}\mathbb{E}\left[\left|\mathcal{X}_{i}(y,t)-\mathcal{X}_{i}(x,t)\right|^{2}\right]=\mathbb{E}\left[\lim_{y\to x}\left|\mathcal{X}_{i}(y,t)-\mathcal{X}_{i}(x,t)\right|^{2}\right] \tag{2.36}\]

which is equivalent to the condition

\[\mathbb{P}\big{[}\lim_{y\to x}\left|\mathcal{X}_{i}(y,t)-\mathcal{X}_{i}(x,t)\right|=0,\ \forall\ (x,y)\in\mathbf{D}\big{]}=1 \tag{2.37}\]

which is also consistent with a scaling law.

### Emergence of a 4/5-law for the 3rd-order structure function

The main theorem now follows. Beginning with the 'engineered' random field \(\mathcal{X}_{i}(x,t)\), it is shown that a 4/5 law emerges when the 3rd-order structure function is computed.

**Theorem 2.12**.: _(Emergence of a 4/5-law via 'engineered' random vector fields in \(\mathbf{D}\subset\mathbf{R}^{d}\)) Let a vector field \(X_{i}(x,t)\) evolve within a domain \(\mathbf{D}\) of volume \(Vol(\mathbf{D})\sim L^{d}\) via some non-linear PDE_

\[\partial_{t}X_{i}(x,t)+\mathscr{D}_{N}X_{i}(x,t)=0,\ (x,t)\in\mathbf{D}\times\mathbf{R}^{+} \tag{2.38}\]

_and from some initial Cauchy data \(X_{i}(x,0)=g_{i}(x)\), where \(\mathscr{D}_{N}\) is a nonlinear operator involving \(\nabla,\Delta\) and \(X_{i}(x,t)\) itself. A trivial steady state solution is then \(X_{i}(x,t)=X_{i}=const.\) Let \(\psi(x)\) be a random Gaussian scalar field as previously defined, having an antisymmetric rational quadratic covariance kernel_

\[\mathscr{K}(x,y;\lambda)=\mathbb{E}[\psi(x)\psi(y)]=f(x,y)K(\|x-y\|;\lambda) \tag{2.39}\]

_Here, \(f(x,y)\) is an antisymmetric function \(f:\mathbf{D}\times\mathbf{D}\rightarrow\{-1,0,1\}\) such that \(f(x,y)=-f(y,x)\) with \(f(x,y)=1\) for all \((x,y)\in\mathbf{D}\), and \(f(y,x)=-1\) with \(f(x,x)=f(y,y)=0\). Then \(\nabla_{i}^{(x)}f(x,y)=\nabla_{j}^{(y)}f(x,y)=0\).
The kernel \(K(\|x-y\|;\lambda)\) is any standard stationary and isotropic covariance kernel for Gaussian random fields; for example, a rational quadratic covariance with scale-mixing parameter \(\alpha\) gives_

\[\mathscr{K}(x,y;\lambda)=\mathbb{E}[\psi(x)\psi(y)]=\beta f(x,y)\left(1+\frac{\ell^{2}}{2\alpha\lambda^{2}}\right)^{-\alpha},\ (\alpha,\beta>0) \tag{2.40}\]

_For a Gaussian kernel_

\[\mathscr{K}(x,y;\lambda)=\mathbb{E}[\psi(x)\psi(y)]=\beta f(x,y)\exp\left(-\frac{\|x-y\|^{2}}{2\lambda^{2}}\right) \tag{2.41}\]

_Then for \(y=x+\ell\)_

\[\mathbb{E}[\psi(x)]=0 \tag{2.42}\]
\[\mathbb{E}[\psi(x)\psi(x)]=0\] (2.43)
\[\mathbb{E}[\psi(x+\ell)\psi(x+\ell)]=0\] (2.44)
\[\mathbb{E}[\psi(x)\psi(x+\ell)]=f(x,x+\ell)K(\|\ell\|;\lambda)\] (2.45)
\[\mathbb{E}[\psi(x+\ell)\psi(x)]=-f(x,x+\ell)K(\|\ell\|;\lambda)\equiv f(x+\ell,x)K(\|\ell\|;\lambda)\] (2.46)
\[\mathbb{E}[\psi(x)\psi(x)\psi(x)]=0\] (2.47)
\[\mathbb{E}[\psi(x+\ell)\psi(x+\ell)\psi(x)]=0\] (2.48)
\[\mathbb{E}[\psi(x)\psi(x)\psi(x+\ell)]=0\] (2.49)
\[\mathbb{E}[\psi(x+\ell)\psi(x+\ell)\psi(x+\ell)]=0 \tag{2.50}\]

_since all odd moments vanish for a GRF. Now let \(\mathbf{Q}=[0,L]\) so that_

\[\mathbf{Q}=\mathbf{Q}_{1}\bigcup\mathbf{Q}_{2}=[0,\lambda]\bigcup(\lambda,L]\]

_then either \(\ell\in\mathbf{Q}_{1}\) or \(\ell\in\mathbf{Q}_{2}\). We now 'engineer' the following random vector field within \(\mathbf{D}\subset\mathbf{R}^{d}\)._

\[\mathcal{X}_{i}(x,t)=X_{i}(x,t)+\frac{\theta}{\sqrt{d(d+2)}}X_{i}(x,t)\psi(x) \tag{2.51}\]

_where \(\theta\) is a free parameter (below we set \(\beta=1\) and \(\theta=1\)), so that \(\mathbb{E}[\mathcal{X}_{i}(x,t)]=X_{i}(x,t)\). The 3rd-order structure function is then_

\[S_{3}(\ell)=\mathbb{E}[|\mathcal{X}_{i}(x+\ell,t)-\mathcal{X}_{i}(x,t)|^{3}] \tag{2.52}\]

_Computing \(S_{3}(\ell)\) and then letting \(X_{i}(x,t)\to X_{i}=(0,0,X)\), one obtains_

\[S_{3}(\ell)=-\frac{12}{d(d+2)}\theta^{2}\beta\|X_{i}\|^{3}K(\|\ell\|;\lambda) \tag{2.53}\]

_In three dimensions, \(d=3\), and choosing \(\theta=1\) gives_

\[S_{3}(\ell)=-\frac{12}{15}\|X_{i}\|^{3}K(\|\ell\|;\lambda)=-\frac{4}{5}\|X_{i}\|^{3}K(\|\ell\|;\lambda) \tag{2.54}\]

_Choosing the kernel (2.40) with \(\beta=1\)_

\[S_{3}(\ell)=-\frac{12}{15}\|X_{i}\|^{3}K(\|\ell\|;\lambda)=-\frac{4}{5}\|X_{i}\|^{3}\left(1+\frac{\ell^{2}}{2\alpha\lambda^{2}}\right)^{-\alpha} \tag{2.55}\]

_Then for \(\ell\in\mathbf{Q}_{1}=[0,\lambda]\), with \(\ell\ll\lambda\), the term \(\frac{\ell^{2}}{2\alpha\lambda^{2}}\) is very close to zero, so that_

\[S_{3}(\ell)=-\frac{4}{5}\|X_{i}\|^{3} \tag{2.56}\]

_holds over this range of length scales.
This is a 4/5-law._

Proof.: The random fields at \(x\) and \(x+\ell\) are

\[\mathcal{X}_{i}(x,t)=X_{i}(x,t)+\frac{\theta}{\sqrt{d(d+2)}}X_{i}(x,t)\psi(x) \tag{2.57}\]
\[\mathcal{X}_{i}(x+\ell,t)=X_{i}(x+\ell,t)+\frac{\theta}{\sqrt{d(d+2)}}X_{i}(x+\ell,t)\psi(x+\ell) \tag{2.58}\]

Expanding out \(|\mathcal{X}_{i}(x+\ell,t)-\mathcal{X}_{i}(x,t)|^{3}\) produces four families of terms: purely deterministic terms; terms linear in \(\psi\) with prefactor \(\frac{\theta}{\sqrt{d(d+2)}}\); terms quadratic in \(\psi\) with prefactor \(\frac{\theta^{2}}{d(d+2)}\); and terms cubic in \(\psi\) with prefactor \(\frac{\theta^{3}}{(d(d+2))^{3/2}}\). Taking the stochastic expectation and using (2.42)-(2.50): every linear term vanishes since \(\mathbb{E}[\psi(x)]=\mathbb{E}[\psi(x+\ell)]=0\); every cubic term vanishes since all odd moments of a GRF vanish; and every quadratic term evaluated at equal points vanishes since \(\mathbb{E}[\psi(x)\psi(x)]=\mathbb{E}[\psi(x+\ell)\psi(x+\ell)]=0\). Only the quadratic cross terms and the deterministic terms survive, so that

\[S_{3}(\ell)=\mathbb{E}[|\mathcal{X}_{i}(x+\ell,t)-\mathcal{X}_{i}(x,t)|^{3}]=-6\frac{\theta^{2}}{d(d+2)}X_{i}(x)X_{i}(x+\ell)X_{i}(x+\ell)\mathbb{E}[\psi(x)\psi(x+\ell)]+6\frac{\theta^{2}}{d(d+2)}X_{i}(x)X_{i}(x)X_{i}(x+\ell)\mathbb{E}[\psi(x+\ell)\psi(x)]+X_{i}(x+\ell)X_{i}(x+\ell)X_{i}(x+\ell)-3X_{i}(x)X_{i}(x+\ell)X_{i}(x+\ell)+3X_{i}(x)X_{i}(x)X_{i}(x+\ell)-X_{i}(x)X_{i}(x)X_{i}(x) \tag{2.63}\]

Using \(\mathbb{E}[\psi(x)\psi(x+\ell)]=f(x,x+\ell)K(\|\ell\|;\lambda)\) and \(\mathbb{E}[\psi(x+\ell)\psi(x)]=-f(x,x+\ell)K(\|\ell\|;\lambda)\) from (2.45) and (2.46), now let \(X_{i}(x,t)=X_{i}(x+\ell,t)=X_{i}=(0,0,X)\) with \(\|X_{i}\|=X\), for all \((x,t)\in\mathbf{D}\times\mathbf{R}^{+}\). The deterministic terms cancel, and with \(f(x,x+\ell)=1\)

\[S_{3}(\ell)=-6\frac{\theta^{2}}{d(d+2)}X_{i}X_{i}X_{i}K(\|\ell\|;\lambda)-6\frac{\theta^{2}}{d(d+2)}X_{i}X_{i}X_{i}K(\|\ell\|;\lambda)=-12\frac{\theta^{2}}{d(d+2)}\|X_{i}\|^{3}K(\|\ell\|;\lambda) \tag{2.64}\]

Setting \(\theta=1\), since this parameter is arbitrary, and in three dimensions \(d=3\), so that

\[S_{3}(\ell)=-\frac{12\theta^{2}}{15}\|X_{i}\|^{3}K(\|\ell\|;\lambda)=-\frac{4}{5}\|X_{i}\|^{3}K(\|\ell\|;\lambda) \tag{2.65}\]

If the rational quadratic covariance (2.6) is chosen then

\[S_{3}(\ell)=-\frac{4}{5}\|X_{i}\|^{3}K(\|\ell\|;\lambda)=-\frac{4}{5}\|X_{i}\|^{3}\beta\left(1+\frac{|\ell|^{2}}{2\alpha\lambda^{2}}\right)^{-\alpha} \tag{2.66}\]

and setting \(\beta=1\)

\[S_{3}(\ell)=-\frac{4}{5}\|X_{i}\|^{3}K(\|\ell\|;\lambda)=-\frac{4}{5}\|X_{i}\|^{3}\left(1+\frac{|\ell|^{2}}{2\alpha\lambda^{2}}\right)^{-\alpha} \tag{2.67}\]

For the range of length scales for which \(\ell/\lambda\) is very small, that is \(\ell\in\mathbf{Q}_{1}=[0,\lambda]\) with \(\frac{\ell^{2}}{2\alpha\lambda^{2}}\ll 1\), then

\[\boxed{S_{3}(\ell)=-\frac{4}{5}\|X_{i}\|^{3}} \tag{2.68}\]

which is a \(4/5\) law.

**Remark 2.13**.: _The Gaussian-decaying kernel (2.41) gives the same result, since (2.66) then becomes_

\[S_{3}(\ell)=-12\frac{\theta^{2}}{15}\|X_{i}\|^{3}K(\|\ell\|;\lambda)=-\frac{4}{5}\|X_{i}\|^{3}K(\|\ell\|;\lambda)=-\frac{4}{5}\|X_{i}\|^{3}\beta\exp\left(-\frac{\ell^{2}}{\lambda^{2}}\right) \tag{2.69}\]

_which is \(S_{3}(\ell)=-\frac{4}{5}\|X_{i}\|^{3}\) for \(\beta=1\) and \(\ell/\lambda\sim 0\) being very small._

**Remark 2.14**.: _Notice that there are two possibilities for the 3rd-order structure function. For \(X_{i}(x)=X_{i}(x+\ell)=X_{i}\) one can have_

\[S_{3}(\ell)=-\frac{6\theta^{2}}{d(d+2)}X_{i}X_{i}X_{i}\mathbb{E}[\psi(x)\psi(x+\ell)]+\frac{6\theta^{2}}{d(d+2)}X_{i}X_{i}X_{i}\mathbb{E}[\psi(x+\ell)\psi(x)] \tag{2.70}\]

_or_

\[S_{3}^{*}(\ell)=-\frac{6\theta^{2}}{d(d+2)}X_{i}X_{i}X_{i}\mathbb{E}[\psi(x)\psi(x+\ell)]+\frac{6\theta^{2}}{d(d+2)}X_{i}X_{i}X_{i}\mathbb{E}[\psi(x)\psi(x+\ell)] \tag{2.71}\]

_since \(\psi(x)\psi(x+\ell)\equiv\psi(x+\ell)\psi(x)\) but \(\mathbb{E}[\psi(x)\psi(x+\ell)]=-\mathbb{E}[\psi(x+\ell)\psi(x)]\). Then \(S_{3}^{*}(\ell)=0\) and \(S_{3}(\ell)=-\frac{12}{d(d+2)}X_{i}^{3}\mathbb{E}[\psi(x)\psi(x+\ell)]=-\frac{12}{d(d+2)}X_{i}^{3}f(x,y)K(\|x-y\|;\lambda)\). Hence, the non-vanishing option is chosen._
### Expression for the 2nd-order structure function

Next, the 2nd-order structure function \(S_{2}(\ell)\) is computed for the same random field \(\mathcal{X}_{i}(x,t)\).

**Theorem 2.15**.: _Let the same scenario as in Theorem 2.12 hold. Then the 2nd-order structure function is_

\[S_{2}(\ell)=\mathbb{E}[|\mathcal{X}_{i}(x+\ell)-\mathcal{X}_{i}(x)|^{2}] \tag{2.72}\]

_The random field is again given by (2.51), so that_

\[\mathcal{X}_{i}(x,t)=X_{i}(x,t)+\frac{\theta}{\sqrt{d(d+2)}}X_{i}(x,t)\psi(x) \tag{2.73}\]

_with \(\psi(x)\) having the same properties and covariance as before. Expanding gives_

\[|\mathcal{X}_{i}(x+\ell)-\mathcal{X}_{i}(x)|^{2}=\frac{\theta^{2}}{d(d+2)}X_{i}(x+\ell)X_{i}(x+\ell)\psi(x+\ell)\psi(x+\ell)+\frac{\theta^{2}}{d(d+2)}X_{i}(x)X_{i}(x)\psi(x)\psi(x)-2\frac{\theta^{2}}{d(d+2)}X_{i}(x)X_{i}(x+\ell)\psi(x+\ell)\psi(x)+2\frac{\theta}{\sqrt{d(d+2)}}X_{i}(x+\ell)X_{i}(x+\ell)\psi(x+\ell)-2\frac{\theta}{\sqrt{d(d+2)}}X_{i}(x)X_{i}(x+\ell)\psi(x+\ell)-2\frac{\theta}{\sqrt{d(d+2)}}X_{i}(x)X_{i}(x+\ell)\psi(x)+2\frac{\theta}{\sqrt{d(d+2)}}X_{i}(x)X_{i}(x)\psi(x)+X_{i}(x+\ell)X_{i}(x+\ell)-2X_{i}(x)X_{i}(x+\ell)+X_{i}(x)X_{i}(x) \tag{2.74}\]

_Taking the stochastic expectation, the equal-point quadratic terms vanish since \(\mathbb{E}[\psi(x)\psi(x)]=\mathbb{E}[\psi(x+\ell)\psi(x+\ell)]=0\), and all terms linear in \(\psi\) vanish since \(\mathbb{E}[\psi(x)]=\mathbb{E}[\psi(x+\ell)]=0\). Only the cross term survives, so that, using \(\mathbb{E}[\psi(x+\ell)\psi(x)]=-f(x,x+\ell)K(\|\ell\|;\lambda)\) from (2.46),_

\[S_{2}(\ell)=\mathbb{E}\left[|\mathcal{X}_{i}(x+\ell)-\mathcal{X}_{i}(x)|^{2}\right]=\frac{2\theta^{2}}{d(d+2)}X_{i}(x)X_{i}(x+\ell)f(x,x+\ell)K(\|\ell\|;\lambda)+X_{i}(x+\ell)X_{i}(x+\ell)-2X_{i}(x)X_{i}(x+\ell)+X_{i}(x)X_{i}(x) \tag{2.76}\]

_Setting \(X_{i}(x+\ell)=X_{i}(x)=X_{i}=(0,0,X)\), the deterministic terms cancel and_

\[S_{2}(\ell)=\frac{2\theta^{2}}{d(d+2)}X_{i}X_{i}f(x,x+\ell)K(\|\ell\|;\lambda)\equiv\frac{2\theta^{2}}{d(d+2)}\|X_{i}\|^{2}f(x,x+\ell)K(\|\ell\|;\lambda)\equiv\frac{2\theta^{2}}{d(d+2)}\|X_{i}\|^{2}K(\|\ell\|;\lambda) \tag{2.77}\]

_For \(d=3\)_

\[S_{2}(\ell)=\frac{2\theta^{2}}{15}\|X_{i}\|^{2}K(\|\ell\|;\lambda)=C\|X_{i}\|^{2}K(\|\ell\|;\lambda) \tag{2.78}\]

_Choosing the kernel (2.40)_

\[S_{2}(\ell)=C\|X_{i}\|^{2}\left(1+\frac{\ell^{2}}{2\alpha\lambda^{2}}\right)^{-\alpha} \tag{2.79}\]

_When \(\ell\in\mathbf{Q}=[0,\lambda]\) and \(\ell\ll\lambda\), then \((\ell^{2}/\lambda^{2})\sim 0\) and_

\[\boxed{S_{2}(\ell)=C\|X_{i}\|^{2}} \tag{2.80}\]
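Before turning to fluid mechanics, a brief numerical aside (an illustration under stated assumptions, not part of the theory above). The symmetric factor \(K\) of the kernels used throughout this section is an ordinary covariance and can be sampled and checked directly; the antisymmetric factor \(f(x,y)\) enters only through the expectation bookkeeping of (2.45)-(2.46) and is not itself the covariance of an ordinary real-valued field, so only \(K\) is sampled here.

```python
import numpy as np

rng = np.random.default_rng(2)
n, lam, alpha, beta = 200, 0.3, 2.0, 1.0
x = np.linspace(0.0, 1.0, n)
r = np.abs(x[:, None] - x[None, :])                            # pairwise separations
K = beta * (1.0 + r ** 2 / (2 * alpha * lam ** 2)) ** (-alpha) # rational quadratic kernel (2.6)
Lc = np.linalg.cholesky(K + 1e-10 * np.eye(n))                 # factor the covariance
samples = Lc @ rng.normal(size=(n, 20000))                     # 20000 realisations of psi
C_hat = samples @ samples.T / samples.shape[1]                 # empirical covariance
print(np.max(np.abs(C_hat - K)))                               # O(10^-2): estimate matches K
```

The same check works for the Gaussian kernel (2.7); only the line defining `K` changes.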
## 3. Application to Fluid Mechanics and the Emergence of the Classical Kolmogorov Scaling Laws of Turbulence

It can now be demonstrated that the mathematical form of the classical Kolmogorov scaling laws for turbulence emerges from the formalism when the random field \(\mathcal{X}_{i}(x,t)\) is identified with a turbulent velocity field for \(d=3\): both the **K41** \(2/3\)-law and the **K41** \(4/5\)-law then emerge.

### Basic results from fluid mechanics

Some basic background results from smooth or deterministic 'laminar' fluid mechanics are briefly given [33,34,50,51,54]. In the absence of turbulence, we consider a set of smooth and deterministic solutions \((U_{i}(x,t),\rho)\) of the steady state viscous Burgers equations, with the pressure gradient term set to zero in the Navier-Stokes equations. Here \(U_{i}(x,t)\) is the steady state fluid velocity at \(x\in\mathbf{D}\subset\mathbf{R}^{d}\), and \(\rho\) is the (uniform) density. For the general Burgers equations, let \(\mathbf{D}\subset\mathbf{R}^{3}\) be a compact bounded domain with \(x\in\mathbf{D}\), filled with a fluid of density \(\rho:[0,T]\times\mathbf{R}^{3}\rightarrow\mathbf{R}_{\geq}\) and velocity \(U:\mathbf{D}\times\mathbf{R}^{+}\rightarrow\mathbf{R}^{3}\) with \((U_{i}(x,t))_{1\leq i\leq d}\), so that \(\rho=\rho(x,t)\) and \(U_{i}=U_{i}(x,t)\). The Burgers equation is then

\[\partial_{t}U_{i}(x,t)+\mathscr{D}_{N}U_{i}(x,t)\equiv\partial_{t}U_{i}(x,t)-\nu\Delta U_{i}(x,t)+U^{j}(x,t)\nabla_{j}U_{i}(x,t)=0,\ (x,t)\in\mathbf{D}\times\mathbf{R}^{+} \tag{3.1}\]

The viscosity \(\nu\) of the fluid is very small, \(\nu\sim 0\), with the incompressibility condition \(\nabla_{i}U^{i}=0\).

**Definition 3.1**.: _The following will also apply:_

1. _The smooth initial Cauchy data is_ \(U(x,0)=U_{o}(x)\)_. One could also impose periodic boundary conditions if_ \(\mathbf{D}\) _is a cube or box with sides of length_ \(\mathrm{L}\)_, such that_ \(U_{i}(x+L,t)=U_{i}(x,t)\)_, or no-slip BCs_ \(U_{i}(x,t)=0,\forall\ x\in\partial\mathbf{D}\)_. For some_ \(C,K>0\)_, the initial data will also satisfy a bound of the typical form_ \(|\nabla_{x}^{\alpha}U_{o}(x)|\leq C(\alpha,K)(1+|x|)^{-K}\)_. By a_ **smooth deterministic flow**_, we mean a_ \(U_{i}(x,t)\) _which is deterministic and non-random and evolves predictably by the NS equations from some initial Cauchy data_ \(U_{i}(x,0)=U_{o}(x)\)_. An example is the simple trivial laminar flow solution with_ \(U_{i}(x,t)=U_{i}=const\)_. A generic smooth flow will be differentiable to at least 2nd order, so that_ \(\nabla_{j}U_{i}(x,t)\) _and_ \(\nabla_{i}\nabla_{j}U_{i}(x,t)\) _exist. The fluid velocity_ \(U_{i}(x,t)\) _is a divergence-free vector field that should be physically reasonable: that is, the solution should not grow too large or blow up as_ \(t\rightarrow\infty\)_._
2. _The Reynolds number within_ \(\mathbf{D}\) _with_ \(Vol(\mathbf{D})\sim L^{d}\) _is_ \[\mathscr{R}(x,t;\nu,L)=\frac{\|U_{i}(x,t)\|L}{\nu}\] (3.2) _and for a constant velocity_ \(U_{i}(x,t)=U_{i}\)_ \[\mathscr{R}=\frac{\|U_{i}\|L}{\nu}\] (3.3) _For_ \(\nu>0\) _but_ \(\nu\sim 0\)_, the Reynolds number will be very large but not infinite._
3. _Given the flow_ \(U_{i}(x,t)\) _and a closed curve or knot_ \(\Gamma\in\mathbf{D}\) _with_ \(x\in\Gamma\)_, the circulation is_ \[C(\Gamma)=\oint_{\Gamma}U_{i}(x,t)dx^{i}\] (3.4) _and the vorticity is_ \[W^{i}(x,t)=\varepsilon^{ijk}\nabla_{j}U_{k}(x,t)\] (3.5)
4. _The basic energy balance equation for a viscous fluid is obeyed: with_ \(E(t)=\|U_{i}(\bullet,t)\|_{L_{2}(\mathbf{D})}^{2}\)_,_ \[\frac{dE(t)}{dt}=\frac{d}{dt}\!\int_{\mathbf{D}}\!U_{i}(x,t)U^{i}(x,t)d^{3}x=-\nu\!\int_{\mathbf{D}}\!\nabla_{j}U_{i}(x,t)\nabla^{j}U^{i}(x,t)d^{3}x\] (3.6) _or_ \(\frac{dE(t)}{dt}=-\nu\mathfrak{E}(t)\)_, where_ \(\mathfrak{E}\) _is the enstrophy._
5. _The energy dissipation rate for a constant viscosity_ \(\nu\) _is_ \[\epsilon(x,t)=\frac{1}{2}\nu\left(\nabla_{j}U_{i}(x,t)+\nabla_{i}U_{j}(x,t)\right)^{2}\] (3.7) _If the fluid is isotropic then_ \(\nabla_{j}U_{i}=\nabla_{i}U_{j}\) _so that_ \[\epsilon(x,t)=\nu\left(\nabla_{j}U_{i}(x,t)\nabla^{j}U^{i}(x,t)\right)\] (3.8) _Then the volume integral is_ \[\int_{\mathbf{D}}\epsilon(x,t)d^{3}x=\nu\int_{\mathbf{D}}\nabla_{j}U_{i}(x,t)\nabla^{j}U^{i}(x,t)d^{3}x\] (3.9) _which is minus the rhs of (3.6) and is the enstrophy integral. The energy balance equation is then_ \[\frac{d}{dt}\!\int_{\mathbf{D}}\!U_{i}(x,t)U^{i}(x,t)d^{3}x=-\int_{\mathbf{D}}\epsilon(x,t)d^{3}x\] (3.10) _or_ \[\left|\frac{d}{dt}\!\int_{\mathbf{D}}\!U_{i}(x,t)U^{i}(x,t)d^{3}x\right|=\left|-\int_{\mathbf{D}}\epsilon(x,t)d^{3}x\right|\equiv\left|\int_{\mathbf{D}}\epsilon(x,t)d^{3}x\right|\] (3.11)

**Remark 3.2**.: _From (3.6), one might naturally expect that in the limit as \(\nu\to 0\) the energy dissipation rate \(\epsilon(x,t)\) simply vanishes. However, this (naive) expected outcome is contradicted by observation, both experimentally and numerically: the dissipation rate always remains finite as \(\nu\) vanishes, and is independent of \(\nu\). This is known as anomalous dissipation (AD) in the fluid mechanics literature. This can also be expressed as_

\[\lim_{\nu\to 0}\nu|\nabla_{j}U_{i}(x,t)|^{2}=\epsilon>0 \tag{3.12}\]

_Kolmogorov assumed AD in his original 1941 derivations of the 2/3 and 4/5 laws. The reason for AD is that, no matter how small \(\nu\) is, there is always a cascade of energy from larger to smaller scales. Onsager interpreted AD in terms of the Hölder continuity of weak solutions of the Euler equations [13], but there is no rigorous mathematical description of AD._

We now examine an 'engineered' random field \(\mathcal{U}_{i}(x,t)\) representing a turbulent fluid flow, and show that the form of the 4/5 law emerges when the 3rd-order structure function is computed.

**Proposition 3.3**.: _The random field \(\mathcal{U}_{i}(x,t)\) has the same form as (2.26), so that for all \(x\in\mathbf{D}\subset\mathbf{R}^{d},t\in\mathbf{R}^{+}\)_

\[\mathcal{U}_{i}(x,t)=U_{i}(x,t)+\frac{\theta}{\sqrt{d(d+2)}}U_{i}(x,t)\psi(x) \tag{3.13}\]

_and when \(U_{i}(x,t)=U_{i}\)_

\[\mathcal{U}_{i}(x,t)=U_{i}+\frac{\theta}{\sqrt{d(d+2)}}U_{i}\psi(x) \tag{3.14}\]

_where \(U_{i}(x,t)\) evolves by the Burgers equation and \(U_{i}\) is a trivial steady state solution. In three dimensions, with \(d=3\),_

\[\mathcal{U}_{i}(x,t)=U_{i}(x,t)+\frac{\theta}{\sqrt{15}}U_{i}(x,t)\psi(x) \tag{3.15}\]
\[\mathcal{U}_{i}(x,t)=U_{i}+\frac{\theta}{\sqrt{15}}U_{i}\psi(x) \tag{3.16}\]

_The expectation gives the mean flow velocity_

\[\mathbb{E}[\mathcal{U}_{i}(x,t)]=U_{i}(x,t)+\frac{\theta}{\sqrt{d(d+2)}}U_{i}(x,t)\mathbb{E}[\psi(x)]=U_{i}(x,t) \tag{3.17}\]

_and when \(U_{i}(x,t)=U_{i}\)_

\[\mathbb{E}[\mathcal{U}_{i}(x,t)]=U_{i}+\frac{\theta}{\sqrt{d(d+2)}}U_{i}\mathbb{E}[\psi(x)]=U_{i} \tag{3.18}\]

It can be briefly demonstrated that the turbulent flow or random field \(\mathcal{U}_{i}(x,t)\) is compatible with basic important aspects of fluid mechanics.

**Lemma 3.4**.: _The flow \(U_{i}(x,t)\) is isotropic if \(\nabla_{j}U_{i}(x,t)=\nabla_{i}U_{j}(x,t)\) and incompressible if \(\nabla^{j}U_{j}(x,t)=0\).
It then follows that for the turbulent flow_

\[\mathbb{E}[\nabla^{j}\mathcal{U}_{j}(x,t)]=0 \tag{3.19}\]
\[\mathbb{E}[\nabla_{i}\mathcal{U}_{j}(x,t)]=\mathbb{E}[\nabla_{j}\mathcal{U}_{i}(x,t)] \tag{3.20}\]

Proof.:

\[\nabla^{j}\mathcal{U}_{j}(x,t)=\nabla^{j}U_{j}(x,t)+\frac{\theta}{\sqrt{d(d+2)}}\nabla^{j}U_{j}(x,t)\psi(x)+\frac{\theta}{\sqrt{d(d+2)}}U_{j}(x,t)\nabla^{j}\psi(x) \tag{3.21}\]

The expectation is then

\[\mathbb{E}[\nabla^{j}\mathcal{U}_{j}(x,t)]=\nabla^{j}U_{j}(x,t)+\frac{\theta}{\sqrt{d(d+2)}}\nabla^{j}U_{j}(x,t)\mathbb{E}[\psi(x)]+\frac{\theta}{\sqrt{d(d+2)}}U_{j}(x,t)\mathbb{E}[\nabla^{j}\psi(x)]=\nabla^{j}U_{j}(x,t) \tag{3.22}\]

which vanishes by incompressibility, giving (3.19). Similarly

\[\mathbb{E}[\nabla^{j}\mathcal{U}_{i}(x,t)]=\nabla^{j}U_{i}(x,t)+\frac{\theta}{\sqrt{d(d+2)}}\nabla^{j}U_{i}(x,t)\mathbb{E}[\psi(x)]+\frac{\theta}{\sqrt{d(d+2)}}U_{i}(x,t)\mathbb{E}[\nabla^{j}\psi(x)]=\nabla^{j}U_{i}(x,t) \tag{3.23}\]
\[\mathbb{E}[\nabla^{i}\mathcal{U}_{j}(x,t)]=\nabla^{i}U_{j}(x,t)+\frac{\theta}{\sqrt{d(d+2)}}\nabla^{i}U_{j}(x,t)\mathbb{E}[\psi(x)]+\frac{\theta}{\sqrt{d(d+2)}}U_{j}(x,t)\mathbb{E}[\nabla^{i}\psi(x)]=\nabla^{i}U_{j}(x,t) \tag{3.24}\]

so that (3.20) follows from the isotropy of \(U_{i}(x,t)\).

**Lemma 3.5**.: _For the turbulent flow \(\mathcal{U}_{i}(x,t)\), with the Gaussian field \(\psi(x)\) having the properties_

\[\mathbb{E}[\psi(x)]=0 \tag{3.27}\]
\[\mathbb{E}[\psi(x)\psi(x)]=0\] (3.28)
\[\mathbb{E}[\nabla_{i}\psi(x)]=0\] (3.29)
\[\mathbb{E}[\nabla_{i}\psi(x)\nabla^{i}\psi(x)]=0\] (3.30)
\[\mathbb{E}[\Delta\psi(x)]=0\] (3.31)
\[\mathscr{K}(\|x-y\|;\lambda)=\mathbb{E}[\psi(x)\psi(y)]=f(x,y)K(\|x-y\|;\lambda) \tag{3.32}\]

1. _The averaged energy integral is_ \[\mathbb{E}\left[\int_{\mathbf{D}}\mathcal{U}_{i}(x,t)\mathcal{U}^{i}(x,t)d^{3}x\right]=\int_{\mathbf{D}}U_{i}(x,t)U^{i}(x,t)d^{3}x\] (3.33)
2. _The averaged energy dissipation is_ \[\mathbb{E}[\mathcal{E}(x,t)]=\nu\mathbb{E}[|\nabla_{j}\mathcal{U}_{i}\nabla^{j}\mathcal{U}^{i}|]=\nu\nabla_{j}U_{i}\nabla^{j}U^{i}=\epsilon(x,t)\] (3.34)
3. _The averaged energy balance law for a fixed viscosity_ \(\nu\) _is then_ \[\frac{d}{dt}\mathbb{E}\left[\int_{\mathbf{D}}\mathcal{U}_{i}(x,t)\mathcal{U}^{i}(x,t)d^{3}x\right]=-\nu\int_{\mathbf{D}}\mathbb{E}[|\nabla_{j}\mathcal{U}_{i}\nabla^{j}\mathcal{U}^{i}|]d^{3}x\] (3.35) _iff_ \[\frac{d}{dt}\int_{\mathbf{D}}U_{i}(x,t)U^{i}(x,t)d^{3}x=-\nu\int_{\mathbf{D}}[|\nabla_{j}U_{i}\nabla^{j}U^{i}|]d^{3}x\] (3.36) _which is the basic energy balance law for an isotropic fluid with_ \(\nabla_{i}U_{j}(x,t)=\nabla_{j}U_{i}(x,t)\)_._

Proof.: To prove (1),

\[\mathbb{E}\left[\int_{\mathbf{D}}\mathcal{U}_{i}(x,t)\mathcal{U}^{i}(x,t)d^{3}x\right]=\int_{\mathbf{D}}U_{i}(x)U^{i}(x)d^{3}x+\frac{2\theta}{\sqrt{d(d+2)}}\!\int_{\mathbf{D}}\!U_{i}(x,t)U^{i}(x,t)\mathbb{E}\left[\psi(x)\right]d^{3}x+\frac{\theta^{2}}{d(d+2)}\!\int_{\mathbf{D}}\!U_{i}(x,t)U^{i}(x,t)\mathbb{E}\left[\psi(x)\psi(x)\right]d^{3}x=\int_{\mathbf{D}}\!U_{i}(x)U^{i}(x)d^{3}x \tag{3.37}\]

To prove (2),

\[\mathbb{E}\left[\int_{\mathbf{D}}\nabla_{j}\mathcal{U}_{i}(x,t)\nabla^{j}\mathcal{U}^{i}(x,t)d^{3}x\right]=\int_{\mathbf{D}}\!\nabla_{j}U_{i}(x)\nabla^{j}U^{i}(x)d^{3}x \tag{3.39}\]

since, upon expanding \(\nabla_{j}\mathcal{U}_{i}=\nabla_{j}U_{i}+\frac{\theta}{\sqrt{d(d+2)}}\big{(}\nabla_{j}U_{i}\,\psi(x)+U_{i}\nabla_{j}\psi(x)\big{)}\), every remaining term is proportional to \(\mathbb{E}[\psi(x)]\), \(\mathbb{E}[\psi(x)\psi(x)]\), \(\mathbb{E}[\psi(x)\nabla_{j}\psi(x)]\) or \(\mathbb{E}[\nabla_{j}\psi(x)\nabla^{j}\psi(x)]\), all of which vanish. Hence (3.35) holds iff (3.36) holds.
Note that the Fubini theorem has been applied, which states that if \(\mathcal{Y}_{i}(x,t)\) is a generic random field then \(\mathbb{E}[\int\mathcal{Y}_{i}(x,t)d^{3}x]\equiv\int\mathbb{E}[\mathcal{Y}_{i}(x,t)]d^{3}x\).

The presence of eddies and vortices over a very large range of length and time scales is a characteristic feature of turbulence [57]. The extent of this range is essentially determined by the Reynolds number, which is the ratio of inertial to viscous forces. For very large but not infinite Reynolds numbers (\(\nu\sim 0\)) the inertial term or nonlinearity dominates, resulting in eddies/vortices of many scales being created, and so the flow is turbulent; for example, in the ocean and atmosphere, vortices can range from hundreds of kilometres to a few millimetres. In the Kolmogorov theory, energy is transferred from large eddies to smaller ones, down to the Kolmogorov length scale \(\eta\), and dissipated as heat. But as the viscosity is increased, turbulence tends to be suppressed; for example, turbulence is very highly suppressed or eliminated in flowing honey or olive oil. The following lemma shows that the random field \(\mathcal{U}_{i}(x,t)\) can lead to vortex 'tangling' or correlations.

**Lemma 3.6**.: _As before, the Gaussian field \(\psi(x)\) has the properties_

\[\mathbb{E}[\psi(x)]=0 \tag{3.41}\]
\[\mathbb{E}[\psi(x)\psi(x)]=0\] (3.42)
\[\mathbb{E}[\nabla_{i}\psi(x)]=0\] (3.43)
\[\mathbb{E}[\nabla_{i}\psi(x)\nabla^{i}\psi(x)]=0\] (3.44)
\[\mathbb{E}[\Delta\psi(x)]=0\] (3.45)
\[\mathscr{K}(\|x-y\|;\lambda)=\mathbb{E}[\psi(x)\psi(y)]=f(x,y)K(\|x-y\|;\lambda) \tag{3.46}\]

_Take \(d=3\) and let \(\gamma_{1},\gamma_{2}\in\mathbf{D}\subset\mathbf{R}^{3}\) be curves or knots in \(\mathbf{D}\) with \(x\in\gamma_{1},y\in\gamma_{2}\). The circulations at \((x,y)\) are then_

\[\mathrm{C}(\gamma_{1})=\int_{\gamma_{1}}U_{i}(x,t)dx^{i},\ \mathrm{C}(\gamma_{2})=\int_{\gamma_{2}}U_{j}(y,t)dy^{j} \tag{3.47}\]

_For closed curves or knots_

\[\mathrm{C}(\gamma_{1})=\oint_{\gamma_{1}}U_{i}(x,t)dx^{i},\ \mathrm{C}(\gamma_{2})=\oint_{\gamma_{2}}U_{j}(y,t)dy^{j} \tag{3.48}\]

_The turbulent flows or random fields at \((x,y)\) are_

\[\mathcal{U}_{i}(x,t)=U_{i}(x,t)+\frac{\theta}{\sqrt{15}}U_{i}(x,t)\psi(x) \tag{3.49}\]
\[\mathcal{U}_{j}(y,t)=U_{j}(y,t)+\frac{\theta}{\sqrt{15}}U_{j}(y,t)\psi(y) \tag{3.50}\]

_Stochastic circulations are then proposed as_

\[\mathcal{C}(\gamma_{1})=\oint_{\gamma_{1}}\mathcal{U}_{i}(x,t)dx^{i}=\oint_{\gamma_{1}}U_{i}(x,t)dx^{i}+\frac{\theta}{\sqrt{15}}\oint_{\gamma_{1}}U_{i}(x,t)\psi(x)dx^{i} \tag{3.51}\]
\[\mathcal{C}(\gamma_{2})=\oint_{\gamma_{2}}\mathcal{U}_{j}(y,t)dy^{j}=\oint_{\gamma_{2}}U_{j}(y,t)dy^{j}+\frac{\theta}{\sqrt{15}}\oint_{\gamma_{2}}U_{j}(y,t)\psi(y)dy^{j} \tag{3.52}\]

_with expectations_

\[\mathbb{E}[\mathcal{C}(\gamma_{1})]=\mathbb{E}\left[\oint_{\gamma_{1}}\mathcal{U}_{i}(x,t)dx^{i}\right]=\oint_{\gamma_{1}}U_{i}(x,t)dx^{i}+\frac{\theta}{\sqrt{15}}\oint_{\gamma_{1}}U_{i}(x,t)\mathbb{E}[\psi(x)]dx^{i}=C(\gamma_{1}) \tag{3.53}\]
\[\mathbb{E}[\mathcal{C}(\gamma_{2})]=\mathbb{E}\left[\oint_{\gamma_{2}}\mathcal{U}_{j}(y,t)dy^{j}\right]=\oint_{\gamma_{2}}U_{j}(y,t)dy^{j}+\frac{\theta}{\sqrt{15}}\oint_{\gamma_{2}}U_{j}(y,t)\mathbb{E}[\psi(y)]dy^{j}=C(\gamma_{2}) \tag{3.54}\]

_Now define a correlation or 'tangle' between 2 vortices as \(\mathcal{C}(\gamma_{1})\bigcap\mathcal{C}(\gamma_{2})\).
Then_ \[\mathbb{E}\left[\mathcal{C}(\gamma_{1})\bigcap\mathcal{C}(\gamma_{2})\right]=\oint\oint_{\gamma_{1},\gamma_{2}}\mathbb{E}[\mathcal{U}_{i}(x,t)\mathcal{U}_{j}(y,t)]dx^{i}dy^{j} \tag{3.55}\] _which is_ \[\mathbb{E}\left[\mathcal{C}(\gamma_{1})\bigcap\mathcal{C}(\gamma_{2})\right]=\oint\oint_{\gamma_{1},\gamma_{2}}U_{i}(x,t)U_{j}(y,t)dx^{i}dy^{j}+\oint\oint_{\gamma_{1},\gamma_{2}}f(x,y)K(\|x-y\|;\lambda)U_{i}(x,t)U_{j}(y,t)dx^{i}dy^{j} \tag{3.56}\] _Then_ \[\mathbb{E}\left[\mathcal{C}(\gamma_{1})\bigcap\mathcal{C}(\gamma_{2})\right]\longrightarrow\oint\oint_{\gamma_{1},\gamma_{2}}U_{i}(x,t)U_{j}(y,t)dx^{i}dy^{j} \tag{3.57}\] _when \(\|x-y\|\gg\lambda\), and the vortices are uncorrelated or 'untangled'._

Proof.: The proof follows from \[\mathbb{E}[\mathcal{U}_{i}(x,t)\mathcal{U}_{j}(y,t)]=U_{i}(x,t)U_{j}(y,t)+f(x,y)K(\|x-y\|;\lambda)U_{i}(x,t)U_{j}(y,t) \tag{3.58}\]

**Lemma 3.7**.: _The turbulent flow \(\mathcal{U}_{i}(x,t)\) is a solution of the stochastically averaged Burger's equation. As before, the Gaussian field \(\psi(x)\) has the properties_ \[\mathbb{E}[\psi(x)]=0 \tag{3.59}\] \[\mathbb{E}[\psi(x)\psi(x)]=0 \tag{3.60}\] \[\mathbb{E}[\nabla_{i}\psi(x)]=0 \tag{3.61}\] \[\mathbb{E}[\psi(x)\nabla_{i}\psi(x)]=0 \tag{3.62}\] \[\mathbb{E}[\nabla_{i}\psi(x)\nabla^{i}\psi(x)]=0 \tag{3.63}\] \[\mathbb{E}[\Delta\psi(x)]=0 \tag{3.64}\] \[\mathscr{K}(\|x-y\|;\lambda)=\mathbb{E}[\psi(x)\psi(y)]=f(x,y)K(\|x-y\|;\lambda) \tag{3.65}\] _Then_ \[\mathbb{E}[\partial_{t}\mathcal{U}_{i}(x,t)-\nu\Delta\mathcal{U}_{i}(x,t)+\mathcal{U}^{j}(x,t)\nabla_{j}\mathcal{U}_{i}(x,t)]=\partial_{t}U_{i}(x,t)-\nu\Delta U_{i}(x,t)+U^{j}(x,t)\nabla_{j}U_{i}(x,t)=0 \tag{3.66}\]

Proof.: First \[\partial_{t}\mathcal{U}_{i}(x,t)-\nu\Delta\mathcal{U}_{i}(x,t)+\mathcal{U}^{j}(x,t)\nabla_{j}\mathcal{U}_{i}(x,t) \tag{3.67}\] \[=\partial_{t}U_{i}(x,t)+\frac{\theta}{\sqrt{d(d+2)}}\partial_{t}U_{i}(x,t)\psi(x)\] \[-\nu\Delta U_{i}(x,t)-\frac{\nu\theta}{\sqrt{d(d+2)}}\left(\psi(x)\Delta U_{i}(x,t)+\nabla_{j}U_{i}(x,t)\nabla^{j}\psi(x)\right)\] \[-\frac{\nu\theta}{\sqrt{d(d+2)}}\left(\nabla^{j}U_{i}(x,t)\nabla_{j}\psi(x)+U_{i}(x,t)\Delta\psi(x)\right)\] \[+U^{j}(x,t)\nabla_{j}U_{i}(x,t)+\frac{2\theta}{\sqrt{d(d+2)}}U^{j}(x,t)\psi(x)\nabla_{j}U_{i}(x,t)+\frac{\theta}{\sqrt{d(d+2)}}U^{j}(x,t)U_{i}(x,t)\nabla_{j}\psi(x)\] \[+\frac{\theta^{2}}{d(d+2)}U^{j}(x,t)\nabla_{j}U_{i}(x,t)\psi(x)\psi(x)+\frac{\theta^{2}}{d(d+2)}U^{j}(x,t)U_{i}(x,t)\psi(x)\nabla_{j}\psi(x)\] Taking expectations term by term and using the stated properties of \(\psi(x)\), every stochastic term vanishes and (3.66) follows.

Equating the magnitude of the total energy dissipated within \(\mathbf{D}\) over a time \(t\) with the magnitude of the kinetic energy contained in \(\mathbf{D}\) gives \[\left|\epsilon t\,Vol(\mathbf{D})\right|=\left|U_{i}U^{i}\,Vol(\mathbf{D})\right| \tag{3.77}\] and so \[|\epsilon t|=\epsilon t=|U_{i}U^{i}| \tag{3.78}\] Checking the units on both sides gives \(cm^{2}s^{-3}\times s=cm^{2}s^{-2}\), as required.

**Lemma 3.9**.: _Given \(|\epsilon t|=\epsilon t=|U_{i}U^{i}|\), it follows also that_ \[\|U_{i}\|=U=\epsilon^{1/3}\ell^{1/3} \tag{3.79}\] \[\|U_{i}\|^{2}=U^{2}=\epsilon^{2/3}\ell^{2/3} \tag{3.80}\] \[\|U_{i}\|^{3}=U^{3}=\epsilon\ell \tag{3.81}\]

Proof.: Choose \(\ell\) such that \[\|U_{i}\|\equiv U=\frac{\ell}{t} \tag{3.82}\] then \[t=\frac{\ell}{U} \tag{3.83}\] Then from (3.78) \[\epsilon t=U^{2}=\epsilon\frac{\ell}{U} \tag{3.84}\] which gives \[U^{3}=\epsilon\ell \tag{3.85}\] so that \[U=\epsilon^{1/3}\ell^{1/3} \tag{3.86}\] Equations (3.79) and (3.80) then follow. 
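The elementary algebra in the proof of Lemma 3.9 can be checked symbolically; the following is a minimal sketch of ours in Python/sympy (not part of the paper), eliminating \(t\) between (3.78) and (3.83) and solving for \(U\):

```python
import sympy as sp

U, eps, ell = sp.symbols('U epsilon ell', positive=True)

# From (3.78): eps*t = U**2, and from (3.83): t = ell/U.
# Eliminating t gives eps*ell/U = U**2; solve for U > 0.
solutions = sp.solve(sp.Eq(eps * ell / U, U**2), U)
print(solutions)  # [(epsilon*ell)**(1/3)], i.e. U = epsilon**(1/3) * ell**(1/3)
```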
**Theorem 3.10**.: _(Emergence of a 4/5-law via an 'engineered' turbulent flow in \(\mathbf{D}\subset\mathbf{R}^{3}\)) Let a vector field \(U_{i}(x,t)\) be a deterministic/smooth flow within a domain \(\mathbf{D}\) of volume \(Vol(\mathbf{D})\sim L^{3}\), evolving via the Burger's equation_ \[\partial_{t}U_{i}(x,t)+\mathscr{D}_{N}U_{i}(x,t)=\partial_{t}U_{i}(x,t)+U^{j}(x,t)\nabla_{j}U_{i}(x,t)=0,\ (x,t)\in\mathbf{D}\times\mathbf{R}^{+} \tag{3.87}\] _and from some initial Cauchy data \(U_{i}(x,0)=g_{i}(x)\). A trivial steady state solution is then \(U_{i}(x,t)=U_{i}=g_{i}=const.\) Let \(\psi(x)\) be a random Gaussian scalar field as previously defined, and again having an antisymmetric rational quadratic covariance kernel_ \[\mathscr{K}(x,y;\lambda)=\mathbb{E}[\psi(x)\psi(y)]=f(x,y)K(\|x-y\|;\lambda) \tag{3.88}\] _Here, \(f(x,y)\) is an antisymmetric function \(f:\mathbf{D}\times\mathbf{D}\rightarrow\{-1,0,1\}\) such that \(f(x,y)=-f(y,x)\), with \(f(x,y)=1\) and \(f(y,x)=-1\) for the ordered pairs \((x,y)\in\mathbf{D}\times\mathbf{D}\) considered, and \(f(x,x)=f(y,y)=0\). Then \(\partial_{x}f(x,y)=\partial_{y}f(x,y)=0\). The kernel \(K(\|x-y\|;\lambda)\) is again any standard stationary and isotropic covariance kernel for Gaussian random fields; for example, a rational quadratic covariance with scale-mixing parameter \(\alpha\) gives_ \[\mathscr{K}(x,y;\lambda)=\mathbb{E}[\psi(x)\psi(y)]=\beta f(x,y)\left(1+\frac{\ell^{2}}{2\alpha\lambda^{2}}\right)^{-\alpha},\ (\alpha,\beta>0) \tag{3.89}\] _Then for \(y=x+\ell\)_ \[\mathbb{E}[\psi(x)]=0 \tag{3.90}\] \[\mathbb{E}[\psi(x)\psi(x)]=0 \tag{3.91}\] \[\mathbb{E}[\psi(x+\ell)\psi(x+\ell)]=0 \tag{3.92}\] \[\mathbb{E}[\psi(x)\psi(x+\ell)]=\beta f(x,x+\ell)K(\|\ell\|;\lambda) \tag{3.93}\] \[\mathbb{E}[\psi(x)\psi(x)\psi(x)]=0 \tag{3.94}\] \[\mathbb{E}[\psi(x+\ell)\psi(x+\ell)\psi(x)]=0 \tag{3.95}\] \[\mathbb{E}[\psi(x)\psi(x)\psi(x+\ell)]=0 \tag{3.96}\] \[\mathbb{E}[\psi(x+\ell)\psi(x+\ell)\psi(x+\ell)]=0 \tag{3.97}\] _Now let \(\mathbf{Q}=[0,L]\) so that_ \[\mathbf{Q}=\mathbf{Q}_{1}\bigcup\mathbf{Q}_{2}=[0,\lambda]\bigcup(\lambda,L]\] _then either \(\ell\in\mathbf{Q}_{1}\) or \(\ell\in\mathbf{Q}_{2}\). We now 'engineer' the following random field representing a turbulent fluid flow within \(\mathbf{D}\subset\mathbf{R}^{d}\):_ \[\mathcal{U}_{i}(x,t)=U_{i}(x,t)+\theta U_{i}(x,t)\psi(x)\equiv U_{i}(x,t)+\frac{\theta}{\sqrt{d(d+2)}}U_{i}(x,t)\psi(x) \tag{3.98}\] _so that \(\mathbb{E}[\mathcal{U}_{i}(x,t)]=U_{i}(x,t)\). The 3rd-order structure function is then_ \[S_{3}(\ell)=\mathbb{E}[|\mathcal{U}_{i}(x+\ell,t)-\mathcal{U}_{i}(x,t)|^{3}] \tag{3.99}\] _Computing \(S_{3}(\ell)\) and then letting \(U_{i}(x,t)\to U_{i}=(0,0,U)\), one obtains_ \[S_{3}(\ell)=-\frac{12}{d(d+2)}\theta^{2}\beta\|U_{i}\|^{3}K(\|\ell\|;\lambda) \tag{3.100}\] _In three dimensions, \(d=3\), and choosing \(\theta=1\) gives_ \[S_{3}(\ell)=-\frac{12}{15}\|U_{i}\|^{3}K(\|\ell\|;\lambda)=-\frac{4}{5}\|U_{i}\|^{3}K(\|\ell\|;\lambda) \tag{3.101}\] _Choosing the kernel (2.13) with \(\beta=1\)_ \[S_{3}(\ell)=-\frac{12}{15}\|U_{i}\|^{3}K(\|\ell\|;\lambda)=-\frac{4}{5}\|U_{i}\|^{3}\left(1+\frac{\ell^{2}}{2\alpha\lambda^{2}}\right)^{-\alpha} \tag{3.102}\] _Then for \(\ell\in\mathbf{Q}_{1}=[0,\lambda]\), with \(\ell\ll\lambda\), the term \(\ell^{2}/(2\alpha\lambda^{2})\) is very close to zero so that_ \[S_{3}(\ell)=-\frac{4}{5}\|U_{i}\|^{3} \tag{3.103}\] _holds over this range of length scales. 
Finally, applying (3.81)_ \[\boxed{S_{3}(\ell)=-\frac{4}{5}\|U_{i}\|^{3}=-\frac{4}{5}\epsilon\ell} \tag{3.104}\] _This is then the exact 4/5-law of turbulence._

**Theorem 3.11**.: _Let the scenario of Theorems (2.10) and (2.12) hold. With \(\mathcal{U}_{i}(x,t)\) replacing \(\mathcal{X}_{i}(x,t)\), the 2nd-order structure function is given by (2.80)_ \[S_{2}(\ell)=C\|U_{i}\|^{2} \tag{3.105}\] _Then using (3.80)_ \[\boxed{S_{2}(\ell)=C\epsilon^{2/3}\ell^{2/3}} \tag{3.106}\] _which is the \(2/3\) law, with \(C\) a constant._

## 4. Conclusion

The classical 2/3 and 4/5 laws of turbulence can be reproduced from a theory of engineered random fields existing within a Euclidean domain \(\mathbf{D}\). It is admitted that the fields and their kernels have been 'reverse-engineered', so to speak, in order to obtain the required answers. However, this still demonstrates that the concept of random fields can lead to these important classical results when one computes their structure functions. An insight of Kolmogorov's work was that turbulent flows seem to be essentially random fields. It has been assumed that the noise or random fluctuation in fully developed turbulence is a generic noise determined by well-established general theorems in probability theory, stochastic analysis, and random fields or functions. Classical random fields or functions correspond naturally to structures and properties of systems that vary randomly in time and/or space, and this should include turbulent fluids. Expectations or stochastic averages are also well defined. Rigorously defining time or spatial statistical averages in conventional statistical hydrodynamics, however, is fraught with technical difficulties and limitations, as well as having a limited scope of physical applicability.
2308.13251
Constructing and sampling partite, $3$-uniform hypergraphs with given degree sequence
Partite, $3$-uniform hypergraphs are $3$-uniform hypergraphs in which each hyperedge contains exactly one point from each of the $3$ disjoint vertex classes. We consider the degree sequence problem of partite, $3$-uniform hypergraphs, that is, to decide if such a hypergraph with prescribed degree sequences exists. We prove that this decision problem is NP-complete in general, and give a polynomial running time algorithm for third almost-regular degree sequences, that is, when each degree in one of the vertex classes is $k$ or $k-1$ for some fixed $k$, and there is no restriction for the other two vertex classes. We also consider the sampling problem, that is, to uniformly sample partite, $3$-uniform hypergraphs with prescribed degree sequences. We propose a Parallel Tempering method, where the hypothetical energy of the hypergraphs measures the deviation from the prescribed degree sequence. The method has been implemented and tested on synthetic and real data. It can also be applied for $\chi^2$ testing of contingency tables. We have shown that this hypergraph-based $\chi^2$ test is more sensitive than the standard $\chi^2$ test. The extra sensitivity is especially advantageous on small data sets, where the proposed Parallel Tempering method shows promising performance.
Andras Hubai, Tamas Robert Mezei, Ferenc Beres, Andras Benczur, Istvan Miklos
2023-08-25T08:55:44Z
http://arxiv.org/abs/2308.13251v1
# Constructing and sampling partite, 3-uniform hypergraphs with given degree sequence

## Abstract

Partite, 3-uniform hypergraphs are 3-uniform hypergraphs in which each hyperedge contains exactly one point from each of the 3 disjoint vertex classes. We consider the degree sequence problem of partite, 3-uniform hypergraphs, that is, to decide if such a hypergraph with prescribed degree sequences exists. We prove that this decision problem is NP-complete in general, and give a polynomial running time algorithm for third almost-regular degree sequences, that is, when each degree in one of the vertex classes is \(k\) or \(k-1\) for some fixed \(k\), and there is no restriction for the other two vertex classes. We also consider the sampling problem, that is, to uniformly sample partite, 3-uniform hypergraphs with prescribed degree sequences. We propose a Parallel Tempering method, where the hypothetical energy of the hypergraphs measures the deviation from the prescribed degree sequence. The method has been implemented and tested on synthetic and real data. It can also be applied for \(\chi^{2}\) testing of contingency tables. We have shown that this hypergraph-based \(\chi^{2}\) test is more sensitive than the standard \(\chi^{2}\) test. The extra sensitivity is especially advantageous on small data sets, where the proposed Parallel Tempering method shows promising performance.

## Introduction

Degree sequence problems are one of the most intensively studied topics in algorithmic graph theory. The basic question is the following: given a sequence of non-negative integers, \(D:=d_{1},d_{2},\ldots,d_{n}\), is there a simple graph \(G=(V,E)\) with \(|V|=n\) such that for all \(i=1,2,\ldots,n\), the degree of vertex \(v_{i}\) is \(d_{i}\)? Such a graph \(G\) is called a _realization_ of \(D\). In the middle of the previous century, Havel [22] and Hakimi [20] independently gave efficient algorithms that construct a simple graph with a given degree sequence or report that there is no simple graph with the prescribed degree sequence. The running time of these algorithms grows polynomially with \(n\), the length of the degree sequence. Erdos and Gallai [11] gave inequalities that are necessary and sufficient to have a simple graph with a prescribed degree sequence. Gale [16] and Ryser [30] gave necessary and sufficient inequalities to have a bipartite graph with prescribed degree sequences of the two vertex classes.

Hypergraphs are generalizations of simple graphs. In a hypergraph \(H=(V,E)\), any hyperedge or simply edge \(e\in E\) is a non-empty subset of \(V\). A hypergraph is \(k\)-uniform if each edge is a subset of vertices of size \(k\). In this way, we can consider simple graphs as 2-uniform hypergraphs. For a long time, it was an open question whether or not efficient algorithms exist for hypergraph degree sequence problems. Recently, Deza _et al._ [8, 9] proved that it is already NP-complete to decide if a 3-uniform hypergraph exists with a prescribed degree sequence. On the other hand, efficient algorithms have been developed for some special classes of degree sequences. These efficient algorithms can decide if a hypergraph realization exists when the degree sequences are very close to regular degree sequences [15, 29]. Another intensively studied computational problem is to generate a random realization of a given degree sequence drawn from the uniform distribution.
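To illustrate the classical construction algorithms mentioned above, here is a minimal Python sketch of ours (not code from the paper) of the Havel–Hakimi greedy procedure; it returns an edge list realizing the degree sequence, or `None` when no simple graph exists:

```python
def havel_hakimi(degrees):
    """Return the edge list of a simple graph realizing `degrees`, or None."""
    # Pair each residual degree with its vertex label so edges can be reported.
    nodes = sorted([[d, v] for v, d in enumerate(degrees)], reverse=True)
    edges = []
    while nodes and nodes[0][0] > 0:
        d, v = nodes.pop(0)           # take the vertex of maximum residual degree
        if d > len(nodes):
            return None               # not enough vertices left to connect to
        for entry in nodes[:d]:       # connect it to the d next-largest vertices
            entry[0] -= 1
            if entry[0] < 0:
                return None
            edges.append((v, entry[1]))
        nodes.sort(reverse=True)
    return edges

print(havel_hakimi([3, 3, 2, 2, 2]))  # a realization exists
print(havel_hakimi([3, 3, 3, 1]))     # None: this sequence is not graphic
```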
Above importance sampling [2, 17], Markov chain Monte Carlo methods have been the standard approaches to generate random realizations of a prescribed degree sequence. A _switch operation_ removes edges \((v_{1},v_{2})\) and \((v_{3},v_{4})\) and adds edges \((v_{1},v_{4})\) and \((v_{2},v_{3})\). It is easy to see that a switch operation does not change the degree sequence, and any graph with a prescribed degree sequence can be transformed into another graph of the same prescribed degree sequence by a finite series of switch operations. The consequence is that a random walk applying random switches on the current realization of a prescribed degree sequence converges to the uniform distribution of all realizations, given that the probabilities of the random switches are set carefully. One easy way to appropriately adjust the probabilities of the switches is the Metropolis-Hastings algorithm [21, 25]. Kannan, Tetali and Vempala [23] conjectured that the switch Markov chain is rapidly mixing for any degree sequence. The first rigorous proof was given by Cooper, Dyer and Greenhill [6] for regular degree sequences. The conjecture has been proved for larger and larger degree sequence classes; for the state of the art, see [12].

Beyond its theoretical importance, sampling realizations of a prescribed degree sequence is used to generate background statistics of null hypotheses in hypothesis testing. Random 0-1 matrices with prescribed row and column sums (which are equivalent to random bipartite graphs with prescribed degree sequences) are generated to test competition in ecological systems [26]. For other statistical testing of graphs, see [28]. Another family of combinatorial objects that are subject to statistical analysis are the contingency tables, which can be considered as bipartite adjacency matrices of bipartite multigraphs. The standard statistical analysis on contingency tables is the \(\chi^{2}\) test. In the case of small entries, the theoretical \(\chi^{2}\) distribution might be far from the exact \(\chi^{2}\) distribution. In such cases, Fisher's exact test is used [1, 14], which generates all possible contingency tables and computes their generalized hypergeometric probabilities (see Eq. 1). The \(p\)-value of the test is the sum of the generalized hypergeometric probabilities of the contingency tables whose probability is less than that of the tested contingency table. For large tables, the exact computation is not feasible, and Monte Carlo methods have to be used. In such a Monte Carlo method, a random contingency table with entries \(a_{i,j}\) should be generated with probability \[\frac{\prod_{i=1}^{n}R_{i}!\prod_{j=1}^{m}C_{j}!}{N!\prod_{i=1}^{n}\prod_{j=1}^{m}a_{i,j}!} \tag{1}\] where \(R_{i}\) are the row sums, \(C_{j}\) are the column sums and \(N\) is the total sum of the contingency table (see, for example, [1]). The Metropolis-Hastings algorithm can be used to generate random contingency tables following these prescribed probabilities. The Monte Carlo estimation of the \(p\)-value is the fraction of samples with generalized hypergeometric probability smaller than the generalized hypergeometric probability of the tested contingency table.

There are numerous cases when certain "agents" have different types of events during some time span, and we are interested in the aggregation of such events. An example might be _(patient, disease, time point)_ triplets, where the "agents" are the patients and having certain diseases are the possible events.
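To make the data model concrete, the following minimal sketch of ours (the records are hypothetical) shows how such triplets become the hyperedges of a partite, 3-uniform hypergraph, and how the degrees of the three vertex classes arise:

```python
from collections import Counter

# Hypothetical (patient, disease, time point) records; each distinct
# triple is one hyperedge of the partite, 3-uniform hypergraph.
triples = {
    ("p1", "flu", "week1"), ("p1", "cough", "week2"),
    ("p2", "flu", "week1"), ("p3", "flu", "week3"),
}

# The degree of a vertex is the number of hyperedges incident with it.
deg_A = Counter(a for a, _, _ in triples)  # activities of the agents
deg_B = Counter(b for _, b, _ in triples)  # frequencies of the event types
deg_C = Counter(c for _, _, c in triples)  # busyness of the time points

print(deg_A, deg_B, deg_C, sep="\n")
```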
We can ask if the different types of diseases are distributed evenly during time or whether some of the diseases are aggregated at certain time points. Another example might be _(user, tweet type, time)_ triplets. The different tweet types might be characterized by their hashtags. We might ask if the hashtags are distributed evenly during time or if they are aggregated. These data types can be described with so-called partite, 3-uniform hypergraphs. Such hypergraphs have three vertex classes: agents, event types, and time points. The hyperedges are triangles such that the triangle has one point in each vertex class. There is a hyperedge incident to vertices \(a\), \(b\) and \(c\) if agent \(a\) had an event of type \(b\) at time point \(c\).

In such data sources, it might be an important factor that different agents have different total numbers of _(event, time point)_ pairs, that is, they place different numbers of events into the event-time point table. For example, some people might be healthier and fall ill fewer times in a given time frame, while others might be ill more frequently. Similarly, some people are Twitter-addicts and post tweets frequently, while other users have considerably fewer tweets in a given time frame. Furthermore, the _(event, time point)_ pairs coming from one agent are different entries in the table. On the other hand, based on the well-known birthday paradox, if more than \(\sqrt{n}\) elements are selected from an \(n\)-set with replacement, then with high probability there will be an element selected multiple times. Therefore, if agents place several items into the _(event, time point)_ table, then the items will be more evenly distributed than when the same number of items is distributed independently. The consequence is that the \(\chi^{2}\) statistic will be shifted towards smaller values. To take the activity of the agents into account, an exact \(\chi^{2}\) test on the aggregation of event types should be obtained from uniformly sampling _(agent, event type, time point)_ triplets such that each agent has the same activity as in the real data set, each event type is as frequent as in the real data set, and each time point is as busy as in the real data set. That is, we need to generate random partite, 3-uniform hypergraphs with prescribed degree sequences.

This generation problem is hard in general. Indeed, as we show in this paper, it is already NP-complete to decide if a partite, 3-uniform hypergraph exists with a given degree sequence, and randomly generating one such hypergraph does not seem to be an easier computational problem. However, we show in this paper that the decision and construction problems are easy if one of the vertex classes is almost-regular, that is, each degree in that vertex class is either \(k\) or \(k-1\) for some \(k\). We do not have to assume anything about the other two vertex classes, that is, the degrees in those two vertex classes might be arbitrarily irregular. We call such a degree sequence _third almost-regular_. We also show that any realization of a third almost-regular degree sequence can be transformed into another one by a series of switch operations. We use this result in a Parallel Tempering Markov chain Monte Carlo method to generate random partite, 3-uniform hypergraphs with prescribed degree sequences. In that framework, the hypothetical energy of a hypergraph measures the deviation of a partite, 3-uniform hypergraph from the prescribed degree sequence, and the minimal energy is obtained when there is no deviation.
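As a preview of that framework, here is a minimal sketch of ours of the deviation-based energy (the paper formalizes it later as Definition 15); hypergraphs are represented as Python sets of \((a,b,c)\) triples, and `target` is an assumed mapping from every vertex of \(A\cup B\cup C\) to its prescribed degree:

```python
from collections import Counter

def energy(hyperedges, target):
    """L1 deviation of the degree sequence of `hyperedges` from `target`.

    The energy is non-negative, and it is zero exactly on the realizations
    of the prescribed degree sequence."""
    degree = Counter(v for edge in hyperedges for v in edge)
    return sum(abs(d - degree.get(v, 0)) for v, d in target.items())
```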
The Parallel Tempering method cools down the Boltzmann distribution of the hypergraphs to the possible realizations of the prescribed degree sequence. At high temperature, hypergraphs with deviated degree sequences have a high probability in the Boltzmann distribution. Those deviated degree sequences contain the third almost-regular degree sequences, too, on which the switch operations are irreducible. We also give further analysis of the mixing properties of the proposed Markov chain Monte Carlo method. Although this approach is assumed to fail on some problem instances (an extremely long time is needed to find a realization) due to the theoretical hardness of the problem, on practical data sets its performance is acceptable. We demonstrate the applicability of the method on simulated and real data, and we also show that it indeed provides a more sensitive \(\chi^{2}\) testing.

## Realizing hypergraph degree sequences

**Definition 1**.: _A hypergraph \(H=(V,E)\) is a generalization of simple graphs. For all \(e\in E\), \(e\) is a non-empty subset of \(V\). A hyperedge \(e\) is incident with \(v\) if \(v\in e\). A hypergraph is \(t\)-uniform if for all \(e\in E\), \(e\in\binom{V}{t}\). A hypergraph \(H=(V,E)\) is partite \(t\)-uniform if \(V\) is a disjoint partition of \(V_{1},V_{2},\ldots,V_{t}\), and for all \(e\in E\) and for all \(i=1,2,\ldots t\), \(|e\cap V_{i}|=1\), that is, each edge is incident with exactly one vertex in each vertex class._

_The degree of a vertex of a hypergraph is the number of hyperedges incident with it. The degree sequence of a hypergraph is the sequence of the degrees of its vertices. If a hypergraph is partite \(t\)-uniform, then the degree sequence can be naturally broken down by the vertex classes, that is, it can be written as_ \[(d_{1,1},d_{1,2},\ldots,d_{1,n_{1}}),(d_{2,1},d_{2,2},\ldots,d_{2,n_{2}}),\ldots,(d_{t,1},d_{t,2},\ldots,d_{t,n_{t}}).\] _If \(D\) is a sequence of non-negative integers, we say that a hypergraph \(H=(V,E)\) is a realization of \(D\) if the sequence of the degrees of the vertices of \(H\) is \(D\). If \(D\) has a realization, then we say that \(D\) is graphic._

In this paper, we consider partite \(3\)-uniform hypergraphs, and for the sake of simplicity, from now on by "hypergraph" we mean a partite \(3\)-uniform hypergraph. Hypergraphs will be denoted by \(H=(A,B,C,E)\), where \(A\), \(B\) and \(C\) are the three disjoint vertex sets. Hypergraph degree sequences will sometimes be denoted by \(D=(D_{A},D_{B},D_{C})\), where \(D_{A}\), \(D_{B}\) and \(D_{C}\) are the degree sequences of the vertex classes \(A\), \(B\) and \(C\), respectively. We are going to manipulate hypergraphs by switch operations that we describe below.

**Definition 2**.: _A switch operation on a hypergraph \(H=(A,B,C,E)\) removes two hyperedges \((a_{1},b_{1},c_{1}),(a_{2},b_{2},c_{2})\in E(H)\) and creates two new hyperedges \((a_{2},b_{1},c_{1}),(a_{1},b_{2},c_{2})\). We require that neither \((a_{2},b_{1},c_{1})\) nor \((a_{1},b_{2},c_{2})\) be a hyperedge in \(H\) before the switch operation. We similarly define switch operations that swap the vertices in the vertex class \(B\) or \(C\)._

Observe that the switch operation does not change the degrees of the vertices, that is, a switch operation creates another realization of the same degree sequence. After a short illustration of this operation, we also introduce operations that do change the degree sequence.
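A minimal sketch of ours of the switch operation of Definition 2, with hypergraphs stored as Python sets of \((a,b,c)\) triples (the helper name is an assumption of ours); the move is rejected whenever it would create a hyperedge that is already present:

```python
def switch(hyperedges, e1, e2, cls):
    """Swap the cls-th vertex (0, 1, 2 for classes A, B, C) of e1 and e2.

    Returns the new edge set, or None if the switch is forbidden because
    it would duplicate an existing hyperedge (as required by Definition 2)."""
    f1, f2 = list(e1), list(e2)
    f1[cls], f2[cls] = e2[cls], e1[cls]
    f1, f2 = tuple(f1), tuple(f2)
    if f1 in hyperedges or f2 in hyperedges:
        return None
    return (hyperedges - {e1, e2}) | {f1, f2}

H = {("a1", "b1", "c1"), ("a2", "b2", "c2")}
print(switch(H, ("a1", "b1", "c1"), ("a2", "b2", "c2"), 0))
# -> {('a2', 'b1', 'c1'), ('a1', 'b2', 'c2')} (in some order); all degrees unchanged
```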
**Definition 3**.: _A hinge flip operation on a hypergraph \(H=(A,B,C,E)\) removes a hyperedge \((a,b,c)\in E(H)\) and adds a new hyperedge \((a^{\prime},b,c)\). We require that \((a^{\prime},b,c)\) not be a hyperedge in the hypergraph before the hinge flip operation. We similarly define hinge flip operations that move a vertex of a hyperedge in the vertex class \(B\) or \(C\)._

_A toggle out operation on a hypergraph \(H=(A,B,C,E)\) deletes a hyperedge \((a,b,c)\). Its inverse operation is the toggle in operation that adds a hyperedge \((a,b,c)\) to \(H\)._

It is easy to see that a hinge flip removing a hyperedge \((a,b,c)\) and adding a new hyperedge \((a^{\prime},b,c)\) decreases the degree of \(a\) by \(1\) and increases the degree of \(a^{\prime}\) by \(1\). A toggle out that removes hyperedge \((a,b,c)\) decreases the degree of \(a\), \(b\) and \(c\) by \(1\). A toggle in that adds hyperedge \((a,b,c)\) increases the degree of \(a\), \(b\) and \(c\) by \(1\). The central question is whether or not there is a hypergraph with a prescribed degree sequence. We will prove the following theorem claiming that this is a hard computational problem.

**Theorem 4**.: _Let_ \[D:=(d_{1,1},d_{1,2},\ldots,d_{1,n_{1}}),(d_{2,1},d_{2,2},\ldots,d_{2,n_{2}}),(d_{3,1},d_{3,2},\ldots,d_{3,n_{3}})\] _be a hypergraph degree sequence. Then it is NP-complete to decide if \(D\) is graphic._

Theorem 4 follows almost verbatim from the argument of [8, 9]. We will reduce the so-called numerical \(3\)-dimensional matching problem to the realization problem in Theorem 4. In the definition of the numerical \(3\)-dimensional matching problem, we use the following notations. Let \([n]\) denote the set \(\{1,2,\ldots,n\}\) that is naturally indexed by its elements. For a subset \(X\subseteq[n]\), we denote the vector from \(\{0,1\}^{n}\) containing \(1\) in the indices corresponding to elements of \(X\) and \(0\) elsewhere by \(\mathbf{1}_{X}\). We denote the inner product of a row vector \(r\) and a column vector \(c\) by \(r\cdot c\). Vectors are column vectors by default, and row vectors are obtained by transposing column vectors. The transposition is denoted by \(T\) in the exponent of the column vector.

**Definition 5** (Numerical \(3\)-dimensional matching problem).: _Let \(A\), \(B\), \(C\) be a partition of \([n]\) with \(|A|=|B|=|C|=k\) so that \(n=3k\). Let \(a\in\mathbb{Z}^{n}\) be a weight vector, and let \(b\in\mathbb{Z}_{0}^{+}\) be a prescribed bound. Decide whether there exists a subset \(M\subseteq\{0,1\}^{n}\) such that_

* \(\sum_{x\in M}x=\mathbf{1}_{[n]}\)_, and_
* \(\forall x\in M\) _satisfies_ \(\mathbf{1}_{A}^{T}\cdot x=\mathbf{1}_{B}^{T}\cdot x=\mathbf{1}_{C}^{T}\cdot x=1\)_, and_
* \(\forall x\in M\) _satisfies_ \(a^{T}\cdot x=b\)_._

In words, we are looking for a disjoint partitioning of \([n]\) such that each part contains exactly one element from each of \(A\), \(B\) and \(C\), and the sum of the weights of each member of the partition is \(b\).

**Theorem 6** ([5] in [18]).: _The numerical \(3\)-dimensional matching problem is \(\mathrm{NP}\)-complete._

We are ready to prove our NP-completeness result.

Proof of Theorem 4.: The partite \(3\)-uniform hypergraph realization problem is contained in \(\mathrm{NP}\), because it is easy to check whether the degree sequence of a given hypergraph matches a prescribed degree sequence.
Let \(A,B,C\) be a partition of \([n]\), and let \(a\in\mathbb{Z}^{n}\) and \(b\in\mathbb{Z}_{0}^{+}\) define an instance of the numerical \(3\)-dimensional matching problem. If an appropriate \(M\) exists, then \[3a^{T}\cdot\mathbf{1}_{[n]}=3a^{T}\cdot\sum_{x\in M}x=3kb=nb. \tag{2}\] The above equality is clearly necessary for the existence of a solution to the numerical \(3\)-dimensional matching problem. If (2) does not hold, then we reduce the instance to a hypergraph degree sequence which is all zero except for one non-zero entry; it is not graphic as a \(3\)-uniform hypergraph, so the answer to both instances is \(\mathbf{NO}\). Suppose from now on that (2) holds. Let \(w:=3a-b\mathbf{1}_{[n]}\). Notice that \[w^{T}\cdot\mathbf{1}_{[n]}=3a^{T}\cdot\mathbf{1}_{[n]}-b\mathbf{1}_{[n]}^{T}\cdot\mathbf{1}_{[n]}=3a^{T}\cdot\mathbf{1}_{[n]}-bn=0. \tag{3}\] Let \[S=\{x\in\{0,1\}^{n}\mid\mathbf{1}_{A}^{T}\cdot x=\mathbf{1}_{B}^{T}\cdot x=\mathbf{1}_{C}^{T}\cdot x=1\}.\] We are ready to define the degree sequence associated to an instance of the numerical 3-dimensional matching problem: \[d(w):=\mathbf{1}_{[n]}+\sum_{x\in S,\ w^{T}\cdot x>0}x. \tag{4}\] To finish the proof, we will show that \(d(w)\) has a hypergraph realization which is 3-partite on classes \(A\), \(B\), and \(C\) if and only if the numerical 3-dimensional matching problem defined by \(a,b\) on \(A,B,C\) has a solution.

Suppose that \(M\) is a solution to the studied instance of the numerical 3-dimensional matching problem. Observe that for any \(x\in M\), we have \[w^{T}\cdot x=3a^{T}\cdot x-b\mathbf{1}_{[n]}^{T}\cdot x=3b-b\cdot 3=0\] Let the hypergraph associated to \(M\) be \(H(M)=([n],E(M))\), where \[E(M):=\left\{e\subseteq[n]\mid\mathbf{1}_{e}\in M\cup\left\{x\in S\mid w^{T}\cdot x>0\right\}\right\}\] By definition, \(H(M)\) is a partite 3-uniform hypergraph on classes \(A,B,C\). The degree sequence of \(H(M)\) is \[D(H(M))=\sum_{e\in E(M)}\mathbf{1}_{e}=\sum_{x\in M}x+\sum_{x\in S,\ w^{T}\cdot x>0}x=d(w),\] thus if there is a solution to the numerical 3-dimensional matching problem, then \(d(w)\) is graphic.

Suppose next that the degree sequence of some hypergraph \(H\) is \(d(w)\). Using (3), we have \[\begin{split} w^{T}\cdot\sum_{\begin{subarray}{c}x\in S\\ w^{T}\cdot x>0\end{subarray}}& x=w^{T}\cdot\mathbf{1}_{[n]}+w^{T}\cdot\sum_{\begin{subarray}{c}x\in S\\ w^{T}\cdot x>0\end{subarray}}x=w^{T}\cdot d(w)=\\ &=w^{T}\cdot\sum_{e\in E(H)}\mathbf{1}_{e}=w^{T}\cdot\left(\sum_{\begin{subarray}{c}e\in E(H)\\ w^{T}\cdot\mathbf{1}_{e}>0\end{subarray}}\mathbf{1}_{e}+\sum_{\begin{subarray}{c}e\in E(H)\\ w^{T}\cdot\mathbf{1}_{e}=0\end{subarray}}\mathbf{1}_{e}+\sum_{\begin{subarray}{c}e\in E(H)\\ w^{T}\cdot\mathbf{1}_{e}<0\end{subarray}}\mathbf{1}_{e}\right).\end{split} \tag{5}\] Because \(H\) is 3-partite on classes \(A,B,C\), we have \(\mathbf{1}_{e}\in S\) for any \(e\in E(H)\). Equality in (5) implies that \(w^{T}\cdot\mathbf{1}_{e}\geq 0\) holds for every \(e\in E(H)\). Subsequently, any \(x\in S\) such that \(w^{T}\cdot x>0\) must be \(x=\mathbf{1}_{e}\), the characteristic vector of some edge \(e\in E(H)\). Let \(M(H):=\{\mathbf{1}_{e}\mid e\in E(H),\ w^{T}\cdot\mathbf{1}_{e}=0\}\). 
For any \(x\in M(H)\), we have \(w^{T}\cdot x=0\), therefore: \[3a^{T}\cdot x=b\mathbf{1}_{[n]}^{T}\cdot x,\] \[3a^{T}\cdot x=b(\mathbf{1}_{A}^{T}+\mathbf{1}_{B}^{T}+\mathbf{1}_{C}^{T})\cdot x=3b,\] \[a^{T}\cdot x=b.\] Lastly, since \(\{x\in S\mid w^{T}\cdot x>0\}=\{\mathbf{1}_{e}\mid e\in E(H),\ w^{T}\cdot\mathbf{1}_{e}>0\}\), using (4) we get \[\sum_{x\in M(H)}x=\sum_{e\in E(H)}\mathbf{1}_{e}-\sum_{e\in E(H),\ w^{T}\cdot\mathbf{1}_{e}>0}\mathbf{1}_{e}=d(w)-\sum_{x\in S,\ w^{T}\cdot x>0}x=\mathbf{1}_{[n]},\] which completes the proof that \(M(H)\) is a solution to the desired instance of the numerical 3-dimensional matching problem.

On the other hand, in this paper, we also show that it is easy to decide whether or not some special degree sequences are graphic. We start with some definitions.

**Definition 7**.: _Let \(D:=(d_{1,1},d_{1,2},\ldots,d_{1,n_{1}}),(d_{2,1},d_{2,2},\ldots,d_{2,n_{2}}),(d_{3,1},d_{3,2},\ldots,d_{3,n_{3}})\) be a hypergraph degree sequence. We say that \(D\) is third almost-regular if, for some \(k\), for all \(i=1,2,\ldots,n_{1}\), \(d_{1,i}\in\{k,k-1\}\)._

**Definition 8**.: _Let \(H=(A,B,C,E)\) be a hypergraph, where \(A\), \(B\) and \(C\) are the vertex classes. The \((A,B)\)-projection of \(H\) is a bipartite multigraph \(\tilde{G}=(A,B,\tilde{E})\), where the number of parallel edges between any \((a_{i},b_{j})\) is the number of \(c_{k}\) vertices such that \((a_{i},b_{j},c_{k})\in E(H)\). The \((A,B)\)-shadow of \(H\) is a bipartite graph \(\bar{G}=(A\times B,C,\bar{E})\), where \(((a_{i},b_{j}),c_{k})\in\bar{E}\) if and only if \((a_{i},b_{j},c_{k})\in E(H)\)._

_The \((A,B)\)-projection is \(b\)-balanced if there exists an \(l\) such that for all \(a_{i}\in A\), the number of parallel edges between \(a_{i}\) and \(b\) is either \(l\) or \(l-1\). The projection is \(B\)-balanced if for all \(b_{j}\in B\) the projection is \(b_{j}\)-balanced._

_The trace of a \(B\)-balanced \((A,B)\)-projection is a bipartite (simple) graph defined in the following way: in the adjacency matrix of the \((A,B)\)-projection, in each column, we replace each \(l\) by \(1\) and each \(l-1\) by \(0\). The trace is the bipartite graph whose adjacency matrix is the so-obtained \(0\)-\(1\) matrix._

It is clear that the degree of \((a_{i},b_{j})\) in the shadow \(\bar{G}\) is the number of parallel edges between \(a_{i}\) and \(b_{j}\) in the projection \(\tilde{G}\). Further, it is easy to see the following lemma.

**Lemma 9**.: _Let \(D=(D_{A},D_{B},D_{C})\) be a hypergraph degree sequence. Then \(D\) is graphic if and only if there is a graphical bipartite degree sequence \(\bar{D}=(D_{A\times B},D_{C})\) such that for all \(i\),_ \[\sum_{j=1}^{n_{2}}d((a_{i},b_{j}))=d(a_{i}),\] _where \(d(a_{i})\) is the degree of \(a_{i}\) in the hypergraph degree sequence \(D\) and \(n_{2}\) is the length of \(D_{B}\), and for all \(j\),_ \[\sum_{i=1}^{n_{1}}d((a_{i},b_{j}))=d(b_{j}),\] _where \(d(b_{j})\) is the degree of \(b_{j}\) in the hypergraph degree sequence \(D\) and \(n_{1}\) is the length of \(D_{A}\), and further \(D_{C}\) in \(\bar{D}\) equals \(D_{C}\) in \(D\)._

Proof.: The \(\Rightarrow\) direction: If \(D\) is graphic, let \(H\) be a realization of it, and let \(\bar{G}\) be its \((A,B)\)-shadow. Then the degree sequence of \(\bar{G}\) satisfies the conditions, and since \(\bar{G}\) is a realization of its own degree sequence, we have found a graphical degree sequence with the prescribed conditions. 
The \(\Leftarrow\) direction: If there is a graphic degree sequence \(\bar{D}=(D_{A\times B},D_{C})\), then let \(\bar{G}\) be one of its realizations. We can think of \(\bar{G}\) as an \((A,B)\)-shadow of a hypergraph \(H\). Constructing \(H\) is trivial: for each edge \(((a_{i},b_{j}),c_{k})\), we create the hyperedge \((a_{i},b_{j},c_{k})\). It is easy to see that the so-obtained hypergraph has degree sequence \(D\), thus \(D\) is graphic.

Since \(B\) might not be an almost-regular vertex class, \(l\) might vary across the vertices of \(B\) in a \(B\)-balanced projection. Clearly, for each \(b_{j}\), the corresponding \(l\) and \(l-1\) are the ceiling and floor of the degree of \(b_{j}\) in \(H\) divided by the size of \(A\). A bipartite multigraph \(G=(A,B,E)\) can be represented by its adjacency matrix, which is an \(|A|\times|B|\) matrix \(M\), where for all \(i=1,2,\ldots,n_{1}\) and \(j=1,2,\ldots,n_{2}\), \(m_{i,j}\) is the number of multiedges between \(a_{i}\) and \(b_{j}\). In this way, it is easy to see that an \((A,B)\)-projection is \(B\)-balanced if each column of its adjacency matrix contains at most two different values that differ from each other by \(1\). Since \(A\) is the almost-regular vertex class, the row sums of the adjacency matrix of the projection are almost-regular, that is, each row sum is either \(k\) or \(k-1\) for some \(k\). The following is the key lemma for third almost-regular degree sequences.

**Lemma 10**.: _Let \(D=(D_{A},D_{B},D_{C})\) be a third almost-regular degree sequence. If \(D\) has a realization \(H=(A,B,C,E)\), then \(D\) also has a realization \(H^{\prime}\) whose \((A,B)\)-projection is \(B\)-balanced. Furthermore, \(H^{\prime}\) can be obtained from \(H\) by a series of switch operations._

Proof.: We will prove the statement by induction on the size of \(B\). Let \(H=(A,B,C,E)\) be a realization of \(D\). If \(B\) contains exactly one element, then the single column of the adjacency matrix of the \((A,B)\)-projection consists of the degrees in \(D_{A}\), which are \(k\) or \(k-1\); hence the projection is \(B\)-balanced, and the base case of the induction holds. Suppose that the induction hypothesis holds for degree sequences whose second vertex class has size \(|B|-1\). If the \((A,B)\)-projection of \(H\) is \(B\)-balanced, then the induction step is trivial. Assume from now on that the \((A,B)\)-projection of \(H\) is not \(b\)-balanced for some \(b\in B\). By finding an appropriate series of switch operations, we are going to construct a realization \(H^{\prime}\) whose \((A,B)\)-projection is \(b\)-balanced, and further, after removing the column corresponding to \(b\) in the adjacency matrix, the cropped adjacency matrix still has almost-regular row sums. Indeed, if such an \(H^{\prime}\) exists, then the degree sequence of \(H^{\prime\prime}:=H^{\prime}\setminus b\) is third almost-regular. By induction, there exists some \(H^{\prime\prime\prime}\) which is \((B\setminus\{b\})\)-balanced such that \(H^{\prime\prime\prime}\) and \(H^{\prime\prime}\) share their degree sequence. By construction, \(H^{\prime\prime\prime}+\{e\in E(H^{\prime})\mid b\in e\}\) will be a \(B\)-balanced realization of \(D\). Regarding the claim of the lemma that any realization \(H\) can be transformed to a \(B\)-balanced realization \(H^{\prime}\) by a finite series of switch operations, the removal of a column can be considered as "freezing" the corresponding hyperedges and considering the remaining subhypergraph. 
Let \(l=\left\lceil\frac{d(b)}{|A|}\right\rceil\), where \(d(b)\) is the degree of \(b\) in \(H\) (which is not \(b\)-balanced). Then there is a unique number of \(l\)'s in the column of the adjacency matrix of the \((A,B)\)-projection corresponding to \(b\) such that this column is balanced. Let \(\#k\) denote the number of rows in the adjacency matrix of the projection whose sum is \(k\), and let \(\#l\) denote the number of \(l\)'s such that \[\#l\times l+(n_{1}-\#l)\times(l-1)=d(b).\] There are \(3\) sub-cases:

1. \(\#l=\#k\). Then we will construct an \(H^{\prime}\) such that in the adjacency matrix of its \((A,B)\)-projection, exactly those entries will be \(l\) in the column corresponding to \(b\) whose row sum is \(k\). Then after removing the column corresponding to \(b\), we get a matrix in which each row sum is \(k-l[=k-1-(l-1)]\).
2. \(\#l<\#k\). Then we will construct an \(H^{\prime}\) such that in the adjacency matrix of its \((A,B)\)-projection, \(\#l\) entries will be \(l\) in the column corresponding to \(b\) whose row sum is \(k\), \(\#k-\#l\) entries will be \(l-1\) such that the corresponding row sum is \(k\), and all \(n_{1}-\#k\) entries whose corresponding row sum is \(k-1\) will get \(l-1\). After removing the column corresponding to \(b\), \(\#k-\#l\) rows will have row sum \(k-(l-1)=k-l+1\), and \(n_{1}-\#k+\#l\) rows will have row sum \(k-l[=k-1-(l-1)]\). That is, the row sums are still almost-regular.
3. \(\#l>\#k\). Then we will construct an \(H^{\prime}\) such that in the adjacency matrix of its \((A,B)\)-projection, all \(\#k\) entries whose row sum is \(k\) will be \(l\) in the column corresponding to \(b\), \(\#l-\#k\) entries will be \(l\) such that the corresponding row sum is \(k-1\), and all \(n_{1}-\#l\) entries will be \(l-1\) such that the corresponding row sum is \(k-1\). After removing the column corresponding to \(b\), \(\#l-\#k\) rows will have row sum \(k-1-l=k-l-1\), and \(n_{1}-\#l+\#k\) rows will have row sum \(k-l[=k-1-(l-1)]\). That is, the row sums are still almost-regular.

In the adjacency matrix of the \((A,B)\)-projection of \(H\), some of the entries in the column corresponding to \(b\) are not the values that are prescribed in the above list. We measure the deviation as the sum of the absolute values of the differences between the prescribed and the actual values. We are going to show that this deviation can be strictly monotonically decreased by switch operations. In particular, while there is a wrong entry in the inferred column, we will be able to find a switch operation decreasing the deviation by \(2\). Clearly, if there is an entry which is larger than prescribed, then there must be an entry that is smaller than prescribed. Indeed, during the switch operations, the degree of \(b\) does not change, and in the adjacency matrix of the \((A,B)\)-projection, the sum of the inferred column is fixed: it is the degree of \(b\). We have the following cases when an entry is greater than prescribed:

1. In a row with sum \(k\), there is an entry greater than \(l\). Then the entry is at least \(l+1\) and the remaining row sum is at most \(k-l-1\).
2. In a row with sum \(k\), there is an entry greater than \(l-1\). Then the entry is at least \(l\) and the remaining row sum is at most \(k-l\).
3. In a row with sum \(k-1\), there is an entry greater than \(l\). Then the entry is at least \(l+1\) and the remaining row sum is at most \(k-l-2\).
4. In a row with sum \(k-1\), there is an entry greater than \(l-1\). 
Then the entry is at least \(l\) and the remaining row sum is at most \(k-l-1\). Further, we have the following cases when an entry is lower than prescribed:

1. In a row with sum \(k\), there is an entry lower than \(l\). Then the entry is at most \(l-1\), and the remaining row sum is at least \(k-l+1\).
2. In a row with sum \(k\), there is an entry lower than \(l-1\). Then the entry is at most \(l-2\), and the remaining row sum is at least \(k-l+2\).
3. In a row with sum \(k-1\), there is an entry lower than \(l\). Then the entry is at most \(l-1\), and the remaining row sum is at least \(k-l\).
4. In a row with sum \(k-1\), there is an entry lower than \(l-1\). Then the entry is at most \(l-2\), and the remaining row sum is at least \(k-l+1\).

We can see that in any of the possible combinations of to-be-decreased and to-be-increased entries, the entry to be decreased is strictly larger than the entry to be increased. Let the row index containing the entry to be decreased be \(i\) and let the row index containing the entry to be increased be \(i^{\prime}\). Then, since there is no case with a prescribed entry \(l-1\) in a row with row sum \(k\) and at the same time a prescribed entry \(l\) in a row with row sum \(k-1\), we can conclude that the remaining row sum in row \(i\) is strictly smaller than the remaining row sum in row \(i^{\prime}\). Since the entry we would like to decrease is strictly larger than the entry we would like to increase, by the pigeonhole principle it follows that there exists a \(c\) such that \((a_{i},b,c)\in E(H)\) and \((a_{i^{\prime}},b,c)\not\in E(H)\). Since the remaining row sum in row \(i\) is strictly smaller than the remaining row sum in row \(i^{\prime}\), also by the pigeonhole principle it follows that there exists a \(b^{\prime}\) such that in the \((A,B)\)-projection of \(H\), the number of parallel edges between \(a_{i^{\prime}}\) and \(b^{\prime}\) is strictly greater than the number of parallel edges between \(a_{i}\) and \(b^{\prime}\). Also by the pigeonhole principle, there exists a \(c^{\prime}\) such that \((a_{i^{\prime}},b^{\prime},c^{\prime})\in E(H)\) and \((a_{i},b^{\prime},c^{\prime})\not\in E(H)\). Then we can switch \(a_{i}\) and \(a_{i^{\prime}}\) in the hyperedges \((a_{i},b,c)\) and \((a_{i^{\prime}},b^{\prime},c^{\prime})\) to get the hyperedges \((a_{i^{\prime}},b,c)\) and \((a_{i},b^{\prime},c^{\prime})\). This switch operation decreases the deviation of the column corresponding to \(b\).

Since the deviation of the column corresponding to \(b\) can be decreased by switch operations while this deviation is larger than \(0\), after a finite number of switches the column of \(b\) will be balanced. Further, since the hypergraph obtained from \(H\) by the above-described switches, with \(b\) removed, still has almost-regular degrees on its vertex class \(A\), we can keep balancing vertices in the vertex class \(B\) until all vertices become balanced. Then we can add back the removed vertices in the vertex class \(B\) together with their hyperedges to obtain a \(B\)-balanced realization of the original degree sequence.

With this key lemma, we can prove the following theorem.

**Theorem 11**.: _Let \(D:=(d_{1,1},d_{1,2},\ldots,d_{1,n_{1}}),(d_{2,1},d_{2,2},\ldots,d_{2,n_{2}}),(d_{3,1},d_{3,2},\ldots,d_{3,n_{3}})\) be a third almost-regular hypergraph degree sequence. 
Then there is a polynomial time algorithm that decides if \(D\) is graphic, and if it is graphic, the algorithm also constructs a realization of \(D\)._

Proof.: First, we construct a bipartite multigraph \(G=(A,B,\tilde{E})\) with degree sequences \(D_{A}\) and \(D_{B}\). It is a triviality that the necessary and sufficient condition for a bipartite degree sequence to have a bipartite multigraph realization is that the degrees in \(D_{A}\) and \(D_{B}\) must have the same sum, and in case of having the same sum, constructing a bipartite multigraph is also a trivial task. Then we can make switch operations as described in the proof of Lemma 10 to obtain a \(B\)-balanced multigraph \(\tilde{G}\). Now consider the bipartite degree sequence \(\tilde{D}=(D_{A\times B},D_{C})\), where \(D_{A\times B}\) contains the entries of the adjacency matrix of \(\tilde{G}\). We claim that \(D\) has a hypergraph realization if and only if \(\tilde{D}\) is graphic. Indeed, by Lemma 10, we know that \(D\) has a hypergraph realization if and only if it has a \(B\)-balanced hypergraph realization \(H\). Take the \((A,B)\)-projection of \(H\). We claim that the entries of the adjacency matrix of the \((A,B)\)-projection are the same as the degree sequence \(D_{A\times B}\) of \(\tilde{D}\). Indeed, as we discussed, the number of \(l\)'s and \(l-1\)'s in each column of the adjacency matrix of a \(B\)-balanced realization is determined by the corresponding degree in \(D_{B}\). Now take the \((A,B)\)-shadow of \(H\). Its degree sequence is indeed \(\tilde{D}\). To prove the opposite direction, assume that \(\tilde{D}\) is graphic, and construct a realization of it, \(\bar{G}=(A\times B,C,\bar{E})\). Then construct a hypergraph \(H=(A,B,C,E)\) in which \((a_{i},b_{j},c_{k})\in E(H)\) if and only if \(((a_{i},b_{j}),c_{k})\in\bar{E}(\bar{G})\). It is easy to see that \(H\) is a realization of \(D\).

We can also prove that any realization of a third almost-regular degree sequence can be transformed into any other realization of the same degree sequence by a series of switch operations. First, we prove that balanced realizations can be transformed into each other.

**Lemma 12**.: _Let \(H_{1}\) and \(H_{2}\) be two \(B\)-balanced hypergraph realizations of the third almost-regular degree sequence \(D\). Then there exists a series of switch operations that transforms \(H_{1}\) into \(H_{2}\)._

Proof.: If the two realizations have the same \((A,B)\)-projections, then their \((A,B)\)-shadows have the same degree sequences. But \((A,B)\)-shadows are bipartite graphs, and bipartite graphs with the same degree sequences can be transformed into each other by switch operations [20, 22]. These switch operations can be lifted back to the hypergraph realizations. Indeed, if a switch in the \((A,B)\)-shadow deletes edges \(((a_{1},b_{1}),c_{1})\) and \(((a_{2},b_{2}),c_{2})\) and creates edges \(((a_{1},b_{1}),c_{2})\) and \(((a_{2},b_{2}),c_{1})\), then its corresponding switch operation on hypergraphs deletes the hyperedges \((a_{1},b_{1},c_{1})\) and \((a_{2},b_{2},c_{2})\) and creates the hyperedges \((a_{1},b_{1},c_{2})\) and \((a_{2},b_{2},c_{1})\). Thus we only have to show that any \(B\)-balanced realization can be transformed into another \(B\)-balanced realization with a prescribed \((A,B)\)-projection. Let \(M_{1}\) and \(M_{2}\) be two different \(B\)-balanced \((A,B)\)-projections of two different hypergraphs \(H_{1}\) and \(H_{2}\), and let their traces be \(G_{1}\) and \(G_{2}\). 
It is easy to see that \(G_{1}\) and \(G_{2}\) are bipartite (simple) graphs with the same degree sequences. Indeed, the column sums of \(M_{1}\) and \(M_{2}\) are the same. Therefore, for each column \(c\), the number of \(l\)'s in column \(c\) in \(M_{1}\) is the same as the number of \(l\)'s in column \(c\) in \(M_{2}\). Further, the row sums in \(M_{1}\) and \(M_{2}\) are the same. Therefore, for each row \(r\), the number of times row \(r\) contains the column average ceiling (of the column in question) in \(M_{1}\) is the same as the number of times \(r\) contains the column average ceiling (of the column in question) in \(M_{2}\). Bipartite graphs with the same degree sequences can be transformed into each other by switch operations, therefore the trace \(G_{1}\) can be transformed into \(G_{2}\) by switch operations. Any switch operation in a trace has a corresponding switch operation in the \(B\)-balanced \((A,B)\)-projection. Indeed, a switch operation in \(G_{1}\) that deletes the edges \((a_{1},b_{1})\) and \((a_{2},b_{2})\) and creates the edges \((a_{2},b_{1})\) and \((a_{1},b_{2})\) has a corresponding switch operation in \(M_{1}\) that decreases the number of parallel edges between \(a_{1}\) and \(b_{1}\) from the \(b_{1}\)-average ceiling (the \(l\) for the column of \(b_{1}\)) to the \(b_{1}\)-average floor (the \(l-1\) for the column of \(b_{1}\)), decreases the number of parallel edges between \(a_{2}\) and \(b_{2}\) from the \(b_{2}\)-average ceiling to the \(b_{2}\)-average floor, increases the number of parallel edges between \(a_{2}\) and \(b_{1}\) from the \(b_{1}\)-average floor to the \(b_{1}\)-average ceiling, and increases the number of parallel edges between \(a_{1}\) and \(b_{2}\) from the \(b_{2}\)-average floor to the \(b_{2}\)-average ceiling. Due to the pigeonhole principle, there is a \(c_{1}\) such that \((a_{1},b_{1},c_{1})\) is a hyperedge in \(H_{1}\) and \((a_{2},b_{1},c_{1})\) is not a hyperedge in \(H_{1}\). Similarly, due to the pigeonhole principle, there is a \(c_{2}\) such that \((a_{2},b_{2},c_{2})\) is a hyperedge and \((a_{1},b_{2},c_{2})\) is not a hyperedge in \(H_{1}\). Therefore each switch operation in \(G_{1}\) has at least one corresponding switch operation in \(H_{1}\). In this way, when a trace \(G_{1}\) is transformed into \(G_{2}\) with switch operations, the corresponding hypergraph \(H_{1}\) is transformed into another hypergraph \(H_{1}^{\prime}\) that has trace \(G_{2}\). Then \(H_{1}^{\prime}\) has the same \((A,B)\)-projection as \(H_{2}\). As we discussed, \(H_{1}^{\prime}\) can then be transformed into \(H_{2}\) by switch operations.

**Theorem 13**.: _Let \(H_{1}\) and \(H_{2}\) be two hypergraph realizations of the same third almost-regular degree sequence \(D\). Then there exists a finite series of switches that transforms \(H_{1}\) into \(H_{2}\)._

Proof.: Based on Lemma 10, we can transform \(H_{1}\) into a \(B\)-balanced realization \(H_{1}^{\prime}\) by switch operations. Also, we can transform \(H_{2}\) into a \(B\)-balanced realization \(H_{2}^{\prime}\) by switch operations. Due to Lemma 12, \(H_{1}^{\prime}\) can be transformed into \(H_{2}^{\prime}\) by switch operations. Thus, \(H_{1}\) can be transformed into \(H_{2}^{\prime}\) by switch operations. Since the inverse of a switch operation is also a switch operation, \(H_{2}^{\prime}\) can be transformed into \(H_{2}\) by switch operations, and thus, \(H_{1}\) can be transformed into \(H_{2}\) by switch operations. 
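The balanced-column bookkeeping that drives Lemma 10 and Theorem 11 is elementary arithmetic; here is a minimal sketch of ours (the helper name is an assumption) computing the unique pair \((l,\#l)\) for the column of a vertex \(b\):

```python
def balanced_column(d_b, n1):
    """For a vertex b with degree d_b and |A| = n1, return (l, num_l):
    a balanced column for b has num_l entries equal to l and
    n1 - num_l entries equal to l - 1."""
    l = -(-d_b // n1)            # integer ceiling of d_b / n1
    num_l = d_b - n1 * (l - 1)   # solves num_l*l + (n1 - num_l)*(l - 1) = d_b
    return l, num_l

print(balanced_column(7, 3))  # (3, 1): the balanced column 3, 2, 2 sums to 7
```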
Finally, we show how to transform any realization of any degree sequence into any other realization of the same degree sequence.

**Theorem 14**.: _Let \(D=(D_{A},D_{B},D_{C})\) be a hypergraph degree sequence, and let \(H_{1}\) and \(H_{2}\) be two realizations of it. Then \(H_{1}\) can be transformed into \(H_{2}\) by a finite series of hinge-flip and switch operations._

Before we prove this theorem, we would like to remark that hinge-flips do not keep the degree sequence. However, Theorem 14 is the key to the Parallel Tempering method that we will introduce in the next section.

Proof of Theorem 14.: It is enough to show that both \(H_{1}\) and \(H_{2}\) can be transformed into realizations of the same third almost-regular degree sequence. Indeed, let \(H_{1}^{\prime}\) and \(H_{2}^{\prime}\) be two realizations of a third almost-regular degree sequence. Then \(H_{1}^{\prime}\) can be transformed into \(H_{2}^{\prime}\) by switch operations. Therefore, if \(H_{1}\) can be transformed into \(H_{1}^{\prime}\) and \(H_{2}\) can be transformed into \(H_{2}^{\prime}\) by hinge flips, then \(H_{1}\) can be transformed into \(H_{2}\) by hinge flips and switches. Indeed, the inverses of hinge flips are also hinge flips, therefore \(H_{2}^{\prime}\) can be transformed into \(H_{2}\) by hinge flips, thus \(H_{1}\) can be transformed into \(H_{2}\) by hinge flips and switches via \(H_{1}^{\prime}\) and \(H_{2}^{\prime}\).

Without loss of generality, we may assume that the degrees in \(D_{A}\) are in non-increasing order. Let \(\alpha\) be the average degree in \(D_{A}\) and let \(k:=\lceil\alpha\rceil\). Further, let \(m\) be the number that satisfies the equation \[mk+(|D_{A}|-m)(k-1)=\alpha|D_{A}|.\] Then let \(D_{A}^{\prime}\) be the degree sequence \(\underbrace{k,k\ldots,k}_{m},\underbrace{k-1,k-1\ldots,k-1}_{|D_{A}|-m}\). We are going to show that both \(H_{1}\) and \(H_{2}\) can be transformed into realizations of \(D^{\prime}:=(D_{A}^{\prime},D_{B},D_{C})\) by hinge flips. This proof is constructive, and it should be clear that the construction proceeds on \(H_{1}\) and \(H_{2}\) in the same way. We show the construction for \(H_{1}\).

Let \(D^{*}:=D\) and \(H_{1}^{*}=H_{1}\) at the beginning of a series of transformations. While \(D^{*}\) is not equal to \(D^{\prime}\), we find hinge flips on \(H_{1}^{*}\), which is a realization of \(D^{*}\), that bring it closer to a realization of \(D^{\prime}\). We measure the distance as the \(L_{1}\) distance between \(D^{*}\) and \(D^{\prime}\). Having said this, let \(i\) be the largest index for which \(d_{i}^{*}-d_{i}^{\prime}>0\) and let \(j\) be the smallest index for which \(d_{j}^{*}-d_{j}^{\prime}<0\). It is easy to see that \(i\) exists if and only if \(j\) exists, and further, neither of them exists if and only if \(D^{*}=D^{\prime}\). It is also easy to see that \(d_{i}^{*}>d_{j}^{*}\), since any two degrees in \(D^{\prime}\) differ by at most one. Then it follows that there exist \(b\) and \(c\) such that \((a_{i},b,c)\in E(H_{1}^{*})\) and \((a_{j},b,c)\not\in E(H_{1}^{*})\). Then a hinge flip that removes \((a_{i},b,c)\) and adds \((a_{j},b,c)\) leads to a hypergraph whose degree sequence is closer to \(D^{\prime}\) in \(L_{1}\) distance. Thus let the new \(H_{1}^{*}\) be the hypergraph obtained from the old \(H_{1}^{*}\) by this hinge flip, and adjust \(D^{*}\) accordingly. 
Since the \(L_{1}\) distance is decreased by each hinge flip, and the distance cannot be smaller than \(0\), in a finite number of steps \(D^{*}\) will be \(D^{\prime}\) and \(H_{1}^{*}\) will be a realization of \(D^{\prime}\).

## Parallel Tempering

Markov chain Monte Carlo methods have been one of the most frequently used methods to generate random objects following a prescribed distribution. These objects are called _states_ in the MCMC literature, and the ensemble of the objects is called the _state space_. The key is to find a primary Markov chain, that is, a random walk on the state space obeying some mild conditions. The conditions are that _i)_ the random walk must be irreducible, that is, any state can be reached from any other state in a finite number of steps with non-zero probability, _ii)_ if there is a non-zero probability to go to state \(y\) from state \(x\) in one step, then the probability of going to \(x\) from \(y\) in one step should also be non-zero, _iii)_ the probability of going to \(x\) from \(y\) should be calculable, and _iv)_ the ratio of the probabilities of \(x\) and \(y\) in the prescribed distribution should be calculable. Any primary Markov chain satisfying these conditions can be tailored to a Markov chain that converges to the prescribed distribution by the Metropolis-Hastings algorithm [21, 25].

In the case of hypergraphs, the question of irreducibility is not trivial. Recall that switches are irreducible on realizations of simple and bipartite graph degree sequences. That is, let \(D\) be an arbitrary degree sequence of a simple (respectively, bipartite) graph. Then any simple (respectively, bipartite) realization of \(D\) can be transformed into any other realization of \(D\) by a finite series of switch operations. It is unknown if a similar statement holds for general hypergraph degree sequences. The strong conjecture is that it does not hold. That is, there might exist a hypergraph degree sequence \(D\) such that some of the realizations of \(D\) cannot be transformed into another realization by a finite series of switch operations. This could be a striking difference between graphs and hypergraphs. Indeed, not only is it well-known that switches are irreducible on graphs with an arbitrary degree sequence, but it is also conjectured that the switch Markov chain is rapidly mixing, which has already been proved for a large class of degree sequences [12].

If switches are not irreducible on the hypergraph realizations of a degree sequence, then they alone cannot be used in a Markov chain Monte Carlo framework to uniformly sample realizations of a degree sequence. One possible way to avoid the question of irreducibility of the switches is to enlarge the state space of the Markov chain and extend the possible random operations. Still, we would like to require that the random walk spend a sufficient amount of time on realizations of the prescribed degree sequence. To achieve this, we introduce a Parallel Tempering framework [19]. The Parallel Tempering method runs several parallel Markov chains, each of which converges to a Boltzmann distribution at a given (hypothetical) temperature based on the (hypothetical) energy of the elements of the state space. The chains regularly exchange their states with a prescribed probability. The central theorem of Parallel Tempering is that these random exchanges do not change the convergence of any of the chains.
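For a symmetric proposal, the Metropolis–Hastings recipe referenced above reduces to the simple acceptance rule sketched here (a generic helper of ours, not code from the paper); the chain of Definition 16 below instantiates it with the Boltzmann ratio of Eq. (7):

```python
import math
import random

def metropolis_accept(delta_g_current, delta_g_proposed, T):
    """Accept or reject a symmetrically proposed state for the target
    distribution pi_T(H) proportional to exp(-Delta G(H) / T)."""
    ratio = math.exp((delta_g_current - delta_g_proposed) / T)
    return random.random() <= min(1.0, ratio)
```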
Still, these exchanges create a "tunneling effect": a state of a Markov chain at low temperature can jump from one local minimum to another local minimum. In our approach, the hypothetical energy of a hypergraph measures the deviation of its degree sequence from the prescribed one. As a consequence, at near-zero temperature the Boltzmann distribution is concentrated on the realizations of the prescribed degree sequence. The random perturbations of the Markov chains consist of a mixture of switch, hinge-flip, toggle-out and toggle-in operations. At high temperature, the Markov chain can walk freely on arbitrary hypergraphs. By exchanging the states between parallel chains, a frozen state at a low temperature can jump from one local minimum to another. In the next subsection, we give precise definitions of the Markov chain Monte Carlo approach. ### The Parallel Tempering Markov chain **Definition 15**.: _Let \(D=(D_{A},D_{B},D_{C})\) be a prescribed hypergraph degree sequence on the vertex set \(A\cup B\cup C\). Let \(d(a)\) (respectively, \(d(b)\), \(d(c)\)) denote the prescribed degree of the vertex \(a\in A\) (respectively, \(b\in B\), \(c\in C\)). Let \(H=(A,B,C,E)\) be a hypergraph. Let the degree of \(a\in A\) (respectively, \(b\in B\), \(c\in C\)) in \(H\) be denoted by \(d_{H}(a)\) (respectively, \(d_{H}(b)\), \(d_{H}(c)\)). The energy of the hypergraph \(H=(A,B,C,E)\) is defined as_ \[\Delta G(H):=\sum_{a\in A}|d(a)-d_{H}(a)|+\sum_{b\in B}|d(b)-d_{H}(b)|+\sum_{c\in C}|d(c)-d_{H}(c)|.\] _Let \(\mathcal{H}(A,B,C)\) denote the set of all possible hypergraphs on the vertex set \(A\cup B\cup C\). The Boltzmann distribution on \(\mathcal{H}(A,B,C)\) at temperature \(T\) is denoted by \(\pi_{T}\). The probability of a particular hypergraph \(H\) in this distribution is_ \[\pi_{T}(H)\propto e^{\frac{-\Delta G(H)}{T}}.\] Here \(\propto\) means "proportional to". The exact probability of a particular hypergraph is \[\pi_{T}(H)=\frac{1}{Z}e^{\frac{-\Delta G(H)}{T}},\] where \[Z:=\sum_{H\in\mathcal{H}(A,B,C)}e^{\frac{-\Delta G(H)}{T}}.\] The quantity \(Z\) is called the _partition function_. Its computation is typically as hard as sampling from the corresponding Boltzmann distribution [24]. In many applications, computing \(Z\) is not necessary since we are interested only in ratios of probabilities. Observe that \(Z\) cancels in the ratio of the probabilities of two hypergraphs. Indeed, \[\frac{\pi_{T}(H_{1})}{\pi_{T}(H_{2})}=\frac{e^{\frac{-\Delta G(H_{1})}{T}}}{e^{\frac{-\Delta G(H_{2})}{T}}}. \tag{6}\] (See also Eqs. 7, 8 and 9.) We define a Markov chain on \(\mathcal{H}(A,B,C)\). **Definition 16**.: _Let \(D=(D_{A},D_{B},D_{C})\) be a degree sequence, and let \(T>0\) be a real number. The Markov chain \(M_{T}\) walks on the hypergraphs in \(\mathcal{H}(A,B,C)\). If the current state is \(H_{t}\), then we define the next state with the following algorithm:_ 1. _With probability_ \(\frac{1}{3}\)_, we perform a 'switch' operation. We independently and uniformly choose two edges of the hypergraph_ \(e_{1},e_{2}\in E(H_{t})\)_, where_ \(e_{1}=(a_{i},b_{i},c_{i})\) _and_ \(e_{2}=(a_{j},b_{j},c_{j})\)_, and uniformly choose one vertex set_ \(X\in\{A,B,C\}\)_. For_ \(X=A\) _(respectively_ \(X=B\)_,_ \(X=C\)_), we calculate the new edges_ \(e^{\prime}_{1}=(a_{j},b_{i},c_{i}),e^{\prime}_{2}=(a_{i},b_{j},c_{j})\) _(respectively_ \(e^{\prime}_{1}=(a_{i},b_{j},c_{i}),e^{\prime}_{2}=(a_{j},b_{i},c_{j})\)_;_ \(e^{\prime}_{1}=(a_{i},b_{i},c_{j}),e^{\prime}_{2}=(a_{j},b_{j},c_{i})\)_).
If none of these new edges is in the current hypergraph, \(e^{\prime}_{1},e^{\prime}_{2}\not\in E(H_{t})\), we replace the original edges with them, that is, we take \(E(H^{\prime}):=E(H_{t})\cup\{e^{\prime}_{1},e^{\prime}_{2}\}\setminus\{e_{1},e_{2}\}\)._ 2. _With probability_ \(\frac{1}{3}\)_, we perform a 'hinge-flip' operation. We uniformly choose an edge_ \(e\in E(H_{t})\)_, uniformly choose a vertex set_ \(X\in\{A,B,C\}\)_, and for this vertex set_ \(X\)_, we uniformly choose a node_ \(x\in X\)_,_ \(x\notin e\)_. For_ \(X=A\) _(respectively_ \(X=B\)_,_ \(X=C\)_), we calculate the new edge_ \(e^{\prime}=(x,b,c)\) _(respectively_ \(e^{\prime}=(a,x,c)\)_,_ \(e^{\prime}=(a,b,x)\)_). If the new edge is not in the current hypergraph,_ \(e^{\prime}\not\in E(H_{t})\)_, we replace the original edge with the new edge, that is, we take_ \(E(H^{\prime}):=E(H_{t})\cup\{e^{\prime}\}\setminus\{e\}\)_._ 3. _With probability_ \(\frac{1}{3}\)_, we perform a 'toggle in/out' operation. We uniformly choose an arbitrary set of nodes_ \((a,b,c)\)_. If this is an edge of the current hypergraph,_ \((a,b,c)\in E(H_{t})\)_, we remove this edge ('toggle out'), that is, we take_ \(E(H^{\prime}):=E(H_{t})\setminus\{(a,b,c)\}\)_. Alternatively, if this is not an edge of the current hypergraph,_ \((a,b,c)\not\in E(H_{t})\)_, we add a new edge corresponding to this set of nodes ('toggle in'), that is, we take_ \(E(H^{\prime}):=E(H_{t})\cup\{(a,b,c)\}\)_._ _We apply the random operation on \(H_{t}\) to get a hypergraph \(H^{\prime}\). Draw a random number \(u\) uniformly distributed on the \([0,1]\) interval. Then \(H_{t+1}\) is equal to \(H^{\prime}\) if_ \[u\leq\frac{e^{\frac{-\Delta G(H^{\prime})}{T}}}{e^{\frac{-\Delta G(H_{t})}{T}}}, \tag{7}\] _and we set \(H_{t+1}\) to \(H_{t}\) otherwise._ The Markov chain in Definition 16 follows the rule of the Metropolis-Hastings algorithm [25, 21], and therefore this Markov chain converges to the Boltzmann distribution \(\pi_{T}\). Indeed, observe that for any \(H_{t}\) and \(H^{\prime}\), the probability that the algorithm we defined proposes \(H^{\prime}\) from \(H_{t}\) is exactly the probability of proposing \(H_{t}\) from \(H^{\prime}\). In the Metropolis-Hastings algorithm, a state \(y\) proposed from state \(x\) is accepted if \[u\leq\frac{\pi(y)T(x|y)}{\pi(x)T(y|x)}, \tag{8}\] where \(\pi\) is the target distribution the Markov chain converges to and \(T(a|b)\) is the probability of proposing \(a\) from a state \(b\). Here the proposal probabilities cancel, and the ratio of the probabilities of the states in the target distribution is exactly the fraction indicated (see also Eq. 6). Although Theorem 14 guarantees that switches and hinge-flips already make the Markov chain irreducible, we add toggle in/out operations to the Markov chain as they guarantee rapid mixing at high temperatures. Indeed, the state space \(\mathcal{H}(A,B,C)\) can be considered as the vertices of an \(|A|\cdot|B|\cdot|C|\)-dimensional hypercube, where each coordinate of a vertex tells whether or not the corresponding hyperedge is in the hypergraph. Observe that at infinite temperature, the Boltzmann distribution is the uniform distribution on \(\mathcal{H}(A,B,C)\). The toggle in/out operations can be considered as moves along the edges of the hypercube. It is well known that a random walk along the edges of a hypercube converging to the uniform distribution of the vertices is rapidly mixing.
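To make Definition 16 concrete, the following minimal Python sketch (our own illustration, not code from the paper) performs one step of the chain \(M_{T}\) on a toy prescribed degree sequence. The hypergraph is stored as a set of \((a,b,c)\) index triples, the energy \(\Delta G\) of Definition 15 is recomputed from scratch for clarity rather than speed, and the toy degree sequence as well as all identifiers are our assumptions; degenerate proposals (e.g. a switch when fewer than two edges exist) simply fall through to a toggle.

```python
import math, random

nA, nB, nC = 3, 3, 3
dA, dB, dC = [3] * nA, [3] * nB, [3] * nC   # toy prescribed degrees

def energy(E):
    """Delta G of Definition 15: L1 deviation of H's degrees from D."""
    degA, degB, degC = [0] * nA, [0] * nB, [0] * nC
    for a, b, c in E:
        degA[a] += 1; degB[b] += 1; degC[c] += 1
    return (sum(abs(p - q) for p, q in zip(dA, degA))
            + sum(abs(p - q) for p, q in zip(dB, degB))
            + sum(abs(p - q) for p, q in zip(dC, degC)))

def step(E, T):
    """One Metropolis-Hastings step of the chain M_T (Definition 16)."""
    H, move = set(E), random.randrange(3)
    if move == 0 and len(H) >= 2:                # 'switch'
        e1, e2 = random.sample(sorted(H), 2)
        k = random.randrange(3)                  # which vertex class A/B/C
        f1, f2 = list(e1), list(e2)
        f1[k], f2[k] = e2[k], e1[k]
        f1, f2 = tuple(f1), tuple(f2)
        if f1 not in H and f2 not in H:          # otherwise: invalid proposal
            H = (H - {e1, e2}) | {f1, f2}
    elif move == 1 and H:                        # 'hinge-flip'
        e = random.choice(sorted(H))
        k = random.randrange(3)
        f = list(e)
        f[k] = random.randrange((nA, nB, nC)[k])
        f = tuple(f)
        if f != e and f not in H:
            H = (H - {e}) | {f}
    else:                                        # 'toggle in/out'
        t = (random.randrange(nA), random.randrange(nB), random.randrange(nC))
        H = H - {t} if t in H else H | {t}
    # Accept H' with probability min(1, exp(-(dG' - dG)/T)), cf. Eq. (7).
    if random.random() <= math.exp(-(energy(H) - energy(E)) / T):
        return H
    return E

E = set()
for _ in range(10000):
    E = step(E, T=1.0)
print("final energy:", energy(E), "number of hyperedges:", len(E))
```

At a low temperature such as the one used here, the printed final energy is typically close to \(0\), i.e. the chain spends most of its time on or near realizations of the prescribed degree sequence.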
As argued above via the hypercube analogy, the toggle in/out operations alone make the random walk rapidly mixing in a chain at infinite temperature. Accommodating the other operations (switches, hinge-flips) provides even better mixing. Next, we define the Parallel Tempering. **Definition 17**.: _Let \(D=(D_{A},D_{B},D_{C})\) be a degree sequence, and let \(0<T_{1}<T_{2}<\ldots<T_{k}\) be real numbers. Let \(M_{T_{1}},M_{T_{2}},\ldots,M_{T_{k}}\) be Markov chains as defined in Definition 16. The Markov chain \(\mathcal{M}\) walks on \(\mathcal{H}(A,B,C)\times\mathcal{H}(A,B,C)\times\ldots\times\mathcal{H}(A,B,C)\) (the \(k\)-fold Cartesian product of \(\mathcal{H}(A,B,C)\)), and a random step is defined by the following algorithm:_ 1. _With probability_ \(\frac{1}{2}\)_, draw a random_ \(i\) _uniformly distributed on_ \(\{1,2,\ldots,k\}\)_, and do a random step on the_ \(i^{\text{th}}\) _coordinate according to Markov chain_ \(M_{T_{i}}\)_._ 2. _With probability_ \(\frac{1}{2}\)_, draw a random_ \(i\) _uniformly distributed on_ \(\{1,2,\ldots,k-1\}\)_._ _Draw a random number_ \(u\) _uniformly distributed on the_ \([0,1]\) _interval. If_ \[u\leq\frac{e^{\frac{-\Delta G(H_{i})}{T_{i+1}}}\times e^{\frac{-\Delta G(H_{i+1})}{T_{i}}}}{e^{\frac{-\Delta G(H_{i})}{T_{i}}}\times e^{\frac{-\Delta G(H_{i+1})}{T_{i+1}}}} \tag{9}\] _then swap the current states_ \(H_{i}\) _and_ \(H_{i+1}\) _of the Markov chains_ \(M_{T_{i}}\) _and_ \(M_{T_{i+1}}\)_, otherwise do nothing._ Here we again use the cancellation of the partition functions of the Boltzmann distributions at temperatures \(T_{i}\) and \(T_{i+1}\). Since the construction of the Markov chain \(\mathcal{M}\) follows the rule of Parallel Tempering [19], the following theorem holds: **Theorem 18**.: _The Markov chain \(\mathcal{M}\) defined in Definition 17 converges to the distribution_ \[\pi_{T_{1}}\times\pi_{T_{2}}\times\ldots\times\pi_{T_{k}},\] _that is, each coordinate is independent of the other coordinates and identical to the Boltzmann distribution on \(\mathcal{H}(A,B,C)\) at the appropriate temperature._ In practice, the number of parallel chains as well as the temperatures of these chains should be designed carefully. There are three basic rules that should be followed: 1. The zero-energy states (here: the realizations of the prescribed degree sequence) should be a non-negligible part of the Boltzmann distribution at the lowest temperature. 2. The Boltzmann distribution should be close to the uniform distribution at the highest temperature. 3. The acceptance probability of swapping states (that is, the probability that \(u\) is smaller than the fraction on the right-hand side of Eq. 9) should be relatively large. ## Application: exact \(\chi^{2}\) test ### Exact \(\chi^{2}\) test _Aggregation_ is a term in ecology for the association (i.e. correlated distribution) of species. In hypergraphs where one of the vertex classes (say \(A\)) represents agents (species, users, etc.), we shall use the term aggregation for the association of the connected vertices of the other two vertex classes (say \(B\) and \(C\)). For measuring _hypergraph aggregation_, we propose an aggregation index \(\chi^{2}_{H}\). Let us take \(\tilde{G}_{BC}\), the \((B,C)\)-projection of \(H\), and store the number of its parallel edges between \((b_{i},c_{j})\) as the entry \(t_{ij}\) of a matrix \(T\).
The _expected_ number of parallel edges \(e_{ij}\) in the absence of association can be calculated from the contingency table \(T\) (the row and column sums are the degree sequences of vertex classes \(B\) and \(C\), and the total sum is \(2|E|\)): \[e_{ij}=\frac{\left(\sum_{k}t_{kj}\right)\left(\sum_{l}t_{il}\right)}{\sum_{k}\sum_{l}t_{kl}}.\] The aggregation of \(H\) is then \[\chi^{2}_{H}=\sum_{i}\sum_{j}\frac{(t_{ij}-e_{ij})^{2}}{e_{ij}}.\] To decide whether or not a given \(\chi^{2}_{H}\) suggests significant hypergraph aggregation, one has to compare its value to a \(\chi^{2}\) distribution: this is a \(\chi^{2}\) test. As there are several ways to determine the \(\chi^{2}\) distribution, there are also different \(\chi^{2}\) tests. The _theoretical \(\chi^{2}\) test_ disregards the fact that agents place the _(event, time point)_ entries, and also disregards the finiteness of the sample; that is, it assumes that the \(\chi^{2}\) values follow the \(\chi^{2}\)-distribution with \((n_{b}-1)(n_{c}-1)\) degrees of freedom. The _exact \(\chi^{2}\) test_ also disregards the fact that agents place the _(event, time point)_ entries, but it does consider the finiteness of the sample. That is, it defines the \(\chi^{2}\) distribution via the uniform distribution of the placements with prescribed row and column sums, which is the generalized hypergeometric distribution (see Eq. 1) of the possible contingency tables with prescribed row and column sums. The prescribed row and column sums are the degree sequences \(D_{B}\) and \(D_{C}\). It is similar to Fisher's exact test, as larger \(\chi^{2}\) values highly correlate with smaller probabilities in the hypergeometric distribution. To see this correlation, observe that the probabilities in the hypergeometric distribution are inversely proportional to the product of the factorials of the entries \(t_{i,j}\). This product is the smallest when the entries are distributed as evenly as possible, subject to the constraint of prescribed row and column sums. The _hypergraph-based exact \(\chi^{2}\) test_ defines the \(\chi^{2}\) distribution via the uniform distribution of the hypergraphs whose degree sequence is that of \(H\). Though it is infeasible to generate all possible hypergraphs even for short degree sequences, the exact \(\chi^{2}\) distribution can be computed from a uniform sample of hypergraphs with prescribed degree sequence. Such a sample can be obtained with the Parallel Tempering method detailed above. Generally, exact tests estimate the \(p\)-value as the frequency of the sampled cases having a more extreme statistic than the tested case. For small \(p\)-values, it frequently happens that none of the samples has a more extreme statistic than the tested case; then the inverse of the sample size gives an upper bound for the \(p\)-value. Here, to allow for a higher precision than the reciprocal of the sample size, we approximate the sampled distribution with a normal distribution of the corresponding mean and standard deviation, and calculate the \(p\)-value from this normal distribution. Observe the following. Let \(D_{B}\) and \(D_{C}\) be the row and column sums of a contingency table with total sum \(N\). Then the exact \(\chi^{2}\) test with row and column sums \(D_{B}\) and \(D_{C}\) equals the hypergraph-based exact \(\chi^{2}\) test with degree sequence \(D=(D_{A},D_{B},D_{C})\), where \(D_{A}\) is a sequence of \(1\)s of length \(N\).
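Before justifying this observation, the following short sketch (our illustration, not the authors' code) summarizes the computation of the aggregation index described above. It assumes the simple convention that each hyperedge contributes one count \(t_{ij}\) to the \((B,C)\)-projection; the totals should be adapted if parallel edges are counted differently.

```python
import numpy as np

def chi2_aggregation(T):
    """Aggregation index chi^2_H from a |B| x |C| count matrix T."""
    T = np.asarray(T, dtype=float)
    row = T.sum(axis=1, keepdims=True)      # row sums (degrees in class B)
    col = T.sum(axis=0, keepdims=True)      # column sums (degrees in class C)
    E = row @ col / T.sum()                 # e_ij computed from the margins
    return float(((T - E) ** 2 / E).sum())

def projection_counts(edges, nB, nC):
    """t_ij: number of hyperedges projecting onto the pair (b_i, c_j)."""
    T = np.zeros((nB, nC))
    for _, b, c in edges:
        T[b, c] += 1
    return T

# Toy usage: three agents, each placing one (event, time point) entry.
edges = [(0, 0, 0), (1, 0, 0), (2, 1, 1)]
print(chi2_aggregation(projection_counts(edges, nB=2, nC=2)))
```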
Returning to the observation above: for each possible contingency table \(T\) with entries \(t_{i,j}\) and row and column sums \(D_{B}\) and \(D_{C}\), there are exactly \(\binom{N}{t_{1,1},t_{1,2},\ldots,t_{|D_{B}|,|D_{C}|}}\) hypergraph realizations of \(D\) with \((B,C)\)-projection \(T\). This observation indicates that the difference between the exact and hypergraph-based exact \(\chi^{2}\) tests vanishes when each agent has degree \(1\), that is, places exactly one _(event, time point)_ entry. We shall illustrate the effect of changing the degrees of the agents by considering degree sequences with fixed \(D_{B}\) and \(D_{C}\) and varying \(D_{A}\). We generated large (\(n=2000\)) samples of random regular hypergraphs and obtained their empirical \(\chi^{2}\) distributions, see Fig 1. These hypergraphs have \(n_{1}\in\{3,4,5,6,10,12,15,20,30,60,600,7200\}\) nodes in vertex class \(A\), \(n_{2}=n_{3}=60\) nodes in vertex classes \(B\) and \(C\), and have \(7200\) hyperedges. That is, \(D_{B}\) and \(D_{C}\) are fixed to be \(120\)-regular (\(60\) times \(120\) makes \(7200\)), and \(D_{A}\) varies from \(2400\)-regular to \(1\)-regular. We find that having more agents (i.e. more vertices in vertex class \(A\), thus smaller degrees) leads to a higher mean aggregation of the null distribution (see Fig 1). The distribution for \(D_{A}=1\) corresponds to the null distribution of the exact \(\chi^{2}\) test. Based on this example, one shall expect that the null distribution of the exact \(\chi^{2}\) test will have a higher mean than that of the hypergraph-based exact \(\chi^{2}\) test, and consequently be less sensitive in identifying hypergraph aggregation. In the next subsection, we shall present an illustrative case in which the hypergraph-based exact \(\chi^{2}\) test shows significant aggregation that the exact and theoretical \(\chi^{2}\) tests cannot discern from no aggregation.

Fig 1: **The aggregation index distribution of random regular hypergraphs with varying degrees of agents.** The hypergraphs have fixed degree sequences \(D_{B}\) and \(D_{C}\), both of them \(120\)-regular on \(60\) vertices. The degree sequence \(D_{A}\) varies from \(d=2400\)-regular to \(d=1\)-regular on \(\frac{120\cdot 60}{d}\) vertices. As the degree of the agents decreases, the aggregation index increases on average. See text for more details.

### Application on Twitter data We turn to real-world data, a COVID-19 vaccination-related Twitter data set collected during the first six months of 2021, used previously for vaccine skepticism detection [3] and sentiment analysis [4]. There are 33K tweets in the data set, which the authors collected by specifying vaccination-related keywords1 to the public Twitter Search API. For each tweet, the following variables were recorded: the author (user ID), the author's categorization (healthcare professional, news media source, other accounts with thousands of followers), the vaccine mentioned, the language and the general sentiment2 of the tweet (on a scale of 1 to 5, from negative to positive tone), and the date of publication (to the precision of seconds). When multiple vaccines are mentioned in a tweet, it is recorded as multiple tweets, one for each vaccine. Footnote 1: Set of keywords used for data collection: vaccine, vaccination, vaccinated, vaxzer, vaxzers, #Covid-Vaccine, ”covid denier”, pfizer, moderna, ”astra” and ”zeneca”, sinopharm, sputnik.
Footnote 2: BERT-based model used for multilingual sentiment analysis: [https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) The Twitter data set provides the source for our study of hypergraph aggregation. We can construct a hypergraph from these data for each selection of three discrete variables that serve as the three vertex classes. Their unique values become the vertices, and then for each tweet, a hyperedge connects the respective vertices. Identical hyperedges are treated as a single hyperedge, not as multiedges. In case study #1, we proceed with a natural choice: the three sets correspond to the author, the vaccine mentioned, and the date of publication (to the precision of a day). We found that the corresponding hypergraph is extremely aggregated (Fig 2). This result should not come as a surprise considering what no aggregation would mean: that each vaccine was mentioned in the same proportion of tweets on each day, i.e. irrespective of news selectively affecting vaccines (e.g. peaks after March 19: scientists find a link between the AstraZeneca vaccine and rare blood clotting; March 31: Pfizer reports 100% efficacy for teenagers). Also, we found that this result is independent of the method. In line with what we expect based on Fig 1, we find in Fig 2 that the hypergraph-based \(\chi^{2}\) values are shifted to the left compared to the exact and theoretical \(\chi^{2}\) values. To check whether this translates into the hypergraph-based \(\chi^{2}\) test being more sensitive in showing significant aggregation, we simulate having much less data to study. Case study #2 has \(n_{1}=4\) authors, randomly chosen from the authors of case study #1 (1.5 percent), and only their tweets are kept. Here our expectation is confirmed: the hypergraph-based method still shows significant aggregation (\(p\ll 0.05\)), but the exact and theoretical methods do not (\(p>0.05\)) (Fig 3). We report the design and the performance of the Parallel Tempering method for case study #2. Miklos and Tannier (Appendix B in [27]) gave a general design of how to set up parallel chains in Parallel Tempering. They used a quite weak but easy-to-compute upper bound on the acceptance probability of swapping states between the parallel chains, based on the maximum possible difference between the energies of the states. Their method could yield an extremely large prescribed number of parallel chains, because here the maximum difference between the energies of states to be swapped is the sum of the degrees in the complete tripartite hypergraph minus the sum of the given degrees, that is, \(3\cdot 4\cdot 5\cdot 164-3\cdot 517=8289\). Instead, we ran independent Markov chains to give a rough estimation of the quartiles of the energies of the hypergraphs in the Boltzmann distributions at several temperatures, see Fig 4. Then we set the temperatures such that the upper quartile at the colder temperature equals the lower quartile at the warmer temperature. This ensures that with probability at least \(\left(\frac{1}{4}\right)^{2}\), the energy of the state of the colder chain is larger than the energy of the state of the warmer chain, in which case the acceptance probability is 1. That is, the acceptance probability between the chains must be at least 6.25% (in the other cases, the swap between the two chains might be accepted with non-zero probability, too). The observed acceptance probabilities in the Parallel Tempering were at least 20%, as shown in Fig. 5.
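The quartile-matching design just described is easy to automate. The sketch below is our illustration (the pilot-chain quartile estimator is replaced by a made-up analytic stand-in): it builds a temperature ladder by bisection on monotone quartile curves, and also evaluates the swap acceptance probability of Eq. (9), in which the partition functions cancel.

```python
import math

def build_ladder(T_min, T_max, quartiles):
    """quartiles(T) -> (q25, q75) of the energy in the Boltzmann
    distribution at temperature T, e.g. estimated by short pilot chains.
    Returns temperatures such that the upper quartile at T_i equals the
    lower quartile at T_{i+1}, assuming the quartile curves increase in T."""
    ladder = [T_min]
    while ladder[-1] < T_max:
        target = quartiles(ladder[-1])[1]      # upper quartile at T_i
        lo, hi = ladder[-1], T_max
        for _ in range(60):                    # bisection on q25(T) = target
            mid = 0.5 * (lo + hi)
            if quartiles(mid)[0] < target:
                lo = mid
            else:
                hi = mid
        ladder.append(hi)
    return ladder

def swap_accept(G_i, G_j, T_i, T_j):
    """Acceptance probability of swapping states with energies G_i, G_j
    between chains at temperatures T_i < T_j, cf. Eq. (9)."""
    return min(1.0, math.exp((G_i - G_j) * (1.0 / T_i - 1.0 / T_j)))

# Toy quartile model: energy grows roughly linearly with T (an assumption).
ladder = build_ladder(0.1, 148.0, lambda T: (20.0 * T, 25.0 * T + 5.0))
print(len(ladder), "temperatures")
print(swap_accept(G_i=10.0, G_j=8.0, T_i=1.0, T_j=2.0))
```

Note that a swap is always accepted when the colder chain holds the higher-energy state, which is exactly the event the quartile-matching rule makes happen with probability at least \(1/16\).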
With this protocol, we defined 64 temperatures. The hypergraphs with 0 energy (that is, realizations of the prescribed degree sequence) constituted more than 90% of the Boltzmann distribution at the coldest temperature. Fig. 6 shows the acceptance probabilities of the three types of operations in the individual Markov chains (switches, hinge-flips, toggles), as well as the probabilities of proposing an invalid operation (that is, trying to add a hyperedge to a position where there is already a hyperedge). Observe that any valid switch operation is accepted with probability 1, since a switch operation does not change the energy of the state. Therefore the sum of the switch acceptance probability and the invalid-switch probability is 1 at any temperature. Toggle in/out and hinge-flip operations change the energy of the current state. Since the probability of changing the energy in the positive direction is higher than the probability of decreasing the energy, toggle in/outs and hinge-flips are accepted with small probabilities at low temperatures. However, at high temperatures the hinge-flip acceptance and the invalid hinge-flip probabilities sum to almost 1. The same holds for the toggle in/out acceptance and invalid toggle in/out probabilities. Therefore, these probabilities give evidence that the Boltzmann distribution of the warmest chain is close to an Erdős-Rényi distribution of hypergraphs with \(p=0.5\), that is, when each potential hyperedge is in the hypergraph with probability \(p=0.5\). Indeed, in such a case, there is a 0.25 probability that neither of the proposed new hyperedges defined by a switch operation is in the current hypergraph. This is in accordance with the ca. 75% probability that a proposed switch is invalid in the warmest chain.

Fig 2: **Both the exact and the hypergraph-based exact \(\chi^{2}\) tests can identify strong aggregation.** Case study #1. **a)** Hypergraph \(H\) (_vertical line_) corresponds to a data set of 33K tweets, incorporating their 22434 unique _(author, vaccine, date)_ triplets as hyperedges. The dark green histogram shows a uniform distribution of hypergraphs with the same degree sequences as \(H\); the corresponding test is the _hypergraph-based exact \(\chi^{2}\) test_. The light green histogram shows a uniform distribution of graphs with the same degree sequences as the \((B,C)\)-projection of \(H\); the corresponding test is the _exact \(\chi^{2}\) test_. The distribution of the _theoretical \(\chi^{2}\) test_ (_dashed blue line_) closely follows that of the _exact \(\chi^{2}\) test_. Note that the horizontal axis is broken. **b)** The contingency table of the \((B,C)\)-projection of \(H\) is also suggestive of aggregation: its patterns depart from what could be explained by its row and column means (top and right bars).

Fig 3: **The sensitivity of the exact and the hypergraph-based exact \(\chi^{2}\) tests differ.** Case study #2. Hypergraph \(H\) corresponds to the tweets of a small subset, 1.5%, of the authors of case study #1 (765 tweets, of which 517 are unique). \(H\) shows significant aggregation according to the hypergraph-based exact \(\chi^{2}\) test but not according to the exact \(\chi^{2}\) test. The three vertex classes \(A,B,C\) of the hypergraph correspond respectively to Twitter user, vaccine type, and day of the tweet. Panels a) and b) correspond to those of Fig 2. Panel c) shows a rearrangement of the contingency table of panel b).
Similarly, if each hyperedge is in the hypergraph with probability 0.5, then there is a 0.5 probability for a valid hinge flip, and thus the probability of an invalid hinge-flip is 50%. Note that the uniform distribution of all possible hypergraphs is the Erdős-Rényi distribution of hypergraphs with \(p=0.5\). A rough estimation of the expected energy at infinite temperature can be computed as the sum of the absolute differences between the prescribed degrees and half the maximal degrees. In case study #2, it is 3369. The lower and upper quartiles at the maximal temperature \(T=148\) were 3283 and 3390. This means that the warmest chain can be considered as essentially having infinite temperature, and thus, at that temperature the Markov chain is rapidly mixing. Further, this uniform distribution is cooled down to the distribution containing mainly the realizations of the prescribed degree sequence via largely overlapping Boltzmann distributions. It took around 5 hours to generate 1854 samples of the prescribed degree sequence (using a custom Python script run on a single ca. 3GHz processor). The program performed 201065 Markov chain Monte Carlo steps in the Parallel Tempering framework. The expected number of steps inside the coldest chain was set so that, in expectation, each hyperedge is switched once between two samples. The convergence of the Parallel Tempering was further confirmed by autocorrelation analysis and by independent runs with a different starting position (data not shown).

Fig 4: **Temperatures selected for the Parallel Tempering of case study #2.** Energies in the Boltzmann distribution were explored at 100 locations, regularly spaced along the logarithmic temperature axis, by independent Markov chains. The interpolation of their lower and upper quartiles, respectively, provides the orange and blue lines; some noise was removed from the lines to make them monotonic. The gray staircase line depicts the temperature-selection procedure: the lower quartile at temperature \(T_{i}\) is equal to the upper quartile at temperature \(T_{i-1}\). Black dots indicate the temperatures selected this way. See text for more details.

## Conclusions Partite, 3-uniform hypergraphs naturally appear in data science, and frequently we are interested in the marginals of two dimensions of these hypergraphs. In such marginals, it is important to consider the third dimension, the "agents" that place the items in the contingency table. As we have shown in this paper, agents placing many items into the contingency table distribute the entries more evenly. This more balanced distribution causes a shift of the \(\chi^{2}\) distribution towards smaller values. Therefore, a hypergraph-based \(\chi^{2}\) test will be more sensitive than the theoretical \(\chi^{2}\) test, which does not consider the effect of the agents. The exact computation of the hypergraph-based \(\chi^{2}\) distribution is computationally infeasible, as there might be a large number of possible hypergraphs with the prescribed degrees. Moreover, as we also showed in this paper, it is already NP-complete to decide whether a partite, 3-uniform hypergraph exists with prescribed degrees. Therefore it is natural to develop a Monte Carlo method for computing the hypergraph-based \(\chi^{2}\) distribution. This requires the random generation of partite, 3-uniform hypergraphs with prescribed degrees. We proposed a Parallel Tempering MCMC method, in which the hypothetical energy measures the deviation from the prescribed degree sequence.
The transitions of the MCMC consist of switches, hinge-flips and toggle ins/outs, of which switches preserve the degree sequence while hinge-flips and toggle ins/outs do not. We proved that switches are irreducible on realizations of third almost-regular degree sequences, which appear at high temperatures in the Parallel Tempering. We also showed that on small data sets, it is possible to heat the Boltzmann distribution up to the uniform distribution of all possible hypergraphs.

Fig 5: **Acceptance probabilities of swapping the states of neighboring chains in the Parallel Tempering of case study #2.** On the horizontal axis, we show the temperature of the warmer chain \(T_{i}\); i.e., the swap occurs between chains of temperatures \(T_{i-1}\) and \(T_{i}\).

It is easy to see that toggle ins/outs alone provide rapid mixing of this Boltzmann distribution; yet, it is possible to design a moderate number of parallel chains such that the Boltzmann distributions of consecutive chains have a significant overlap (expressed in large acceptance probabilities of swapping their states), and the realizations of the prescribed degree sequence dominate the Boltzmann distribution of the coldest chain. The Parallel Tempering MCMC was tested on both synthetic and real data. We showed that the hypergraph-based \(\chi^{2}\) test is indeed more sensitive than the theoretical \(\chi^{2}\) test. This might be especially important when the scarcity of data reduces the power of the theoretical \(\chi^{2}\) test (i.e. its probability of correctly rejecting the null hypothesis). Although our theoretical results suggest that even the Parallel Tempering method becomes infeasible to run for some inputs, the performance of the method is reasonably good on small amounts of data, exactly when it is needed for more sensitive testing. We see several potential improvements in the Parallel Tempering method; hereby we mention a few. The convergence of the Markov chain might be accelerated with a greedy start. Such a greedy start has already been successfully applied in a Monte Carlo method to sample binary contingency tables, that is, bipartite graphs or, in other words, partite, 2-uniform hypergraphs [5]. We opted to choose switches, hinge-flips and toggle ins/outs uniformly as transitions in the Markov chains. However, non-uniform distributions might yield higher acceptance probabilities in the Metropolis-Hastings algorithm and thus faster convergence. Indeed, at low temperatures, the hinge-flips and toggle ins/outs that increase the deviation from the prescribed degree sequence are accepted with a small probability and thus should be proposed only with a small probability. Also, appropriately setting the temperatures of the parallel chains as well as the number of parallel chains might improve the Parallel Tempering method.

Fig 6: **Acceptance of intrachain operations in the Parallel Tempering of case study #2.** Acceptance probabilities consist of the probability of proposing a valid operation multiplied by the probability of accepting it. “Invalid” denotes the probability of proposing an invalid operation.

There are also theoretical questions remaining. We proved that switches are irreducible on the realizations of third almost-regular degree sequences. We conjecture that switches might be irreducible for a broader class of degree sequences.
In ongoing work, we aim to prove that the degree sequence realization problem is easy for partite, 3-uniform hypergraphs if the degree sequences are linearly bounded, that is, each degree in the \(i^{\text{th}}\) vertex class is between \(c_{1}\times n_{i+1}\times n_{i+2}\) and \(c_{2}\times n_{i+1}\times n_{i+2}\) for some \(0<c_{1}<c_{2}<1\), where the indexes in \(n_{j}\) are taken modulo 3. So far we have not been able to prove switch irreducibility for this class, but we conjecture that switches are irreducible on the realizations of such degree sequences. The ultimate goal would be to identify degree sequence classes whose corresponding Markov chains are rapidly mixing on their realizations. Proving rapid mixing even for regular degree sequences is far from obvious, since it does not follow from the rapid mixing of Markov chains on bipartite graph realizations of regular degree sequences. Indeed, note that the \((A,B)\)-projection (see Def. 8) might be regular or extremely irregular even in the case of regular degree sequences. Further, the number of hypergraphs with different \((A,B)\)-projections might vary in an unknown manner, hindering the application of available proof techniques based on the decomposition of the state space [13]. The Parallel Tempering method might help to identify easy-to-sample degree sequences. Indeed, for bipartite graphs, rapid mixing of a Simulated Annealing technique (a method quite similar to Parallel Tempering) is proved for arbitrary degree sequences [5], while the rapid mixing of the switch Markov chain is proved only for a large class of degree sequences [12]. There are necessary and sufficient conditions for when a Parallel Tempering is rapidly mixing that might be utilized here [31, 32]. ## Acknowledgments Our research was supported by the European Union project RRF2.3.1-21-2022-00004 within the framework of the Artificial Intelligence National Laboratory Grant no RRF-2.3.1-21-2022-00004. AH and IM were supported by the European Union project RRF2.3.1-21-2022-00006 within the framework of Health Safety National Laboratory Grant no RRF-2.3.1-21-2022-00006. IM was further supported by NKFIH grant K132696.
2310.09043
Midpoint geometric integrators for inertial magnetization dynamics
We consider the numerical solution of the inertial version of the Landau-Lifshitz-Gilbert equation (iLLG), which describes high-frequency nutation on top of magnetization precession due to angular momentum relaxation. The iLLG equation defines a higher-order nonlinear dynamical system with a very different nature compared to the classical LLG equation, requiring twice as many degrees of freedom for space-time discretization. It exhibits essential conservation properties, namely magnetization amplitude preservation, magnetization projection conservation, and a balance equation for generalized free energy, leading to a Lyapunov structure (i.e. the free energy is a decreasing function of time) when the external magnetic field is constant in time. We propose two second-order numerical schemes for integrating the iLLG dynamics over time, both based on the implicit midpoint rule. The first scheme unconditionally preserves all the conservation properties, making it the preferred choice for simulating inertial magnetization dynamics. However, it implies doubling the number of unknowns, necessitating significant changes in numerical micromagnetic codes and increasing computational costs, especially for spatially inhomogeneous dynamics simulations. To address this issue, we present a second time-stepping method that retains the same computational cost as the implicit midpoint rule for classical LLG dynamics while unconditionally preserving magnetization amplitude and projection. Special quasi-Newton techniques are developed for solving the nonlinear system of equations required at each time step due to the implicit nature of both time-steppings. The numerical schemes are validated on the analytical solution for the macrospin terahertz frequency response, and the effectiveness of the second scheme is demonstrated with full micromagnetic simulation of inertial spin wave propagation in a magnetic thin-film.
Massimiliano d'Aquino, Salvatore Perna, Claudio Serpico
2023-10-13T12:11:50Z
http://arxiv.org/abs/2310.09043v1
# Midpoint geometric integrators for inertial magnetization dynamics ###### Abstract We consider the numerical solution of the inertial version of the Landau-Lifshitz-Gilbert equation (iLLG), which describes high-frequency nutation on top of magnetization precession due to angular momentum relaxation. The iLLG equation defines a higher-order nonlinear dynamical system with a very different nature compared to the classical LLG equation, requiring twice as many degrees of freedom for space-time discretization. It exhibits essential conservation properties, namely magnetization amplitude preservation, magnetization projection conservation, and a balance equation for generalized free energy, leading to a Lyapunov structure (i.e. the free energy is a decreasing function of time) when the external magnetic field is constant in time. We propose two second-order numerical schemes for integrating the iLLG dynamics over time, both based on the implicit midpoint rule. The first scheme unconditionally preserves all the conservation properties, making it the preferred choice for simulating inertial magnetization dynamics. However, it implies doubling the number of unknowns, necessitating significant changes in numerical micromagnetic codes and increasing computational costs, especially for spatially inhomogeneous dynamics simulations. To address this issue, we present a second time-stepping method that retains the same computational cost as the implicit midpoint rule for classical LLG dynamics while unconditionally preserving magnetization amplitude and projection. Special quasi-Newton techniques are developed for solving the nonlinear system of equations required at each time step due to the implicit nature of both time-steppings. The numerical schemes are validated on the analytical solution for the macrospin terahertz frequency response, and the effectiveness of the second scheme is demonstrated with full micromagnetic simulation of inertial spin wave propagation in a magnetic thin-film. keywords: magnetic inertia, terahertz spin nutation, micromagnetic simulations, inertial Landau-Lifshitz-Gilbert (iLLG) equation, implicit midpoint rule, numerical methods. ## 1 Introduction The study of ultra-fast magnetization processes has become increasingly important in recent years, particularly for its potential applications to future generations of nanomagnetic and spintronic devices [1]. Since the pioneering experiment by Beaurepaire et al. [2] that revealed subpicosecond spin dynamics, the investigation of ultra-fast magnetization processes has attracted the attention of many research groups, leading to a considerable body of research [3; 4; 5; 6; 7; 8; 9; 10]. Recently, there have been exciting experimental developments in the direct detection of spin nutation in ferromagnets in the terahertz range [11; 12]. This has confirmed the presence of inertial effects in magnetization dynamics, which were theoretically predicted several years ago [13; 14; 15]. Nutation-like magnetization motions in nanomagnets occurring at gigahertz frequencies under the action of time-harmonic applied external magnetic fields were also studied theoretically in past decades within the classical precessional dynamics[16]. From a technological perspective, the observation of terahertz spin nutation opens up new possibilities for exploiting novel ultra-fast regimes. For instance, it may be possible to use strong picosecond field pulses to drive ballistic magnetization switching into the inertial regime [17; 18; 19; 20; 21; 22].
This has important implications for the development of ultra-fast magnetic devices, and it also has fundamental implications for the physics of magnetism. From a theoretical point of view, inertial magnetization dynamics can be described by augmenting the classical Landau-Lifshitz-Gilbert (LLG) precessional dynamics with a torque term modeling intrinsic angular momentum relaxation [13; 14]. This approach has been successful in explaining the observed high-frequency spin nutation in uniformly-magnetized ferromagnetic samples [11], for which magnetization dynamics is governed by the following inertial version of the Landau-Lifshitz-Gilbert equation[13; 14]: \[\frac{d\mathbf{M}}{dt}=-\gamma\mathbf{M}\times\left(\mathbf{H}_{\rm eff}-\frac{\alpha}{\gamma M_{s}}\frac{d\mathbf{M}}{dt}-\tau^{2}\frac{d^{2}\mathbf{M}}{dt^{2}}\right)\quad, \tag{1}\] where \(\mathbf{M}(t)\) is the magnetization vector field (\(M_{s}\) is the saturation magnetization of the material), \(\mathbf{H}_{\rm eff}\) is the magnetic effective field, \(\alpha\) is the Gilbert damping, \(\gamma\) is the absolute value of the gyromagnetic ratio and \(\tau\) defines the time scale of inertial magnetic phenomena. However, when spatial changes of magnetization do occur in magnetic systems at the nano- and micro-scale, the description of spatially-inhomogeneous ultra-fast magnetization dynamics occurring on sub-picosecond time scales becomes a challenging problem that requires an appropriate extension of eq.(1) to take into account space-varying vector fields in the region \(\Omega\) occupied by the ferromagnetic body. This extension leads to the formulation of a novel equation where, formally, the total derivatives with respect to time become partial and the effective field is given by the variational derivative of the Gibbs-Landau free energy functional[23], resulting in the following: \[\frac{\partial\mathbf{M}}{\partial t}=-\gamma\mathbf{M}\times\left(\mathbf{H}_{\rm eff}-\frac{\alpha}{\gamma M_{s}}\frac{\partial\mathbf{M}}{\partial t}-\tau^{2}\frac{\partial^{2}\mathbf{M}}{\partial t^{2}}\right)\quad, \tag{2}\] where generally the natural (homogeneous Neumann) boundary conditions \(\partial\mathbf{M}/\partial\mathbf{n}=0\) are inherited from the classical LLG equation when no surface anisotropy is present at the body surface \(\partial\Omega\). Equation (2) reduces to the classical precessional LLG equation when no inertia is considered (i.e. \(\tau=0\)). Nevertheless, despite this apparent similarity, eq.(2) has a profoundly different nature in that it has hyperbolic (wave-like) character (instead of parabolic, as the classical LLG equation) and admits the possibility of travelling solutions (spin waves) with finite propagation speed[24]. For this reason, the iLLG dynamics deserves a dedicated investigation in its own right. In this respect, based on equations (1),(2), a number of theoretical studies have been proposed in recent years to characterize terahertz spin nutation[25; 26; 27; 28; 29; 30; 31; 32]. Most of these interesting studies rely on analytical approaches valid in idealized situations such as, for instance, the analysis of magnetization oscillations in single-domain particles (macrospin) or small-amplitude spin wave propagation in infinite media. Very recently, the possibility of observing propagation of ultra-short inertial spin waves in confined ferromagnetic thin-films driven by ac terahertz fields has also been theoretically demonstrated[24].
These waves exhibit behavior that deviates significantly from that of classical exchange spin waves and can propagate at a finite speed up to a limit of several thousand meters per second, which is comparable with the velocity of surface acoustic waves. While such phenomena occurring in confined micromagnetic systems mainly involve magnetization oscillations around equilibria and can be investigated by analyzing the inertial LLG dynamics in the linear regime, no such possibility exists when far-from-equilibrium dynamics such as nonlinear oscillations[33], magnetization switching[21] or even chaos[34] are considered. In these situations, where no analytical techniques can be applied, one has to resort to numerical simulation. In this respect, after the experimental evidence of terahertz spin nutation[11], the study of inertial effects in magnetization dynamics is rapidly becoming an emergent field of research and, consequently, the need for accurate and efficient computational techniques that exploit the intrinsic properties of the nutation dynamics beyond off-the-shelf time-stepping schemes is growing fast, too. Nonetheless, at the present moment, very few works[35; 36] address ad-hoc numerical techniques for the inertial magnetization dynamics. In this paper, after illustrating the general qualitative conservation properties of the continuous inertial magnetization dynamics, we propose suitable time-integration schemes based on the implicit midpoint rule[37] for the numerical solution of the inertial LLG (iLLG) equation, and their relevant properties are discussed. The midpoint rule is an unconditionally stable, second-order accurate scheme which preserves the fundamental geometrical properties of the classical LLG dynamics[38]. The first time-stepping proposed here is shown to preserve all relevant conservation properties of the iLLG dynamics unconditionally, i.e. regardless of the time step amplitude. Despite these remarkable properties, we show that, in general, the numerical integration of inertial magnetization dynamics must address the issue of the higher order of the dynamical system that it describes, which implies dramatic changes to micromagnetic codes and in any case results in at least doubling the computational cost of the numerical scheme as compared to classical LLG dynamics. This has a huge impact when micromagnetic simulations with full spatial discretization on hundreds of thousands (or more) of computational cells have to be performed, such as in the case of (sub)micron-sized magnetic systems. For this reason, we develop an additional efficient implementation of the midpoint rule technique for iLLG dynamics, based on a suitable multistep method for the inertial term, which can be built on top of that associated with classical LLG dynamics and therefore retains a computational cost of the same order of magnitude. The proposed techniques are first validated by computing the frequency response of a magnetic thin-film, modeled as a single spin (macrospin) magnetized along the easy direction and subject to an out-of-plane ac field, and comparing the results with the analytical solution. Then, full micromagnetic simulations of inertial spin wave propagation in a ferromagnetic nanodot are performed in order to demonstrate the accuracy and effectiveness of the second proposed time-stepping in reproducing spatially-inhomogeneous ultra-fast spin nutation dynamics.
## 2 Inertial magnetization dynamics and qualitative properties The starting point of the discussion is the inertial Landau-Lifshitz-Gilbert (iLLG) equation (2), expressed in dimensionless form[13; 24]: \[\frac{\partial\mathbf{m}}{\partial t}=-\mathbf{m}\times\left(\mathbf{h}_{\rm eff}-\alpha\frac{\partial\mathbf{m}}{\partial t}-\xi\frac{\partial^{2}\mathbf{m}}{\partial t^{2}}\right)\, \tag{3}\] where \(\mathbf{m}(\mathbf{r},t)\) is the magnetization unit-vector (normalized by the saturation magnetization \(M_{s}\)) at each location \(\mathbf{r}\in\Omega\) (\(\Omega\) is the region occupied by the magnetic body), time is measured in units of \((\gamma M_{s})^{-1}\) (corresponding to 5.7 ps for \(\gamma=2.21\times 10^{5}\,\mathrm{m\,A^{-1}s^{-1}}\) and \(\mu_{0}M_{s}=1\,\)T), \(\alpha\) is the dimensionless and positive Gilbert damping parameter (typically of order \(10^{-3}\div 10^{-2}\)), and the parameter \(\xi\) measures the strength of inertial effects in magnetization dynamics. It is worth noting (see eq.(2)) that the dimensionless quantity \(\xi\) can be expressed as \(\xi=(\gamma M_{s}\tau)^{2}\), where \(\tau\) determines the physical time-scale of magnetic inertia, for which previous works[13; 11; 21] assessed the order of magnitude as fractions of a picosecond (this implies that typically \(\xi\sim 10^{-2}\)). Thus, the inertial effects in magnetization dynamics are governed by a quantity of the same smallness as the usual Gilbert damping \(\alpha\sim 10^{-2}\). The effective field \(\mathbf{h}_{\rm eff}(\mathbf{r},t)\) is given by[23]: \[\mathbf{h}_{\rm eff}=-\frac{\delta g}{\delta\mathbf{m}}\, \tag{4}\] which takes into account the interactions (exchange, anisotropy, magnetostatics, Zeeman) among magnetic moments and is expressed as the variational derivative of the free energy functional (the dimensionless energy is measured in units of \(\mu_{0}M_{s}^{2}V\), with \(V\) being the volume of the region \(\Omega\)) \[g(\mathbf{m},\mathbf{h}_{a})=\frac{1}{V}\int_{\Omega}\frac{l_{\rm ex}^{2}}{2}(\nabla\mathbf{m})^{2}+f_{\rm an}-\frac{1}{2}\mathbf{h}_{m}\cdot\mathbf{m}-\mathbf{h}_{a}\cdot\mathbf{m}\,dV\,, \tag{5}\] where \(A\) and \(l_{\rm ex}=\sqrt{(2A)/(\mu_{0}M_{s}^{2})}\) are the exchange stiffness constant and length, respectively, \(f_{\rm an}\) is the anisotropy energy density, \(\mathbf{h}_{m}\) is the magnetostatic (demagnetizing) field and \(\mathbf{h}_{a}(\mathbf{r},t)\) is the external applied field. When the anisotropy is of uniaxial type, such that \(f_{\rm an}=\kappa_{\rm an}[1-(\mathbf{m}\cdot\mathbf{e}_{\rm an})^{2}]\) with \(\kappa_{\rm an}\) and \(\mathbf{e}_{\rm an}\) being the uniaxial anisotropy constant and unit-vector, respectively, the effective field can be expressed as the sum of a linear operator \({\cal C}\) acting on the magnetization vector field plus the applied field: \[\mathbf{h}_{\rm eff}(\mathbf{r},t)=-{\cal C}\mathbf{m}+\mathbf{h}_{a}\,, \tag{6}\] where \({\cal C}=-l_{\rm ex}^{2}\nabla^{2}+{\cal N}+\kappa_{\rm an}\mathbf{e}_{\rm an}\otimes\mathbf{e}_{\rm an}\) and \({\cal N}\) is the (symmetric positive-definite) demagnetizing operator such that: \[\mathbf{h}_{m}(\mathbf{r})=\frac{1}{4\pi}\nabla\nabla\cdot\int_{\Omega}\frac{\mathbf{m}(\mathbf{r}^{\prime})}{|\mathbf{r}-\mathbf{r}^{\prime}|}\,dV^{\prime}=-{\cal N}\mathbf{m}\,. \tag{7}\] As mentioned in the previous section, eq. (3) is usually complemented with the natural boundary conditions \(\partial\mathbf{m}/\partial\mathbf{n}=0\) at the body surface \(\partial\Omega\), which is typical when no surface anisotropy is considered.
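As a concrete (and deliberately simplified) illustration of the structure of \({\cal C}\) and of the Neumann boundary conditions, the following sketch assembles a one-dimensional finite-difference counterpart of \(-l_{\rm ex}^{2}\nabla^{2}+\kappa_{\rm an}\,\mathbf{e}_{\rm an}\otimes\mathbf{e}_{\rm an}\); the magnetostatic operator \({\cal N}\) is omitted for brevity, and all names, values and the 1D restriction are our assumptions.

```python
import numpy as np

N, dx = 8, 0.1                     # number of cells and cell size (toy values)
l_ex, kappa = 1.0, 0.05            # exchange length and anisotropy constant
e_an = np.array([0.0, 0.0, 1.0])   # uniaxial easy axis (assumption)

# 1D Laplacian with homogeneous Neumann BCs (mirror ghost cells).
L = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1)) / dx**2
L[0, 0] = L[-1, -1] = -1.0 / dx**2     # enforces dm/dn = 0 at both ends

# Discrete C acting blockwise on the 3N-vector (m_1x, m_1y, m_1z, m_2x, ...).
C = (-l_ex**2 * np.kron(L, np.eye(3))
     + kappa * np.kron(np.eye(N), np.outer(e_an, e_an)))

def h_eff(m_flat, h_a_flat):
    """Effective field h_eff = -C m + h_a, cf. Eq. (6)."""
    return -C @ m_flat + h_a_flat

m = np.tile([0.0, 0.0, 1.0], N)        # uniform state along the easy axis
print(h_eff(m, np.zeros(3 * N))[:3])   # exchange contribution vanishes here
```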
Returning to the continuum problem: it can be shown that the operator \({\cal C}\), with the aforementioned boundary conditions, is self-adjoint and positive-definite in the appropriate subspace of square-integrable vector fields[23]. It is also worth remarking that, for eq.(3), equilibrium magnetization fields are characterized by simultaneously vanishing time-derivatives of first and second order: \[\frac{\partial\mathbf{m}}{\partial t}=\mathbf{0}\quad,\quad\frac{\partial^{2}\mathbf{m}}{\partial t^{2}}=\mathbf{0}\quad. \tag{8}\] Equation (3) describes a nonlinear dynamical system of higher order compared to that associated with the classical LLG equation (obtained by setting \(\xi=0\) in eq.(3)). In fact, let us define a new variable \(\mathbf{w}\) resembling, in a purely formal fashion, the 'angular momentum' of a point particle of unit mass, position vector \(\mathbf{m}\) and velocity \(\partial\mathbf{m}/\partial t\), such that \[\mathbf{w}=\mathbf{m}\times\frac{\partial\mathbf{m}}{\partial t}\,, \tag{9}\] so that one has: \[\frac{\partial\mathbf{w}}{\partial t}=\mathbf{m}\times\frac{\partial^{2}\mathbf{m}}{\partial t^{2}}\quad. \tag{10}\] First, by dot-multiplying both sides of eq.(3) by \(\mathbf{m}\), we observe that the magnetization vector evolves on the unit sphere \(|\mathbf{m}|^{2}=1\), since \[\mathbf{m}\cdot\frac{\partial\mathbf{m}}{\partial t}=0\,. \tag{11}\] Then, by cross-multiplying both sides of eq.(3) by \(\mathbf{m}\), one obtains: \[\mathbf{w}=-\mathbf{m}\times(\mathbf{m}\times\mathbf{h}_{\rm eff})+\alpha\mathbf{m}\times\mathbf{w}+\xi\mathbf{m}\times\frac{\partial\mathbf{w}}{\partial t}\quad. \tag{12}\] By performing a further cross-multiplication of both sides of the latter equation by \(\mathbf{m}\), one ends up with: \[\mathbf{m}\times\mathbf{w}=(\mathbf{m}\times\mathbf{h}_{\rm eff})-\alpha\mathbf{w}-\xi\frac{\partial\mathbf{w}}{\partial t}\quad, \tag{13}\] where the property \(\mathbf{m}\cdot\partial\mathbf{w}/\partial t=0\) has been used. Consequently, the iLLG eq.(3) can be rewritten as a set of two coupled nonlinear equations for the variables \(\mathbf{m}\) and \(\mathbf{w}\) as follows: \[\frac{\partial\mathbf{m}}{\partial t} = \mathbf{w}\times\mathbf{m}\quad, \tag{14}\] \[\xi\frac{\partial\mathbf{w}}{\partial t} = -\mathbf{m}\times\mathbf{w}-\alpha\mathbf{w}+\mathbf{m}\times\mathbf{h}_{\rm eff}\quad, \tag{15}\] where eq.(14) comes from eq.(9) cross-multiplied by \(\mathbf{m}\), combined with the fact that \(|\mathbf{m}|^{2}=1\), and eq.(15) from eq.(13). We point out that, as a consequence of eq.(8) and the definition of \(\mathbf{w}\) in eq.(9), equilibrium solutions of eqs.(14)-(15) are such that: \[\frac{\partial\mathbf{m}}{\partial t}=\mathbf{0}\quad,\quad\frac{\partial\mathbf{w}}{\partial t}=\mathbf{0}\quad. \tag{16}\] In this way, the implicit equation (3) has been transformed into a higher-order equation in standard explicit form, which is amenable to general considerations concerning the properties of the dynamical system that it describes. To this end, we now focus on the dynamical system expressed by eqs.(14)-(15), where the state variables \(\mathbf{m},\mathbf{w}\) are considered independent of each other, remembering that it is equivalent to the original iLLG eq.(3) when eq.(9) holds.
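For a quick feel of the state-space form (14)-(15), here is a minimal macrospin sketch (our illustration, with toy parameter values) of the right-hand side for the six-dimensional state \(y=(\mathbf{m},\mathbf{w})\), using an effective field of the simple form \(\mathbf{h}_{\rm eff}=-D\mathbf{m}+\mathbf{h}_{a}\) with diagonal effective demagnetizing factors \(D\) (anticipating eq.(31) below).

```python
import numpy as np

alpha, xi = 0.02, 0.01            # damping and inertia (typical orders)
D = np.array([0.1, 0.2, 1.0])     # toy effective demagnetizing factors
h_a = np.zeros(3)                 # constant applied field (assumption)

def rhs(t, y):
    """Right-hand side of eqs. (14)-(15) for a macrospin, y = (m, w)."""
    m, w = y[:3], y[3:]
    h_eff = -D * m + h_a
    dm = np.cross(w, m)                                           # Eq. (14)
    dw = (-np.cross(m, w) - alpha * w + np.cross(m, h_eff)) / xi  # Eq. (15)
    return np.concatenate([dm, dw])
```

This right-hand side is reused in the numerical checks of the conservation properties discussed below.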
First of all, by dot-multiplying eq.(14) by \(\mathbf{m}\), one can immediately see that the motion of the vector \(\mathbf{m}\) occurs on the unit sphere \(|\mathbf{m}|=1\): \[\mathbf{m}\cdot\frac{\partial\mathbf{m}}{\partial t}=0\quad\Rightarrow\quad|\mathbf{m}(\mathbf{r},t)|=1\quad\forall\mathbf{r}\in\Omega\,,\,t\geq t_{0}, \tag{17}\] provided that \(\mathbf{m}\) has unit amplitude at the initial time \(t_{0}\). The latter will be referred to as the magnetization amplitude conservation property. Now, let us sum eq.(14) dot-multiplied by \(\mathbf{w}\) and eq.(15) divided by \(\xi\) and dot-multiplied by \(\mathbf{m}\). One has: \[\mathbf{w}\cdot\frac{\partial\mathbf{m}}{\partial t}+\mathbf{m}\cdot\frac{\partial\mathbf{w}}{\partial t}=\frac{\partial(\mathbf{w}\cdot\mathbf{m})}{\partial t}=-\frac{\alpha}{\xi}\mathbf{w}\cdot\mathbf{m}\quad. \tag{18}\] This means that, at any spatial location \(\mathbf{r}\in\Omega\), the scalar product \(\mathbf{w}\cdot\mathbf{m}\), termed the 'angular momentum' projection on magnetization, must decay exponentially to zero as follows: \[\mathbf{w}(\mathbf{r},t)\cdot\mathbf{m}(\mathbf{r},t)=\mathbf{w}(\mathbf{r},t_{0})\cdot\mathbf{m}(\mathbf{r},t_{0})\,e^{-\frac{\alpha}{\xi}(t-t_{0})}\quad\forall\mathbf{r}\in\Omega\,,\,t\geq t_{0}, \tag{19}\] where the decay time constant is controlled by the ratio \(\xi/\alpha>0\) between the intensities of inertia and damping. Thus, for \(t\gg t_{0}\) (practically \(t>t_{0}+5\xi/\alpha\)), the 'angular momentum' variable \(\mathbf{w}\) is asymptotically constrained to evolve on the manifold defined by \(\mathbf{w}\cdot\mathbf{m}=0\). Interestingly, for zero damping \(\alpha=0\), the latter equation implies exact conservation of the product \(\mathbf{w}\cdot\mathbf{m}\) at any time: \[\mathbf{w}(\mathbf{r},t)\cdot\mathbf{m}(\mathbf{r},t)=\mathbf{w}(\mathbf{r},t_{0})\cdot\mathbf{m}(\mathbf{r},t_{0})\quad\forall\mathbf{r}\in\Omega\,,\,t\geq t_{0}. \tag{20}\] From equation (19) it is also worth noting that, for any value of \(\alpha\geq 0\) and an initially vanishing magnetization time-derivative \(\partial\mathbf{m}/\partial t(\mathbf{r},t_{0})=0\) at every location \(\mathbf{r}\in\Omega\), which implies \(\mathbf{w}(\mathbf{r},t_{0})=0\), the iLLG dynamics occurs such that the product \(\mathbf{w}\cdot\mathbf{m}\) is always zero: \[\mathbf{w}(\mathbf{r},t)\cdot\mathbf{m}(\mathbf{r},t)=\mathbf{w}(\mathbf{r},t_{0})\cdot\mathbf{m}(\mathbf{r},t_{0})=0\quad\forall\mathbf{r}\in\Omega\,,\,t\geq t_{0}. \tag{21}\] From the above discussion, given that the inertial magnetization dynamics must fulfill the two constraints (17),(19), one can conclude that, in general, the dynamical system obtained from the iLLG eq.(3) and expressed by eqs.(14)-(15) has, at each spatial location \(\mathbf{r}\in\Omega\), four independent state variables evolving on a four-dimensional state space. This means that the iLLG dynamics requires twice as many degrees of freedom as the classical LLG for its description. Furthermore, eq.(3) admits an additional conservation property.
In fact, by dot-multiplying eq.(15) by \(\mathbf{w}\) and integrating over the region \(\Omega\), one has: \[\frac{1}{V}\int_{\Omega}\frac{\xi}{2}\frac{\partial|\mathbf{w}|^{2}}{\partial t}\,dV =\frac{1}{V}\int_{\Omega}-\alpha|\mathbf{w}|^{2}+\mathbf{w}\cdot(\mathbf{m}\times\mathbf{h}_{\rm eff})\,dV\Leftrightarrow \tag{22}\] \[\frac{1}{V}\int_{\Omega}\frac{\xi}{2}\frac{\partial|\mathbf{w}|^{2}}{\partial t}\,dV =\frac{1}{V}\int_{\Omega}-\alpha|\mathbf{w}|^{2}+\mathbf{h}_{\rm eff}\cdot\frac{\partial\mathbf{m}}{\partial t}\,dV. \tag{23}\] By using the fact that \[\frac{dg}{dt}=\frac{1}{V}\int_{\Omega}\frac{\delta g}{\delta\mathbf{m}}\cdot\frac{\partial\mathbf{m}}{\partial t}+\frac{\delta g}{\delta\mathbf{h}_{a}}\cdot\frac{\partial\mathbf{h}_{a}}{\partial t}\,dV=\frac{1}{V}\int_{\Omega}-\mathbf{h}_{\rm eff}\cdot\frac{\partial\mathbf{m}}{\partial t}-\mathbf{m}\cdot\frac{\partial\mathbf{h}_{a}}{\partial t}\,dV\,, \tag{24}\] and remembering from (9) that \(|\mathbf{w}|=|\partial\mathbf{m}/\partial t|\), one obtains the following energy balance equation: \[\frac{d}{dt}\left(g+\frac{1}{V}\int_{\Omega}\frac{\xi}{2}\left|\mathbf{w}\right|^{2}dV\right) = -\frac{1}{V}\int_{\Omega}\mathbf{m}\cdot\frac{\partial\mathbf{h}_{a}}{\partial t}+\alpha\left|\mathbf{w}\right|^{2}\,dV\,\Leftrightarrow\] \[\frac{d}{dt}\left(g+\frac{1}{V}\int_{\Omega}\frac{\xi}{2}\left|\frac{\partial\mathbf{m}}{\partial t}\right|^{2}dV\right) = -\frac{1}{V}\int_{\Omega}\mathbf{m}\cdot\frac{\partial\mathbf{h}_{a}}{\partial t}+\alpha\left|\frac{\partial\mathbf{m}}{\partial t}\right|^{2}\,dV. \tag{25}\] The latter equation can be put in a more compact form by defining the following generalized free energy: \[\tilde{g}(\mathbf{m},\mathbf{w},\mathbf{h}_{a})=g(\mathbf{m},\mathbf{h}_{a})+\frac{1}{V}\int_{\Omega}\frac{\xi}{2}\left|\mathbf{w}\right|^{2}\,dV=g(\mathbf{m},\mathbf{h}_{a})+\frac{1}{V}\int_{\Omega}\frac{\xi}{2}\left|\frac{\partial\mathbf{m}}{\partial t}\right|^{2}\,dV\quad, \tag{26}\] where the second term, in the framework of the purely formal mechanical analogy introduced before, can be seen as a sort of 'kinetic' energy (see the last equality in eq.(26)) augmenting the classical micromagnetic free energy, interpreted as 'potential' energy. Thus, the balance equation (25) becomes \[\frac{d\tilde{g}}{dt}=-\frac{1}{V}\int_{\Omega}\mathbf{m}\cdot\frac{\partial\mathbf{h}_{a}}{\partial t}+\alpha\left|\frac{\partial\mathbf{m}}{\partial t}\right|^{2}\,dV\, \tag{27}\] where the first term on the right-hand side describes energy pumping via a time-varying external applied magnetic field and the second term takes into account the intrinsic dissipation in magnetic materials. It is apparent that, under the assumption of a constant-in-time (even spatially-inhomogeneous) applied field (\(\partial\mathbf{h}_{a}/\partial t=0\)), the generalized free energy \(\tilde{g}\) must be a decreasing function of time: \[\frac{d\tilde{g}}{dt}=-\frac{1}{V}\int_{\Omega}\alpha\left|\frac{\partial\mathbf{m}}{\partial t}\right|^{2}\,dV\leq 0\, \tag{28}\] which reveals a Lyapunov structure for the iLLG dynamics in terms of the generalized free energy \(\tilde{g}\), similar to what happens for the LLG dynamics in terms of the classical free energy \(g\). This means that, under the above assumptions, the only possible attractors of the dynamics are stable equilibria (i.e. such that \(\partial\mathbf{m}/\partial t=0,\partial\mathbf{w}/\partial t=0\) and \(\tilde{g}\) is a minimum).
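The balance equation (27) and the decay law (19) are easy to verify numerically on a macrospin. The following self-contained sketch (our illustration; parameter values are assumptions) integrates the \((\mathbf{m},\mathbf{w})\) system with a standard adaptive solver and checks that, for a constant applied field, \(\tilde{g}\) decreases monotonically while \(\mathbf{w}\cdot\mathbf{m}\) decays as \(e^{-(\alpha/\xi)(t-t_{0})}\).

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha, xi = 0.02, 0.01
D = np.array([0.1, 0.2, 1.0])          # toy effective demagnetizing factors
h_a = np.zeros(3)                      # constant applied field

def rhs(t, y):
    m, w = y[:3], y[3:]
    h_eff = -D * m + h_a
    return np.concatenate([np.cross(w, m),
        (-np.cross(m, w) - alpha * w + np.cross(m, h_eff)) / xi])

def g_tilde(y):                        # generalized free energy, Eq. (26)
    m, w = y[:3], y[3:]
    return 0.5 * np.dot(D * m, m) - np.dot(h_a, m) + 0.5 * xi * np.dot(w, w)

m0 = np.array([0.99, 0.0, np.sqrt(1.0 - 0.99**2)])
w0 = np.array([0.3, 0.5, 0.0])         # chosen so that w0 . m0 != 0
sol = solve_ivp(rhs, (0.0, 10.0), np.concatenate([m0, w0]),
                rtol=1e-10, atol=1e-12)
gt = np.array([g_tilde(sol.y[:, k]) for k in range(sol.y.shape[1])])
print("g_tilde monotonically decreasing:", bool(np.all(np.diff(gt) <= 1e-9)))
proj = np.einsum('ik,ik->k', sol.y[:3], sol.y[3:])
ref = proj[0] * np.exp(-alpha / xi * sol.t)
print("w.m follows exp(-alpha/xi*(t-t0)):", bool(np.allclose(proj, ref, atol=1e-6)))
```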
In addition, in the absence of dissipation (\(\alpha=0\)), one has the conservation property for the quantity \(\tilde{g}\): \[\frac{d\tilde{g}}{dt}=\frac{d}{dt}\left(g+\frac{1}{V}\int_{\Omega}\frac{\xi}{2}\left|\frac{\partial\mathbf{m}}{\partial t}\right|^{2}\,dV\right)=0\quad, \tag{29}\] which is analogous to the conservation of the 'total' (potential + 'kinetic') energy \(\tilde{g}\) in mechanical systems and here strikingly expresses the conservative nature of the (lossless) spin nutation dynamics. We remark that the balance equations (25),(27) could have been derived directly from eq.(3) by dot-multiplying both sides by the quantity in parentheses and integrating over \(\Omega\). Finally, we observe that, in the absence of dissipation (i.e. \(\alpha=0\)), eqs.(14)-(15) admit three integrals of motion: \[|\mathbf{m}(\mathbf{r},t)|=1\quad\forall\mathbf{r}\in\Omega\,,\,t\geq t_{0} \tag{30a}\] \[\mathbf{w}(\mathbf{r},t)\cdot\mathbf{m}(\mathbf{r},t)=\mathbf{w}(\mathbf{r},t_{0})\cdot\mathbf{m}(\mathbf{r},t_{0})\quad\forall\mathbf{r}\in\Omega\,,\,t\geq t_{0} \tag{30b}\] \[\tilde{g}=g+\frac{1}{V}\int_{\Omega}\frac{\xi}{2}\left|\frac{\partial\mathbf{m}}{\partial t}\right|^{2}\,dV=\tilde{g}_{0}\,,\,t\geq t_{0} \tag{30c}\] that we term amplitude, 'angular momentum' projection on magnetization and 'total' free energy conservation, respectively. The former two hold in a pointwise fashion, that is in any location and time instant (provided that they are fulfilled at the initial time \(t_{0}\)), while the last is an integral constraint on magnetization motion (we remark that \(\tilde{g}(t_{0})=\tilde{g}_{0}\) is the initial 'total' free energy). The above conservation laws hold for spatially-inhomogeneous magnetization processes, but one can also consider 'sufficiently small' particles where the exchange interaction strongly penalizes spatial magnetization gradients and, thus, approximately treat them as uniformly-magnetized (macrospin) anisotropic particles, which eliminates the dependence on the spatial location \(\mathbf{r}\) within the ferromagnet. This makes sense when dealing with magnetic nanosystems of dimensions in the order of the exchange length, such as those used as elementary cells for magnetic memories and other spintronic devices[1]. Under the assumption of spatially-uniform magnetization and anisotropy of uniaxial type, the free energy (5) has the simple expression[39]: \[g(\mathbf{m},\mathbf{h}_{a})=\frac{1}{2}D_{x}m_{x}^{2}+\frac{1}{2}D_{y}m_{y}^{2}+\frac{1}{2}D_{z}m_{z}^{2}-\mathbf{m}\cdot\mathbf{h}_{a}\quad, \tag{31}\] where \(D_{x},D_{y},D_{z}\) are effective demagnetizing factors taking into account shape and crystalline anisotropy. The aforementioned integrals of motion (30a)-(30c) become \[|\mathbf{m}(t)|=1\quad\forall t\geq t_{0}\,, \tag{32a}\] \[\mathbf{w}(t)\cdot\mathbf{m}(t)=\mathbf{w}(t_{0})\cdot\mathbf{m}(t_{0})\quad\forall t\geq t_{0}\,, \tag{32b}\] \[\tilde{g}=g+\frac{\xi}{2}\left|\mathbf{w}\right|^{2}=g+\frac{\xi}{2}\left|\frac{d\mathbf{m}}{dt}\right|^{2}=\tilde{g}_{0}\,,\,t\geq t_{0}\,, \tag{32c}\] with \(g\) given by eq.(31), and will be instrumental in the validation of the time-stepping techniques that we will discuss in the following sections.

## 3 Spatially semi-discretized iLLG equation

Now we proceed to the numerical discretization of the iLLG equation. In the following, we will refer to spatially semi-discretized equations on a collection of \(N\) mesh points \((\mathbf{r}_{j})_{j=1}^{N}\) associated with the related computational cells of volume \(V_{j}\).
This description is quite general and works both for finite-difference and finite-element methods. We will denote as \(\underline{\mathbf{m}}(t)=(\mathbf{m}_{1},\ldots,\mathbf{m}_{N})^{T}\), \(\underline{\mathbf{w}}(t)=(\mathbf{w}_{1},\ldots,\mathbf{w}_{N})^{T}\in\mathbb{R}^{3N}\) (the notation \({}^{T}\) means matrix transpose) the mesh vectors containing all cell vectors \(\mathbf{m}_{j}(t),\mathbf{w}_{j}(t)\in\mathbb{R}^{3}\) with \(j=1,\ldots,N\). Moreover, we will use the operator notation for the cross-product for both cell and mesh vectors, namely: \[\Lambda(\mathbf{v})\cdot\mathbf{w}=\mathbf{v}\times\mathbf{w}\,,\quad\underline{\Lambda}(\underline{\mathbf{v}})\cdot\underline{\mathbf{w}}=(\mathbf{v}_{1}\times\mathbf{w}_{1},\ldots,\mathbf{v}_{N}\times\mathbf{w}_{N})^{T}\,, \tag{33}\] meaning that the latter operator is a skew-symmetric \(3N\times 3N\) block-diagonal operator that provides the cross product of homologous cell vectors. Thus, the semi-discretized iLLG equation will read as: \[\frac{d\underline{\mathbf{m}}}{dt}=\underline{\Lambda}(\underline{\mathbf{w}})\cdot\underline{\mathbf{m}}\quad, \tag{34}\] \[\xi\frac{d\underline{\mathbf{w}}}{dt}=-\underline{\Lambda}(\underline{\mathbf{m}})\cdot\underline{\mathbf{w}}-\alpha\underline{\mathbf{w}}+\underline{\Lambda}(\underline{\mathbf{m}})\cdot\underline{\mathbf{h}}_{\rm eff}\quad, \tag{35}\] where the discrete effective field \(\underline{\mathbf{h}}_{\rm eff}\) is given by: \[\underline{\mathbf{h}}_{\rm eff}(\underline{\mathbf{m}},t)=-\frac{\partial\underline{g}}{\partial\underline{\mathbf{m}}}=-\underline{C}\cdot\underline{\mathbf{m}}(t)+\underline{\mathbf{h}}_{a}(t)\,, \tag{36}\] the symmetric positive-definite matrix \(\underline{C}\) plays the role of the effective field operator \(\mathcal{C}\) and \(\underline{g}(\underline{\mathbf{m}},\underline{\mathbf{h}}_{a})\) is the discrete counterpart of the free energy defined by eq. (5): \[\underline{g}(\underline{\mathbf{m}},\underline{\mathbf{h}}_{a})=\frac{1}{2}\underline{\mathbf{m}}^{T}\cdot\underline{C}\cdot\underline{\mathbf{m}}-\underline{\mathbf{h}}_{a}^{T}\cdot\underline{\mathbf{m}}\,. \tag{37}\] By using the same line of reasoning as for the continuous iLLG equations (14)-(15), one can derive the following conservation properties: \[|\mathbf{m}_{j}(t)|=|\mathbf{m}_{j}(t_{0})|\quad\forall t\geq t_{0},\quad j=1,\ldots,N\,, \tag{38}\] \[(\mathbf{w}_{j}(t)\cdot\mathbf{m}_{j}(t))=(\mathbf{w}_{j}(t_{0})\cdot\mathbf{m}_{j}(t_{0}))e^{-\frac{\alpha}{\xi}(t-t_{0})}\quad\forall t\geq t_{0},\quad j=1,\ldots,N\,, \tag{39}\] \[\frac{d}{dt}\left(\underline{g}(\underline{\mathbf{m}}(t),\underline{\mathbf{h}}_{a}(t))+\frac{\xi}{2}\left|\frac{d\underline{\mathbf{m}}}{dt}\right|^{2}\right)=\frac{d}{dt}\left(\underline{g}(\underline{\mathbf{m}}(t),\underline{\mathbf{h}}_{a}(t))+\frac{\xi}{2}\left|\underline{\mathbf{w}}\right|^{2}\right)=\frac{d\underline{\tilde{g}}}{dt}=-\alpha\left|\frac{d\underline{\mathbf{m}}}{dt}\right|^{2}-\underline{\mathbf{m}}\cdot\frac{d\underline{\mathbf{h}}_{a}}{dt}\,, \tag{40}\] where \(\underline{\tilde{g}}(\underline{\mathbf{m}},\underline{\mathbf{w}},\underline{\mathbf{h}}_{a})=\underline{g}(\underline{\mathbf{m}},\underline{\mathbf{h}}_{a})+\frac{\xi}{2}|\underline{\mathbf{w}}|^{2}\) is the discrete 'total' energy corresponding to \(\tilde{g}\) in the continuous iLLG dynamics (see eq.(26)).

## 4 Midpoint time-steppings for iLLG dynamics

The numerical solution of eqs.(34)-(35) with classical time-stepping techniques will in general corrupt the conservation properties (38)-(40) of the semi-discretized inertial magnetization dynamics. Thus, such properties will be fulfilled with an accuracy depending on the amplitude of the time-step \(\Delta t\).
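In code, the semi-discretized right-hand sides (34)-(35) reduce to cell-wise cross products. The following NumPy sketch is a minimal illustration, assuming the mesh vectors are stored as \((N,3)\) arrays and replacing, purely for simplicity, the full matrix \(\underline{C}\) (which would include exchange and magnetostatics) with a local diagonal operator.

```python
import numpy as np

N = 64                                         # number of computational cells (assumed)
rng = np.random.default_rng(0)
m = rng.normal(size=(N, 3))
m /= np.linalg.norm(m, axis=1, keepdims=True)  # unit-amplitude cells
w = np.zeros((N, 3))
h_a = np.tile([0.0, 0.0, 0.1], (N, 1))         # applied field on each cell
alpha, xi = 0.02, 0.03

def C_apply(m):
    # placeholder for the symmetric positive-definite matrix C; here only a
    # diagonal (anisotropy-like) contribution is kept for illustration
    return m * np.array([0.1, 0.2, 0.7])

def illg_rhs(m, w):
    h_eff = -C_apply(m) + h_a                  # eq. (36)
    dm = np.cross(w, m)                        # eq. (34): Lambda(w).m, cell-wise
    dw = (-np.cross(m, w) - alpha * w + np.cross(m, h_eff)) / xi   # eq. (35)
    return dm, dw

dm, dw = illg_rhs(m, w)
print(dm.shape, dw.shape)                      # (64, 3) (64, 3)
```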
For the classical purely precessional LLG equation, it has been shown[38] that the implicit midpoint rule technique preserves the properties of discrete magnetization dynamics regardless of the time-step. Here we propose two schemes based on this technique for the iLLG spin nutation dynamics. ### Implicit midpoint rule (IMR) The first is based on the discretization of eqs.(34)-(35) at time \(t^{n+\frac{1}{2}}=t^{n}+\Delta t/2\) with the following second-order midpoint formulas: \[\underline{\mathbf{m}}^{n+\frac{1}{2}}=\frac{\underline{\mathbf{m}}^{n+1}+\underline{\mathbf{m}}^{n}}{2}\quad\underline{\mathbf{w}}^{n+\frac{1}{2}}=\frac{\underline{\mathbf{w}}^{n+1}+\underline{\mathbf{w}}^{n}}{2}\,, \tag{41}\] where \(\underline{\mathbf{m}}^{n},\underline{\mathbf{w}}^{n}\) denote \(\underline{\mathbf{m}}(t^{n}),\underline{\mathbf{w}}(t^{n})\), which leads to the following time-stepping for the \(j\)-th computational cell: \[\frac{\mathbf{m}_{j}{}^{n+1}-\mathbf{m}_{j}{}^{n}}{\Delta t}=\mathbf{w}_{j}^{n+\frac{1}{2}}\times\mathbf{m}_{j}^{n+\frac{1}{2}}\quad, \tag{42}\] \[\xi\frac{\mathbf{w}_{j}^{n+1}-\mathbf{w}_{j}^{n}}{\Delta t}=-\mathbf{m}_{j}^{n+\frac{1}{2}}\times\mathbf{w}_{j}^{n+\frac{1}{2}}-\alpha\mathbf{w}_{j}^{n+\frac{1}{2}}+\mathbf{m}_{j}^{n+\frac{1}{2}}\times\mathbf{h}_{\rm eff,j}(\underline{\mathbf{m}}^{n+\frac{1}{2}},t^{n+\frac{1}{2}})\quad,\quad j=1,\ldots,N\,. \tag{43}\] Now, by dot-multiplying the first equation by \(\mathbf{m}_{j}^{n+1/2}\), one can easily see that \[|\mathbf{m}_{j}^{n+1}|^{2}-|\mathbf{m}_{j}^{n}|^{2}=0\,,\quad j=1,\ldots,N\,, \tag{44}\] meaning that the magnetization amplitude will be preserved unconditionally, namely independently of \(\Delta t\), in each computational cell. In addition, by dot-multiplying eq.(42) by \(\mathbf{w}_{j}^{n+\frac{1}{2}}\) and eq.(43), divided by \(\xi\), by \(\mathbf{m}_{j}^{n+\frac{1}{2}}\), and summing both sides of the result, one immediately ends up with: \[\frac{\mathbf{w}_{j}^{n+1}+\mathbf{w}_{j}^{n}}{2}\cdot\frac{\mathbf{m}_{j}^{n+1}-\mathbf{m}_{j}^{n}}{\Delta t}+\frac{\mathbf{m}_{j}^{n+1}+\mathbf{m}_{j}^{n}}{2}\cdot\frac{\mathbf{w}_{j}^{n+1}-\mathbf{w}_{j}^{n}}{\Delta t}=-\frac{\alpha}{\xi}\mathbf{w}_{j}^{n+\frac{1}{2}}\cdot\mathbf{m}_{j}^{n+\frac{1}{2}}\,,\quad j=1,\ldots,N\,, \tag{45}\] which expresses the reproduction of the property (39) in its mid-point time discretized version: \[\frac{\mathbf{w}_{j}^{n+1}\cdot\mathbf{m}_{j}^{n+1}-\mathbf{w}_{j}^{n}\cdot\mathbf{m}_{j}^{n}}{\Delta t}=-\frac{\alpha}{\xi}\mathbf{w}_{j}^{n+\frac{1}{2}}\cdot\mathbf{m}_{j}^{n+\frac{1}{2}}\,,\quad j=1,\ldots,N\,. \tag{46}\] Remarkably enough, in the conservative case \(\alpha=0\), the latter equation becomes: \[\frac{\mathbf{w}_{j}^{n+1}\cdot\mathbf{m}_{j}^{n+1}-\mathbf{w}_{j}^{n}\cdot\mathbf{m}_{j}^{n}}{\Delta t}=0\quad\Rightarrow\quad\mathbf{w}_{j}^{n+1}\cdot\mathbf{m}_{j}^{n+1}=\mathbf{w}_{j}^{n}\cdot\mathbf{m}_{j}^{n}\,,\quad j=1,\ldots,N \tag{47}\] which means that the 'angular momentum' projection conservation property is fulfilled for any choice of the time step \(\Delta t\).
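A minimal single-cell prototype of the IMR step (42)-(43) can be written as follows; here the implicit system is solved by plain fixed-point iteration, which converges for small enough \(\Delta t\) (the paper's actual solver is the Newton-Raphson iteration discussed next), and the diagonal effective-field operator and all parameter values are assumptions for illustration.

```python
import numpy as np

def imr_step(m, w, dt, D, h_a, alpha, xi, tol=1e-13, itmax=200):
    # One step of eqs. (42)-(43) for a single cell, solved by fixed-point iteration
    m1, w1 = m, w
    for _ in range(itmax):
        mh, wh = 0.5 * (m + m1), 0.5 * (w + w1)
        h_eff = -D * mh + h_a                          # assumed diagonal operator
        m_new = m + dt * np.cross(wh, mh)              # eq. (42)
        w_new = w + dt / xi * (-np.cross(mh, wh) - alpha * wh
                               + np.cross(mh, h_eff))  # eq. (43)
        if max(abs(m_new - m1).max(), abs(w_new - w1).max()) < tol:
            return m_new, w_new
        m1, w1 = m_new, w_new
    return m1, w1

D, h_a = np.array([0.1, 0.2, 0.7]), np.array([0.0, 0.0, 0.1])
m, w = np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.3, 0.1])
wm0 = w @ m
for _ in range(2000):                                  # alpha = 0: conservative case
    m, w = imr_step(m, w, dt=0.01, D=D, h_a=h_a, alpha=0.0, xi=0.03)
print("amplitude drift:", np.linalg.norm(m) - 1.0)     # tolerance level, eq. (44)
print("w.m drift      :", w @ m - wm0)                 # tolerance level, eq. (47)
```

The printed drifts remain at the level of the iteration tolerance regardless of the number of steps, in agreement with eqs.(44) and (47).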
Now let us consider the midpoint rule time-stepping for the mesh vectors: \[\frac{\underline{\mathbf{m}}^{n+1}-\underline{\mathbf{m}}^{n}}{\Delta t} =\underline{\Lambda}(\underline{\mathbf{w}}^{n+\frac{1}{2}})\cdot \underline{\mathbf{m}}^{n+\frac{1}{2}}\quad, \tag{48}\] \[\xi\frac{\underline{\mathbf{w}}^{n+1}-\underline{\mathbf{w}}^{n}}{\Delta t} =-\underline{\Lambda}(\underline{\mathbf{m}}^{n+\frac{1}{2}})\cdot \underline{\mathbf{w}}^{n+\frac{1}{2}}-\alpha\underline{\mathbf{w}}^{n+\frac{1}{2}}+ \underline{\Lambda}(\underline{\mathbf{m}}^{n+\frac{1}{2}})\cdot\underline{\mathbf{ h}}_{\rm eff}(\underline{\mathbf{m}}^{n+\frac{1}{2}},t^{n+\frac{1}{2}})\quad. \tag{49}\] By assuming constant applied field, dot-multiplying the second equation by \(\underline{\mathbf{w}}^{n+1/2}\) and taking into account eq.(36) and the symmetry of the matrix \(\underline{C}\), one obtains the following discretized energy balance: \[\frac{\xi}{2}\frac{|\underline{\mathbf{w}}^{n+1}|^{2}-|\underline{\mathbf{ w}}^{n}|^{2}}{\Delta t} =-\alpha|\underline{\mathbf{w}}^{n+\frac{1}{2}}|^{2}-\frac{g^{n+1}-g^ {n}}{\Delta t}\Leftrightarrow \tag{50}\] \[\frac{\tilde{\underline{g}}^{n+1}-\tilde{\underline{g}}^{n}}{ \Delta t} =-\alpha|\underline{\mathbf{w}}^{n+\frac{1}{2}}|^{2}\quad,\] where \(\tilde{\underline{g}}^{n}=\tilde{\underline{g}}(\underline{\mathbf{m}}^{n})\), which implies that the total (discrete) energy \(\tilde{\underline{g}}\) must be either decreasing when \(\alpha>0\) or being conserved when \(\alpha=0\), both regardless of the time-step. Equations (42)-(43) represent a nonlinear system of \(6N\) coupled scalar equations, which must be solved at each time step. They can be regarded as the following two vector equations in \(\underline{\mathbf{u}}=\underline{\mathbf{m}}^{n+1},\underline{\mathbf{v}}=\underline{ \mathbf{w}}^{n+1}\): \[\mathbf{F}(\underline{\mathbf{u}},\underline{\mathbf{v}})=\mathbf{0}\quad,\quad\mathbf{G}( \underline{\mathbf{u}},\underline{\mathbf{v}})=\mathbf{0}\,, \tag{51}\] and can be solved by using Newton-Raphson iteration: \[\left(\begin{array}{c}\underline{\mathbf{u}}_{k+1}\\ \underline{\mathbf{v}}_{k+1}\end{array}\right)=\left(\begin{array}{c}\underline {\mathbf{u}}_{k}\\ \underline{\mathbf{v}}_{k}\end{array}\right)-\left(\begin{array}{cc}\frac{ \partial\mathbf{F}}{\partial\underline{\mathbf{u}}}&\frac{\partial\mathbf{F}}{\partial \underline{\mathbf{v}}}\\ \frac{\partial\mathbf{G}}{\partial\underline{\mathbf{u}}}&\frac{\partial\mathbf{G}}{ \partial\underline{\mathbf{v}}}\end{array}\right)^{-1}\cdot\left(\begin{array}{c }\mathbf{F}(\underline{\mathbf{u}}_{k})\\ \mathbf{G}(\underline{\mathbf{v}}_{k})\end{array}\right)\,, \tag{52}\] where the partial Jacobian matrices are given by: \[\frac{\partial\mathbf{F}}{\partial\underline{\mathbf{u}}} =\frac{\underline{I}}{\Delta t}-\frac{1}{4}\underline{\Lambda}( \underline{\mathbf{v}}+\underline{\mathbf{w}}^{n})\,, \tag{53}\] \[\frac{\partial\mathbf{F}}{\partial\underline{\mathbf{v}}} =\frac{1}{4}\underline{\Lambda}(\underline{\mathbf{u}}+\underline{\bm {m}}^{n})\,,\] (54) \[\frac{\partial\mathbf{G}}{\partial\underline{\mathbf{u}}} =-\frac{1}{4}\underline{\Lambda}(\underline{\mathbf{v}}+\underline{ \mathbf{w}}^{n})+\frac{1}{2}\underline{\Lambda}(\underline{\mathbf{u}}+\underline{\bm {m}}^{n})\cdot\underline{C}-\underline{\Lambda}\left[\underline{\mathbf{h}}_{\rm eff }\left(\frac{\underline{\mathbf{u}}+\underline{\mathbf{m}}^{n}}{2}\right)\right]\,,\] (55) \[\frac{\partial\mathbf{G}}{\partial\underline{\mathbf{v}}} =\frac{\xi}{\Delta 
t}\,\underline{I}+\frac{1}{4}\underline{\Lambda}(\underline{\mathbf{u}}+\underline{\mathbf{m}}^{n})+\frac{\alpha}{2}\,\underline{I}\,, \tag{56}\] and the linear operator notation \(\underline{\Lambda}\) has been used for the cross product involving mesh vectors. The above time-stepping has remarkable qualitative properties that reproduce those of the continuous iLLG dynamics and, therefore, represents the preferred choice to realize inertial micromagnetic numerical codes for the analysis of terahertz magnetization dynamics. However, it evidently requires doubling the state variables and, consequently, the number of unknowns, owing to the introduction of the vector field \(\mathbf{w}\), although one is mainly interested in computing the dynamics of the magnetization vector field \(\mathbf{m}\). This issue becomes even more pronounced when large-scale micromagnetic simulations with full spatial discretization are considered, which would require dramatic modification of numerical codes in order to introduce the auxiliary variable \(\mathbf{w}\) and to solve a system of \(6N\) nonlinear coupled equations at each time-step. Moreover, we remark that in the latter situation, the \(3N\times 3N\) matrix \(\underline{C}\) involved in the Newton iteration (see eq.(55)) is also fully-populated owing to the long-range nature of magnetostatic interactions. Therefore, following what is done for the classical LLG equation[38], a quasi-Newton technique is required to solve the large nonlinear system, implemented by considering a reasonable and sparse approximation of the matrix \(\underline{C}\) (e.g. obtained retaining only the exchange and anisotropy contributions \(\underline{C}_{\rm ex}\) and \(\underline{C}_{\rm an}\), respectively) and, in turn, of the full Jacobian defined by eqs.(53)-(56). Of course, the computational cost of such a quasi-Newton method, involving the solution of several non-symmetric large linear systems (e.g. by using GMRES methods[40]), will be at least double with respect to that associated with the LLG equation, posing a strong limit to the capability of solving large-scale iLLG dynamics. ### Implicit midpoint with multi-step inertial term (IMR-MS) For these reasons, in order to obtain an alternative efficient numerical technique with minimum modification of existing micromagnetic codes, we propose a second time-stepping based on the implicit midpoint rule combined with a multi-step method for the inertial term. This technique is based on direct space-time discretization of eq.(3) at time \(t^{n+\frac{1}{2}}=t^{n}+\Delta t/2\): \[\frac{\underline{\mathbf{m}}^{n+1}-\underline{\mathbf{m}}^{n}}{\Delta t}=-\underline{\Lambda}(\underline{\mathbf{m}}^{n+\frac{1}{2}})\cdot\left(\underline{\mathbf{h}}_{\rm eff}\left(\underline{\mathbf{m}}^{n+\frac{1}{2}},t^{n+\frac{1}{2}}\right)-\alpha\frac{\underline{\mathbf{m}}^{n+1}-\underline{\mathbf{m}}^{n}}{\Delta t}-\xi\frac{d^{2}\underline{\mathbf{m}}}{dt^{2}}\Big{|}_{n+\frac{1}{2}}\right)\quad.
\tag{57}\] Then, in order to retain the amplitude conservation property (44) of the aforementioned IMR scheme, we use the first of the midpoint formulas (41) in eq.(57) in a way that it is rewritten as a system of \(3N\) nonlinear equations in the unknowns \(\underline{\mathbf{m}}^{n+1}\): \[\mathbf{F}^{n}(\underline{\mathbf{m}}^{n+1})=\mathbf{0}\quad, \tag{58}\] where \(\mathbf{F}^{n}(\underline{\mathbf{y}}):\mathbb{R}^{3N}\to\mathbb{R}^{3N}\) is the following vector function: \[\mathbf{F}^{n}(\underline{\mathbf{y}})=\left[\underline{I}-\alpha\underline{\Lambda}\left(\frac{\underline{\mathbf{y}}+\underline{\mathbf{m}}^{n}}{2}\right)\right]\cdot\left(\underline{\mathbf{y}}-\underline{\mathbf{m}}^{n}\right)-\Delta t\ \mathbf{f}^{n}\left(\frac{\underline{\mathbf{y}}+\underline{\mathbf{m}}^{n}}{2}\right)-\Delta t\,\xi\,\underline{\Lambda}\left(\frac{\underline{\mathbf{y}}+\underline{\mathbf{m}}^{n}}{2}\right)\cdot\frac{d^{2}\underline{\mathbf{m}}}{dt^{2}}\Big{|}_{n+\frac{1}{2}}\,, \tag{59}\] and where \[\mathbf{f}^{n}(\underline{\mathbf{m}})=-\underline{\Lambda}(\underline{\mathbf{m}})\cdot\underline{\mathbf{h}}_{\rm eff}\left(\underline{\mathbf{m}},t^{n}+\frac{\Delta t}{2}\right)=\underline{\Lambda}(\underline{\mathbf{m}})\cdot\frac{\partial\underline{g}}{\partial\underline{\mathbf{m}}}\left(\underline{\mathbf{m}},\underline{\mathbf{h}}_{a}\left(t^{n}+\frac{\Delta t}{2}\right)\right) \tag{60}\] is the purely precessional term in the right-hand side of the conservative semi-discretized iLLG equation (35) expressed by using the definition (36) of the discrete effective field. Then, we adopt a multi-step approach with a \(p\)-point (\(p\geq 3\)) backward formula for the second derivative in the inertial term appearing in eqs.(57) and (59): \[\frac{d^{2}\underline{\mathbf{m}}}{dt^{2}}\,\Big{|}_{n+\frac{1}{2}}\,\approx\frac{1}{\Delta t^{2}}\sum_{k=1}^{p}a_{2-k}\underline{\mathbf{m}}^{n+2-k}=\Delta_{p}^{2}\,, \tag{61}\] where the coefficients \(a_{2-k}\) are determined from truncation error analysis in order to control the accuracy of the approximation. This technique implies a slight modification of existing numerical codes based on implicit midpoint rule time-stepping. In fact, once formula (61) is plugged into the time-stepping equation (57), the solution of the nonlinear coupled equations (58) can be obtained by using the Newton-Raphson technique[38] as follows: \[\underline{\mathbf{y}}_{0}=\underline{\mathbf{m}}^{n}\,,\ \ \underline{\mathbf{y}}_{k+1}=\underline{\mathbf{y}}_{k}+\Delta\underline{\mathbf{y}}_{k+1}\quad\text{with}\quad J_{F}^{n}(\underline{\mathbf{y}}_{k},t^{n})\Delta\underline{\mathbf{y}}_{k+1}=-\mathbf{F}^{n}(\underline{\mathbf{y}}_{k})\,, \tag{62}\] by simply considering the following augmented Jacobian matrix of the iteration: \[J_{F}(\underline{\mathbf{u}},t)=\frac{\underline{I}}{\Delta t}+\frac{\alpha}{\Delta t}\underline{\Lambda}(\underline{\mathbf{m}}^{n})+\frac{\xi}{2\Delta t^{2}}a_{1}\underline{\Lambda}(\underline{\mathbf{u}})+\frac{\xi}{2\Delta t^{2}}\underline{\Lambda}\left(\sum_{k=2}^{p}a_{2-k}\underline{\mathbf{m}}^{n+2-k}\right)-\frac{\xi}{2\Delta t^{2}}\underline{\Lambda}(\underline{\mathbf{u}}+\underline{\mathbf{m}}^{n})a_{1}-\frac{1}{2}J_{f}\left(\frac{\underline{\mathbf{u}}+\underline{\mathbf{m}}^{n}}{2},t+\frac{\Delta t}{2}\right) \tag{63}\] where \(J_{f}(\underline{\mathbf{u}},t)=\underline{\Lambda}(\underline{\mathbf{u}})\cdot\underline{C}+\underline{\Lambda}[\underline{\mathbf{h}}_{\text{eff}}(\underline{\mathbf{u}},t)]\).
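For prototyping purposes, the Newton iteration (62) can be set up without assembling the analytic Jacobian (63): the generic sketch below solves \(\mathbf{F}(\underline{\mathbf{y}})=\mathbf{0}\) with a finite-difference Jacobian and dense linear algebra. This is only viable for small \(N\); the quasi-Newton strategy with a sparse approximation of \(\underline{C}\) and GMRES described in the text is what scales to realistic meshes.

```python
import numpy as np

def newton_solve(F, y0, tol=1e-12, itmax=50, eps=1e-7):
    """Solve F(y) = 0 by Newton iteration with a finite-difference Jacobian
    (a generic sketch of iteration (62); not the paper's production solver)."""
    y = y0.copy()
    for _ in range(itmax):
        Fy = F(y)
        if np.linalg.norm(Fy) < tol:
            break
        n = y.size
        J = np.empty((n, n))
        for j in range(n):              # column-wise finite-difference Jacobian
            dy = np.zeros(n)
            dy[j] = eps
            J[:, j] = (F(y + dy) - Fy) / eps
        y = y - np.linalg.solve(J, Fy)  # dense solve; GMRES would be used at scale
    return y

# toy usage: solve y^3 - 2 = 0 component-wise (both roots converge to 2**(1/3))
print(newton_solve(lambda y: y**3 - 2.0, np.array([1.0, 1.5])))
```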
The linear system in eq.(62) is solved at each iteration \(k\) by considering the sparse approximation of the full matrix \(\underline{C}\) as \(\underline{C}\approx\underline{C}_{\text{ex}}+\underline{C}_{\text{an}}\) in the Jacobian \(J_{f}\) and using the GMRES method[40] until the residual \(\|\mathbf{F}^{n}(\underline{\mathbf{y}}_{k})\|\) decreases below a prescribed tolerance. Now, if one assumes that the magnetization initially has zero velocity \(d\underline{\mathbf{m}}/dt(t=0)=\mathbf{0}\), which is reasonable in the simulation of experimental situations, then at the first time step \(n=1\) one can set \(\underline{\mathbf{m}}^{n+2-k}=\underline{\mathbf{m}}^{0}\) for \(k>2\). For the subsequent steps \(n>1\), one can use the magnetization samples from the previous steps as \(\underline{\mathbf{m}}^{n+2-k},k>2\) in eq.(57). The only cost of this operation is the storage of \(p-2\) magnetization vectors. In this respect, the simplest choice is the classical \(p=3\) point formula: \[\frac{d^{2}\underline{\mathbf{m}}}{dt^{2}}\,\Big{|}_{n+\frac{1}{2}}\,\approx\frac{\underline{\mathbf{m}}^{n+1}+\underline{\mathbf{m}}^{n-1}-2\underline{\mathbf{m}}^{n}}{\Delta t^{2}}=\Delta_{p=3}^{2}\,, \tag{64}\] which, plugged into eq.(57), defines the IMR-MS1 scheme. However, an analysis of the truncation error reveals that: \[\Delta_{p=3}^{2}=\frac{d^{2}\underline{\mathbf{m}}}{dt^{2}}\,\Big{|}_{n+\frac{1}{2}}\,-\frac{1}{2}\frac{d^{3}\underline{\mathbf{m}}}{dt^{3}}\,\Big{|}_{n+\frac{1}{2}}\,\Delta t+\frac{5}{24}\frac{d^{4}\underline{\mathbf{m}}}{dt^{4}}\,\Big{|}_{n+\frac{1}{2}}\,\Delta t^{2}+\ldots\,, \tag{65}\] meaning that the accuracy is just of first order \(\mathcal{O}(\Delta t)\) (it would be \(\mathcal{O}(\Delta t^{2})\) if the second derivative were computed at \(t=t_{n}\)). Thus, in order to be consistent with the discretization of the other terms in eq.(57) at second order with respect to \(\Delta t\), one can derive a more accurate formula using \(p=4\) points. To this end, we compute a different second derivative formula: \[\frac{d^{2}\underline{\mathbf{m}}}{dt^{2}}\,\Big{|}_{n+\frac{1}{2}}\,\approx\frac{2\underline{\mathbf{m}}^{n+1}-3\underline{\mathbf{m}}^{n}+\underline{\mathbf{m}}^{n-2}}{3\Delta t^{2}}=\tilde{\Delta}_{p=3}^{2}\,, \tag{66}\] which has a truncation error such that: \[\tilde{\Delta}_{p=3}^{2}=\frac{d^{2}\underline{\mathbf{m}}}{dt^{2}}\,\Big{|}_{n+\frac{1}{2}}\,-\frac{5}{6}\frac{d^{3}\underline{\mathbf{m}}}{dt^{3}}\,\Big{|}_{n+\frac{1}{2}}\,\Delta t+\frac{13}{24}\frac{d^{4}\underline{\mathbf{m}}}{dt^{4}}\,\Big{|}_{n+\frac{1}{2}}\,\Delta t^{2}+\ldots\,. \tag{67}\] Now we use Richardson extrapolation[41] to cancel the \(\mathcal{O}(\Delta t)\) terms in the truncation error and define the following new difference formula (defining the IMR-MS2 scheme): \[\Delta_{p=4}^{2}=\frac{5}{2}\Delta_{p=3}^{2}-\frac{3}{2}\tilde{\Delta}_{p=3}^{2}=\frac{3\underline{\mathbf{m}}^{n+1}-7\underline{\mathbf{m}}^{n}+5\underline{\mathbf{m}}^{n-1}-\underline{\mathbf{m}}^{n-2}}{2\Delta t^{2}}\,, \tag{68}\] for which the truncation error is \(\mathcal{O}(\Delta t^{2})\): \[\Delta_{p=4}^{2}=\frac{d^{2}\underline{\mathbf{m}}}{dt^{2}}\,\Big{|}_{n+\frac{1}{2}}\,-\,\frac{7}{24}\frac{d^{4}\underline{\mathbf{m}}}{dt^{4}}\,\Big{|}_{n+\frac{1}{2}}\,\Delta t^{2}+\ldots\,. \tag{69}\] The coefficients for the above multi-step formulas with \(p=3,4\) are summarized in table 1.
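The difference formulas (64), (66) and (68) and their truncation orders are easy to verify numerically. The short sketch below applies them to a smooth scalar test signal (chosen arbitrarily for illustration; in the scheme they act component-wise on the mesh vectors) and prints errors decreasing as \(\mathcal{O}(\Delta t)\) for the 3-point formula and \(\mathcal{O}(\Delta t^{2})\) for the Richardson-extrapolated one.

```python
import numpy as np

def d2_p3(m, n, dt):   # eq. (64): 3-point formula, O(dt) at t_{n+1/2}
    return (m[n + 1] - 2 * m[n] + m[n - 1]) / dt**2

def d2_p4(m, n, dt):   # eq. (68): 4-point Richardson-extrapolated formula, O(dt^2)
    return (3 * m[n + 1] - 7 * m[n] + 5 * m[n - 1] - m[n - 2]) / (2 * dt**2)

for dt in (1e-2, 1e-3):
    t = np.arange(-2, 3) * dt      # samples m^{n-2}, ..., m^{n+2}
    m = np.exp(t)                  # smooth test signal with all derivatives O(1)
    n = 2                          # so that t_n = 0 and t_{n+1/2} = dt/2
    exact = np.exp(dt / 2)         # m''(t_{n+1/2}) for m = exp
    print(dt, abs(d2_p3(m, n, dt) - exact), abs(d2_p4(m, n, dt) - exact))
```

Halving \(\Delta t\) by a factor of 10 reduces the first error by one decade and the second by two, consistent with eqs.(65) and (69).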
In order to assess the order of accuracy of the proposed schemes, we consider the conservative iLLG dynamics (\(\alpha=0\)) and numerically integrate eq.(3) using the IMR, IMR-MS1, IMR-MS2 time-steppings for different time steps \(\Delta t\), and compare the results with a benchmark reference solution obtained by using the standard adaptive time step Dormand-Prince Runge-Kutta (RK45) scheme[42; 43]. Absolute tolerances are set to \(10^{-14}\) both for RK45 and for the Newton iterations solving eq.(57) with IMR, IMR-MS1, IMR-MS2. In the left panel of figure 1, we report the global truncation error \(||\Delta\mathbf{m}||\) arising from time-integration of iLLG eq.(3) in the interval [0,1] between the proposed schemes and the reference RK45 solution.

\begin{table} \begin{tabular}{|c|c|c|c|} \hline \(p\) & order & Coefficients & scheme \\ \hline 3 & \(\mathcal{O}(\Delta t)\) & \(a_{1}=1,a_{0}=-2,a_{-1}=1\) & IMR-MS1 \\ 4 & \(\mathcal{O}(\Delta t^{2})\) & \(a_{1}=3/2,a_{0}=-7/2,a_{-1}=5/2,a_{-2}=-1/2\) & IMR-MS2 \\ \hline \end{tabular} \end{table} Table 1: Coefficients for the multi-step formula (61).

A quick inspection of the figure confirms the expected first and second orders of accuracy for IMR-MS1 and IMR, IMR-MS2, respectively. Remarkably, IMR-MS2 has performance quite similar to the fully-implicit IMR without doubling the number of degrees of freedom. On the other hand, in the right panel of fig.1, one can look at the conservation properties for the proposed schemes (with \(\Delta t=0.001\)) and the reference solution. As expected, all three IMR schemes are able to preserve the amplitude \(|\mathbf{m}|\) and the 'angular momentum' projection on magnetization \(\mathbf{w}\cdot\mathbf{m}\) with (double) machine precision, while only the fully-implicit IMR is able to do so for the 'total' energy \(\tilde{g}\). Nevertheless, it is apparent (middle panel, blue and cyan solid lines) that IMR-MS2 is able to guarantee the same precision as the RK45 concerning energy conservation while being of significantly lower order than RK45. For the evaluation of the variable \(\mathbf{w}\) and energy \(\tilde{g}\) when considering IMR-MS schemes, we have used the central difference formula \(\mathbf{w}^{n}=\mathbf{m}^{n}\times(\mathbf{m}^{n+1}-\mathbf{m}^{n-1})/(2\Delta t)\).

Figure 1: Accuracy tests on conservative (\(\alpha=0\)) iLLG dynamics. The values of the parameters are \(D_{x}=0.1,D_{y}=0.2,D_{z}=0.7,\mathbf{h}_{a}=(0,0,0.1),\mathbf{m}(t=0)=(1,0,0),\xi=0.03\). (left) Global error \(||\Delta\mathbf{m}||\) at \(t=1\) between IMR, IMR-MS1, IMR-MS2 schemes and the reference RK45 solution showing first-order \(O(\Delta t)\) behavior for IMR-MS1 and second-order \(O(\Delta t^{2})\) for IMR and IMR-MS2. (right) Conservation properties of iLLG dynamics versus time (time step \(\Delta t=0.001\) for all IMR schemes). The top panel refers to amplitude \(1-|\mathbf{m}|\) conservation, the middle panel to the relative error \(\Delta\tilde{g}/\tilde{g}=(\tilde{g}(t)-\tilde{g}(0))/\tilde{g}(0)\) in 'total' free energy conservation, the bottom panel to the error \(\Delta(\mathbf{w}\cdot\mathbf{m})=\mathbf{w}(t)\cdot\mathbf{m}(t)-\mathbf{w}(0)\cdot\mathbf{m}(0)\) in 'angular momentum' projection on magnetization conservation. One can see that all IMR, IMR-MS1, IMR-MS2 schemes preserve the amplitude \(|\mathbf{m}|\) and the projection \(\mathbf{w}\cdot\mathbf{m}\) with (double-precision) machine accuracy and only IMR also preserves energy. IMR-MS2 outperforms IMR-MS1 and RK45 in energy conservation.

## 5 Numerical results

In order to validate the proposed techniques on physically relevant situations, we perform two different simulations. The first describes the ultra-fast resonant spin nutation of a uniformly-magnetized thin-film driven by an ac terahertz applied field, similar to the experiment[11] that provided direct evidence of the presence of inertial effects. This will also be a basic testbed to compare the accuracy of the developed IMR and IMR-MS schemes. The second simulation will address the ultra-fast spatially-inhomogeneous dynamics of magnetization in a microscale nanodot excited with a terahertz applied field and will demonstrate the efficiency of the IMR-MS time-stepping in full micromagnetic simulations.

### Nutation frequency response of a single-domain particle

We analyze the frequency response of a thin-film magnetized along the easy \(y\) direction and subject to an out-of-plane ac field with small amplitude. In this situation, one can assume that the macrospin iLLG dynamics occurs in the linear regime and analytical theory can be developed. To this end, let us assume that the applied field is decomposed in a nonzero constant bias field plus a time-harmonic component \(\mathbf{h}_{a}(t)=\mathbf{h}_{\rm dc}+\mathbf{h}_{\rm ac}(t)\), \(|\mathbf{h}_{\rm ac}|\ll|\mathbf{h}_{\rm dc}|\). We also assume that the free energy has the simple form (31) under the macrospin approximation. Then, the iLLG eq.(3) can be linearized around an equilibrium \(\mathbf{m}_{0}\) such that \(\mathbf{m}(t)=\mathbf{m}_{0}+\Delta\mathbf{m}(t)\) in the following way: \[\frac{d\Delta\mathbf{m}}{dt}=\mathbf{m}_{0}\times\left[(D+h_{0}\mathcal{I})\cdot\Delta\mathbf{m}-\alpha\frac{d\Delta\mathbf{m}}{dt}-\xi\frac{d^{2}\Delta\mathbf{m}}{dt^{2}}\right]\,, \tag{70}\] where \(D\) denotes the diagonal matrix \(D={\rm diag}[D_{x},D_{y},D_{z}]\) and \(h_{0}=\mathbf{h}_{\rm eff}(\mathbf{m}_{0})\cdot\mathbf{m}_{0}=(-D\cdot\mathbf{m}_{0}+\mathbf{h}_{a})\cdot\mathbf{m}_{0}\) is the projection of the equilibrium effective field on the equilibrium magnetization. We first observe that the dynamics fulfills the constraint \(\mathbf{m}_{0}\cdot\Delta\mathbf{m}=0\) (to first order), therefore we can consider only the dynamics of the component \(\Delta\mathbf{m}_{\perp}\) of \(\Delta\mathbf{m}\) living in the plane perpendicular to the equilibrium \(\mathbf{m}_{0}\). We also refer to the projection of the demag tensor \(D\) as \(D_{\perp}\). As a consequence, we will deal with vectors having only two components associated with axes transverse to the equilibrium \(\mathbf{m}_{0}\). We observe that the skew-symmetric operator \(\Lambda\) is invertible (such that \(\Lambda\cdot\Lambda=-\mathcal{I}\)) when restricted to the plane orthogonal to \(\mathbf{m}_{0}\) and we express both \(\Delta\mathbf{m}_{\perp},\mathbf{h}_{\rm ac}\) in the complex (phasor) domain as \(\Delta\mathbf{m}_{\perp}(t)=\Delta\tilde{\mathbf{m}}e^{j\omega t}\), \(\mathbf{h}_{\rm ac}=\tilde{\mathbf{h}}_{\rm ac}e^{j\omega t}\). By using these formulas in eq.(70), one ends up with: \[\Delta\tilde{\mathbf{m}}=\underbrace{\left[j\omega\Lambda(\mathbf{m}_{0})+(D_{\perp}+h_{0}\mathcal{I})+j\omega\alpha\mathcal{I}-\xi\omega^{2}\mathcal{I}\right]^{-1}}_{\chi(\omega)}\tilde{\mathbf{h}}_{\rm ac}\,, \tag{71}\] which defines the magnetic susceptibility tensor \(\chi(\omega)\). When referred to principal axes, \(\chi(\omega)\) is a \(2\times 2\) matrix which can be easily computed.
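Evaluating \(\chi(\omega)\) numerically is indeed straightforward; the sketch below builds the \(2\times 2\) matrix of eq.(71) for an equilibrium along \(x\) and locates the two spectral peaks, comparing them with the resonance formulas (73)-(74) derived next. The normalized parameter values are assumptions for illustration only.

```python
import numpy as np

def chi(omega, w0y, w0z, alpha, xi):
    # 2x2 transverse susceptibility of eq. (71) on principal axes (m0 along x)
    L = np.array([[0.0, -1.0], [1.0, 0.0]])        # Lambda(m0) in the y-z plane
    M = (1j * omega * L + np.diag([w0y, w0z])
         + (1j * omega * alpha - xi * omega**2) * np.eye(2))
    return np.linalg.inv(M)

# assumed normalized parameters (thin film: Dx = Dy = 0, Dz = 1, bias along x)
alpha, xi, hdc = 0.023, 0.03, 0.376
w0y, w0z = 0.0 + hdc, 1.0 + hdc

omega = np.linspace(1e-3, 50.0, 50000)
p = np.abs([chi(o, w0y, w0z, alpha, xi)[1, 0] for o in omega])**2   # |chi_zy|^2
lo = omega < 5.0
s = w0y + w0z
print("FMR      : numeric", omega[lo][p[lo].argmax()],
      " theory", np.sqrt(w0y * w0z) / np.sqrt(1 + xi * s))                  # eq. (73)
print("nutation : numeric", omega[~lo][p[~lo].argmax()],
      " theory", np.sqrt((np.sqrt(2 * xi * s + 1) + xi * s + 1) / (2 * xi**2)))  # eq. (74)
```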
It can be shown that the resonance frequencies associated with the above linear dynamical system are the roots of the following fourth-degree polynomial: \[\xi^{2}\omega^{4}-2j\alpha\xi\omega^{3}-(\alpha^{2}+\xi(\omega_{0y}+\omega_{0z})+1)\omega^{2}+j\alpha(\omega_{0y}+\omega_{0z})\omega+\omega_{0y}\omega_{0z}=0\,, \tag{72}\] where \(\omega_{0y}=D_{y}-D_{x}+h_{\rm dc}\) and \(\omega_{0z}=D_{z}-D_{x}+h_{\rm dc}\). Equation (72) can be solved by using appropriate perturbation theory leading to the following resonance frequencies (computed in the conservative case when \(\alpha=0\)): \[\omega_{\rm FMR}\approx\pm\frac{\omega_{K}}{\sqrt{1+\xi(\omega_{0y}+\omega_{0z})}}\,,\ \omega_{K}=\sqrt{\omega_{0y}\omega_{0z}}\,, \tag{73}\] \[\omega_{N}\approx\pm\sqrt{\frac{\sqrt{2\xi(\omega_{0y}+\omega_{0z})+1}+\xi(\omega_{0y}+\omega_{0z})+1}{2\xi^{2}}}\,, \tag{74}\] where \(\omega_{K}=\sqrt{(D_{y}-D_{x}+h_{\rm dc})(D_{z}-D_{x}+h_{\rm dc})}\) is the classical Kittel ferromagnetic resonance (FMR) frequency. The former equation describes the influence of inertial effects on the FMR frequency, while the second formula gives the nutation resonance frequency (typically in the THz range). We observe that the above formulas take into account the dependence on the external bias field through the parameters \(\omega_{0y},\omega_{0z}\). It is also possible to determine closed-form expressions for the half-power (Full Width at Half Maximum, FWHM) linewidths: \[\Delta\omega_{\rm FMR}\approx\alpha(\omega_{0y}+\omega_{0z})-\xi(\omega_{0y}^{2}+4\omega_{0y}\omega_{0z}+\omega_{0z}^{2})\,, \tag{75}\] \[\Delta\omega_{N}\approx\frac{\alpha}{\xi}\left[1+\frac{1}{\sqrt{2\xi(\omega_{0y}+\omega_{0z})+1}}\right]. \tag{76}\] Here we consider an infinite thin-film (\(D_{x}=0,D_{y}=0,D_{z}=1\)) with material parameters: damping \(\alpha=0.023\), saturation magnetization such that \(\mu_{0}M_{s}=0.93\) T, inertial time scale \(\tau=1.26\) ps.

Figure 2: Time-domain linear relaxation of \(m_{z}\) computed with IMR, IMR-MS with different time steps and compared with the analytical solution of the linear iLLG eq.(70).

In figure 2, we report the time-evolution of \(m_{z}\) during relaxation under zero bias field starting from an initial state tilted in the \(x-z\) plane such that \(m_{x}=0.01,m_{z}=0.01\). The analytical solution of eq.(70) is used to benchmark the proposed numerical techniques with different time step amplitudes. It is apparent that the IMR technique allows the use of the largest time steps (up to \(\Delta t=0.01\), around 25 samples per nutation period) yielding no significant loss of accuracy with respect to the analytical solution. One can also see that IMR-MS with the first-order formula (64) (IMR-MS1) is accurate when \(\Delta t\sim 0.0001\) (corresponding to 0.61 fs in physical units), while it is not able to follow the nutation dynamics after 5-6 periods (the period is 0.27) with a 50 times larger time step \(\Delta t=0.005\). Conversely, IMR-MS with the second-order formula (68) (IMR-MS2) performs well with \(\Delta t=0.005\) (slightly above 50 samples per period) and provides a measured speedup of about 30 times compared to the former. This occurs since the average number of Newton iterations remains of the same order of magnitude, namely 2 for IMR-MS1 with \(\Delta t=0.0001\) and 3 for IMR-MS2 with the 50 times larger \(\Delta t\) (the tolerance for the Newton-Raphson iteration was set to \(10^{-14}\)). Finally, comparing the IMR and IMR-MS2 methods, one can see that, despite correctly reproducing the nutation oscillation, IMR-MS2 produces a small phase-shift when the largest time step \(\Delta t=0.01\) is chosen. Next, by using eq.(70), the frequency response power spectrum of the out-of-plane magnetization component \(m_{z}\) has been computed under a bias field \(h_{ax}=0.35\) T and an ac field directed along \(y\). The iLLG equation (3) has been solved numerically in order to determine the frequency response power spectrum. Namely, given the susceptibility \(\chi(\omega)\) in eq.(71) and the cross-power spectrum matrix \((\Delta\tilde{\mathbf{m}})\cdot(\Delta\tilde{\mathbf{m}})^{H}=(\chi(\omega)\cdot\tilde{\mathbf{h}}_{\mathrm{ac}})\cdot(\chi(\omega)\cdot\tilde{\mathbf{h}}_{\mathrm{ac}})^{H}\) (\({}^{H}\) means conjugate transpose), assuming that the only nonzero component of the input field \(\tilde{\mathbf{h}}_{\mathrm{ac}}=(\tilde{h}_{y},\tilde{h}_{z})^{T}\) is \(\tilde{h}_{y}=1\) (Fourier Transform of a Dirac delta) and the output response is the out-of-plane magnetization \(\Delta\tilde{m}_{z}\), the output power spectrum is: \[|\Delta\tilde{m}_{z}(\omega)|^{2}=|\chi_{zy}(\omega)|^{2}\quad, \tag{77}\] which is reported in fig.3 and compared with the analytical formulas (73),(75). In order to compute \(|\chi_{zy}|^{2}\) from time-domain numerical simulations, we apply a sinusoidal field \(h_{y}(t)\) at frequency \(\omega_{0}\) with sufficiently small amplitude (in order to stay in the linear regime) and measure the steady-state oscillation of \(m_{z}(t)\) after a sufficiently long time so that the transient response has vanished. This has been performed by choosing a simulated time \(T=2\) ns and an ac field amplitude equal to 0.06 T. Then, the Fast Fourier Transform \(M_{z}(\omega)=\mathcal{F}[m_{z}(t)]\) has been computed and the maximum of the power spectrum \(|M_{z}(\omega)|^{2}\) has been determined. This procedure has been repeated for several values of \(\omega_{0}\). The points so determined form samples of the frequency response power spectrum \(|\chi_{zy}(\omega)|^{2}\). In figure 3, numerical simulations performed with IMR-MS1 with \(\Delta t\) corresponding to 1 fs are used to compute the output power spectrum (black symbols). The results are in excellent agreement with the analytical theory.

Figure 3: Frequency response power spectrum \(|\Delta\tilde{m}_{z}(\omega)|^{2}\); detail of the nutation peak. The FMR frequency is around 18 GHz whereas the nutation frequency is about 634 GHz. The respective linewidths are about 1 GHz and 27 GHz. The blue line is eq.(71), red dots and the dashed line are the analytical formulas (73),(75), black symbols are the result of numerical simulations of iLLG dynamics with IMR-MS1.

### Spatially-inhomogeneous spin nutation driven by terahertz applied field

In order to perform efficient time-domain micromagnetic simulations of iLLG dynamics with full spatial discretization, the proposed IMR-MS2 time-stepping has been implemented in the finite-difference numerical code MaGICo[38; 44], which retains the same computational cost as the simulation of classical precessional LLG dynamics while keeping the important conservation properties outlined above. To validate the code, we explore typical spatio-temporal patterns of iLLG dynamics considering the time-domain simulation of ultra-short inertial spin waves in a confined ferromagnetic thin-film. At terahertz frequencies, the behavior of small magnetization oscillations significantly deviates from the classical description of exchange-dominated spin-waves, in that an ultimate limiting propagation speed appears[24].
This difference is mostly due to the mathematical structure of eq.(3) compared with the classical LLG precessional dynamics, i.e. the same equation where one sets \(\xi=0\). In fact, when inertial effects are taken into account, the torque proportional to the second-order time-derivative transforms the classical LLG equation into a wave-like equation with hyperbolic mathematical character. In this respect, on short time scales, finite time delays are expected in the propagation of the magnetization response far from a local external excitation. The considered sample is made of Cobalt and has a thin-film shape with square cross-section \(200\times 200\,\mathrm{nm}^{2}\) and thickness \(5\,\mathrm{nm}\). The ferromagnetic nanodot is initially at equilibrium, being saturated along the \(x\) axis by a static field \(\mu_{0}H_{ax}=100\) mT. The values of the material parameters are \(\gamma=2.211\times 10^{5}\) m A\({}^{-1}\) s\({}^{-1}\), \(\mu_{0}M_{s}=1.6\) T, \(A=13\) pJ/m (\(l_{\mathrm{ex}}=3.57\) nm), \(\tau=0.653\) ps (\(\xi=0.0338\)) and \(\alpha=0.005\). The applied field is a spatially-uniform sine wave step (turned on at \(t=0\)) along the \(y\) axis transverse to the equilibrium configuration with amplitude \(\mu_{0}H_{ay}=100\) mT and frequency \(f=1386\) GHz. For the above choice of parameters, this excitation frequency is slightly above the nutation resonance frequency and corresponds to inertial spin waves with wavelength around 20 nanometers[24]. The numerical simulation of iLLG equation (3) is performed with a time-step of 25 fs and a \(2.5\times 2.5\times 5\) nm\({}^{3}\) computational cell, which corresponds to discretizing the thin-film into \(80\times 80\) square prism cells. In order to isolate short-wavelength spin wave propagation from the rest of the simulated spatial pattern, high-pass spatial filtering via two-dimensional Fast Fourier Transform is performed on the magnetization components. The simulated time is \(100\,\mathrm{ps}\) and some snapshots of the magnetization out-of-plane component \(m_{z}\), taken at different time instants, are reported in figure 4. The magnetization is initially at the equilibrium and mostly aligned with the static field along the \(x\) direction except close to the square corners where there is the most pronounced deviation. Such a tilting acts as local excitation for inertial spin waves when the time-varying ac field step is applied[24]. In fact, as can be seen in the various panels of fig.4, two wavepackets with wavelength \(\approx 20\,\mathrm{nm}\) propagate from the edges of the nanodot toward its center after the application of the ac field, consistently showing a propagation with finite speed \(\approx 2000\,\mathrm{m/s}\) compatible with that predicted by the theory[24].

Figure 4: Spatial profiles of the short-wavelength magnetization out-of-plane component \(m_{z}\) obtained by FFT high-pass filtering at time \(t=1,10,30,49\,\mathrm{ps}\). The magenta solid line is a guide for the eye to follow the spin wave oscillation profile along the \(x\) direction.

## 6 Conclusion

In this paper, we have proposed second-order accurate and efficient numerical schemes for the time-integration of the ultra-fast inertial magnetization dynamics. We have shown that the iLLG equation describes a higher-order dynamical system compared to the classical precessional dynamics, which requires doubling the degrees of freedom for its description.
We have derived the fundamental properties of the iLLG dynamics, namely conservation of magnetization amplitude and 'angular momentum' projection, Lyapunov structure and generalized free energy balance properties, and demonstrated that the proposed implicit midpoint rule (IMR) time-stepping is able to correctly reproduce them unconditionally. A suitable Newton technique has been developed for the inversion of the nonlinearly coupled system of equations to be solved at each time-step. For large-scale micromagnetic simulations with full spatial discretization, efficient numerical time-stepping schemes based on the implicit midpoint rule combined with an appropriate multi-step method for the inertial term, termed IMR-MS of order 1 and 2, have been proposed. These schemes retain the same computational cost as the IMR for the classical LLG dynamics while providing conservation of magnetization amplitude and accurate reproduction of the high frequency nutation oscillations. In particular, thanks to the unconditional stability due to its implicit nature along with the second-order accuracy on the inertial term, both the IMR and IMR-MS2 allow choosing moderately large time-steps based only on accuracy requirements for the description of the nutation dynamics. The proposed techniques have been successfully validated against test cases of spatially-homogeneous and inhomogeneous magnetization iLLG dynamics demonstrating their effectiveness. For these reasons, we believe that these numerical schemes can become a de facto standard in the micromagnetic simulation of inertial magnetization dynamics in nano- and micro-scale magnetic systems.

## Acknowledgements

M.d'A., S.P. and C.S. acknowledge support from the Italian Ministry of University and Research, PRIN2020 funding program, grant number 2020PY8KTC.
2305.04909
On the nature of the planet-powered transient event ZTF SLRN-2020
The Red Nova ZTF SLRN-2020 is the third transient event with properties that are compatible with the merger of a planet with a main sequence (or close to) star on a dynamical timescale. While the first two transient events occurred in young stellar objects, ZTF SLRN-2020 occurred in an old system. Nonetheless, I show that the three star-planet intermediate luminosity optical transients (ILOTs, also termed Red Novae) occupy the same area in the energy-time diagram of ILOTs. Based on models for ILOTs that are powered by stellar binary interaction I suggest that the planet in ZTF SLRN-2020 launched jets at about its escape speed before it was engulfed by the star. Interestingly, the escape speed from the planet is similar to the orbital speed of the planet. This leads to an outflow with a very low terminal velocity, much below the escape velocity from the star, and in concentration around ~45 degrees to the equatorial plane. As well, the planet might have lost back some of the accreted mass just before engulfment, forming an accretion disk around the star. This disk might have launched jets during the main outburst of the event. The jets form a bipolar expanding nebula.
Noam Soker
2023-05-08T17:46:20Z
http://arxiv.org/abs/2305.04909v2
# On the nature of the planet-powered transient event ZTF SLRN-2020

###### Abstract

The Red Nova ZTF SLRN-2020 is the third transient event with properties that are compatible with the merger of a planet with a main sequence (or close to) star on a dynamical timescale. While the first two transient events occurred in young stellar objects, ZTF SLRN-2020 occurred in an old system. Nonetheless, I show that the three star-planet intermediate luminosity optical transients (ILOTs, also termed Red Novae) occupy the same area in the energy-time diagram of ILOTs. Based on models for ILOTs that are powered by stellar binary interaction I suggest that the planet in ZTF SLRN-2020 launched jets at about its escape speed before it was engulfed by the star. Interestingly, the escape speed from the planet is similar to the orbital speed of the planet. This leads to an outflow with a very low terminal velocity, much below the escape velocity from the star, and in concentration around \(\approx 45^{\circ}\) to the equatorial plane. As well, the planet might have lost back some of the accreted mass just before engulfment, forming an accretion disk around the star. This disk might have launched jets during the main outburst of the event. The jets form a bipolar expanding nebula.

stars: jets; planet-star interactions; Planetary Systems; stars: variables: general

Noam Soker

## 1 Introduction

The recently analyzed (De et al., 2023) interesting transient event ZTF SLRN-2020 is the fourth intermediate luminosity optical transient (ILOT; or Red Nova) that was claimed to be powered by star-planet interaction.1

Footnote 1: I refer to all gravitational-powered transients as ILOTs, not including supernovae or dwarf novae (e.g., Berger et al., 2009; Kashi and Soker, 2016; Muthukrishna et al., 2019). Other researchers use other terms (e.g., Jencson et al., 2019; Pastorello et al., 2019; Pastorello and Fraser, 2019). The ILOT class includes the subclass of Red Novae and some similar sub-classes.

The first claim, by Retter and Marom (2003) and Retter et al. (2006), that V838 Mon was powered by star-planet interaction was refuted on the grounds of energy considerations (e.g., Soker and Tylenda, 2003). The second claim for an ILOT powered by a star-planet interaction is that the unusual outburst of the young stellar object (YSO) ASASSN-15qi was powered by the accretion of a tidally-disrupted Jupiter-like planet (Kashi and Soker, 2017). The accretion via an accretion disk powered the event, probably by launching jets. Kashi et al. (2019) made the third claim and argued that the \(\approx 800\) day-long eruption of the young stellar object ASASSN-13db was powered by engulfment of the remains of a planet that was tidally-shredded by the YSO. These last two claims are not refuted (yet), and so the new claim by De et al. (2023) is the third claim for a star-planet-powered Red Nova that still holds. From their analyses of the observations De et al. (2023) deduce that a star of a mass \(M_{1}\simeq 0.8-1.5M_{\odot}\), on the main sequence or early sub-giant phase (Hertzsprung gap) and with a radius of \(R_{1}\simeq 1-4R_{\odot}\), engulfed a planet to power this event, which radiated in the first 150 days an energy of \(E_{\rm rad}\simeq 6.5\times 10^{41}(d/4\ {\rm kpc})^{2}\) erg, where \(d\) is the distance to this ILOT. De et al. (2023) argue for a planet mass of \(M_{\rm p}\simeq 0.01M_{\odot}\).
They further deduce from the infrared that the interaction started at least \(\approx 7\) months before the main outburst, and that the interaction expelled dust with a velocity of \(v_{\rm d}\simeq 35\ {\rm km\ s^{-1}}\). In this short study I further analyze the new event ZTF SLRN-2020. I place it on the energy-time diagram of ILOTs, compare it with predictions of previous studies, and discuss the common properties and the differences from the two earlier ILOTs powered by star-planet interaction (section 2). I then suggest (section 3) that the slowly expanding dust was launched from an accretion disk around the planet. I do not study other theoretical aspects that were worked out by earlier studies (e.g., Bear, Kashi, and Soker, 2011; Metzger, Giannios, and Spiegel, 2012; Yamazaki, Hayasaki, and Loeb, 2017; Gurevich, Bear, and Soker, 2022; O'Connor et al., 2023). I summarize in section 4.

## 2 Comparing ZTF SLRN-2020 with other star-planet ILOTs

I consider the duration of the star-planet ILOTs and their total energy, which includes the radiated energy and the kinetic energy of the ejecta. I end this section with a discussion of the main difference between them. The total radiated energy of ZTF SLRN-2020 is \(E_{\rm rad}\simeq 6.5\times 10^{41}(d/4\ {\rm kpc})^{2}\) erg, and the decline time by three magnitudes is about 200 days (De et al., 2023). De et al. (2023) consider two limits for the duration of the event: its lightcurve plateau duration, \(\simeq 26\) days, and the time it radiated 90% of its total radiated energy, \(103\pm 20\) days. The total radiated energy divided by the luminosity during the plateau \(L_{\rm p}\simeq 1.1\times 10^{35}(d/4\ {\rm kpc})^{2}\) erg s\({}^{-1}\) (De et al., 2023) gives a timescale of \(\simeq 70\) days. I take the range to include all these timescales, namely, from 26 days to 200 days. I mark these two ends with yellow-filled red stars on Fig. 1. Fig. 1, which is adapted from Kashi & Soker (2017), presents many ILOTs in a plane of their total energy versus their typical timescale. Note that the energy is the total energy, including radiation and kinetic energy. When there is no information on the kinetic energy, the structure of this diagram assumes that the total energy is ten times the radiated energy. This estimate gives here a total energy of \(E\simeq 6.5\times 10^{42}\) erg (for a distance of \(d=4\ {\rm kpc}\)) for ZTF SLRN-2020. De et al. (2023) crudely estimate an ejecta mass and velocity \(M_{\rm ej}\approx 3\times 10^{-5}M_{\odot}\) and \(v_{\rm ej}\approx 100\ {\rm km\ s^{-1}}\). This gives a kinetic energy of \(E_{\rm ej}\approx 3\times 10^{42}\) erg. The estimated total energy from observations is therefore \(E_{\rm ej}+E_{\rm rad}\approx 3.6\times 10^{42}\) erg. Therefore, the value of \(E\simeq 6.5\times 10^{42}\) erg that I take here is a reasonable upper value of the total energy (and there is the additional uncertainty of the distance). I mark the location of ZTF SLRN-2020 on Fig. 1 with \(E\simeq 6.5\times 10^{42}\) erg and the range of timescales by the two yellow-filled blue circles and a line connecting them. Kashi et al. (2019) analyzed the ILOT ASASSN-13db that was observed by Sicilia-Aguilar et al. (2017). The total duration of high emission lasted \(\approx 800\) days, ending with a decline that lasted \(\approx 55\) days. The total radiated energy during the (more or less) plateau of \(\approx 800\) days is \(E_{\rm rad}\approx 2\times 10^{41}\) erg. The total energy of the event is about an order of magnitude larger. Kashi et al.
(2019) scale it with \(\approx 10^{42}\) erg. I take it here to be ten times the radiated energy, at \(E\simeq 2\times 10^{42}\) erg. I mark this energy with the span of the timescale of 55 to 800 days. This timescale range is not the uncertainty in observations, but rather reflects the unclear way by which the event timescale should be defined, i.e., should we take only the decline phase by three magnitudes or so, or the entire duration, or the plateau duration, etc. From the locations of the three ILOTs (Red Novae) claimed to be powered by star-planet interaction, ASASSN-15qi, ASASSN-13db, and ZTF SLRN-2020, on the energy-time diagram we can learn the following. (1) V838 Mon is much above their location, and it was not powered by a star-planet interaction, contrary to the claim by Retter & Marom (2003). (2) The three star-planet ILOTs occupy a relatively well defined area in the energy-time diagram that is clearly below those that are thought to be powered by stellar-binary systems. (3) The star-planet ILOTs have typical timescales that are much longer than the dynamical time of the star, by about two orders of magnitude and more. This might suggest a powering process that is much longer than the dynamical timescale. An accretion disk that is formed by the planet might have this property (e.g., Bear, Kashi, & Soker, 2011). (4) The timescale of the star-planet ILOTs is not necessarily well defined, as a plateau phase in their lightcurve might be very long. Nonetheless, their decline phase might have a behavior similar to that of regular ILOTs (e.g., Kashi et al., 2019). (5) The recently added star-planet ILOT, ZTF SLRN-2020 (De et al., 2023), does not seem to be a YSO (or even a pre-main sequence star) as the other two are. Despite that, it occupies the same general area as the other two star-planet ILOTs and is close to the area that Kashi & Soker (2017) mark to be due to star-planet interaction in planetary systems of YSOs (green lines on the lower left of the diagram). This suggests that there are common powering processes to these three ILOTs. A common property might result from accretion disks that launch jets. I turn to study this possibility in section 3. There are some differences between the three ILOTs, in particular the multiple outbursts that ASASSN-15qi and ASASSN-13db had, but not ZTF SLRN-2020. ASASSN-13db experienced a shorter and fainter outburst in 2013 (Holoien et al., 2014), about a year before the beginning of the main outburst (Sicilia-Aguilar et al., 2017). Kashi et al. (2019) attribute this early outburst to a typical outburst of YSOs. The ILOT ASASSN-15qi experienced an outburst in 1976, 39 years prior to the main outburst (Herczeg et al., 2016). There is not enough data to classify the early outburst either as an ILOT with another planet or as a typical YSO outburst. In any case, it seems that the two early outbursts in the YSO ILOTs, which ZTF SLRN-2020 was not observed to have, result from ASASSN-15qi and ASASSN-13db being YSOs.

## 3 The roles of jets

Observations and theoretical arguments suggest that jets play major roles in many ILOTs (Red Novae). Observations of ILOTs that have spatially resolved ejecta show the ILOTs to possess bipolar structures that strongly hint at shaping by jets.
These include the Great Eruption of Eta Carinae (Davidson & Humphreys, 1997), a luminous blue variable (LBV) with its bipolar Homunculus nebula, V4332 Sgr that has a bipolar structure (Kaminski et al., 2018), and Nova 1670 (CK Vulpeculae) with a 350-year-old bipolar nebula (Shara et al., 1985) that has an S-morphology (Kaminski et al., 2020; Kaminski et al., 2021), a morphological type that must be shaped by jets. In a new study, Mobeen et al. (2023) report that the ejecta (nebula) of V838 Mon is bipolar. Theoretically, jets are very efficient in powering ILOTs and more efficient than equatorial ejecta (Soker, 2020; for powering by equatorial ejecta see, e.g., Pejcha et al., 2016, 2016; Metzger & Pejcha, 2017). Specific studies show that jets can account for the lightcurves of at least some ILOTs (e.g., Soker, 2020; Soker & Kaplan, 2021). These studies were aiming at ILOTs powered by stellar binary interaction. I turn to suggest that the planet in ZTF SLRN-2020 also launched jets (or a disk wind). I consider the following parameters that De et al. (2023) infer for the planetary system ZTF SLRN-2020: a stellar mass of \(M_{1}\simeq 1M_{\odot}\), a stellar radius of \(R_{1}\simeq 1-4R_{\odot}\), and a planet mass of \(M_{\rm p}\simeq 0.01M_{\odot}\).

Figure 1: Observed transient events on the energy-time diagram adapted from Kashi & Soker (2017) (where more details can be found). Blue empty circles represent the total (radiated plus kinetic) energy of the observed transients as a function of the duration of their eruptions, i.e., usually the time for the visible luminosity to decrease by 3 magnitudes. The three lines on the left are theoretical estimates for ILOTs powered by planet-brown dwarf (BD) interaction (Bear, Kashi, & Soker, 2011). Kashi & Soker (2017) place the ILOT ASASSN-15qi (observational data from Herczeg et al., 2016) and argued it was powered by a star-planet interaction. I added ASASSN-13db (observational data from Sicilia-Aguilar et al., 2017) with its estimated total energy that Kashi et al. (2019) claimed to have been powered by a star engulfing planet debris, and ZTF SLRN-2020 from De et al. (2023) who argued it was powered by a star engulfing a planet. Both the radiated energy (yellow-filled red stars) and the crudely estimated total energy (radiation + kinetic; two yellow-filled blue circles) are shown with the timescale range of ZTF SLRN-2020 (see text). The timescale range is not due to uncertain observations, but rather to the way that one defines the timescale, total duration or decline rate. The lower-left part (hatched in green) is the extension that Kashi & Soker (2017) made to include YSO-planet systems. The new observations suggest that systems of a well evolved main sequence star and a planet also populate this region. Abbreviations: BD: brown dwarf; GE: Great Eruption; ILRT: intermediate luminosity red transient; LRT: luminous red transient; SN: supernova; VMSs: very massive stars.

For the planet to expel mass from the system or to accrete mass, it should orbit very close to the stellar surface. For a solar mass and an orbit at \(a=2R_{\odot}\) the orbital velocity is \(v_{\rm orb}=310\) km s\({}^{-1}\). The radius of the planet of the above mass is \(R_{\rm p}\simeq 0.1R_{\odot}\) (e.g., Bashi et al., 2017), and so the escape velocity from the planet is \(v_{\rm p,es}\simeq 200\) km s\({}^{-1}\). The ILOT ZTF SLRN-2020 had months of pre-outburst activity, including mass loss (De et al., 2023).
I consider a process by which the planet accreted mass and launched jets (or a bipolar disk wind) that powered the pre-outburst activity. As is the case with stellar winds, the typical terminal velocity of jets is about the escape velocity from the object that launches the jets (e.g., Livio, 2009). The properties of jets might change over short timescales relative to their total activity time period. Some parts in the jets during some jet-launching episodes might have terminal velocities that are larger than the escape velocity, i.e., \(v_{\rm jet,m}=\beta v_{\rm p,es}\) with \(\beta>1\). For the parameters I take above, the escape velocity from the star at the location of the planet is \(v_{1,\rm es}=2^{1/2}v_{\rm orb}\simeq 440\) km s\({}^{-1}\). Because the jets are launched perpendicular to the orbital plane, the terminal velocity of the fastest parts of the jets relative to the star is \[v_{\rm jet,m,1}=\sqrt{(\beta v_{\rm p,es})^{2}+v_{\rm orb}^{2}}. \tag{1}\] These segments of the jets escape the system if \(v_{\rm jet,m,1}>v_{1,\rm es}\), which reads \(\beta>v_{\rm orb}/v_{\rm p,es}\simeq 1.5\). The property of this system that the escape velocity from the planet is of the same order of magnitude as the escape velocity from the system implies that some jet segments can reach the escape velocity, but not by much. These jet segments barely escape the star. If they do, their final outflow velocity from the system is much smaller than the escape velocity from the system. I suggest that this pre-outburst jet activity explains the slow outflow velocity of the dust, \(\approx 35\) km s\({}^{-1}\ll v_{\rm orb}\) (De et al., 2023). Because the outflow velocity of the escaping gas/dust from the planet, which is perpendicular to the orbital plane, is about equal to the orbital velocity, the outflow direction of the escaping dust/gas in this case is about \(\theta_{\rm j}\approx 45^{\circ}\) to the equatorial plane. This forms a bipolar outflow morphology, in addition to a possible concentration of outflowing gas/dust in the equatorial plane. The accreted mass onto the planet forms an outer layer around the planet. When the planet spirals in closer to the stellar surface, the planet might lose this material back to the star, hence forming an accretion disk around the star. This accretion disk might launch much faster jets. These jets might play a role in powering the main outburst, as was suggested for ILOTs powered by stellar binary systems. If we take the lowest mass in the range of masses that De et al. (2023) discuss for the planet in ZTF SLRN-2020, \(M_{\rm p}\simeq 10^{-4}M_{\odot}\), then we are in a different regime. The average density of such planets is \(<1\) g cm\({}^{-3}\) and they might be tidally disrupted by a Sun-like star on the main sequence (e.g., Yamazaki, Hayasaki, & Loeb, 2017). This brings the scenario to be much more similar to what Kashi & Soker (2017) suggested for the YSO-planet system ASASSN-15qi and Kashi et al. (2019) suggested for the YSO-planet system ASASSN-13db. However, De et al. (2023) consider the tidal disruption of the planet to be unlikely.

## 4 Summary

The goal of this study is to group the newly analyzed (De et al., 2023) star-planet ILOT (Red Nova) ZTF SLRN-2020 with the two other star-planet ILOTs, ASASSN-15qi (Kashi & Soker, 2017) and ASASSN-13db (Kashi et al., 2019). I added ASASSN-13db and ZTF SLRN-2020 to the energy-time diagram of ILOTs (Fig. 1).
These three star-planet ILOTs occupy a well-defined area in that diagram, below all other ILOTs and below classical novae. They are in the general area that was marked as YSO ILOTs in young star-planet interactions. The new event ZTF SLRN-2020 is not a YSO-planet system, but is nonetheless located in the same area. This might point to a similar powering mechanism. I suggested (section 3) that one of the common ingredients might be the launching of jets by the star. For the ILOT ASASSN-15qi, which occurred in a young planetary system, Kashi & Soker (2017) suggested that the star tidally disrupted the planet, forming an accretion disk that launched the jets during the event. In the star-planet ILOT ZTF SLRN-2020 the star engulfed the planet rather than tidally disrupting it (De et al., 2023). I suggested that the planet accreted some mass from the star in the months before the main outburst. At the early phase of the outburst, as the planet spiralled in close to the star, the star tidally removed the accreted mass. This formed an accretion disk around the star that launched jets. This suggested process must be confirmed by three-dimensional hydrodynamical simulations. I also suggested that the planet launched jets as it accreted mass. These formed the slowly expanding outflowing dust. The jets that the planet launched formed a concentrated outflow at \(\approx 45^{\circ}\) to the equatorial plane. The jets that the star might have launched are perpendicular to the equatorial plane. Overall, I expect the outflow from ZTF SLRN-2020 to form a bipolar nebula, as other ILOTs have, e.g., Nova 1670 (CK Vulpeculae; Kaminski et al., 2020; Kaminski et al., 2021; section 3).

## Acknowledgments

I thank Dima Shishkin and Amit Kashi for their help with the graphics. I thank an anonymous referee for helpful comments. This research was supported by a grant from the Israel Science Foundation (769/20).
2303.11734
Unlocking Layer-wise Relevance Propagation for Autoencoders
Autoencoders are a powerful and versatile tool often used for various problems such as anomaly detection, image processing and machine translation. However, their reconstructions are not always trivial to explain. Therefore, we propose a fast explainability solution by extending the Layer-wise Relevance Propagation method with the help of the Deep Taylor Decomposition framework. Furthermore, we introduce a novel validation technique for comparing our explainability approach with baseline methods in the case of missing ground-truth data. Our results highlight computational as well as qualitative advantages of the proposed explainability solution with respect to existing methods.
Kenyu Kobayashi, Renata Khasanova, Arno Schneuwly, Felix Schmidt, Matteo Casserini
2023-03-21T10:46:34Z
http://arxiv.org/abs/2303.11734v1
# Unlocking Layer-wise Relevance Propagation for Autoencoders

###### Abstract

Autoencoders are a powerful and versatile tool often used for various problems such as anomaly detection, image processing and machine translation. However, their reconstructions are not always trivial to explain. Therefore, we propose a fast explainability solution by extending the Layer-wise Relevance Propagation method with the help of the Deep Taylor Decomposition framework. Furthermore, we introduce a novel validation technique for comparing our explainability approach with baseline methods in the case of missing ground-truth data. Our results highlight computational as well as qualitative advantages of the proposed explainability solution with respect to existing methods.

## 1 Introduction

Autoencoders (Rumelhart et al., 1986) are neural network architectures that play a fundamental role in unsupervised machine learning. They are frequently used in various tasks such as anomaly detection, machine translation and image processing (Bank et al., 2020). Autoencoders are designed to encode the input data into a compressed, meaningful representation. This representation is then decoded in such a way that the reconstruction is as close as possible to the input data. Specifically, Autoencoders aim to minimize a reconstruction error, which is computed with a loss function based on the difference between the original input and its reconstruction. By minimizing this error, Autoencoders learn an informative representation of the data. Nonetheless, despite their good performance and widespread usage across different applications, they are hardly interpretable due to their intrinsic nonlinearity. In particular, when an Autoencoder fails to properly reconstruct a given input, understanding the rationale behind this failure is challenging. To that end, the addition of explainability capabilities to these types of models is highly desirable. One way to explain an Autoencoder's output for a given sample is to use attribution-based explainability methods (Linardatos et al., 2021). The main idea of these approaches is to explain machine learning models by assigning a relevance score to each input feature depending on its importance for the model's prediction. Such scores can be computed by measuring the reconstruction error of each individual feature. However, depending on the approach used, additional noise may appear in the explanations, as depicted in Figure 1 (c). This happens because relevance scores are assigned without taking any contextual information into consideration. Another approach is to modify the original input and measure the impact of such modifications on the model's output (Lundberg and Lee, 2017; Ribeiro et al., 2016). Then, high relevance scores are assigned to features that have a significant impact.

Figure 1: Explanations produced for an image of a damaged object from the MVTec dataset (Bergmann et al., 2019). The figure illustrates the (a) original image, (b) ground truth damaged area, (c) explanations produced by the baseline method and (d) results of our LRP-based explainability approach. Here, the baseline method focuses not only on the damaged area of the hazelnut, but on the borders of the object as well. Our approach, on the other hand, focuses attention on the damaged area.
However, in this case the number of perturbations needed to reliably compute such scores increases exponentially with the dimensionality of the input data. Hence, this process may become computationally expensive. Other types of explainability approaches (Bach et al., 2015; Shrikumar et al., 2017; Sundararajan et al., 2017) leverage knowledge of the architecture configuration and often come with computational benefits. In this work we propose an explainability approach specific to Autoencoders by extending the Layer-wise Relevance Propagation (LRP) framework (Bach et al., 2015). Figure 1 (d) illustrates an example of an explanation generated with our method when applied to a convolutional Autoencoder model. This model is trained to reconstruct images of a non-damaged sample object from the MVTec dataset (Bergmann et al., 2019). Figure 1 shows that our explanation (d) is more focused on the damaged part, while the reconstruction error-based one (c) focuses not only on the damaged area but also on borders and background noise. Our contribution is two-fold:

* we propose a novel LRP-based explainability approach specific to Autoencoders that allows the propagation of reconstruction errors and the assignment of relevance scores to input features;
* in order to assess the performance of explainability methods, we introduce a self-supervised validation approach. The latter produces artificial explanation labels, which can be used for evaluation.

## 2 Related Work

Attribution-based explanation methods (Linardatos et al., 2021) have recently gained popularity in the field of explainable artificial intelligence (Samek et al., 2021). The main idea of such approaches is to compute relevance scores for each input feature. The scores reflect each feature's importance with respect to the model's decision-making process. Formally, given an input feature vector \(x=[x_{1},\dots,x_{N}]\in\mathbb{R}^{N}\) and a model output \(y\), an attribution-based approach assigns relevance scores \(R=[R_{1},\dots,R_{N}]\in\mathbb{R}^{N}\) to features, where \(R_{i}\) designates the contribution of \(x_{i}\) to the output \(y\) of the model. We distinguish between two classes of attribution-based methods: perturbation-based methods (Lundberg and Lee, 2017; Ribeiro et al., 2016) and backpropagation-based methods (Shrikumar et al., 2017; Sundararajan et al., 2017; Bach et al., 2015). While the former are model-agnostic and, therefore, applicable to any black-box model, the latter are model-specific and exploit the underlying structure of the model to provide explanations. Model-agnostic perturbation-based methods (e.g. SHAP (Lundberg and Lee, 2017; Antwarg et al., 2019; Kim et al., 2021), LIME (Ribeiro et al., 2016)) typically analyze output values of multiple modifications of the original input feature vector, and assign relevance scores to features based on this analysis. Therefore, they are computationally expensive (Ullah et al., 2020), which makes them unsuitable for tasks with a high input dimensionality. On the other hand, backpropagation-based approaches (Ancona et al., 2018) exploit the structure of predictive models to compute relevance scores in a single backward pass, which makes them computationally efficient. As these methods are model-specific, their field of application is constrained, as they require knowledge of the architecture. Shrikumar et al. (2017) and Sundararajan et al. (2017) propose to estimate relevance scores based on differences between the model's output for a given sample and the model's output for what they call a baseline input.
Such a baseline input needs to be selected with information on the application domain. For example, in an image classification task, a black image may be selected as a baseline to represent the absence of any information. Therefore, the main limitation of such an approach is the requirement of domain-specific knowledge. In contrast, the LRP approach (Bach et al., 2015; Montavon et al., 2019) is designed for neural network architectures and directly propagates the model's output backward using propagation rules designed for various layer types, such as fully-connected, pooling, convolutional layers, etc. (Samek et al., 2017; Gholizadeh and Zhou, 2021; Bohle et al., 2019). The LRP explainability approach is typically applied to supervised tasks (Arras et al., 2017; Eitel et al., 2019; Bohle et al., 2019; Agarwal et al., 2021) and, to the best of our knowledge, no LRP propagation rule for the reconstruction loss function of Autoencoders exists. Therefore, in this work, we propose such a rule that permits propagation of an Autoencoder's reconstruction error throughout the network to assign relevance scores to the corresponding input feature vector.

## 3 A Novel LRP Rule for Autoencoders

In this section, we start by introducing Layer-wise Relevance Propagation (LRP), which we leverage for the explanation of Autoencoders. Then, we describe the challenges of applying this technique to neural networks with a reconstruction layer. Further, we briefly introduce the key concepts of the Deep Taylor Decomposition method (DTD) (Montavon et al., 2017) that we use to extend the LRP approach and make it applicable to Autoencoders. Finally, we present our novel LRP rule, which allows us to explain the reconstruction error of Autoencoders.

### Layer-wise Relevance Propagation

Layer-wise Relevance Propagation (Bach et al., 2015) is an explainability method designed for neural networks to produce relevance scores for each feature of an input sample \(x\). LRP assigns these scores by backward propagation from the model's output \(y=f(x)\) to the input features. First, the approach assigns a relevance score to the output of the model \(R_{o}=f(x)=y\). Then, \(R_{o}\) is redistributed to the neurons from the reconstruction layer \(l\), according to the LRP rule designed for the corresponding layer type. Further, relevance scores of those neurons are in turn propagated to the neurons from layer \(l-1\). Thus, this procedure is repeated until the input layer of the network is reached. All LRP rules satisfy a conservation property (Bach et al., 2015), which is defined by two equations. The first equation states that the sum of the relevance values received by a neuron should be equal to its own relevance value: \[R_{i}^{(l)}=\sum_{k}\mathcal{R}_{i\gets k}^{(l,l+1)}, \tag{1}\] where \(R_{i}^{(l)}\) is the relevance value assigned to neuron \(i\) in layer \(l\), and \(\mathcal{R}_{i\gets k}^{(l,l+1)}\) is the relevance value that is distributed from neuron \(k\) in layer \(l+1\) to neuron \(i\) in layer \(l\). The second equation states that the sum of the relevance values distributed by a neuron should be equal to its own relevance value: \[R_{k}^{(l+1)}=\sum_{i}\mathcal{R}_{i\gets k}^{(l,l+1)}, \tag{2}\] where \(R_{k}^{(l+1)}\) is the relevance score of neuron \(k\) in layer \(l+1\).
Eq. (1) and Eq. (2) lead to the following layer-wise conservation property: \[\sum_{i}R_{i}^{(l)}=\sum_{k}R_{k}^{(m)}, \tag{3}\] and the following global conservation property: \[\sum_{i}R_{i}^{(l)}=f(x), \tag{4}\] where \(i\) and \(k\) denote neuron indexes in the layers \(l\) and \(m\) respectively. These properties state that the sum of relevance scores of all neurons in a given layer is constant and equal to the relevance score assigned to the output of the model, \(R_{o}=f(x)=y\). This global conservation property defined in Eq. (4) is desirable for any explainability method that assigns relevance scores to input features (Bach et al., 2015). Using the conservation property, Bach et al. (2015) define what they refer to as the basic propagation rule, which works for both convolutional and fully-connected layers, as follows: \[\mathcal{R}_{i\gets k}^{(l,l+1)}=R_{k}^{(l+1)}\frac{a_{i}w_{ik}}{\sum_{h}a_{h}w_{hk}}, \tag{5}\] where \(a_{i}\) is the activation value of neuron \(i\), and \(w_{ik}\) is the weight of the link between the \(i\)-th and \(k\)-th neurons in layers \(l\) and \(l+1\) respectively. This rule produces an explanation that is equivalent to the gradient multiplied by the input. Further, Montavon et al. (2019) propose additional rules, such as the LRP-\(\epsilon\) rule to absorb the relevance values of neurons with weak activations and the LRP-\(\gamma\) rule to favor the effect of positive over negative contributions. We refer the reader to the work by Montavon et al. (2019) for more information about these rules.

### Autoencoders

Autoencoders are used in various tasks such as anomaly detection or image processing. These networks encode an input feature vector \(x\) into a latent representation and then predict the reconstruction of the input, denoted here as \(\hat{x}\). Typically, a reconstruction loss function \(e(x,\hat{x})\) is used to optimize the parameters of an Autoencoder model. One common example of such a loss function is the \(L_{2}\) loss: \[e(x,\hat{x})=\frac{1}{m}\sum_{i=1}^{m}\left(x_{i}-\hat{x}_{i}\right)^{2}, \tag{6}\] where \(x_{i}\) and \(\hat{x}_{i}\) are the Autoencoder's input and output features, respectively, and \(m\) is the dimensionality of the input feature vector \(x\). Another common example is the \(L_{1}\) loss: \[e(x,\hat{x})=\frac{1}{m}\sum_{i=1}^{m}\left|x_{i}-\hat{x}_{i}\right|. \tag{7}\] We propose a method to extend the LRP explanation approach for Autoencoders to such reconstruction losses. The rule from Eq. (5) is not applicable to this case, as \(e(x,\hat{x})\) depends on both the output and input layers of the Autoencoder. Thus, we propose a novel LRP rule that permits propagating the reconstruction error to the Autoencoder's output layer.
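To make the mechanics of Eq. (5) concrete, here is a minimal NumPy sketch (our own illustration with arbitrary toy dimensions, not code from the paper) that redistributes relevance through one fully-connected layer and checks the layer-wise conservation property of Eq. (3); the small constant eps only guards against division by zero, in the spirit of the LRP-\(\epsilon\) rule.

```python
import numpy as np

def lrp_basic(a, W, R_out, eps=1e-9):
    """Basic LRP rule (Eq. 5) for a fully-connected layer.

    a     : (n_in,)        activations of layer l
    W     : (n_in, n_out)  weights between layers l and l+1
    R_out : (n_out,)       relevance scores of layer l+1
    Returns the (n_in,) relevance scores of layer l.
    """
    z = a[:, None] * W                    # contributions a_i * w_ik
    denom = z.sum(axis=0) + eps           # sum_h a_h * w_hk for each neuron k
    return (z / denom[None, :]) @ R_out   # redistribute R_k proportionally

rng = np.random.default_rng(0)
a = rng.random(5)                # toy activations of layer l
W = rng.normal(size=(5, 3))      # toy weights to layer l+1
R_out = rng.random(3)            # relevance arriving at layer l+1

R_in = lrp_basic(a, W, R_out)
# Layer-wise conservation (Eq. 3): total relevance is preserved.
print(np.allclose(R_in.sum(), R_out.sum()))  # True (up to eps)
```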
### Deep Taylor Decomposition

Deep Taylor Decomposition (DTD) (Montavon et al., 2017) is a similar back-propagation explainability approach that assigns relevance scores to the input features. However, while LRP rules are typically designed heuristically, DTD derives rules by using Taylor expansions. It is interesting to note that heuristically-defined LRP rules for some types of layers have a DTD interpretation (Montavon et al., 2019). Moreover, other works (Arras et al., 2019; Samek et al., 2017) combine LRP and DTD rules to propagate relevance scores through an ML model. DTD is inspired by the divide-and-conquer paradigm, leveraging the fact that a deep neural network's function can be decomposed into a set of simpler sub-functions. These sub-functions are defined on single neurons; therefore, they can be easily expanded and decomposed using Taylor expansion. This permits the definition of a propagation rule for each neuron. Then, by aggregating multiple rules, we propagate the relevance from the output of the network to the inputs. More formally, to obtain these decompositions we use the following two steps (Montavon et al., 2017):

* we assume that the relevance \(R_{j}^{(l)}\) of neuron \(j\) in layer \(l\) depends solely on the set of neurons \(s_{j}=\left\{i_{1},i_{2},\ldots\right\}\) from the previous layer \(l-1\) and, therefore, there exists a function \(f_{R_{j}}(s_{j})=R_{j}^{(l)}\);
* we identify a set of neurons \(\tilde{s}_{j}=\left\{\tilde{i}_{1},\tilde{i}_{2},\ldots\right\}\) which are referred to as root points and serve as the starting points to compute the Taylor expansion.

For any input feature vector \(x\), in order to choose a root point \(\tilde{s}_{j}\), we search for a set of neurons that satisfies the two following conditions (Montavon et al., 2017): 1. \(f_{R_{j}}(\tilde{s}_{j})=0\). This condition is necessary to obtain a decomposition that fully redistributes the relevance across the neurons \(\left\{i_{1},i_{2},\ldots\right\}\); 2. \(\tilde{s}_{j}\) lies in the vicinity of \(\hat{s}_{j}\) under a desired distance metric (e.g. \(L_{2}\)). Here, \(\hat{s}_{j}\) is the value of the neurons \(s_{j}\) in layer \(l-1\) when a sample \(x\) is propagated through the network. Such a root point is usually obtained as the solution of an optimization problem, by minimizing the following objective: \[\begin{split}\tilde{s}_{j}&=\arg\min_{\xi}\|\xi-\hat{s}_{j}\|^{2}\\ \text{s.t.}& f_{R_{j}}(\xi)=0,\xi\in\Xi,\end{split} \tag{8}\] where \(\Xi\) is the input domain of \(f_{R_{j}}\) (i.e. the set of possible values of the neurons that influence neuron \(j\)). Using the Taylor decomposition, we can then represent \(R_{j}^{(l)}=f_{R_{j}}(s_{j})\) as follows: \[\begin{split} f_{R_{j}}(s_{j})&=f_{R_{j}}(\tilde{s}_{j})+\left(\left.\frac{\partial f_{R_{j}}}{\partial s_{j}}\right|_{\tilde{s}_{j}}\right)^{\top}(s_{j}-\tilde{s}_{j})+\varepsilon_{j}\\ &=0+\sum_{i}\underbrace{\left.\frac{\partial f_{R_{j}}}{\partial i}\right|_{\tilde{s}_{j}}\left(i-\tilde{i}\right)}_{\mathcal{R}_{i\gets j}^{(l-1,l)}}+\varepsilon_{j},\end{split} \tag{9}\] where \(\varepsilon_{j}\) denotes the first-order Taylor residual, \(|_{\tilde{s}_{j}}\) indicates that the derivative is evaluated at the root point \(\tilde{s}_{j}\), and \(\mathcal{R}_{i\gets j}^{(l-1,l)}\) is the relevance that neuron \(i\) in layer \(l-1\) receives from neuron \(j\) in layer \(l\). Here, the decomposition is done only at first order, because the second- and higher-order terms would involve complex combinations of several neurons that propagate relevance, and, therefore, it is more challenging to derive such propagation rules (Montavon et al., 2017). By combining Eq. (3) and Eq. (9), we can finally compute the relevance score of neuron \(i\) in layer \(l-1\) for a chosen root point as follows: \[R_{i}^{(l-1)}=\left.\sum_{j}\frac{\partial f_{R_{j}}}{\partial i}\right|_{\tilde{s}_{j}}\left(i-\tilde{i}\right). \tag{10}\] Montavon et al. (2017) suggest choosing root points based on the layer's input domain. For example, when calculating relevance scores for pixels, they propose to constrain the values of root points to be in the range between \(0\) and \(255\).
Following Eq. (10), other works (Montavon et al., 2019; Arras et al., 2019) describe different propagation formulas for various types of layers.

### LRP Rule for Reconstruction Loss Functions

In this section, we describe a novel LRP rule that we can use to explain an Autoencoder's reconstruction error. This rule allows the propagation of a relevance score from the reconstruction error \(R_{e}=e(x,\hat{x})\) to the neurons of the Autoencoder's output layer. The proposed rule can be combined with other rules used for the remaining layers of the Autoencoder, depending on their types. Therefore, relevance scores can be seamlessly propagated all the way from the reconstruction error to the input feature vector. As we generate explanations for a given sample, we can assume without loss of generality that our reconstruction error \(e\) depends solely on the neurons of the Autoencoder's output layer \(\hat{x}=\left\{\hat{x}_{1},\hat{x}_{2},\ldots\right\}\), and thus we treat the input feature vector \(x\) as a constant. We then derive an LRP rule for the Autoencoder's reconstruction error by decomposing the function \(f_{R_{e}}(\hat{x})=R_{e}=e(\hat{x})\). In order to perform such a decomposition we need to choose a root point \(\tilde{x}=\left\{\tilde{x}_{1},\tilde{x}_{2},\ldots\right\}\) for which \(f_{R_{e}}(\tilde{x})\) is equal to zero. Based on the above assumptions, the only solution is the input feature vector, which is also the optimal solution for Eq. (8). We then perform the Taylor decomposition as follows: \[\begin{split} f_{R_{e}}(\hat{x})&=f_{R_{e}}(\tilde{x})+\nabla f_{R_{e}}(\tilde{x})^{\top}(\tilde{x}-\hat{x})+\\ &+\frac{1}{2}(\tilde{x}-\hat{x})^{\top}H_{e}(\hat{x})(\tilde{x}-\hat{x})+\varepsilon_{e},\end{split} \tag{11}\] where the reconstruction error \(e(\tilde{x})=0\) as \(\tilde{x}\) is the root point; \(H_{e}\) is the Hessian matrix of the reconstruction loss function \(f_{R_{e}}\); and \(\varepsilon_{e}\) is the Taylor residual. Note that \(\varepsilon_{e}=0\) for both the \(L_{2}\) and \(L_{1}\) loss functions introduced in Sec. 3.2. Below we describe in detail the derivation of the LRP rule for both the \(L_{2}\) and \(L_{1}\) reconstruction functions.

\(L_{2}\) reconstruction function. It should be noted that we rely on the second-order Taylor decomposition for the \(L_{2}\) loss function, as its first-order term vanishes at the root point and all the second-order terms involving multiple variables are equal to zero. Thus, we decompose \(f_{R_{e}}(\hat{x})\) for the \(L_{2}\) reconstruction loss from Eq. (6) as follows: \[\begin{split} f_{R_{e}}(\hat{x})&=-\sum_{i}\frac{2}{m}(\tilde{x}_{i}-\tilde{x}_{i})(\tilde{x}_{i}-\hat{x}_{i})+\frac{1}{m}(\tilde{x}_{i}-\hat{x}_{i})^{2}\\ &=\sum_{i}\frac{1}{m}(\tilde{x}_{i}-\hat{x}_{i})^{2}=\sum_{i}\mathcal{R}^{(l,l_{e})}_{i\gets e},\end{split} \tag{12}\] where \(\tilde{x}_{i}\) and \(\hat{x}_{i}\) are the elements of the root point and reconstructed feature vectors respectively, with dimension \(m\); \(e\) is the reconstruction error; \(l\) and \(l_{e}\) denote the Autoencoder's output and reconstruction error layers respectively; and \(\mathcal{R}^{(l,l_{e})}_{i\gets e}\) is the relevance score that is propagated from the reconstruction error \(e\) to the neuron \(i\) in the Autoencoder's output layer \(l\).
As each neuron \(i\) in the Autoencoder's output layer receives relevance only from the reconstruction error \(e\), we derive the propagation rule for the \(L_{2}\) loss as follows: \[\mathcal{R}^{(l,l_{e})}_{i\gets e}=\frac{1}{m}(\tilde{x}_{i}-\hat{x}_{i})^{2} \tag{13}\]

\(L_{1}\) reconstruction function. Similarly, we derive a propagation rule for the \(L_{1}\) loss: \[\begin{split} f_{R_{e}}(\hat{x})=\sum_{i}\frac{1}{m}|\tilde{x}_{i}-\hat{x}_{i}|;\\ \mathcal{R}^{(l,l_{e})}_{i\gets e}=\frac{1}{m}|\tilde{x}_{i}-\hat{x}_{i}|.\end{split} \tag{14}\] Here, the second-order term is equal to zero, while \(\sum_{i}\frac{1}{m}|\tilde{x}_{i}-\hat{x}_{i}|\) represents the first-order term of the Taylor decomposition. More precisely, the first-order term is defined everywhere except at the singularity \(\tilde{x}_{i}=\hat{x}_{i}\), where we can assume the derivative to be zero. It is important to mention that for any input sample \(x\) both of these propagation rules preserve the conservation property: \[\sum_{i}\mathcal{R}^{(l,l_{e})}_{i\gets e}=R_{e}=e(x,\hat{x}). \tag{15}\] The proposed propagation rules for the \(L_{1}\) and \(L_{2}\) reconstruction loss functions allow us to extend the LRP approach to Autoencoders. In the following section we provide a detailed analysis of the proposed LRP rule by applying it to two challenging anomaly detection tasks.
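A minimal sketch of the proposed rules (our own toy illustration; the input \(x\) serves as the root point \(\tilde{x}\)) assigns relevance to the output neurons for both losses and verifies the conservation property of Eq. (15):

```python
import numpy as np

def lrp_reconstruction(x, x_hat, loss="l2"):
    """Propagate the reconstruction error to the output layer.

    Implements Eq. (13) for the L2 loss and Eq. (14) for the L1 loss,
    with the input x serving as the root point (x_tilde = x).
    """
    m = x.shape[0]
    if loss == "l2":
        return (x - x_hat) ** 2 / m
    return np.abs(x - x_hat) / m

x = np.array([0.2, 0.9, 0.4])       # Autoencoder input (toy example)
x_hat = np.array([0.25, 0.5, 0.4])  # its reconstruction

for loss in ("l2", "l1"):
    R = lrp_reconstruction(x, x_hat, loss)
    e = np.mean((x - x_hat) ** 2) if loss == "l2" else np.mean(np.abs(x - x_hat))
    # Conservation (Eq. 15): relevances sum to the reconstruction error.
    print(loss, np.allclose(R.sum(), e))  # True
```

From here, standard rules (e.g., the \(z^{+}\) rule used in the experiments below) carry these scores through the remaining layers toward the input.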
## 4 Experiments

In this section, we describe the results of our experiments with Autoencoders for anomaly detection. We evaluate the proposed approach for Autoencoders on a SQL workload log and on image datasets. First, we assess the performance of the proposed explainability algorithm on anomalies from a SQL workload. The workload is unlabelled, which makes it challenging to evaluate explainability approaches. Therefore, we introduce a self-supervised validation method based on corruption. Second, we generate and visualize explanations for images to explain the anomalous parts of objects detected by a convolutional Autoencoder.

### Anomaly Detection on SQL Logs

For our first experiment, we use a dataset that consists of SQL workload logs to train an anomaly detection system for database intrusion detection. The underlying model is a vanilla Autoencoder-based anomaly detector that we train in an unsupervised manner with the goal of memorizing regular database activity. The dataset contains approximately \(10M\) training, \(5M\) validation and \(10M\) test samples. The trained Autoencoder produces small reconstruction errors for test samples similar to training samples. Anomalous user behaviour results in an erroneous reconstruction of the input. An anomaly score is computed based on the \(L_{2}\) distance between the Autoencoder's input and reconstruction. The scores are normalized between \(0\) and \(1\). Each sample of the dataset is composed of textual and numerical features which, among others, encode various information related to the user, session, and SQL statements. Various standard embedding techniques are applied to encode the \(21\) features. Explainability algorithms assign a relevance score to each of these embedded features. We compare the three following methods:

* Residual explanation, which uses the \(L_{2}\) distance between the individual original and reconstructed features as explanations,
* SHAP (Lundberg and Lee, 2017), a model-agnostic explainability approach,
* our approach (see Sec. 3) using the \(L_{2}\) loss.

In our experiment, we use the kernel SHAP implementation with \(1000\) re-evaluations of each prediction and \(755\) as the background dataset size. For our approach, we use the proposed \(L_{2}\) propagation rule for the reconstruction layer and the \(z^{+}\) rule (Montavon et al., 2019) for all fully-connected layers of the Autoencoder except for the first layer, for which we apply the \(w^{2}\) rule. The purpose of our validation approach is two-fold. First, we quantify the performance of an explanation method to verify that the delivered explanations are satisfactory. Second, we discuss time complexity and compare the computational performance of different explainability methods.

Quantitative comparison. We quantify the performance of an attribution-based explanation method through a validation method based on corruption. Our approach consists in modifying one input feature of a clean and randomly chosen sample. The modification changes the feature's value in such a way that the Autoencoder produces a high anomaly score for the modified sample. Thus, we obtain a ground truth that indicates the feature causing the high anomaly score of the given sample, which permits the computation of a validation metric to compare different explainability approaches. For the input feature modification we use the following three corruption strategies: null, random and adversarial. The null corruption method changes the feature's value to 0; the random corruption approach modifies the feature's value to a random value sampled from a uniform distribution between 0 and 1 (that is, the same range as the initial feature values). Finally, the adversarial corruption method updates the feature's value in such a way that the reconstruction error of the given feature does not increase while the reconstruction error of the other features increases. Further details are given in the Appendix. Then, we generate \(K=100\) anomalous samples with the aforementioned corruptions. All these generated samples have anomaly scores greater than a threshold \(T\). We set \(T=0.3\), \(T=0.5\) and \(T=0.3\) for the adversarial, random and null corruptions, respectively. We have chosen these values empirically, based on the difficulty for the corruption method to produce datapoints with an anomaly score exceeding the given threshold. In practice, we expect explanation methods to achieve better performance when \(T\) is large, as the corrupted feature's contribution to the anomaly becomes large. Conversely, when \(T\) is lower, the validation approach leads to assessing an explanation method's sensitivity to identifying a corrupted feature with a lower contribution to the anomaly. To compare different approaches we use a recall metric that we calculate as follows. We define an explanation to be correct when the corrupted feature is among the \(m\in\{1,\dots,M\}\) features with the highest relevance scores, where \(M\) is the number of features. Otherwise, we define an explanation to be incorrect. We calculate the recall metric based on the generated validation samples as \(recall=N_{+}/(N_{+}+N_{-})\), where \(N_{+}\) and \(N_{-}\) denote the number of correct and incorrect explanations respectively.
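The validation loop can be summarized by the following sketch (hypothetical array shapes and names of our own; `top_m` plays the role of \(m\) above):

```python
import numpy as np

def validation_recall(explanations, corrupted_idx, top_m=1):
    """Recall of an explanation method on corrupted validation samples.

    explanations  : (K, M) relevance scores for K corrupted samples
    corrupted_idx : (K,)   index of the corrupted feature per sample
    An explanation counts as correct if the corrupted feature is among
    the top_m features with the highest relevance scores.
    """
    # Indices of the top_m most relevant features for each sample.
    top = np.argsort(-explanations, axis=1)[:, :top_m]
    correct = np.any(top == corrupted_idx[:, None], axis=1)
    return correct.mean()  # N+ / (N+ + N-)

# Toy example: 4 samples, 5 features, corrupted indices known by construction.
R = np.random.default_rng(1).random((4, 5))
idx = np.array([0, 3, 2, 1])
print(validation_recall(R, idx, top_m=2))
```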
Figure 2 illustrates the recall metric defined above for the null, random and adversarial corruption validation datasets respectively. The experiment shows that the Residual explanation achieves a good validation score on the null and random corruption datasets. However, its performance on the more challenging adversarial corruption dataset is poor. On the other hand, SHAP and our method achieve good performance on all datasets. This shows that our proposed approach and SHAP are more generic and succeed in explaining a broader range of anomalies.

Time Complexity. Even though SHAP produces accurate results, our method delivers explanations of comparable quality with substantially lower time complexity. This is explained by the fact that our approach only requires one forward and one backward pass to compute relevance scores, while SHAP requires a forward pass for each generated perturbation and for each re-evaluation. In our experiments, using the parameters described at the beginning of this section, the execution time required to compute explanations is three to four orders of magnitude lower with our method than with SHAP. We conclude that SHAP and the proposed approach produce more accurate results compared to the Residual explanation. Nonetheless, as our method is several orders of magnitude faster than SHAP, our approach is more suitable for time-sensitive applications.

### Anomaly Detection on Image Dataset

For our visual anomaly detection experiments we use images from the MVTec dataset (Bergmann et al., 2019), which contains several objects with various types of damage. For each of these objects we train a convolutional Autoencoder model as suggested by Bergmann et al. (2019). The training set consists of images of non-damaged objects and the test set contains images of damaged objects. In this case the Autoencoder is expected to show high reconstruction error for the damaged parts of objects in the test set. In this experiment, we provide explanations for this reconstruction error and compare them with ground-truth images of damaged areas that are also provided in the dataset. We convert each image to gray-scale, apply a Gaussian filter with kernel size \(3\) and \(\sigma=0.5\), and resize to \(128\times 128\) pixels. To avoid overfitting, we use \(10\%\) of the training dataset as validation. We also preprocess the training and validation data by applying random rotations and flips to the images. Using this augmentation method, we generate \(10000\) training samples and \(1000\) validation samples for each class of objects. We use a similar architecture to the one proposed by Bergmann et al. (2019), with the following modifications:

* all leaky ReLU activations are replaced with ReLU activations;
* convolutional kernel sizes are modified.

Appendix A provides more details about the exact model architecture. To evaluate our explanations, we rely on the precision-recall metric. We compute this metric separately for the various damaged object groups. For each group we set anomaly thresholds \(T=(t_{1},\dots,t_{i},\dots,t_{1000})\) in the range \((\min(R_{j}),\max(R_{j}))\), where \(\min(R_{j})\) and \(\max(R_{j})\) are the minimal and maximal pixel relevance values across all pixels and all images of the given group \(j\). Any pixel that is assigned a relevance value higher than \(t_{i}\) is considered damaged. For each threshold \(t_{i}\), we calculate precision and recall values by taking into account all pixels that have relevance values higher than \(t_{i}\). Further, we compute the average precision (AP) metric by calculating the area under the precision-recall curve.
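The threshold sweep can be sketched as follows (a simplified version under our own assumptions: relevance maps and binary ground-truth masks are given as arrays, and the area under the precision-recall curve is approximated by trapezoidal integration, which the text does not specify):

```python
import numpy as np

def average_precision(relevance, mask, n_thresholds=1000):
    """Area under the precision-recall curve over a relevance-threshold sweep.

    relevance : (N, H, W) pixel relevance maps for one object group
    mask      : (N, H, W) binary ground-truth damage masks
    """
    r, m = relevance.ravel(), mask.ravel().astype(bool)
    thresholds = np.linspace(r.min(), r.max(), n_thresholds)
    precisions, recalls = [], []
    for t in thresholds:
        pred = r > t                      # pixels flagged as damaged
        if pred.sum() == 0:
            continue
        tp = np.sum(pred & m)             # correctly flagged pixels
        precisions.append(tp / pred.sum())
        recalls.append(tp / max(m.sum(), 1))
    # Integrate precision over recall (recall shrinks as the threshold grows).
    order = np.argsort(recalls)
    return np.trapz(np.array(precisions)[order], np.array(recalls)[order])
```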
We rely on two baseline methods to evaluate the performance of our convolutional Autoencoder's explanation approach, which we denote as Residual-\(L_{1}\) and Residual-\(L_{2}\). These methods calculate the reconstruction error as \(|x-\hat{x}|\) and \((x-\hat{x})^{2}\) respectively. Similarly, for our approach we investigate the following two settings, in which the reconstruction error is computed using either the \(L_{1}\) or \(L_{2}\) loss and propagated using the LRP rules from Eq. (14) and Eq. (13) respectively. We refer to these methods as Ours-\(L_{1}\) and Ours-\(L_{2}\). In both of these methods we rely on the \(z^{+}\) rule for the relevance propagation through convolutional layers, and the \(z\)-box rule for the first layer. Table 1 shows a comparison of the AP metric for several object classes from the MVTec dataset (Bergmann et al., 2019).

\begin{table}
\begin{tabular}{l l c c|c c} \hline \hline \multirow{2}{*}{Object} & \multirow{2}{*}{Damage Type} & \multicolumn{2}{c|}{Residual} & \multicolumn{2}{c}{Ours} \\ & & \(L_{1}\) & \(L_{2}\) & \(L_{1}\) & \(L_{2}\) \\ \hline \multirow{4}{*}{Transistor} & Bent Lead & **0.104** & **0.103** & 0.027 & 0.028 \\ & Cut Lead & **0.082** & 0.079 & 0.038 & 0.040 \\ & Damaged Case & 0.061 & 0.058 & **0.122** & 0.103 \\ & Misplaced & 0.494 & 0.498 & **0.888** & 0.867 \\ \hline \multirow{3}{*}{Bottle} & Broken Large & 0.305 & 0.298 & **0.366** & **0.366** \\ & Broken Small & 0.206 & 0.202 & **0.340** & 0.321 \\ & Contamination & 0.148 & 0.155 & **0.369** & 0.263 \\ \hline \multirow{3}{*}{Hazelnut} & Crack & **0.478** & 0.475 & 0.317 & 0.295 \\ & Hole & 0.455 & **0.457** & 0.240 & 0.218 \\ & Print & **0.661** & **0.660** & 0.557 & 0.514 \\ \hline Overall & & 0.298 & 0.298 & **0.328** & 0.304 \\ \hline \hline \end{tabular}
\end{table}
Table 1: AP metric for several object classes from the MVTec dataset (Bergmann et al., 2019). Here we compare the \(L_{1}\) and \(L_{2}\) Residual explanation methods with the proposed approach for the \(L_{1}\) and \(L_{2}\) reconstruction functions.

On average, our approaches produce higher scores compared to the baseline methods. We also notice that some damages are harder to explain, e.g. the transistor object class with the cut lead damage. The main reason is that the Autoencoder model is not able to reach good reconstruction accuracy, which consequently lowers the performance for all explainability approaches. Furthermore, Figure 3 illustrates explanations that are produced by the proposed approach and the baseline method. We notice that our explanations focus on the important part of the object and its neighbouring area that belongs to the object. Also, they do not assign much importance to the background pixels. In contrast, the Residual explanations highlight various parts of the images, which are primarily borders and are not necessarily relevant to the damaged part of the object. Additional examples to highlight the preceding statement are provided in Appendix B.2. Finally, we analyze images on which Residual explanations outperform our approach with respect to the AP metric. We notice that the score does not always reflect the quality of the produced explanations. For example, the samples of objects from the hazelnut class, which is depicted in Figure 3, have pixels with high relevance scores outside of the ground truth area. However, all of those pixels are located next to the correct damaged parts of the object.
In contrast, the Residual explanation highlights some object borders, which are not relevant to the damages, but the amount of highlighted pixels is smaller outside the ground truth area, which results in a high AP metric. Therefore, the AP metric might not be ideal for the evaluation of explainability approaches. While developing an appropriate metric is out of the scope of this work, it is an interesting direction for future research.

Figure 2: Resulting recall metrics for Ours (blue \(o\)), SHAP (green \(+\)) and Residual-\(L_{2}\) (orange \(*\)) explanation approaches applied to the (a) null corruption, (b) random corruption and (c) adversarial corruption datasets.

## 5 Conclusion

In this work, we have proposed a principled way to extend the LRP explainability technique to Autoencoders using Deep Taylor Decomposition (Montavon et al., 2017). Furthermore, we have suggested a self-supervised validation technique for attribution-based explanation methods by leveraging various corruption methods. Finally, our experiments show that the proposed method outperforms the Residual explanation baseline method and shows comparable performance to model-agnostic approaches such as SHAP (Lundberg and Lee, 2017), while being several orders of magnitude faster.

Figure 3: Explanations produced for images of damaged objects from the MVTec dataset (Bergmann et al., 2019). The figure illustrates the (a) original image; (b) ground truth damaged area; the Autoencoder's explanations produced by the (c) baseline Residual-\(L_{1}\) and (d) Residual-\(L_{2}\) explainability methods; and (e,f) results of the proposed LRP-based approach. For our approach, we show explanations generated by propagation of the (e) \(L_{1}\) and (f) \(L_{2}\) reconstruction losses. As we can see, the baseline methods focus on the borders of the object, while our approach focuses the attention on the damaged areas. Additional examples are provided in Appendix B.2.
2307.12912
Classical simulation of non-Gaussian fermionic circuits
We propose efficient algorithms for classically simulating fermionic linear optics operations applied to non-Gaussian initial states. By gadget constructions, this provides algorithms for fermionic linear optics with non-Gaussian operations. We argue that this problem is analogous to that of simulating Clifford circuits with non-stabilizer initial states: Algorithms for the latter problem immediately translate to the fermionic setting. Our construction is based on an extension of the covariance matrix formalism which permits efficient tracking of relative phases in superpositions of Gaussian states. It yields simulation algorithms with polynomial complexity in the number of fermions, the desired accuracy, and certain quantities capturing the degree of non-Gaussianity of the initial state. We study one such quantity, the fermionic Gaussian extent, and show that it is multiplicative on tensor products when the so-called fermionic Gaussian fidelity is. We establish this property for the tensor product of two arbitrary pure states of four fermions with positive parity.
Beatriz Dias, Robert Koenig
2023-07-24T16:12:29Z
http://arxiv.org/abs/2307.12912v3
# Classical simulation of non-Gaussian fermionic circuits

###### Abstract

We propose efficient algorithms for classically simulating fermionic linear optics operations applied to non-Gaussian initial states. By gadget constructions, this provides algorithms for fermionic linear optics with non-Gaussian operations. We argue that this problem is analogous to that of simulating Clifford circuits with non-stabilizer initial states: Algorithms for the latter problem immediately translate to the fermionic setting. Our construction is based on an extension of the covariance matrix formalism which permits efficient tracking of relative phases in superpositions of Gaussian states. It yields simulation algorithms with polynomial complexity in the number of fermions, the desired accuracy, and certain quantities capturing the degree of non-Gaussianity of the initial state. We study one such quantity, the fermionic Gaussian extent, and show that it is multiplicative on tensor products when the so-called fermionic Gaussian fidelity is. We establish this property for the tensor product of two arbitrary pure states of four fermions with positive parity.

###### Contents

* 1 Introduction
  * 1.1 Efficiently simulable quantum computations
    * 1.1.1 Clifford circuits / Stabilizer computations
    * 1.1.2 Fermionic linear optics / Fermionic Gaussian computations
  * 1.2 Classical simulation algorithms and measures of magic
  * 1.3 The fermionic Gaussian extent
  * 1.4 On the (sub)multiplicativity of the extent
  * 1.5 Our contribution
  * 1.6 Prior and related work
  * 1.7 Outline
* 2 Background
  * 2.1 Dirac and Majorana operators
  * 2.2 Gaussian unitaries
  * 2.3 Fermionic Gaussian (pure) states
  * 2.4 Gaussianity condition
  * 2.5 Covariance matrices, Gaussian states and Wick's theorem
  * 2.6 Inner product formulas for Gaussian states
  * 2.7 Gaussian evolution and occupation number measurement
  * 2.8 The tensor product of two fermionic states
* 3 Tracking relative phases in fermionic linear optics
  * 3.1 Subroutines
  * 3.2 Computing overlaps and descriptions of evolved/measured states
  * 3.3 Initial states for computation
* 4 Classical simulation of fermionic Gaussian circuits with non-Gaussian initial states
  * 4.1 Extending simulation algorithms to superpositions
  * 4.2 Sparsification: Relating \(\mathcal{D}\)-extent to approximate \(\mathcal{D}\)-rank
  * 4.3 Fast norm estimation and approximate simulation
  * 4.4 Fermionic linear optics with non-Gaussian initial states
  * 4.5 Efficient additive-error strong simulation
* 5 Multiplicativity of the Gaussian fidelity for \(4\) fermions
  * 5.1 Four-fermion Gaussian and non-Gaussian states
  * 5.2 The Gaussian fidelity for \(4\)-fermion states
  * 5.3 Multiplicativity of the Gaussian fidelity for \(4\)-fermion states
* 6 Multiplicativity of \(\mathcal{D}\)-fidelity implies that of \(\mathcal{D}\)-extent
  * 6.1 Multiplicativity for finite dictionaries
  * 6.2 Multiplicativity for infinite dictionaries
* 7 Multiplicativity of the Gaussian extent for four fermions
* A Alternative Gaussianity condition for \(4\)-fermion states
* B Commutativity of the map \(\theta\) and quadratic Majorana monomials

## 1 Introduction

While universal polynomial-time quantum computation is believed to exceed the capabilities of efficient classical algorithms, restricted classes of quantum computations are amenable to efficient classical simulation. Identifying such models and corresponding simulation algorithms is a central goal in the study of quantum computing.
On the one hand, a good characterization of the boundary between the computational power of classical and quantum computational models provides insight into potential quantum advantages. On the other hand, efficient classical simulation methods can be used to assess the merits and scalability of quantum information-processing proposals. For example, the resilience of certain quantum codes against restricted noise models has successfully been studied by means of classical simulation methods giving threshold estimates for large-scale systems, see e.g., [1, 2, 3, 4, 5, 6] for an incomplete list of relevant references.

### 1.1 Efficiently simulable quantum computations

Most known examples of efficiently simulable quantum computations can be summarized by the following ingredients:

1. A set \(\mathcal{D}\) of states with the property that each element \(\Psi\in\mathcal{D}\) has a succinct classical description \(d_{\Psi}\). In the following, we will refer to \(\mathcal{D}\) as a dictionary.
2. A set \(\mathcal{E}\) of operations (unitary or non-unitary evolutions), again with the property that each element \(E\in\mathcal{E}\) has a succinct classical description \(d_{E}\). Following resource-theoretic conventions, we call \(\mathcal{E}\) the set of free operations.
3. A set \(\mathcal{M}\) of measurements (quantum instruments) with an efficient classical description \(d_{M}\) for each \(M\in\mathcal{M}\), and the property that every post-measurement state (associated with different measurement outcomes) obtained by applying \(M\in\mathcal{M}\) to a state \(\Psi\in\mathcal{D}\) belongs to \(\mathcal{D}\).

A triple \((\mathcal{D},\mathcal{E},\mathcal{M})\) gives rise to a (generally restricted) quantum computational model by composing these ingredients. A typical (non-adaptive) computation proceeds by preparing an initial state \(\Psi\in\mathcal{D}\), applying a sequence \(\{E_{t}\}_{t=1}^{T}\subset\mathcal{E}\) of operations, and performing measurements \(\{M_{k}\}_{k=1}^{L}\subset\mathcal{M}\) in succession. Assuming for simplicity that \(\mathcal{E}\) consists of a set of unitaries, and that for each \(k\in[L]=\{1,\ldots,L\}\), the measurement \(M_{k}\) realizes a POVM \(M_{k}=\{M_{m}^{(k)}\}_{m\in\mathcal{M}_{k}}\) with outcomes in a set \(\mathcal{M}_{k}\), such a computation produces a sample \(m=(m_{1},\ldots,m_{L})\in\mathcal{M}_{1}\times\cdots\times\mathcal{M}_{L}\) from the distribution \[p(m_{1},\ldots,m_{L})=\langle\Psi_{T},M_{m_{L}}^{(L)}\cdots M_{m_{1}}^{(1)}\Psi_{T}\rangle\qquad\text{ where }\qquad\Psi_{T}=E_{T}\circ\cdots\circ E_{1}(\Psi). \tag{1}\] More generally, one may consider circuits where operations are chosen adaptively depending on intermediate measurement results, assuming that the dependence is given by an efficiently computable function. The task of classically simulating the computational model associated with \((\mathcal{D},\mathcal{E},\mathcal{M})\) comes in two flavors. The input in both cases is the collection \((d_{\Psi},\{d_{E_{t}}\}_{t=1}^{T},\{d_{M_{k}}\}_{k=1}^{L})\) of descriptions of the initial state, the set of operations applied, and the measurements. The problem of weak simulation then consists in producing a sample \(m\in\mathcal{M}_{1}\times\cdots\times\mathcal{M}_{L}\) drawn according to the (ideal) output distribution \(p(m)\) of the circuit given by Eq. (1). In contrast, the problem of strong simulation consists of computing the output probability \(p(m)\) for a given (potential) measurement outcome \(m\).
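For intuition, Eq. (1) can be evaluated brute-force for small systems with dense state vectors (a sketch of our own; its cost is exponential in the system size, which is precisely what the restricted models below avoid):

```python
import numpy as np

def output_probability(psi, unitaries, povm_elements):
    """Brute-force evaluation of Eq. (1) with a dense state vector.

    psi           : (d,) initial state vector
    unitaries     : list of (d, d) evolution operators E_1, ..., E_T
    povm_elements : list of (d, d) POVM elements M^(1)_{m_1}, ..., M^(L)_{m_L}
    Returns p(m_1, ..., m_L).
    """
    psi_T = psi.copy()
    for E in unitaries:            # Psi_T = E_T ... E_1 (Psi)
        psi_T = E @ psi_T
    op = np.eye(len(psi), dtype=complex)
    for M in povm_elements:        # builds M^(L)_{m_L} ... M^(1)_{m_1}
        op = M @ op
    return np.real(np.vdot(psi_T, op @ psi_T))
```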
Relaxing the requirements of weak and strong simulation, one may allow for an approximation error. For weak simulation, this is typically formalized by demanding that the (probabilistic) classical algorithm outputs a sample \(m\) drawn from a distribution \(\tilde{p}\) which is \(\delta\)-close in \(L^{1}\)-distance (for a chosen error parameter \(\delta>0\)) to the ideal output distribution \(p\). Similarly, for strong simulation, the output \(\tilde{p}\) is required to be close to the value \(p(m)\) with a controlled (additive or multiplicative) error. In cases where a computational model specified by \((\mathcal{D},\mathcal{E},\mathcal{M})\) is amenable to efficient classical simulation, associated classical simulation algorithms are typically constructed by considering evolution and measurement separately. The basic problem then consists in constructing efficient classical algorithms with the following functionalities:

1. An algorithm evolve which, given classical descriptions \(d_{\Psi}\) of a state \(\Psi\in\mathcal{D}\) and \(d_{E}\) of an evolution operation \(E\in\mathcal{E}\), computes a classical description \(d_{E(\Psi)}\) of the evolved state \(E(\Psi)\).
2. Given a classical description \(d_{\Psi}\) of a state \(\Psi\in\mathcal{D}\), a classical description \(d_{M}\) of a measurement \(M\in\mathcal{M}\) (with associated set of measurement outcomes \(\mathcal{M}_{M}\)) and a measurement outcome \(m\in\mathcal{M}_{M}\),
   (a) an algorithm measureprob which outputs the probability \(p(m)\) (determined by Born's rule) of obtaining measurement outcome \(m\), and
   (b) an algorithm postmeasure which outputs a classical description of the post-measurement state when applying the measurement \(M\) to \(\Psi\).

It is clear that a triple \((\mathsf{evolve},\mathsf{measureprob},\mathsf{postmeasure})\) of such algorithms immediately gives rise to an efficient algorithm for strong simulation of the model \((\mathcal{D},\mathcal{E},\mathcal{M})\), with a runtime \[T\cdot\mathsf{time}(\mathsf{evolve})+L\cdot(\mathsf{time}(\mathsf{measureprob})+\mathsf{time}(\mathsf{postmeasure}))\] which is linear in the number \(T\) of operations applied, and linear in the number \(L\) of measurements. Assuming that for any measurement \(M\in\mathcal{M}\), the set of measurement outcomes \(\mathcal{M}_{M}\) associated with \(M\) is of constant cardinality, the triple \((\mathsf{evolve},\mathsf{measureprob},\mathsf{postmeasure})\) also gives rise to a randomized algorithm for weak simulation: Such an algorithm is obtained by using \(\mathsf{measureprob}\) to compute the entire distribution \(\{p(m)\}_{m\in\mathcal{M}_{M}}\) of measurement outcomes (when applying a measurement \(M\)), and then drawing \(m\in\mathcal{M}_{M}\) randomly according to this distribution. The runtime of this probabilistic algorithm is \[T\cdot\mathsf{time}(\mathsf{evolve})+L\cdot(\mathsf{time}(\mathsf{measureprob})\cdot w+\mathsf{time}(\mathsf{postmeasure}))\] where \(w=\max_{M\in\mathcal{M}}|\mathcal{M}_{M}|\) bounds the maximal cardinality of the set of measurement outcomes.
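Schematically, the weak-simulation algorithm just described composes the three subroutines as follows (our own sketch; the callables mirror the triple \((\mathsf{evolve},\mathsf{measureprob},\mathsf{postmeasure})\) and are placeholders, not a concrete library API):

```python
import random

def weak_simulate(d_psi, operations, measurements,
                  evolve, measureprob, postmeasure):
    """Generic weak simulation from the triple (evolve, measureprob, postmeasure).

    d_psi        : classical description of the initial state in D
    operations   : classical descriptions of the operations E_1, ..., E_T
    measurements : classical descriptions of the measurements M_1, ..., M_L,
                   each paired with its finite set of outcomes
    """
    state = d_psi
    for d_E in operations:                       # cost: T * time(evolve)
        state = evolve(state, d_E)
    outcomes = []
    for d_M, outcome_set in measurements:
        # cost: |outcome_set| * time(measureprob) per measurement
        probs = [measureprob(state, d_M, m) for m in outcome_set]
        m = random.choices(outcome_set, weights=probs)[0]
        state = postmeasure(state, d_M, m)       # cost: time(postmeasure)
        outcomes.append(m)
    return outcomes
```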
#### 1.1.1 Clifford circuits / Stabilizer computations

Perhaps the most well-known example of a computational model \((\mathcal{D},\mathcal{E},\mathcal{M})\) where efficient algorithms \((\mathsf{evolve},\mathsf{measureprob},\mathsf{postmeasure})\) can be provided is the Gottesman–Knill theorem for stabilizer computations on \(n\) qubits. Here \(\mathcal{D}\) is the set \(\mathsf{STAB}_{n}\) of \(n\)-qubit stabilizer states (whose elements can be specified by their stabilizer generators, i.e., corresponding stabilizer tableaux), \(\mathcal{E}\) is the set of Clifford unitaries (described by symplectic matrices), and \(\mathcal{M}\) are measurements of single-qubit Pauli \(Z\) operators (described by an index \(j\in[n]\)). In this case, there are efficient algorithms with runtimes given in Table 1.

\begin{table}
\begin{tabular}{c|c} \hline algorithm & time \\ \hline \hline \(\mathsf{evolve}\) & \(O(n)\) \\ \hline \(\mathsf{measureprob}\) & \(O(n)\) \\ \hline \(\mathsf{postmeasure}\) & \(O(n^{2})\) \\ \hline \end{tabular}
\end{table}
Table 1: Runtimes of the building blocks \(\mathsf{evolve}\), \(\mathsf{measureprob}\), \(\mathsf{postmeasure}\) for classical simulation of \(n\)-qubit stabilizer circuits as given in [7]. Evolution corresponds to the application of an \(n\)-qubit Clifford unitary, and each measurement is that of a Pauli observable \(Z_{j}\) with \(j\in[n]\).

#### 1.1.2 Fermionic linear optics / Fermionic Gaussian computations

A different class of efficiently simulable computations - the one we are interested in here - is that of fermionic linear optics on \(n\) fermions. We focus on pure-state computations: Here the dictionary \(\mathcal{D}\) consists of the set \(\mathcal{G}_{n}\) of pure fermionic Gaussian states. An element \(\Psi\in\mathcal{G}_{n}\) in the dictionary can be described by its covariance matrix \(\Gamma_{\Psi}\), an antisymmetric \(2n\times 2n\) matrix with real entries. The set \(\mathcal{E}=\mathcal{E}_{\text{Gauss}}\) can be taken as the set of Gaussian unitary operations. Each such unitary \(U=U_{R}\) is fully determined by an element \(R\in O(2n)\) of the orthogonal group on \(\mathbb{R}^{2n}\), where \(R\mapsto U_{R}\) defines a (projective) unitary representation of \(O(2n)\) on the space \(\mathcal{H}^{n}\) of \(n\) fermions. The set \(\mathcal{M}=\mathcal{M}_{\text{number}}\) consists of all occupation number measurements. As in the case of stabilizer states, there are polynomial-time algorithms \((\mathsf{evolve},\mathsf{measureprob},\mathsf{postmeasure})\) for classical simulation with runtimes summarized in Table 2. In particular, the covariance matrix \(\Gamma_{U_{R}\Psi}\) of a Gaussian state \(\Psi\) evolved under a Gaussian unitary \(U_{R}\) can be computed in time \(O(n^{3})\) from \(\Gamma_{\Psi}\) and \(R\). The outcome probability of observing \(0\) (respectively \(1\)) when performing an occupation number measurement can be computed in time \(O(1)\), and the covariance matrix of the post-measurement state can be computed in time \(O(n^{2})\) [8, 9, 10] (see also [3]).
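A sketch of the first two subroutines in this fermionic setting might look as follows (our own illustration; we assume the convention \(\Gamma_{kl}=\tfrac{i}{2}\langle[\gamma_{k},\gamma_{l}]\rangle\) with Majorana operators \(\gamma_{2j}=a_{j}+a_{j}^{\dagger}\), \(\gamma_{2j+1}=-i(a_{j}-a_{j}^{\dagger})\) for \(0\)-indexed modes; the sign in measureprob flips under other conventions, and neither formula is spelled out in the text above):

```python
import numpy as np

def evolve(Gamma, R):
    """Gaussian evolution: the covariance matrix transforms as
    Gamma -> R @ Gamma @ R.T for R in O(2n); cost O(n^3)."""
    return R @ Gamma @ R.T

def measureprob(Gamma, j, outcome):
    """Outcome probability of an occupation number measurement on mode j,
    read off a single covariance-matrix entry; cost O(1).

    Under the convention assumed above, <2 a_j^dag a_j - 1> = Gamma[2j, 2j+1],
    so p(n_j = 1) = (1 + Gamma[2j, 2j+1]) / 2 (sign is convention-dependent).
    """
    p1 = 0.5 * (1.0 + Gamma[2 * j, 2 * j + 1])
    return p1 if outcome == 1 else 1.0 - p1
```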
While such a magic state can even promote the computational model to universal quantum computation, this is generally not the case for all states \(\Psi\). It is thus a natural question to quantify the degree of "magicness" provided by a state \(\Psi\not\in\mathcal{D}\). For the set \(\mathsf{STAB}_{n}\) of \(n\)-qubit stabilizer states, corresponding magic monotones considered in the literature include the robustness of magic [14, 15], the exact and approximate stabilizer rank [16, 17, 18], the stabilizer extent [19, 18], the stabilizer nullity [20] and the generalized robustness [21]. The maximum overlap of a given state \(\Psi\) with an element of the dictionary \(\mathcal{D}\), i.e., the quantity \[F_{\mathcal{D}}(\Psi)=\sup_{\varphi\in\mathcal{D}}|\langle\varphi,\Psi\rangle| ^{2}\, \tag{2}\] is arguably one of the most direct ways of quantifying how far \(\Psi\) is from a "free" state, i.e., a state belonging to \(\mathcal{D}\). Motivated by the analogously defined notion of stabilizer fidelity in Ref. [18], we call \(F_{\mathcal{D}}(\Psi)\) the \(\mathcal{D}\)-fidelity of \(\Psi\) in the following. This quantity plays an important role in our arguments when considering multiplicativity properties. However, the \(\mathcal{D}\)-fidelity \(F_{\mathcal{D}}(\Psi)\) is not a good quantifier of hardness of classical simulation because simply replacing \(\Psi\) by an element of \(\mathcal{D}\) typically leads to a significant approximation error. From the point of view of classical simulation, a relevant magicness measure for a state \(\Psi\not\in\mathcal{D}\) relates to the (added) complexity when trying to simulate a quantum computation with initial state \(\Psi\), built from a triple \((\mathcal{D},\mathcal{E},\mathcal{M})\) allowing for efficient classical simulation. One such measure, introduced in Ref. [16] for the case of stabilizer computations, is the \(\mathcal{D}\)-rank \(\chi_{\mathcal{D}}(\Psi)\) of \(\Psi\). (For \(\mathcal{D}=\mathsf{STAB}_{n}\), this is called the stabilizer rank of \(\Psi\).) It is defined as the minimum number of terms when decomposing \(\Psi\) as a linear combination of elements of \(\mathcal{D}\), i.e., \[\chi_{\mathcal{D}}(\Psi)=\min\left\{\chi\in\mathbb{N}\mid\exists\{\varphi_{ j}\}_{j=1}^{\chi}\subset\mathcal{D},\{\gamma_{j}\}_{j=1}^{\chi}\subset\mathbb{C} \text{ such that }\Psi=\sum_{j=1}^{\chi}\gamma_{j}\varphi_{j}\ \right\}. \tag{3}\] In the context of signal processing, the corresponding optimization problem is referred to as a sparse approximation problem. The \(\mathcal{D}\)-rank \(\chi_{\mathcal{D}}(\Psi)\) appears naturally when constructing and analyzing simulation algorithms, but it suffers from a number of shortcomings: On the one hand, the set of states \(\Psi\in\mathcal{H}\) whose \(\mathcal{D}\)-rank is less than the dimension of the Hilbert space \(\mathcal{H}\) is a set of zero Lebesgue measure [22, Proposition 4.1]. On the other hand, the quantity \(\chi_{\mathcal{D}}(\Psi)\) relates to the classical simulation cost of exactly simulating dynamics involving the state \(\Psi\). 
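As a toy illustration of the \(\mathcal{D}\)-fidelity (2): for a finite dictionary, the supremum can be evaluated by enumeration, as in the following minimal Python sketch using the six single-qubit stabilizer states (the Pauli eigenstates). Applied to the magic state \(|T\rangle=(|0\rangle+e^{i\pi/4}|1\rangle)/\sqrt{2}\) discussed further below, it returns \(\cos^{2}(\pi/8)\approx 0.854\). For continuously parameterized dictionaries such as \(\mathcal{G}_{n}\), no such enumeration is available.

```python
import numpy as np

# The six single-qubit stabilizer states (eigenstates of the Pauli operators).
s = 1 / np.sqrt(2)
STAB1 = [np.array(v, dtype=complex) for v in
         [(1, 0), (0, 1), (s, s), (s, -s), (s, 1j * s), (s, -1j * s)]]

def fidelity(dictionary, psi):
    """Brute-force D-fidelity F_D(psi) = max_phi |<phi, psi>|^2, cf. Eq. (2)."""
    return max(abs(np.vdot(phi, psi)) ** 2 for phi in dictionary)

T = np.array([1, np.exp(1j * np.pi / 4)], dtype=complex) / np.sqrt(2)
print(fidelity(STAB1, T))   # 0.8535... = cos^2(pi/8)
```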
In practice, some approximation error is typically acceptable, and corresponding simulations can be achieved with lower cost. In other words, the quantity \(\chi_{\mathcal{D}}(\Psi)\) does not accurately reflect the cost of approximate simulation.

\begin{table} \begin{tabular}{c|c} \hline algorithm & time \\ \hline \hline \(\mathsf{evolve}\) & \(O(n^{3})\) \\ \hline \(\mathsf{measureprob}\) & \(O(1)\) \\ \hline \(\mathsf{postmeasure}\) & \(O(n^{2})\) \\ \hline \end{tabular} \end{table} Table 2: Runtimes of building blocks \(\mathsf{evolve}\), \(\mathsf{measureprob}\), \(\mathsf{postmeasure}\) for classical simulation of \(n\)-fermion linear optics circuits as proposed in [8, 9, 10], see also [3]. Evolution amounts to application of a fermionic Gaussian unitary. Measurement corresponds to measuring an observable \(a_{j}^{\dagger}a_{j}\) (occupation number) for \(j\in[n]\).

A more operationally relevant quantity is the \(\delta\)-approximate \(\mathcal{D}\)-rank \(\chi_{\mathcal{D}}^{\delta}(\Psi)\) of \(\Psi\) introduced in Ref. [17], again for stabilizer computations. For a fixed approximation error \(\delta>0\), this is given by the minimum \(\mathcal{D}\)-rank of any state \(\Psi^{\prime}\) that is \(\delta\)-close to \(\Psi\), i.e.,

\[\chi_{\mathcal{D}}^{\delta}(\Psi)=\min\left\{\chi_{\mathcal{D}}(\Psi^{\prime})\ |\ \Psi^{\prime}\in\mathcal{H}\text{ such that }\|\Psi-\Psi^{\prime}\|\leq\delta\right\}. \tag{4}\]

An exact classical simulation algorithm whose complexity scales with the exact \(\mathcal{D}\)-rank \(\chi_{\mathcal{D}}(\Psi)\) yields an approximate simulation whose cost scales identically in the approximate (instead of the exact) \(\mathcal{D}\)-rank \(\chi_{\mathcal{D}}^{\delta}(\Psi)\) of \(\Psi\). Here approximate weak simulation means that instead of sampling from the ideal output distribution \(P\) of a circuit, the simulation samples from a distribution \(P^{\prime}\) whose \(L^{1}\)-distance from \(P\) is bounded by \(O(\delta)\). Similarly, in approximate (strong) simulation, output probabilities are approximately computed with a controlled approximation error.

A different quantity of interest is obtained by replacing the ill-behaved rank function (i.e., size of the support) in the definition of the \(\mathcal{D}\)-rank \(\chi_{\mathcal{D}}(\Psi)\) by the \(L^{1}\)-norm of the coefficients when representing \(\Psi\) as a linear combination. In the context of stabilizer states the corresponding quantity was introduced by Bravyi et al. [18] under the term stabilizer extent: For a state \(\Psi\in(\mathbb{C}^{2})^{\otimes n}\) it is defined as

\[\xi_{\mathsf{STAB}_{n}}(\Psi)=\inf\left\{\|\gamma\|_{1}^{2}\ |\ \gamma:\mathsf{STAB}_{n}\to\mathbb{C}\text{ such that }\Psi=\sum_{\varphi\in\mathsf{STAB}_{n}}\gamma(\varphi)\varphi\right\}\,\]

where \(\|\gamma\|_{1}=\sum_{\varphi\in\mathsf{STAB}_{n}}|\gamma(\varphi)|\) denotes the \(1\)-norm of \(\gamma\). The corresponding convex optimization problem is known as the basis pursuit problem [23] (when \(\mathsf{STAB}_{n}\) is replaced by e.g., a finite dictionary \(\mathcal{D}\)). Sufficient conditions for when the basis pursuit problem yields a solution of the sparse approximation problem were investigated in a series of works culminating in Fuchs' condition [24] (see also [25]). More importantly for (approximate) simulation, feasible solutions of the basis pursuit problem provide upper bounds on the sparse approximation problem.
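For a finite dictionary, the basis pursuit problem can be handed to an off-the-shelf convex solver. The sketch below is a minimal illustration assuming the cvxpy package (with its complex-variable support and a conic solver); it minimizes \(\|\gamma\|_{1}\) subject to the decomposition constraint, so that squaring the optimum yields the extent-type quantity just defined. With the dictionary `STAB1` and the state `T` from the previous sketch, it reproduces the value \(\approx 1.17\) for \(|T\rangle\) quoted further below.

```python
import numpy as np
import cvxpy as cp

def l1_min(dictionary, psi):
    """Basis pursuit: minimize ||gamma||_1 subject to sum_j gamma_j phi_j = psi.
    For a finite dictionary, the square of the optimum is the extent of psi."""
    A = np.column_stack(dictionary)          # dictionary states as columns
    gamma = cp.Variable(A.shape[1], complex=True)
    problem = cp.Problem(cp.Minimize(cp.norm(gamma, 1)), [A @ gamma == psi])
    problem.solve()
    return problem.value

# print(l1_min(STAB1, T) ** 2)   # ~1.1716 (STAB1, T as in the previous sketch)
```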
For the stabilizer rank, a sparsification result (see [18, Theorem 1]) gives an upper bound on the \(\delta\)-approximate stabilizer rank \(\chi_{\mathsf{STAB}_{n}}^{\delta}(\Psi)\) in terms of the stabilizer extent \(\xi_{\mathsf{STAB}_{n}}(\Psi)\), for any \(\delta>0\) (see Section 4.2). Building on earlier results [17], it was shown in Ref. [18] that a stabilizer circuit on \(n\) qubits with \(L\) Clifford gates initialized in a state \(\Psi\) can be weakly simulated with error \(\delta\) in a time scaling as \(O(\xi_{\mathsf{STAB}_{n}}(\Psi)/\delta^{2}\cdot\mathsf{poly}(n,L))\). The error \(\delta\) expresses the \(L^{1}\)-norm distance of the distribution of simulated measurement outcomes from the output distribution of the actual quantum computation. Here we are not accounting for the time required to perform classical computations when adaptive quantum circuits are considered. In addition, Ref. [26] provided a classical algorithm for strong simulation of a circuit \(U\) with \(L\) Clifford gates and \(t\) \(T\)-gates initialized in a stabilizer state \(\Psi\) with an additive error \(\delta\). Their algorithm outputs an estimate of the probability \(|\langle x,U\Psi\rangle|^{2}\) of obtaining measurement outcome \(x\in\{0,1\}^{n}\) up to an additive error \(\delta\), with success probability greater than \(1-p_{f}\). It has runtime \(O(\xi_{\mathsf{STAB}_{n}}(|T\rangle^{\otimes t})\log(1/p_{f})\cdot\mathsf{poly}(n,L,\delta^{-1}))\), scaling linearly with the stabilizer extent \(\xi_{\mathsf{STAB}_{n}}(|T\rangle^{\otimes t})\) of \(t\) copies of the single-qubit magic state \(|T\rangle\) associated with a \(T\)-gate [11].

### The fermionic Gaussian extent

In the following, we generalize the notion of the extent beyond stabilizer computations to any dictionary \(\mathcal{D}\). We refer to the corresponding quantity as the \(\mathcal{D}\)-extent \(\xi_{\mathcal{D}}(\Psi)\) of \(\Psi\). We assume throughout that we are interested in pure state quantum computations on a Hilbert space \(\mathcal{H}\) and that the dictionary \(\mathcal{D}\) is a subset of pure states on \(\mathcal{H}\). Then the \(\mathcal{D}\)-extent \(\xi_{\mathcal{D}}(\Psi)\) of \(\Psi\in\mathcal{H}\) is defined as

\[\xi_{\mathcal{D}}(\Psi)=\inf_{N\in\mathbb{N}}\inf_{\varphi_{1},\ldots,\varphi_{N}\in\mathcal{D}}\left\{\|\gamma\|_{1}^{2}\ \big{|}\ \gamma\in\mathbb{C}^{N}\ \text{such that}\ \sum_{j=1}^{N}\gamma_{j}\varphi_{j}=\Psi\right\}. \tag{5}\]

Here \(\|\gamma\|_{1}=\sum_{j=1}^{N}|\gamma_{j}|\) is the \(L^{1}\)-norm of the vector \(\gamma\). That is, the \(\mathcal{D}\)-extent \(\xi_{\mathcal{D}}(\Psi)\) is the squared \(L^{1}\)-norm of the coefficients, minimized over all decompositions of \(\Psi\) into a finite linear combination of elements of the dictionary \(\mathcal{D}\). As mentioned above, quantities of the form (5) are well-studied in the context of signal-processing. When the dictionary \(\mathcal{D}\) is a finite subset of a Hilbert space \(\mathcal{H}\cong\mathbb{C}^{d}\), the \(\mathcal{D}\)-extent \(\xi_{\mathcal{D}}(\Psi)\) of a state \(\Psi\in\mathcal{H}\) can be expressed as a second-order cone program [27] (see also e.g., [28]), as in Appendix A of Ref. [19]. Second-order cone programs can be solved in time polynomial in \(\max(d,|\mathcal{D}|)\). We are typically interested in cases where \(\mathcal{D}\) contains a basis of \(\mathcal{H}\) (such that every state can indeed be represented as a linear combination of dictionary elements): Here this runtime is at least polynomial in \(d\).
For example, in the case \(\mathcal{D}=\mathsf{STAB}_{n}\) of stabilizer states on \(n\) qubits, this leads to a scaling exponential in \(n^{2}\). Beyond algorithmic considerations related to the evaluation of the extent, the fact that \(\xi_{\mathcal{D}}(\Psi)\) is given by a second-order cone program provides useful analytical insight by convex programming duality. Indeed, this fact has previously been exploited both for showing multiplicativity of the stabilizer extent for states of small dimension [18], as well as to show non-multiplicativity in high dimensions [19]. In Section 6, we also exploit this connection to relate the \(\mathcal{D}\)-fidelity \(F_{\mathcal{D}}(\Psi)\) with the \(\mathcal{D}\)-extent \(\xi_{\mathcal{D}}(\Psi)\).

In contrast, the \(\mathcal{D}\)-extent \(\xi_{\mathcal{D}}(\Psi)\) for an infinite, i.e., continuously parameterized, dictionary \(\mathcal{D}\) poses additional mathematical challenges as an optimization problem. This is the case of interest here as we are considering the dictionary \(\mathcal{D}=\mathcal{G}_{n}\) consisting of all \(n\)-fermion Gaussian states in the following. We call the associated quantity \(\xi_{\mathcal{G}_{n}}(\Psi)\) the (fermionic) Gaussian extent of an \(n\)-fermion state \(\Psi\). Our focus here is on discussing the role of the quantity \(\xi_{\mathcal{G}_{n}}(\Psi)\) in the context of classically simulating fermionic linear optics, and its behavior on tensor products. A detailed discussion of the algorithmic problem of computing \(\xi_{\mathcal{G}_{n}}(\Psi)\) for an arbitrary state \(\Psi\), and finding a corresponding optimal decomposition of \(\Psi\) into a linear combination of Gaussian states, is beyond the scope of this work. We refer to e.g., [29] where semidefinite relaxations are given for the related atomic norm minimization problem in cases where the atomic set (corresponding to the dictionary) has algebraic structure. Similar techniques may be applicable to the fermionic Gaussian extent.

### On the (sub)multiplicativity of the extent

Consider a situation where an operation \(E\not\in\mathcal{E}\) not belonging to the set \(\mathcal{E}\) of efficiently simulable operations is implemented by using a "magic" resource state \(\Psi\not\in\mathcal{D}\). For example, if \(\mathcal{D}=\mathsf{STAB}_{n}\) is the set of stabilizer states, \(\mathcal{E}\) the set of Clifford unitaries and \(\mathcal{M}\) the set of single-qubit Pauli-\(Z\)-measurements, then a non-Clifford gate (such as the \(T\)-gate) can be realized by an (adaptive) Clifford circuit at the cost of consuming a non-Clifford state (such as the state \(|T\rangle\)) [11]. Similar "gadget constructions" exist for fermionic linear optics, where non-Gaussian unitaries are realized by Gaussian unitaries and non-Gaussian states [12, 13]. A natural question arising in this situation is to characterize the cost of simulating the application of two independent magic gates \(E_{1},E_{2}\not\in\mathcal{E}\), each realized by efficiently simulable operations (belonging to \(\mathcal{E}\)) using magic states \(\Psi_{1},\Psi_{2}\). For any reasonable simulation algorithm, we expect the required simulation effort to increase at most multiplicatively. Indeed, this feature is reflected in the submultiplicativity property

\[\xi_{\mathcal{D}_{3}}(\Psi_{1}\otimes\Psi_{2})\leq\xi_{\mathcal{D}_{1}}(\Psi_{1})\xi_{\mathcal{D}_{2}}(\Psi_{2})\qquad\text{ for all}\qquad\Psi_{1}\in\mathcal{H}_{1}\ \text{and}\ \Psi_{2}\in\mathcal{H}_{2} \tag{6}\]

of the \(\mathcal{D}\)-extent.
In Eq. (6), we are considering Hilbert spaces \(\mathcal{H}_{1}\), \(\mathcal{H}_{2}\) and their tensor product \(\mathcal{H}_{3}=\mathcal{H}_{1}\otimes\mathcal{H}_{2}\), as well as dictionaries \(\mathcal{D}_{j}\subset\mathcal{H}_{j}\) for \(j\in[3]\). The submultiplicativity property (6) follows immediately from the definition of the extent if the three dictionaries satisfy the inclusion property

\[\mathcal{D}_{1}\otimes\mathcal{D}_{2}\subset\mathcal{D}_{3}. \tag{7}\]

In particular, this is satisfied e.g., when the dictionary \(\mathcal{D}_{j}=\mathsf{STAB}_{n_{j}}\subset(\mathbb{C}^{2})^{\otimes n_{j}}\) is the set of \(n_{j}\)-qubit stabilizer states for \(j\in[3]\), with \(n_{3}=n_{1}+n_{2}\), or when considering the set of (even) Gaussian states (see below). While the submultiplicativity property (6) is a trivial consequence of Eq. (7), the question of whether or not the stronger multiplicativity property

\[\xi_{\mathcal{D}_{3}}(\Psi_{1}\otimes\Psi_{2})=\xi_{\mathcal{D}_{1}}(\Psi_{1})\xi_{\mathcal{D}_{2}}(\Psi_{2})\qquad\text{ for all }\qquad\Psi_{1}\in\mathcal{H}_{1}\text{ and }\Psi_{2}\in\mathcal{H}_{2} \tag{8}\]

holds for the \(\mathcal{D}\)-extent is a much less trivial problem. If the multiplicativity property (8) is satisfied, then computing the extent of a product state can be broken down into several smaller optimization problems: It suffices to compute the extent of each factor in the tensor product. Furthermore, the classical simulation cost (with typical algorithms) when applying several non-free ("magic") gates constructed by gadgets increases at an exponential rate determined by the individual gates. In contrast, if the extent is not multiplicative (i.e., the equality in (8) is not satisfied for some states \(\Psi_{j}\in\mathcal{H}_{j}\), \(j\in[2]\)), then such a simplification is not possible. More surprisingly, such a violation of multiplicativity implies that the classical simulation cost of applying certain non-free gates can be reduced by treating these jointly instead of individually. We note that in the slightly different context of so-called circuit knitting, similar savings in complexity have been shown to be significant [30].

Previous work established that the stabilizer extent is multiplicative even for multiple factors, that is,

\[\xi_{\mathsf{STAB}_{n_{1}+\cdots+n_{r}}}(\Psi_{1}\otimes\cdots\otimes\Psi_{r})=\prod_{j=1}^{r}\xi_{\mathsf{STAB}_{n_{j}}}(\Psi_{j})\qquad\text{ for all }\qquad\Psi_{j}\in(\mathbb{C}^{2})^{\otimes n_{j}},j\in[r]\]

if the factors are single-qubit, 2- or 3-qubit states, i.e., \(n_{j}\in[3]\), see Ref. [18]. An example is the stabilizer extent of a tensor product of \(t\) copies of the magic (single-qubit) state \(\left|T\right\rangle=(\left|0\right\rangle+e^{i\pi/4}\left|1\right\rangle)/\sqrt{2}\) associated with the \(T\)-gate. Multiplicativity for qubit states gives \(\xi_{\mathsf{STAB}_{t}}(\left|T\right\rangle^{\otimes t})=\xi_{\mathsf{STAB}_{1}}(\left|T\right\rangle)^{t}\), where \(\xi_{\mathsf{STAB}_{1}}(\left|T\right\rangle)\) is known to be approximately 1.17 [17]. This translates to an overhead exponential in \(t\) in the runtime of stabilizer computations supplemented with \(t\) \(T\)-gates. Surprisingly, the stabilizer extent has been shown not to be multiplicative (for all pairs of states) in high dimensions [19].
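The \(|T\rangle\) example can be made explicit in a few lines of numpy: the two-term decomposition of \(|T\rangle\) over the stabilizer states \(|+\rangle\) and \((|0\rangle+i|1\rangle)/\sqrt{2}\) has \(\|\gamma\|_{1}^{2}=\sec^{2}(\pi/8)\approx 1.1716\), matching the \(\approx 1.17\) value just quoted (we do not verify optimality of this particular decomposition here), and multiplicativity turns this into the stated exponential overhead \(\xi^{t}\).

```python
import numpy as np

plus   = np.array([1, 1], dtype=complex) / np.sqrt(2)     # |+>
plus_i = np.array([1, 1j], dtype=complex) / np.sqrt(2)    # (|0> + i|1>)/sqrt(2)
T      = np.array([1, np.exp(1j * np.pi / 4)]) / np.sqrt(2)

# Solve gamma_1 |+> + gamma_2 |+i> = |T> for the two coefficients.
gamma = np.linalg.solve(np.column_stack([plus, plus_i]), T)
xi = np.sum(np.abs(gamma)) ** 2
print(xi)                        # 1.1716... = sec^2(pi/8)
for t in (10, 100):
    print(t, xi ** t)            # multiplicative overhead xi^t for t T-gates
```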
For (pure) Gaussian states, the Gaussian extent of a 1-, 2- and 3-mode pure fermionic state is trivially one, because any such state is Gaussian [31] and is thus an element of the dictionary. Hence the Gaussian extent is (trivially) multiplicative if the factors are 1-, 2- or 3-mode fermionic states. The simplest non-trivial case is that of \(n=4\) fermionic modes in each factor.

### Our contribution

Our results concern fermionic linear optics, the computational model introduced in Section 1.1.2 described by the triple \((\mathcal{G}_{n},\mathcal{E}_{\mathsf{Gauss}},\mathcal{M}_{\text{number}})\) of fermionic Gaussian pure states on \(n\) fermions, Gaussian unitary operations and number state measurements. We propose classical simulation algorithms for the case where the initial state \(\Psi\in\mathcal{H}_{n}\) is an arbitrary pure state in the \(n\)-fermion Hilbert space \(\mathcal{H}_{n}\) (instead of belonging to the set \(\mathcal{G}_{n}\subset\mathcal{H}_{n}\) of Gaussian states). Our results are two-fold:

**New simulation algorithms.** We give algorithms realizing the functionalities described in Section 1.1 exactly for the triple \((\mathcal{H}_{n},\mathcal{E}_{\text{Gauss}},\mathcal{M}_{\text{number}})\). This immediately gives rise to efficient algorithms for weak and strong simulation of circuits with non-Gaussian initial states. The corresponding runtimes of these building blocks, which we refer to as \((\chi\mathsf{evolve},\chi\mathsf{measureprob},\chi\mathsf{postmeasure})\), depend on the Gaussian rank \(\chi=\chi_{\mathcal{G}_{n}}(\Psi)\) of the initial state \(\Psi\) and are summarized in Table 3. Key to the construction of these algorithms is a novel way of keeping track of relative phases in superpositions of Gaussian states, see Section 3. We argue that our techniques can be applied more generally to adapt simulation procedures developed, e.g., for Clifford circuits, to the setting of fermionic linear optics. In order to illustrate this procedure, we apply it to the efficient (approximate) classical simulation algorithms of [17, 18]. In this way, we obtain new approximate simulation algorithms with runtimes depending linearly on the fermionic Gaussian extent \(\xi=\xi_{\mathcal{G}_{n}}(\Psi)\) of the initial state \(\Psi\), see Table 4 for a summary of the corresponding runtimes. They depend inverse-polynomially on parameters \((\delta,\epsilon,p_{f})\) determining the accuracy of the simulation. The error \(\delta\) describes a certain "offset", i.e., a systematic error: Instead of simulating the dynamics of the circuit with the (ideal) initial state \(\Psi\), the simulation algorithm emulates the dynamics when using a different starting state \(\tilde{\Psi}\) which is \(\delta\)-close to \(\Psi\), i.e., which satisfies \(\|\Psi-\tilde{\Psi}\|\leq\delta\). The algorithm \(\mathsf{approxevolve}\) computes evolution exactly on the state used in the simulation (i.e., it preserves the approximation error \(\delta\) relative to the ideal initial state).
In contrast, the procedure \(\mathsf{approxmeasureprob}\) can fail with probability \(p_{f}\), and both \(\mathsf{approxmeasureprob}\) and \(\mathsf{approxpostmeasure}\) introduce an additional error quantified by \(\epsilon\) (if \(\mathsf{approxmeasureprob}\) succeeds): Instead of returning the probability \(p(0)\) of obtaining zero occupation number when measuring the state, the output of \(\mathsf{approxmeasureprob}\) is a value \(\tilde{p}\) which satisfies \(|\tilde{p}-p(0)|\leq O(\epsilon)\). Similarly, the output of \(\mathsf{approxpostmeasure}\) is a description of a state that is \(O(\epsilon)\)-close to the actual post-measurement state. These parameters and runtimes are analogous to those obtained in [18] for simulating Clifford circuits with non-stabilizer initial states. In particular, they imply that a circuit with initial state \(\Psi\) involving \(T\) Gaussian unitaries and \(L\) occupation number measurements can be weakly simulated in time \(\tilde{O}(\epsilon^{-2}\xi)\), such that the sampled measurement outcomes are \(\epsilon\)-close in \(L^{1}\)-distance to the ideal (joint) output distribution of all measurements. Here the notation \(\tilde{O}(\cdot)\) suppresses a factor polynomial in \(n,T,L\) and \(\log(\epsilon^{-1})\), see [17] for details.

\begin{table} \begin{tabular}{c|c} \hline algorithm & time \\ \hline \hline \(\mathsf{approxevolve}\) & \(O(\xi\delta^{-2}n^{3})\) \\ \hline \(\mathsf{approxmeasureprob}\) & \(O(\xi\delta^{-2}\epsilon^{-2}p_{f}^{-1}n^{7/2})\) \\ \hline \(\mathsf{approxpostmeasure}\) & \(O(\xi\delta^{-2}n^{3})\) \\ \hline \end{tabular} \end{table} Table 4: Runtimes of building blocks \(\mathsf{approxevolve}\), \(\mathsf{approxmeasureprob}\), \(\mathsf{approxpostmeasure}\) for approximate simulation of \(n\)-mode fermionic linear optics circuits with a non-Gaussian initial state \(\Psi\) of Gaussian extent \(\xi=\xi_{\mathcal{G}_{n}}(\Psi)\). The parameters \((\epsilon,\delta,p_{f})\) determine the quality of the approximation.

**On the multiplicativity of the Gaussian extent and the Gaussian fidelity.** Motivated by the relevance of the Gaussian extent \(\xi_{\mathcal{G}_{n}}(\Psi)\) for characterizing the complexity of classical simulation, we study multiplicativity properties of both the \(\mathcal{D}\)-fidelity \(F_{\mathcal{D}}(\Psi)\) as well as the \(\mathcal{D}\)-extent \(\xi_{\mathcal{D}}(\Psi)\) for a general infinite, i.e., continuously parameterized, dictionary \(\mathcal{D}\). We show that multiplicativity of the \(\mathcal{D}\)-fidelity is closely related to that of the \(\mathcal{D}\)-extent: For a general family of (discrete or continuous) dictionaries \(\mathcal{D}_{j}\subset\mathcal{H}_{j}\) for \(j\in[3]\) with the property

\[\mathcal{D}_{1}\otimes\mathcal{D}_{2}\subset\mathcal{D}_{3}\,\]

multiplicativity of the \(\mathcal{D}\)-fidelity, i.e.,

\[F_{\mathcal{D}_{3}}\left(\Psi_{1}\otimes\Psi_{2}\right)=F_{\mathcal{D}_{1}}\left(\Psi_{1}\right)F_{\mathcal{D}_{2}}\left(\Psi_{2}\right)\quad\text{ for all }\quad\Psi_{j}\in\mathcal{H}_{j}\text{ for }j\in[2]\]

implies multiplicativity of the \(\mathcal{D}\)-extent, i.e.

\[\xi_{\mathcal{D}_{3}}\left(\Psi_{1}\otimes\Psi_{2}\right)=\xi_{\mathcal{D}_{1}}\left(\Psi_{1}\right)\xi_{\mathcal{D}_{2}}\left(\Psi_{2}\right)\quad\text{ for all }\quad\Psi_{j}\in\mathcal{H}_{j}\text{ for }j\in[2]\.\]

We note that for stabilizer states \(\mathcal{D}=\mathsf{STAB}_{n}\), a similar route was followed in Ref.
[18] to show multiplicativity of the stabilizer extent \(\xi_{\mathsf{STAB}_{n}}\) with respect to the tensor product of 1-, 2- and 3-qubit states. Our main contribution is an extension of this connection to the case of infinite dictionaries by the use of nets. We expect this connection to be helpful in proving or disproving multiplicativity of the extent more generally.

We subsequently make use of this connection to the Gaussian fidelity to show that the fermionic Gaussian extent is multiplicative for the tensor product of any two 4-mode fermionic states with positive parity, i.e.,

\[\xi_{\mathcal{G}_{8}}(\Psi_{1}\otimes\Psi_{2})=\xi_{\mathcal{G}_{4}}(\Psi_{1})\xi_{\mathcal{G}_{4}}(\Psi_{2})\qquad\text{ for all }\qquad\Psi_{1},\Psi_{2}\in\mathcal{H}_{+}^{4}. \tag{9}\]

Here \(\mathcal{H}_{+}^{4}\) denotes the set of 4-mode fermionic states with positive parity. The proof of (9) relies on the Schmidt decomposition of fermionic Gaussian states and specific properties of 4-mode (positive parity) fermionic states. The result (9) gives the first non-trivial example of multiplicativity of the Gaussian extent. Multiplicativity for more general cases such as that of multiple 4-mode fermionic factors remains an open problem.

### Prior and related work

The starting point of our work is the fact that fermionic Gaussian operations acting on Gaussian states can be efficiently simulated classically, as shown in pioneering work by Terhal and DiVincenzo [9] and Knill [10]. The model and its simulability are closely related to that of matchgate computations introduced by Valiant [8], where so-called matchgates correspond to a certain subset of Gaussian operations (see also [32]). In analogy to the fermionic context, the efficient simulability of bosonic Gaussian circuits was recognized at around the same time [33, 34]. In an effort to identify commonalities between simulation algorithms for a variety of quantum computational models, Somma et al. [35] provided a unifying Lie algebraic treatment which gives a counterpart to the Gottesman-Knill theorem for the simulability of Clifford circuits [7, 36].

While matchgate circuits, fermionic and bosonic linear optics, and Clifford circuits provide rich classes of efficiently simulable models for the study of many-body dynamics associated with quantum circuits, it is desirable to extend the applicability of such simulation methods. There has been significant interest in this problem resulting in a range of approaches. We only briefly discuss these here to give an overview, without attempting to give an exhaustive treatment.

A first prominent approach is the use of quasi-probability distributions to describe states and corresponding dynamics. Such a description typically applies to a subset of density operators: For example, it has been shown in [37, 38] that circuits applying Gaussian operations to initial states with a positive Wigner function (a strict superset of the set of Gaussian states) can be simulated efficiently. Negativity of the Wigner function (both in the continuous-variable as well as the qubit context) thus serves as a resource for quantum computation, see also e.g., [39, 40]. It is also closely related to contextuality, see [41], and thus connects contextuality to the complexity of classical simulation [42, 43]. Not unlike the notorious sign problem in quantum Monte-Carlo methods applied in many-body physics, the runtimes of corresponding (randomized) simulation algorithms scale with certain measures of "negativity" of the initial state.
The concept of a convex-Gaussian state was introduced and studied in [31] to extend the range of fermionic linear optics simulation methods. This is related to quasi-probability representations in the sense that initial states of a particular form are shown to lead to efficient simulability. Here a density operator is called convex-Gaussian if it is a convex combination of fermionic Gaussian states. The utility of this concept was illustrated in [31] by showing a converse to the fault-tolerance threshold theorem: Sufficiently noisy quantum circuits can be simulated classically because the corresponding states turn out to be convex-Gaussian. A detailed characterization of convex-Gaussianity is necessary to translate this into explicit (numerical) threshold estimates. An infinite hierarchy of semidefinite programs was constructed in [31] to detect convex-Gaussianity, and this was subsequently shown to be complete [44]. This hierarchy also provides a way of determining whether a state is close to being convex-Gaussian [44].

A second important class of approaches are rank-based methods. Here the non-free resource (either a state or an operation) is decomposed into a linear combination of free (i.e., efficiently simulable) resources. Our work follows this approach, which is detailed in Section 4.1 for pure states. For Clifford computations, this involves writing general states as superpositions of stabilizer states. The development of such simulators was pioneered by Bravyi, Smith, and Smolin [16], with subsequent work dealing with approximate stabilizer decompositions [17]. The concept of low-rank (approximate) decompositions of quantum states or operations into more easily treatable basic objects appears in a variety of forms: For example, the work [18] also discusses - in addition to state vector decompositions - decompositions of non-Clifford unitaries into sums of Clifford operations. In Ref. [45], a similar approach was taken to approximately decompose non-Gaussian fermionic unitary operations into linear combinations of Gaussian channels. In all these cases, the main challenge is to identify optimal (or simply good) decompositions (e.g., in terms of rank or an extent-like quantity).

In more recent work, Moherla, Lao and Browne [46] study the problem of simulating matchgate circuits using universality-enabling gates. They provide a simulation algorithm and associated runtime estimates for estimating expectation values of single-qubit observables in output states obtained by applying a matchgate circuit to a product state. This problem is closely related to the problem considered in this work, as matchgate circuits efficiently describe evolution under a quadratic fermionic Hamiltonian. The approach taken in [46] is quite different from ours, however: The classical simulator keeps track of the density operator by tracking its coefficients in the Pauli (operator) basis, using the structure of corresponding linear maps associated with matchgates. The effect of a specific set of universality-enabling gates is then analyzed in detail. This extends the sparse simulation method for matchgate circuits to circuits augmented with such gates. The runtime estimates of [46] apply to certain universality-providing gates. In contrast, our constructions can in principle also be applied to (gadget-based) constructions of arbitrary gates and provide gate-specific information.
For gates close to the identity, for example, this may provide additional resource savings (in terms of, e.g., the rate of growth for several uses of such a gate). Near the completion of our work, we also became aware of concurrent independent work on simulation of fermionic circuits with non-Gaussian operations, see the papers [47, 48], which were posted to the arXiv simultaneously with our work.

### Outline

The paper is structured as follows. In Section 2, we give some background on fermionic linear optics, reviewing fermionic Gaussian operations and states, inner product formulas for Gaussian states and tensor products of fermionic systems. In Sections 3 and 4 we describe classical algorithms for simulation of Gaussian and non-Gaussian fermionic circuits, respectively. Specifically, in Section 3 we provide an algorithm \(\mathsf{overlap}\) for computing the overlap of two Gaussian states, an algorithm \(\mathsf{evolve}\) to simulate unitary evolution of a Gaussian state, and algorithms \(\mathsf{measureprob}\) and \(\mathsf{postmeasure}\) to simulate measurements of occupation numbers. All these algorithms keep track of the phase of the state. In Section 4 we extend the simulation described in Section 3 to allow for non-Gaussian input states. The remainder of this work is focused on the multiplicativity of the fermionic Gaussian extent. In Section 5, we prove the multiplicativity of the fermionic Gaussian fidelity for the tensor product of any two 4-mode fermionic states with positive parity. Section 6 is devoted to showing that the multiplicativity of the \(\mathcal{D}\)-fidelity implies multiplicativity of the \(\mathcal{D}\)-extent for general (finite and infinite, i.e., continuously parameterized) dictionaries. Finally, the results from Sections 5 and 6 are used to prove the main result in Section 7, namely the multiplicativity of the fermionic Gaussian extent for the tensor product of any two 4-mode fermionic states with positive parity.

## 2 Background

In this section, we give some background on fermionic linear optics to fix notation.

### Dirac and Majorana operators

Throughout, we consider fermionic systems composed of \(n\) modes, with (Dirac) creation- and annihilation operators \(a_{j}^{\dagger},a_{j}\), \(j\in[n]\), satisfying the canonical anticommutation relations

\[\{a_{j},a_{k}^{\dagger}\}=\delta_{j,k}I\qquad\text{and}\qquad\{a_{j},a_{k}\}=\{a_{j}^{\dagger},a_{k}^{\dagger}\}=0\qquad\text{for all}\qquad j,k\in[n]\.\]

The fermionic vacuum state \(\ket{0_{F}}\) is the (up to a phase) unique unit vector satisfying \(a_{j}\ket{0_{F}}=0\) for all \(j\in[n]\). For \(x=(x_{1},\ldots,x_{n})\in\{0,1\}^{n}\), we define the number state \(\ket{x}\) by

\[\ket{x}=(a_{1}^{\dagger})^{x_{1}}\cdots(a_{n}^{\dagger})^{x_{n}}\ket{0_{F}}. \tag{10}\]

The states \(\{\ket{x}\}_{x\in\{0,1\}^{n}}\) form an orthonormal basis of the underlying Hilbert space \(\mathcal{H}^{n}\cong(\mathbb{C}^{2})^{\otimes n}\). A state \(\ket{x}\) is a simultaneous eigenstate of the occupation number operators \(a_{j}^{\dagger}a_{j}\), \(j\in[n]\), where \(x_{j}\) is the eigenvalue of \(a_{j}^{\dagger}a_{j}\).
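These relations are easy to check numerically in a concrete matrix representation. The following sketch uses the Jordan-Wigner representation \(a_{j}=Z^{\otimes(j-1)}\otimes|0\rangle\langle 1|\otimes I^{\otimes(n-j)}\) — a choice made here purely for illustration, as the text does not fix a representation — and verifies the canonical anticommutation relations as well as the definition (10) of number states.

```python
import numpy as np
from functools import reduce

I2, Z = np.eye(2), np.diag([1.0, -1.0])
SM = np.array([[0, 1], [0, 0]], dtype=complex)        # |0><1|

def a(j, n):
    """Annihilation operator a_j in the Jordan-Wigner representation."""
    return reduce(np.kron, [Z] * (j - 1) + [SM] + [I2] * (n - j))

n = 3
ops = [a(j, n) for j in range(1, n + 1)]
Id = np.eye(2 ** n)

# Canonical anticommutation relations.
for j in range(n):
    for k in range(n):
        adk = ops[k].conj().T
        assert np.allclose(ops[j] @ adk + adk @ ops[j], (j == k) * Id)
        assert np.allclose(ops[j] @ ops[k] + ops[k] @ ops[j], 0)

# Number states, Eq. (10): |110> = a_1^dag a_2^dag |0_F>.
vac = Id[0]                                           # |0_F> = |000>
x110 = ops[0].conj().T @ (ops[1].conj().T @ vac)
assert np.allclose(x110, Id[0b110])
```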
For later reference, we note that

\[a_{j}\ket{x}=(-1)^{\eta_{j}(x)}x_{j}\ket{x\oplus e_{j}}\qquad\text{and}\qquad a_{j}^{\dagger}\ket{x}=(-1)^{\eta_{j}(x)}\overline{x_{j}}\ket{x\oplus e_{j}}\, \tag{11}\]

with the definition

\[\eta_{j}(x)=\sum_{k=1}^{j-1}x_{k}\qquad\text{ for }\qquad j\in[n]\, \tag{12}\]

where we write \(\overline{0}=1\) and \(\overline{1}=0\), where \(e_{j}\in\{0,1\}^{n}\) is given by \((e_{j})_{k}=\delta_{j,k}\) for \(k\in[n]\), and where \(\oplus\) denotes bitwise addition modulo 2. It will be convenient to work with Majorana operators \(\{c_{j}\}_{j=1}^{2n}\) defined by

\[c_{2j-1}=a_{j}+a_{j}^{\dagger}\qquad\text{and}\qquad c_{2j}=i\left(a_{j}-a_{j}^{\dagger}\right). \tag{13}\]

Majorana operators are self-adjoint and satisfy the relations

\[\{c_{j},c_{k}\}=2\delta_{jk}I\qquad\text{and}\qquad c_{j}^{2}=I\qquad\text{ for}\qquad j,k\in[2n]\.\]

For \(\alpha\in\{0,1\}^{2n}\), we call the self-adjoint operator

\[c(\alpha)=i^{|\alpha|\cdot(|\alpha|-1)/2}c_{1}^{\alpha_{1}}\cdots c_{2n}^{\alpha_{2n}}\]

a Majorana monomial. Here \(|\alpha|=\sum_{j=1}^{2n}\alpha_{j}\) denotes the Hamming weight of \(\alpha\in\{0,1\}^{2n}\). The set \(\{c(\alpha)\}_{\alpha\in\{0,1\}^{2n}}\) constitutes an orthonormal basis of the real vector space of self-adjoint operators on \(\mathcal{H}^{n}\) equipped with the (normalized) Hilbert-Schmidt inner product \(\langle A,B\rangle=2^{-n}\operatorname{tr}(A^{\dagger}B)\). The Majorana monomials satisfy

\[c(x)c(y)=(-1)^{|x|\cdot|y|+x\cdot y}c(y)c(x)\qquad\text{with}\qquad x,y\in\{0,1\}^{2n}\,\]

where \(x\cdot y=\sum_{j=1}^{2n}x_{j}y_{j}\). In particular, if either \(x\) or \(y\) have even Hamming weight then \(c(x)c(y)=(-1)^{x\cdot y}c(y)c(x)\). In the following, we will denote the set of even- and odd-weight \(2n\)-bit strings by \(\{0,1\}_{+}^{2n}\) and \(\{0,1\}_{-}^{2n}\), respectively. The parity operator

\[P=i^{n}c_{1}c_{2}\cdots c_{2n}\]

is the Majorana monomial associated with \(\alpha=1^{2n}=(1,\ldots,1)\). The parity operator commutes with every even-weight Majorana monomial and anti-commutes with every odd-weight Majorana monomial, i.e., we have

\[Pc(\alpha)=(-1)^{|\alpha|}c(\alpha)P\qquad\text{ for every}\qquad\alpha\in\{0,1\}^{2n}. \tag{14}\]

The Hilbert space \(\mathcal{H}^{n}=\mathcal{H}^{n}_{+}\oplus\mathcal{H}^{n}_{-}\) associated with \(n\) fermions decomposes into a direct sum of positive- and negative-parity vectors

\[\mathcal{H}^{n}_{+}=\{\Psi\in\mathcal{H}^{n}\ |\ P\Psi=\Psi\}\qquad\text{ and }\qquad\mathcal{H}^{n}_{-}=\{\Psi\in\mathcal{H}^{n}\ |\ P\Psi=-\Psi\}\.\]

We call a state \(\Psi\in\mathcal{H}^{n}\) of definite parity if either \(\Psi\in\mathcal{H}^{n}_{+}\) or \(\Psi\in\mathcal{H}^{n}_{-}\). An element \(X\in\mathcal{B}(\mathcal{H}^{n})\) belonging to the set \(\mathcal{B}(\mathcal{H}^{n})\) of linear operators on \(\mathcal{H}^{n}\) is called even (odd) if it is a linear combination of Majorana monomials \(c(\alpha)\) with \(\alpha\in\{0,1\}^{2n}\) of even (odd) weight. An immediate consequence of these definitions is that a state \(\Psi\in\mathcal{H}^{n}\) has definite parity if and only if \(|\Psi\rangle\langle\Psi|\) is even (see e.g., [49, Proposition 1] for a proof).

### Gaussian unitaries

A unitary operator \(U\) on \(\mathcal{H}^{n}\) is Gaussian if and only if it maps each Majorana operator \(c_{j}\) to a linear combination of Majorana operators, i.e.,

\[Uc_{j}U^{\dagger}=\sum_{k=1}^{2n}R_{jk}c_{k}\, \tag{15}\]

where \(R\in O(2n)\) is a real orthogonal matrix.
Ignoring overall phases, the group of Gaussian unitary operators is generated by operators of the form \[U_{j,k}(\vartheta)=\exp(\vartheta/2c_{j}c_{k})\qquad\text{ with }\qquad \vartheta\in[0,2\pi)\text{ and }j<k\in[2n]\] and by operators \[U_{j}=c_{j}\qquad\text{ with }\qquad j\in[2n]\.\] The operator \(U_{j,k}(\vartheta)\) implements the rotation \[U_{j,k}(\vartheta)c_{j}U_{j,k}(\vartheta)^{\dagger} =\cos(\vartheta)c_{j}-\sin(\vartheta)c_{k} \tag{16}\] \[U_{j,k}(\vartheta)c_{k}U_{j,k}(\vartheta)^{\dagger} =\sin(\vartheta)c_{j}+\cos(\vartheta)c_{k}\] \[U_{j,k}(\vartheta)c_{\ell}U_{j,k}(\vartheta)^{\dagger} =c_{\ell}\qquad\qquad\qquad\text{ for }\qquad\ell\not\in\{j,k\}\.\] The operator \(U_{j}=c_{j}\) leaves \(c_{j}\) invariant and flips the sign of each \(c_{k}\) with \(k\neq j\), i.e., it implements the reflection \[U_{j}c_{j}U_{j}^{\dagger} =c_{j} \tag{17}\] \[U_{j}c_{k}U_{j}^{\dagger} =-c_{k}\qquad\text{ for }\qquad k\neq j\.\] We note that by relation (14), every generator \(U_{j,k}(\vartheta)\) is parity-preserving, whereas every generator \(U_{j}\) reverses the parity, i.e., \[U_{j,k}(\vartheta)PU_{j,k}(\vartheta)^{\dagger} =P\qquad\quad\text{ for all }\qquad k>j\in[n],\vartheta\in[0,2\pi)\, \tag{18}\] \[U_{j}PU_{j}^{\dagger} =-P\qquad\text{ for all }\qquad j\in[n]\.\] Every orthogonal matrix \(R\) gives rise to a Gaussian unitary \(U_{R}\) satisfying (15). The unitary \(U_{R}\) is unique up to a global phase, and \(R\mapsto U_{R}\) is called the metaplectic representation. We can fix the global phase of \(U_{R}\) uniquely, e.g., by the following procedure. Every element \(R\in O(2n)\) can be uniquely decomposed into a product \[R=S_{0}S_{1}\cdots S_{L} \tag{19}\] with \(L\leq\frac{2n(2n-1)}{2}\) and where \[S_{0}=\begin{cases}I&\text{ if }R\in SO(2n)\\ R_{1}&\text{ otherwise }\,\end{cases}\] where for each \(r\in[L]\), the matrix \(S_{r}\) is of the form \[S_{r}=R_{j_{r},k_{r}}(\vartheta_{r})\qquad\text{ for some }\qquad j_{r}<k_{r}\in[2n], \vartheta_{r}\in[0,2\pi)\.\] Here \(R_{1}\in O(2n)\) is associated with the unitary \(U_{1}\) by Eq. (17), whereas \(R_{j,k}(\vartheta)\in SO(2n)\) is associated with \(U_{j,k}(\vartheta)\) according to Eq. (16). We note that \(R_{j,k}(\vartheta)\in SO(2n)\) is a so-called Givens rotation, introduced in Ref. [50], and a decomposition of the form (19) can be found by a deterministic algorithm with runtime \(O(n^{3})\) (see e.g., Section 5.2.3 in Ref. [51]). In particular, application of this algorithm defines a function taking an arbitrary element \(R\in O(2n)\) to a unique product of the form (19). Given the (unique) decomposition (19) of \(R\in O(2n)\), we can then define \(U_{R}\) as the product \[U_{R}=U_{1}U_{j_{1},k_{1}}(\vartheta_{1})\cdots U_{j_{L},k_{L}}(\vartheta_{L})\.\] Overall, this defines a function \(R\mapsto U_{R}\) from \(O(2n)\) to the set of Gaussian unitaries, fixing the phase ambiguity. Throughout the remainder of this work, \(U_{R}\) will denote the Gaussian unitary uniquely fixed by \(R\). ### Fermionic Gaussian (pure) states The set of pure fermionic Gaussian states is the orbit of the vacuum state \(|0_{F}\rangle\) under the action of \(O(2n)\) defined by the metaplectic representation, i.e., fermionic Gaussian states are of the form \(U_{R}\,|0_{F}\rangle\) with \(U_{R}\) a fermionic Gaussian unitary. In more detail, every fermionic Gaussian state \(e^{i\theta}U_{R}\,|0_{F}\rangle\) is uniquely specified by a pair \((\theta,R)\) with \(\theta\in[0,2\pi)\) and \(R\in O(2n)\). 
We will denote the set of all fermionic Gaussian states by

\[\mathcal{G}_{n}=\left\{e^{i\theta}U_{R}\,|0_{F}\rangle\;\mid\theta\in[0,2\pi),R\in O(2n)\right\}\.\]

By Eq. (18) and because \(P\,|0_{F}\rangle=|0_{F}\rangle\), every pure fermionic Gaussian state \(\Psi\) has a fixed parity, i.e., it is an eigenvector of the parity operator \(P\). This defines a disjoint partition \(\mathcal{G}_{n}=\mathcal{G}_{n}^{+}\cup\mathcal{G}_{n}^{-}\) of the set of fermionic Gaussian states into positive- and negative-parity states.

### Gaussianity condition

In Ref. [52] Bravyi established a necessary and sufficient condition to determine if a (possibly mixed) state \(\rho\in\mathcal{B}(\mathcal{H}^{n})\) is Gaussian (see Theorem 1 therein). Here Gaussianity of a density operator \(\rho\) is defined by the condition that \(\rho\) has the form

\[\rho=K\exp\left(i\sum_{j,k=1}^{2n}A_{j,k}c_{j}c_{k}\right) \tag{20}\]

for an antisymmetric matrix \(A=-A^{T}\in\mathsf{Mat}_{2n\times 2n}(\mathbb{R})\) and a constant \(K>0\). We note that a pure state \(\Psi\in\mathcal{H}^{n}\) is Gaussian if and only if the associated density operator \(\rho=|\Psi\rangle\langle\Psi|\) is Gaussian. (This follows from the fact that \(|0_{F}\rangle\langle 0_{F}|=\frac{1}{2^{n/2}}\exp\left(i\frac{\pi}{4}\sum_{j=1}^{n}c_{2j-1}c_{2j}\right)\). Indeed, if \(\rho=|\Psi\rangle\langle\Psi|\) is a rank-one projection of the form (20), then it follows from Williamson's normal form for antisymmetric matrices that there is \(R\in O(2n)\) such that \(RAR^{T}=\bigoplus_{j=1}^{n}\left(\begin{smallmatrix}0&1\\ -1&0\end{smallmatrix}\right)\). This implies that \(U_{R}|\Psi\rangle\langle\Psi|U_{R}^{\dagger}=|0_{F}\rangle\langle 0_{F}|\) and thus \(|\Psi\rangle=U_{R}^{\dagger}\,|0_{F}\rangle\). Conversely, for a Gaussian state \(|\Psi\rangle=e^{i\theta}U_{R}\,|0_{F}\rangle\in\mathcal{G}_{n}\), we can use the expression for \(|0_{F}\rangle\langle 0_{F}|\) to argue that \(|\Psi\rangle\langle\Psi|\) is of the form (20), i.e., Gaussian.) The characterization of Gaussian density operators established in [52] is the following; here and below, we set \(\Lambda=\sum_{j=1}^{2n}c_{j}\otimes c_{j}\).

**Theorem 2.1** (Theorem 1 in [52]).: _An even state \(\rho\in\mathcal{B}(\mathcal{H}^{n})\) is Gaussian if and only if \([\Lambda,\rho\otimes\rho]=0\)._

Based on this characterization [52], the following was shown in [31].

**Lemma 2.2** (Corollary 1 in [31]).: _Let \(\rho\in\mathcal{B}(\mathcal{H}^{n})\) be an even state. Then \(\rho\) is a Gaussian pure state if and only if \(\Lambda\left(\rho\otimes\rho\right)=0\)._

In the following, we only use the statement of Lemma 2.2 applied to pure states in order to distinguish between Gaussian and non-Gaussian pure states. We formulate this as follows:

**Lemma 2.3**.: _Let \(\Psi\in\mathcal{H}^{n}\) be a pure state with fixed parity. Then \(\Psi\) is Gaussian if and only if_

\[\Lambda(|\Psi\rangle\otimes|\Psi\rangle)=0\.\]

Proof.: This follows immediately from the equivalence of the concepts of Gaussianity of pure states (vectors) and density operators because the density operator \(|\Psi\rangle\langle\Psi|\) is even for any fixed-parity state \(\Psi\).

We note that there is an elegant representation-theoretic interpretation of this characterization of Gaussianity [53]. It is derived from the fact that Gaussian states are the orbit of the vacuum state \(|0_{F}\rangle\) (a highest weight state) under the action of the metaplectic group, cf. [54, Section IV] and [55].
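As an aside, the criterion of Lemma 2.3 can be tested directly in a matrix representation. The sketch below (again using an illustrative Jordan-Wigner representation of the Majorana operators) confirms \(\Lambda(\Psi\otimes\Psi)=0\) for the Gaussian vacuum on 4 modes, and detects the non-Gaussianity of the even state \((|0000\rangle+|1111\rangle)/\sqrt{2}\), whose covariance matrix vanishes and which therefore cannot be a pure Gaussian state.

```python
import numpy as np
from functools import reduce

def majoranas(n):
    """Majorana operators c_1,...,c_2n of Eq. (13) in a Jordan-Wigner
    representation (an illustrative choice; the criterion is basis-independent)."""
    I2, Z = np.eye(2), np.diag([1.0, -1.0])
    SM = np.array([[0, 1], [0, 0]], dtype=complex)
    cs = []
    for j in range(n):
        aj = reduce(np.kron, [Z] * j + [SM] + [I2] * (n - j - 1))
        cs += [aj + aj.conj().T, 1j * (aj - aj.conj().T)]
    return cs

def is_gaussian_pure(psi):
    """Lemma 2.3: a fixed-parity pure state psi is Gaussian iff
    Lambda (psi (x) psi) = 0 with Lambda = sum_j c_j (x) c_j."""
    n = int(np.log2(len(psi)))
    v = np.kron(psi, psi)
    return np.allclose(sum(np.kron(c, c) @ v for c in majoranas(n)), 0)

d = 2 ** 4
vac = np.zeros(d, dtype=complex); vac[0] = 1             # |0000>
ghz = np.zeros(d, dtype=complex); ghz[0] = ghz[-1] = 1 / np.sqrt(2)
print(is_gaussian_pure(vac), is_gaussian_pure(ghz))      # True False
```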
We use a version of this reformulation for 4 fermions, first obtained in [56]; see Lemma 5.1 below.

### Covariance matrices, Gaussian states and Wick's theorem

The covariance matrix \(\Gamma=\Gamma(\Psi)\in\mathsf{Mat}_{2n\times 2n}(\mathbb{R})\) of a state \(\Psi\in\mathcal{H}^{n}\) is the antisymmetric matrix with entries

\[\Gamma_{j,k}(\Psi)=\begin{cases}\langle\Psi,ic_{j}c_{k}\Psi\rangle&\quad\text{ for }j\neq k\\ 0&\quad\text{for }j=k\end{cases}\]

with \(j,k\in[2n]\). It satisfies \(\Gamma\Gamma^{T}\leq I\) for any state \(\Psi\in\mathcal{H}^{n}\), with equality \(\Gamma\Gamma^{T}=I\) for pure Gaussian states. The expectation value of a Hermitian operator with respect to a Gaussian state \(\Psi\) is fully determined by its covariance matrix \(\Gamma=\Gamma(\Psi)\). This is because the expectation value of a Majorana monomial \(c(\alpha)\), \(\alpha\in\{0,1\}^{2n}\), is given by Wick's theorem

\[\langle\Psi,c(\alpha)\Psi\rangle=\begin{cases}\mathsf{Pf}(\Gamma[\alpha])&\text{ if }|\alpha|\text{ is even}\\ 0&\text{ otherwise}\end{cases}. \tag{21}\]

Here \(\Gamma[\alpha]\in\mathsf{Mat}_{|\alpha|\times|\alpha|}(\mathbb{R})\) is the submatrix of \(\Gamma\) which includes all rows and columns with index \(j\in[2n]\) such that \(\alpha_{j}=1\). Evaluating such expectation values, i.e., computing Pfaffians of \(|\alpha|\times|\alpha|\)-matrices (with \(|\alpha|\) even), takes time \(O(|\alpha|^{3})\). (Here and below we use the number of elementary arithmetic operations to quantify the time complexity of algorithms.)

### Inner product formulas for Gaussian states

The modulus of the inner product of two Gaussian states \(\Phi_{1},\Phi_{2}\) with identical parity \(\sigma\in\{\pm 1\}\) and covariance matrices \(\Gamma_{1},\Gamma_{2}\) is given by the expression [57]

\[|\langle\Phi_{1},\Phi_{2}\rangle|^{2}=\sigma 2^{-n}\mathsf{Pf}(\Gamma_{1}+\Gamma_{2}). \tag{22}\]

For three Gaussian states \(\Phi_{0},\Phi_{1},\Phi_{2}\), the expression \(\langle\Phi_{0},\Phi_{1}\rangle\cdot\langle\Phi_{1},\Phi_{2}\rangle\cdot\langle\Phi_{2},\Phi_{0}\rangle\) is invariant under a change of the global phase of any of the states, and can therefore be computed by the covariance matrix formalism. An explicit expression was derived by Löwdin in [57]. In Ref. [58] Bravyi and Gosset gave the formula

\[\langle\Phi_{0},\Phi_{1}\rangle\cdot\langle\Phi_{1},\Phi_{2}\rangle\cdot\langle\Phi_{2},\Phi_{0}\rangle=\sigma 4^{-n}i^{n}\operatorname{Pf}\left(\begin{array}{ccc}i\Gamma_{0}&-I&I\\ I&i\Gamma_{1}&-I\\ -I&I&i\Gamma_{2}\end{array}\right) \tag{23}\]

for three Gaussian states \(\{\Phi_{j}\}_{j=0}^{2}\) of identical parity \(\sigma\in\{\pm 1\}\), where \(\Gamma_{j}=\Gamma(\Phi_{j})\) is the covariance matrix of \(\Phi_{j}\) for \(j=0,1,2\). More generally, they obtained the formula

\[\langle\Phi_{0},\Phi_{1}\rangle\cdot\langle\Phi_{1},c(\alpha)\Phi_{2}\rangle\cdot\langle\Phi_{2},\Phi_{0}\rangle=\sigma 4^{-n}i^{n+|\alpha|\cdot(|\alpha|-1)/2}\operatorname{Pf}\left(R_{\alpha}\right)\, \tag{24}\]

for any even-weight Majorana monomial \(c(\alpha)\), \(\alpha\in\{0,1\}_{+}^{2n}\), where

\[R_{\alpha}=\left(\begin{array}{cccc}i\Gamma_{0}&-I&I&0\\ I&i\Gamma_{1}&-I&0\\ -I&I&iD_{\alpha}\Gamma_{2}D_{\alpha}&J_{\alpha}^{T}+iD_{\alpha}\Gamma_{2}J_{\alpha}^{T}\\ 0&0&-J_{\alpha}+iJ_{\alpha}\Gamma_{2}D_{\alpha}&iJ_{\alpha}\Gamma_{2}J_{\alpha}^{T}\end{array}\right)\in\mathsf{Mat}_{(6n+|\alpha|)\times(6n+|\alpha|)}(\mathbb{C}). \tag{25}\]

Here \(D_{\alpha}=\mathsf{diag}(\{1-\alpha_{j}\}_{j=1}^{2n})\) is a diagonal matrix, whereas \(J_{\alpha}\in\mathsf{Mat}_{|\alpha|\times 2n}(\mathbb{R})\) has entries defined in terms of the indices \(\{i\in[2n]\ |\ \alpha_{i}\neq 0\}=\{i_{1}<\cdots<i_{r}\}\) associated with non-zero entries of \(\alpha\), that is,

\[(J_{\alpha})_{j,k}=\begin{cases}\delta_{i_{j},k}&\text{ if }j\leq r\\ 0&\text{ otherwise }\.\end{cases}\]

In other words, \(\left(J_{\alpha}\right)_{j,k}=1\) if and only if \(k\) is the position of the \(j\)-th nonzero element of \(\alpha\). As argued in [58], expressions (23) and (24) determine the inner product \(\langle\Phi_{1},\Phi_{2}\rangle\) and an expression of the form \(\langle\Phi_{1},c(\alpha)\Phi_{2}\rangle\) entirely in terms of covariance matrices, assuming that the remaining two overlaps \(\langle\Phi_{0},\Phi_{1}\rangle\), \(\langle\Phi_{2},\Phi_{0}\rangle\) with a Gaussian reference state \(\Phi_{0}\) are given and non-zero. In this situation, these quantities can be computed in time \(O(n^{3})\).

### Gaussian evolution and occupation number measurement

Underlying the known classical simulation algorithms for fermionic linear optics is the fact that Gaussian unitaries and occupation number measurements preserve Gaussianity. Explicitly, this can be described as follows: Given a Gaussian state \(\Psi\) with covariance matrix \(\Gamma(\Psi)\),

1. the covariance matrix \(\Gamma(U_{R}\Psi)\) of \(\Psi\) evolved under the Gaussian unitary \(U_{R}\), \(R\in O(2n)\), is given by \(\Gamma(U_{R}\Psi)=R\Gamma(\Psi)R^{T}\).
2. measurement of the observable \(a_{j}^{\dagger}a_{j}\) for \(j\in[n]\) gives the outcome \(s\in\{0,1\}\) with probability
\[P_{j}(s)=\|\Pi_{j}(s)\Psi\|^{2}=\frac{1}{2}(1+(-1)^{s}\Gamma_{2j-1,2j})\. \tag{26}\]
Given that the measurement outcome is \(s\in\{0,1\}\), the post-measurement state
\[\Psi(s)=\Pi_{j}(s)\Psi/\sqrt{P_{j}(s)}\qquad\text{ where }\qquad\Pi_{j}(s)=\frac{1}{2}(I+(-1)^{s}ic_{2j-1}c_{2j})\]
is Gaussian with covariance matrix \(\Gamma(\Psi(s))\) defined by the lower-diagonal entries
\[\Gamma(\Psi(s))_{k,\ell}=\begin{cases}(-1)^{s}&\text{if }(k,\ell)=(2j,2j-1)\\ \Gamma_{k,\ell}+\frac{(-1)^{s}}{2P_{j}(s)}\left(\Gamma_{2j-1,\ell}\Gamma_{2j,k}-\Gamma_{2j-1,k}\Gamma_{2j,\ell}\right)&\text{otherwise}\end{cases} \tag{27}\]
for \(k>\ell\).

In particular, the corresponding resulting covariance matrices can be computed in time \(O(n^{3})\) [8, 9, 10, 52] and \(O(n^{2})\) [3] for unitary evolution and measurement, respectively.

### The tensor product of two fermionic states

Two density operators \(\rho_{j}\in\mathcal{B}(\mathcal{H}^{n_{j}})\), \(j\in[2]\), have a joint extension if and only if there is an element \(\rho\in\mathcal{B}(\mathcal{H}^{n_{1}+n_{2}})\) such that

\[\operatorname{tr}(c(\alpha_{1}\|\alpha_{2})\rho)=\operatorname{tr}(c(\alpha_{1})\rho_{1})\operatorname{tr}(c(\alpha_{2})\rho_{2})\qquad\text{ for all }\qquad\alpha_{j}\in\{0,1\}^{2n_{j}},j\in[2]. \tag{28}\]

Here \(\alpha_{1}\|\alpha_{2}\in\{0,1\}^{2(n_{1}+n_{2})}\) denotes the concatenation of \(\alpha_{1}\) and \(\alpha_{2}\). Theorem 1 in [59] implies that if either \(\rho_{1}\) or \(\rho_{2}\) is even, then a unique joint extension \(\rho\) of \((\rho_{1},\rho_{2})\) exists. Furthermore, this extension is even if and only if both \(\rho_{1}\) and \(\rho_{2}\) are even. Theorem 2 in [59] shows that if \(\rho\) is even and \(\rho_{1}\) and \(\rho_{2}\) are pure, then \(\rho\) is also pure.
In particular, this means that for states \(\Psi_{1},\Psi_{2}\) of definite parity, there is a unique joint pure extension \(\rho=|\Psi\rangle\langle\Psi|\) of \((|\Psi_{1}\rangle\langle\Psi_{1}|,|\Psi_{2}\rangle\langle\Psi_{2}|)\). Since \(\rho\) is pure, this also means that \(\Psi\) is of definite parity. We will write \(\Psi=\Psi_{1}\tilde{\otimes}\Psi_{2}\) for this state in the following, and we call \(\tilde{\otimes}\) the fermionic tensor product. Note that \(\Psi\) is only defined up to a global phase. It follows immediately from these definitions that \[\left|\langle x,y|\Psi_{1}\tilde{\otimes}\Psi_{2}\rangle\right|=\left|\langle x,\Psi_{1}\rangle\cdot\langle y,\Psi_{2}\rangle\right|\qquad\text{ for all }\qquad x\in\{0,1\}^{n_{1}}\text{ and }y\in\{0,1\}^{n_{2}}. \tag{29}\] Proof.: Let \(x\in\{0,1\}^{n_{1}}\) and \(y\in\{0,1\}^{n_{2}}\) be arbitrary. By definition, we have \[|x,y\rangle\langle x,y| =\left(\prod_{j=1}^{n_{1}}\frac{1}{2}(I+(-1)^{x_{j}}ic_{2j-1}c_{2j}) \right)\left(\prod_{k=1}^{n_{2}}\frac{1}{2}(I+(-1)^{y_{k}}ic_{2n_{1}+2k-1}c_{2n _{1}+2k})\right)\] \[=\left(\sum_{\alpha\in\{0,1\}^{2n_{1}}_{+}}\gamma_{x}(\alpha)c( \alpha||0^{2n_{2}})\right)\left(\sum_{\beta\in\{0,1\}^{2n_{2}}_{+}}\gamma_{y}( \beta)c(0^{2n_{1}}||\beta)\right)\.\] for certain coefficients \(\{\gamma_{x}(\alpha)\}_{\alpha\in\{0,1\}^{2n_{1}}_{+}}\) and \(\{\gamma_{y}(\beta)\}_{\beta\in\{0,1\}^{2n_{2}}_{+}}\). Since \(\rho=|\Psi_{1}\tilde{\otimes}\Psi_{2}\rangle\langle\Psi_{1}\tilde{\otimes} \Psi_{2}|\) is an extension of \((|\Psi_{1}\rangle\langle\Psi_{1}|,|\Psi_{2}\rangle\langle\Psi_{2}|)\) and \(c(\alpha||0^{2n_{2}})c(0^{2n_{1}}||\beta)=i^{-|\alpha|\cdot|\beta|}c(\alpha|| \beta)=c(\alpha||\beta)\) for (even-weight) \(\alpha\in\{0,1\}^{2n_{1}}_{+}\) and \(\beta\in\{0,1\}^{2n_{2}}_{+}\), it follows that \[\big{|}\langle x,y|\Psi_{1}\tilde{\otimes}\Psi_{2}\rangle\big{|}^ {2} =\sum_{\alpha\in\{0,1\}^{2n_{1}}_{+}}\gamma_{x}(\alpha)\sum_{ \beta\in\{0,1\}^{2n_{2}}_{+}}\gamma_{y}(\beta)\operatorname{tr}(c(\alpha|| \beta)\rho)\] \[=\langle\Psi_{1},\left(\sum_{\alpha\in\{0,1\}^{2n_{1}}_{+}}\gamma _{x}(\alpha)c(\alpha)\right)\Psi_{1}\rangle\cdot\langle\Psi_{2},\left(\sum_{ \beta\in\{0,1\}^{2n_{2}}_{+}}\gamma_{y}(\beta)c(\beta)\right)\Psi_{2}\rangle\] \[=|\langle x,\Psi_{1}\rangle|^{2}\cdot|\langle y,\Psi_{2}\rangle|^ {2}\.\] Refining Eq. (29), (relative) phase information between these matrix elements can be obtained from the explicit construction of \(\Psi_{1}\tilde{\otimes}\Psi_{2}\) given in [59, Section 3.1] (see also [49, Proof of Theorem 1]): Consider the isometry \[U:\begin{array}{ccl}\mathcal{H}^{n_{1}+n_{2}}&\rightarrow&\mathcal{H}^{n_{1} }\otimes\mathcal{H}^{n_{2}}\\ &|x_{1},\ldots,x_{n_{1}+n_{2}}\rangle&\mapsto&|x_{1},\ldots,x_{m}\rangle \otimes|x_{n_{1}+1},\ldots,x_{n_{1}+n_{2}}\rangle\end{array}\] whose action is given by \[Ua_{j}U^{\dagger}=\begin{cases}a_{j}\otimes I&\text{if }j\in[n_{1}]\\ P_{1}\otimes a_{j-n_{1}}&\text{if }j\in\{n_{1}+1,\ldots,n_{1}+n_{2}\}\.\end{cases}\] where \(P_{1}\), the parity operator acting on \(\mathcal{H}_{n_{1}}\), introduces phases. Then \[\Psi_{1}\tilde{\otimes}\Psi_{2}=U^{\dagger}(\Psi_{1}\otimes\Psi_{2})\.\] It is straightforward from this definition to check that \(\Psi_{1}\tilde{\otimes}\Psi_{2}\) is the extension of \((\Psi_{1},\Psi_{2})\) and \[\langle x,y|\Psi_{1}\tilde{\otimes}\Psi_{2}\rangle=(-1)^{|x|}\cdot\langle x, \Psi_{1}\rangle\cdot\langle y,\Psi_{2}\rangle\qquad\text{ for all }\qquad x\in\{0,1\}^{n_{1}}\text{ and }y\in\{0,1\}^{n_{2}}. 
\tag{30}\]

We note that the fermionic tensor product preserves Gaussianity in the following sense.

**Lemma 2.4**.: _Let \(\Psi_{j}\in\mathcal{G}^{+}_{n_{j}}\) be positive-parity fermionic Gaussian states for \(j\in[2]\). Then \(\Psi_{1}\tilde{\otimes}\Psi_{2}\in\mathcal{G}^{+}_{n_{1}+n_{2}}\), i.e., it is an even fermionic Gaussian state._

Proof.: By definition of an extension (see Eq. (28)) and Wick's theorem (Eq. (21)), the tensor product \(\Psi=\Psi_{1}\tilde{\otimes}\Psi_{2}\) satisfies

\[\langle\Psi,c(\alpha_{1}\|\alpha_{2})\Psi\rangle=\begin{cases}\mathsf{Pf}(\Gamma_{1}[\alpha_{1}])\mathsf{Pf}(\Gamma_{2}[\alpha_{2}])&\text{ if both }|\alpha_{1}|\text{ and }|\alpha_{2}|\text{ are even}\\ 0&\text{otherwise}\end{cases} \tag{31}\]

for all \(\alpha_{j}\in\{0,1\}^{2n_{j}}\), where \(\Gamma_{j}\) is the covariance matrix of \(\Psi_{j}\) for \(j\in[2]\). Because the Pfaffian satisfies

\[\mathsf{Pf}(A_{1}\oplus A_{2})=\mathsf{Pf}\begin{pmatrix}A_{1}&0\\ 0&A_{2}\end{pmatrix}=\mathsf{Pf}(A_{1})\mathsf{Pf}(A_{2})\]

for block-matrices, it follows from (31) that the tensor product \(\Psi\) satisfies Wick's theorem (21) with covariance matrix \(\Gamma_{1}\oplus\Gamma_{2}\). In particular, it is Gaussian.

## 3 Tracking relative phases in fermionic linear optics

The covariance matrix \(\Gamma(\Psi)\) of a fermionic Gaussian state \(\ket{\Psi}=e^{i\theta}U_{R}\ket{0_{F}}\in\mathcal{G}_{n}\) fully determines expectation values by Wick's theorem, which is why Gaussian states and dynamics are amenable to efficient classical simulation (see Section 2.7). However, the description of \(\Psi\) in terms of the covariance matrix \(\Gamma(\Psi)\) does not capture information on the global phase \(e^{i\theta}\) of the state. For processes involving non-Gaussian states expressed as superpositions of Gaussian states, such phase information needs to be available for computing norms, expectation values and overlaps.

Here we provide an extended (classical) description of fermionic Gaussian states that incorporates phase information. A central feature of our construction is that this description makes it possible to compute overlaps of Gaussian states (including relative phases, i.e., not only absolute values) in an efficient manner. Our construction is motivated by and relies on expression (23), which relates the inner product \(\langle\Psi_{1},\Psi_{2}\rangle\) of two Gaussian states \(\Psi_{1},\Psi_{2}\in\mathcal{G}_{n}\) to their inner products \(\langle\Psi_{0},\Psi_{1}\rangle\), \(\langle\Psi_{0},\Psi_{2}\rangle\) with a Gaussian reference state \(\Psi_{0}\in\mathcal{G}_{n}\) and their covariance matrices \(\Gamma_{0},\Gamma_{1},\Gamma_{2}\). This suggests fixing a reference state \(\Psi_{0}\in\mathcal{G}_{n}\) and using the pair \((\Gamma(\Psi),\langle\Psi_{0},\Psi\rangle)\) as a classical description of any state \(\ket{\Psi}\in\mathcal{G}_{n}\) relevant in the computation. The problem with this idea is that \(\langle\Psi_{0},\Psi\rangle\) may vanish, preventing the application of (23). To avoid this problem, we use - instead of a single state \(\Psi_{0}\) - a (potentially) different reference state for each state \(\Psi\). Specifically, we will show that using number states, i.e., states of the form (10), suffices. This motivates the following definition.

**Definition 3.1**.: _Let \(\ket{\Psi}=e^{i\theta}U_{R}\ket{0_{F}}\in\mathcal{G}_{n}\) be a Gaussian state._
We call a tuple_ \[d=(\Gamma,x,r)\in\mathsf{Mat}_{2n\times 2n}(\mathbb{R})\times\{0,1\}^{n} \times\mathbb{C}\] _a (valid) description of \(\ket{\Psi}\) if the following three conditions hold:_ 1. \(\Gamma=\Gamma(\Psi)\) _is the covariance matrix of_ \(\ket{\Psi}\)_._ 2. \(x\in\{0,1\}^{n}\) _is such that_ \(\langle x,\Psi\rangle\neq 0\)_, where_ \(|x\rangle\) _is the number state defined by Eq._ (10)_._ _In our algorithms we will in fact ensure that_ \(|\langle x,\Psi\rangle|^{2}\geq 2^{-n}\)_, i.e., only a subset of valid descriptions is used. A description_ \(d=(\Gamma,x,r)\) _with this property, i.e., satisfying_ \(|r|^{2}\geq 2^{-n}\)_, will be called a good description. The restriction to good descriptions is necessary to make our algorithms work with finite-precision arithmetic._ 3. \(r=\langle x,\Psi\rangle\)_._ More explicitly, necessary and sufficient conditions for \(d=(\Gamma(\Psi),x,r)\) to constitute a description of \(\Psi\) are that \[r\neq 0\qquad\text{ and }\qquad|r|^{4}=2^{-2n}\mathsf{Det}(\Gamma(|x \rangle)+\Gamma(\Psi))\] because of formula (22) for the overlap of two states and because \(\mathsf{Det}(\cdot)=\mathsf{Pf}^{2}(\cdot)\). Here \[\Gamma(|x\rangle)=\bigoplus_{j=1}^{n}\begin{pmatrix}0&(-1)^{x_{j}}\\ -(-1)^{x_{j}}&0\end{pmatrix} \tag{32}\] is the covariance matrix of \(|x\rangle\). Since a Gaussian state \(\Psi\) generally has non-zero overlap with more than a single occupation number state \(|x\rangle\), there are several distinct valid descriptions of \(\Psi\). We will denote the set of descriptions of \(|\Psi\rangle\in\mathcal{G}_{n}\) by \(\mathsf{Desc}(\Psi)\). We note that a description \(d=(\Gamma,x,r)\) uniquely fixes a Gaussian state \(\Psi\in\mathcal{G}_{n}\): The covariance matrix \(\Gamma\) determines all expectation values, and the global phase of \(\Psi\) is fixed by the overlap \(\langle x,\Psi\rangle\), i.e., by \(r\). Denoting by \(\mathsf{Desc}_{n}=\bigcup_{\Psi\in\mathcal{G}_{n}}\mathsf{Desc}(\Psi)\) the set of all descriptions of fermionic Gaussian \(n\)-mode states, this means that we have a function \[\begin{array}{ccccc}\Psi&:&\mathsf{Desc}_{n}&\to&\mathcal{G}_{n}\\ &d&\mapsto&\Psi(d)\.\end{array} \tag{33}\] The main result of this section shows that expectation values, overlaps, and matrix elements of (Majorana) operators with respect to Gaussian states can be efficiently computed from their classical descriptions. Furthermore, when evolving a Gaussian state under a Gaussian unitary, the description of the resulting state can be computed efficiently. The same is true for the post-measurement state when applying an occupation number measurement. For evolution, we note that it suffices to consider Gaussian unitaries of the form \(U_{R}\) where \(R\in O(2n)\) belongs to the set of generators \(\mathsf{Gen}(O(2n))\) introduced in Section 2.2, that is, \[\mathsf{Gen}(O(2n))=\{R_{j,k}(\vartheta)\ |\ j<k\in[2n],\vartheta\in[0,2\pi) \}\cup\{R_{j}\}_{j=1}^{2n}\.\] Here \(R_{j,k}(\vartheta)\) is a Givens rotation and \(R_{j}=-\mathsf{diag}(\{(-1)^{\delta_{j,k}}\}_{k=1}^{2n})\) a reflection. We note that each element of \(\mathsf{Gen}(O(2n))\) can be specified by a tuple \((j,k,\vartheta)\in[2n]\times[2n]\times[0,2\pi)\) or an index \(j\in[2n]\), respectively. We assume that this parameterization is used in the following algorithms (but leave this implicit). To state the properties of our (deterministic) algorithms, it is convenient to express these as functions.
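Before stating these properties, we note that the validity and goodness conditions above are directly machine-checkable from \((\Gamma,x,r)\) alone. The following is a minimal numpy sketch of such a check, using Eq. (32) for \(\Gamma(|x\rangle)\) and the determinant condition above; the function names and the numerical tolerance `atol` are our own additions, not part of the algorithms of this paper.

```python
import numpy as np

def gamma_of_number_state(x):
    """Covariance matrix Gamma(|x>) of a number state, Eq. (32)."""
    n = len(x)
    gamma = np.zeros((2 * n, 2 * n))
    for j, xj in enumerate(x):
        gamma[2 * j, 2 * j + 1] = (-1.0) ** xj
        gamma[2 * j + 1, 2 * j] = -((-1.0) ** xj)
    return gamma

def is_valid_description(gamma, x, r, atol=1e-9):
    """Check r != 0 and |r|^4 = 2^{-2n} Det(Gamma(|x>) + Gamma)."""
    n = len(x)
    det = np.linalg.det(gamma_of_number_state(x) + gamma)
    return abs(r) > atol and abs(abs(r) ** 4 - 2.0 ** (-2 * n) * det) <= atol

def is_good_description(gamma, x, r, atol=1e-9):
    """Good descriptions additionally satisfy |r|^2 >= 2^{-n}."""
    return is_valid_description(gamma, x, r, atol) and abs(r) ** 2 >= 2.0 ** (-len(x)) - atol

# Example: (Gamma(|x>), x, 1) is a good description of the number state |x> itself.
x = [0, 1, 1]
assert is_good_description(gamma_of_number_state(x), x, 1.0)
```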
**Theorem 3.2** (Overlap, evolution, and measurement).: _Let \(\Psi(d)\in\mathcal{G}_{n}\) be the Gaussian state associated with a description \(d\in\mathsf{Desc}_{n}\), see Eq. (33). Then the following holds:_ 1. _The algorithm_ \(\mathsf{overlap}:\mathsf{Desc}_{n}\times\mathsf{Desc}_{n}\to\mathbb{C}\) _given in Fig._ 7 _has runtime_ \(O(n^{3})\) _and satisfies_ \[\mathsf{overlap}(d_{1},d_{2})=\langle\Psi(d_{1}),\Psi(d_{2})\rangle\qquad \text{ for all }\qquad d_{1},d_{2}\in\mathsf{Desc}_{n}\.\] 2. _The algorithm_ \(\mathsf{evolve}:\mathsf{Desc}_{n}\times\mathsf{Gen}(O(2n))\to\mathsf{Desc}_{n}\) _given in Fig._ 9 _has runtime_ \(O(n^{3})\) _and satisfies_ \[\Psi(\mathsf{evolve}(d,R))=U_{R}\Psi(d)\qquad\text{ for all }\qquad d\in\mathsf{Desc}_{n}\text{ and }R\in\mathsf{Gen}(O(2n))\,\] _where_ \(U_{R}\) _denotes the Gaussian unitary associated with_ \(R\in O(2n)\)_._ 3. _The algorithm_ \(\mathsf{measureprob}:\mathsf{Desc}_{n}\times[n]\times\{0,1\}\to\mathbb{R}\) _given in Fig._ 11 _has runtime_ \(O(1)\) _and satisfies_ \[\mathsf{measureprob}(d,j,s)=\|\Pi_{j}(s)\Psi(d)\|^{2}\qquad\text{ for all }\qquad d\in\mathsf{Desc}_{n},j\in[n],s\in\{0,1\}\,\] _where_ \(\Pi_{j}(s)=\frac{1}{2}(I+(-1)^{s}ic_{2j-1}c_{2j})\) _is the projection onto the eigenspace of_ \(a_{j}^{\dagger}a_{j}\) _with eigenvalue_ \(s\)_._ 4. _The algorithm_ \(\mathsf{postmeasure}:\mathsf{Desc}_{n}\times[n]\times\{0,1\}\times[0,1]\to \mathsf{Desc}_{n}\) _given in Fig._ 12 _has runtime_ \(O(n^{3})\)_. The algorithm satisfies_ \[\Psi(\mathsf{postmeasure}(d,j,s,p(d,j,s)))=\frac{\Pi_{j}(s)\Psi(d)}{\|\Pi_{j}(s) \Psi(d)\|}\quad\text{for all}\quad d\in\mathsf{Desc}_{n},j\in[n],s\in\{0,1\}\,\] _with_ \(p(d,j,s)=\|\Pi_{j}(s)\Psi(d)\|^{2}\)_._ _The output of both \(\mathsf{evolve}\) and \(\mathsf{postmeasure}\) is a good description for any input._ We argue that descriptions of relevant initial states can be obtained efficiently. Clearly, this is the case for any state of the form \(|\Psi\rangle=U_{R_{L}}\cdots U_{R_{1}}\,|0_{F}\rangle\) obtained by applying a sequence \(\{R_{j}\}_{j\in[L]}\subset\mathsf{Gen}(O(2n))\) of generators to the vacuum state \(|0_{F}\rangle\): Here we can use the algorithm \(\mathsf{evolve}\) a total of \(L\) times, producing a description of \(|\Psi\rangle\) in time \(O(Ln^{3})\). We will at times need a description of a state \(|\Psi\rangle\) but do not require fixing its global phase. This is the case for example when subsequent computational steps only involve phase-insensitive expressions, e.g., terms of the form \(|\langle\Psi,\Phi\rangle|^{2}\). Such a description can be found efficiently from the covariance matrix \(\Gamma\) of \(|\Psi\rangle\). Since the phase can be fixed arbitrarily, the problem here is to find \(x\in\{0,1\}^{n}\) such that \(\langle x,\Psi\rangle\) is non-zero. **Theorem 3.3**.: _There is an algorithm \(\mathsf{describe}:\mathsf{Mat}_{2n\times 2n}(\mathbb{R})\to\mathsf{Desc}_{n}\) with runtime \(O(n^{3})\) such that for any covariance matrix \(\Gamma\), the state \(\Psi(\mathsf{describe}(\Gamma))\) is a Gaussian state with covariance matrix \(\Gamma\), and \(\mathsf{describe}(\Gamma)\) is a good description._ For example, consider states of the form \(|\Phi(\pi,y)\rangle=U_{R_{\pi}}\,|y\rangle\), where \(R_{\pi}\in O(2n)\) is a permutation matrix specified by an element \(\pi\in S_{2n}\) and \(y\in\{0,1\}^{n}\). (Such states are used in Ref. [58] to give a fast norm estimation algorithm, see Section 4.3.)
The covariance matrix of this state is \(\Gamma(\pi,y)=R_{\pi}\Gamma(|y\rangle)R_{\pi}^{T}\) (with \(\Gamma(|y\rangle)\) defined by Eq. (32)). We thus conclude that \(|\Psi(\mathsf{describe}(\Gamma(\pi,y)))\rangle\) is proportional to \(|\Phi(\pi,y)\rangle\) with a global phase \(e^{i\theta}\) possibly depending on the pair \((\pi,y)\). The remainder of this section is devoted to the proofs of Theorem 3.2 and Theorem 3.3: We describe the algorithms \(\mathsf{evolve}\), \(\mathsf{overlap}\), \(\mathsf{measureprob}\), \(\mathsf{postmeasure}\) and \(\mathsf{describe}\) in detail, providing pseudocode, and verify that these satisfy the desired properties. ### Subroutines Our algorithms make use of subroutines called \(\mathsf{findsupport}\), \(\mathsf{relatebasiselements}\), \(\mathsf{overlaptriple}\) and \(\mathsf{convert}\), which we describe here. The subroutine \(\mathsf{findsupport}\) takes as input the covariance matrix \(\Gamma\) of a Gaussian state \(\Psi\) and produces a string \(x\in\{0,1\}^{n}\) with the property that \(\langle x,\Psi\rangle\neq 0\). It is given in Fig. 1. It has the following properties: **Lemma 3.4**.: _The algorithm \(\mathsf{findsupport}:\mathsf{Mat}_{2n\times 2n}(\mathbb{R})\to\{0,1\}^{n}\) runs in time \(O(n^{3})\). It satisfies_ \[|\langle\mathsf{findsupport}(\Gamma(\Psi)),\Psi\rangle|^{2}\geq 2^{-n}\qquad \text{ for every }\qquad\Psi\in\mathcal{G}_{n}\, \tag{34}\] _where \(\Gamma(\Psi)\) is the covariance matrix of \(\Psi\)._ Proof.: The main idea of the algorithm is to mimic a measurement in the number state basis executed in a sequential manner. Consider the following process: Suppose we start with the state \(\Psi^{(0)}=\Psi\), and then measure \(a_{j}^{\dagger}a_{j}\) successively for \(j=1,\ldots,n\). Let \(P(x_{j}|x_{1}\cdots x_{j-1})\) denote the conditional probability of observing the outcome \(x_{j}\in\{0,1\}\) (when measuring \(a_{j}^{\dagger}a_{j}\)), given that the previous measurements yielded \((x_{1},\ldots,x_{j-1})\). According to Born's rule, this is given by \[P(x_{j}|x_{1}\cdots x_{j-1})=\langle\Psi^{(j-1)}_{x_{1}\cdots x_{j-1}},\Pi_{j} (x_{j})\Psi^{(j-1)}_{x_{1}\cdots x_{j-1}}\rangle\] where \(\Psi^{(j-1)}_{x_{1}\cdots x_{j-1}}\) is the post-measurement state after the first \((j-1)\) measurements. The probability of observing the sequence \(x\in\{0,1\}^{n}\) of outcomes then is \[|\langle x,\Psi\rangle|^{2}=\prod_{j=1}^{n}P(x_{j}|x_{1}\cdots x_{j-1}) \tag{35}\] by Bayes' rule. Figure 1: The algorithm \(\mathsf{findsupport}\): Given the covariance matrix \(\Gamma\) of a Gaussian state \(|\Psi\rangle\), it computes \(x\in\{0,1\}^{n}\) such that \(\langle x,\Psi\rangle\neq 0\). The algorithm findsupport simulates this process: For each \(j\in[n]\), the quantity \(q_{j}\) computed in line 5 is equal to the conditional probability \(P(0|x_{1}\cdots x_{j-1})\) that the \(j\)-th measurement results in the outcome \(0\). Lines 6-11 ensure that the outcome \(x_{j}\in\{0,1\}\) with the higher probability of occurrence is selected at each step, guaranteeing Property (34) (because of Eq. (35)). The matrix \(\Gamma^{(j)}\) computed in steps 12-18 is the covariance matrix of the post-measurement state \(\Psi_{x_{1}\cdots x_{j}}^{(j)}\). Each measurement is thus realized in time \(O(n^{2})\), yielding the overall complexity of \(O(n^{3})\). The algorithm relatebasiselements is more straightforward: Given \(x,y\in\{0,1\}^{n}\), it outputs \((\alpha,\vartheta)\in\{0,1\}^{2n}\times\mathbb{R}\) such that \(c(\alpha)\left|x\right\rangle=e^{i\vartheta}\left|y\right\rangle\).
That is, it finds a Majorana monomial \(c(\alpha)\) which maps the basis state \(\left|x\right>\) to \(\left|y\right>\) up to a phase and computes the corresponding phase. In Fig. 2 we give pseudocode for this algorithm. **Lemma 3.5**.: _The algorithm_ relatebasiselements _:_ \(\{0,1\}^{n}\times\{0,1\}^{n}\to\{0,1\}^{2n}\times\mathbb{R}\) _runs in time_ \(O(n)\) _and satisfies_ \[c(\alpha)\left|x\right>=e^{i\vartheta}\left|y\right>\text{ where }(\alpha, \vartheta)=\text{\tt relatebasiselements}(x,y)\text{ \quad for all \quad}x,y\in\{0,1\}^{n}\.\] Proof.: Let \(x,y\in\{0,1\}^{n}\) be arbitrary. Define \[\alpha_{2j-1}=x_{j}\oplus y_{j}\qquad\text{ and }\qquad\alpha_{2j}=0\qquad \text{ for }\qquad j\in[n]\,\] as in line 4 of algorithm relatebasiselements. Then \[c(\alpha)\left|x\right> =i^{\left|\alpha\right|\cdot(\left|\alpha\right|-1)/2}c_{1}^{x_{1 }\oplus y_{1}}c_{3}^{x_{2}\oplus y_{2}}\cdots c_{2n-1}^{x_{n}\oplus y_{n}} \left|x\right>\] \[=i^{\left(\sum_{j=1}^{n}x_{j}\oplus y_{j}\right)\left(\left(\sum_ {j=1}^{n}x_{j}\oplus y_{j}\right)-1\right)/2}(-1)^{\sum_{j=1}^{n}(x_{j}\oplus y _{j})\eta_{j}(x)}\left|x\oplus(x\oplus y)\right>\] \[=i^{\left|x\oplus y\right|\cdot(\left|x\oplus y\right|-1)/2}(-1)^ {\sum_{j=1}^{n}(x\oplus y)_{j}\eta_{j}(x)}\left|y\right>\] where in the second identity, we used that \[c_{2j-1}\left|x\right>=(-1)^{\eta_{j}(x)}\left|x\oplus e_{j}\right>\qquad \text{ for all }\qquad x\in\{0,1\}^{n}\text{ and }j\in[n]\] because of Eq. (11); applying the factors \(c_{2j-1}\) from right to left (i.e., in order of decreasing \(j\)), the bits entering \(\eta_{j}\) are still those of \(x\), since only positions larger than \(j\) have been flipped at that point. Because \(i^{\left|x\oplus y\right|\cdot(\left|x\oplus y\right|-1)/2}(-1)^{\sum_{j=1}^{n }(x\oplus y)_{j}\eta_{j}(x)}=e^{i\vartheta}\) for \[\vartheta=\frac{\pi}{4}|x\oplus y|\cdot(\left|x\oplus y\right|-1)+\pi\sum_{j=1 }^{n}(x\oplus y)_{j}\eta_{j}(x)\,\] comparison with line 5 of the algorithm relatebasiselements gives the claim. The algorithm overlaptriple takes covariance matrices \(\Gamma_{0},\Gamma_{1},\Gamma_{2}\) of three Gaussian states \(\Phi_{0},\Phi_{1},\Phi_{2}\) of the same parity \(\sigma\in\{-1,1\}\) and \(\alpha\in\{0,1\}_{+}^{2n}\), as well as overlaps \(u=\langle\Phi_{0},\Phi_{1}\rangle\), \(v=\langle\Phi_{1},c(\alpha)\Phi_{2}\rangle\) (which both have to be non-zero), and computes the overlap \(\langle\Phi_{2},\Phi_{0}\rangle\). It is obtained by direct application of the formula (24). For completeness, we include pseudocode in Fig. 3. Since this algorithm involves computing Pfaffians of matrices that have size linear in \(n\), its runtime is \(O(n^{3})\). Fig. 4 gives a graphical representation of what the algorithm overlaptriple achieves. These graphical representations will be helpful to construct and analyze other algorithmic building blocks. The algorithm convert takes a description \(d=(\Gamma,x,r)\) of a Gaussian state \(\Psi(d)\) and \(y\in\{0,1\}^{n}\) such that \(\langle y,\Psi(d)\rangle\neq 0\), and outputs a description \(d^{\prime}=(\Gamma,y,s)\) of the same state. In other words, it converts a description \(d\) of the state involving the reference state \(|x\rangle\) to a description \(d^{\prime}\) of the same state but involving a different reference state \(|y\rangle\). In Fig. 5 we give pseudocode for this algorithm. Figure 4: A graphical representation of the functionality provided by the algorithm overlaptriple. Solid lines represent inner products that are given / have been computed, and are non-zero. Inner products of the form \(\langle\Phi_{1},c(\alpha)\Phi_{2}\rangle\) are represented by arrows.
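Returning to relatebasiselements, the formulas of Lemma 3.5 translate directly into code. The following Python sketch is our own transcription; it assumes the reading \(\eta_{j}(x)=\sum_{k<j}x_{k}\) of Eq. (12), which is the definition consistent with the identity \(c_{2j-1}|x\rangle=(-1)^{\eta_{j}(x)}|x\oplus e_{j}\rangle\) used in the proof.

```python
from math import pi

def eta(j, x):
    """eta_j(x) = x_1 + ... + x_{j-1}, our reading of Eq. (12); j is 1-indexed."""
    return sum(x[:j - 1])

def relatebasiselements(x, y):
    """Return (alpha, theta) with c(alpha)|x> = exp(i*theta)|y>, per Lemma 3.5."""
    n = len(x)
    alpha = [0] * (2 * n)
    for j in range(1, n + 1):
        alpha[2 * j - 2] = x[j - 1] ^ y[j - 1]   # alpha_{2j-1} = x_j XOR y_j (1-indexed)
    w = sum(a ^ b for a, b in zip(x, y))          # Hamming weight |x XOR y|
    theta = (pi / 4) * w * (w - 1) + pi * sum(
        (x[j - 1] ^ y[j - 1]) * eta(j, x) for j in range(1, n + 1))
    return alpha, theta % (2 * pi)

# Example: relating |10> and |11> requires c_3, and c_3|10> = -|11>, i.e. theta = pi.
alpha, theta = relatebasiselements([1, 0], [1, 1])
assert alpha == [0, 0, 1, 0] and abs(theta - pi) < 1e-12
```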
Figure 3: The algorithm overlaptriple takes as input the covariance matrices \(\Gamma_{j}\) of three Gaussian states \(\Phi_{j}\), \(j=0,1,2\) with identical parity, \(\alpha\in\{0,1\}_{+}^{2n}\) and the overlaps \(\langle\Phi_{0},\Phi_{1}\rangle\), \(\langle\Phi_{1},c(\alpha)\Phi_{2}\rangle\). The latter have to be non-zero. The algorithm computes the overlap \(\langle\Phi_{2},\Phi_{0}\rangle\) using Eq. (24). **Lemma 3.6**.: _The algorithm \(\mathsf{convert}:\mathsf{Desc}_{n}\times\{0,1\}^{n}\rightarrow\mathsf{Desc}_{n}\) given in Fig. 5 runs in time \(O(n^{3})\). Assume that \(d\in\mathsf{Desc}_{n}\) and \(y\in\{0,1\}^{n}\) satisfy \(\langle y,\Psi(d)\rangle\neq 0\). Then_ \[\Psi(\mathsf{convert}(d,y))=\Psi(d). \tag{36}\] _Furthermore, denoting the output of \(\mathsf{convert}(d,y)\) by \(d^{\prime}=(\Gamma^{\prime},y^{\prime},s^{\prime})\) we have_ \[y^{\prime}=y\] _as well as_ \[|s^{\prime}|^{2}=|\langle y^{\prime},\Psi(d)\rangle|^{2}=|\langle y,\Psi(d)\rangle|^{2}. \tag{37}\] Proof.: Let us denote the input to \(\mathsf{convert}\) by \((d,y)\), where \(d=(\Gamma,x,r)\in\mathsf{Desc}_{n}\) and \(y\in\{0,1\}^{n}\). Then \[\langle x,\Psi(d)\rangle\neq 0 \tag{38}\] since \(d\) is a description of \(\Psi(d)\). Figure 6: An illustration of the algorithm \(\mathsf{convert}\). Dotted lines represent inner products that are non-zero, but that are not provided / have not yet been computed by the algorithm. Figure 5: The algorithm \(\mathsf{convert}\) takes a description \(d\in\mathsf{Desc}_{n}\) and \(y\in\{0,1\}^{n}\) such that \(\langle y,\Psi(d)\rangle\neq 0\). It outputs a description \(d^{\prime}\in\mathsf{Desc}_{n}\) of \(\Psi(d)\) such that the second entry of \(d^{\prime}\) is equal to \(y\), i.e., \(d^{\prime}=(\Gamma,y,s)\). It makes use of the subroutines \(\mathsf{relatebasiselements}\) and \(\mathsf{overlaptriple}\). For \(x\in\{0,1\}^{n}\), \(\Gamma(|x\rangle)\) denotes the covariance matrix of the state \(|x\rangle\), see Eq. (32). Furthermore, for \((\alpha,\vartheta)\) as defined in line 2 we have \[\langle x,c(\alpha)y\rangle=e^{i\vartheta}\neq 0 \tag{39}\] by definition of the algorithm relatebasiselements. In line 3 of convert, the matrices \(\Gamma_{j}\), \(j\in\{0,1,2\}\) are the covariance matrices of the states \[(\Phi_{0},\Phi_{1},\Phi_{2})=(\Psi(d),\ket{x},\ket{y})\.\] We note that Eq. (38) and the assumption \(\langle y,\Psi(d)\rangle\neq 0\) imply that these three states have identical parity. The value \(w\) computed in line 6 using overlaptriple is equal to the overlap \[w=\langle\Phi_{2},\Phi_{0}\rangle=\langle y,\Psi(d)\rangle\, \tag{40}\] because \[u =\overline{r}=\overline{\langle x,\Psi(d)\rangle}=\langle\Psi(d), x\rangle=\langle\Phi_{0},\Phi_{1}\rangle\neq 0\] \[v =e^{i\vartheta}=\langle x,c(\alpha)y\rangle=\langle\Phi_{1},c( \alpha)\Phi_{2}\rangle\neq 0\] by Eqs. (38) and (39). Eq. (40) together with the assumption \(\langle y,\Psi(d)\rangle\neq 0\) shows that the output \((\Gamma,y,w)\) is a description of \(\Psi(d)\). This completes the proof of Eq. (36). Eq. (37) is trivially satisfied because \[s^{\prime}=w=\langle y,\Psi(d)\rangle\.\] The complexity of the algorithm is dominated by overlaptriple, which takes time \(O(n^{3})\).
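The Pfaffians entering overlaptriple can be computed in time \(O(n^{3})\); for illustration only, the following Python sketch computes Pfaffians by recursive first-row expansion (a short but exponential-time reference implementation) and uses them to evaluate the overlap magnitude, relying on \(|\langle\Psi_{1},\Psi_{2}\rangle|^{2}=2^{-n}|\mathsf{Pf}(\Gamma_{1}+\Gamma_{2})|\), the modulus of the expression in Eq. (22) with the sign \(\sigma\) absorbed. The relative phase additionally requires the full formula (24) of overlaptriple, which we do not reproduce here; all names are ours.

```python
import numpy as np

def pfaffian(a):
    """Pfaffian via recursive expansion along the first row -- a short
    reference implementation (exponential time; fine for small matrices)."""
    m = a.shape[0]
    if m % 2 == 1:
        return 0.0
    if m == 0:
        return 1.0
    val = 0.0
    for j in range(1, m):
        if a[0, j] != 0.0:
            rest = [k for k in range(m) if k not in (0, j)]
            val += (-1.0) ** (j - 1) * a[0, j] * pfaffian(a[np.ix_(rest, rest)])
    return val

def overlap_magnitude(gamma1, gamma2):
    """|<Psi_1,Psi_2>| from the covariance matrices alone (no phase)."""
    n = gamma1.shape[0] // 2
    return (2.0 ** (-n) * abs(pfaffian(gamma1 + gamma2))) ** 0.5

# Example: the number states |0> and |1> (n = 1) are orthogonal,
# while |0> has unit overlap with itself.
g0 = np.array([[0.0, 1.0], [-1.0, 0.0]])
g1 = -g0
assert abs(overlap_magnitude(g0, g1)) < 1e-12
assert abs(overlap_magnitude(g0, g0) - 1.0) < 1e-12
```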
### Computing overlaps and descriptions of evolved/measured states Based on the subroutines findsupport, relatebasiselements, overlaptriple and convert, we can now describe our main algorithms overlap, evolve, measureprob and postmeasure, which compute overlaps, apply Gaussian unitary evolution, compute the outcome probability of an occupation number measurement, and compute the corresponding post-measurement state, respectively. We give pseudocode for each algorithm and establish the associated claims. We give pseudocode for the algorithm overlap in Fig. 7 and we illustrate it in Fig. 8. Figure 7: The algorithm overlap takes descriptions \(d_{1},d_{2}\in\mathsf{Desc}_{n}\) and outputs the overlap \(\langle\Psi(d_{1}),\Psi(d_{2})\rangle\). **Lemma 3.7**.: _The algorithm \(\mathsf{overlap}:\mathsf{Desc}_{n}\times\mathsf{Desc}_{n}\to\mathbb{C}\) given in Fig. 7 runs in time \(O(n^{3})\). It satisfies_ \[\mathsf{overlap}(d_{1},d_{2})=\langle\Psi(d_{1}),\Psi(d_{2})\rangle\qquad\text { for all }\qquad d_{1},d_{2}\in\mathsf{Desc}_{n}. \tag{41}\] Proof.: Let \(d_{j}=(\Gamma_{j},x_{j},r_{j})\in\mathsf{Desc}_{n}\) for \(j\in[2]\). Then \[r_{j}=\langle x_{j},\Psi(d_{j})\rangle\neq 0\qquad\text{ for }\qquad j\in[2]\, \tag{42}\] by assumption. Line 4 treats the case where \(\Psi(d_{1})\) and \(\Psi(d_{2})\) have different parity, and are thus orthogonal. Starting from line 6, we can hence assume that the parities \(\sigma_{1},\sigma_{2}\) of \(\Psi(d_{1}),\Psi(d_{2})\) are identical, \(\sigma=\sigma_{1}=\sigma_{2}\). By Eq. (42), this implies that both \(\left|x_{1}\right\rangle\) and \(\left|x_{2}\right\rangle\) also have parity \(\sigma\), that is, \[\sigma(\left|x_{1}\right\rangle)=\sigma(\left|x_{2}\right\rangle)=\sigma( \left|\Psi(d_{1})\right\rangle)=\sigma(\left|\Psi(d_{2})\right\rangle). \tag{43}\] By definition of \(\mathsf{relatebasiselements}\), the pair \((\alpha,\vartheta)\) computed in line 6 satisfies \[c(\alpha)\left|x_{2}\right\rangle=e^{i\vartheta}\left|x_{1}\right\rangle. \tag{44}\] Consider the triple of states \[(\Phi_{0},\Phi_{1},\Phi_{2})=(\Psi(d_{1}),\left|x_{1}\right\rangle,\Psi(d_{2} ))\.\] Then the matrices \(\Gamma_{j}^{\prime}\), \(j\in\{0,1,2\}\) defined in line 7 of the algorithm are the covariance matrices of \(\Phi_{j}\), \(j\in\{0,1,2\}\). We have \[u=\overline{r}_{1}=\overline{\langle x_{1},\Psi(d_{1})\rangle}=\langle\Psi(d_ {1}),x_{1}\rangle=\langle\Phi_{0},\Phi_{1}\rangle\neq 0\,\] by Eq.
(42), and similarly \[v=e^{i\vartheta}r_{2}=e^{i\vartheta}\langle x_{2},\Psi(d_{2})\rangle=\langle e^{-i\vartheta}x_{2},\Psi(d_{2})\rangle=\langle c(\alpha)x_{1},\Psi(d_{2})\rangle=\langle x_{1},c(\alpha)\Psi(d_{2})\rangle=\langle\Phi_{1},c(\alpha)\Phi_{2}\rangle\,\] where we used Eq. (44) in the form \(c(\alpha)\left|x_{1}\right\rangle=e^{-i\vartheta}\left|x_{2}\right\rangle\) (note that \(c(\alpha)^{2}=I\)) as well as the self-adjointness of \(c(\alpha)\). This is non-zero by Eq. (42). Furthermore, by Eq. (43) the states \((\Phi_{0},\Phi_{1},\Phi_{2})\) have identical parity. It thus follows from the properties of overlaptriple that the quantity \(w\) computed in line 10 is equal to \[w=\langle\Psi(d_{2}),\Psi(d_{1})\rangle\.\] Since the output of the algorithm is the complex conjugate \(\overline{w}=\langle\Psi(d_{1}),\Psi(d_{2})\rangle\), this implies the claim (41). The runtime of overlap is dominated by overlaptriple, hence it is of order \(O(n^{3})\). We give pseudocode for the algorithm evolve in Fig. 9 and we illustrate it in Fig. 10.
Figure 9: The algorithm evolve takes a description \(d\in\mathsf{Desc}_{n}\) and an orthogonal matrix \(R\in\mathsf{Gen}(O(2n))\) associated with the Gaussian unitary \(U_{R}\) and computes a description for the state \(U_{R}\Psi(d)\). In this algorithm, the functions \(\beta_{s}:\{0,1\}^{n}\to\mathbb{R}\) for \(s\in[n]\) are defined as \(\beta_{s}(x)=\eta_{s}(x)+\left(x_{s}-\frac{1}{2}\right)\cdot(s+1)\), \(x\in\{0,1\}^{n}\) with \(\eta_{s}(x)\) given in Eq. (12). **Lemma 3.8**.: _The algorithm \(\mathsf{evolve}:\mathsf{Desc}_{n}\times\mathsf{Gen}(O(2n))\to\mathsf{Desc}_{n}\) given in Fig. 9 runs in time \(O(n^{3})\). Consider an arbitrary description \(d\in\mathsf{Desc}_{n}\) and an arbitrary generator \(R\in\mathsf{Gen}(O(2n))\). Then_ \[\Psi(\mathsf{evolve}(d,R))=U_{R}\Psi(d)\,\] _that is, the output of \(\mathsf{evolve}\) is a description of the evolved state \(U_{R}\Psi(d)\). Furthermore, denoting the output by \(d^{\prime}=(\Gamma^{\prime},x^{\prime},r^{\prime})=\mathsf{evolve}(d,R)\) we have_ \[|r^{\prime}|^{2}=|\langle x^{\prime},\Psi(d^{\prime})\rangle|^{2}\geq 2^{-n}. \tag{45}\] Proof.: Let us denote the input of \(\mathsf{evolve}\) by \((d,R)\) where \(d=(\Gamma,x,r)\in\mathsf{Desc}_{n}\) and \(R\in\mathsf{Gen}(O(2n))\). The state \(U_{R}\Psi(d)\) has covariance matrix \(\Gamma_{0}=R\Gamma R^{T}\) computed in line 2 (see Section 2.7). By the properties of \(\mathsf{findsupport}\) (see Lemma 3.4), the state \(|y\rangle\) computed in line 3 is such that \[|\langle y,U_{R}\Psi(d)\rangle|^{2}\geq 2^{-n}. \tag{46}\] In particular, it is non-zero. Figure 10: An illustration of the algorithm \(\mathsf{evolve}\). Dotted lines correspond to inner products whose value is non-zero, but has not been computed at that stage of the algorithm. The remainder of the algorithm computes \(\langle y,U_{R}\Psi(d)\rangle\). We first show the following: **Claim 3.9**.: _Lines 4-15 compute \((z,s)\in\{0,1\}^{n}\times\mathbb{C}\) such that_ \[|\langle z,U_{R}x\rangle|^{2}\geq 1/2 \tag{47}\] _and_ \[s=\langle z,U_{R}x\rangle. \tag{48}\] Proof.: Here we are using the fact that for any generator \(R\in\mathsf{Gen}(O(2n))\), the associated Gaussian unitary \(U_{R}\) has a local action on the mode operators. In particular, we can easily compute the image \(U_{R}\left|x\right\rangle\) of a number state \(\left|x\right\rangle\) under \(U_{R}\). We distinguish two cases: 1. \(R=R_{j,k}(\vartheta)\), \(j<k\in[2n]\), \(\vartheta\in[0,2\pi)\) is a Givens rotation (see Lines 4-11): In this case, \(R\) is associated with the unitary evolution operator \[U_{j,k}=\exp((\vartheta/2)c_{j}c_{k})=\cos(\vartheta/2)I+\sin(\vartheta/2)c_{j }c_{k}\.\] It maps a basis state \(\left|x\right\rangle\), \(x\in\{0,1\}^{n}\), to \[U_{j,k}(\vartheta)\left|x\right\rangle=\cos(\vartheta/2)\left|x\right\rangle+e ^{i\pi(\beta_{j}(x)+\beta_{k}(x))}\sin(\vartheta/2)\left|x\oplus e_{j} \oplus e_{k}\right\rangle\] (49) where we introduced the quantities \[\beta_{s}(x)=\eta_{s}(x)+\left(x_{s}-\frac{1}{2}\right)\cdot(s+1)\qquad\text { for any }\qquad s\in[n]\,\] with \(\eta_{s}(x)\) defined in Eq. (12). To obtain Eq. (49), we used that \[c_{j}|x\rangle=e^{i\pi\beta_{j}(x)}\left|x\oplus e_{j}\right\rangle\qquad \text{ for all }\quad j\in[2n]\quad\text{ and }\quad x\in\{0,1\}^{n}\] (50) because \[c_{2j-1}|x\rangle=(-1)^{\eta_{j}(x)}|x\oplus e_{j}\rangle\qquad\text{and} \qquad c_{2j}|x\rangle=-i(-1)^{\eta_{j}(x)+x_{j}}|x\oplus e_{j}\rangle\.\] Eq.
(49) motivates the following case distinction: * \(\cos^{2}(\vartheta/2)\geq 1/2\) (see Lines 6-7): Here \(\left|x\right\rangle\) has at least as large an amplitude as \(\left|x\oplus e_{j}\oplus e_{k}\right\rangle\) in the state \(U_{j,k}(\vartheta)\left|x\right\rangle\). The algorithm picks \(z=x\) (Line 6) and sets \(s=\cos(\vartheta/2)\) (line 7). In particular, comparing with (49), it follows immediately that the claims (47) and (48) are satisfied. * \(\cos^{2}(\vartheta/2)<1/2\) (see Lines 9-11): In this case the algorithm ensures that \[z =x\oplus e_{j}\oplus e_{k} \text{by Line 9}\] (51) \[\beta =\beta_{j}(x)+\beta_{k}(x) \text{by Line 10}\] \[s =e^{i\pi(\beta_{j}(x)+\beta_{k}(x))}\sin(\vartheta/2) \text{by Lines 10 and 11}\.\] (52) Because \(\cos^{2}(\vartheta/2)+\sin^{2}(\vartheta/2)=1\) we have \[|s|^{2}\geq\frac{1}{2}\] by the assumption that \(\cos^{2}(\vartheta/2)<1/2\). To prove the claims (47) and (48), it thus suffices to show the second claim (48). But this again follows from (49) and the definitions (51) of \(z\) and (52) of \(s\), i.e., we have \(s=\langle z,U_{j,k}(\vartheta)x\rangle\). 2. \(R=R_{j}\), \(j\in[2n]\) is a reflection (see Lines 12-15): Here \(R\) is associated with the unitary evolution operator \[U_{j}=c_{j}\.\] Its action on \(\left|x\right\rangle\), \(x\in\{0,1\}^{n}\), is described by Eq. (50), i.e., we have \[U_{j}\left|x\right\rangle=e^{i\pi\beta_{j}(x)}\left|x\oplus e_{j}\right\rangle\.\] This state is proportional to \(\left|x\oplus e_{j}\right\rangle\), showing that the choice \[z =x\oplus e_{j} \text{(Line 13)}\] \[\beta =\beta_{j}(x) \text{(Line 14)}\] \[s =e^{i\pi\beta} \text{(Line 15)}\] indeed ensures that the claims (47) and (48) are satisfied. Equipped with Claim 3.9, we can show that the algorithm evolve has the desired functionality. The matrix \(\Gamma_{0}\) computed in Line 2 is the covariance matrix of the evolved state \(U_{R}\Psi(d)\), whereas \(\Gamma_{1},\Gamma_{2}\) computed in Line 17 are the covariance matrices of \(U_{R}\left|x\right\rangle\) and \(\left|y\right\rangle\), respectively. Thus overlaptriple in Line 20 is invoked on the triple of states \[(\Phi_{0},\Phi_{1},\Phi_{2})=(U_{R}\Psi(d),U_{R}\left|x\right\rangle,\left|y \right\rangle)\.\] To check that the requirements of overlaptriple are satisfied, first observe that \[u=\overline{r}=\left\langle\Psi(d),x\right\rangle=\left\langle U_{R}\Psi(d),U_{R}x\right\rangle=\left\langle\Phi_{0},\Phi_{1}\right\rangle\] by Line 18, the definition of \(r\), and the unitarity of \(U_{R}\). Furthermore, this is non-zero because \(r\) (part of the input) is non-zero by definition of the description \(d=(\Gamma,x,r)\) of \(\Psi(d)\). By the defining property of the subroutine relatebasiselements, Line 16 of the algorithm computes \((\alpha,\gamma)\in\{0,1\}^{2n}\times[0,2\pi)\) such that \[c(\alpha)\left|y\right\rangle=e^{i\gamma}\left|z\right\rangle. \tag{53}\] We also have \[v=e^{i\gamma}\overline{s}=e^{i\gamma}\overline{\langle z,U_{R}x\rangle}=e^{i\gamma}\langle U_{R}x,z\rangle=\langle U_{R}x,c(\alpha)y\rangle=\langle\Phi_{1},c(\alpha)\Phi_{2}\rangle\,\] where we used line 18 of the algorithm, Eq. (48) and Eq. (53). This is non-zero because \(|s|^{2}\geq 1/2\) by Eq. (47). The requirements of overlaptriple are therefore met, and the quantity \(w\) computed in Line 20 equals \[w=\langle\Phi_{2},\Phi_{0}\rangle=\langle y,U_{R}\Psi(d)\rangle\.\]
We give pseudocode for the algorithm \(\mathsf{measureprob}\) in Fig. 11. **Lemma 3.10**.: _The algorithm \(\mathsf{measureprob}:\mathsf{Desc}_{n}\times[n]\times\{0,1\}\to\mathbb{R}\) given in Fig. 11 runs in time \(O(1)\). It satisfies_ \[\mathsf{measureprob}(d,j,s)=\langle\Psi(d),\Pi_{j}(s)\Psi(d)\rangle \qquad\text{ for all }\qquad d\in\mathsf{Desc}_{n},j\in[n],s\in\{0,1\}\,\] _where \(\Pi_{j}(s)=\frac{1}{2}(I+(-1)^{s}ic_{2j-1}c_{2j})\) is the projection onto the eigenvalue-\(s\) eigenspace of \(a_{j}^{\dagger}a_{j}\)._ Proof.: We denote the input to \(\mathsf{measureprob}\) by \((d,j,s)\) where \(d=(\Gamma,x,r)\in\mathsf{Desc}_{n}\) is a description of a state \(\Psi(d)\), \(j\in[n]\) and \(s\in\{0,1\}\). Given the state \(\Psi(d)\), the probability of obtaining measurement outcome \(s\) when measuring the occupation number operator \(a_{j}^{\dagger}a_{j}\) is given by Eq. (26). This is the output of the algorithm in line 2 and gives the claim. Computing line 2 requires a constant number of arithmetic operations, giving the runtime \(O(1)\). We give pseudocode for the algorithm \(\mathsf{postmeasure}\) in Fig. 12 and we illustrate it in Fig. 13. **Lemma 3.11**.: _The algorithm \(\mathsf{postmeasure}:\mathsf{Desc}_{n}\times[n]\times\{0,1\}\times[0,1]\to \mathsf{Desc}_{n}\) given in Fig. 12 runs in time \(O(n^{3})\). Let \(d\in\mathsf{Desc}_{n}\), \(j\in[n]\) and \(s\in\{0,1\}\) be arbitrary. Let \(\Pi_{j}(s)=\frac{1}{2}(I+(-1)^{s}ic_{2j-1}c_{2j})\) be the projection onto the eigenvalue-\(s\) eigenspace of \(a_{j}^{\dagger}a_{j}\) and let \(p=\left\|\Pi_{j}(s)\Psi(d)\right\|^{2}\). Then_ \[\Psi(\mathsf{postmeasure}(d,j,s,p)=\frac{\Pi_{j}(s)\Psi(d)}{\left\| \Pi_{j}(s)\Psi(d)\right\|}\,\] _that is, \(\mathsf{postmeasure}\) computes a description of the post-measurement state when measuring \(a_{j}^{\dagger}a_{j}\) and obtaining outcome \(s\). Denoting the output of the algorithm by_ \[d^{\prime}=(\Gamma^{\prime},x^{\prime},r^{\prime})=\mathsf{ postmeasure}(d,j,s,p)\,\] _we further have_ \[|r^{\prime}|^{2}=|\langle x^{\prime},\Psi(d^{\prime})\rangle|^{2 }\geq 2^{-n}. \tag{54}\] Figure 11: The subroutine \(\mathsf{measureprob}\) takes as input a description \(d=(\Gamma,x,r)\in\mathsf{Desc}_{n}\) of a Gaussian state \(\Psi(d)\), an integer \(j\in[n]\) and a bit \(s\in\{0,1\}\). It outputs the probability of obtaining the measurement outcome \(s\) when measuring the occupation number operator \(a_{j}^{\dagger}a_{j}\). The outcome probability does not depends on the global phase of \(\Psi(d)\) (which is determined by its reference state \(x\) and the overlap \(r\)), but only on its covariance matrix \(\Gamma\). 
**Require:**\(d=(\Gamma,x,r)\in\mathsf{Desc}_{n}\) **Require:**\(j\in[n]\) **Require:**\(s\in\{0,1\}\) **Require:**\(p=\|\Pi_{j}(s)\Psi(d)\|^{2}\in[0,1]\)\(\triangleright\) probability of outcome \(s\) when measuring \(a_{j}^{\dagger}a_{j}\) ``` 1:functionpostmeasure(\(d,j,s,p\)) 2:\(\Gamma^{\prime}\gets 0\in\mathsf{Mat}_{2n\times 2n}(\mathbb{R})\)\(\triangleright\) compute covariance matrix of post-measurement state \(\Psi^{\prime}\) 3:\(\Gamma^{\prime}_{2j,2j-1}\leftarrow-(-1)^{s}\)\(\triangleright\) mode \(j\) is left in the number state \(|s\rangle\), cf. Eq. (32) 4:for\(\ell\gets 1\) to \(2n-1\)do 5:for\(k\leftarrow\ell+1\) to \(2n\)do 6:if\(k\notin\{2j-1,2j\}\) and \(\ell\notin\{2j-1,2j\}\)then\(\triangleright\) the measured mode decouples from the rest 7:\(\Gamma^{\prime}_{k,\ell}\leftarrow\Gamma_{k,\ell}+\frac{(-1)^{s}}{2p}(\Gamma_ {2j-1,\ell}\Gamma_{2j,k}-\Gamma_{2j-1,k}\Gamma_{2j,\ell})\) 8:\(\Gamma^{\prime}\leftarrow\Gamma^{\prime}-(\Gamma^{\prime})^{T}\) 9:\(y\leftarrow\mathsf{findsupport}(\Gamma^{\prime})\)\(\triangleright\) find \(y\) such that \(|\langle y,\Psi^{\prime}\rangle|^{2}\geq 2^{-n}\) 10:\((\alpha,\vartheta)\leftarrow\mathsf{relatebasiselements}(y,x)\)\(\triangleright\)\((\alpha,\vartheta)\) are such that \(c(\alpha)\,|y\rangle=e^{i\vartheta}\,|x\rangle\) 11:\(\Gamma_{0}\leftarrow\Gamma\), \(\Gamma_{1}\leftarrow\Gamma(|x\rangle)\), \(\Gamma_{2}\leftarrow\Gamma(|y\rangle)\)\(\triangleright\) covariance matrices of \(\Psi(d)\), \(|x\rangle\) and \(|y\rangle\) 12:\(u\leftarrow\overline{r}\)\(\triangleright\)\(u=\langle\Psi(d),x\rangle\) 13:\(v\gets e^{i\vartheta}\)\(\triangleright\)\(v=\langle x,c(\alpha)y\rangle\) 14:\(w\leftarrow\mathsf{overlaptriple}(\Gamma_{0},\Gamma_{1},\Gamma_{2},\alpha,u,v)\)\(\triangleright\)\(w=\langle y,\Psi(d)\rangle\) 15:return\((\Gamma^{\prime},y,w/\sqrt{p})\)\(\triangleright\) return a description of \(\Psi^{\prime}\) ``` Figure 12: The algorithm \(\mathsf{postmeasure}\) takes as input a description \(d\in\mathsf{Desc}_{n}\), an integer \(j\in[n]\), a bit \(s\in\{0,1\}\) and a real number \(p\in[0,1]\). Assuming \(p=\|\Pi_{j}(s)\Psi(d)\|^{2}\), the algorithm outputs a description of the post-measurement state \(\Psi^{\prime}=(\Pi_{j}(s)\Psi(d))/\|\Pi_{j}(s)\Psi(d)\|\) when measuring the number operator \(a_{j}^{\dagger}a_{j}\) and obtaining the outcome \(s\in\{0,1\}\). Here, \(\Pi_{j}(s)=(I+(-1)^{s}ic_{2j-1}c_{2j})/2\) is the projection onto the eigenvalue-\(s\) eigenspace of \(a_{j}^{\dagger}a_{j}\). Proof.: We denote the input to \(\mathsf{postmeasure}\) by \((d,j,s,p)\), where \(d\in\mathsf{Desc}_{n}\), \(j\in[n]\), \(s\in\{0,1\}\) and \(p\in[0,1]\). For brevity, let us denote the post-measurement state when measuring the observable \(a_{j}^{\dagger}a_{j}\) and obtaining outcome \(s\) by \[\Psi^{\prime}=\frac{\Pi_{j}(s)\Psi(d)}{\|\Pi_{j}(s)\Psi(d)\|}\.\] In lines 2-8, the algorithm \(\mathsf{postmeasure}\) computes the covariance matrix \(\Gamma^{\prime}\) of \(\Psi^{\prime}\) according to Eq. (27). In line 9 the algorithm uses \(\mathsf{findsupport}\) to find \(y\in\{0,1\}^{n}\) such that \[\left|\langle y,\Psi^{\prime}\rangle\right|^{2}\geq 2^{-n}. \tag{55}\] Line 10 provides \((\alpha,\vartheta)\in\{0,1\}^{2n}\times\mathbb{R}\) such that \[c(\alpha)\left|y\right\rangle=e^{i\vartheta}\left|x\right\rangle. \tag{56}\] Line 11 of the algorithm sets the matrices \((\Gamma_{0},\Gamma_{1},\Gamma_{2})\) equal to the covariance matrices of the three states \[(\Phi_{0},\Phi_{1},\Phi_{2})=(\Psi(d),\left|x\right\rangle,\left|y\right\rangle )\.\] We check that the conditions for applying \(\mathsf{overlaptriple}\) in Line 14 are satisfied.
We have \[u=\overline{r}=\left\langle\Psi(d),x\right\rangle=\left\langle\Phi_{0},\Phi_{1}\right\rangle\] by Line 12 and the definition of \(r\), and this is non-zero because \(d=(\Gamma,x,r)\) is a valid description (hence \(r\neq 0\)). Similarly as before, we also have \[v=e^{i\vartheta}=\langle x,c(\alpha)y\rangle=\langle\Phi_{1},c(\alpha)\Phi_{2}\rangle\] by Line 13 and Eq. (56). In particular, this is also non-zero. The requirements to run overlaptriple in Line 14 are therefore met, and Line 14 returns \[w=\langle\Phi_{2},\Phi_{0}\rangle=\langle y,\Psi(d)\rangle. \tag{57}\] It remains to show that \((\Gamma^{\prime},y,w/\sqrt{p})\) (the expression returned by the algorithm) is a valid description of the post-measurement state \(\Psi^{\prime}\), and to establish the bound \[|w/\sqrt{p}|^{2}\geq 2^{-n} \tag{58}\] in order to prove Eq. (54). Inserting \(\Psi^{\prime}=\Pi_{j}(s)\Psi/\sqrt{p}\), Eq. (55) implies that \[2^{-n}\leq|\langle y,\Psi^{\prime}\rangle|^{2}=\frac{1}{p}|\langle y,\Pi_{j}(s) \Psi\rangle|^{2}=\frac{1}{p}|\langle\Pi_{j}(s)y,\Psi\rangle|^{2}\leq\frac{1}{p }\|\Pi_{j}(s)y\|^{2}\cdot\|\Psi\|^{2} \tag{59}\] because \(\Pi_{j}(s)\) is self-adjoint, and by the Cauchy-Schwarz inequality. In particular, we have \(\Pi_{j}(s)y\neq 0\) and thus \[\Pi_{j}(s)y=y \tag{60}\] since any number state \(|y\rangle\) is an eigenvector of the projection \(\Pi_{j}(s)\). Inserting (60) into (59) and using (57) we obtain the bound \[2^{-n}\leq\frac{1}{p}|\langle y,\Psi\rangle|^{2}=\frac{|w|^{2}}{p}\,\] establishing (58). Eq. (60) and the self-adjointness of \(\Pi_{j}(s)\) also imply that \[\langle y,\Psi^{\prime}\rangle=\frac{1}{\sqrt{p}}\langle y,\Pi_{j}(s)\Psi \rangle=\frac{1}{\sqrt{p}}\langle y,\Psi\rangle=\frac{w}{\sqrt{p}}\.\] Since \(\Gamma^{\prime}\) is the covariance matrix of \(\Psi^{\prime}\) and \(p=\|\Pi_{j}(s)\Psi(d)\|^{2}\) is the probability of obtaining outcome \(s\) when measuring \(a_{j}^{\dagger}a_{j}\), this shows that \((\Gamma^{\prime},y,w/\sqrt{p})\) is a valid description of \(\Psi^{\prime}\) as claimed. The complexity of the algorithm is dominated by overlaptriple, which takes time \(O(n^{3})\). ### Initial states for computation Using the algorithm evolve, it is straightforward to generate a description of a state that is obtained by applying a sequence of Gaussian unitaries (generators) to the vacuum state. This is all that is typically needed to describe initial states. In cases where we do not need to fix the overall phase, we can generate a description from the covariance matrix. The algorithm describe takes as input the covariance matrix \(\Gamma\) of a Gaussian state \(\Psi\) and outputs a description \(d\in\mathsf{Desc}_{n}\) of a Gaussian state which is equal to \(\Psi\) up to a global phase. It is given in Fig. 14 and it simply uses the subroutine findsupport and Eq. (22). **Lemma 3.12**.: _The algorithm describe: \(\mathsf{Mat}_{2n\times 2n}(\mathbb{R})\to\mathsf{Desc}_{n}\) runs in time \(O(n^{3})\). Its output is such that for every covariance matrix \(\Gamma\), the state \(\Psi(\mathsf{describe}(\Gamma))\) is a Gaussian state with covariance matrix \(\Gamma\). We have_ \[|r|^{2}=|\langle x,\Psi(d)\rangle|^{2}\geq 2^{-n} \tag{61}\] _for \(d=(\Gamma,x,r)=\mathsf{describe}(\Gamma)\)._ Figure 14: The algorithm describe: Given the covariance matrix \(\Gamma\) of a Gaussian state \(|\Psi\rangle\), it outputs \(d\in\mathsf{Desc}_{n}\) such that \(|\langle\Psi(d),\Psi\rangle|=1\).
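Before turning to the proof of Lemma 3.12, we sketch findsupport and describe together in Python. The covariance update inside findsupport is the one from lines 2-8 of Fig. 12, the outcome probability is computed as in the measureprob sketch above (our reading of Eqs. (26)/(27)), and the amplitude \(|r|\) is recovered from \(|r|^{4}=2^{-2n}\mathsf{Det}(\Gamma(|x\rangle)+\Gamma)\), so the global phase is fixed arbitrarily (here: \(r\) real and nonnegative). All names are ours.

```python
import numpy as np

def measure_update(gamma, j0, s, p):
    """Post-measurement covariance matrix for mode j0 (0-indexed), outcome s,
    outcome probability p -- lines 2-8 of Fig. 12."""
    two_n = gamma.shape[0]
    a, b = 2 * j0, 2 * j0 + 1
    gp = np.zeros_like(gamma)
    gp[b, a] = -((-1.0) ** s)                    # measured mode becomes |s>, Eq. (32)
    for l in range(two_n):
        for k in range(l + 1, two_n):
            if k not in (a, b) and l not in (a, b):
                gp[k, l] = gamma[k, l] + (-1.0) ** s / (2.0 * p) * (
                    gamma[a, l] * gamma[b, k] - gamma[a, k] * gamma[b, l])
    return gp - gp.T                              # antisymmetrize

def findsupport(gamma):
    """Greedy sequential measurement (Fig. 1): x with |<x,Psi>|^2 >= 2^{-n}."""
    g = np.array(gamma, dtype=float)
    n = g.shape[0] // 2
    x = []
    for j0 in range(n):
        q0 = 0.5 * (1.0 + g[2 * j0, 2 * j0 + 1])  # Pr[outcome 0 | history]
        s = 0 if q0 >= 0.5 else 1                 # keep the likelier outcome
        g = measure_update(g, j0, s, q0 if s == 0 else 1.0 - q0)
        x.append(s)
    return x

def gamma_of_number_state(x):
    """Covariance matrix Gamma(|x>), Eq. (32)."""
    g = np.zeros((2 * len(x), 2 * len(x)))
    for j0, xj in enumerate(x):
        g[2 * j0, 2 * j0 + 1] = (-1.0) ** xj
    return g - g.T

def describe(gamma):
    """A description (Gamma, x, r) of a Gaussian state with covariance matrix gamma."""
    x = findsupport(gamma)
    det = abs(np.linalg.det(gamma_of_number_state(x) + np.array(gamma, dtype=float)))
    r = (2.0 ** (-2 * len(x)) * det) ** 0.25
    return gamma, x, r

# Example: for the number state |01> we recover x = [0, 1] and r = 1.
_, x, r = describe(gamma_of_number_state([0, 1]))
assert x == [0, 1] and abs(r - 1.0) < 1e-9
```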
Proof.: Let \(\Gamma\in\mathsf{Mat}_{2n\times 2n}(\mathbb{R})\) be a covariance matrix and let \(\Psi\) be a Gaussian state with covariance matrix \(\Gamma\). By definition of the algorithm \(\mathsf{findsupport}\), the value \(y\in\{0,1\}^{n}\) computed in line 3 satisfies \[|\langle y,\Psi\rangle|^{2}\geq 2^{-n}\.\] By Eq. (22), the value \(r\) computed in Line 5 satisfies \[r=|\langle y,\Psi\rangle|\.\] In particular, there is an angle \(\vartheta\in[0,2\pi)\) such that \[r=\sqrt{\sigma 2^{-n}\mathsf{Pf}(\Gamma(|y\rangle)+\Gamma)}=\langle y,e^{i\vartheta} \Psi\rangle\.\] It follows immediately that \(d=(\Gamma,y,r)\) is a valid description of the Gaussian state \(e^{i\vartheta}\Psi=\Psi(d)\) with the required property (61). ## 4 Classical simulation of fermionic Gaussian circuits with non-Gaussian initial states In this section, we argue that the techniques developed in Section 3 to describe fermionic Gaussian states (including relative phases) give rise to efficient classical simulation algorithms for computations composed of non-Gaussian initial states, Gaussian unitaries and occupation number measurements. Specifically, we argue that algorithms developed in the context of stabilizer circuits can immediately be translated to this fermionic setup. Furthermore, this translation maintains runtime bounds when the stabilizer extent is replaced by the fermionic Gaussian extent. Because of the generality of this adaptation procedure (it applies to a variety of simulation algorithms, both for strong and weak simulation), we restrict our attention to the key substitutions. Our algorithms apply to the efficient classical simulation of fermionic circuits of the following form, involving \(n\) fermions. 1. The initial state \(\Psi^{(0)}=\Psi\) is a possibly non-Gaussian state \(\Psi\). We assume that its fermionic Gaussian extent \(\xi(\Psi)\) and a corresponding optimal decomposition into a superposition of Gaussian states are known. This is the case for example for any four-fermion state, or a tensor product of two four-fermion states, see Section 5. Alternatively, we may assume that an upper bound \(\overline{\xi}(\Psi)\geq\xi(\Psi)\) and a corresponding decomposition of \(\Psi\) achieving this value are known: In this case, runtime upper bounds will depend on \(\overline{\xi}(\Psi)\) instead of \(\xi(\Psi)\). 2. The computation proceeds in a sequence of timesteps. At each step \(t\in[T]\), one of the following is performed: 1. A Gaussian unitary \(U_{R}\), \(R\in\mathsf{Gen}(O(2n))\) is applied to the state. Here the choice of \(R\) may depend (in an efficiently computable manner) on measurement results obtained previously. We will leave this dependence implicit and do not take it into account in our runtime estimates, as it will typically depend heavily on the circuit considered. 2. An occupation number measurement, i.e., measurement of the operator \(a_{j}^{\dagger}a_{j}\) for some \(j\in[n]\) is performed, yielding a measurement outcome \(s\in\{0,1\}\) and a corresponding post-measurement state. The choice of the mode \(j\in[n]\) to be measured may again depend (in an efficient manner) on the measurement outcomes already obtained.
We note that the restriction to the set of Gaussian unitaries associated with generators of \(O(2n)\) in (iia) incurs no loss of generality: it comes at the cost of possibly increasing \(T\) by a factor of order \(O(n^{2})\) and adding a term of order \(O(n^{3})\) to the runtime, since a decomposition of an arbitrary element \(R\in O(2n)\) of the form (19) as a product of \(L\leq O(n^{2})\) generators can be found in time \(O(n^{3})\), see the discussion below Theorem 3.2. The use of arbitrary initial states \(\Psi\) in (i) allows us to model, in particular, the application of certain "magic gates" using so-called gadgets. These can be realized by using non-Gaussian auxiliary states combined with Gaussian operations, see e.g., [12, 13]. Since all 1-, 2- and 3-fermion states are Gaussian [31], 4-fermion states provide the smallest non-trivial examples; these will also be our main focus in Section 5. We refer to e.g., [12, 13] for a discussion of these constructions. We proceed as follows: In Section 4.1, we formulate in general terms how simulation algorithms for a model can be generalized to initial states that are superpositions: This follows known approaches for stabilizer circuits augmented by magic states. In Section 4.2 we review the relationship between the \(\mathcal{D}\)-extent and the \(\mathcal{D}\)-rank defined by a dictionary \(\mathcal{D}\). In Section 4.3 we discuss fast algorithms for estimating norms of superpositions of dictionary states. In Section 4.4 we apply these constructions to our setup. ### Extending simulation algorithms to superpositions Here we discuss how to extend simulation algorithms for an efficiently simulable model \((\mathcal{D},\mathcal{E},\mathcal{M})\) in such a way that the resulting extended algorithms (\(\chi\mathsf{evolve},\chi\mathsf{measureprob},\chi\mathsf{postmeasure}\)) work with any initial state \(\Psi\) which is a superposition of \(\chi\) elements of \(\mathcal{D}\) (i.e., has \(\mathcal{D}\)-rank bounded by \(\chi_{\mathcal{D}}(\Psi)\leq\chi\)). Our discussion is standard and is included only for the reader's convenience: It follows the corresponding treatment of stabilizer states in [18]. Recall that the dictionary \(\mathcal{D}\) is a set of states, \(\mathcal{E}\) a set of operations and \(\mathcal{M}\) a set of measurements. In addition to the subroutines \(\mathsf{evolve}\), \(\mathsf{measureprob}\) and \(\mathsf{postmeasure}\) for evolution and measurement associated with \((\mathcal{D},\mathcal{E},\mathcal{M})\), the construction discussed here requires an efficient algorithm \(\mathsf{overlap}\) which computes inner products \(\langle\Psi(d_{1}),\Psi(d_{2})\rangle\) from descriptions \((d_{1},d_{2})\in\mathsf{Desc}_{n}^{2}\). This means that the description \(d\in\mathsf{Desc}_{n}\) of a state \(\Psi(d)\) must include phase information. For Gaussian states, the covariance matrix formalism has to be extended as discussed in Section 3. Our goal is to find classical simulation algorithms for circuits of the following form: 1. The initial state \(\Psi^{(0)}=\Psi\) is a superposition of the form \[\Psi=\sum_{j=1}^{\chi}\gamma_{j}\varphi_{j}\] of \(\chi\) states \(\{\varphi_{j}\}_{j=1}^{\chi}\subset\mathcal{D}\) with complex coefficients \(\{\gamma_{j}\}_{j=1}^{\chi}\). We assume that this decomposition is explicitly provided as an input to the classical algorithm in the form of a \(\chi\)-tuple \(\{(\gamma_{j},d_{j})\}_{j=1}^{\chi}\), where \(d_{j}\) is an efficient classical description of the state \(\varphi_{j}\). 2.
In each timestep \(t\in[T]\), 1. either an evolution operation \(E\in\mathcal{E}\), or 2. a measurement \(M\in\mathcal{M}\) is applied to the state. We assume that corresponding efficient descriptions of \(E\) respectively \(M\) are given to the classical simulation algorithm. The algorithms \((\mathsf{evolve},\mathsf{measureprob},\mathsf{postmeasure},\mathsf{overlap})\) associated with the model \((\mathcal{D},\mathcal{E},\mathcal{M})\) then immediately give rise to algorithms \((\chi\mathsf{evolve},\chi\mathsf{measureprob},\chi\mathsf{postmeasure})\) for simulating a more general circuit: At each time step \(t\in[T]\), the resulting algorithm maintains the data \(\{\gamma_{j}^{(t)},d_{j}^{(t)}\}_{j=1}^{\chi}\) describing the instantaneous state \(\Psi^{(t)}\) after step \(t\) as a linear combination \[\Psi^{(t)}=\sum_{j=1}^{\chi}\gamma_{j}^{(t)}\Psi(d_{j}^{(t)})\] of vectors belonging to the dictionary \(\mathcal{D}\), and the subroutines \((\chi\mathsf{evolve},\chi\mathsf{measureprob},\chi\mathsf{postmeasure})\) are used to successively update this description (respectively sample from corresponding measurement outcomes). Before describing the extended routines \(\chi\mathsf{evolve},\chi\mathsf{measureprob},\chi\mathsf{postmeasure}\) in more detail, it is convenient to introduce a subroutine \(\chi\mathsf{norm}\) which takes as input a tuple \(\{(\gamma_{j},d_{j})\}_{j=1}^{\chi}\in(\mathbb{C}\times\mathsf{Desc}_{n})^{\chi}\) and outputs the squared norm \(\|\sum_{j=1}^{\chi}\gamma_{j}\Psi(d_{j})\|^{2}\). It is clear that such a primitive can be realized naively by using the algorithm overlap for computing inner products. This naive implementation, which we refer to as \(\chi\mathsf{naivenorm}\), requires time \[\mathsf{time}(\chi\mathsf{naivenorm})=\chi^{2}\mathsf{time}(\mathsf{overlap})\.\] Let us now describe the procedures \(\chi\mathsf{evolve},\chi\mathsf{measureprob}\) and \(\chi\mathsf{postmeasure}\), building on a (general) norm computation subroutine \(\chi\mathsf{norm}\). 1. If an evolution operation \(E\in\mathcal{E}\) with description \(d_{E}\) is applied at time \(t\), then the description is updated by setting \[\gamma_{j}^{(t)}=\gamma_{j}^{(t-1)}\qquad\text{ and }\qquad d_{j}^{(t)}=\mathsf{evolve}(d_{j}^{(t-1)},d_{E})\qquad\text{ for }\qquad j\in[\chi]\.\] This defines the algorithm \(\chi\mathsf{evolve}\). The runtime of this update is \[\mathsf{time}(\chi\mathsf{evolve})=\chi\cdot\mathsf{time}(\mathsf{evolve})\.\] 2. If a (projective) measurement \(M=\{M_{s}\}_{s\in\mathcal{M}}\in\mathcal{M}\) with description \(d_{M}\) is applied to the state at time \(t\), then the probability of obtaining \(s\in\mathcal{M}\) is given by \[p(s|\Psi^{(t-1)})=\|M_{s}\Psi^{(t-1)}\|^{2}=\left\|\sum_{j=1}^{\chi}\gamma_{j }^{(t-1)}\sqrt{p(s|\Psi_{j}^{(t-1)})}\Psi_{j}^{(t-1)}(M,s)\right\|^{2}\.\] Here the probability \(p(s|\Psi_{j}^{(t-1)})=\|M_{s}\Psi_{j}^{(t-1)}\|^{2}=\mathsf{measureprob}(d_{j} ^{(t-1)},d_{M},s)\) of obtaining outcome \(s\) when measuring \(\Psi_{j}^{(t-1)}=\Psi(d_{j}^{(t-1)})\) can be efficiently obtained from the description \(d_{j}^{(t-1)}\) of \(\Psi_{j}^{(t-1)}\) and the description \(d_{M}\) of \(M\). (Summands \(j\) where the probability \(p(s|\Psi_{j}^{(t-1)})\) vanishes can be omitted from this sum.) Similarly, a description \(d_{j}(s)=\mathsf{postmeasure}(d_{j}^{(t-1)},d_{M},s)\) of the (normalized) post-measurement state \(\Psi_{j}^{(t-1)}(M,s)=\frac{1}{\sqrt{p(s|\Psi_{j}^{(t-1)})}}M_{s}\Psi_{j}^{(t- 1)}\) (when measuring \(\Psi_{j}^{(t-1)}\)) can be obtained efficiently.
In particular, setting \(\tilde{\gamma}_{j}=\gamma_{j}^{(t-1)}\sqrt{p(s|\Psi_{j}^{(t-1)})}\), we conclude that the outcome probability \[p(s|\Psi^{(t-1)})=\left\|\sum_{j=1}^{\chi}\tilde{\gamma}_{j}\Psi(d_{j}(s)) \right\|^{2}\] (62) is the squared norm of a superposition of elements from \(\mathcal{D}\). This expression (together with the norm computation routine \(\chi\mathsf{norm}\)) defines the algorithm \(\chi\mathsf{measureprob}\). In particular, given \(\{\tilde{\gamma}_{j},d_{j}(s),p(s|\Psi_{j}^{(t-1)})\}_{j=1}^{\chi}\), the probability \(p(s|\Psi)\) can be evaluated (exactly) in runtime \(\mathsf{time}(\chi\mathsf{norm})\). Since \(\chi\mathsf{measureprob}\) first has to compute the descriptions \(d_{j}(s)\) of the post-measurement states \(\Psi_{j}^{(t-1)}(M,s)\) and the probabilities \(\{p(s|\Psi_{j}^{(t-1)})\}_{j=1}^{\chi}\), its runtime is \[\mathsf{time}(\chi\mathsf{measureprob})=\mathsf{time}(\chi\mathsf{ norm})+\chi\cdot(\mathsf{time}(\mathsf{measureprob})+\mathsf{time}(\mathsf{postmeasure}))\.\] One can easily verify that the post-measurement state after time step \(t\) is given by \[\Psi^{(t)}=\sum_{j=1}^{\chi}\gamma_{j}^{(t)}\Psi(d_{j}^{(t)})\,\] where \[\gamma_{j}^{(t)}=\frac{\tilde{\gamma}_{j}}{\sqrt{p(s|\Psi^{(t-1)})}}\qquad \text{ and }\qquad d_{j}^{(t)}=d_{j}(s)\.\] In particular, this means that (similarly as for \(\chi\mathsf{measureprob}\)) we have an algorithm \(\chi\mathsf{postmeasure}\) which, given \(\{(\gamma_{j}^{(t-1)},d_{j}^{(t-1)})\}_{j=1}^{\chi}\) and \(p(s|\Psi^{(t-1)})\), computes a description of the post-measurement state in time \[\mathsf{time}(\chi\mathsf{postmeasure})=\chi\cdot(\mathsf{time}( \mathsf{postmeasure})+\mathsf{time}(\mathsf{measureprob}))\.\] Given the ability to compute \(p(s|\Psi)\) and assuming, e.g., that the number \(|\mathcal{M}|\) of measurement outcomes is constant, one can then sample from this distribution (when the goal is to perform weak simulation) to get an outcome \(s\in\mathcal{M}\). Using the naive algorithm \(\chi\mathsf{naivenorm}\) for \(\chi\mathsf{norm}\) gives runtimes \[\mathsf{time}(\chi\mathsf{evolve}) =\chi\cdot\mathsf{time}(\mathsf{evolve})\] \[\mathsf{time}(\chi\mathsf{measureprob}) =\chi^{2}\cdot\mathsf{time}(\mathsf{overlap})+\chi\cdot( \mathsf{time}(\mathsf{postmeasure})+\mathsf{time}(\mathsf{measureprob})) \tag{63}\] \[\mathsf{time}(\chi\mathsf{postmeasure}) =\chi\cdot(\mathsf{time}(\mathsf{postmeasure})+\mathsf{time}( \mathsf{measureprob}))\] As a function of \(\chi\), this is dominated by the computation of the squared norm (62) in \(\chi\mathsf{measureprob}\) which takes time \(O(\chi^{2})\). ### Sparsification: Relating \(\mathcal{D}\)-extent to approximate \(\mathcal{D}\)-rank Algorithms whose complexity depends on the \(\mathcal{D}\)-extent \(\xi_{\mathcal{D}}(\Psi)\) instead of the (exact) \(\mathcal{D}\)-rank \(\chi_{\mathcal{D}}(\Psi)\) (see Eq. (3)) of the initial state \(\Psi\) can be obtained as follows. The idea consists in replacing \(\Psi\) by a state \(\tilde{\Psi}\) which is \(\delta\)-close to \(\Psi\) and has bounded \(\mathcal{D}\)-rank. More precisely, it relies on the following result which connects the \(\mathcal{D}\)-extent \(\xi_{\mathcal{D}}(\Psi)\) to the approximate \(\mathcal{D}\)-rank \(\chi_{\mathcal{D}}^{\delta}(\Psi)\) defined in Eq. (4). **Theorem 4.1** (Theorem 1 in [18]).: _Suppose \(\Psi=\sum_{j=1}^{m}\gamma_{j}\varphi_{j}\) is a decomposition of a normalized vector \(\Psi\) into a superposition of elements \(\{\varphi_{j}\}_{j=1}^{m}\) belonging to the dictionary \(\mathcal{D}\).
Then_ \[\chi_{\mathcal{D}}^{\delta}(\Psi)\leq 1+\|\gamma\|_{1}^{2}/\delta^{2}\] _where \(\|\gamma\|_{1}=\sum_{j=1}^{m}|\gamma_{j}|\) is the \(1\)-norm of \(\gamma\). In particular, we have the relationship_ \[\chi_{\mathcal{D}}^{\delta}(\Psi)\leq 1+\xi_{\mathcal{D}}(\Psi)/\delta^{2}\.\] In [18], this result was established for the dictionary \(\mathcal{D}=\mathsf{STAB}_{n}\) consisting of \(n\)-qubit stabilizer states. Inspection of the proof immediately shows that the statement is applicable to any dictionary \(\mathcal{D}\) (independently of, e.g., whether or not it is finite). In particular, Theorem 4.1 implies that in runtime upper bounds, the quantity \(\chi_{\mathcal{D}}\) can always be replaced by (the potentially much smaller quantity) \(\xi_{\mathcal{D}}(\Psi)/\delta^{2}\), at the cost of introducing an \(O(\delta)\)-error in \(L^{1}\)-distance in the sampled distribution. For example, using the naive norm estimation algorithm (i.e., inserting into (63)), this gives a quadratic scaling (for computing output probabilities) in \(\xi_{\mathcal{D}}(\Psi)\). Note that here we are assuming that a decomposition of \(\Psi\) with squared \(L^{1}\)-norm \(\|\gamma\|_{1}^{2}\) of coefficients achieving \(\xi_{\mathcal{D}}(\Psi)\) is given. ### Fast norm estimation and approximate simulation In pioneering work, Bravyi and Gosset [17] improved upon the algorithm sketched above in the case of stabilizer states. This was achieved by replacing the \(O(\chi^{2})\)-runtime (naive) estimation algorithm \(\chi\mathsf{naivenorm}\) for the norm of a superposition of stabilizer states by a probabilistic algorithm \(\chi\mathsf{fastnorm}\). With success probability at least \(1-p_{f}\), the algorithm \(\chi\mathsf{fastnorm}\) provides an estimate \(\hat{N}\) of the squared norm \(N=\|\Psi\|^{2}\) of a superposition \(\Psi=\sum_{j=1}^{\chi}\gamma_{j}\varphi_{j}\) of \(n\)-qubit stabilizer states \(\{\varphi_{j}\}_{j=1}^{\chi}\) with multiplicative error \(\epsilon\) (i.e., \(\hat{N}\in[(1-\epsilon)N,(1+\epsilon)N]\)), and has runtime \(O(\chi\cdot n^{3}\epsilon^{-2}p_{f}^{-1})\). The key observation underlying the algorithm is the fact that the norm of interest can be expressed as \[N=2^{n}\mathbb{E}_{\Theta}\left[|\langle\Theta,\Psi\rangle|^{2}\right]\, \tag{64}\] i.e., it is proportional to the expected (squared) overlap of \(\Psi\) with a state \(|\Theta\rangle\) drawn uniformly at random from the set of \(n\)-qubit stabilizer states. Given \(\{\gamma_{j},\varphi_{j}\}_{j=1}^{\chi}\), this algorithm proceeds by taking \(R=\lceil p_{f}^{-1}\epsilon^{-2}\rceil\) stabilizer states \(\Theta_{1},\ldots,\Theta_{R}\) chosen uniformly from the set of all stabilizer states, and producing the estimate \[\hat{N}=\frac{2^{n}}{R}\sum_{k=1}^{R}|\langle\Theta_{k},\Psi\rangle|^{2} \tag{65}\] of \(N\). Importantly, expression (65) can be computed from (the descriptions of) \(\{\Theta_{k}\}_{k=1}^{R}\), \(\{\varphi_{j}\}_{j=1}^{\chi}\) and the coefficients \(\{\gamma_{j}\}_{j=1}^{\chi}\), using \(\chi\cdot R\) calls of a subroutine overlap which computes the overlap of two stabilizer states (including phases). This is because each summand in (65) can be written as a sum \[|\langle\Theta_{k},\Psi\rangle|=\left|\sum_{j=1}^{\chi}\gamma_{j}\langle \Theta_{k},\varphi_{j}\rangle\right| \tag{66}\] of \(\chi\) such overlaps.
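The structure of the estimator (65) is easy to exhibit in code. In the following toy sketch we replace the stabilizer ensemble by uniformly random computational-basis states, for which the analogue of Eq. (64), namely \(\mathbb{E}_{x}[2^{n}|\langle x,\Psi\rangle|^{2}]=\|\Psi\|^{2}\), holds trivially; the actual algorithms of [17] and of Section 4.4 instead sample stabilizer respectively Gaussian states and evaluate each inner product with the subroutine overlap, since states there are not stored as amplitude vectors.

```python
import numpy as np

rng = np.random.default_rng(seed=7)

def estimate_norm_squared(psi, R):
    """Monte-Carlo estimate of ||Psi||^2 in the spirit of Eq. (65), with
    uniformly random basis states standing in for random stabilizer states."""
    dim = len(psi)                          # dim = 2^n
    samples = rng.integers(0, dim, size=R)
    return dim / R * sum(abs(psi[k]) ** 2 for k in samples)

# A random (unnormalized) 3-qubit superposition:
psi = rng.normal(size=8) + 1j * rng.normal(size=8)
exact = float(np.linalg.norm(psi) ** 2)
print(exact, estimate_norm_squared(psi, R=4000))   # the two values should be close
```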
The resulting runtime of this norm estimation algorithm is thus \(O(\chi\cdot R\cdot\mathsf{time}(\mathsf{overlap}))\), which amounts to the claimed runtime \(O(\chi\cdot n^{3}\epsilon^{-2}p_{f}^{-1})\). We note that a similar reasoning can be applied to any situation where the norm of a superposition of dictionary elements of interest can be expressed as in Eq. (64), i.e., as the expected squared inner product of \(\Psi\) with a state \(\Theta\) randomly chosen according to a suitable distribution over dictionary states. Specifically, as derived in Appendix B of [58] and discussed below (see Section 4.4), this is the case for the set of fermionic Gaussian states. The corresponding norm estimation algorithm then has a runtime of the form \[\begin{split}\mathsf{time}(\chi\mathsf{fastnorm})& =O(\chi\cdot R\cdot\mathsf{time}(\mathsf{overlap})+R\cdot \mathsf{time}(\mathsf{samplestate}))\\ &=O(p_{f}^{-1}\epsilon^{-2}(\chi\cdot\mathsf{time}(\mathsf{overlap})+ \mathsf{time}(\mathsf{samplestate})))\end{split} \tag{67}\] where \(\mathsf{samplestate}\) is an algorithm producing a description of a state \(\Theta\) drawn randomly from the appropriate distribution. Importantly, the runtime (67) is linear in \(\chi\), resulting in a linear dependence when replacing \(\chi\) by the extent \(\xi_{\mathcal{D}}(\Psi)\) as discussed in Section 4.2. Algorithms (\(\mathsf{approxevolve}\), \(\mathsf{approxmeasureprob}\), \(\mathsf{approxpostmeasure}\)) can now be obtained by using \(\chi\mathsf{fastnorm}\) in place of \(\chi\mathsf{norm}\). The algorithm \(\mathsf{approxevolve}\) is identical to \(\chi\mathsf{evolve}\) since it does not involve norm computations. In contrast, \(\mathsf{approxmeasureprob}\) is a probabilistic algorithm that can fail with probability \(p_{f}\) and both \(\mathsf{approxmeasureprob}\) and \(\mathsf{approxpostmeasure}\) introduce an error (in the sampled distribution and the post-measurement state, respectively). This is because \(\chi\mathsf{fastnorm}\) only estimates the norm of a superposition. Finally, replacing \(\chi\) by the \(\mathcal{D}\)-extent \(\xi_{\mathcal{D}}(\Psi)\), see Section 4.2, results in a triple of approximate algorithms (\(\mathsf{approxevolve}\), \(\mathsf{approxmeasureprob}\), \(\mathsf{approxpostmeasure}\)) with parameters \((\epsilon,\delta,p_{f})\) describing the quality of approximation and failure probability as discussed in Section 1.5. By construction, the runtimes of these algorithms are \[\begin{split}\mathsf{time}(\mathsf{approxevolve})&=O \left(\frac{\xi_{\mathcal{D}}(\Psi)}{\delta^{2}}\mathsf{time}(\mathsf{evolve}) \right)\\ \mathsf{time}(\mathsf{approxmeasureprob})&=O\left(p_{f} ^{-1}\epsilon^{-2}\left(\frac{\xi_{\mathcal{D}}(\Psi)}{\delta^{2}}\mathsf{ time}(\mathsf{overlap})+\mathsf{time}(\mathsf{samplestate})\right)\right)\\ &+O\left(\frac{\xi_{\mathcal{D}}(\Psi)}{\delta^{2}}\left( \mathsf{time}(\mathsf{postmeasure})+\mathsf{time}(\mathsf{measureprob})\right) \right)\\ \mathsf{time}(\mathsf{approxpostmeasure})&=O\left( \frac{\xi_{\mathcal{D}}(\Psi)}{\delta^{2}}\left(\mathsf{time}(\mathsf{postmeasure})+ \mathsf{time}(\mathsf{samplestate})\right)\right)\end{split}\;.
### Fermionic linear optics with non-Gaussian initial states

The algorithms described above can be adapted in a straightforward manner to the problem of classically simulating fermionic linear optics with non-Gaussian initial states: We can simply use the efficient description of Gaussian states introduced in Section 3 and make use of the associated procedures \(\mathsf{overlap}\), \(\mathsf{evolve}\) as well as \(\mathsf{measureprob}\) and \(\mathsf{postmeasure}\). In particular, observe that combining Eq. (63) with the runtimes \(O(n^{3})\) for the algorithms \(\mathsf{evolve}\), \(\mathsf{postmeasure}\) and \(\mathsf{overlap}\), and \(O(1)\) for \(\mathsf{measureprob}\) (see Section 3) results in the runtimes given in Table 3 for exact simulation. To get a linear scaling in the Gaussian extent \(\xi_{\mathcal{G}_{n}}(\Psi)\) of the initial state (for approximate simulation), the naive norm estimation needs to be replaced. A fast norm estimation scheme for superpositions of fermionic Gaussian states has been described in Appendix C of Ref. [58]: Consider the following probabilistic process defined for a superposition \(\Psi=\sum_{j=1}^{\chi}\gamma_{j}\varphi_{j}\), \(\varphi_{j}\in\mathcal{G}_{n}\), \(\gamma_{j}\in\mathbb{C}\) of \(n\)-mode fermionic Gaussian states:

1. Sample \(K\) random Gaussian states \(\{\Theta_{k}\}_{k=1}^{K}\) independently and identically from the distribution induced by picking a permutation \(\pi\in S_{2n}\) and a string \(y\in\{0,1\}^{n}\) uniformly at random and outputting \[\left|\Theta(\pi,y)\right\rangle=U_{R_{\pi}}\left|y\right\rangle.\] Here \(R_{\pi}\in O(2n)\) is the permutation matrix specified by the element \(\pi\in S_{2n}\).

2. Set \[\hat{N}=\frac{1}{K}\sum_{k=1}^{K}2^{n}|\langle\Theta_{k},\Psi\rangle|^{2}\. \tag{69}\]

The following was shown in [58].

**Lemma 4.2** (Lemma 10 in Ref. [58]).: _For any \(p_{f}\in[0,1]\) and \(\epsilon>0\), consider the probabilistic process described above with the choice \(K=\lceil 2\sqrt{n}\epsilon^{-2}p_{f}^{-1}\rceil\). Then the random variable \(\hat{N}\) satisfies_
\[(1-\epsilon)\|\Psi\|^{2}\leq\hat{N}\leq(1+\epsilon)\|\Psi\|^{2}\]
_with probability at least \(1-p_{f}\)._

A description of a state proportional to \(\Theta_{k}\) can be computed from the associated pair \((\pi_{k},y_{k})\in S_{2n}\times\{0,1\}^{n}\) using the subroutine \(\mathsf{describe}\), for each \(k\in[K]\), as follows (see Fig. 15 for pseudocode for the algorithm). The covariance matrix \(\Gamma_{k}=R_{\pi_{k}}\Gamma(|y_{k}\rangle)R_{\pi_{k}}^{\dagger}\) of such a state can be computed in time \(O(n^{3})\) from \((\pi_{k},y_{k})\), and applying \(\mathsf{describe}\) to \(\Gamma_{k}\) gives the desired description. We note that any such state can be used in place of \(\Theta_{k}\) since the expression (69) (and, in particular, (66)) does not depend on the global phase of \(\Theta_{k}\). With the definition (69), it follows that the probabilistic process described here can be simulated in time given by Eq. (67) using \(K\) calls to the subroutine \(\mathsf{samplestate}\) shown in Fig. 15, and subsequent use of \(\mathsf{overlap}\) to compute the empirical average (69). Because the runtimes of \(\mathsf{describe}\) and \(\mathsf{overlap}\) are both upper bounded by \(O(n^{3})\), this leads to an overall runtime of \(O\left(n^{7/2}\epsilon^{-2}p_{f}^{-1}\chi\right)\) of this algorithm for computing the estimate \(\hat{N}\) of \(\|\Psi\|^{2}\). We note that this conclusion about the runtime was also reached in Ref. [58], although the issue of a potential lack of a phase reference applicable throughout the computation was not considered there. This issue is resolved by our description of Gaussian states, see Section 3.

Figure 15: The algorithm \(\mathsf{samplestate}\) outputs a classical description of a state \(|\Theta(\pi,y)\rangle=U_{R_{\pi}}\,|y\rangle\) where \(\pi\in S_{2n}\) and \(y\in\{0,1\}^{n}\) are taken uniformly at random.
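As an illustration of the sampling step, the following sketch mirrors \(\mathsf{samplestate}\) of Fig. 15 at the level of covariance matrices. The block convention \(\Gamma(|y\rangle)=\bigoplus_{j}\big(\begin{smallmatrix}0&(-1)^{y_{j}}\\-(-1)^{y_{j}}&0\end{smallmatrix}\big)\) for occupation-number states is an assumption made here (conventions differ by signs), and the subroutine \(\mathsf{describe}\) of Section 3 is left abstract.

```python
import numpy as np

def gamma_of_basis_state(y):
    """Covariance matrix of |y>, assuming the block convention
    Gamma(|y>) = direct_sum_j [[0, (-1)^{y_j}], [-(-1)^{y_j}, 0]]."""
    n = len(y)
    G = np.zeros((2 * n, 2 * n))
    for j, bit in enumerate(y):
        s = (-1) ** bit
        G[2 * j, 2 * j + 1] = s
        G[2 * j + 1, 2 * j] = -s
    return G

def samplestate(n, rng=np.random.default_rng()):
    """Sample (pi, y) uniformly and return the covariance matrix
    R_pi Gamma(|y>) R_pi^T of the state |Theta(pi, y)> = U_{R_pi}|y>."""
    pi = rng.permutation(2 * n)                # uniform element of S_{2n}
    y = rng.integers(0, 2, size=n)             # uniform bit string
    R = np.eye(2 * n)[pi]                      # permutation matrix R_pi
    return R @ gamma_of_basis_state(y) @ R.T   # real orthogonal: R^dagger = R^T
```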
Combining this algorithm with the runtimes given in Eq. (68) and with the runtimes \(O(n^{3})\) for the algorithms \(\mathsf{evolve}\), \(\mathsf{postmeasure}\) and \(\mathsf{overlap}\), and \(O(1)\) for \(\mathsf{measureprob}\) (see Section 3) gives the runtimes claimed in Table 4 for the algorithms \(\mathsf{approxevolve}\), \(\mathsf{approxmeasureprob}\) and \(\mathsf{approxpostmeasure}\).

### Efficient additive-error strong simulation

In a different direction of generalization, building upon the work [17] and making innovative use of a concentration inequality by Hayes [60] for vector martingales, Ref. [26] gives a randomized algorithm which, for a state \(\Psi\) obtained by applying \(n\)-qubit Clifford gates and \(t\) (non-Clifford) \(T\)-gates to \(|0\rangle^{\otimes n}\), provides an additive-error estimate \(\hat{p}(x)\) for the probability \(p(x)=\|(\langle x|\otimes I^{\otimes(n-a)})|\Psi\rangle\|^{2}\) of observing \(a\) qubits in the state \(|x\rangle\), with \(x\in\{0,1\}^{a}\). The algorithm is based on a procedure by which the probability \(p(x)\) of interest is expressed in terms of the squared norm \(\big{\|}\big{(}\langle 0|^{\otimes t-r}\otimes I^{\otimes r}\big{)}\,W\,|\Psi\rangle\big{\|}^{2}\) of a partially projected state, where \(\Psi\) is a tensor product of \(t\) non-stabilizer single-qubit states (arising from gadgetization), \(W\) a certain Clifford unitary, and \(r\) a circuit-dependent integer. The failure probability of the constructed algorithm is then upper bounded (see [26, Theorem 3]) by an expression depending on \(p(x)\), the error \(\epsilon\), the stabilizer extent \(\xi_{\mathsf{STAB}_{n}}(\Psi)\) (taking the role of \(\chi\)) of the product state \(\Psi\), as well as two additional parameters that can be chosen freely, but enter into the (polynomial) runtime estimate. The described method of adapting fast algorithms for simulating Clifford circuits with non-stabilizer initial states can be applied in a similar manner to this algorithm, since this also reduces to computing inner products (including phases) between Gaussian states.

## 5 Multiplicativity of the Gaussian fidelity for \(4\) fermions

The main result of this section is a proof that the fermionic Gaussian fidelity is multiplicative for the tensor product of any two positive-parity \(4\)-fermion states. We begin in Section 5.1 by laying out some specific properties of \(4\)-fermion states. We discuss a Gaussianity condition specific to \(4\)-fermion states [56] and we write an explicit expression for any \(4\)-fermion state as a superposition of two orthogonal (Gaussian) states. This was first introduced in Refs. [54, 56]. In Section 5.2 we establish properties of the fermionic Gaussian fidelity for \(4\)-fermion states which are subsequently used in Section 5.3 to prove that the fermionic Gaussian fidelity is multiplicative for the tensor product of any two \(4\)-fermion states.

### Four-fermion Gaussian and non-Gaussian states

Key to our considerations is a certain antiunitary map \(\theta\) acting on \(\mathcal{H}^{4}_{+}\), the positive-parity subspace of \(4\) fermions spanned by \(\{\ket{x}\}_{x\in\{0,1\}_{+}^{4}}\).
It is defined by its action
\[\theta\ket{x}=(-1)^{\vartheta(x)}\ket{\overline{x}}\, \tag{70}\]
for \(x\in\{0,1\}_{+}^{4}\), on basis states (antilinearly extended to all of \(\mathcal{H}^{4}_{+}\)), where \(\vartheta(x)=x_{1}+x_{3}\), so that \((-1)^{\vartheta(x)}=(-1)^{x_{1}+x_{3}}=(-1)^{x_{2}+x_{4}}\) and \(\vartheta(\overline{x})\equiv\vartheta(x)\pmod{2}\). Here \(\overline{x}=(\overline{x}_{1},\ldots,\overline{x}_{4})\) is obtained by flipping each bit of \(x\). The relevant properties of this map are the following. We note that the following statement has been given in [56, Eq. (9)], along with a negative-parity version.

**Lemma 5.1** ([56]).: _A state \(\Psi\in\mathcal{H}^{4}_{+}\) is Gaussian if and only if_
\[\langle\Psi,\theta\Psi\rangle=0\.\]

Proof.: This follows from the Gaussianity criterion given in Lemma 2.3. We give the proof in Appendix A.

**Lemma 5.2**.: _We have \(\theta c_{j}c_{k}=c_{j}c_{k}\theta\) for all \(j,k\in[8]\)._

Proof.: See Appendix B.
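The criterion of Lemma 5.1 is easy to check numerically on the \(8\)-dimensional even-parity subspace. The following sketch uses the sign convention \(\vartheta(x)=x_{1}+x_{3}\) of Eq. (70); the example states are chosen here for illustration only.

```python
import numpy as np
from itertools import product

# even-weight 4-bit strings spanning H^4_+
even = [x for x in product((0, 1), repeat=4) if sum(x) % 2 == 0]
idx = {x: k for k, x in enumerate(even)}

def theta(psi):
    """Antiunitary map of Eq. (70): theta|x> = (-1)^{x1+x3} |x-bar>,
    extended antilinearly (complex conjugation of coefficients)."""
    out = np.zeros(8, dtype=complex)
    for x, k in idx.items():
        xbar = tuple(1 - b for b in x)
        out[idx[xbar]] += (-1) ** (x[0] + x[2]) * np.conj(psi[k])
    return out

def is_gaussian(psi, tol=1e-12):
    """Lemma 5.1: psi in H^4_+ is Gaussian iff <psi, theta psi> = 0."""
    return abs(np.vdot(psi, theta(psi))) < tol

vac = np.zeros(8, dtype=complex); vac[idx[(0, 0, 0, 0)]] = 1
print(is_gaussian(vac))                    # True: |0000> is Gaussian
paired = vac.copy(); paired[idx[(1, 1, 0, 0)]] = 1; paired /= np.sqrt(2)
print(is_gaussian(paired))                 # True: a paired (BCS-type) state
cat = vac.copy(); cat[idx[(1, 1, 1, 1)]] = 1; cat /= np.sqrt(2)
print(is_gaussian(cat))                    # False: (|0000>+|1111>)/sqrt(2)
```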
The following result was first shown in Ref. [54].

**Lemma 5.3** ([54]).: _Let \(\Psi\in\mathcal{H}^{4}_{+}\) be a unit vector. Then there are two orthogonal unit vectors \(\Psi_{1},\Psi_{2}\in\mathcal{H}^{4}_{+}\), \(\varphi\in[0,2\pi)\) and \(a\in[0,1/\sqrt{2}]\) such that_
\[\theta\Psi_{j}=\Psi_{j}\qquad\text{ for }\qquad j\in[2] \tag{71}\]
_and_
\[\Psi=e^{i\varphi}\left(\sqrt{1-a^{2}}\Psi_{1}+ia\Psi_{2}\right). \tag{72}\]

Proof.: We first argue that it suffices to consider the case where \(\Psi\) satisfies
\[\langle\Psi,\theta\Psi\rangle\in\mathbb{R}. \tag{73}\]
This is because
\[\langle(e^{i\varphi}\Psi),\theta(e^{i\varphi}\Psi)\rangle=e^{-2i\varphi}\langle\Psi,\theta\Psi\rangle\qquad\text{ for every }\qquad\varphi\in[0,2\pi)\,\]
which implies that (73) can be ensured by replacing \(\Psi\) with \(e^{i\varphi}\Psi\) for a suitably chosen \(\varphi\in[0,2\pi)\). Let \(\Psi\) be such that (73) holds. We define "real" and "imaginary" parts of \(\Psi\) by the expressions
\[\Psi_{R}=\frac{1}{2}\left(\Psi+\theta\Psi\right)\qquad\qquad\Psi_{I}=\frac{1}{2i}\left(\Psi-\theta\Psi\right)\.\]
It follows immediately from this definition that
\[\Psi=\Psi_{R}+i\Psi_{I}\]
and
\[\theta\Psi_{R}=\Psi_{R}\qquad\qquad\theta\Psi_{I}=\Psi_{I}\]
because \(\theta\) is antiunitary and an involution. Furthermore, Eq. (73) implies that the vectors \(\Psi_{R},\Psi_{I}\) are orthogonal: We have
\[4i\langle\Psi_{R},\Psi_{I}\rangle=\langle\Psi+\theta\Psi,\Psi-\theta\Psi\rangle=\|\Psi\|^{2}-\|\theta\Psi\|^{2}+\langle\theta\Psi,\Psi\rangle-\langle\Psi,\theta\Psi\rangle=-2i\mathsf{Im}\langle\Psi,\theta\Psi\rangle=0\]
where we used that \(\theta\) is an antiunitary in the first step, and assumption (73) in the last step. The claim now follows by setting
\[(a,\Psi_{1},\Psi_{2})=\begin{cases}\left(\|\Psi_{I}\|,\frac{\Psi_{R}}{\|\Psi_{R}\|},\frac{\Psi_{I}}{\|\Psi_{I}\|}\right)&\text{if}\qquad\|\Psi_{I}\|\leq 1/\sqrt{2}\\ \left(\|\Psi_{R}\|,\frac{\Psi_{I}}{\|\Psi_{I}\|},-\frac{\Psi_{R}}{\|\Psi_{R}\|}\right)&\text{otherwise}\,\end{cases}\]
absorbing, in the second case, an additional factor of \(i\) into the phase \(e^{i\varphi}\).
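The decomposition of Lemma 5.3 is straightforward to compute numerically. The sketch below reuses `theta` and the states from the previous snippet; the phase-fixing step via the argument of \(\langle\Psi,\theta\Psi\rangle\) is an implementation choice made here, not taken from the text.

```python
def lemma53_decomposition(psi):
    """(a, Psi_R, Psi_I) of Lemma 5.3 for a unit vector psi in H^4_+."""
    z = np.vdot(psi, theta(psi))
    alpha = 0.5 * np.angle(z) if abs(z) > 1e-14 else 0.0
    psi0 = np.exp(1j * alpha) * psi           # makes <psi0, theta psi0> real
    psi_R = 0.5 * (psi0 + theta(psi0))        # theta-invariant components
    psi_I = (psi0 - theta(psi0)) / 2j
    a = min(np.linalg.norm(psi_I), np.linalg.norm(psi_R))
    return a, psi_R, psi_I

for name, st in (("vacuum", vac), ("cat", cat)):
    a, _, _ = lemma53_decomposition(st)
    print(name, a, 0.5 + a * np.sqrt(1 - a**2))
# vacuum: a = 1/sqrt(2), so f = 1; cat: a = 0, so f = 1/2
# (f = 1/2 + a*sqrt(1-a^2) is the quantity of Theorem 5.4 below)
```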
**Theorem 5.4** ([54, 56]).: _Let \(\Psi\in\mathcal{H}^{4}_{+}\) be an arbitrary unit vector. Then there are a Gaussian pure state \(\Psi_{g}\in\mathcal{G}_{4}^{+}\), \(\varphi\in[0,2\pi)\) and \(f\in[1/2,1]\) such that the state \(\theta\Psi_{g}\) is Gaussian and orthogonal to \(\Psi_{g}\) and_
\[\Psi=e^{i\varphi}\left(\sqrt{f}\Psi_{g}+\sqrt{1-f}\theta\Psi_{g}\right). \tag{74}\]
_The triple \((\Psi_{g},\varphi,f)\) is uniquely defined by \(\Psi\), i.e., a function of \(\Psi\). Furthermore, the quantity \(f=f(\Psi)\) is invariant under the action of Gaussian unitaries associated with special orthogonal rotations: We have_
\[f(U\Psi)=f(\Psi)\qquad\text{ for any Gaussian unitary }U=U_{R}\text{ with }R\in SO(2n). \tag{75}\]

Proof.: Let \(\Psi\in\mathcal{H}^{4}_{+}\) be an arbitrary unit vector. Let \(\Psi_{1},\Psi_{2}\in\mathcal{H}^{4}_{+}\), \(\varphi\in[0,2\pi)\) and \(a\in[0,1/\sqrt{2}]\) be as in Lemma 5.3. Define
\[\Psi_{g}^{\pm}=\frac{1}{\sqrt{2}}\left(\Psi_{1}\pm i\Psi_{2}\right)\.\]
Then
\[\theta\Psi_{g}^{-}=\Psi_{g}^{+} \tag{76}\]
because of property (71). It follows that
\[\langle\Psi_{g}^{-},\Psi_{g}^{+}\rangle=\langle\Psi_{g}^{-},\theta\Psi_{g}^{-}\rangle=\frac{1}{2}\left(\|\Psi_{1}\|^{2}+2i\mathsf{Re}\langle\Psi_{1},\Psi_{2}\rangle-\|\Psi_{2}\|^{2}\right)=0\]
since \(\Psi_{1}\) and \(\Psi_{2}\) are orthogonal unit vectors. Using (76) and the orthogonality of \(\Psi_{g}^{-},\Psi_{g}^{+}\) implies that
\[\langle\Psi_{g}^{+},\theta\Psi_{g}^{+}\rangle=\langle\Psi_{g}^{+},\Psi_{g}^{-}\rangle=0\]
and similarly (because \(\theta\) is an involution)
\[\langle\Psi_{g}^{-},\theta\Psi_{g}^{-}\rangle=\langle\Psi_{g}^{-},\Psi_{g}^{+}\rangle=0\.\]
According to the Gaussianity criterion in Lemma 5.1, we conclude that both \(\Psi_{g}^{+}\) and \(\Psi_{g}^{-}\) are Gaussian. Rewriting Eq. (72) by expressing \((\Psi_{1},\Psi_{2})\) in terms of \((\Psi_{g}^{+},\Psi_{g}^{-})\) gives
\[\Psi=e^{i\varphi}\left(\sqrt{f}\Psi_{g}^{+}+\sqrt{1-f}\Psi_{g}^{-}\right)\qquad\text{ where }\qquad f=\frac{1}{2}+a\sqrt{1-a^{2}}\.\]
The claim (74) now follows with (76), setting \(\Psi_{g}=\Psi_{g}^{+}\) (note that \(\theta\Psi_{g}^{+}=\Psi_{g}^{-}\) since \(\theta\) is an involution). It remains to show property (75) of the function \(f\). This follows immediately from the fact that the antiunitary \(\theta\) commutes with all quadratic monomials \(c_{j}c_{k}\) of Majorana operators (see Lemma 5.2), and hence with any Gaussian unitary \(U=U_{R}\) with \(R\in SO(2n)\), i.e., \(U\theta=\theta U\). Retracing the steps of the proof, it is easy to check that if \((\Psi_{1},\Psi_{2})\) are the states of Lemma 5.3, and \(\Psi_{g}\) the state in expression (74) for \(\Psi\), then the corresponding states \((\Psi_{1}^{\prime},\Psi_{2}^{\prime})\) and \(\Psi_{g}^{\prime}\) for the state \(\Psi^{\prime}=U\Psi\) are given by \(\Psi_{j}^{\prime}=U\Psi_{j}\) for \(j\in[2]\) and \(\Psi_{g}^{\prime}=U\Psi_{g}\), respectively. This implies the claim.

### The Gaussian fidelity for 4-fermion states

For a subset \(E\subset\{0,1\}^{4}\), we define \(\overline{E}=\{\overline{x}\mid x\in E\}\). We also write \(\Pi_{E}=\sum_{x\in E}|x\rangle\langle x|\) for the projection onto the span of \(\{|x\rangle\}_{x\in E}\).

**Lemma 5.5**.: _Let \(E\subset\{0,1\}^{4}_{+}\), \(|E|=4\) be a subset of even-weight strings such that \(E\cup\overline{E}=\{0,1\}^{4}_{+}\). Let \(f(\Psi)\in[1/2,1]\) be defined as in Theorem 5.4. Then_
\[\left\|\Pi_{E}\Psi\right\|^{2}\leq f(\Psi)\qquad\text{ for any unit vector }\qquad\Psi\in\mathcal{H}^{4}_{+}. \tag{77}\]
Proof.: Let \(f=f(\Psi)\in[1/2,1]\), \(\varphi\in[0,2\pi)\) and \(\Psi_{g}\in\mathcal{G}_{4}^{+}\) be as in Theorem 5.4 such that
\[\Psi=e^{i\varphi}\left(\sqrt{f}\Psi_{g}+\sqrt{1-f}\theta\Psi_{g}\right). \tag{78}\]
We define
\[\alpha_{x}=\langle x,\Psi_{g}\rangle\qquad\qquad\beta_{x}=\langle x,\theta\Psi_{g}\rangle\qquad\text{ for every }\qquad x\in E\.\]
We claim that we have the identities
\[\sum_{x\in E}(|\alpha_{x}|^{2}+|\beta_{x}|^{2})=1 \tag{79}\]
\[\sum_{x\in E}\overline{\alpha_{x}}\beta_{x}=0. \tag{80}\]
Observe that these two identities immediately imply (77): Using expression (78), we have
\[\|\Pi_{E}\Psi\|^{2}=\sum_{x\in E}|\langle x,\Psi\rangle|^{2}=\sum_{x\in E}|\sqrt{f}\alpha_{x}+\sqrt{1-f}\beta_{x}|^{2}=\left\|\sqrt{f}\vec{\alpha}+\sqrt{1-f}\vec{\beta}\right\|^{2} \tag{81}\]
where we defined the vectors \(\vec{\alpha}=(\alpha_{x})_{x\in E},\vec{\beta}=(\beta_{x})_{x\in E}\in\mathbb{C}^{4}\). Since (79) and (80) are equivalent to the statement that
\[\|\vec{\alpha}\|^{2}+\|\vec{\beta}\|^{2}=1 \tag{82}\]
\[\langle\vec{\alpha},\vec{\beta}\rangle=0, \tag{83}\]
we obtain
\[\left\|\sqrt{f}\vec{\alpha}+\sqrt{1-f}\vec{\beta}\right\|^{2}=f\|\vec{\alpha}\|^{2}+(1-f)\|\vec{\beta}\|^{2}\leq\max\{f,1-f\}=f \tag{84}\]
by the Pythagorean theorem in \(\mathbb{C}^{4}\) (using (83)) and by maximizing over \((\vec{\alpha},\vec{\beta})\) satisfying (82). Inserting (84) into (81) results in the upper bound (77) on \(\|\Pi_{E}\Psi\|^{2}\).

It remains to prove the claimed identities (79) and (80). We argue that these are a consequence of the fact that \(\Psi_{g}\) is normalized and Gaussian, respectively. Observe that by definition of the antiunitary \(\theta\), we have
\[\beta_{x}=\langle x,\theta\Psi_{g}\rangle=(-1)^{\vartheta(\overline{x})}\overline{\langle\overline{x},\Psi_{g}\rangle}\qquad\text{ for all }\qquad x\in E\.\]
In particular, this implies that
\[|\beta_{x}|^{2}=|\langle\overline{x},\Psi_{g}\rangle|^{2}\qquad\text{ for every }\qquad x\in E. \tag{85}\]
Eq. (79) now follows from the fact that \(\Psi_{g}\) is normalized and positive-parity: we have
\[\sum_{x\in E}\left(|\alpha_{x}|^{2}+|\beta_{x}|^{2}\right)=\sum_{x\in E}\left(|\langle x,\Psi_{g}\rangle|^{2}+|\langle\overline{x},\Psi_{g}\rangle|^{2}\right)=\|\Psi_{g}\|^{2}=1\]
where we used the definition of \(\alpha_{x}\) and (85) in the first step, and the assumption \(E\cup\overline{E}=\{0,1\}_{+}^{4}\) in the second identity. Similarly, Eq. (80) is a consequence of the fact that \(\Psi_{g}\) is Gaussian: extending the definitions of \(\alpha_{x},\beta_{x}\) to all \(x\in\{0,1\}_{+}^{4}\) (the terms for \(x\) and \(\overline{x}\) coincide since \(\vartheta(\overline{x})=\vartheta(x)\), so the full sum equals twice the sum over \(E\)), we have
\[\sum_{x\in\{0,1\}_{+}^{4}}\overline{\alpha_{x}}\beta_{x}=\sum_{x\in\{0,1\}_{+}^{4}}\langle\Psi_{g},x\rangle\langle x,\theta\Psi_{g}\rangle=\langle\Psi_{g},\theta\Psi_{g}\rangle=0\]
where we used the definition of \(\alpha_{x}\) and \(\beta_{x}\) in the first step, the fact that \(\Psi_{g}\in\mathcal{H}_{+}^{4}\) in the second step, and the characterization of Gaussianity from Lemma 5.1 in the last identity.

Lemma 5.5 immediately implies the following expression for the fermionic Gaussian fidelity. We note that a more general expression for the "Gaussian fidelity" of a mixed state has previously been obtained in [56]. The proof for pure states given here is more elementary and illustrates the use of Lemma 5.5.
**Theorem 5.6** (Fermionic Gaussian fidelity for 4-mode pure states [54, 56]).: _Let \(\Psi\in\mathcal{H}_{+}^{4}\) be a unit vector. Let \(f(\Psi)\in[1/2,1]\) be defined as in Theorem 5.4. Then_
\[F_{\mathcal{G}_{4}^{+}}(\Psi)=f(\Psi)\.\]

Proof.: Let \(f=f(\Psi)\), \(\varphi\in[0,2\pi)\) and \(\Psi_{g}\in\mathcal{G}_{4}^{+}\) be as in Theorem 5.4. Then we have
\[F_{\mathcal{G}_{4}^{+}}(\Psi)\geq\Big{|}\langle\Psi_{g},e^{i\varphi}\left(\sqrt{f}\Psi_{g}+\sqrt{1-f}\,\theta\Psi_{g}\right)\rangle\Big{|}^{2}=f\]
since \(\theta\Psi_{g}\) is orthogonal to \(\Psi_{g}\). It thus suffices to show the upper bound
\[F_{\mathcal{G}_{4}^{+}}(\Psi)\leq f. \tag{86}\]
Let \(\Phi_{g}\in\mathcal{G}_{4}^{+}\) be an arbitrary positive-parity Gaussian pure state. Then there is a Gaussian unitary \(U=U_{R}\) with \(R\in SO(2n)\) and a phase \(\mu\in[0,2\pi)\) such that \(\Phi_{g}=e^{i\mu}U\left|0_{F}\right\rangle\). We will use any subset \(E\subset\{0,1\}_{+}^{4}\) of even-weight strings as in Lemma 5.5 with the additional property that \(0000\in E\), e.g., \(E=\{0000,1100,1010,1001\}\). Then \(\left|0_{F}\right\rangle=\Pi_{E}\left|0_{F}\right\rangle\) is in the image of \(\Pi_{E}\). It follows that
\[\begin{split}\left|\langle\Phi_{g},\Psi\rangle\right|&=\left|\langle 0_{F},U^{\dagger}\Psi\rangle\right|\\&=\left|\langle\Pi_{E}0_{F},U^{\dagger}\Psi\rangle\right|\\&=\left|\langle 0_{F},\Pi_{E}U^{\dagger}\Psi\rangle\right|\\&\leq\left\|\Pi_{E}U^{\dagger}\Psi\right\|\\&\leq\sqrt{f(U^{\dagger}\Psi)}\,\end{split}\]
where we used the Cauchy-Schwarz inequality in the penultimate step, and Lemma 5.5 applied to the state \(U^{\dagger}\Psi\). Since \(\Phi_{g}\in\mathcal{G}_{4}^{+}\) was arbitrary, the claimed inequality (86) follows by taking the square and using that \(f(U^{\dagger}\Psi)=f(\Psi)\), see Eq. (75) of Theorem 5.4.

Combining Lemma 5.5 with Theorem 5.6 yields the following statement, which directly relates the weight of a state on certain subspaces to the fermionic Gaussian fidelity. It will be our main technical tool in the following.

**Corollary 5.7**.: _Let \(E\subset\{0,1\}_{+}^{4}\), \(|E|=4\) be a subset of even-weight strings such that \(E\cup\overline{E}=\{0,1\}_{+}^{4}\). Then_
\[\|\Pi_{E}\Psi\|^{2}\leq F_{\mathcal{G}_{4}^{+}}(\Psi)\qquad\text{ for any unit vector }\qquad\Psi\in\mathcal{H}_{+}^{4}\.\]
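Theorem 5.6 makes the \(4\)-fermion Gaussian fidelity directly computable. Combining the decomposition of Lemma 5.3 with \(f=\frac{1}{2}+a\sqrt{1-a^{2}}\) yields the closed form \(f(\Psi)=\big(1+\sqrt{1-|\langle\Psi,\theta\Psi\rangle|^{2}}\big)/2\); this rewriting is a derived convenience used in the sketch below (it is not spelled out in the text), which also spot-checks Corollary 5.7 on a random state, reusing `theta` and `idx` from the earlier snippets.

```python
def gaussian_fidelity_4(psi):
    """F_{G_4^+}(psi) = f(psi) by Theorem 5.6, in the closed form
    (1 + sqrt(1 - |<psi, theta psi>|^2)) / 2 implied by Lemma 5.3."""
    z = abs(np.vdot(psi, theta(psi)))
    return 0.5 * (1.0 + np.sqrt(max(0.0, 1.0 - z * z)))

E = [(0, 0, 0, 0), (1, 1, 0, 0), (1, 0, 1, 0), (1, 0, 0, 1)]  # E u E-bar = {0,1}^4_+
rng = np.random.default_rng(7)
v = rng.normal(size=8) + 1j * rng.normal(size=8)
v /= np.linalg.norm(v)                      # random unit vector in H^4_+
weight_E = sum(abs(v[idx[x]]) ** 2 for x in E)
print(weight_E <= gaussian_fidelity_4(v) + 1e-12)   # Corollary 5.7: True
```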
### Multiplicativity of the Gaussian fidelity for 4-fermion states

Here we prove that the fermionic Gaussian fidelity is multiplicative for 4-fermion states (see Theorem 5.10). We use two intermediate results stated as Lemmas 5.8 and 5.9. In Lemma 5.8, we bound the overlap of a tensor product of two (arbitrary) positive-parity pure states \(\Psi_{A},\Psi_{B}\) with a state \(\Phi\) written as a Schmidt decomposition of a bipartite fermionic pure state. In Theorem 5.10, this result is used to bound the fermionic Gaussian fidelity. More specifically, Lemma 5.9 is used to upper bound the Schmidt coefficients, giving the multiplicativity result for the Gaussian fidelity.

**Lemma 5.8**.: _Let \(\{m_{x}\}_{x\in\{0,1\}^{4}}\subset\mathbb{C}\) be arbitrary. Define_
\[|\Phi\rangle=\sum_{x\in\{0,1\}^{4}}m_{x}\,|x,x\rangle\in\mathcal{H}_{+}^{8}. \tag{87}\]
_Let \(E\subset\{0,1\}_{+}^{4}\), \(|E|=4\) be a subset of even-weight strings such that \(E\cup\overline{E}=\{0,1\}_{+}^{4}\). Then_
\[|\langle\Phi,\Psi_{A}\tilde{\otimes}\Psi_{B}\rangle|^{2}\leq F_{\mathcal{G}_{4}^{+}}(\Psi_{A})F_{\mathcal{G}_{4}^{+}}(\Psi_{B})\left(\max_{x\in E}|m_{x}|+\max_{y\in\overline{E}}|m_{y}|\right)^{2}\quad\text{ for all states }\Psi_{A},\Psi_{B}\in\mathcal{H}_{+}^{4}\.\]

Proof.: Because \(\Psi_{A}\) and \(\Psi_{B}\) are supported on \(\mathcal{H}_{+}^{4}\) by assumption, we have by Eq. (30)
\[\begin{split}\langle\Phi,\Psi_{A}\tilde{\otimes}\Psi_{B}\rangle&=\sum_{x\in\{0,1\}_{+}^{4}}m_{x}(-1)^{|x|}\langle x,\Psi_{A}\rangle\langle x,\Psi_{B}\rangle\\&=\sum_{x\in\{0,1\}_{+}^{4}}m_{x}e^{i\nu_{x}}\langle\Psi_{A},x\rangle\langle x,\Psi_{B}\rangle\end{split} \tag{88}\]
where \(\nu_{x}\) is defined by the identity
\[(-1)^{|x|}\langle x,\Psi_{A}\rangle=e^{i\nu_{x}}\langle\Psi_{A},x\rangle\qquad\text{ for }\qquad x\in\{0,1\}_{+}^{4}\.\]
Defining the operator
\[M_{\Omega}=\sum_{x\in\Omega}m_{x}e^{i\nu_{x}}|x\rangle\langle x|\]
for any subset \(\Omega\subset\{0,1\}_{+}^{4}\), it follows from Eq. (88) that
\[\langle\Phi,\Psi_{A}\tilde{\otimes}\Psi_{B}\rangle=\langle\Psi_{A},M_{E}\Psi_{B}\rangle+\langle\Psi_{A},M_{\overline{E}}\Psi_{B}\rangle. \tag{89}\]
Since \(M_{E}\) is supported on \(\mathsf{span}\{|x\rangle\}_{x\in E}\), we have \(\langle\Psi_{A},M_{E}\Psi_{B}\rangle=\langle\Pi_{E}\Psi_{A},M_{E}\Pi_{E}\Psi_{B}\rangle\). With the Cauchy-Schwarz inequality and the definition of the operator norm \(\|M_{E}\|\) we thus get
\[\begin{split}|\langle\Psi_{A},M_{E}\Psi_{B}\rangle|&\leq\|\Pi_{E}\Psi_{A}\|\cdot\|\Pi_{E}\Psi_{B}\|\cdot\|M_{E}\|\\&\leq\sqrt{F_{\mathcal{G}_{4}^{+}}(\Psi_{A})}\cdot\sqrt{F_{\mathcal{G}_{4}^{+}}(\Psi_{B})}\cdot\|M_{E}\|\,\end{split} \tag{90}\]
where we applied Corollary 5.7. Identical reasoning applies to \(\overline{E}\) and yields the inequality
\[|\langle\Psi_{A},M_{\overline{E}}\Psi_{B}\rangle|\leq\sqrt{F_{\mathcal{G}_{4}^{+}}(\Psi_{A})}\cdot\sqrt{F_{\mathcal{G}_{4}^{+}}(\Psi_{B})}\cdot\|M_{\overline{E}}\|. \tag{91}\]
Combining Eqs. (90), (91) with Eq. (89), we conclude that
\[\left|\langle\Phi,\Psi_{A}\tilde{\otimes}\Psi_{B}\rangle\right|\leq|\langle\Psi_{A},M_{E}\Psi_{B}\rangle|+|\langle\Psi_{A},M_{\overline{E}}\Psi_{B}\rangle|\leq\sqrt{F_{\mathcal{G}_{4}^{+}}(\Psi_{A})F_{\mathcal{G}_{4}^{+}}(\Psi_{B})}\left(\|M_{E}\|+\|M_{\overline{E}}\|\right)\.\]
Taking the square and observing that
\[\|M_{\Omega}\|=\max_{x\in\Omega}|m_{x}e^{i\nu_{x}}|=\max_{x\in\Omega}|m_{x}|\]
gives the claim.

The following lemma will be useful to prove the main theorem.

**Lemma 5.9**.: _The function_
\[f(\theta,x)=\prod_{j=1}^{4}(\cos\theta_{j})^{1-x_{j}}(\sin\theta_{j})^{x_{j}}\qquad\text{ for }\qquad\theta=(\theta_{1},\ldots,\theta_{4})\in\mathbb{R}^{4}\text{ and }x\in\{0,1\}^{4}\]
_satisfies_
\[|f(\theta,x)|+|f(\theta,y)|\leq 1\quad\text{ for all }\theta\in\mathbb{R}^{4}\text{ and }\quad x,y\in\{0,1\}^{4}\text{ with }x,y\text{ even-weight and }x\neq y\.\]

Proof.: Because \(x\) and \(y\) both have even weight and \(x\neq y\), it suffices to consider the two cases \(|x-y|\in\{2,4\}\), where \(|x-y|\) denotes the Hamming distance. Consider first the case where \(|x-y|=2\). Without loss of generality, assume that \((x_{1},x_{2})=(y_{1},y_{2})\), \(x_{3}\neq y_{3}\), \(x_{4}\neq y_{4}\). Since translating \(\theta\) by \(-\pi/2\) interchanges \(|\sin\theta|\) and \(|\cos\theta|\), it suffices to show the claim for \(x=(0,0,0,0)\) and \(y=(0,0,1,1)\). In this case we have
\[\begin{split}|f(\theta,x)|+|f(\theta,y)|&=|\cos\theta_{1}\cos\theta_{2}\cos\theta_{3}\cos\theta_{4}|+|\cos\theta_{1}\cos\theta_{2}\sin\theta_{3}\sin\theta_{4}|\\&=|\cos\theta_{1}\cos\theta_{2}|\cdot(|\cos\theta_{3}\cos\theta_{4}|+|\sin\theta_{3}\sin\theta_{4}|)\\&\leq|\cos\theta_{1}\cos\theta_{2}|\\&\leq 1\,\end{split}\]
where the first inequality follows from the Cauchy-Schwarz inequality in \(\mathbb{R}^{2}\). Since \(\theta\in\mathbb{R}^{4}\) was arbitrary, this concludes the proof for \(|x-y|=2\). The proof for \(|x-y|=4\), i.e., \(y=\overline{x}\), proceeds similarly. Again it suffices to show the claim for \(x=(0,0,0,0)\). In this case
\[\begin{split}|f(\theta,x)|+|f(\theta,y)|&=|\cos\theta_{1}\cos\theta_{2}\cos\theta_{3}\cos\theta_{4}|+|\sin\theta_{1}\sin\theta_{2}\sin\theta_{3}\sin\theta_{4}|\\&\leq|\cos\theta_{1}\cos\theta_{2}|+|\sin\theta_{1}\sin\theta_{2}|\\&\leq 1\,\end{split}\]
where the first inequality follows from \(|\cos\theta_{3}\cos\theta_{4}|\leq 1\) and \(|\sin\theta_{3}\sin\theta_{4}|\leq 1\), and the second one from the Cauchy-Schwarz inequality.
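Lemma 5.9 is easy to stress-test numerically; the following brute-force check over random angles (reusing the list `even` from the earlier snippet) is an illustration only, not a proof.

```python
def schmidt_profile(th, x):
    """f(theta, x) = prod_j cos(theta_j)^{1-x_j} * sin(theta_j)^{x_j}."""
    return np.prod([np.cos(t) if b == 0 else np.sin(t) for t, b in zip(th, x)])

rng = np.random.default_rng(0)
pairs = [(x, y) for x in even for y in even if x != y]   # even-weight, x != y
worst = max(abs(schmidt_profile(th, x)) + abs(schmidt_profile(th, y))
            for th in rng.uniform(0, 2 * np.pi, size=(2000, 4))
            for x, y in pairs)
print(worst)        # stays <= 1, as Lemma 5.9 asserts
```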
**Theorem 5.10** (Multiplicativity of the fermionic Gaussian fidelity for 4-mode pure states).: _Let \(\mathcal{H}_{+}^{4}\) be the set of pure \(4\)-fermion states with positive parity and let \(\mathcal{G}_{n}^{+}\) be the set of pure \(n\)-fermion Gaussian states with positive parity. We have that_
\[F_{\mathcal{G}_{8}}(\Psi_{A}\tilde{\otimes}\Psi_{B})=F_{\mathcal{G}_{4}}(\Psi_{A})F_{\mathcal{G}_{4}}(\Psi_{B})\qquad\text{ for all }\qquad\Psi_{A},\Psi_{B}\in\mathcal{H}_{+}^{4}\.\]

Proof.: We first observe that \(\mathcal{H}^{8}\) is a direct sum of the four spaces \(\mathcal{H}^{4}_{+}\otimes\mathcal{H}^{4}_{+}\), \(\mathcal{H}^{4}_{+}\otimes\mathcal{H}^{4}_{-}\), \(\mathcal{H}^{4}_{-}\otimes\mathcal{H}^{4}_{+}\) and \(\mathcal{H}^{4}_{-}\otimes\mathcal{H}^{4}_{-}\). This is because states in these subspaces have different eigenvalues with respect to the corresponding parity operators on the factors (interpreted as Majorana monomials on \(\mathcal{H}^{8}\) these are the monomials \(c(1^{8}0^{8})\) and \(c(0^{8}1^{8})\)). It follows immediately that the overlap with a state of the form \(\Psi_{A}\tilde{\otimes}\Psi_{B}\in\mathcal{H}^{4}_{+}\tilde{\otimes}\mathcal{H}^{4}_{+}\) is maximized for a decomposition into states belonging to \(\mathcal{H}^{4}_{+}\tilde{\otimes}\mathcal{H}^{4}_{+}\) only. In particular, it follows that
\[F_{\mathcal{G}_{8}}(\Psi_{A}\tilde{\otimes}\Psi_{B})=F_{\mathcal{G}^{+}_{8}}(\Psi_{A}\tilde{\otimes}\Psi_{B})\qquad\text{ for all}\qquad\Psi_{A},\Psi_{B}\in\mathcal{H}^{4}_{+}\]
and by the same reasoning, we have
\[F_{\mathcal{G}_{4}}(\Psi)=F_{\mathcal{G}^{+}_{4}}(\Psi)\qquad\text{ for all}\qquad\Psi\in\mathcal{H}^{4}_{+}\.\]
We conclude that it suffices to show that
\[F_{\mathcal{G}^{+}_{8}}(\Psi_{A}\tilde{\otimes}\Psi_{B})=F_{\mathcal{G}^{+}_{4}}(\Psi_{A})F_{\mathcal{G}^{+}_{4}}(\Psi_{B})\qquad\text{ for all}\qquad\Psi_{A},\Psi_{B}\in\mathcal{H}^{4}_{+}\.\]
Let \(\Psi_{A},\Psi_{B}\in\mathcal{H}^{4}_{+}\) be arbitrary. The inequality \(F_{\mathcal{G}^{+}_{8}}(\Psi_{A}\tilde{\otimes}\Psi_{B})\geq F_{\mathcal{G}^{+}_{4}}(\Psi_{A})F_{\mathcal{G}^{+}_{4}}(\Psi_{B})\) follows trivially from the definition of fermionic Gaussian fidelity in Eq. (2), because \(\mathcal{G}^{+}_{4}\otimes\mathcal{G}^{+}_{4}\subseteq\mathcal{G}^{+}_{8}\). The inequality \(F_{\mathcal{G}^{+}_{8}}(\Psi_{A}\tilde{\otimes}\Psi_{B})\leq F_{\mathcal{G}^{+}_{4}}(\Psi_{A})F_{\mathcal{G}^{+}_{4}}(\Psi_{B})\) is a consequence of the Schmidt decomposition for fermionic states put forward in Ref. [61] and of Lemmas 5.8 and 5.9. According to Ref. [61], an arbitrary pure fermionic state \(\Phi\in\mathcal{H}^{2n}\) admits a Schmidt decomposition of the form
\[|\Phi\rangle=\sum_{x\in\{0,1\}^{n}}m_{x}|x,x\rangle\]
with
\[m_{x}=\prod_{j=1}^{n}(\cos\theta_{j})^{1-x_{j}}(-\sin\theta_{j})^{x_{j}}\qquad\text{with}\qquad\theta_{j}\in\mathbb{R}\text{ for }j\in[n]\.\]
With this definition of \(m_{x}\) for \(n=4\), an arbitrary state \(\Phi\in\mathcal{G}^{+}_{8}\) can be written as in Eq. (87) and the conditions for Lemma 5.8 apply.
We have
\[\begin{split}F_{\mathcal{G}^{+}_{8}}(\Psi_{A}\tilde{\otimes}\Psi_{B})&=\max_{\Phi\in\mathcal{G}^{+}_{8}}|\langle\Phi,\Psi_{A}\tilde{\otimes}\Psi_{B}\rangle|^{2}\\&\leq F_{\mathcal{G}^{+}_{4}}(\Psi_{A})F_{\mathcal{G}^{+}_{4}}(\Psi_{B})\left(\max_{x\in E}|m_{x}|+\max_{y\in\overline{E}}|m_{y}|\right)^{2}\end{split}\]
where \(E\subset\{0,1\}^{4}_{+}\) with \(|E|=4\) is a subset of even-weight strings such that \(E\cup\overline{E}=\{0,1\}^{4}_{+}\). Notice that \(x,y\) have even weight and that \(x\neq y\) because \(E\) and \(\overline{E}\) are disjoint sets. Identifying \(m_{x}\) (whose dependence on \(\theta\in\mathbb{R}^{4}\) is implicit) with \(f(\theta,x)\) in Lemma 5.9 (apart from a minus sign that is not relevant because we take the absolute value) we have
\[\begin{split}F_{\mathcal{G}^{+}_{8}}(\Psi_{A}\tilde{\otimes}\Psi_{B})&\leq F_{\mathcal{G}^{+}_{4}}(\Psi_{A})F_{\mathcal{G}^{+}_{4}}(\Psi_{B})\left(\max_{x\in E}|m_{x}|+\max_{y\in\overline{E}}|m_{y}|\right)^{2}\\&\leq F_{\mathcal{G}^{+}_{4}}(\Psi_{A})F_{\mathcal{G}^{+}_{4}}(\Psi_{B})\,\end{split}\]
giving the claim.

## 6 Multiplicativity of \(\mathcal{D}\)-fidelity implies that of \(\mathcal{D}\)-extent

In this section, we show that multiplicativity of the \(\mathcal{D}\)-fidelity implies multiplicativity of the \(\mathcal{D}\)-extent. In Section 6.1 we prove this for finite dictionaries: This follows immediately from the fact that \(F_{\mathcal{D}}(\Psi)\) and \(\xi_{\mathcal{D}}(\Psi)\) are related by (convex programming) duality. In Section 6.2, we extend these results to infinite, i.e., continuously parameterized dictionaries. We achieve this extension by using (finite) \(\epsilon\)-nets for the set of Gaussian states. Similar approaches have been applied in the signal processing context, see e.g., the work [62], which shows how to approximately solve atomic norm minimization problems for sparse recovery when the parameters indexing the dictionary lie in a small-dimensional space.

### Multiplicativity for finite dictionaries

We will restrict our attention to finite dictionaries in this section. For \(|\mathcal{D}|<\infty\), the \(\mathcal{D}\)-fidelity is related to the dual formulation of the \(\mathcal{D}\)-extent as (see [19, Eq. (3.2)] and [18, Theorem 4])
\[\xi_{\mathcal{D}}(\Psi)=\max_{y\in\mathcal{H}:F_{\mathcal{D}}(y)\leq 1}|\langle\Psi,y\rangle|^{2}\. \tag{92}\]
Let \(\mathcal{H}_{1},\mathcal{H}_{2}\) and \(\mathcal{H}_{3}\) be a triple of Hilbert spaces. Let \(\{\mathcal{D}_{j}\}_{j\in[3]}\) be a family of dictionaries, where \(\mathcal{D}_{j}\subset\mathcal{H}_{j}\) for \(j\in[3]\). We assume that
\[\mathcal{D}_{1}\otimes\mathcal{D}_{2}\subseteq\mathcal{D}_{3}. \tag{93}\]
We are interested in the following two properties:
\[\mathsf{Mult}^{\xi}(\mathcal{D}_{1},\mathcal{D}_{2},\mathcal{D}_{3}):\quad\xi_{\mathcal{D}_{3}}(\Psi_{1}\otimes\Psi_{2})=\xi_{\mathcal{D}_{1}}(\Psi_{1})\xi_{\mathcal{D}_{2}}(\Psi_{2})\quad\text{ for all }\quad\Psi_{j}\in\mathcal{H}_{j}\text{ for }j\in[2]\]
\[\mathsf{Mult}^{F}(\mathcal{D}_{1},\mathcal{D}_{2},\mathcal{D}_{3}):\quad F_{\mathcal{D}_{3}}(\Psi_{1}\otimes\Psi_{2})=F_{\mathcal{D}_{1}}(\Psi_{1})F_{\mathcal{D}_{2}}(\Psi_{2})\quad\text{ for all }\quad\Psi_{j}\in\mathcal{H}_{j}\text{ for }j\in[2]\.\]
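For a finite dictionary, the extent (5) is directly computable as a small convex program, with (92) as its dual. The following sketch uses the `cvxpy` package (an external tool choice made here, not something used in the text) to compute the extent of a single-qubit state over the six single-qubit stabilizer states; for the state \((|0\rangle+e^{i\pi/4}|1\rangle)/\sqrt{2}\), the optimal value comes out numerically to about \(1.172\), consistent with \(\sec^{2}(\pi/8)\).

```python
import numpy as np
import cvxpy as cp

# the six single-qubit stabilizer states as columns of a 2 x 6 matrix
s = 1 / np.sqrt(2)
D = np.array([[1, 0, s,  s,  s,      s],
              [0, 1, s, -s, 1j * s, -1j * s]])

psi = np.array([1, np.exp(1j * np.pi / 4)]) / np.sqrt(2)   # "T state"

c = cp.Variable(6, complex=True)
prob = cp.Problem(cp.Minimize(cp.norm1(c)), [D @ c == psi])
prob.solve()
xi = prob.value ** 2          # extent = squared minimal 1-norm of coefficients
print(xi)                     # approx 1.172
```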
As an important example, let \(n_{j}\in\mathbb{N}\) for \(j\in[2]\), \(n_{3}=n_{1}+n_{2}\), \(\mathcal{H}_{j}=(\mathbb{C}^{2})^{\otimes n_{j}}\) and let \(\mathsf{STAB}_{n}\) be the set of stabilizer states on \((\mathbb{C}^{2})^{\otimes n}\). Then \(\mathsf{Mult}^{\xi}(\mathsf{STAB}_{n_{1}},\mathsf{STAB}_{n_{2}},\mathsf{STAB}_{n_{1}+n_{2}})\) does not hold for certain (large) choices of \(n_{1}\) and \(n_{2}\) [19]. On the other hand, for \(n_{1},n_{2}\leq 3\) the multiplicativity property \(\mathsf{Mult}^{\xi}(\mathsf{STAB}_{n_{1}},\mathsf{STAB}_{n_{2}},\mathsf{STAB}_{n_{1}+n_{2}})\) holds (see Ref. [18, Proposition 1]). This was shown using that the stabilizer fidelity is multiplicative, i.e., \(\mathsf{Mult}^{F}(\mathsf{STAB}_{n_{1}},\mathsf{STAB}_{n_{2}},\mathsf{STAB}_{n_{1}+n_{2}})\). We claim that the property \(\mathsf{Mult}^{F}(\mathcal{D}_{1},\mathcal{D}_{2},\mathcal{D}_{3})\) implies property \(\mathsf{Mult}^{\xi}(\mathcal{D}_{1},\mathcal{D}_{2},\mathcal{D}_{3})\).

**Theorem 6.1**.: _Property \(\mathsf{Mult}^{F}(\mathcal{D}_{1},\mathcal{D}_{2},\mathcal{D}_{3})\) implies property \(\mathsf{Mult}^{\xi}(\mathcal{D}_{1},\mathcal{D}_{2},\mathcal{D}_{3})\)._

Proof.: We clearly have
\[\xi_{\mathcal{D}_{3}}(\Psi_{1}\otimes\Psi_{2})\leq\xi_{\mathcal{D}_{1}}(\Psi_{1})\xi_{\mathcal{D}_{2}}(\Psi_{2}) \tag{94}\]
for all \(\Psi_{1}\in\mathcal{H}_{1}\) and \(\Psi_{2}\in\mathcal{H}_{2}\) because of property (93) of the dictionaries \(\{\mathcal{D}_{j}\}_{j=1}^{3}\) and the definition (5) of \(\xi_{\mathcal{D}}\). To show the converse inequality, assume that \(y_{1}\in\mathcal{H}_{1}\), \(y_{2}\in\mathcal{H}_{2}\) are such that
\[F_{\mathcal{D}_{j}}(y_{j})\leq 1\quad\text{ and }\quad\xi_{\mathcal{D}_{j}}(\Psi_{j})=|\langle\Psi_{j},y_{j}\rangle|^{2}\quad\text{ for }\quad j\in[2]\.\]
Then
\[F_{\mathcal{D}_{3}}(y_{1}\otimes y_{2})=F_{\mathcal{D}_{1}}(y_{1})F_{\mathcal{D}_{2}}(y_{2})\leq 1\]
where we used the assumption that property \(\mathsf{Mult}^{F}(\mathcal{D}_{1},\mathcal{D}_{2},\mathcal{D}_{3})\) holds to obtain the equality. This implies that \(y_{1}\otimes y_{2}\) is a feasible point of the dual program for the quantity \(\xi_{\mathcal{D}_{3}}(\Psi_{1}\otimes\Psi_{2})\), see Eq. (92). Thus
\[\xi_{\mathcal{D}_{3}}(\Psi_{1}\otimes\Psi_{2})\geq|\langle\Psi_{1}\otimes\Psi_{2},y_{1}\otimes y_{2}\rangle|^{2}=|\langle\Psi_{1},y_{1}\rangle|^{2}\cdot|\langle\Psi_{2},y_{2}\rangle|^{2}=\xi_{\mathcal{D}_{1}}(\Psi_{1})\xi_{\mathcal{D}_{2}}(\Psi_{2}). \tag{95}\]
Expression (95) together with Eq. (94) gives the claim.

### Multiplicativity for infinite dictionaries

In this section, we extend the results of Section 6.1 to dictionaries \(\mathcal{D}\) that may contain infinitely many elements. Our strategy is to use an \(\epsilon\)-net for \(\mathcal{D}\subset\mathcal{H}\) with a finite number of elements, which we denote by \(\mathcal{D}^{\epsilon}\). We relate the extent and fidelity with respect to the dictionary \(\mathcal{D}\) to the extent and fidelity with respect to its net \(\mathcal{D}^{\epsilon}\) (see Lemmas 6.2 and 6.3), in order to prove that multiplicativity of the \(\mathcal{D}\)-fidelity implies multiplicativity of the \(\mathcal{D}\)-extent in Theorem 6.6. This result is a generalization of Theorem 6.1 (which considered finite dictionaries) to (possibly) infinite dictionaries. We will make use of the notion of an \(\epsilon\)-net to replace our infinite set \(\mathcal{D}\) by a finite set \(\mathcal{D}^{\epsilon}\). Let \(\|\Psi\|=\sqrt{\langle\Psi,\Psi\rangle}\) for \(\Psi\in\mathcal{H}\) denote the norm on \(\mathcal{H}\). Let \(\mathcal{D}\subset\mathcal{H}\) and let \(\epsilon>0\).
Then a set \(\mathcal{D}^{\epsilon}\subset\mathcal{H}\) is called an \(\epsilon\)-net for \(\mathcal{D}\) if for any \(\Psi\in\mathcal{D}\) there is some \(\Phi\in\mathcal{D}^{\epsilon}\) such that \(\|\Phi-\Psi\|\leq\epsilon\). We are interested in the case where for every \(\epsilon>0\) there is a finite \(\epsilon\)-net \(\mathcal{D}^{\epsilon}\) for \(\mathcal{D}\), with the additional property that \(\mathcal{D}^{\epsilon}\subset\mathcal{D}\), i.e., the net consists of elements of \(\mathcal{D}\). A sufficient condition for this being the case is that the subset \(\mathcal{D}\subset\mathcal{H}\) is compact.

**Lemma 6.2**.: _Let \(\mathcal{D}\subset\mathcal{H}\) be a set of states. Assume that there is a finite \(\epsilon\)-net \(\mathcal{D}^{\epsilon}\) for \(\mathcal{D}\) such that \(\mathcal{D}^{\epsilon}\subset\mathcal{D}\), for some \(\epsilon>0\). Assume further that \(\mathcal{D}^{\epsilon}\) contains an orthonormal basis of \(\mathcal{H}\). Let \(d\) be the dimension of \(\mathcal{H}\). Then_
\[\xi_{\mathcal{D}}(\Psi)\leq\xi_{\mathcal{D}^{\epsilon}}(\Psi)\leq\xi_{\mathcal{D}}(\Psi)\left(1+\sqrt{d}\epsilon\right)^{2}\qquad\text{ for all }\qquad\Psi\in\mathcal{H}\.\]

Proof.: The first inequality follows immediately from the definition of \(\xi_{\mathcal{D}}\) and from the assumption that \(\mathcal{D}^{\epsilon}\subset\mathcal{D}\). To prove the second inequality, let \(\Psi\in\mathcal{H}\) be arbitrary. By definition of \(\xi_{\mathcal{D}}(\Psi)\) as an infimum, we have the following: For every \(m\in\mathbb{N}\), there exist \(N(m)\in\mathbb{N}\), \(\{\varphi_{j}(m)\}_{j=1}^{N(m)}\subset\mathcal{D}\) and \(\{c_{j}(m)\}_{j=1}^{N(m)}\subset\mathbb{C}\) such that
\[\Psi=\sum_{j=1}^{N(m)}c_{j}(m)\varphi_{j}(m)\]
and
\[\|c(m)\|_{1}<\sqrt{\xi_{\mathcal{D}}(\Psi)}+\frac{1}{m}. \tag{96}\]
(Conversely, \(\|c\|_{1}^{2}\geq\xi_{\mathcal{D}}(\Psi)\) for any \(N\in\mathbb{N}\), \(\{\varphi_{j}\}_{j=1}^{N}\subset\mathcal{D}\) and \(\{c_{j}\}_{j=1}^{N}\subset\mathbb{C}\) with \(\Psi=\sum_{j=1}^{N}c_{j}\varphi_{j}\).) Fix such an \(m\in\mathbb{N}\). Since \(\mathcal{D}^{\epsilon}\) is an \(\epsilon\)-net for \(\mathcal{D}\), there is, for every \(j\in[N(m)]\), an element \(\varphi_{j}^{\epsilon}(m)\in\mathcal{D}^{\epsilon}\) and \(\delta_{j}(m)\in\mathcal{H}\) such that
\[\varphi_{j}(m)=\varphi_{j}^{\epsilon}(m)+\delta_{j}(m)\qquad\text{ and }\qquad\|\delta_{j}(m)\|\leq\epsilon\.\]
It follows that
\[\Psi=\left(\sum_{j=1}^{N(m)}c_{j}(m)\varphi_{j}^{\epsilon}(m)\right)+\delta(m)\qquad\text{ where }\qquad\delta(m)=\sum_{j=1}^{N(m)}c_{j}(m)\delta_{j}(m)\.\]
By the triangle inequality we have
\[\|\delta(m)\|\leq\epsilon\sum_{j=1}^{N(m)}|c_{j}(m)|=\epsilon\|c(m)\|_{1}. \tag{97}\]
Suppose \(\{e_{k}\}_{k=1}^{d}\) is an orthonormal basis contained in \(\mathcal{D}^{\epsilon}\).
Then we can expand
\[\delta(m)=\sum_{k=1}^{d}\alpha_{k}(m)e_{k}\,\]
and it follows that
\[\begin{split}\|\alpha(m)\|_{1}&\leq\sqrt{d}\,\|\alpha(m)\|_{2}\qquad\text{by the Cauchy-Schwarz inequality in }\mathbb{C}^{d}\\&=\sqrt{d}\,\|\delta(m)\|\\&\leq\sqrt{d}\,\epsilon\,\|c(m)\|_{1}\qquad\text{because of Eq.~(97)}\.\end{split}\]
Since
\[\Psi=\sum_{j=1}^{N(m)}c_{j}(m)\varphi_{j}^{\epsilon}(m)+\sum_{k=1}^{d}\alpha_{k}(m)e_{k}\]
is a decomposition of \(\Psi\) into elements of \(\mathcal{D}^{\epsilon}\), we conclude that
\[\sqrt{\xi_{\mathcal{D}^{\epsilon}}(\Psi)}\leq\|c(m)\|_{1}+\|\alpha(m)\|_{1}\leq\left(1+\sqrt{d}\epsilon\right)\|c(m)\|_{1}<\left(1+\sqrt{d}\epsilon\right)\left(\sqrt{\xi_{\mathcal{D}}(\Psi)}+\frac{1}{m}\right)\,\]
where we used Eq. (96) in the last step. Since \(m\in\mathbb{N}\) was arbitrary, taking the limit \(m\to\infty\) and squaring gives the second inequality. This completes the proof.

The \(\mathcal{D}\)-fidelity behaves similarly when passing to a net:

**Lemma 6.3**.: _Let \(\mathcal{D}^{\epsilon}\subset\mathcal{D}\) be an \(\epsilon\)-net for \(\mathcal{D}\). Then_
\[F_{\mathcal{D}^{\epsilon}}(\Psi)\leq F_{\mathcal{D}}(\Psi)\qquad\text{ and }\qquad\sqrt{F_{\mathcal{D}}(\Psi)}\leq\sqrt{F_{\mathcal{D}^{\epsilon}}(\Psi)}+\|\Psi\|\cdot\epsilon\qquad\text{ for all }\qquad\Psi\in\mathcal{H}\.\]

Proof.: The first inequality is immediate from \(\mathcal{D}^{\epsilon}\subset\mathcal{D}\). For the second, let \(\varphi\in\mathcal{D}\) be such that \(F_{\mathcal{D}}(\Psi)=|\langle\varphi,\Psi\rangle|^{2}\), and let \(\varphi^{\epsilon}\in\mathcal{D}^{\epsilon}\) and \(\delta\in\mathcal{H}\) with \(\|\delta\|\leq\epsilon\) be such that \(\varphi=\varphi^{\epsilon}+\delta\).
It follows that
\[\begin{split}\sqrt{F_{\mathcal{D}}(\Psi)}&=|\langle\varphi,\Psi\rangle|\\&\leq|\langle\varphi^{\epsilon},\Psi\rangle|+|\langle\delta,\Psi\rangle|\\&\leq\sqrt{F_{\mathcal{D}^{\epsilon}}(\Psi)}+|\langle\delta,\Psi\rangle|\\&\leq\sqrt{F_{\mathcal{D}^{\epsilon}}(\Psi)}+\|\Psi\|\cdot\|\delta\|\\&\leq\sqrt{F_{\mathcal{D}^{\epsilon}}(\Psi)}+\|\Psi\|\cdot\epsilon\end{split}\]
where we used the definition of \(F_{\mathcal{D}^{\epsilon}}(\Psi)\) and the Cauchy-Schwarz inequality (in the penultimate step). The claim follows.

**Lemma 6.4**.: _We have_
\[\|y\|^{2}\leq d^{2}\cdot F_{\mathcal{D}^{\epsilon}}(y)\qquad\text{ for every }\qquad y\in\mathcal{H}\,\]
_where \(d=\dim\mathcal{H}\)._

Proof.: Let \(\{e_{k}\}_{k=1}^{d}\) be an orthonormal basis contained in \(\mathcal{D}^{\epsilon}\). Then
\[|\langle e_{k},y\rangle|^{2}\leq F_{\mathcal{D}^{\epsilon}}(y)\qquad\text{ for every }\qquad k\in[d]\]
because \(e_{k}\in\mathcal{D}^{\epsilon}\) for every \(k\in[d]\). We have
\[\begin{split}\|y\|&=\left(\sum_{k=1}^{d}|\langle e_{k},y\rangle|^{2}\right)^{1/2}\\&\leq\sum_{k=1}^{d}|\langle e_{k},y\rangle|\\&\leq d\cdot\sqrt{F_{\mathcal{D}^{\epsilon}}(y)}\end{split}\]
where we used that \(\|v\|_{2}\leq\|v\|_{1}\) for \(v\in\mathbb{C}^{d}\). The claim follows.

**Lemma 6.5**.: _Let \(d_{j}=\dim\mathcal{H}_{j}\) for \(j\in[3]\). Assuming that the property \(\mathsf{Mult}^{F}(\mathcal{D}_{1},\mathcal{D}_{2},\mathcal{D}_{3})\) holds, we have_
\[\xi_{\mathcal{D}_{3}}(\Psi_{1}\otimes\Psi_{2})\geq g(d_{1},d_{2},d_{3},\epsilon)\xi_{\mathcal{D}_{1}}(\Psi_{1})\xi_{\mathcal{D}_{2}}(\Psi_{2})\]
_for a function \(g\) which satisfies_
\[\lim_{\epsilon\to 0}g(d_{1},d_{2},d_{3},\epsilon)=1\.\]

Proof.: Suppose \(y_{j}\in\mathcal{H}_{j}\) for \(j\in[2]\) is such that
\[F_{\mathcal{D}^{\epsilon}_{j}}(y_{j})\leq 1 \tag{98}\]
and
\[\xi_{\mathcal{D}^{\epsilon}_{j}}(\Psi_{j})=|\langle y_{j},\Psi_{j}\rangle|^{2} \tag{99}\]
for \(j\in[2]\). Such a pair \((y_{1},y_{2})\) exists since \(\mathcal{D}^{\epsilon}_{1}\) and \(\mathcal{D}^{\epsilon}_{2}\) are finite sets and the dual definition of the extent in terms of a maximum applies (see Eq. (92)). Equation (98) implies that
\[\|y_{j}\|\leq d_{j}\qquad\text{ for }\qquad j\in[2]\,\]
see Lemma 6.4.
We have
\[\begin{split}\sqrt{F_{\mathcal{D}_{3}^{\epsilon}}(y_{1}\otimes y_{2})}&\leq\sqrt{F_{\mathcal{D}_{3}}(y_{1}\otimes y_{2})}\qquad\text{by Lemma 6.3}\\&=\sqrt{F_{\mathcal{D}_{1}}(y_{1})F_{\mathcal{D}_{2}}(y_{2})}\qquad\text{by property }\mathsf{Mult}^{F}(\mathcal{D}_{1},\mathcal{D}_{2},\mathcal{D}_{3})\\&\leq\prod_{j=1}^{2}\left(\sqrt{F_{\mathcal{D}_{j}^{\epsilon}}(y_{j})}+\epsilon\|y_{j}\|\right)\qquad\text{by Lemma 6.3}\\&\leq(1+\epsilon d_{1})(1+\epsilon d_{2})\qquad\text{by Eq.~(98) and Lemma 6.4}\.\end{split}\]
It follows that \(z=y_{1}\otimes y_{2}/\left((1+\epsilon d_{1})(1+\epsilon d_{2})\right)\) satisfies \(F_{\mathcal{D}_{3}^{\epsilon}}(z)\leq 1\), i.e., \(z\) is a feasible point of the dual program (92) for the quantity \(\xi_{\mathcal{D}_{3}^{\epsilon}}(\Psi_{1}\otimes\Psi_{2})\). Therefore
\[\xi_{\mathcal{D}_{3}^{\epsilon}}(\Psi_{1}\otimes\Psi_{2})\geq|\langle\Psi_{1}\otimes\Psi_{2},z\rangle|^{2}=\frac{\xi_{\mathcal{D}_{1}^{\epsilon}}(\Psi_{1})\xi_{\mathcal{D}_{2}^{\epsilon}}(\Psi_{2})}{(1+\epsilon d_{1})^{2}(1+\epsilon d_{2})^{2}}\geq\frac{\xi_{\mathcal{D}_{1}}(\Psi_{1})\xi_{\mathcal{D}_{2}}(\Psi_{2})}{(1+\epsilon d_{1})^{2}(1+\epsilon d_{2})^{2}}\,\]
where we used Eq. (99) and, in the last step, the first inequality of Lemma 6.2. Combining this with the upper bound of Lemma 6.2 applied to \(\mathcal{D}_{3}\), i.e., \(\xi_{\mathcal{D}_{3}}(\Psi_{1}\otimes\Psi_{2})\geq\xi_{\mathcal{D}_{3}^{\epsilon}}(\Psi_{1}\otimes\Psi_{2})/(1+\sqrt{d_{3}}\epsilon)^{2}\), the claim follows with
\[g(d_{1},d_{2},d_{3},\epsilon)=\frac{1}{(1+\epsilon d_{1})^{2}(1+\epsilon d_{2})^{2}(1+\sqrt{d_{3}}\epsilon)^{2}}\,\]
which satisfies \(\lim_{\epsilon\to 0}g(d_{1},d_{2},d_{3},\epsilon)=1\).

This puts us in the position to state the main result of this section.

**Theorem 6.6**.: _Let \(\mathcal{D}_{j}\subset\mathcal{H}_{j}\) for \(j\in[3]\) be dictionaries satisfying \(\mathcal{D}_{1}\otimes\mathcal{D}_{2}\subseteq\mathcal{D}_{3}\). Suppose that for every \(\epsilon>0\) and every \(j\in[3]\) there is a finite \(\epsilon\)-net \(\mathcal{D}_{j}^{\epsilon}\subset\mathcal{D}_{j}\) for \(\mathcal{D}_{j}\), and that each \(\mathcal{D}_{j}\) contains an orthonormal basis of \(\mathcal{H}_{j}\). Then property \(\mathsf{Mult}^{F}(\mathcal{D}_{1},\mathcal{D}_{2},\mathcal{D}_{3})\) implies property \(\mathsf{Mult}^{\xi}(\mathcal{D}_{1},\mathcal{D}_{2},\mathcal{D}_{3})\)._
Proof.: By (if necessary) replacing \(\mathcal{D}_{j}^{\epsilon}\) by \(\mathcal{D}_{j}^{\epsilon}\cup\{e_{k}^{(j)}\}_{k=1}^{d_{j}}\), where \(\{e_{k}^{(j)}\}_{k=1}^{d_{j}}\) is an orthonormal basis of \(\mathcal{H}_{j}\) with \(e_{k}^{(j)}\in\mathcal{D}_{j}\), for \(j\in[3]\), we have that each \(\mathcal{D}_{j}^{\epsilon}\) is finite and contains an orthonormal basis of the respective space. The inequality
\[\xi_{\mathcal{D}_{3}}(\Psi_{1}\otimes\Psi_{2})\geq\xi_{\mathcal{D}_{1}}(\Psi_{1})\xi_{\mathcal{D}_{2}}(\Psi_{2})\qquad\text{ for all }\qquad\Psi_{1}\in\mathcal{H}_{1}\text{ and }\Psi_{2}\in\mathcal{H}_{2}\]
now follows immediately from Lemma 6.5 by taking the limit \(\epsilon\to 0\). The converse inequality is trivial because \(\mathcal{D}_{1}\otimes\mathcal{D}_{2}\subseteq\mathcal{D}_{3}\).

## 7 Multiplicativity of the Gaussian extent for four fermions

In this section we prove that the Gaussian extent is multiplicative for the tensor product of any two \(4\)-fermion pure states with positive parity.

**Theorem 7.1** (Multiplicativity of the Gaussian extent for \(4\)-fermion pure states).: _Let \(\mathcal{H}_{+}^{4}\) be the set of pure \(4\)-fermion states with positive parity and let \(\mathcal{G}_{n}\) be the set of Gaussian states on \(n\) fermions. Then_
\[\xi_{\mathcal{G}_{8}}(\Psi_{A}\tilde{\otimes}\Psi_{B})=\xi_{\mathcal{G}_{4}}(\Psi_{A})\xi_{\mathcal{G}_{4}}(\Psi_{B})\qquad\text{ for all }\qquad\Psi_{A},\Psi_{B}\in\mathcal{H}_{+}^{4}\.\]

Proof.: Since the metaplectic representation defines a surjective, continuous map
\[\begin{array}{ccc}f:&[0,2\pi]\times SO(2n)&\to&\mathcal{G}_{n}\\&(\varphi,R)&\mapsto&e^{i\varphi}U_{R}\left|0_{F}\right\rangle\end{array}\]
from the compact set \([0,2\pi]\times SO(2n)\) to \(\mathcal{G}_{n}\), the set \(\mathcal{G}_{n}\subset\mathcal{H}^{n}\) is compact. We also observe that the occupation number states
\[\{\,|x\rangle\ :\ x\in\{0,1\}^{n}\,\}\]
form an orthonormal basis contained in \(\mathcal{G}_{n}\). By compactness, we conclude that for any \(\epsilon>0\), there is a finite \(\epsilon\)-net \(\mathcal{G}_{n}^{\epsilon}\subset\mathcal{G}_{n}\) consisting of Gaussian states and containing an orthonormal basis of \(\mathcal{H}^{n}\). Finally, we note that we also have the inclusion \(\mathcal{G}_{n_{1}}\otimes\mathcal{G}_{n_{2}}\subset\mathcal{G}_{n_{1}+n_{2}}\) for \(n_{1},n_{2}\in\mathbb{N}\) arbitrary. Let us now specialize to \(n_{1}=n_{2}=4\). In this case, we have multiplicativity of the fermionic Gaussian fidelity by Theorem 5.10. In particular, the conditions for Theorem 6.6 apply and the claim follows.

## Acknowledgements

BD and RK gratefully acknowledge support by the European Research Council under grant agreement no. 101001976 (project EQUIPTNT).

## Appendix A Alternative Gaussianity condition for \(4\)-fermion states

In the following, we prove Lemma 5.1.

Proof.: Consider \(\Psi\in\mathcal{H}_{+}^{4}\). We will show that \(\left\langle\Psi,\theta\Psi\right\rangle=0\) is equivalent to \(\Lambda\left(\left|\Psi\right\rangle\otimes\left|\Psi\right\rangle\right)=0\).
According to Lemma 2.3, this is a necessary and sufficient condition for \(\Psi\) to be Gaussian. Let \(E\subset\{0,1\}_{+}^{4}\), \(|E|=4\) be a subset of even-weight strings such that \(E\cup\overline{E}=\{0,1\}_{+}^{4}\). We first compute \(\langle\Psi,\theta\Psi\rangle\). We have

\[\begin{aligned}\langle\Psi,\theta\Psi\rangle&=\sum_{x,y\in\{0,1\}_{+}^{4}}\langle\Psi,x\rangle\,\langle x|\,\theta\big(|y\rangle\langle y,\Psi\rangle\big)\\ &=\sum_{x,y\in\{0,1\}_{+}^{4}}\langle\Psi,x\rangle\langle x,\theta y\rangle\overline{\langle y,\Psi\rangle}\\ &=\sum_{x,y\in\{0,1\}_{+}^{4}}(-1)^{\vartheta(y)}\langle x,\overline{y}\rangle\langle\Psi,x\rangle\langle\Psi,y\rangle\\ &=\sum_{x\in\{0,1\}_{+}^{4}}(-1)^{\vartheta(x)}\langle\Psi,x\rangle\langle\Psi,\overline{x}\rangle\\ &=\sum_{x\in E}\left((-1)^{\vartheta(x)}\langle\Psi,x\rangle\langle\Psi,\overline{x}\rangle+(-1)^{\vartheta(\overline{x})}\langle\Psi,\overline{x}\rangle\langle\Psi,x\rangle\right)\\ &=2\sum_{x\in E}(-1)^{\vartheta(x)}\langle\Psi,x\rangle\langle\Psi,\overline{x}\rangle\,,\end{aligned}\]

where the second step follows from \(\theta u=\overline{u}\theta\) for all \(u\in\mathbb{C}\) due to the antiunitarity of \(\theta\), the third step from Eq. (70) and the fourth and final steps from \(\vartheta(\overline{x})=\vartheta(x)\) for \(x\in\{0,1\}_{+}^{4}\). Thus,

\[\langle\Psi,\theta\Psi\rangle=0\qquad\text{ if and only if }\qquad\sum_{x\in\{0,1\}_{+}^{4}}(-1)^{\vartheta(x)}\langle\Psi,x\rangle\langle\Psi,\overline{x}\rangle=0. \tag{103}\]

We proceed to prove that \(\Lambda(|\Psi\rangle\otimes|\Psi\rangle)=0\) is equivalent to Eq. (103). We start by using Eq. (13) to write the operator \(\Lambda\) in terms of creation and annihilation operators:

\[\begin{aligned}\Lambda&=\sum_{j=1}^{8}c_{j}\otimes c_{j}\\ &=\sum_{j=1}^{4}\left(c_{2j-1}\otimes c_{2j-1}+c_{2j}\otimes c_{2j}\right)\\ &=\sum_{j=1}^{4}\left(\left(a_{j}+a_{j}^{\dagger}\right)\otimes\left(a_{j}+a_{j}^{\dagger}\right)+i(a_{j}-a_{j}^{\dagger})\otimes i(a_{j}-a_{j}^{\dagger})\right)\\ &=2\sum_{j=1}^{4}\left(a_{j}\otimes a_{j}^{\dagger}+a_{j}^{\dagger}\otimes a_{j}\right).\end{aligned}\]

Applying this expression to \(|\Psi\rangle\otimes|\Psi\rangle\) and using Eq. (11) gives

\[\begin{aligned}\Lambda\left(|\Psi\rangle\otimes|\Psi\rangle\right)&=2\sum_{x,y\in\{0,1\}_{+}^{4}}\sum_{j=1}^{4}\left(a_{j}\otimes a_{j}^{\dagger}+a_{j}^{\dagger}\otimes a_{j}\right)\left(|x\rangle\otimes|y\rangle\right)\langle x,\Psi\rangle\langle y,\Psi\rangle\\ &=2\sum_{x,y\in\{0,1\}_{+}^{4}}\sum_{j=1}^{4}(-1)^{\eta_{j}(x+y)}(x_{j}\overline{y_{j}}+\overline{x_{j}}y_{j})\left(|x\oplus e_{j}\rangle\otimes|y\oplus e_{j}\rangle\right)\langle x,\Psi\rangle\langle y,\Psi\rangle\\ &=2\sum_{x,y\in\{0,1\}_{+}^{4}}\sum_{j=1}^{4}(-1)^{\eta_{j}(x+y)}(x_{j}\oplus y_{j})\left(|x\oplus e_{j}\rangle\otimes|y\oplus e_{j}\rangle\right)\langle x,\Psi\rangle\langle y,\Psi\rangle\\ &=2\sum_{x,y\in\{0,1\}_{-}^{4}}\left(\sum_{j=1}^{4}(-1)^{\eta_{j}(x+y)}(x_{j}\oplus y_{j})\langle x\oplus e_{j},\Psi\rangle\langle y\oplus e_{j},\Psi\rangle\right)(|x\rangle\otimes|y\rangle)\end{aligned} \tag{104}\]

where in the third line we used \(u\overline{v}+\overline{u}v=u\oplus v\) for all \(u,v\in\{0,1\}\) and in the last line we used \(\left\{x\oplus e_{j}\,|\,x\in\{0,1\}_{+}^{4}\right\}=\{0,1\}_{-}^{4}\), \(\eta_{j}(x\oplus e_{j})=\eta_{j}(x)\) and \(\overline{u}\oplus\overline{v}=u\oplus v\), valid for \(x\in\{0,1\}_{+}^{4}\), \(j\in[4]\) and \(u,v\in\{0,1\}\).
It follows that \(\Lambda\left(|\Psi\rangle\otimes|\Psi\rangle\right)=0\) if and only if

\[\sum_{j=1}^{4}(-1)^{\eta_{j}(x+y)}(x_{j}\oplus y_{j})\langle x\oplus e_{j},\Psi\rangle\langle y\oplus e_{j},\Psi\rangle=0\qquad\text{ for all }\qquad x,y\in\{0,1\}_{+}^{4}. \tag{105}\]

Since \(x\) and \(y\) have the same parity, either \(|x-y|=4\) (i.e., \(y=\overline{x}\)), \(|x-y|=2\) or \(|x-y|=0\) (i.e., \(y=x\)). The expression (105) is non-zero only if \(|x-y|=4\) (we argue below why). For \(|x-y|=4\) Eq. (105) becomes

\[\sum_{j=1}^{4}(-1)^{\eta_{j}(x+\bar{x})}\langle x\oplus e_{j},\Psi\rangle\langle\bar{x}\oplus e_{j},\Psi\rangle=0\quad\text{ if and only if }\quad\sum_{z\in E}(-1)^{\vartheta(z)}\langle z,\Psi\rangle\langle\overline{z},\Psi\rangle=0\,,\]

where we used \(\{x\oplus e_{j}\,|\,j\in[4]\}=E\) with \(E\subset\{0,1\}_{+}^{4}\), \(|E|=4\) a subset of even-weight strings such that \(E\cup\overline{E}=\{0,1\}_{+}^{4}\). We also used that \((-1)^{\eta_{j}(x+\overline{x})}=(-1)^{j-1}\) can be replaced by \((-1)^{\vartheta(x)+\vartheta(z)}\) upon changing the summation over \(j\in[4]\) to a summation over \(z\in E\). We recovered the right hand side of Eq. (103), proving the claim.

It remains to argue that Eq. (105) is zero for \(|x-y|\in\{0,2\}\). For \(|x-y|=0\), i.e., \(x=y\), Eq. (105) is zero because \(x_{j}\oplus x_{j}=0\) for \(j\in[4]\). We exemplify that terms \(\propto|x\rangle\otimes|y\rangle\) with \(|x-y|=2\) are zero by considering \(x=1000\) and \(y=0100\). Starting from Eq. (104) we obtain

\[\left(\langle 1000|\otimes\langle 0100|\right)\Lambda\left(|\Psi\rangle\otimes|\Psi\rangle\right)=((-1)^{\eta_{1}(1100)}+(-1)^{\eta_{2}(1100)})\langle 0000,\Psi\rangle\langle 1100,\Psi\rangle=0\,,\]

where \(\eta_{1}(1100)=0\) and \(\eta_{2}(1100)=1\). The remaining cases with \(|x-y|=2\) proceed similarly.

## Appendix B Commutativity of the map \(\theta\) and quadratic Majorana monomials

In the following, we prove Lemma 5.2.

Proof.: We start by showing that \(\theta=c_{1}c_{3}c_{5}c_{7}K\), where \(K\) denotes the antiunitary given by complex conjugation in the number state basis. For this, it suffices to show that \(c_{1}c_{3}c_{5}c_{7}K\) is antiunitary, which directly follows from unitarity of \(c_{1}c_{3}c_{5}c_{7}\), and that it satisfies Eq. (70). We show the latter using Eqs. (13) and (11): We have

\[\begin{aligned}\theta\,|x\rangle&=c_{1}c_{3}c_{5}c_{7}K\,|x\rangle\\ &=(a_{1}+a_{1}^{\dagger})(a_{2}+a_{2}^{\dagger})(a_{3}+a_{3}^{\dagger})(a_{4}+a_{4}^{\dagger})\,|x\rangle\\ &=(-1)^{\eta_{4}(x)+\eta_{3}(x)+\eta_{2}(x)+\eta_{1}(x)}(x_{1}+\overline{x_{1}})(x_{2}+\overline{x_{2}})(x_{3}+\overline{x_{3}})(x_{4}+\overline{x_{4}})\,|\overline{x}\rangle\\ &=(-1)^{\vartheta(x)}\,|\overline{x}\rangle\,,\end{aligned}\]

where we used \((-1)^{\eta_{4}(x)+\eta_{3}(x)+\eta_{2}(x)+\eta_{1}(x)}=(-1)^{3x_{1}+2x_{2}+x_{3}}=(-1)^{x_{1}+x_{3}}=(-1)^{\vartheta(x)}\) and \(x_{j}+\overline{x_{j}}=1\) for \(j\in[4]\). The result \(\theta c_{j}c_{k}=c_{j}c_{k}\theta\) follows from simple algebra considering \(c_{2j-1}K=Kc_{2j-1}\) and \(c_{2j}K=-Kc_{2j}\).
We show these last two equalities by explicitly computing their action on \(x\in\{0,1\}_{+}^{4}\):

\[\begin{aligned}Kc_{2j-1}\,|x\rangle&=K(a_{j}+a_{j}^{\dagger})\,|x\rangle=(-1)^{\eta_{j}(x)}(x_{j}+\overline{x_{j}})\,|x\oplus e_{j}\rangle=(-1)^{\eta_{j}(x)}\,|x\oplus e_{j}\rangle\,,\\ c_{2j-1}K\,|x\rangle&=(a_{j}+a_{j}^{\dagger})\,|x\rangle=(-1)^{\eta_{j}(x)}(x_{j}+\overline{x_{j}})\,|x\oplus e_{j}\rangle=Kc_{2j-1}\,|x\rangle\,,\\ Kc_{2j}\,|x\rangle&=Ki(a_{j}-a_{j}^{\dagger})\,|x\rangle=-i(-1)^{\eta_{j}(x)}(x_{j}-\overline{x_{j}})\,|x\oplus e_{j}\rangle\,,\\ c_{2j}K\,|x\rangle&=i(a_{j}-a_{j}^{\dagger})\,|x\rangle=i(-1)^{\eta_{j}(x)}(x_{j}-\overline{x_{j}})\,|x\oplus e_{j}\rangle=-Kc_{2j}\,|x\rangle\,.\end{aligned}\]

Proof.: We will prove that \(\theta c_{j}=-c_{j}\theta\) for \(j\in[8]\), which implies the result. We prove this for \(j\) odd; the proof for \(j\) even proceeds similarly. We use Eq. (13) to write the Majorana operators as creation and annihilation operators which act on basis states according to Eq. (11), and we apply \(\theta\) according to Eq. (70):

\[\begin{aligned}\theta c_{2j-1}\left|x\right\rangle&=\theta(a_{j}+a_{j}^{\dagger})\left|x\right\rangle\\ &=\theta(-1)^{\eta_{j}(x)}(x_{j}+\overline{x_{j}})\left|x\oplus e_{j}\right\rangle\\ &=(-1)^{\eta_{j}(x)}(-1)^{\vartheta(x\oplus e_{j})}(x_{j}+\overline{x}_{j})\left|\overline{x\oplus e_{j}}\right\rangle\,,\\ c_{2j-1}\theta\left|x\right\rangle&=(-1)^{\vartheta(x)}(a_{j}+a_{j}^{\dagger})\left|\overline{x}\right\rangle\\ &=(-1)^{\eta_{j}(\overline{x})}(-1)^{\vartheta(x)}(\overline{x_{j}}+x_{j})\left|\overline{x}\oplus e_{j}\right\rangle\,.\end{aligned}\]

The equality \(\theta c_{2j-1}=-c_{2j-1}\theta\) follows from \(\left|\overline{x}\oplus e_{j}\right\rangle=\left|\overline{x\oplus e_{j}}\right\rangle\) for \(j\in[4]\), from \((-1)^{\eta_{j}(\overline{x})}=(-1)^{j+1}(-1)^{\eta_{j}(x)}\) and from \((-1)^{\vartheta(x)}=(-1)^{j}(-1)^{\vartheta(x\oplus e_{j})}\).
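The index bookkeeping in these proofs is easy to mischeck by hand, so the following short numerical sketch may be useful. It is ours, not part of the paper, and it assumes the Jordan–Wigner convention \(\eta_{j}(x)=\sum_{k<j}x_{k}\) for Eq. (11), together with \(c_{2j-1}=a_{j}+a_{j}^{\dagger}\) and \(c_{2j}=i(a_{j}-a_{j}^{\dagger})\) as used above. It checks the anticommutation \(\theta c_{j}=-c_{j}\theta\) and illustrates Lemma 5.1 by printing \(|\langle\Psi,\theta\Psi\rangle|\) and \(\|\Lambda(\Psi\otimes\Psi)\|\), which should vanish simultaneously.

```python
# Sanity check of the conventions above (ours, not from the paper); assumes
# eta_j(x) = sum_{k<j} x_k in Eq. (11), c_{2j-1} = a_j + a_j^dag, c_{2j} = i(a_j - a_j^dag).
import numpy as np
from itertools import product

n = 4
basis = list(product((0, 1), repeat=n))          # occupation number strings x
idx = {x: i for i, x in enumerate(basis)}

def annihilator(j):                              # matrix of a_j (1-based mode index)
    M = np.zeros((2**n, 2**n), dtype=complex)
    for x in basis:
        if x[j - 1] == 1:                        # a_j|x> is nonzero only if x_j = 1
            y = x[:j - 1] + (0,) + x[j:]
            M[idx[y], idx[x]] = (-1) ** sum(x[:j - 1])
    return M

a = [annihilator(j) for j in range(1, n + 1)]
c = []
for j in range(n):
    c.append(a[j] + a[j].conj().T)               # c_{2j-1} = a_j + a_j^dag
    c.append(1j * (a[j] - a[j].conj().T))        # c_{2j}   = i(a_j - a_j^dag)

def theta(psi):                                  # antiunitary: theta|x> = (-1)^{x_1+x_3}|x_bar>
    out = np.zeros(2**n, dtype=complex)
    for x in basis:
        xbar = tuple(1 - b for b in x)
        out[idx[xbar]] = (-1) ** (x[0] + x[2]) * np.conj(psi[idx[x]])
    return out

rng = np.random.default_rng(0)
psi = rng.standard_normal(2**n) + 1j * rng.standard_normal(2**n)
for ck in c:                                     # theta anticommutes with every Majorana operator
    assert np.allclose(theta(ck @ psi), -ck @ theta(psi))

# Positive-parity state: Lemma 5.1 says the two printed quantities vanish together
# (with these conventions the proof in Appendix A suggests a fixed ratio sqrt(8)).
psi = np.array([rng.standard_normal() + 1j * rng.standard_normal()
                if sum(x) % 2 == 0 else 0.0 for x in basis])
psi /= np.linalg.norm(psi)
Lam = sum(np.kron(ck, ck) for ck in c)           # Lambda = sum_j c_j (x) c_j
print(abs(np.vdot(psi, theta(psi))),
      np.linalg.norm(Lam @ np.kron(psi, psi)))
```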
2306.02828
Heat equations associated to harmonic oscillator with exponential nonlinearity
We investigate the Cauchy problem for a heat equation involving a fractional harmonic oscillator and an exponential nonlinearity. Our main contributions are as follows: -We establish the local well-posedness in Orlicz spaces. -By considering small initial data in suitable Lebesgue spaces, we derive the existence of global weak-mild solutions. -We provide precise decay estimates for large time, revealing that the decay rate depends on the behavior of the nonlinearity near the origin. -Furthermore, we demonstrate that when considering certain non-negative initial data within the appropriate Orlicz space, the existence of local non-negative classical solutions is no longer guaranteed. In summary, our work addresses the local and global behavior of solutions, decay estimates, and the impact of nonlinearity on the existence of classical solutions, offering some insights into the dynamics of the considered heat equation with a fractional harmonic oscillator and exponential nonlinearity.
Divyang G. Bhimani, Mohamed Majdoub, Ramesh Manna
2023-06-05T12:23:12Z
http://arxiv.org/abs/2306.02828v2
# Heat equations associated to harmonic oscillator with exponential nonlinearity

###### Abstract.

We consider the Cauchy problem for the heat equation with fractional harmonic oscillator and exponential type non-linearity. We establish a local well-posedness result in Orlicz spaces. We derive the existence of global weak-mild solutions for small initial data and obtain decay estimates for large time in Lebesgue spaces. In particular, we show that the decay depends on the behavior of the nonlinearity near the origin. Finally, we show that for some non-negative initial data in the appropriate Orlicz space, there is no local non-negative classical solution.

Key words and phrases: Nonlinear heat equation, harmonic potential, exponential non-linearity

2010 Mathematics Subject Classification: 35Q40, 35Q55, 42B35 (primary), 35A01 (secondary).

## 1. Introduction

This paper is concerned with the heat equation associated to the fractional harmonic potential with exponential type non-linearity:

\[\left\{\begin{array}{rcl}\partial_{t}u+(-\Delta+\varrho|x|^{2})^{\beta}u&=&f(u),\quad(x,t)\in\mathbb{R}^{d}\times(0,\infty),\\ u(x,0)&=&u_{0}(x),\end{array}\right. \tag{1.1}\]

where \(\varrho\geqslant 0,\ \beta>0\) and \(f:\mathbb{R}\to\mathbb{R}\) has exponential growth at infinity with \(f(0)=0\). Note that the case \(\varrho=0\) and \(\beta=1\) in (1.1) corresponds to the standard nonlinear heat (NLH) equation. It is worth mentioning that there has been a large amount of research on the NLH equation; the monographs [12, 13, 25, 26] give a very extensive overview of the most established results on the subject. See also [3, 7, 8, 10, 22, 23, 24, 30] and the references therein. Recently, Bhimani et al. in [2, Theorem 1.2] established well-posedness for (1.1) in Lebesgue spaces when \(f(u)=u|u|^{\gamma-1}\) (polynomial type nonlinearity). See also [1, Theorems 1.2 and 1.4] and [6, Theorem 1.1].

In this paper, we study the Cauchy problem for (1.1) with exponential nonlinearities. These nonlinearities arise in several physical models of self-trapped beams in plasma, see [21]. Some interesting avenues in this direction have been investigated for the \(2D\) energy critical NLS equation [5, 15] and for the \(2D\) energy critical NLW equation [14, 16, 17]. As pointed out in [22, 23, 24], the well-posedness of (1.1) strongly depends on the initial data space and the behaviour of the non-linearity \(f\). Specifically, we assume that, for some \(C>0,p>1,\lambda>0\), in addition to \(f(0)=0\), \(f\) satisfies

\[|f(u)-f(v)|\leqslant C|u-v|\left(e^{\lambda|u|^{p}}+e^{\lambda|v|^{p}}\right); \tag{1.2}\]

or

\[|f(u)-f(v)|\leqslant C|u-v|\left(|u|^{m-1}e^{\lambda|u|^{p}}+|v|^{m-1}e^{\lambda|v|^{p}}\right), \tag{1.3}\]

where \(m\geqslant 1+\frac{2p\beta}{d}\) measures the behavior of the nonlinearity \(f(u)\) near \(0\). The typical example \(f(u)=\pm u\,e^{|u|^{p}}\) satisfies (1.2), and the examples \(f(u)=\pm u|u|^{m-1},e^{u}-1-u,\pm u|u|^{m-1}e^{|u|^{q}}(q\leqslant p),e^{|u|^{q}}-1(q\leqslant p)\) satisfy (1.3).

Unless otherwise specified, throughout the rest of this article we will assume that \(\varrho=1\). The spectral decomposition of the Hermite operator \(H=H^{1}=-\Delta+|x|^{2}\) on \(\mathbb{R}^{d}\) is given by

\[H=\sum_{k=0}^{\infty}(2k+d)P_{k},\]

where \(P_{k}\) stands for the orthogonal projection of \(L^{2}(\mathbb{R}^{d})\) onto the eigenspace corresponding to the eigenvalue \((2k+d)\).
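As a quick numerical illustration of this spectral data (the sketch is ours, not part of the paper, and assumes \(d=1\), where the eigenvalues are \(2k+d=2k+1\)), one can discretize \(H=-\Delta+|x|^{2}\) by second-order finite differences on a large interval and check that the low-lying eigenvalues approach the odd integers:

```python
# Minimal sketch (assumes d = 1): the eigenvalues of H = -d^2/dx^2 + x^2,
# computed from a finite-difference discretization on [-L, L], approach
# 2k + d = 2k + 1, the spectral data in H = sum_k (2k + d) P_k.
import numpy as np

N, L = 2000, 15.0
x = np.linspace(-L, L, N)
h = x[1] - x[0]

# Tridiagonal matrix: -u'' via central differences, plus the potential x^2.
H = (np.diag(2.0 / h**2 + x**2)
     + np.diag(-np.ones(N - 1) / h**2, 1)
     + np.diag(-np.ones(N - 1) / h**2, -1))

print(np.round(np.linalg.eigvalsh(H)[:6], 3))   # ~ [1, 3, 5, 7, 9, 11]
```

The fractional heat propagator defined next simply damps the \(k\)-th eigenspace by the factor \(e^{-t(2k+d)^{\beta}}\).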
We define the heat propagator associated to the fractional harmonic oscillator \(H^{\beta}\) by

\[e^{-tH^{\beta}}u_{0}(x)=\sum_{k=0}^{\infty}e^{-t(2k+d)^{\beta}}P_{k}u_{0}(x).\]

While Lebesgue spaces are well-adapted to the heat equations with power nonlinearities, we are motivated here to consider initial data in Orlicz spaces in order to handle exponential nonlinearities. In [20], the authors characterise those nonlinearities \(f\) for which the standard NLH equation admits a local bounded solution in \(L^{q},\ 1\leq q<\infty,\) for all non-negative initial data \(u_{0}\in L^{q}\). In particular, it is proved that this holds if and only if

\[\limsup_{s\to\infty}\left(\frac{f(s)}{s^{1+2q/d}}\right)<\infty\qquad\text{if}\qquad 1<q<\infty,\]
\[\int_{1}^{\infty}\,\sup_{1\leq t\leq s}\left(\frac{f(t)}{t}\right)\ \frac{ds}{s^{1+2/d}}<\infty\qquad\text{if}\qquad q=1.\]

The Orlicz space \(\exp L^{p}(\mathbb{R}^{d})\), \(1\leq p<\infty\), is defined as follows

\[\exp L^{p}(\mathbb{R}^{d})=\left\{u\in L^{1}_{loc}(\mathbb{R}^{d}):\int_{\mathbb{R}^{d}}\left(e^{\frac{|u(x)|^{p}}{\lambda^{p}}}-1\right)dx<\infty\text{ for some }\lambda>0\right\},\]

endowed with the Luxemburg norm

\[\|u\|_{\exp L^{p}}=\inf\left\{\lambda>0:\int_{\mathbb{R}^{d}}\left(e^{\frac{|u(x)|^{p}}{\lambda^{p}}}-1\right)dx\leq 1\right\}.\]

In order to state the local well-posedness result we shall use the following subspace of \(\exp L^{p}\):

\[\exp L^{p}_{0}(\mathbb{R}^{d})=\left\{u\in L^{1}_{loc}(\mathbb{R}^{d}):\int_{\mathbb{R}^{d}}\left(e^{\alpha|u(x)|^{p}}-1\right)dx<\infty\text{ for every }\alpha>0\right\}.\]

We say that \(u\) is a _mild_ solution for the Cauchy problem (1.1) with \(u_{0}\in\exp L^{p}_{0}(\mathbb{R}^{d})\) if \(u\in C([0,T],\exp L^{p}_{0}(\mathbb{R}^{d}))\) satisfies

\[u(t)=e^{-tH^{\beta}}u_{0}+\int_{0}^{t}e^{-(t-s)H^{\beta}}\,f(u(s))\,ds. \tag{1.4}\]

**Theorem 1.1** (Local well-posedness).: _Let \(u_{0}\in\exp L^{p}_{0}(\mathbb{R}^{d})\) and \(0<\beta\leq 1.\) Assume that \(f\) satisfies (1.2). Then there exists \(T=T(u_{0})>0\) and a unique mild solution \(u\in C([0,T],\exp L^{p}_{0}(\mathbb{R}^{d}))\) to (1.1)._

**Remark 1.1**.:

* The restriction on \(\beta\) in Theorem 1.1 comes from Theorem 2.1 (2). In fact, we shall need this in order to prove Lemma 3.1.
* We emphasize that the density of \(C^{\infty}_{0}(\mathbb{R}^{d})\) in \(\exp L^{p}_{0}(\mathbb{R}^{d})\) plays a crucial role in the proof of Theorem 1.1.
* Theorem 1.3 below asserts that we have non-existence of local solutions in \(\exp L^{p}(\mathbb{R}^{d})\). We note that \(e^{-tH^{\beta}}\) is continuous at \(t=0\) in \(\exp L^{p}_{0}(\mathbb{R}^{d})\) but not in \(\exp L^{p}(\mathbb{R}^{d}).\) See Proposition 2.2.

Thus, in order to study (1.1) in the Orlicz space \(\exp L^{p}(\mathbb{R}^{d})\), we recall the notion of weak-mild solutions. We say that \(u\) is a _weak-mild_ solution for the Cauchy problem (1.1) with \(u_{0}\in\exp L^{p}(\mathbb{R}^{d})\) if \(u\in L^{\infty}((0,T),\exp L^{p}(\mathbb{R}^{d}))\) satisfies the associated integral equation (1.4) in \(\exp L^{p}(\mathbb{R}^{d})\) for almost all \(t\in(0,T)\) and \(u(t)\to u_{0}\) in the weak* topology as \(t\to 0.\)

**Theorem 1.2** (Global existence).: _Let \(1<p\leqslant\frac{d(m-1)}{2\beta}\) and \(0<\beta\leqslant 1.\) Assume that \(f\) satisfies (1.3) for \(m\geqslant p\)._
_Then there exists \(\epsilon>0\) such that for any initial data \(u_{0}\in\exp L^{p}(\mathbb{R}^{d})\) with \(\|u_{0}\|_{\exp L^{p}}\leqslant\epsilon,\) there exists a global weak-mild solution_

\[u\in L^{\infty}\left((0,\infty),\exp L^{p}(\mathbb{R}^{d})\right)\]

_to (1.1) satisfying_

\[\lim_{t\to 0}\|u(t)-e^{-tH^{\beta}}u_{0}\|_{\exp L^{p}}=0.\]

_Moreover we have_

\[\|u(t)\|_{L^{a}}\leqslant Ct^{-\left(\frac{1}{m-1}-\frac{d}{2\beta a}\right)},\ t>0, \tag{1.5}\]

_where \(a\) satisfies_

1. _If_ \(\frac{d}{2\beta}>\frac{p}{p-1}\)_, then_ \(\frac{d}{2\beta}(m-1)<a<\frac{d}{2\beta}(m-1)\frac{1}{(2-m)_{+}}.\)
2. _If_ \(\frac{d}{2\beta}=\frac{p}{p-1}\)_, then_ \(\frac{d}{2\beta}(m-1)<a<\frac{d}{2\beta}(m-1)\frac{1}{(2-m)_{+}}.\)
3. _If_ \(\frac{d}{2\beta}<\frac{p}{p-1}\) _and_ \((2-m)_{+}<\frac{d(p-1)}{2\beta p}\)_, then_ \(\frac{p}{p-1}(m-1)<a<\frac{d}{2\beta}(m-1)\frac{1}{(2-m)_{+}},\)

_where \((z)_{+}\) denotes the positive part of a real number \(z\)._

**Remark 1.2**.: In view of the hypotheses and conclusions stated in the above theorem, some comments arise; we enumerate them in what follows.

1. The restriction on \(\beta\) in Theorem 1.1 comes from Theorem 2.1 (2).
2. The assumption \(p>1\) is needed in Corollary 2.1 and Corollary 2.2 below.
3. One may wonder if the range of \(a\) needed to obtain (1.5) is sharp. It is expected that this range can be improved.
4. Similar results were obtained in [7, 23, 24] for the standard NLH equation, that is, \(\varrho=0\) in (1.1).

**Definition 1.1** (\(\exp L^{p}\)-classical solution).: _Let \(u_{0}\in\exp L^{p}(\mathbb{R}^{d})\) and \(T>0\). A function \(u\in C((0,T];\exp L^{p}(\mathbb{R}^{d}))\bigcap L^{\infty}_{\rm loc}(0,T;L^{\infty}(\mathbb{R}^{d}))\) is said to be an \(\exp L^{p}\)-classical solution of (1.1) if \(u\in C^{1,2}((0,T)\times\mathbb{R}^{d})\), satisfies (1.1) in the classical sense and \(u(t)\to u_{0}\) in the weak\({}^{\star}\) topology as \(t\to 0\)._

**Theorem 1.3** (Non-existence).: _Assume that the nonlinear term \(f\) is continuous, \(f(x)\geqslant 0\) if \(x\geqslant 0,\) and_

\[\liminf_{\eta\to\infty}(f(\eta)\,e^{-\lambda\eta^{p}})>0\]

_for some \(\lambda>0\) and \(p>1\). Then, there exists \(0\leqslant u_{0}\in\exp L^{p}(\mathbb{R}^{d})\) such that for every \(T>0\) the Cauchy problem (1.1) with \(\beta=1\) has no nonnegative \(\exp L^{p}\)-classical solution on \([0,T).\)_

Theorem 1.3 says that local solutions do not exist for certain data in \(\exp L^{p}(\mathbb{R}^{d})\) even though a small data global existence result holds in the same space \(\exp L^{p}(\mathbb{R}^{d})\). Theorem 1.3 thus complements Theorems 1.1 and 1.2.

**Remark 1.3**.: The assumption \(\beta=1\) in Theorem 1.3 is merely due to our approach. In fact, the proof relies on the convolution formula for the Hermite heat semigroup \(e^{-tH}\) in (5.2).

**Remark 1.4**.: In a forthcoming work, we will give a complete characterization of those nonlinearities \(f\) for which the equation (1.1) admits a local solution in Lebesgue spaces \(L^{q}\) and in Orlicz spaces \(\exp L^{q}\), \(1\leq q<\infty\).

We conclude the introduction with an outline of the paper. In the next section, we recall some basic facts and useful tools about Orlicz spaces. In Section 3 we give the proof of Theorem 1.1. The fourth section is devoted to the proof of Theorem 1.2. Finally, Section 5 contains the proof of the non-existence result given in Theorem 1.3. Throughout this paper, \(C\) stands for a positive constant which may take different values at different places.

## 2. Preliminaries and key estimates
We recall the definition of the Orlicz spaces and collect some related basic facts. For a complete presentation and more details, we refer the reader to [27, 11, 28]. We also give some key estimates.

**Definition 2.1** (Orlicz space).: _Let \(\phi:\mathbb{R}^{+}\to\mathbb{R}^{+}\) be a convex increasing function such that_

\[\phi(0)=0=\lim_{s\to 0^{+}}\phi(s),\ \lim_{s\to\infty}\phi(s)=\infty.\]

_The Orlicz space \(L^{\phi}(\mathbb{R}^{d})\) is defined as follows_

\[L^{\phi}(\mathbb{R}^{d})=\left\{u\in L^{1}_{loc}(\mathbb{R}^{d}):\int_{\mathbb{R}^{d}}\phi\left(\frac{|u(x)|}{\lambda}\right)dx<\infty\text{ for some }\lambda>0\right\}\]

_endowed with the Luxemburg norm_

\[\|u\|_{L^{\phi}}=\inf\left\{\lambda>0:\int_{\mathbb{R}^{d}}\phi\left(\frac{|u(x)|}{\lambda}\right)dx\leq 1\right\}. \tag{2.1}\]

_We also consider the space_

\[L^{\phi}_{0}(\mathbb{R}^{d})=\left\{u\in L^{1}_{loc}(\mathbb{R}^{d}):\int_{\mathbb{R}^{d}}\phi\left(\frac{|u(x)|}{\lambda}\right)dx<\infty\text{ for every }\lambda>0\right\}.\]

Ioku et al. in [19, Section 2] proved that

\[L^{\phi}_{0}(\mathbb{R}^{d})=\overline{C^{\infty}_{0}(\mathbb{R}^{d})}^{\|\cdot\|_{L^{\phi}}}=\text{the closure of }C^{\infty}_{0}(\mathbb{R}^{d})\text{ in }L^{\phi}(\mathbb{R}^{d}).\]

Note that

\[L^{\phi}(\mathbb{R}^{d})=\begin{cases}L^{\phi}_{0}(\mathbb{R}^{d})=L^{p}(\mathbb{R}^{d})&\text{if }\phi(s)=s^{p}\ (1\leqslant p<\infty),\\ \exp L^{p}(\mathbb{R}^{d})&\text{if }\phi(s)=e^{s^{p}}-1\ (1\leqslant p<\infty).\end{cases}\]

**Lemma 2.1** (Inclusion property).:

1. _[23, Lemma 2.3]_ \(L^{q}(\mathbb{R}^{d})\cap L^{\infty}(\mathbb{R}^{d})\hookrightarrow\exp L^{p}_{0}(\mathbb{R}^{d})\hookrightarrow\exp L^{p}(\mathbb{R}^{d})\) _for_ \(1\leqslant q\leqslant p,\) _with_
\[\|u\|_{\exp L^{p}}\leqslant\frac{1}{(\log 2)^{\frac{1}{p}}}\left(\|u\|_{L^{q}}+\|u\|_{L^{\infty}}\right).\]
2. _[7, Lemma 2.3]_ \(L^{q}(\mathbb{R}^{d})\cap L^{\infty}(\mathbb{R}^{d})\hookrightarrow L^{\phi}_{0}(\mathbb{R}^{d})\hookrightarrow L^{\phi}(\mathbb{R}^{d})\) _for_ \(q\leqslant 2p,\) \(\phi(s)=e^{s^{p}}-1-s^{p}\ (p>1),\) _with_
\[\|u\|_{L^{\phi}(\mathbb{R}^{d})}\leqslant C(p)\left(\|u\|_{L^{q}}+\|u\|_{L^{\infty}}\right).\]
3. _[23, Lemma 2.4]_ \(\exp L^{p}(\mathbb{R}^{d})\hookrightarrow L^{q}(\mathbb{R}^{d})\) _for_ \(1\leqslant p\leqslant q<\infty,\) _with_
\[\|u\|_{L^{q}}\leqslant\left(\Gamma(\tfrac{q}{p}+1)\right)^{\frac{1}{q}}\|u\|_{\exp L^{p}},\]
_where the Gamma function is given by_ \(\Gamma(x):=\int_{0}^{\infty}s^{x-1}e^{-s}\,ds,\ x>0.\)

**Theorem 2.1** (see Theorem 1.1 in [2]).: _For \(1\leqslant p,q\leqslant\infty\) and \(\beta>0,\) set \(\sigma_{\beta}\coloneqq\frac{d}{2\beta}\Big{|}\frac{1}{p}-\frac{1}{q}\Big{|}.\)_

1. _If_ \(p,q\in(1,\infty),\) _or_ \((p,q)=(1,\infty),\) _or_ \(p=1\) _and_ \(q\in[2,\infty),\) _or_ \(p\in(1,\infty)\) _and_ \(q=1,\) _then there exists a constant_ \(C>0\) _such that_
\[\|e^{-tH^{\beta}}g\|_{L^{q}}\leqslant\begin{cases}Ce^{-td^{\beta}}\|g\|_{L^{p}}&\text{if}\quad t\geqslant 1,\\ Ct^{-\sigma_{\beta}}\|g\|_{L^{p}}&\text{if}\quad 0<t\leqslant 1.\end{cases}\] (2.2)
2.
_If_ \(0<\beta\leqslant 1,\) _then the above estimate holds for_ \(p,q\in[1,\infty].\)__ **Remark 2.1**.: * From (2.2) we get \[\|e^{-tH^{\beta}}g\|_{L^{q}}\leqslant C\|g\|_{L^{q}},\quad 1<q<\infty.\] (2.3) * Since \(t^{\sigma_{\beta}}e^{-td^{\beta}}\leqslant C\) for all \(t\geqslant 1,\) (2.2) yields \[\|e^{-tH^{\beta}}g\|_{L^{q}}\leqslant Ct^{-\sigma_{\beta}}\|g\|_{L^{p}},\quad 0 <t<\infty.\] (2.4) Fino-Kirane in [7, Proposition 1] obtained several fixed time \(L^{q}-\exp L^{p}\) estimates for the fractional heat propagator \(e^{-t(-\Delta)^{\beta}}\) with \(0<\beta\leqslant 2.\) See also [23, Proposition 3.2] and [9, Lemma 3.1]. In the next proposition we generalize this result to fractional harmonic oscillator \(H^{\beta}\) for any \(\beta>0.\) Specifically, we have following **Proposition 2.1**.: _Let \(1<q\leq p<\infty,t>0\) and \(1\leq r\leq\infty.\) Then_ 1. \(\|e^{-tH^{\beta}}g\|_{\exp L^{p}}\leq C\left\|g\right\|_{\exp L^{p}}\) _for_ \(\beta>0\)__ 2. \(\|e^{-tH^{\beta}}g\|_{\exp L^{p}}\leq Ct^{-\frac{d}{2\beta q}}\,\left(\log(t^{- \frac{d}{2\beta}}+1)\right)^{-\frac{1}{p}}\,\|g\|_{L^{q}}\) _for_ \(\beta>0\)__ 3. \(\|e^{-tH^{\beta}}g\|_{\exp L^{p}}\leq\frac{C}{(\log 2)^{\frac{1}{p}}}\left[t^{- \frac{d}{2\beta r}}\|g\|_{L^{r}}+\|g\|_{L^{q}}\right]\) _for_ \(0<\beta\leq 1.\)__ Proof.: (1) By Taylor expansion and Theorem 2.1, for \(\lambda>0,\) we have \[\int_{\mathbb{R}^{d}}\left(\exp\left|\frac{e^{-tH^{\beta}}g}{\lambda}\right|^ {p}-1\right)\,dx\leq\sum_{k=1}^{\infty}\frac{C^{pk}\|g\|_{L^{pk}}^{pk}}{k! \lambda^{pk}}=\int_{\mathbb{R}^{d}}\left(\exp\left|\frac{Cg}{\lambda}\right|^ {p}-1\right)\,dx.\] Thus, we get \[\|e^{-tH^{\beta}}g\|_{\exp L^{p}} =\inf\left\{\lambda>0:\int_{\mathbb{R}^{d}}\left(\exp\left|\frac {e^{-tH^{\beta}}g}{\lambda}\right|^{p}-1\right)\,dx\leq 1\right\}\] \[\leq\inf\left\{\lambda>0:\int_{\mathbb{R}^{d}}\left(\exp\left| \frac{Cg}{\lambda}\right|^{p}-1\right)\,dx\leq 1\right\}=C\|g\|_{\exp L^{p}}.\] (2) By Theorem 2.1 for \(q\leq p,\) we obtain \[\int_{\mathbb{R}^{d}}\left(\exp\left|\frac{e^{-tH^{\beta}}g}{ \lambda}\right|^{p}-1\right)\,dx \leq\sum_{k=1}^{\infty}\frac{C^{pk}t^{-\frac{d}{2\beta}(\frac{1}{ q}-\frac{1}{pk})pk}\|g\|_{L^{q}}^{pk}}{k!\lambda^{pk}}\] \[=t^{\frac{d}{2\beta}}\left(\exp\left(\frac{Ct^{-\frac{d}{2\beta q }}\|f\|_{L^{q}}}{\lambda}\right)^{p}-1\right).\] This leads to \[\|e^{-tH^{\beta}}f\|_{\exp L^{p}}\leq Ct^{-\frac{d}{2\beta q}}\,\left(\log(t^ {-\frac{d}{2\beta}}+1)\right)^{-\frac{1}{p}}\|f\|_{L^{q}}.\] (3) By Lemma 2.1 (1), we have \[\|e^{-tH^{\beta}}f\|_{\exp L^{p}}\leq\frac{1}{(\log 2)^{\frac{1}{p}}}\left(\|e^ {-tH^{\beta}}f\|_{L^{q}}+\|e^{-tH^{\beta}}f\|_{L^{\infty}}\right).\] By Theorem 2.1, we obtain \[\|e^{-tH^{\beta}}f\|_{\exp L^{p}}\leq\frac{C}{(\log 2)^{\frac{1}{p}}}\left(\|f \|_{L^{q}}+t^{-\frac{d}{2\beta r}}\|f\|_{L^{r}}\right).\] We also need the following smoothing estimate to deal with the local well-posedness. **Proposition 2.2**.: _Let \(1\leqslant p<\infty\) and \(\beta>0.\) If \(g\in\exp L_{0}^{p}(\mathbb{R}^{d})\), then_ \[e^{-tH^{\beta}}g\in C([0,\infty),\exp L_{0}^{p}(\mathbb{R}^{d})).\] Proof.: The proof of this result uses similar idea as in [7, 22]. By density of \(C_{0}^{\infty}(\mathbb{R}^{d})\) in \(\exp L_{0}^{p}(\mathbb{R}^{d})\), it suffices to show that \[\lim_{t\to 0}\left\|e^{-tH^{\beta}}g-g\right\|_{L^{q}}=0 \tag{2.5}\] for all \(g\in C_{0}^{\infty}(\mathbb{R}^{d})\) and \(1\leqslant q\leqslant\infty\). 
We note that \[e^{-tH^{\beta}}g(x)=\sum_{k=0}^{\infty}e^{-t(2k+d)^{\beta}}P_{k}g(x),\ P_{k}g= \sum_{|\alpha|=k}\langle g,\Phi_{\alpha}\rangle\,\Phi_{\alpha},\] where \(\Phi_{\alpha},\ \alpha\in\mathbb{N}^{d}\), are the normalised Hermite functions. Thanks to [29, Lemma 1.5.2], we have the estimate \[\|\Phi_{\alpha}\|_{L^{q}}\leqslant C\,(1+|\alpha|)^{\frac{d}{4}},\quad\forall \ 1\leqslant q\leqslant\infty,\ \alpha\in\mathbb{N}^{d}. \tag{2.6}\] We emphasise that (2.6) was obtained in [29] for dimension \(d=1\). The \(d-\)dimensional estimate (2.6) easily follows by observing that \(\Phi_{\alpha},\ \alpha\in\mathbb{N}^{d}\) are the tensor product of one dimensional Hermite functions. Now, since \(H\Phi_{\alpha}=(2|\alpha|+d)\,\Phi_{\alpha}\), an integration by parts yields \[\langle g,\Phi_{\alpha}\rangle=(d+2|\alpha|)^{-N}\,\langle H^{N}g,\Phi_{\alpha }\rangle,\ N\in\mathbb{N}.\] Owing to \(H^{N}g\in C_{0}^{\infty}(\mathbb{R}^{d})\) and using Holder's inequality, we obtain that \[|\langle g,\Phi_{\alpha}\rangle|\leqslant C(d+2|\alpha|)^{-N+\frac{d}{4}}\,\| H^{N}\,g\|_{L^{q^{\prime}}}.\] Hence \[\|P_{k}g\|_{L^{q}} \leqslant C\sum_{|\alpha|=k}\,(d+2|\alpha|)^{-N+\frac{d}{2}}\|H^{N}\,g\|_{ L^{q^{\prime}}}\] \[\leqslant C(d+k)^{-N+\frac{d}{2}}\sum_{|\alpha|=k}1\] \[\leqslant C(d+k)^{-N+\frac{3d}{2}-1},\] where we have used the fact that \[\sum_{|\alpha|=k}1=\binom{k+d-1}{k}\lesssim(d+k)^{d-1}.\] Since \(g\in C_{0}^{\infty}(\mathbb{R}^{d})\), we get \[\left\|e^{-tH^{\beta}}g-g\right\|_{L^{q}} = \left\|\sum_{k=0}^{\infty}[e^{-t(2k+d)^{\beta}}P_{k}g-P_{k}g]\right\| _{L^{q}} \tag{2.7}\] \[\leq \sum_{k=0}^{\infty}\left(1-e^{-t(2k+d)^{\beta}}\right)\left\|P_{k} g\right\|_{L^{q}}\] \[\leq C\sum_{k=0}^{\infty}\left(1-e^{-t(2k+d)^{\beta}}\right)\,(d+k)^{ -N+\frac{3d}{2}-1}.\] Therefore, by taking \(N\) large enough and the limit as \(t\to 0\) in (2.7), we infer \[\lim_{t\to 0}\left\|e^{-tH^{\beta}}g-g\right\|_{L^{q}}=0.\] This completes the proof. As a consequence, we have the following: **Corollary 2.1**.: _Let \(0<\beta\leq 1,\ p>1,\ d>\frac{2\beta p}{p-1},\ r>\frac{d}{2\beta}.\) Then, for every \(g\in L^{1}(\mathbb{R}^{d})\cap L^{r}(\mathbb{R}^{d})\), we have_ \[\left\|e^{-tH^{\beta}}g\right\|_{\exp L^{p}}\leq\kappa(t)\left[\|g\|_{L^{1}}+ \|g\|_{L^{r}}\right],\ \forall\,t>0,\] _where \(\kappa\in L^{1}(0,\infty)\) is given by_ \[\kappa(t)=\frac{C}{(\log 2)^{\frac{1}{p}}}\,\min\{t^{-\frac{d}{2\beta r}}+1,t^ {-\frac{d}{2\beta}}(\log(t^{-\frac{d}{2\beta}}+1))^{-\frac{1}{p}}\}.\] Proof.: By Proposition 2.1 (2) with \(q=1\), we have \[\left\|e^{-tH^{\beta}}g\right\|_{\exp L^{p}}\leq Ct^{-\frac{d}{2\beta}}\, \left(\log(t^{-\frac{d}{2\beta}}+1)\right)^{-\frac{1}{p}}\,\|g\|_{L^{1}}. \tag{2.8}\] On the other hand, by Proposition 2.1 (3) with \(q=1\), we obtain \[\left\|e^{-tH^{\beta}}g\right\|_{\exp L^{p}} \leq\frac{C}{(\log 2)^{\frac{1}{p}}}\left[t^{-\frac{d}{2\beta r}} \|g\|_{L^{r}}+\|g\|_{L^{1}}\right]\] \[\leq\frac{C}{(\log 2)^{\frac{1}{p}}}(t^{-\frac{d}{2\beta r}}+1) \left[\|g\|_{L^{r}}+\|g\|_{L^{1}}\right] \tag{2.9}\] Combining (2.8) and (2.9), we obtain \[\left\|e^{-tH^{\beta}}g\right\|_{\exp L^{p}}\leq\kappa(t)\left[\|g\|_{L^{1}}+ \|g\|_{L^{r}}\right],\ \forall\,t>0.\] By the assumption \(d>\frac{2\beta p}{p-1},\ r>\frac{d}{2\beta}\) we see that \(\kappa\in L^{1}(0,\infty)\). For \(d=\frac{2\beta p}{p-1}\), we also have similar result. For this we first define the suitable Orlicz space. 
Let \(\phi(s):=e^{s^{p}}-1-s^{p},\ s\geq 0\) and \(L^{\phi}\) be the associated Orlicz space with the Luxemburg norm (2.1). From the definition, we have \[C_{1}\|g\|_{\exp L^{p}}\leq\|g\|_{L^{p}}+\|g\|_{L^{\phi}}\leq C_{2}\|g\|_{\exp L^{ p}}, \tag{2.10}\] for some \(C_{1},\ C_{2}>0.\) **Corollary 2.2**.: _Let \(0<\beta\leq 1,\ p>1,\ r>\frac{d}{2\beta}=\frac{p}{p-1}.\) For every \(g\in L^{1}(\mathbb{R}^{d})\cap L^{2p}(\mathbb{R}^{d})\cap L^{r}(\mathbb{R}^{d}),\) we have_ \[\|e^{-tH^{\beta}}g\|_{L^{\phi}}\leq\zeta(t)\left[\|g\|_{L^{1}}+\|g\|_{L^{2p}}+ \|g\|_{L^{r}}\right],\ \forall\ t>0,\] _where \(\zeta\in L^{1}(0,\infty)\) is given by_ \[\zeta(t)=\frac{C}{(\log 2)^{\frac{1}{p}}}\,\min\{t^{-\frac{d}{2\beta r}}+1,t^ {-\frac{p}{p-1}}(\log(t^{-\frac{p}{p-1}}+1))^{-\frac{1}{2p}}\}.\] Proof.: In view of Proposition 2.1, we obtain \[\int_{\mathbb{R}^{d}}\phi\left(\frac{|e^{-tH^{\beta}}g|}{\lambda} \right)\,dx =\sum_{k\geq 2}\frac{\|e^{-tH^{\beta}}g\|_{L^{pk}}^{pk}}{ \lambda^{pk}k!}\] \[\leq\sum_{k\geq 2}\frac{C^{pk}t^{-\frac{d}{2\beta}(1-\frac{1}{pk}) pk}\|g\|_{L^{1}}^{pk}}{\lambda^{pk}k!}\] \[=\sum_{k\geq 2}\frac{C^{pk}t^{-\frac{p}{p-1}(1-\frac{1}{pk}) pk}\|g\|_{L^{1}}^{pk}}{\lambda^{pk}k!}=t^{\frac{p}{p-1}}\,\phi\left(Ct^{-\frac{p}{p-1 }}\frac{\|g\|_{L^{1}}}{\lambda}\right)\] \[\leq t^{\frac{p}{p-1}}\,\left(\exp\left\{\left(Ct^{-\frac{p}{p-1 }}\frac{\|g\|_{L^{1}}}{\lambda}\right)^{2p}\right\}-1\right).\] The last step we have used the fact that \(e^{s}-1-s\leq e^{s^{2}}-1\) for every \(s\geq 0.\) Thus we obtain that \[\|e^{-tH^{\beta}}g\|_{L^{\phi}} \leq\inf\left\{\lambda>0:t^{\frac{p}{p-1}}\,\left(\exp\left\{ \left(Ct^{-\frac{p}{p-1}}\frac{\|g\|_{L^{1}}}{\lambda}\right)^{2p}\right\}-1 \right)\leq 1\right\}\] \[=Ct^{-\frac{p}{p-1}}\left(\log(t^{-\frac{p}{p-1}}+1)\right)^{- \frac{1}{2p}}\|g\|_{L^{1}}.\] In view of the embedding \(L^{2p}\cap L^{\infty}\to L^{\phi},\) we also have \[\|e^{-tH^{\beta}}g\|_{L^{\phi}}\leq(\log 2)^{-\frac{1}{p}}\left[\|e^{-tH^{\beta} }g\|_{L^{\infty}}+\|e^{-tH^{\beta}}g\|_{L^{2p}}\right].\] By proposition 2.1, and let \(r>\frac{d}{2\beta}=\frac{p}{p-1}\) we obtain that \[\|e^{-tH^{\beta}}g\|_{L^{\phi}}\leq(\log 2)^{-\frac{1}{p}}\left[t^{-\frac{d}{2 \beta r}}\|g\|_{L^{r}}+\|g\|_{L^{2p}}\right].\] Combining above inequalities, we obtain \[\|e^{-tH^{\beta}}g\|_{L^{\phi}}\leq\zeta(t)\,[\|g\|_{L^{1}}+\|g\|_{L^{2p}}+\|g\|_{L ^{r}}],\ \forall\ t>0.\] Since \(\frac{d}{2\beta r}<1,\ \frac{p}{p-1}-\frac{p}{p-1}\frac{1}{2p}=\frac{2p}{2(p-1)}>1\) we see that \(\zeta\in L^{1}(0,\infty).\) **Lemma 2.2**.: _[_4_, Lemma 4.1.5]__. Let \(\beta>0\). Let \(X\) be a Banach space and \(g\in L^{1}(0,T:X),\) then_ \[t\longmapsto\int_{0}^{t}e^{-(t-\tau)H^{\beta}}\,g(\tau)\,d\tau\in C([0,T];X).\] **Proposition 2.3**.: _[_23_, Proposition 2.9]__. Let \(1\leq p<\infty\) and \(u\in C([0,T],\exp L^{p}_{0}(\mathbb{R}^{d}))\) for some \(T>0.\) Then, for every \(\lambda>0,\) we have_ \[(e^{\lambda|u|^{p}}-1)\in C([0,T],L^{r}(\mathbb{R}^{d})),\ 1\leq r<\infty.\] **Corollary 2.3** ([23]).: _Let \(1\leq p<\infty\) and \(u\in C([0,T];\exp L^{p}_{0}(\mathbb{R}^{d}))\) for some \(T>0.\) Assume that \(f\) satisfies (1.2). Then, for every \(p\leq r<\infty,\) we have_ \[f(u)\in C([0,T];L^{r}(\mathbb{R}^{d})).\] To prove the global existence results, the following estimate of nonlinear term will be handy later. **Lemma 2.3**.: _[_23_, Lemma 2.6, p. 2387]__. Let \(\lambda>0,\ 1\leq p,q<\infty\) and \(K>0\) such that \(\lambda qK^{p}\leq 1\). 
Assume that_

\[\|u\|_{\exp L^{p}}\leq K.\]

_Then_

\[\|e^{\lambda|u|^{p}}-1\|_{L^{q}}\leq(\lambda qK^{p})^{\frac{1}{q}}.\]

**Lemma 2.4**.: _[23, Lemma 2.6], [7]. Let \(m\geq p>1,\ a>\frac{p(m-1)}{p-1}.\) Define \(\sigma=\frac{1}{m-1}-\frac{d}{2\beta a}.\) Assume that \(d>\frac{2\beta p}{p-1},\ a<\frac{d(m-1)}{2\beta}\frac{1}{(2-m)_{+}}\). Then there exist \(r,q,\{\theta_{k}\}_{k=0}^{\infty},\{\rho_{k}\}_{k=0}^{\infty}\) such that \(1<r\leq a,\ q\geq 1\) and \(\frac{1}{r}=\frac{1}{a}+\frac{1}{q},\ 0<\theta_{k}<1\) and \(\frac{1}{q(pk+m-1)}=\frac{\theta_{k}}{a}+\frac{1-\theta_{k}}{\rho_{k}},\ p\leq\rho_{k}<\infty,\ \frac{d}{2\beta}\big{(}\frac{1}{r}-\frac{1}{a}\big{)}<1\),_

\[\sigma[1+\theta_{k}(pk+m-1)]<1,\]
\[1-\frac{d}{2\beta}\left(\frac{1}{r}-\frac{1}{a}\right)-\sigma\theta_{k}(pk+m-1)=0.\]

## 3. Proof of Theorem 1.1

We closely follow the method of proof developed in [7, Theorem 1.3] and [19, 22, 23]. Thus we shall only sketch the proof. The main idea is to decompose the initial data \(u_{0}\in\exp L^{p}_{0}(\mathbb{R}^{d}),\) using the density of \(C^{\infty}_{0}(\mathbb{R}^{d}),\) into a small part in \(\exp L^{p}(\mathbb{R}^{d})\) and a smooth one. Let \(u_{0}\in\exp L^{p}_{0}(\mathbb{R}^{d})\). Then by density, for every \(\epsilon>0\) there exists \(v_{0}\in C^{\infty}_{0}(\mathbb{R}^{d})\) such that \(u_{0}=v_{0}+w_{0}\) with

\[\|w_{0}\|_{\exp L^{p}(\mathbb{R}^{d})}\leqslant\epsilon.\]

In order to study the problem (1.1), we consider the following two problems:

\[\begin{cases}\partial_{t}v+(-\Delta+|x|^{2})^{\beta}v=f(v)\\ v(x,0)=v_{0}\in C^{\infty}_{0}(\mathbb{R}^{d})\end{cases}\qquad t>0,\ x\in\mathbb{R}^{d} \tag{3.1}\]

and

\[\begin{cases}\partial_{t}w+(-\Delta+|x|^{2})^{\beta}w=f(w+v)-f(v)\\ w(x,0)=w_{0},\ \|w_{0}\|_{\exp L^{p}}\leqslant\epsilon\end{cases}\qquad t>0,\ x\in\mathbb{R}^{d}. \tag{3.2}\]

We observe that \(u=v+w\) is a mild solution of (1.1) whenever \(v,w\) are mild solutions of (3.1) and (3.2) respectively. Now we prove the following local well-posedness results for (3.1) and (3.2).

**Lemma 3.1** (local well-posedness).: _Let \(v_{0}\in L^{p}(\mathbb{R}^{d})\cap L^{\infty}(\mathbb{R}^{d}),p>1,\beta>0\). Assume that \(f\) satisfies (1.2). Then there exists \(T=T(v_{0})>0\) and a mild solution \(v\in C([0,T],\exp L^{p}_{0}(\mathbb{R}^{d}))\cap L^{\infty}(0,T;L^{\infty}(\mathbb{R}^{d}))\) of (3.1)._

**Lemma 3.2**.: _Let \(w_{0}\in\exp L^{p}_{0}(\mathbb{R}^{d}),p>1,\beta>0\). Assume that \(f\) satisfies (1.2). Let \(T>0\) and \(v\in L^{\infty}(0,T;L^{\infty}(\mathbb{R}^{d}))\) be given in Lemma 3.1. Then for \(\|w_{0}\|_{\exp L^{p}}\leqslant\epsilon\) with \(\epsilon\ll 1\) small enough, there exists \(\tilde{T}=\tilde{T}(w_{0},\epsilon,v)>0\) and a mild solution \(w\in C([0,\tilde{T}],\exp L^{p}_{0}(\mathbb{R}^{d}))\) of (3.2)._

To deal with the above lemma, we need the following result.

**Lemma 3.3** (see Lemma 4.4 in [23]).: _Let \(v\in L^{\infty}(0,T;L^{\infty}(\mathbb{R}^{d}))\) for some \(T>0\). Let \(1<p\leqslant q<\infty,\) and \(w_{1},w_{2}\in\exp L^{p}(\mathbb{R}^{d})\) with \(\|w_{1}\|_{\exp L^{p}},\ \|w_{2}\|_{\exp L^{p}}\leqslant M\) for sufficiently small \(M>0\) (namely \(2^{p}\lambda qM^{p}\leqslant 1\), where \(\lambda\) is given as in (1.2)).
Then there exists a constant \(C_{q}>0\) such that_ \[\|f(w_{1}+v)-f(w_{2}+v)\|_{L^{q}}\leqslant C_{q}e^{2^{p-1}\lambda\|v\|_{\infty }^{p}}\,\|w_{1}-w_{2}\|_{\exp L^{p}}.\] Proof of Lemma 3.1.: We consider \[Y_{T}:=\Big{\{}v\in C([0,T],\exp L^{p}_{0}(\mathbb{R}^{d}))\cap L^{\infty}(0,T ;L^{\infty}(\mathbb{R}^{d})):\|v\|_{Y_{T}}\leqslant 2\|v_{0}\|_{L^{p}\cap L^{ \infty}}\Big{\}},\] where \(\|v\|_{Y_{T}}:=\|v\|_{L^{\infty}(0,T;L^{p})}+\|v\|_{L^{\infty}(0,T;L^{\infty})}\) and \(\|v_{0}\|_{L^{p}\cap L^{\infty}}:=\|v_{0}\|_{L^{p}}+\|v_{0}\|_{L^{\infty}}.\) Put \[\Phi(v):=e^{-tH^{\beta}}v_{0}+\int_{0}^{t}e^{-(t-\tau)H^{\beta}}\,f(v(\tau))\,d\tau.\] For small \(T>0\), we claim that \(\Phi:Y_{T}\to Y_{T}\) is a contraction. In view of Lemma 2.1 (2), Proposition 2.2, Theorem 2.1 and Lemma 2.2, \(\Phi\) maps from \(Y_{T}\to Y_{T}.\) Now taking Theorem 2.1 into account and following the arguments of [7, Lemma 3.1], we obtain \(\Phi\) is a contraction. Now Banach fixed point theorem gives the desired result. Proof of Lemma 3.2.: For \(\tilde{T}>0\), we consider \[W_{\tilde{T}}=\left\{w\in C([0,\tilde{T}],\exp L_{0}^{p}(\mathbb{R}^{d}))): \|w\|_{L^{\infty}(0,\tilde{T};\exp L_{0}^{p})}\leq 2\epsilon\right\}.\] Put \[\tilde{\Phi}(w):=e^{-tH^{\beta}}w_{0}+\int_{0}^{t}e^{-(t-\tau)H^{\beta}}\left[ f(w(\tau)+v(\tau))-f(v(\tau))\right]d\tau.\] We shall prove that \(\tilde{\Phi}:W_{\tilde{T}}\to W_{\tilde{T}}\) is a contraction map for sufficiently small \(\epsilon\) and \(\tilde{T}>0\). To do that let \(w_{1},w_{2}\in W_{\tilde{T}}.\) By Lemma 2.1 (1), we obtain \[\|\tilde{\Phi}(w_{1})-\tilde{\Phi}(w_{2})\|_{\exp L^{p}}\leq\frac{C}{(\ln 2) ^{\frac{1}{p}}}\left(\|\tilde{\Phi}(w_{1})-\tilde{\Phi}(w_{2})\|_{L^{p}}+\| \tilde{\Phi}(w_{1})-\tilde{\Phi}(w_{2})\|_{L^{\infty}}\right).\] In view of Theorem 2.1 and by Lemma 3.3, we obtain \[\|\tilde{\Phi}(w_{1})-\tilde{\Phi}(w_{2})\|_{L^{\infty}}\leq Ce^{2^{p-1}\lambda |v|_{L^{\infty}}^{p}}\tilde{T}^{1-\frac{d}{2\beta r}}\left\|w_{1}-w_{2}\right\| _{L^{\infty}(0,\tilde{T};\exp L^{p})},\] where \(r>0\) is an arbitrary constant such that \(r>\max\{p,\frac{d}{2\beta}\}\) and \(2^{p}\lambda r(2\epsilon)^{p}\leq 1.\) Similarly, we also obtain \[\|\tilde{\Phi}(w_{1})-\tilde{\Phi}(w_{2})\|_{L^{p}}\leq Ce^{2^{p-1}\lambda|v| _{\infty}^{p}}\tilde{T}\left\|w_{1}-w_{2}\right\|_{L^{\infty}(0,\tilde{T},\exp L ^{p})}.\] Thus by choosing \(\epsilon\ll 1\) small, we infer that \[\|\tilde{\Phi}(w_{1})-\tilde{\Phi}(w_{2})\|_{\exp L^{p}} \leq Ce^{2^{p-1}\lambda|v|_{\infty}^{p}}(\tilde{T}+\tilde{T}^{1- \frac{d}{2\beta r}})\left\|w_{1}-w_{2}\right\|_{L^{\infty}(0,\tilde{T};\exp L ^{p})}\] \[\leq\frac{1}{2}\|w_{1}-w_{2}\|_{L^{\infty}(0,\tilde{T};\exp L^{p})},\] where \(\tilde{T}\ll 1\) is chosen small enough such that \(Ce^{2^{p-1}\lambda|v|_{\infty}^{p}}(\tilde{T}+\tilde{T}^{1-\frac{d}{2\beta r }})\leq\frac{1}{2}.\) Taking Propositions 2.1 and 2.2 into account and following the arguments as in the proof of [7, Lemma 3.2], we get the desired result. We shall use Lemma 3.1 and Lemma 3.2 to prove Theorem 1.1. Proof of Theorem 1.1.: We choose \(T,\epsilon,\) and \(\tilde{T}\) in the following way. Let \(r>\max\{p,\frac{d}{2\beta}\}\) and fix \(\epsilon>0\) such that \(2^{p}\lambda r(2\epsilon)^{p}\leq 1.\) In order to use Lemma 3.1, we first decompose \(u_{0}=v_{0}+w_{0}\) with \(v_{0}\in C_{0}^{\infty}(\mathbb{R}^{d})\) and \(\|w_{0}\|_{\exp L^{p}}\leq\epsilon\) as before. 
Then by Lemma 3.1, there exist a time \(T>0\) and a mild solution \(v\in C([0,T],\exp L_{0}^{p}(\mathbb{R}^{d}))\cap L^{\infty}(0,T;L^{\infty}(\mathbb{R}^{d}))\) of (3.1) such that

\[\|v\|_{L^{\infty}(0,T;L^{p}\cap L^{\infty})}\leq 2\|v_{0}\|_{L^{p}\cap L^{\infty}}.\]

Next we choose \(\tilde{T}>0\) such that \(\tilde{T}<T\) and

\[Ce^{2^{p-1}\lambda\|v\|_{L^{p}\cap L^{\infty}}^{p}}(\tilde{T}+\tilde{T}^{1-\frac{d}{2\beta r}})\leqslant\frac{1}{2}.\]

Then by Lemma 3.2, there exists a mild solution \(w\in C([0,\tilde{T}],\exp L^{p}_{0}(\mathbb{R}^{d}))\) of (3.2). Hence \(u:=v+w\) is a mild solution of (1.1) in \(C([0,\tilde{T}],\exp L^{p}_{0}(\mathbb{R}^{d}))\). This proves the existence part. Taking Theorem 2.1 and Proposition 2.3 into account, and closely following the method of proof used in [7, Theorem 1.3], uniqueness follows. Thus we shall omit the details.

## 4. Proof of Theorem 1.2

Proof of Theorem 1.2.: \((1):\) We closely follow the approach in [7, Theorem 1.3] and [19, 22, 23]. We consider the associated integral equation

\[u(t)=e^{-tH^{\beta}}u_{0}+\int_{0}^{t}e^{-(t-s)H^{\beta}}\,f(u(s))\,ds \tag{4.1}\]

where \(\|u_{0}\|_{\exp L^{p}}\leqslant\epsilon\), with small \(\epsilon>0\) to be fixed later. The nonlinearity \(f\) satisfies \(f(0)=0\) and

\[|f(u)-f(v)|\leqslant C|u-v|\left(|u|^{m-1}e^{\lambda|u|^{p}}+|v|^{m-1}e^{\lambda|v|^{p}}\right), \tag{4.2}\]

for some constants \(C>0\) and \(\lambda>0\). Here \(p>1\) and \(m\) is larger than \(1+\frac{2p\beta}{d}\). From (4.2), we see that

\[|f(u)-f(v)|\leqslant C|u-v|\sum_{k=0}^{\infty}\frac{\lambda^{k}}{k!}\left(|u|^{pk+m-1}+|v|^{pk+m-1}\right). \tag{4.3}\]

First we shall give the proof of Theorem 1.2(1). Let \(M>0\) and

\[Y_{M}=\{u\in L^{\infty}\left((0,\infty),\exp L^{p}(\mathbb{R}^{d})\right):\sup_{t>0}t^{\sigma}\|u(t)\|_{L^{a}}+\|u\|_{L^{\infty}((0,\infty),\exp L^{p}(\mathbb{R}^{d}))}\leqslant M\},\]

where \(a>\frac{d(m-1)}{2\beta}\geqslant p\) and \(\sigma=\frac{1}{(m-1)}-\frac{d}{2\beta a}=\frac{d}{2\beta}(\frac{2\beta}{d(m-1)}-\frac{1}{a})>0\). Let \(\rho(u,v)=\sup_{t>0}\left(t^{\sigma}\|u(t)-v(t)\|_{L^{a}}\right).\) It is easy to see that \((Y_{M},\rho)\) is a complete metric space. Now for \(u\in Y_{M}\), let the mapping \(\Phi\) be defined by

\[\Phi[u](t)=e^{-tH^{\beta}}u_{0}+\int_{0}^{t}e^{-(t-\tau)H^{\beta}}\left(f(u(\tau))\right)\,d\tau. \tag{4.4}\]

We shall show that \(\Phi\) maps \(Y_{M}\) into \(Y_{M}\). By Proposition 2.1 (1), Theorem 2.1 and Lemma 2.1 (3), we have

\[\|e^{-tH^{\beta}}u_{0}\|_{\exp L^{p}}\leqslant C\|u_{0}\|_{\exp L^{p}}, \tag{4.5}\]

and

\[t^{\sigma}\|e^{-tH^{\beta}}u_{0}\|_{L^{a}}\leq Ct^{\sigma}t^{-\frac{d}{2\beta}(\frac{2\beta}{d(m-1)}-\frac{1}{a})}\|u_{0}\|_{L^{\frac{d(m-1)}{2\beta}}}=C\|u_{0}\|_{L^{\frac{d(m-1)}{2\beta}}}\leq C\|u_{0}\|_{\exp L^{p}}, \tag{4.6}\]

where we have used \(1<p\leq\frac{d(m-1)}{2\beta}<a\).

### The case \(d>\frac{2\beta p}{p-1}\)

Let \(u\in Y_{M}\).
Then by Proposition 2.1 and Corollary 2.1, we obtain for \(q>\frac{d}{2\beta}\), \[\|\Phi(u)(t)\|_{\exp L^{p}} \leq \|e^{-tH^{\beta}}u_{0}\|_{\exp L^{p}}+\left\|\int_{0}^{t}e^{-(t- \tau)H^{\beta}}\left(f(u(\tau))\right)\,d\tau\right\|_{\exp L^{p}}\] \[\leq C\|u_{0}\|_{\exp L^{p}}+\int_{0}^{t}\left\|e^{-(t-\tau)H^{\beta} }\left(f(u(\tau))\right)\right\|_{\exp L^{p}}d\tau\] \[\leq C\|u_{0}\|_{\exp L^{p}}+\int_{0}^{t}\kappa(t-\tau)\,\|f(u(\tau) \|_{L^{1}\cap L^{q}}d\tau\] \[\leq C\|u_{0}\|_{\exp L^{p}}+\,\|f(u(\tau)\|_{L^{\infty}(0,\infty;(L^{ 1}\cap L^{q})}\int_{0}^{\infty}\kappa(\tau)d\tau\] \[\leq C\|u_{0}\|_{\exp L^{p}}+C\,\|f(u(\tau)\|_{L^{\infty}(0,\infty;(L^{ 1}\cap L^{q})},\] where \(\kappa(\tau)\) is as in Corollary 2.1. By (1.3), we have \[|f(u)|\leq C|u|^{m}(e^{\lambda|u|^{p}}-1)+C|u|^{m},\ m\geq p.\] In view of Holder's inequality and Lemma 2.1(3), for \(1\leq r\leq q,m\geq p\), we get \[\|f(u)\|_{L^{r}}\leq C\|u\|_{\exp L^{p}}^{m}(\|e^{\lambda|u|^{p}}-1\|_{L^{2r}}+1). \tag{4.7}\] Hence by Lemma 2.3 and since \(u\in Y_{M}\), we have for \(2q\lambda M^{p}\leq 1\), \[\|f(u)\|_{L^{\infty}(0,\infty;L^{r})}\leq CM^{m}. \tag{4.8}\] Finally we obtain \[\|\Phi(u)\|_{L^{\infty}(0,\infty,\exp L^{p})}\leq C\|u_{0}\|_{\exp L^{p}}+CM^{ m}\leq C\epsilon+CM^{m}.\] Let \(u,v\) be two elements of \(Y_{M}.\) By using (4.3) and Proposition 2.1, one obtain \[t^{\sigma}\|\Phi(u)(t)-\Phi(v)(t)\|_{L^{a}}\leq C\rho(u,v)\sum_{k=0}^{\infty}( C\lambda)^{k}M^{pk+m-1}. \tag{4.9}\] In fact we obtain \[t^{\sigma}\|\Phi(u)(t)-\Phi(v)(t)\|_{L^{a}}\leq t^{\sigma}\int_{0}^{ t}\left\|e^{-(t-\tau)H^{\beta}}(f(u(\tau))-f(v(\tau)))\,\right\|_{L^{a}}\,d\tau\] \[\quad\leq C\sum_{k=0}^{\infty}\frac{\lambda^{k}}{k!}t^{\sigma}\int _{0}^{t}(t-\tau)^{-\frac{d}{2\beta}(\frac{1}{r}-\frac{1}{a})}\|(u-v)(|u|^{pk+m- 1}+|v|^{pk+m-1})\|_{L^{r}}\,d\tau,\] \(1\leq r\leq a.\) Applying Holder's inequality, we obtain \[t^{\sigma}\|\Phi(u)(t)-\Phi(v)(t)\|_{L^{a}}\] \[\quad\leq C\sum_{k=0}^{\infty}\frac{\lambda^{k}}{k!}t^{\sigma} \int_{0}^{t}(t-\tau)^{-\frac{d}{2\beta}(\frac{1}{r}-\frac{1}{a})}\|(u-v)\|_{L^ {a}}\|(|u|^{pk+m-1}+|v|^{pk+m-1})\|_{L^{q}}\,d\tau\] \[\quad\leq C\sum_{k=0}^{\infty}\frac{\lambda^{k}}{k!}t^{\sigma} \int_{0}^{t}(t-\tau)^{-\frac{d}{2\beta}(\frac{1}{r}-\frac{1}{a})}\|(u-v)\|_{L ^{a}}[\|u\|_{L^{q(pk+m-1)}}^{pk+m-1}+\|v\|_{L^{q(pk+m-1)}}^{pk+m-1}]\,d\tau.\] Using interpolation inequality with \(\frac{1}{q(pk+m-1)}=\frac{\theta}{a}+\frac{1-\theta}{\rho},\ 0\leq\theta\leq 1\) and \(p\leq\rho<\infty\), we find that \[t^{\sigma}\|\Phi(u)(t)-\Phi(v)(t)\|_{L^{a}}\] \[\quad\leq C\sum_{k=0}^{\infty}\frac{\lambda^{k}}{k!}t^{\sigma} \int_{0}^{t}(t-\tau)^{-\frac{d}{2\beta}(\frac{1}{r}-\frac{1}{a})}\|(u-v)\|_{L ^{a}}\] \[\quad\left[\|u\|_{L^{a}}^{(pk+m-1)\theta}\,\|u\|_{L^{\rho}}^{(pk+ m-1)(1-\theta)}+\|v\|_{L^{a}}^{(pk+m-1)\theta}\,\|v\|_{L^{\rho}}^{(pk+m-1)(1- \theta)}\right]\,d\tau.\] In view of the embedding (Lemma 2.1), we obtain \[t^{\sigma}\|\Phi(u)(t)-\Phi(v)(t)\|_{L^{a}}\] \[\quad\leq C\sum_{k=0}^{\infty}\frac{\lambda^{k}}{k!}t^{\sigma} \int_{0}^{t}(t-\tau)^{-\frac{d}{2\beta}(\frac{1}{r}-\frac{1}{a})}\|(u-v)\|_{L ^{a}}\Gamma\left(\frac{\rho}{p}+1\right)^{\frac{(pk+m-1)(1-\theta)}{\rho}}\] \[\quad\left[\|u\|_{L^{a}}^{(pk+m-1)\theta}\,\|u\|_{\exp L^{p}}^{( pk+m-1)(1-\theta)}+\|v\|_{L^{a}}^{(pk+m-1)\theta}\,\|v\|_{\exp L^{p}}^{(pk+m-1)(1- \theta)}\right]\,d\tau.\] Since \(u,v\in Y_{M}\), we obtain \[t^{\sigma}\|\Phi(u)(t)-\Phi(v)(t)\|_{L^{a}}\] \[\leq C\rho(u,v)\,\sum_{k=0}^{\infty}\frac{\lambda^{k}}{k!}\Gamma 
\left(\frac{\rho}{p}+1\right)^{\frac{(pk+m-1)(1-\theta)}{\rho}}\,M^{pk+m-1}\] \[\times t^{\sigma}\left(\int_{0}^{t}(t-\tau)^{-\frac{d}{2\beta}( \frac{1}{r}-\frac{1}{a})}\,\tau^{-\sigma(1+(pk+m-1)\theta)}\,d\tau\right)\] \[\leq C\rho(u,v)\,\sum_{k=0}^{\infty}\frac{\lambda^{k}}{k!}\Gamma \left(\frac{\rho}{p}+1\right)^{\frac{(pk+m-1)(1-\theta)}{\rho}}\,M^{pk+m-1}\] \[\times\mathcal{B}\left(1-\frac{d}{2\beta}\left(\frac{1}{r}-\frac {1}{a}\right),1-\sigma(1+(pk+m-1)\theta)\right),\] where the parameters \(a,q,r,\theta=\theta_{k},\rho=\rho_{k}\) are given in Lemma 2.4. For these parameters one see that \[\mathcal{B}\left(1-\frac{d}{2\beta}\left(\frac{1}{r}-\frac{1}{a}\right),1- \sigma(1+(pk+m-1)\theta)\right)\leq C\] and \[\Gamma\left(\frac{\rho_{k}}{p}+1\right)^{\frac{(pk+m-1)(1-\theta_{k})}{\rho_{ k}}}\leq C^{k}\,k!.\] Combining the above estimates we obtain (4.9). Hence we get for \(M\) small \[\rho(\Phi(u)-\Phi(v))\leq CM^{m-1}\,\rho(u,v)\leq\frac{1}{2}\,\rho(u,v).\] The above estimates show that \(\Phi:Y_{M}\to Y_{M}\) is a contraction mapping for \(\epsilon>0\) and \(M\) sufficiently small. By Banach's fixed point theorem, we thus obtain the existence of a unique \(u\) in \(Y_{M}\) with \(\Phi(u)=u\). By (4.4), \(u\) solves the integral equation (4.1) with \(f\) satisfying (4.2). The estimate (1.5) follows from \(u\in Y_{M}\). This completes the proof of the existence of a global solution to (4.1) for \(d>2\beta p/(p-1)\). ### The case \(d<\frac{2\beta p}{p-1}\) We shall first establish the following two inequalities: \[\left\|\int_{0}^{t}e^{-(t-\tau)H^{\beta}}f(u(\tau))\,d\tau\right\|_{L^{ \infty}(0,\infty:\exp L^{p})}\leq C_{1}(M). \tag{4.10}\] and \[\sup_{t>0}t^{\sigma}\left\|\int_{0}^{t}e^{-(t-\tau)H^{\beta}}(f(u)-f(v))\,d \tau\right\|_{L^{a}}\leq C_{2}(M)\,\sup_{\tau>0}(\tau^{\sigma}\|u(\tau)-v( \tau)\|_{L^{a}}), \tag{4.11}\] where \(u,\ v\in Y_{M}\) and \(C_{1}\) and \(C_{2}\) are small when \(M\) is small. To prove this estimates we first note that \[\left(\log((t-\tau)^{-\frac{d}{2\beta}}+1)\right)^{-\frac{1}{p}}\leq 2^{\frac{ 1}{p}}(t-\tau)^{\frac{d}{2\beta p}}\ \text{for}\ 0\leq\tau<t-\eta^{-\frac{2\beta}{d}}, \tag{4.12}\] where \(\eta=\inf\{z\geq 1:z>2\log(1+z)\}.\) Thus by Proposition 2.1 (3), we obtain for \(r>\frac{d}{2\beta}\) and \(0<t\leq\eta^{-\frac{2\beta}{d}}\), \[\left\|\int_{0}^{t}e^{-(t-\tau)H^{\beta}}f(u(\tau))\,d\tau\right\|_{\exp L^{p} }\leq C\int_{0}^{t}((t-\tau)^{-\frac{d}{2\beta r}}+1)\|f(u(\tau))\|_{L^{r} \cap L^{1}}\,d\tau\leq C\,\sup_{t>0}\|f(u(\tau))\|_{L^{r}\cap L^{1}}.\] For \(t>\eta^{-\frac{2\beta}{d}}\) and \(1\leq q\leq p\), we write \[\left\|\int_{0}^{t}e^{-(t-\tau)H^{\beta}}f(u(\tau))\,d\tau\right\| _{\exp L^{p}}\] \[\leq\int_{0}^{t-\eta^{-\frac{2\beta}{d}}}\|e^{-(t-\tau)H^{\beta}} f(u(\tau))\|_{\exp L^{p}}\,d\tau+\int_{t-\eta^{-\frac{2\beta}{d}}}^{t}\|e^{-(t- \tau)H^{\beta}}f(u(\tau))\|_{\exp L^{p}}\,d\tau\] \[\leq\int_{0}^{t-\eta^{-\frac{2\beta}{d}}}(t-\tau)^{-\frac{d}{2 \beta q}}(\log((t-s)^{-\frac{d}{2\beta}}+1))^{-\frac{1}{p}}\|f(u(\tau))\|_{L^{q }}\,d\tau\] \[+\int_{t-\eta^{-\frac{2\beta}{d}}}^{t}((t-\tau)^{-\frac{d}{2\beta r }}+1)\|f(u(\tau))\|_{L^{r}\cap L^{1}}\,d\tau\] \[\leq C\int_{0}^{t-\eta^{-\frac{2\beta}{d}}}(t-\tau)^{-\frac{d}{2 \beta q}+\frac{d}{2\beta p}}\|f(u(\tau))\|_{L^{q}}\,d\tau+C\sup_{t>0}\|f(u(\tau ))\|_{L^{r}\cap L^{1}}:=\mathbf{I}+\mathbf{J}.\] Similar to the analysis of [7, 24], we obtain for small \(M\) and \(u\in Y_{M},\), \(\mathbf{I}\leq CM^{m}\) and \(\mathbf{J}\leq CM^{m}\). 
Finally, we obtain \[\left\|\int_{0}^{t}e^{-(t-\tau)H^{\beta}}f(u(\tau))\,d\tau\right\|_{\exp L^{p} }\leq CM^{m}. \tag{4.13}\] To estimate (4.11), we again use Proposition 2.1. In fact we obtain \[t^{\sigma}\left\|\int_{0}^{t}e^{-(t-\tau)H^{\beta}}(f(u)-f(v)) \,d\tau\right\|_{L^{a}}\] \[\leq C\sum_{k=0}^{\infty}\frac{\lambda^{k}}{k!}t^{\sigma}\int_{0}^ {t}(t-\tau)^{-\frac{d}{2\beta}(\frac{1}{r}-\frac{1}{a})}\|(u-v)(|u|^{pk+m-1}+| v|^{pk+m-1})\|_{L^{r}}\,d\tau.\] Applying the Holder's inequality, we obtain \[t^{\sigma}\left\|\int_{0}^{t}e^{-(t-s)H^{\beta}}(f(u(\tau))-f(v( \tau)))\,d\tau\right\|_{L^{a}}\] \[\leq C\sum_{k=0^{\infty}}\frac{\lambda^{k}}{k!}t^{\sigma}\int_{0}^ {t}(t-\tau)^{-\frac{d}{2\beta}(\frac{1}{r}-\frac{1}{a})}\|(u-v)\|_{L^{a}}\|(|u |^{pk+m-1}+|v|^{pk+m-1})\|_{L^{q}}\,d\tau\] \[\leq C\sum_{k=0^{\infty}}\frac{\lambda^{k}}{k!}t^{\sigma}\int_{0}^ {t}(t-\tau)^{-\frac{d}{2\beta}(\frac{1}{r}-\frac{1}{a})}\|(u-v)\|_{L^{a}}[\|u \|_{L^{q(pk+m-1)}}^{pk+m-1}+\|v\|_{L^{q(pk+m-1)}}^{pk+m-1}]\,d\tau.\] Following the similar analysis as in [7, 24], we get for small \(M\), \[t^{\sigma}\left\|\int_{0}^{t}e^{-(t-\tau)H^{\beta}}(f(u)-f(v))\,d\tau\right\|_ {L^{a}}\leq CM^{m-1}\,d(u,v).\] This together with (4.13) and (4.6) concludes the proof of global existence for dimensions \(d<2\beta p/(p-1)\). ### The case \(d=\frac{2\beta p}{p-1}\) Let \(u,v\in Y_{M}\). By using (4.3) and Proposition 2.1, we obtain \[t^{\sigma}\|\Phi(u)(t)-\Phi(v)(t)\|_{L^{a}}\] \[\leq t^{\sigma}\int_{0}^{t}\left\|e^{-(t-\tau)H^{\beta}}(f(u)-f(v)) \right\|_{L^{a}}\,d\tau\] \[\leq t^{\sigma}\int_{0}^{t}(t-\tau)^{-\frac{d}{2\beta}(\frac{1}{r}- \frac{1}{a})}\|(f(u(\tau))-f(v(\tau)))\|_{L^{r}}\,d\tau\] \[\leq C\sum_{k=0}^{\infty}\frac{\lambda^{k}}{k!}t^{\sigma}\int_{0}^{t}(t -\tau)^{-\frac{d}{2\beta}(\frac{1}{r}-\frac{1}{a})}\|(u-v)(|u|^{pk+m-1})+|v|^{ pk+m-1}\|_{L^{r}}\,d\tau,\] where \(1\leq r\leq a.\) Applying Holder's inequality, we obtain \[t^{\sigma}\|\Phi(u)(t)-\Phi(v)(t)\|_{L^{a}}\] \[\leq C\sum_{k=0}^{\infty}\frac{\lambda^{k}}{k!}t^{\sigma}\int_{0}^ {t}(t-\tau)^{-\frac{d}{2\beta}(\frac{1}{r}-\frac{1}{a})}\|(u-v)\|_{L^{a}}\|(|u |^{pk+m-1})+|v|^{pk+m-1}\|_{L^{q}}\,d\tau\] \[\leq C\sum_{k=0}^{\infty}\frac{\lambda^{k}}{k!}t^{\sigma}\int_{0}^ {t}(t-\tau)^{-\frac{d}{2\beta}(\frac{1}{r}-\frac{1}{a})}\|(u-v)\|_{L^{a}}\|u \|_{L^{q(pk+m-1)}}^{pk+m-1})+\|v\|_{L^{q(pk+m-1)}}^{pk+m-1}\,d\tau.\] Similar calculation yields, \[t^{\sigma}\|\Phi(u)(t)-\Phi(v)(t)\|_{L^{a}}\leq CM^{m-1}\,\sup_{\tau\geq 0}( \tau^{\sigma}\|(u-v)\|_{L^{a}})=CM^{m-1}\,\rho(u,v). \tag{4.14}\] Here we crucially use the fact that \(a\) to satisfy \(\frac{d}{2\beta}(m-1)<a<\frac{d}{2\beta}(m-1)\frac{1}{(2-m)_{+}}.\) Now we estimate \(\|\Phi(u)(t)\|_{L^{\infty}(0,\infty;\exp L^{p})}\). By (4.5) and (2.10), \[\|\Phi(u)\|_{L^{\infty}(0,\infty;\exp L^{p})}\leq\|e^{-tH^{\beta}} u_{0}\|_{L^{\infty}(0,\infty;\exp L^{p})}+\left\|\int_{0}^{t}e^{-(t-\tau)H^{ \beta}}\left(f(u(\tau))\right)\,d\tau\right\|_{L^{\infty}(0,\infty;\exp L^{p})}\] \[\leq C\|u_{0}\|_{\exp L^{p}}+\left\|\int_{0}^{t}\,e^{-(t-\tau)H^{ \beta}}\left(f(u(\tau))\right)\,d\tau\right\|_{L^{\infty}(0,\infty;L^{\phi})}+ \left\|\int_{0}^{t}\,e^{-(t-\tau)H^{\beta}}\left(f(u(\tau))\right)\,d\tau \right\|_{L^{\infty}(0,\infty;L^{p})}.\] By using Corollary 2.2, we obtain \[\left\|\int_{0}^{t}\,e^{-(t-\tau)H^{\beta}}\left(f(u(\tau))\right)\,d\tau \right\|_{L^{\infty}(0,\infty;L^{\phi})}\leq CM^{m}. 
\tag{4.15}\] By using (4.3) and Proposition 2.1, we obtain \[\left\|\int_{0}^{t}\,e^{-(t-\tau)H^{\beta}}\left(f(u(\tau))\right)\,d\tau \right\|_{L^{p}}\leq C\int_{0}^{t}\|f(u(\tau))\|_{L^{p}}\,d\tau.\] Similar computations as before, we obtain \[\left\|\int_{0}^{t}\,e^{-(t-\tau)H^{\beta}}\left(f(u(\tau))\right)\,d\tau \right\|_{L^{\infty}(0,\infty;L^{p})}\leq CM^{m}. \tag{4.16}\] Hence we get \[\|\Phi(u)\|_{L^{\infty}(0,\infty;\exp L^{p})}\leq C\|u_{0}\|_{\exp L^{p}}+2CM^ {m}.\] Now by (4.6), we get \[t^{\sigma}\,\|\Phi(u)\|_{L^{p}}\leq\|u_{0}\|_{\exp L^{p}}+CM^{m}.\] If we choose \(M\) and \(\epsilon\) small then \(\Phi\) maps \(Y_{M}\) into itself. Moreover, thanks to the inequality (4.14) we obtain that \(\Phi\) is a contraction map on \(Y_{M}\). The conclusion follows by the Banach fixed point theorem. ### Proof of Theorem 1.2 Let \(q\geq\max(\frac{d}{2\beta},p)\). By Proposition 2.1, we write \[\|u(t)-e^{-tH^{\beta}}u_{0}\|_{\exp L^{p}}\leq\int_{0}^{t}\left\|e ^{-(t-\tau)H^{\beta}}\left(f(u(\tau))\right)\right\|_{\exp L^{p}}\,d\tau\] \[\leq C\int_{0}^{t}\left\|\left(f(u(\tau))\right)\right\|_{L^{p}} \,d\tau+C\int_{0}^{t}(t-\tau)^{-\frac{d}{2\beta_{q}}}\left\|(f(u(\tau))) \right\|_{L^{q}}\,d\tau.\] Now one can easily get that for \(r=p,q\) \[\left\|(f(u(\tau)))\right\|_{L^{r}}\leq C\|u\|_{\exp L^{p}}^{m}.\] Finally, \[\|u(t)-e^{-tH^{\beta}}u_{0}\|_{\exp L^{p}} \leqslant C\int_{0}^{t}\left(C\|u(\tau)\|_{\exp L^{p}}^{m}+(t-\tau)^ {-\frac{d}{2\beta q}}\,C\|u(\tau)\|_{\exp L^{p}}^{m}\right)d\tau\] \[\leqslant Ct\|u\|_{L^{\infty}(0,\infty:\exp L^{p})}^{m}+Ct^{1- \frac{d}{2\beta q}}\|u\|_{L^{\infty}(0,\infty:\exp L^{p})}^{m}\] \[\leqslant C_{1}t+C_{2}t^{1-\frac{d}{2\beta q}},\] where \(C_{1},\ C_{2}>0\) are constants. Then \(\lim_{t\to 0}\|u(t)-e^{-tH^{\beta}}u_{0}\|_{\exp L^{p}}=0.\) Note that \(u(t)\to u_{0}\) as \(t\to 0\) in the weak\({}^{\star}\) topology follows from the analysis in [18]. This completes the proof of the theorem. ## 5. Proof of Theorem 1.3 We first construct an initial data which has some diverging integrability. **Lemma 5.1**.: _Let \(\alpha>0\) and \(p>1\)._ \[u_{0}(x):=\begin{cases}\alpha(\log\frac{1}{|x|})^{\frac{1}{p}},&|x|<1\\ 0,&|x|\geqslant 1.\end{cases} \tag{5.1}\] _Then, for every \(\lambda>0,\) there exists some \(\tilde{\alpha}>0\) such that_ \[\int_{0}^{\epsilon}\int_{B_{r}(0)}\exp(\lambda(e^{-tH}u_{0})^{p})dx\,dt=\infty,\] _for every \(\alpha>\tilde{\alpha},\ \epsilon>0,\) and \(r>0,\) where \(B_{r}(0)\subset\mathbb{R}^{d}\) is the ball centered at the origin with radius \(r>0.\)_ Proof.: Let us recall that the Weyl symbol of the Hermite semigroup \(e^{-tH}\) is given by \[e^{-tH}f(x)=C_{d}(\sinh(2t))^{-\frac{d}{2}}\,e^{-\frac{\tanh t}{2}|x|^{2}} \left(e^{-\frac{1}{2\sinh 2t}|\cdot|^{2}}*g\right)(x) \tag{5.2}\] where \(g(x)=e^{-\frac{\tanh t}{2}|\cdot|^{2}}\,f(\cdot),\) see [2, Eq. (3.3)]. Fix \(1>\epsilon>0,r>0\). Let \(\rho=\min\{r,\frac{1}{4}\}\). 
Then \(B_{|x|}(3x)\subset B_{1}(0)\) for every \(|x|<\rho.\) Then for any \(|x|<\rho,\) we have \[e^{-tH}u_{0}(x) =C_{d}(\sinh(2t))^{-\frac{d}{2}}\,e^{-\frac{\tanh t}{2}|x|^{2}} \int_{|y|<1}\left(e^{-\frac{1}{2\sinh 2t}|x-y|^{2}}\,e^{-\frac{\tanh t}{2}| \cdot|^{2}}u_{0}(y)\,\right)dy\] \[\geqslant C_{d}\,\alpha\,(\sinh(2t))^{-\frac{d}{2}}\,e^{-\frac{ \tanh t}{2}|x|^{2}}\int_{B_{|x|}(3x)}\left(e^{-\frac{1}{2\sinh 2t}|x-y|^{2}}\,e^{- \frac{\tanh t}{2}|y|^{2}}(-\log|y|)^{\frac{1}{p}}\right)dy.\] Since \(y\in B_{|x|}(3x)\), we have \(2|x|\leqslant|y|\leqslant 4|x|\) and \(|x|\leqslant|x-y|\leqslant 3|x|\) and thus \[e^{-tH}u_{0}(x) \geqslant C_{d}\,\alpha\,(\sinh(2t))^{-\frac{d}{2}}\,e^{-\frac{\tanh t }{2}|x|^{2}}\int_{B_{|x|}(3x)}\,\left(e^{-\frac{1}{2\sinh 2t}|x-y|^{2}}\,e^{-\frac{\tanh t }{2}|y|^{2}}(-\log|y|)^{\frac{1}{p}}\right)dy\] \[\geqslant C\,C_{d}\,\alpha\,\left(\frac{|x|^{2}}{\sinh(2t)} \right)^{\frac{d}{2}}\,e^{-\frac{\tanh t}{2}|x|^{2}}\,e^{-\frac{9}{2\sinh 2t}|x|^{2}}\,e^{-(8\tanh t)|x|^{2}}(-\log(4|x|))^{ \frac{1}{p}}\] \[\geqslant C\,\alpha\,\left(\frac{|x|^{2}}{t}\right)^{\frac{d}{2} }\,e^{-\frac{9}{4t}|x|^{2}}\,(-\log(4|x|))^{\frac{1}{p}}, \tag{5.3}\] for some \(C>0.\) Let \(\tilde{\epsilon}=\min\{\epsilon,\rho^{2}\}.\) Then for any \(0<t<\tilde{\epsilon},\) we have \(B_{\sqrt{t}}(0)\subset B_{\rho}(0).\) Hence, \[\int_{0}^{\epsilon}\int_{|x|<r}\exp(\lambda(e^{-tH}u_{0})^{p})\, dxdt \geqslant\int_{0}^{\tilde{\epsilon}}\int_{\frac{\sqrt{t}}{2}<|x|< \sqrt{t}}\exp(\lambda(e^{-tH}u_{0})^{p})dxdt\] \[\geqslant\int_{0}^{\tilde{\epsilon}}\int_{\frac{\sqrt{t}}{2}<|x|< \sqrt{t}}\exp(-\lambda C\alpha^{p}\log(4|x|))\,dxdt\] \[\geqslant C_{\alpha,\lambda}\int_{0}^{\tilde{\epsilon}}t^{\frac{ d}{2}-\frac{\lambda C\alpha^{p}}{2}}dt=\infty, \tag{5.4}\] for \(\alpha\geqslant\alpha_{0}:=\left(\frac{d+2}{C\lambda}\right)^{\frac{1}{p}}.\) This completes the proof of the Lemma. Proof of Theorem 1.3.: We first observe that \(u_{0}\) in (5.1) belongs to \(\exp L^{p}(\mathbb{R}^{d})\) for every \(\alpha>0.\) We shall prove the Theorem by contradiction method. So by contradiction we assume that there exists \(T>0\) and a nonnegative classical solution \(u\in C([0,T];\exp L^{p}(\mathbb{R}^{d}))\) to (1.1). For any \(t>0,\ \tau>0,\ t+\tau<T\), we have \[u(t+\tau)=e^{-(t+\tau)H}u_{0}+\int_{0}^{t+\tau}e^{-(t+\tau-s)H}\,f(u(s))\,ds \geqslant e^{-tH}u(\tau),\] since \(u\in\exp L^{p}(\mathbb{R}^{d})\) is a nonnegative classical solution to (1.1). Next we shall show that \(u(t)\geqslant e^{-tH}u_{0}\geqslant 0.\) To prove that we first see that as \(\tau\to 0\), we have \(u(t+\tau)\to u(t)\). Now, since \[e^{-\frac{|x-|^{2}}{2\sinh 2t}}\in L^{1}(\mathbb{R}^{d})\cap L^{\infty}(\mathbb{R} ^{d})\subset L^{1}(\log L)^{\frac{1}{2}}(\mathbb{R}^{d})\] for every \(x\in\mathbb{R}^{d}\) and \(u(s)\) converges in weak\({}^{\star}\)-topology to \(u_{0}\), we obtain that \[e^{-tH}u(s,x)=C_{d}(\sinh(2t))^{-\frac{d}{2}}\,e^{-\frac{\tanh t}{2}|x|^{2}} \int_{\mathbb{R}^{d}}\left(e^{-\frac{1}{2\sinh 2t}|x-y|^{2}}\,e^{-\frac{\tanh t}{2} |\cdot|^{2}}u(s,y)\,\right)dy\] converges to \[C_{d}(\sinh(2t))^{-\frac{d}{2}}\,e^{-\frac{\tanh t}{2}|x|^{2}}\int_{\mathbb{R }^{d}}\left(e^{-\frac{1}{2\sinh 2t}|x-y|^{2}}\,e^{-\frac{\tanh t}{2}|\cdot|^{2}}u_{0}(y) \,\right)dy\] as \(s\to 0.\) Since the initial data \(u_{0}\) is nonnegative, we obtain \[u(t)\geq e^{-tH}u_{0}\geq 0. 
\tag{5.5}\] Let us choose \(\phi\in C^{\infty}_{c}(\mathbb{R}^{d}),\ \phi\geq 0\) on \(\mathbb{R}^{d}\) and \(\phi\geq 1\) on \(B_{r}(0).\) Since \(u\) is a nonnegative classical solution to (1.1), we obtain \[\frac{d}{dt}\int_{\mathbb{R}^{d}}u\phi dx+\int_{\mathbb{R}^{d}}u(-H\phi)\,dx=\int_{\mathbb{R}^{d}}f(u)\,\phi\,dx\geq\int_{B_{r}(0)}f(u)\,dx.\] Therefore, integrating over \(\tau\in[\sigma,T^{\prime}],\ 0<\sigma<T^{\prime}<T\), we obtain \[\int_{\mathbb{R}^{d}}u(T^{\prime})\phi dx-\int_{\mathbb{R}^{d}}u(\sigma)\phi dx+\int_{\sigma}^{T^{\prime}}\int_{\mathbb{R}^{d}}u(-H\phi)\,dxd\tau\geq\int_{\sigma}^{T^{\prime}}\int_{B_{r}(0)}f(u(\tau))\,dxd\tau.\] Since \(u\in L^{\infty}(0,T^{\prime};\exp L^{2}(\mathbb{R}^{d}))\) and \(u(t)\to u_{0}\) in the weak\({}^{\star}\) topology, by letting \(\sigma\to 0,\) we obtain \[\int_{\mathbb{R}^{d}}u(T^{\prime})\phi dx-\int_{\mathbb{R}^{d}}u_{0}\phi dx+\int_{0}^{T^{\prime}}\int_{\mathbb{R}^{d}}u(-H\phi)\,dxd\tau\geq\int_{0}^{T^{\prime}}\int_{B_{r}(0)}f(u(\tau))\,dxd\tau.\] Hence \[\int_{0}^{T^{\prime}}\int_{B_{r}(0)}f(u(\tau))\,dxd\tau<\infty. \tag{5.6}\] This gives a contradiction. In fact, by our assumption there are positive constants \(C>0\) and \(\eta_{0}>0\) such that \[f(\eta)\geq C\,e^{\lambda\eta^{p}} \tag{5.7}\] for every \(\eta\geq\eta_{0}.\) Now take \(\rho<r\) and \(\tilde{\epsilon}<T^{\prime}\) as in the proof of the above Lemma. Then by (5.3), we obtain \[e^{-tH}u_{0}(x)\geq C\left(\log\frac{1}{4\sqrt{t}}\right)^{\frac{1}{p}}\geq\eta_{0} \tag{5.8}\] if \(t>0\) is small enough and \(x\in B_{\sqrt{t}}(0)\backslash B_{\sqrt{t}/2}(0)\subset B_{\rho}(0).\) Finally, by (5.5), (5.7) and (5.8), we obtain that \[\int_{0}^{T^{\prime}}\int_{B_{r}(0)}f(u(\tau))\,dxd\tau\geq C\,\int_{0}^{\tilde{\epsilon}}\int_{\sqrt{t}/2\leq|x|\leq\sqrt{t}}\exp\left(\lambda(e^{-tH}u_{0})^{p}\right)\,dxd\tau,\] which, in view of (5.4), contradicts (5.6) when \(\alpha>0\) is sufficiently large. This proves Theorem 1.3. ## Acknowledgement The third author is thankful for the research grant (DST/INSPIRE/04/2019/001914). ## Funding Funding information is not applicable / No funding was received. **Declarations.** On behalf of all authors, the corresponding author states that there is no conflict of interest. No data-sets were generated or analyzed during the current study.
2308.03042
Achievable Information Rate Analysis in Diffusive Channels with Memory and Markov Source
This paper explores the Achievable Information Rate (AIR) of a diffusive Molecular Communication (MC) channel featuring a fully absorbing receiver that counts the absorbed particles during symbol time intervals (STIs) and resets the counter at the start of each interval. The MC channel, influenced by a memory effect, experiences inter-symbol interference (ISI) arising from the molecules' delayed arrival. The channel's memory is quantified as an integer multiple of the STI and a single-sample memoryless detector is employed to mitigate complexity in computing the mutual information (MI). To maximize MI, the detector threshold is optimized under a Gaussian approximation of its input. The channel's MI is calculated, considering the influence of ISI, in the context of binary concentration shift keying modulation. Two distinct scenarios were considered: independent and correlated source-generated symbols, the latter modeled as a first-order Markov process. For each communication scenario, two degrees of knowledge, ISI-Aware and ISI-Unaware, were considered. Remarkably, it is demonstrated that employing a correlated source enables the attainment of higher capacity. The results indicate that the capacity-achieving input distribution is not necessarily uniform. Notably, when the STI is small, corresponding to the case of strong ISI, the maximum AIR is not achieved through equiprobable symbol transmission.
Fardad Vakilipoor, Luca Barletta, Stefano Bregni, Maurizio Magarini
2023-08-06T07:52:58Z
http://arxiv.org/abs/2308.03042v1
# Achievable Information Rate Analysis in Diffusive Channels with Memory and Markov Source

###### Abstract

This paper explores the Achievable Information Rate (AIR) of a diffusive Molecular Communication (MC) channel featuring a fully absorbing receiver that counts the absorbed particles during symbol time intervals (STIs) and resets the counter at the start of each interval. The MC channel, influenced by a memory effect, experiences inter-symbol interference (ISI) arising from the molecules' delayed arrival. The channel's memory is quantified as an integer multiple of the STI and a single-sample memoryless detector is employed to mitigate complexity in computing the mutual information (MI). To maximize MI, the detector threshold is optimized under a Gaussian approximation of its input. The channel's MI is calculated, considering the influence of ISI, in the context of binary concentration shift keying modulation. Two distinct scenarios were considered: independent and correlated source-generated symbols, the latter modeled as a first-order Markov process. For each communication scenario, two degrees of knowledge, ISI-Aware and ISI-Unaware, were considered. Remarkably, it is demonstrated that employing a correlated source enables the attainment of higher capacity. The results indicate that the capacity-achieving input distribution is not necessarily uniform. Notably, when the STI is small, corresponding to the case of strong ISI, the maximum AIR is not achieved through equiprobable symbol transmission.

Diffusion, molecular communication, channel capacity, channel memory, achievable information rate.

## I Introduction

Molecular Communication (MC) is an interdisciplinary communication paradigm that relies on particle propagation as a means of information transmission. MC has natural and artificial forms. Natural MC, which has evolved over millions of years, has great potential for investigating information exchange in biological systems. On the other hand, artificial MC is a human-made field that studies communication systems based on the principles of natural MC. One of the advantages of MC is its potential for use in environments where electromagnetic communication is not possible or desirable, such as in targeted drug delivery, nanomedicine, and implantable devices, where electromagnetic radiation can be harmful or cause interference [1, 2]. Various aspects of MC systems have been studied, including active vs. passive receivers, instantaneous vs. continuous release of molecules, and for different boundary conditions of the physical channel [3]. In order to better understand this novel communication paradigm, an analysis from the information-theoretic perspective gives new insights into MC and can even improve system performance in its artificial counterparts.

### _Related Literature_

Channel capacity serves as a fundamental metric to quantify the ability of a communication system to transmit information reliably from a sender to a receiver, as established by Shannon in his seminal work [4]. When it comes to MC, analyzing channel capacity becomes a necessary undertaking due to various factors, such as inter-symbol interference (ISI) caused by memory effects, energy constraints, slow propagation, and distinctive statistical characteristics [5]. One approach to investigate the capacity limits of molecular communication channels is to encode information through the timing of particle releases, as explored extensively by Rose _et al._[6].
First, they illustrated that any MC channel can be contextualized either from the timing perspective or the type of particles. Then, they mainly focused on obtaining upper and lower bounds on the capacity of MC timing channels. Lastly, they applied their theory obtained from the particle counting process to DNA and protein sequences. In another study [7], an MC timing channel is introduced, where particles decay after a finite time, and upper and lower bounds on the associated capacity are derived. Another approach in MC involves encoding information based on the number of particles released at the transmitter and decoding based on the number of received particles during the symbol time interval (STI) [8]. Early investigations into MC channel capacity in the presence of particle intensity considered receivers that do not interact with information particles (IP), which are referred to as transparent receivers [9]. However, in practical scenarios, receivers commonly engage in a reaction process, binding with the IP through natural processes. The concentration-based channel, coupled with a ligand receptor-equipped receiver, has been the subject of investigation in previous works, such as those by Einolghozati _et al._[10] and Tahmasbi _et al._[11]. These studies employ a Markov chain model to capture the reception of molecules by ligand receptors and analyze the channel capacity in terms of bits per channel use. Furthermore, the findings from these studies have been extended to multiple access channels in [12]. The binding process can be linked to the concentration of interacting particles and, equivalently, to the number of particles absorbed by the receivers. Consequently, recent efforts have been made to evaluate the capacity and establish bounds for diffusive MC channels with fully absorbing (FA) receivers. In work by Ghavami _et al._[13], the capacity of a 1D diffusive channel with an advection term was examined. Initially, they ignored the consideration of ISI. Subsequently, a memory length equivalent to two symbol intervals was introduced, thereby accommodating the effects of ISI. The authors proceeded to illustrate and examine the metrics of capacity per channel use and capacity per unit of time. In one study [14], upper and lower bounds on the channel capacity were derived, assuming Poisson and Gaussian models to characterize the statistical properties of the received signal. In another study [15], the received signal was modeled as a Poisson random variable (RV), and bounds were determined for the constrained capacity of a diffusive MC system employing concentration shift keying (CSK) as the modulation scheme. The lower bound was derived from the mutual information (MI), calculated as the difference between the marginal entropy of the output and the conditional entropy of the output given the input. On the other hand, the upper bound was derived from the dual expression for channel capacity. In a different investigation [16], the channel capacity was evaluated for various reaction rates of the absorbing receiver, assuming a uniform bit probability distribution, although an optimal input distribution for bit transmission would be expected in capacity analysis. Moreover, this work considered the threshold of the memoryless detector as a predefined constant, which may not ensure optimal detection and accurate computation of the maximum MI. 
### _Motivation and Contribution_

In this paper, we consider an FA receiver that, under the assumption of perfect symbol synchronization, counts the number of particles absorbed along each STI and resets the counter at the beginning of the next interval [13]. The MC channel introduces a memory effect, due to the delayed arrival of molecules, and thus ISI. Hence, we are dealing with a channel with an additive memory property. The term additive indicates that the particles' delayed arrival can result in the incremental accumulation of the absorbed particles. From the statistical perspective, the received signal in each STI follows a multinomial distribution. However, given the settings and channel characterizations, we demonstrate that it can be approximated as Gaussian. The reset counting mechanism that we have considered at the receiver side of the channel is a concept not confined solely to this context. In the realm of electronics, analogous functionalities are realized through sample-and-hold circuits. Brain neurotransmitters perform a similar task: through reuptake mechanisms, transporter proteins are responsible for removing neurotransmitters from the synaptic cleft, resetting their concentration and terminating their signaling effects [17]. Observing such mechanisms in nature and in electronic circuits motivated us to consider this receiving mechanism and to investigate the Achievable Information Rate (AIR) when the channel impulse response (CIR) varies with the transmission rate. To mitigate the computational complexity associated with considering all possible combinations of previously transmitted symbols in the calculation of MI, we estimated the channel memory length in terms of integer multiples of STIs. For our analysis, we employed a single-symbol memoryless detector, although it is worth noting that a multi-symbol receiver may yield better performance due to the strong memory effect present in the diffusive channel. Hence, we refer to our capacity calculation as the memoryless capacity. Previous works mentioned in this paper assumed a fixed threshold to detect the received signal. In contrast, we optimize the threshold through a brute-force algorithm to find the memoryless capacity. Investigating the capacity without a fixed, predefined threshold allows us to better understand the characteristics of channels with memory with respect to the input distribution. We believe that, under the optimum threshold, we can compare different scenarios in terms of bit rate per unit of time. This paper explores the MI under four distinct scenarios. First, we consider a correlated source paired with a receiver that is aware of the previously transmitted symbols (ISI-Aware). Then, we assess the same source type in the absence of any knowledge of the past transmitted symbols (ISI-Unaware). Finally, we move to an independent source and evaluate the MI in the same two configurations (_i.e.,_ ISI-Aware and ISI-Unaware). The correlated source is modeled as a first-order Markov source with time-invariant transition probabilities. The capacity and associated input distributions are determined for each source type. Our findings demonstrate that, compared to independent sources, correlated sources can achieve higher capacity. This is primarily attributed to the degree of freedom that correlation offers in avoiding the consecutive transmission of identical symbols.
This strategy proves particularly advantageous when the STI is short (_i.e.,_ high symbol transmission rate), resulting in increased ISI. Notably, we also establish that as the STI increases (_i.e.,_ reduced ISI), the same capacity can be achieved regardless of the source type or knowledge of previously transmitted symbols. Additionally, our research reveals a perhaps counter-intuitive observation: in scenarios involving fast symbol transmission rates, avoiding the transmission of "\(1\)"s (_i.e.,_ not releasing IPs) is not the sole optimal solution. Conversely, we demonstrate that adjusting the input distribution to allow a higher number of "\(1\)"s (_i.e.,_ releasing IPs) to be transmitted can also be a viable strategy, offering a compromise in terms of AIR. Ultimately, we believe that the superior performance of correlated sources in terms of AIRs, shown in this work, gives an insight into how to design codes for the molecular diffusive channel and, more generally, for additive Gaussian channels whose variance depends on the transmitted symbols. We would like to point out that our approach and methodology are not confined to MC studies: they are applicable to any channel with an additive memory property under Gaussian statistics, whether the source is independent or correlated.

### _Outline_

The paper is structured as follows: Sec. II introduces the system model, including the calculation of memory, as well as the CIR. Furthermore, the suitability of the Gaussian approximation for channel modeling is discussed, followed by an examination of the transition probabilities of the channel. In Sec. III, we provide a detailed explanation and formulation of the memoryless channel capacity, AIR, and MI for both the independent and correlated sources. For each source type, two MIs are derived, corresponding to the ISI-Aware and ISI-Unaware scenarios. Section IV presents numerical results illustrating the capacity and AIR for the four distinct scenarios, considering various STIs and input probabilities. Finally, Sec. V concludes the paper by offering final remarks.

### _Notations_

The RVs are represented by uppercase italic letters (\(X\)), while their realizations are denoted by lowercase italic letters (\(x\)). The vector \((x_{r},\ldots,x_{v})\) is expressed as \(x_{r}^{v}\). Specifically, the presence of a superscript indicates that the variable is a vector, while the subscript indicates the index of the first element, and the superscript indicates the index of the last element in the vector. If there is only a subscript, it denotes a single variable with the corresponding index. Additionally, the joint probability of the vector \((x_{r},\ldots,x_{v-1})\) and \(x_{v}\) can be written as \(P_{X_{r}^{v-1},X_{v}}(x_{r}^{v-1},x_{v})\!=\!P_{X_{r}^{v}}(x_{r}^{v})\). The Hamming weight operator applied to a binary vector \(x_{r}^{r+n}\in\{0,1\}^{n+1}\) is denoted as \(w_{H}(x_{r}^{r+n})\), which counts the number of occurrences of "1" in vector \(x_{r}^{r+n}\). The operator \(\{f\}^{+}\) is defined as \(\max\{0,f\}\). The \(Q\) function and complementary error function are defined as \[Q(z)=\frac{1}{2}\mathrm{erfc}\left(\frac{z}{\sqrt{2}}\right)=\frac{1}{\sqrt{2\pi}}\int_{z}^{\infty}e^{-\frac{y^{2}}{2}}dy,\quad z\in\mathbb{R}. \tag{1}\] The binary entropy function \(H_{2}:\,[0,1]\rightarrow[0,1]\) is defined as \[H_{2}(x)\triangleq-x\log_{2}(x)-(1-x)\log_{2}(1-x).
\tag{2}\]

## II System Model and Analysis

In this section, we undertake a comprehensive characterization of the transmitter, receiver, and the propagation dynamics of the IPs. Subsequently, we proceed to quantify the memory of the system, representing it as an integer multiple of the STI. Following the characterization of the CIR, we proceed to model the received signal and approximate its statistical behavior by employing a Gaussian distribution.

### _Propagation Aspects_

This work considers a communication system made of a point transmitter, a diffusion-based channel, and an FA spherical receiver. At the beginning of each STI of duration \(T_{\mathrm{sym}}\) in which a "\(1\)" is sent, the transmitter sends a pulse corresponding to the instantaneous release of \(N_{\mathrm{T}}\) IPs. The receiver counts the number of particles absorbed within each STI and resets the counter at the beginning of the next interval. We believe that this mechanism is not far from reality [18]. The IPs diffuse through the medium between transmitter and receiver with constant diffusion coefficient \(D\left[\mu\mathrm{m}^{2}/\mathrm{s}\right]\). In practice, the value of \(D\) depends on the temperature, the viscosity of the fluid, and the Stokes' radius of the molecule [19]. The receiver's absorption property stems from the reaction between the receiver and the IPs. In effect, the counting process is tantamount to measuring the concentration of desired particles at the receiver, resulting from the interaction between its surface and the particles. In a biological environment, enzymes can be secreted by the receiver to eliminate effects resulting from past reactions, thus enabling resetting [20]. The propagation of diffusive particles is governed by Fick's second law [21], which relates the time derivative of the concentration of molecules \(c\left(d,t\right)\), at a given distance \(d\) and time \(t\), to its Laplacian, as \[\frac{\partial c\left(d,t\right)}{\partial t}=D\nabla^{2}c\left(d,t\right). \tag{3}\] The initial and boundary conditions of (3) vary depending on the MC system characteristics. The authors in [22] specified the boundary and initial conditions for an impulsive release of molecules, an unbounded environment, and an FA spherical receiver. They obtained the expression for the hitting rate of molecules onto the receiver surface, as a function of the distance \(d\) between the transmitter and the center of the receiver with radius \(R\) at time \(t\). Then, assuming the independent random movement of the particles and the homogeneity of the medium, they derived the expected cumulative number of absorbed particles as \[N(t)=\frac{N_{\mathrm{T}}R}{d}\mathrm{erfc}\left(\frac{d-R}{2\sqrt{Dt}}\right). \tag{4}\] In our study, we consider a binary concentration shift keying (BCSK) modulation, where an IP release corresponds to "\(1\)" and no release corresponds to "\(0\)". At the receiver, the number of absorbed particles is counted and reset at the beginning of the next interval. At the end of each STI, the receiver returns a single sample, representing the total number of particles absorbed during that interval. Assuming that the receiver resets the counter right at the beginning of the STIs (_i.e._ perfect synchronization between transmission and reset intervals at the receiver), we expect that the receiver observation changes by varying the duration of the STI \(T_{\mathrm{sym}}\). To compute the MI, we need to calculate the probability that particles hit the receiver.
Since the total number of released particles is \(N_{\mathrm{T}}\), if the counter has not been reset between the initial time of release and time \(t\), the probability that a particle hits the receiver by time \(t\) is \(N(t)/N_{\mathrm{T}}\). If the counter is reset, instead, the probability that a particle released at \(t\!=\!0\) hits the receiver within the \(i\)th STI is \[h_{i}=\frac{N(iT_{\mathrm{sym}})-N((i-1)T_{\mathrm{sym}})}{N_{\mathrm{T}}}\, \tag{5}\] because a particle that has been absorbed at any time \(t\!<\!(i-1)T_{\mathrm{sym}}\) does not have a second chance to hit the receiver.

### _Memory Duration Characterization_

When studying slow diffusive communication, it is important to quantify the effect of channel memory. To compute the MI and the transition probabilities between input and detected output, we need to account for all possible combinations of the preceding symbol sequence. If the channel memory spans \(M\) symbols, there are \(2^{M}\) different possible sequences to consider. The memory length depends on the transmission rate of symbols. In our model, it should be as small as possible, because evaluating \(2^{M}\) combinations can make the computation impractical. Moreover, due to the differential nature of (5) and the asymptotic convergence of (4), the probability of a particle being absorbed a long time after release eventually tends to \(0\). To obtain an estimate \(M\) of the effective memory length in terms of STIs, neither unnecessarily long nor so short as to miss the effect of the released particles, we define \[M=\left\lceil\frac{T_{\alpha}}{T_{\rm sym}}\right\rceil\, \tag{6}\] where \(T_{\alpha}\) is the time required to reach some negligible hitting probability \(\alpha\), as given by \[\frac{R}{d}\left(\mathrm{erfc}\left(\frac{d-R}{2\sqrt{D(T_{\alpha}+T_{\rm sym})}}\right)-\mathrm{erfc}\left(\frac{d-R}{2\sqrt{DT_{\alpha}}}\right)\right)=\alpha. \tag{7}\] Note that (7) is a transcendental equation with unknown \(T_{\alpha}\). We do not know an explicit solution for such an equation. Hence, we solve it numerically by the _regula-falsi_ method [23]. For example, Fig. 1 plots the expected cumulative number of absorbed particles over time without resetting the counter (blue curve), computed by (4) for a diffusive MC system modeled as above, with parameters set as in Tab. I, and with STI \(T_{\rm sym}\!=\!2\,\mathrm{s}\). Here, the resulting memory length is \(M\!=\!4\). The expected number of absorbed particles within each STI, resetting the counter at its beginning, is also highlighted as the difference of values at interval boundaries. On the other hand, Fig. 2 plots the distribution (5) of the probability \(h_{i}\) that particles are absorbed by a resetting receiver within the \(i\)th interval for different values of \(T_{\rm sym}\). We observe that the memory length resulting from (7) increases with \(T_{\rm sym}\) when measured in time units (\(T_{\alpha}\)), but decreases in terms of STIs (\(M\)). The vector \(h_{1}^{M}\!=\!(h_{1},\ldots,h_{M})\) represents the CIR of the system.

### _Gaussian Approximation_

The received signal at the \(i\)th STI \(\mathrm{R}_{i}\) consists of the number of particles released for the \(i\)th transmitted symbol \(\mathrm{C}_{i}\) as well as of those released for previous symbols and absorbed within the current interval \(\mathrm{P}_{i}\). We also consider an external environment noise \(\mathrm{E}\), due to random factors that increase or reduce the number of particles that the receiver counts in any interval.
For example, a negative value of \(\mathrm{E}\) expresses the effect of extraneous molecules that unbind absorbed IPs. Obviously, \(\mathrm{E}\) is independent of \(\mathrm{C}_{i}\) and \(\mathrm{P}_{i}\). In conclusion, the observation at the \(i\)th interval is the superposition of the current signal, of previously transmitted symbols, and of external noise, that is, \[\mathrm{R}_{i}=\mathrm{C}_{i}+\mathrm{P}_{i}+\mathrm{E}. \tag{8}\] Next, we want to show that we can use a Gaussian model to describe the randomness in the number of absorbed particles in the STIs. Due to the nature of the absorption phenomenon and the temporal correlation of the absorption in different STIs, the number of particles absorbed in each STI follows the multinomial distribution. Consider each STI as a bin. Hence, we have \(M\) bins that correspond to the STIs in which a particle can reach the receiver. There is also an extra bin that represents the scenario of a particle that has not been absorbed. So in the end, for our statistical model, there are \(M+1\) bins. We can write the probability of a particle falling into the \(i\)th of the first \(M\) bins as \(h_{i}\) and the probability corresponding to the last extra bin as \(h_{M+1}\!=\!1\!-\!\sum_{i=1}^{M}h_{i}\). Let \(N_{1}^{M+1}\!=\!(N_{1},\ldots,N_{M+1})\) be the number of particles that fell into each bin. Then \(N_{1}^{M+1}\) is multinomial-distributed over \(N_{\mathrm{T}}\) trials and bin probabilities \(h_{1}^{M+1}\). We can compute the entries of the covariance matrix of \(N_{1}^{M+1}\) as follows \[\mathrm{Var}(N_{i})=N_{\mathrm{T}}h_{i}(1-h_{i})\,\qquad i\in\{1,\ldots,M+1\}\, \tag{9}\] \[\mathrm{Cov}(N_{i},N_{j})=-N_{\mathrm{T}}h_{i}h_{j}\,\qquad(i\neq j). \tag{10}\] If \(h_{i}\) and \(h_{j}\) are much smaller than \(1\), then we have \[\frac{\mathrm{Var}(N_{i})}{|\mathrm{Cov}(N_{i},N_{j})|}=\frac{1-h_{i}}{h_{j}}\gg 1. \tag{11}\] By the Central Limit Theorem (CLT), for \(N_{\mathrm{T}}\) sufficiently large, the random vector \(N_{1}^{M+1}\) is approximately Gaussian distributed. However, our focus is on characterizing the joint distribution of the first \(M\) bins: so the vector \(N_{1}^{M}\) is approximately Gaussian distributed with mean vector \(N_{\mathrm{T}}h_{1}^{M}\) and a covariance matrix that is approximately diagonal thanks to (11). This is equivalent to having virtually independent and Gaussian-distributed marginals as follows \[N_{i}\sim\mathcal{N}\Big{(}N_{\mathrm{T}}h_{i},N_{\mathrm{T}}h_{i}(1-h_{i})\Big{)},\qquad i\in\{1,\ldots,M\}. \tag{12}\] In practice, for this approximation to be valid, the probability that the Gaussian distribution generates negative values should be negligible. That is, the model parameters should be chosen to have the mean \(\mu\) and standard deviation \(\sigma\) of the Gaussian distribution satisfying, _e.g._, \(\mu>3\sigma\), which implies \[\frac{N_{\mathrm{T}}h_{i}}{1-h_{i}}>9. \tag{13}\]

Fig. 1: Expected cumulative number of absorbed particles over time without resetting (blue curve) (system parameters as in Tab. I, \(T_{\rm sym}\!=\!2\,\mathrm{s}\), \(M\!=\!4\)). The expected number of absorbed particles within each interval, when the counter is reset, is highlighted by vertical double arrows.
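To make the quantities above concrete, the following minimal Python sketch (ours, not code from the paper) evaluates the expected cumulative absorption \(N(t)\) of (4), the CIR samples \(h_{i}\) of (5), and the memory length \(M\) of (6), solving the transcendental equation (7) by bisection rather than the regula-falsi method used by the authors. All names, the search interval, and the example usage are our own assumptions.

```python
import math

# System parameters (Tab. I of the paper)
N_T, R, d, D = 1e4, 1.0, 10.0, 79.4   # molecules, um, um, um^2/s
alpha = 0.001                          # minimum acceptable probability

def N(t):
    """Expected cumulative number of absorbed particles, Eq. (4)."""
    if t <= 0:
        return 0.0
    return N_T * (R / d) * math.erfc((d - R) / (2.0 * math.sqrt(D * t)))

def h(i, T_sym):
    """Probability that a particle is absorbed within the i-th STI, Eq. (5)."""
    return (N(i * T_sym) - N((i - 1) * T_sym)) / N_T

def memory_length(T_sym, t_max=1e4):
    """Memory M of Eq. (6): solve Eq. (7) for T_alpha by bisection,
    assuming the left-hand side decreases in t on [T_sym, t_max]."""
    g = lambda t: (N(t + T_sym) - N(t)) / N_T - alpha
    if g(T_sym) <= 0:                 # memory already negligible after one STI
        return 1
    lo, hi = T_sym, t_max
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    return math.ceil(0.5 * (lo + hi) / T_sym)

T_sym = 2.0
M = memory_length(T_sym)
cir = [h(i, T_sym) for i in range(1, M + 1)]            # CIR vector h_1^M
valid = all(N_T * hi / (1 - hi) > 9 for hi in cir)      # Gaussian check, Eq. (13)
```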
One of the necessary parts of a communication system is the detector, which attempts to recover the transmitted symbol from the received signal. In this paper, we consider a memoryless binary detector with the rule \[\hat{S}_{i}=\begin{cases}1&\text{if }\mathtt{R}_{i}\geq\tau,\\ 0&\text{otherwise.}\end{cases} \tag{14}\] Note that, for each realization of the channel parameters, such as the input symbol distribution and the STI, we always look for the threshold \(\tau\) that maximizes the MI. Let \(s_{i}\!\in\!\{0,1\}\) denote the transmitted symbol associated with the \(i\)th STI and \(g(\omega;\mu,\sigma^{2})\) be a Gaussian probability density function (pdf) with mean \(\mu\) and variance \(\sigma^{2}\), where \(g(\omega;0,0)\!=\!\delta(\omega)\) is the Dirac delta function. Then, the pdf of the current signal conditioned on a specific realization of the current transmitted symbol (\(S_{i}\!=\!s_{i}\)) can be written as \[f_{\mathtt{C}_{i}|S_{i}=s_{i}}(\omega)=g\Big{(}\omega;s_{i}N_{\mathrm{T}}h_{1},s_{i}N_{\mathrm{T}}h_{1}(1-h_{1})\Big{)}. \tag{15}\] Let the vector \(s_{i-M+1}^{i-1}\!\in\!\{0,1\}^{M-1}\) be a realization of \((S_{i-M+1},\ldots,S_{i-1})\), that is, the \(M-1\) symbols preceding the \(i\)th interval. The conditional pdf of particles released in the past \(M-1\) intervals and absorbed within the \(i\)th interval, given the sequence of preceding symbols, is \[\begin{split}& f_{\mathtt{P}_{i}|S_{i-M+1}^{i-1}=s_{i-M+1}^{i-1}}(\omega)=\\ & g\bigg{(}\omega;N_{\mathrm{T}}\sum_{j=2}^{M}s_{i-j+1}h_{j},N_{\mathrm{T}}\sum_{j=2}^{M}s_{i-j+1}h_{j}(1-h_{j})\bigg{)}\.\end{split} \tag{16}\] The external noise is assumed to follow a time-independent Gaussian distribution with pdf \[f_{\mathtt{E}}(\omega)=g\Big{(}\omega;\mu_{\mathtt{E}},\sigma_{\mathtt{E}}^{2}\Big{)}. \tag{17}\] To obtain the conditional pdf of the received signals, two pdfs need to be considered first. The first is the pdf of the number of particles received in the \(i\)th interval but released in previous intervals, given the previously transmitted symbols and including the environment noise, _i.e._\((\mathtt{P}_{i}+\mathtt{E})|s_{i-M+1}^{i-1}\). The second is the pdf of particles received in the \(i\)th interval, released in the current and previous intervals, given the previously transmitted symbols and the current symbol, and including the external noise, that is \((\mathtt{C}_{i}+\mathtt{P}_{i}+\mathtt{E})|s_{i-M+1}^{i}\).
As all the involved RVs are Gaussian and conditionally independent, their sum results in a Gaussian RV with a mean and variance that is the sum of the means and variances, respectively, and we obtain \[\begin{split} f_{\mathtt{P}_{i}+\mathtt{E}|s_{i-M+1}^{i-1}}(\omega)=& g\bigg{(}\omega;\mu_{\mathtt{E}}+N_{\mathrm{T}}\sum_{j=2}^{M}s_{i-j+1}h_{j},\\ &\sigma_{\mathtt{E}}^{2}+N_{\mathrm{T}}\sum_{j=2}^{M}s_{i-j+1}h_{j}(1-h_{j})\bigg{)}\,\end{split} \tag{18}\] \[\begin{split} f_{\mathtt{C}_{i}+\mathtt{P}_{i}+\mathtt{E}|s_{i-M+1}^{i}}(\omega)=& g\bigg{(}\omega;\mu_{\mathtt{E}}+N_{\mathrm{T}}\sum_{j=1}^{M}s_{i-j+1}h_{j},\\ &\sigma_{\mathtt{E}}^{2}+N_{\mathrm{T}}\sum_{j=1}^{M}s_{i-j+1}h_{j}(1-h_{j})\bigg{)}\.\end{split} \tag{19}\] Thus, the channel transition probabilities given a specific sequence of symbols can be written as \[\begin{split} P_{\hat{S}_{i}|S_{i-M+1}^{i-1},S_{i}}(1|s_{i-M+1}^{i-1},0)=\\ \quad\mathrm{Pr}(\mathtt{P}_{i}+\mathtt{E}\geq\tau|s_{i-M+1}^{i-1})=\\ \quad Q\Bigg{(}\frac{\tau-\mu_{\mathtt{E}}-N_{\mathrm{T}}\sum_{j=2}^{M}s_{i-j+1}h_{j}}{\sqrt{\sigma_{\mathtt{E}}^{2}+N_{\mathrm{T}}\sum_{j=2}^{M}s_{i-j+1}h_{j}(1-h_{j})}}\Bigg{)}\,\end{split} \tag{20}\] \[\begin{split} P_{\hat{S}_{i}|S_{i-M+1}^{i-1},S_{i}}(0|s_{i-M+1}^{i-1},0)=\\ \quad\mathrm{Pr}(\mathtt{P}_{i}+\mathtt{E}<\tau|s_{i-M+1}^{i-1})=\\ \quad 1-\mathrm{Pr}(\mathtt{P}_{i}+\mathtt{E}\geq\tau|s_{i-M+1}^{i-1})\,\end{split} \tag{21}\] \[\begin{split} P_{\hat{S}_{i}|S_{i-M+1}^{i-1},S_{i}}(1|s_{i-M+1}^{i-1},1)=\\ \quad\mathrm{Pr}(\mathtt{C}_{i}+\mathtt{P}_{i}+\mathtt{E}\geq\tau|s_{i-M+1}^{i-1})=\\ \quad Q\Bigg{(}\frac{\tau-\mu_{\mathtt{E}}-N_{\mathrm{T}}\sum_{j=1}^{M}s_{i-j+1}h_{j}}{\sqrt{\sigma_{\mathtt{E}}^{2}+N_{\mathrm{T}}\sum_{j=1}^{M}s_{i-j+1}h_{j}(1-h_{j})}}\Bigg{)}\,\end{split} \tag{22}\] \[\begin{split} P_{\hat{S}_{i}|S_{i-M+1}^{i-1},S_{i}}(0|s_{i-M+1}^{i-1},1)=\\ \quad\mathrm{Pr}(\mathtt{C}_{i}+\mathtt{P}_{i}+\mathtt{E}<\tau|s_{i-M+1}^{i-1})=\\ \quad 1-\mathrm{Pr}(\mathtt{C}_{i}+\mathtt{P}_{i}+\mathtt{E}\geq\tau|s_{i-M+1}^{i-1})\.\end{split} \tag{23}\]

Fig. 2: Distribution (5) of the probability that particles are absorbed by a resetting receiver within the \(i\)th interval for \(T_{\mathrm{sym}}\!=\!0.5,1,2\) s. The channel memory length, resulting from (7) for \(\alpha\!=\!0.001\), increases with \(T_{\mathrm{sym}}\) when measured in time units (\(T_{\alpha}\)), but decreases in terms of STIs (\(M\)).
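The transition probabilities (20)-(23) reduce to evaluating a Gaussian tail at the detector threshold. The sketch below (ours, not code from the paper) shows this computation; the helper name `p_detect_one`, the ordering convention for the past symbols, and the example CIR values are our own assumptions, while the formulas follow Eqs. (1) and (20)-(23) with the Tab. I noise values as defaults.

```python
import math

def Q(z):
    """Gaussian tail function, Eq. (1)."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def p_detect_one(past, s_i, cir, tau, N_T=1e4, mu_E=50.0, sigma_E=50.0):
    """P(S_hat_i = 1 | s_{i-M+1}^{i-1}, s_i), Eqs. (20) and (22).

    `past` lists the previous M-1 symbols, most recent first, so that
    past[j-2] plays the role of s_{i-j+1} for j = 2, ..., M.
    """
    mean = mu_E + N_T * s_i * cir[0]
    var = sigma_E**2 + N_T * s_i * cir[0] * (1 - cir[0])
    for j in range(2, len(cir) + 1):
        s = past[j - 2]
        mean += N_T * s * cir[j - 1]
        var += N_T * s * cir[j - 1] * (1 - cir[j - 1])
    return Q((tau - mean) / math.sqrt(var))

# Example with M = 4, hypothetical CIR samples, and past symbols (1, 0, 1):
cir = [0.05, 0.012, 0.004, 0.002]
p1 = p_detect_one([1, 0, 1], 1, cir, tau=800.0)
p0 = 1.0 - p1      # complementary probabilities, Eqs. (21) and (23)
```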
## III Memoryless Capacity Analysis

In this section, we formulate the memoryless channel capacity, the AIR, and the MI for both the correlated and the independent source; for each source type, the ISI-Aware and ISI-Unaware scenarios are treated separately. The average entropy of the source process is defined as \[\lim_{n\rightarrow\infty}\frac{1}{n}\sum_{i=1}^{n}H(S_{i}|S_{1}^{i-1}). \tag{26}\]

### _Correlated Source_

The correlated source is modeled as a first-order stationary Markov process whose time-invariant transition probabilities are \[P_{S_{i}|S_{i-1}}(1|0)=p\,\qquad P_{S_{i}|S_{i-1}}(0|1)=q. \tag{30}\] Imposing stationarity, the asymptotic probabilities of transmitting a "\(0\)" and a "\(1\)" are \[\pi_{0}=\frac{q}{p+q}\, \tag{31}\] \[\pi_{1}=\frac{p}{p+q}. \tag{32}\] Exploiting the stationarity and the first-order Markov property of the source, the sum of the conditional entropies of the transmitted symbols can be written as \[\begin{split}\sum_{i=1}^{n}H(S_{i}|S_{1}^{i-1})&=H(S_{1})+\sum_{i=2}^{n}H(S_{i}|S_{1}^{i-1})\,\\ &=H(S_{1})+(n-1)H(S_{i}|S_{1}^{i-1})\,\\ &=H(S_{1})+(n-1)H(S_{i}|S_{i-1})\.\end{split} \tag{33}\] Substituting (33) into the definition of entropy of the process (26) and taking the limit, we can write the average entropy of the source as \[\begin{split}\lim_{n\rightarrow\infty}\frac{1}{n}\sum_{i=1}^{n}H(S_{i}|S_{1}^{i-1})=\\ \lim_{n\rightarrow\infty}\frac{1}{n}\Big{(}H(S_{1})+(n-1)H(S_{i}|S_{i-1})\Big{)}=H(S_{i}|S_{i-1})\.\end{split} \tag{34}\] In this study, we consistently make the assumption that the initial symbol in a sequence generated by the Markov source follows the asymptotic probabilities of the source. Consequently, we express the entropy of the Markov source as the mean of the two conditional entropies: \[\begin{split} H(S_{i}|S_{i-1})=& P_{S_{i-1}}(0)H(S_{i}|S_{i-1}=0)+\\ & P_{S_{i-1}}(1)H(S_{i}|S_{i-1}=1)=\pi_{0}H_{2}(p)+\pi_{1}H_{2}(q)\.\end{split} \tag{35}\]

#### Iii-A1 ISI-Aware

In this scenario, we consider a receiver with comprehensive knowledge of the previously transmitted symbols. This knowledge is tantamount to being aware of the factors responsible for ISI. Analogously to the source entropy, we can express the sum of the conditional entropies of the transmitted symbol, given the previously transmitted sequence and the currently estimated symbol, as the sum of individual entropies.
\[\begin{split}\sum_{i=1}^{n}H(S_{i}|S_{1}^{i-1},\hat{S}_{i})=\\ H(S_{1}|\hat{S}_{1})+H(S_{2}|S_{1}^{2},\hat{S}_{2})+\cdots+\sum_ {i=M}^{n}H(S_{i}|S_{1}^{i-1},\hat{S}_{i})=\\ H(S_{1}|\hat{S}_{1})+H(S_{2}|S_{1}^{2},\hat{S}_{2})+\cdots+(n-M)H (S_{i}|S_{1}^{i-1},\hat{S}_{i})\.\end{split} \tag{36}\] Applying the limit to take the average results in \[\begin{split}\lim_{n\rightarrow\infty}\frac{1}{n}\sum_{i=1}^{n} H(S_{i}|S_{1}^{i-1},\hat{S}_{i})=\\ \lim_{n\rightarrow\infty}\frac{1}{n}\Big{(}H(S_{1}|\hat{S}_{1}) +H(S_{2}|S_{1}^{2},\hat{S}_{2})+\cdots\\ +(n-M)H(S_{i}|S_{1}^{i-1},\hat{S}_{i})\Big{)}=H(S_{i}|S_{1}^{i-1},\hat{S}_{i})\.\end{split} \tag{37}\] Considering the finite length of the memory interval as previously defined in Sec. II, we can disregard the level of surprise associated with symbols transmitted significantly earlier. Consequently, we discard the symbols transmitted prior to the effective memory interval. \[H(S_{i}|S_{1}^{i-1},\hat{S}_{i})\leq H(S_{i}|S_{i-M+1}^{i-1},\hat{S}_{i}). \tag{38}\] The entropy of the current symbol, conditioned on the previously transmitted symbols and the currently estimated symbol, can be expressed as the marginalization over the realizations of the previously transmitted symbols and the currently estimated symbol, yielding \[\begin{split} H(S_{i}|S_{i-M+1}^{i-1},\hat{S}_{i})=\\ \sum_{\forall s_{i-M+1}^{i-1},\hat{s}_{i}}& P_{S_{i-M +1}^{i-1},\hat{S}_{i}}(s_{i-M+1}^{i-1},\hat{s}_{i})H(S_{i}|s_{i-M+1}^{i-1}, \hat{s}_{i})\,\end{split} \tag{39}\] and \[\begin{split} H(S_{i}|s_{i-M+1}^{i-1},\hat{s}_{i})=\\ -\sum_{\forall s_{i}}P_{S_{i}|S_{i-M+1}^{i-1},\hat{S}_{i}}(s_{i}| s_{i-M+1}^{i-1},\hat{s}_{i})\times\\ \log_{2}\big{(}P_{S_{i}|S_{i-M+1}^{i-1},\hat{S}_{i}}(s_{i}|s_{i-M+1 }^{i-1},\hat{s}_{i})\big{)}\.\end{split} \tag{40}\] The conditional probability of realizations in (40) can be computed as follows (see Appendix A) \[\begin{split}& P_{S_{i}|S_{i-M+1}^{i-1},\hat{S}_{i}}(s_{i}|s_{i-M+1 }^{i-1},\hat{s}_{i})=\\ &\frac{P_{\hat{S}_{i}|S_{i-M+1}^{i-1}}(\hat{s}_{i}|s_{i-M+1}^{i-1 })P_{S_{i}|S_{i-1}}(s_{i}|s_{i-1})}{\sum_{x\in\{0,1\}}P_{S_{i}|S_{i-M+1}^{i-1}, \hat{S}_{i}}(\hat{s}_{i}|s_{i-M+1}^{i-1},x)P_{S_{i}|S_{i-1}}(x|s_{i-1})}\,\end{split} \tag{41}\] where the conditional and joint probabilities of a specific estimated symbol and previously transmitted symbols are \[\begin{split}& P_{\hat{S}_{i}|S_{i-M+1}^{i-1}}(\hat{s}_{i}|s_{i-M+1 }^{i-1})=\\ &\sum_{\forall s_{i}}P_{\hat{S}_{i}|S_{i-M+1}^{i-1},\hat{S}_{i}}( \hat{s}_{i}|s_{i-M+1}^{i-1},s_{i})P_{S_{i}|S_{i-1}}(s_{i}|s_{i-1})\,\end{split} \tag{42}\] \[\begin{split}& P_{\hat{S}_{i},S_{i-M+1}^{i-1}}(\hat{s}_{i},s_{i-M+1 }^{i-1})=\\ & P_{\hat{S}_{i}|S_{i-M+1}^{i-1}}(\hat{s}|s_{i-M+1}^{i-1})P_{S_{i- M+1}^{i-1}}(s_{i-M+1}^{i-1})\.\end{split} \tag{43}\] Based on (43), the calculation of the probability for a given symbol sequence generated by the source requires knowledge of the Markov model. Since we are utilizing a Markov source as described in (30), the probability of a specific sequence can be determined by traversing the sequence through the Markov model. It is important to note that the probability of the first element in a sequence is assumed to correspond to the asymptotic probability of the Markov source for that particular symbol realization: \[P_{S_{r}^{r}}(s_{r}^{v})=\pi_{0}^{1-s_{r}}\pi_{1}^{s_{r}}\prod_{j=r+1}^{v}P_{S_ {j}|S_{j-1}}(s_{j}|s_{j-1}). \tag{44}\] In the end, the MI associated with the correlated source and ISI awareness is (45). 
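Before moving on, the Markov-source quantities (31), (32), (35), and (44) can be evaluated directly. The small Python sketch below is ours, not the authors' code; it assumes, as the text does, that the first symbol follows the asymptotic law, and for \(p=q=0.5\) it reduces to an i.i.d. equiprobable source.

```python
import math

def H2(x):
    """Binary entropy function, Eq. (2)."""
    if x in (0.0, 1.0):
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def stationary(p, q):
    """Asymptotic probabilities (pi_0, pi_1), Eqs. (31)-(32)."""
    return q / (p + q), p / (p + q)

def source_entropy(p, q):
    """Entropy of the first-order Markov source, Eq. (35)."""
    pi0, pi1 = stationary(p, q)
    return pi0 * H2(p) + pi1 * H2(q)

def sequence_prob(seq, p, q):
    """Probability of a symbol sequence, Eq. (44): the first symbol
    follows the stationary law, each later one the transition law."""
    pi0, pi1 = stationary(p, q)
    prob = pi1 if seq[0] == 1 else pi0
    for prev, cur in zip(seq, seq[1:]):
        flip = p if prev == 0 else q        # probability of changing symbol
        prob *= flip if cur != prev else 1 - flip
    return prob

# Sanity checks for the i.i.d. equiprobable special case p = q = 0.5:
assert abs(source_entropy(0.5, 0.5) - 1.0) < 1e-12
assert abs(sequence_prob([0, 1, 1], 0.5, 0.5) - 0.125) < 1e-12
```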
Note that the MI cannot have a negative value, and what we are computing in this paper is equivalent to lower bounds on the actual MIs, due to assumptions such as causality, effective memory, etc. Hence, we only take into account the positive values of the MIs.

#### Iii-A2 ISI-Unaware

In this particular scenario, we make the assumption that the receiver does not have any knowledge regarding the symbols transmitted prior to the current time. This assumption is equivalent to loosening the bound on the MI. Consequently, by disregarding the information pertaining to previously transmitted symbols, we can establish the following inequality \[H(S_{i}|S_{i-M+1}^{i-1},\hat{S}_{i})\leq H(S_{i}|\hat{S}_{i})\, \tag{46}\] and from the definition of average conditional entropy, we write \[H(S_{i}|\hat{S}_{i})=-\sum_{\forall s_{i},\hat{s}_{i}}P_{S_{i},\hat{S}_{i}}(s_{i},\hat{s}_{i})\log_{2}\left(P_{S_{i}|\hat{S}_{i}}(s_{i}|\hat{s}_{i})\right)\,, \tag{47}\] \[P_{S_{i},\hat{S}_{i}}(s_{i},\hat{s}_{i})=P_{\hat{S}_{i}|S_{i}}(\hat{s}_{i}|s_{i})P_{S_{i}}(s_{i}). \tag{48}\] The conditional probability of the detected symbol given the transmitted symbol is obtained by marginalizing over the previously transmitted symbols using the Bayes theorem: \[P_{\hat{S}_{i}|S_{i}}(\hat{s}_{i}|s_{i})=\sum_{\forall s_{i-M+1}^{i-1}}P_{\hat{S}_{i}|S_{i-M+1}^{i-1},S_{i}}(\hat{s}_{i}|s_{i-M+1}^{i-1},s_{i})\frac{P_{S_{i-M+1}^{i}}(s_{i-M+1}^{i})}{P_{S_{i}}(s_{i})}. \tag{49}\] Substituting (49) into (48), the term corresponding to the currently transmitted symbol cancels out, and we obtain \[P_{S_{i},\hat{S}_{i}}(s_{i},\hat{s}_{i})=\sum_{\forall s_{i-M+1}^{i-1}}P_{\hat{S}_{i}|S_{i-M+1}^{i-1},S_{i}}(\hat{s}_{i}|s_{i-M+1}^{i-1},s_{i})P_{S_{i-M+1}^{i}}(s_{i-M+1}^{i}). \tag{50}\] To compute the conditional probability of the currently transmitted symbol given the estimated one, we apply the Bayes rule \[P_{S_{i}|\hat{S}_{i}}(s_{i}|\hat{s}_{i})=\frac{P_{S_{i},\hat{S}_{i}}(s_{i},\hat{s}_{i})}{P_{\hat{S}_{i}}(\hat{s}_{i})}. \tag{51}\] The probability of the estimated symbol can be computed by marginalizing the joint probability over all possible realizations of the transmitted symbol: \[P_{\hat{S}_{i}}(\hat{s}_{i})=\sum_{\forall s_{i}}P_{\hat{S}_{i},S_{i}}(\hat{s}_{i},s_{i}). \tag{52}\] In the end, one can compute the MI corresponding to the correlated source with ISI unawareness on the receiver side as (53).

### _Independent Source_

Another type of source considered in this paper, from a statistical perspective, is one where symbols are generated independently with specific probabilities. Let \(\lambda_{1}=P_{S_{i}}(1)\) and \(\lambda_{0}=1-\lambda_{1}=P_{S_{i}}(0)\) denote the probabilities of transmitting symbols "1" and "0", respectively. In this scenario, there is no temporal dependency between the symbols generated by the source, _i.e._, \[P_{S_{1}^{n}}(s_{1}^{n})=\prod_{i=1}^{n}P_{S_{i}}(s_{i}). \tag{54}\] As a result, we can discard the conditioning on the previously transmitted symbol, and the entropy of the source simplifies to the binary entropy function \[H(S_{i}|S_{1}^{i-1})=H(S_{i})=H_{2}(\lambda_{0}). \tag{55}\]

#### Iii-B1 ISI-Aware

Similarly to the scenario with the correlated source, we can rely on Eqns. (39)-(43). Since the source is independent, the main difference with respect to the correlated source scenario is the probability of a particular sequence of symbols, which we write as \[P_{S_{r}^{v}}(s_{r}^{v})=\lambda_{0}^{v-r+1-w_{H}(s_{r}^{v})}\lambda_{1}^{w_{H}(s_{r}^{v})}.
\tag{56}\] The MI for the case of an independent source with knowledge about the previously transmitted symbols (_i.e._, ISI-Aware) is (57).

#### Iii-B2 ISI-Unaware

Without the knowledge of previously transmitted symbols, the equations derived in Sec. III-A2 remain applicable. Nevertheless, it is necessary to calculate the probability of each specific sequence using (56), considering the independent nature of symbol generation by the source. Consequently, the MI of the independent source, under the condition of unknown previously transmitted symbols, is given by (58).

## IV Numerical Evaluation and Results

We present a selection of results that illustrate the superiority of correlated sources in achieving higher capacity. It should be noted that the optimal input distribution for achieving capacity may not be uniform. The numerical evaluation was conducted using the system parameters listed in Table I, obtained from [25], with the exception of the external noise and \(\alpha\). We intentionally selected the noise standard deviation \(\sigma_{\mathtt{E}}\) and mean \(\mu_{\mathtt{E}}\) such that there are instances where the values of \(\mathrm{E}\) become negative, indicating that the external noise impedes IP absorption. The parameter \(\alpha\) is chosen to ensure the validity of the last sample of the CIR as per (13). Figure 3 illustrates the channel capacity (24) for various STIs (\(0.2\leq T_{\rm sym}\leq 1.5\) s) in different scenarios. These scenarios include ISI-Aware with a correlated source (blue curve with square marker), ISI-Unaware with a correlated source (blue curve with triangle marker), ISI-Aware with an independent source (red curve with square marker), and ISI-Unaware with an independent source (red curve with triangle marker). As expected, the capacity with ISI awareness is generally higher than that with ISI unawareness. Interestingly, the correlated source achieves a higher capacity compared to the independent source. Normally, employing an independent source is expected to result in higher capacities in communication systems. However, in this unique scenario, due to the high ISI effect, the correlated source allows us to tackle the problem of the ISI, and the reduction of the source entropy compared to the independent one is worth it. Specifically, the maximum channel capacity in the ISI-Aware scenario with the correlated source, \(C_{\rm ISIA}^{\rm CRR}\), is \(1.50\,\)[bit/s] at \(T_{\rm sym}\!=\!0.40\,\)s, with the input probability distribution \(p\!=\!0.60\) and \(q\!=\!0.62\). On the other hand, the maximum capacity for the ISI-Aware scenario with the independent source, \(C_{\rm ISIA}^{\rm IND}\), is \(1.43\,\)[bit/s] at \(T_{\rm sym}\!=\!0.45\,\)s, with an input probability distribution of \(\lambda_{0}\!=\!0.52\). Comparing the two maximum capacities in the ISI-Aware scenario, we observe that the independent source achieves its maximum at a higher \(T_{\rm sym}\) compared to the correlated source. Moving on to the ISI-Unaware scenario, the maximum capacity for the correlated source, \(C_{\rm ISIU}^{\rm CRR}\), is \(1.24\,\)[bit/s] at \(T_{\rm sym}\!=\!0.57\,\)s, with an optimum input probability distribution at \(p\!=\!0.60\) and \(q\!=\!0.60\). The maximum capacity for the independent source, \(C_{\rm ISIU}^{\rm IND}\), is \(1.18\,\)[bit/s] at \(T_{\rm sym}\!=\!0.60\,\)s, with an input probability distribution of \(\lambda_{0}\!=\!0.50\). We also observe a slight shift in the STI corresponding to the maximum capacity in both ISI-Unaware cases.
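The capacities reported above rely on the brute-force threshold optimization described in Sec. I-B. The following minimal Python sketch (ours) of that search evaluates the independent-source, ISI-Unaware MI of (58) for a given \(\lambda_{0}\) and threshold \(\tau\), reusing the `p_detect_one` and `H2` helpers from the earlier sketches; the CIR values and the threshold grid are illustrative assumptions, not values from the paper.

```python
from itertools import product
import math

def mi_isiu_ind(lam0, cir, tau, **ch):
    """MI of Eq. (58): independent source, ISI-Unaware receiver."""
    M = len(cir)
    joint = {(s, sh): 0.0 for s in (0, 1) for sh in (0, 1)}
    for past in product((0, 1), repeat=M - 1):          # all ISI patterns
        w = sum(past)
        p_past = lam0 ** (M - 1 - w) * (1 - lam0) ** w  # P(s_{i-M+1}^{i-1})
        for s in (0, 1):
            p1 = p_detect_one(list(past), s, cir, tau, **ch)
            p_s = lam0 if s == 0 else 1.0 - lam0
            joint[(s, 1)] += p_past * p_s * p1          # Eq. (50)
            joint[(s, 0)] += p_past * p_s * (1.0 - p1)
    p_hat = {sh: joint[(0, sh)] + joint[(1, sh)] for sh in (0, 1)}  # Eq. (52)
    h_cond = -sum(j * math.log2(j / p_hat[sh])                      # Eq. (47)
                  for (s, sh), j in joint.items() if j > 0.0)
    return max(0.0, H2(lam0) - h_cond)

# Brute-force sweep of the detector threshold for lambda_0 = 0.5:
cir = [0.05, 0.012, 0.004, 0.002]                      # hypothetical CIR
best_mi, best_tau = max((mi_isiu_ind(0.5, cir, t), t)
                        for t in range(0, 1500, 5))
```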
Interestingly, by increasing the STI and consequently reducing the impact of ISI, all capacities overlap. Therefore, regardless of the source type or the ISI knowledge, the same performance can be achieved. This overlap occurs because as \(T_{\rm sym}\) increases, the effect of ISI diminishes, rendering the knowledge of previously transmitted symbols less valuable. It is important to note that the significant difference between the ISI-Aware cases corresponding to the two types of sources is observed only within a certain range of STIs (\(0.3\) to \(0.85\) s). The same observation applies to the other two curves representing ISI unawareness. Figure 4 illustrates the AIR across the input distribution space, represented by color, in the context of ISI awareness with a correlated source. The black hexagram marker indicates the capacity point associated with this scenario, corresponding to \(p\!=\!0.60\) and \(q\!=\!0.65\), with a value of \(C_{\rm ISIA}^{\rm CRR}\!=\!1.42\). By analyzing the expressions for the stationary probabilities (31) and (32), we can infer that when \(p\) and \(q\) are equal to each other, it is equivalent to transmitting an infinitely long sequence with equiprobable symbols. However, as \(p\) and \(q\) approach \(1\), it indicates a preference to avoid consecutive transmission of the same symbol. In the provided example, we observe that the optimal input distribution of the correlated source is asymptotically equiprobable, but it is preferable to avoid generating consecutive identical symbols, particularly for the transmission of \(``1"\). This observation is supported by the fact that \(q\) is slightly higher than \(p\), indicating a lesser desire for transmitting two successive \(``1"\) symbols. Of course, when both \(p\) and \(q\) approach \(1\), the source avoids repeating the same symbol and tends to alternate deterministically between \(``0"\) and \(``1"\).

\[\begin{split} I_{\rm ISIA}^{\rm IND}=&\Bigg{\{}H_{2}(\lambda_{0})+\sum_{\forall s_{i-M+1}^{i-1},\,\hat{s}_{i}}\Bigg{[}\lambda_{0}^{M-1-w_{H}(s_{i-M+1}^{i-1})}\lambda_{1}^{w_{H}(s_{i-M+1}^{i-1})}\sum_{\forall s_{i}}\bigg{[}\lambda_{0}^{1-s_{i}}\lambda_{1}^{s_{i}}P_{\hat{S}_{i}|S_{i-M+1}^{i-1},S_{i}}(\hat{s}_{i}|s_{i-M+1}^{i-1},s_{i})\bigg{]}\times\\ &\sum_{\forall s_{i}}\bigg{[}\frac{\lambda_{0}^{1-s_{i}}\lambda_{1}^{s_{i}}P_{\hat{S}_{i}|S_{i-M+1}^{i-1},S_{i}}(\hat{s}_{i}|s_{i-M+1}^{i-1},s_{i})}{\sum\limits_{x\in\{0,1\}}\lambda_{0}^{1-x}\lambda_{1}^{x}P_{\hat{S}_{i}|S_{i-M+1}^{i-1},S_{i}}(\hat{s}_{i}|s_{i-M+1}^{i-1},x)}\log_{2}\Big{(}\frac{\lambda_{0}^{1-s_{i}}\lambda_{1}^{s_{i}}P_{\hat{S}_{i}|S_{i-M+1}^{i-1},S_{i}}(\hat{s}_{i}|s_{i-M+1}^{i-1},s_{i})}{\sum\limits_{x\in\{0,1\}}\lambda_{0}^{1-x}\lambda_{1}^{x}P_{\hat{S}_{i}|S_{i-M+1}^{i-1},S_{i}}(\hat{s}_{i}|s_{i-M+1}^{i-1},x)}\Big{)}\bigg{]}\Bigg{]}\Bigg{\}}^{+}\.\end{split} \tag{57}\]

\begin{table} \begin{tabular}{|c|c|c|} \hline Variable & Definition & Value \\ \hline \hline \(N_{\rm T}\) & Number of released molecules & \(10^{4}\) \\ \hline \(R\) & Radius of the receiver & \(1\,\mu\)m \\ \hline \(d\) & Distance between transmitter and center of receiver & \(10\,\mu\)m \\ \hline \(\alpha\) & Minimum acceptable probability & \(0.001\) \\ \hline \(\mu_{\mathtt{E}}\) & Mean of the external noise signal & \(50\) \\ \hline \(\sigma_{\mathtt{E}}\) & Standard deviation of the external noise signal & \(50\) \\ \hline \(D\) & Diffusion coefficient for the signaling molecule & \(79.4\,\mu\)m\({}^{2}\)/s \\ \hline \end{tabular} \end{table} TABLE I: System parameters
\[\begin{split} I_{\text{ISIU}}^{\text{IND}}=&\Bigg{\{}H_{2}(\lambda_{0})+\sum_{\forall s_{i},\hat{s}_{i}}\lambda_{0}^{1-s_{i}}\lambda_{1}^{s_{i}}\Bigg{[}\sum_{\forall s_{i-M+1}^{i-1}}\bigg{[}\lambda_{0}^{M-1-w_{H}(s_{i-M+1}^{i-1})}\lambda_{1}^{w_{H}(s_{i-M+1}^{i-1})}P_{\hat{S}_{i}|S_{i-M+1}^{i-1},S_{i}}(\hat{s}_{i}|s_{i-M+1}^{i-1},s_{i})\bigg{]}\times\\ &\log_{2}\Big{(}P_{S_{i}|\hat{S}_{i}}(s_{i}|\hat{s}_{i})\Big{)}\Bigg{]}\Bigg{\}}^{+}\, \end{split} \tag{58}\] where \(P_{S_{i}|\hat{S}_{i}}(s_{i}|\hat{s}_{i})\) is obtained from (50)-(52).

Figure 5 reports the AIR over the same input distribution space for the ISI-Unaware scenario with the correlated source and the same STI: the AIR is lower in the ISI-Unaware scenario as expected, but with the same color pattern. Figure 6 depicts the AIR for the same scenario as shown in Fig. 4, but here the STI is \(T_{\mathrm{sym}}\!=\!0.7\,\mathrm{s}\). In this case, the capacity is achieved with \(p\!=\!q\!=\!0.55\), indicating a preference for almost equiprobable input distributions. By comparing the capacity-achieving input distributions in Figs. 4 and 6, we can observe that the optimal values of \(p\) and \(q\) converge to \(0.5\) as the STI increases. With \(p\) and \(q\) closer to \(0.5\), the source behaves more similarly to a source that emits independent and uniformly distributed symbols. Figure 7 presents the AIR for a similar scenario as depicted in Fig. 5, with STI \(T_{\mathrm{sym}}\!=\!0.7\,\mathrm{s}\). The capacity is obtained when the input distribution is equiprobable, characterized by \(p\!=\!q\!=\!0.57\), indicating a preference for equal probabilities from a stationarity perspective. Comparing this capacity with the one shown in Fig. 5, we observe a shift in the optimal input distribution. In this case, the capacity is achieved with an equiprobable distribution that slightly avoids generating the same symbols successively, whereas in Fig. 5, the avoidance of generating "\(1\)"s was a little more preferred.
This observation can be explained by the increased STI \(T_{\mathrm{sym}}\), which leads to a relatively reduced impact of ISI. In Fig. 8, we present the difference between the AIRs illustrated in Fig. 4 and Fig. 5 to analyze the disparity between the ISI-Aware scenario and the ISI-Unaware case for an STI of \(T_{\mathrm{sym}}\!=\!0.3\,\mathrm{s}\). Consistent with the theoretical prediction stated in (46), the difference between the two AIRs is non-negative. There are two prominent regions where the difference between the AIRs is significant. The first region, located in the upper right side of the figure, demonstrates that the performance of the ISI-Unaware scenario tends to approach \(0\), whereas the ISI-Aware case exhibits a higher AIR in that region. A similar observation can be made for the bottom left region of the figure. Since the independent source's input space can be spanned by a single variable, \(\lambda_{0}\), we can examine the AIR values of the independent source as a function of the STI, \(T_{\mathrm{sym}}\), and the probability of transmitting a "\(0\)", \(\lambda_{0}\). Figure 9 illustrates the AIR values in the ISI-Aware scenario, where the source generates symbols independently, and the receiver is aware of the previously transmitted symbols. As we also observed in Fig. 3, the highest capacity is achieved when \(T_{\mathrm{sym}}=0.45\,\mathrm{s}\). To gain further insight, in Fig. 10 we present a cross-sectional view of Fig. 9, focusing on specific STIs (\(T_{\mathrm{sym}}\!\in\!\{0.30,0.45,0.60,0.75,0.90\}\,\mathrm{s}\)). The hexagram markers indicate the capacity corresponding to each STI. It is evident that as \(T_{\mathrm{sym}}\) increases, the input distribution associated with the channel capacity gradually approaches an equiprobable input distribution (_i.e.,_\(\lambda_{0}\!=\!0.5\)). However, the maximum capacity does not occur when symbols are transmitted with equal probability. In fact, our analysis demonstrates a preference for transmitting a slightly higher number of "\(0\)" symbols compared to "\(1\)"s. Interestingly, when \(T_{\rm sym}\!=\!0.3\) s, even for \(\lambda_{0}\!<\!0.35\) and \(\lambda_{0}\!>\!0.65\), we observe favorable AIR values compared to those associated with \(T_{\rm sym}\!\geq\!0.6\) s. Figure 11 shows the AIR in a similar fashion as in Fig. 9, but assumes no knowledge of ISI. Compared to Fig. 9, we can observe that the AIR drops. However, the shape of the manifold remains similar. The cross-sectional view of Fig. 11 is depicted in Fig. 12 for the same set of \(T_{\rm sym}\) as for Fig. 10. The maximum possible capacity achieved with equiprobable input symbols is at \(T_{\rm sym}=0.6\) s. The reason for the complex shape of the AIR curves in Fig. 10 is not trivial: each \(\lambda_{0}\) is associated with a different channel, which depends on the specific optimum detector threshold, \(\tau\).

Fig. 6: AIR as a function of the correlated source input distribution with \(T_{\mathrm{sym}}\!=\!0.7\,\mathrm{s}\) in the ISI-Aware scenario. Capacity is achieved at \(p\!=\!q\!=\!0.55\), with a value of \(C_{\mathrm{ISIA}}^{\mathrm{CRR}}\!=\!1.27\).

Fig. 7: AIR as a function of the correlated source transition probabilities with \(T_{\mathrm{sym}}\!=\!0.7\,\mathrm{s}\) in the ISI-Unaware scenario. Capacity is \(C_{\mathrm{ISIU}}^{\mathrm{CRR}}\) obtained with \(p\!=\!0.57\) and \(q\!=\!0.57\).

Fig. 8: Difference between the AIR values corresponding to Fig. 4 and Fig. 5 over the input distribution space.
The curve for \(T_{\rm sym}\!=\!0.3\) s exhibits two local maxima. The maximum at \(\lambda_{0}\!\approx\!0.28\) suggests transmitting fewer "\(0\)"s is beneficial, which may seem counter-intuitive given the higher ISI associated with faster transmission rates. However, the other maximum at \(\lambda_{0}\!\approx\!0.75\) suggests transmitting more "\(0\)"s is optimal. This observation is sensible because the ISI increases with the transmission rate. By transmitting "\(1\)" less frequently, the ISI is reduced, yielding an improvement in the AIR. As expected, the maximum associated with \(\lambda_{0}\!\approx\!0.75\) is higher than the one associated with \(\lambda_{0}\!\approx\!0.28\).

Fig. 9: AIR as a function of the independent source probability of transmitting "\(0\)", \(\lambda_{0}\), and of the STI \(T_{\rm sym}\) in the ISI-Aware scenario.

Fig. 10: AIR values corresponding to \(T_{\rm sym}\!\in\!\{0.30,0.45,0.60,0.75,0.90\}\) s when the source is of the independent type, and the ISI-Aware scenario holds. Hexagram markers indicate the capacity associated with each \(T_{\rm sym}\).

Fig. 11: AIR as a function of the independent source probability of transmitting "\(0\)", \(\lambda_{0}\), and of the STI \(T_{\rm sym}\) in the ISI-Unaware scenario.

Fig. 12: AIR values corresponding to \(T_{\rm sym}\!\in\!\{0.30,0.45,0.60,0.75,0.90\}\) s when the source is of the independent type, and the ISI-Unaware scenario holds. Hexagram markers indicate the capacity associated with each \(T_{\rm sym}\).

## V Conclusions

We have investigated the Achievable Information Rate (AIR) of a diffusive molecular communication (MC) channel with a fully absorbing receiver, which counts the particles absorbed along each symbol time interval (STI) and resets the counter at every interval. The MC channel is affected by memory and thus inter-symbol interference (ISI), due to the delayed arrival of molecules. To reduce the complexity in calculating the mutual information (MI), we have measured the effective memory length as an integer number of STIs and considered a single-symbol memoryless detector. Unlike previous works, we have also optimized the detector threshold to maximize the MI. We have approximated the received signal distribution as Gaussian and calculated the channel MI affected by ISI. Our investigation of the AIR covers four distinct scenarios: the independent and the correlated source, each with and without knowledge of the previously transmitted symbols at the receiver side. Our selection of numerical results demonstrates that, in general, a higher capacity can be achieved with a correlated source. The optimal input probability distribution achieving the capacity may not be uniform. In particular, when the STI \(T_{\mathrm{sym}}\) is small, thus implying strong ISI, the maximum AIR does not occur with the equiprobable transmission of symbols.

## Appendix A

In this section, we derive an equivalent expression for the probability of an event conditioned on two other joint events, which was extensively used in the manuscript. \[P_{A|B,C}(a|b,c) =\frac{P_{C|A,B}(c|a,b)P_{A,B}(a,b)}{P_{C,B}(c,b)}\] \[=\frac{P_{C|A,B}(c|a,b)P_{A|B}(a|b)P_{B}(b)}{P_{C|B}(c|b)P_{B}(b)}\] \[=\frac{P_{C|A,B}(c|a,b)P_{A|B}(a|b)}{P_{C|B}(c|b)} \tag{59}\] \[=\frac{P_{C|A,B}(c|a,b)P_{A|B}(a|b)}{\sum\limits_{\forall y}P_{C,A|B}(c,y|b)}\] \[=\frac{P_{C|A,B}(c|a,b)P_{A|B}(a|b)}{\sum\limits_{\forall y}P_{C|A,B}(c|y,b)P_{A|B}(y|b)}\]
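As a quick numerical sanity check of (59) (our sketch, not part of the paper), one can draw a random joint distribution over three binary variables and compare both sides of the identity:

```python
import random

random.seed(7)
# Random joint distribution over (A, B, C) in {0,1}^3
p = {(a, b, c): random.random()
     for a in (0, 1) for b in (0, 1) for c in (0, 1)}
Z = sum(p.values())
p = {k: v / Z for k, v in p.items()}

def P(**fix):
    """Marginal probability with some of A, B, C fixed."""
    return sum(v for k, v in p.items()
               if all(k["abc".index(n)] == x for n, x in fix.items()))

a, b, c = 1, 0, 1
lhs = P(a=a, b=b, c=c) / P(b=b, c=c)               # P(A = a | B = b, C = c)
num = (P(a=a, b=b, c=c) / P(a=a, b=b)) * (P(a=a, b=b) / P(b=b))
den = sum((P(a=y, b=b, c=c) / P(a=y, b=b)) * (P(a=y, b=b) / P(b=b))
          for y in (0, 1))
assert abs(lhs - num / den) < 1e-12                # Eq. (59) holds
```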
2310.19250
Assessment of Differentially Private Synthetic Data for Utility and Fairness in End-to-End Machine Learning Pipelines for Tabular Data
Differentially private (DP) synthetic data sets are a solution for sharing data while preserving the privacy of individual data providers. Understanding the effects of utilizing DP synthetic data in end-to-end machine learning pipelines impacts areas such as health care and humanitarian action, where data is scarce and regulated by restrictive privacy laws. In this work, we investigate the extent to which synthetic data can replace real, tabular data in machine learning pipelines and identify the most effective synthetic data generation techniques for training and evaluating machine learning models. We investigate the impacts of differentially private synthetic data on downstream classification tasks from the point of view of utility as well as fairness. Our analysis is comprehensive and includes representatives of the two main types of synthetic data generation algorithms: marginal-based and GAN-based. To the best of our knowledge, our work is the first that: (i) proposes a training and evaluation framework that does not assume that real data is available for testing the utility and fairness of machine learning models trained on synthetic data; (ii) presents the most extensive analysis of synthetic data set generation algorithms in terms of utility and fairness when used for training machine learning models; and (iii) encompasses several different definitions of fairness. Our findings demonstrate that marginal-based synthetic data generators surpass GAN-based ones regarding model training utility for tabular data. Indeed, we show that models trained using data generated by marginal-based algorithms can exhibit similar utility to models trained using real data. Our analysis also reveals that the marginal-based synthetic data generator MWEM PGM can train models that simultaneously achieve utility and fairness characteristics close to those obtained by models trained with real data.
Mayana Pereira, Meghana Kshirsagar, Sumit Mukherjee, Rahul Dodhia, Juan Lavista Ferres, Rafael de Sousa
2023-10-30T03:37:16Z
http://arxiv.org/abs/2310.19250v1
# Assessment of Differentially Private Synthetic Data for Utility and Fairness in End-to-End Machine Learning Pipelines for Tabular Data

## Abstract

Differentially private (DP) synthetic data sets are a solution for sharing data while preserving the privacy of individual data providers. Understanding the effects of utilizing DP synthetic data in end-to-end machine learning pipelines impacts areas such as health care and humanitarian action, where data is scarce and regulated by restrictive privacy laws. In this work, we investigate the extent to which synthetic data can replace real, tabular data in machine learning pipelines and identify the most effective synthetic data generation techniques for training and evaluating machine learning models. We systematically investigate the impacts of differentially private synthetic data on downstream classification tasks from the point of view of utility as well as fairness. Our analysis is comprehensive and includes representatives of the two main types of synthetic data generation algorithms: marginal-based and GAN-based. To the best of our knowledge, our work is the first that: (i) proposes a training and evaluation framework that does not assume that real data is available for testing the utility and fairness of machine learning models trained on synthetic data; (ii) presents the most extensive analysis of synthetic data set generation algorithms in terms of utility and fairness when used for training machine learning models; and (iii) encompasses several different definitions of fairness. Our findings demonstrate that marginal-based synthetic data generators surpass GAN-based ones regarding model training utility for tabular data. Indeed, we show that models trained using data generated by marginal-based algorithms can exhibit similar utility to models trained using real data. Our analysis also reveals that the marginal-based synthetic data generator MWEM PGM can train models that simultaneously achieve utility and fairness characteristics close to those obtained by models trained with real data.

## Introduction

Differential privacy (DP) is the standard for privacy-preserving statistical summaries [1]. Companies such as Microsoft [2], Google [3], Apple [4], and government organizations such as the US Census [5], have successfully applied DP in machine learning and data sharing scenarios. The popularity of DP is due to its strong mathematical guarantees. Differential Privacy guarantees privacy by ensuring that the inclusion or exclusion of any particular individual does not significantly change the output distribution of an algorithm. In areas such as health care, humanitarian action, education, and socioeconomic studies, the publication and sharing of data is crucial for informing society and scientific collaboration. However, the disclosure of such data sets can often reveal private, sensitive information. Privacy-preserving data publishing aims at enabling such collaborations while preserving the privacy of individual entries in the data set. Tabular/categorical data about individuals are relevant in many applications, from health care to humanitarian action. Privacy-preserving data publishing for such data can be done in the form of a synthetic data table that has the same schema and similar distributional properties as the real data. The aim here is to release a perturbed version of the original information, so that it can still be used for statistical analysis, but the privacy of individuals in the database is preserved.
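The "perturb and release" idea can be made concrete with the Laplace mechanism, the canonical way to privatize counting queries (the formal DP definitions appear in the Preliminaries below). A minimal sketch, under the assumption that each individual contributes exactly one row, so the L1 sensitivity of the histogram is 1:

```python
import numpy as np
import pandas as pd

def dp_histogram(column: pd.Series, epsilon: float) -> pd.Series:
    """Release a noisy histogram of a categorical column. With one row per
    individual, adding or removing a person changes one count by 1 (L1
    sensitivity 1), so Laplace noise of scale 1/epsilon gives epsilon-DP."""
    counts = column.value_counts()
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon, size=len(counts))
    return (counts + noise).clip(lower=0)

# Example: a private view of a hypothetical 'income' column at epsilon = 1.0
df = pd.DataFrame({"income": ["<=50K", ">50K", "<=50K", ">50K", "<=50K"]})
print(dp_histogram(df["income"], epsilon=1.0))
```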
The biggest advantage of synthetic data sets is that, once released, all data analysis and machine learning tasks can be performed in the same way as with real data. As noted by [6], the switch between real and synthetic data in data analysis and machine learning pipelines is seamless - the same analysis tools, libraries and algorithms are applied in the same manner to both data sets. Other privacy-preserving technologies, such as federated learning, require expertise and appropriate tools to perform data analysis and model training. Due to all the potential benefits of synthetic data, understanding the impact of synthetic data on downstream classification tasks has become extremely important. A trend observed in recent studies is to evaluate the performance of synthetic data generators of two types: marginal-based synthesizers [7] and generative adversarial network (GAN) based synthesizers [8, 9, 6]. Marginal-based synthetic data generators are suitable for tabular data only, and have gained increased popularity after the algorithm MST won the NIST competition in 2018 [10]. Marginal-based synthesizers are so named because they learn approximate data distributions by querying noisy marginals from the real data. Notable marginal-based algorithms are MWEM PGM [11] and PrivBayes [12]. GAN-based synthesizers, on the other hand, are flexible algorithms, and are suitable for tabular, image and other data formats. GANs learn patterns and relationships from the input data based on a game, in the sense of game theory, between two machine learning models, a discriminator model and a generator model. Among popular differentially private GAN architectures we list DP-GAN [13], DP-CTGAN [14], PATE-GAN [15] and PATE-CTGAN [14].

One of the major applications of synthetic data is training machine learning models. Therefore, it is paramount to understand how exchanging real data for synthetic data impacts the performance of the trained machine learning models. By performance, we mean not only the utility of the model (its accuracy, for example) but also how well the model performs for different subgroups of the data set - the fairness of the model. The impact of machine learning models on minority subgroups is an active area of research, and several works have investigated the trade-offs among model accuracy, bias, and privacy [16, 17, 18, 19]. However, only recently has the bias caused by the use of synthetic data in downstream classification received attention [20, 21, 7]. This problem becomes particularly relevant in the context of synthetic data sets generated with differential privacy guarantees. It is known that differential privacy can affect fairness in machine learning models [17]. Despite recent work investigating the impact of synthetic data on downstream model fairness [20, 8], there are important questions that remain unanswered.

* There is no published work that systematically studies the utility and fairness of machine learning models trained on several GAN-based and marginal-based synthetic tabular data set generation algorithms.
* Previous studies have not evaluated machine learning models trained on synthetic data set generation algorithms for multiple definitions of fairness.
* In previous studies, it was always assumed that real data was available for evaluating the fairness of models trained on synthetic data. Here, we propose and evaluate a pipeline where no such assumption is necessary.
**Contributions** In this work, we investigate the impacts of differentially private synthetic data on downstream classification, focusing on the impacts on model utility and fairness. Our investigation focuses on two aspects of such impact:

* What is the impact on model utility when utilizing synthetic data for training machine learning models? Can synthetic data also be used to evaluate the utility of machine learning models?
* What is the impact on model fairness when utilizing synthetic data for training machine learning models? Can synthetic data be used to evaluate the fairness of machine learning models?

In our investigations we also evaluate whether there are clear differences in performance between marginal-based and GAN-based synthetic data, and whether there is a synthesizer algorithm that produces data that clearly outperforms the others. Our research evaluates the impact of utilizing synthetic data sets for both training and testing in machine learning pipelines. We empirically compare the performance of marginal-based synthesizers and GAN-based synthesizers within the context of a machine learning pipeline. Our experiments yield a comprehensive analysis, encompassing utility and fairness metrics. Our main contributions are:

* We propose a training and evaluation framework that does not assume that real data is available for testing the utility and fairness of machine learning models trained on synthetic data.
* We present an extensive analysis of synthetic data set generation algorithms in terms of utility and fairness when used for training machine learning models. In particular, this is the first systematic comparison of several marginal-based and GAN-based algorithms for fairness and utility of the resulting machine learning models.
* This is the first of such studies that includes several different definitions of fairness.

Fig 1: Pipeline for model training and evaluation using synthetic data. (1) We generate synthetic data sets for model training and model testing utilizing differentially private synthesizers. (2) We train models utilizing synthetic data and evaluate them on synthetic test data. Model selection is made during this phase. (3) Based on the results of the previous phase, a model is trained using synthetic data and deployed. The model is applied to real (test) data in the production phase.

**Main Findings:**

1. **Marginal-based synthetic data can accurately train machine learning models for tabular data.** Marginal-based synthetic data can train models with utility similar to models trained on real data. Our experiments show that for a privacy-loss parameter \(\epsilon>5.0\), models trained with MWEM PGM (AUC = 0.684), MST (AUC = 0.662) and PrivBayes (AUC = 0.668) provide utility very similar to models trained on real data (AUC = 0.684). Additionally, we evaluated models using synthetic data, and found that marginal-based synthetic data provides a good evaluation, with synthetic data yielding an AUC = 0.671 versus AUC = 0.684 (measured using real data).
2. **Synthetic data sets generated with MWEM PGM can be used for accurate model training and fairness evaluation in the case of tabular data.** We found that MWEM PGM synthetic data can train models that achieve utility and fairness characteristics very similar to those of models trained with real data. Additionally, the synthetic data generated by the MWEM PGM algorithm showed very similar behavior to real data when used to evaluate the utility and fairness of machine learning models.
This is the first study to show that synthetic data can present reliable behavior and serve as a potential substitute for real data sets in end-to-end machine learning pipelines. This work significantly extends and subsumes a previous version, presented at the _Machine Learning for Data: Automated Creation, Privacy, Bias Workshop_ at the _International Conference on Machine Learning (ICML)_ (a workshop without proceedings) [22].

## 1 Related Works

As synthetic data generation becomes standard practice for data sharing and publishing, understanding the impacts of utilizing synthetic data in machine learning pipelines is of significant importance. Although previous works have advised against using synthetic data to train and evaluate any final tools deployed in the real world [23], in very sensitive scenarios, such as human trafficking data [24], synthetic data might be the only available data for training and testing models. The promise of synthetic data has generated interest in understanding the impacts of utilizing synthetic data in data analysis and machine learning. Some of these works include analyzing the utility of differentially private synthetic data in different tasks [25], investigating whether training models with differentially private synthetic images can increase subgroup disparities [8], the impacts different types of synthetic data can have on model fairness [26, 20], the utility of synthetic data in downstream health care classification systems [7], and whether feature importance can be accurately analyzed using differentially private synthetic data [21]. All these works are ultimately trying to answer the same question: to what extent can we substitute real data with synthetic data, and which are the best synthetic data generation techniques for model training?

However, these works still leave questions unanswered. First of all, there has not been a systematic study of the impacts of using synthetic data sets in end-to-end machine learning pipelines, which means evaluating the use of synthetic data for both model training and model evaluation. Additionally, there has been a lot of focus on image classification tasks [8, 20], where the disparities in accuracy are largely attributable to the class imbalance in these data sets: i.e., disadvantaged classes are also rare classes in the data set, thereby leading to worse performance on them. In contrast, our work studies these issues in the context of tabular data sets and in settings where the data has an intrinsic bias against sub-populations that are not necessarily rare in the data set. Moreover, our work focuses on comparing two families of data synthesis algorithms: marginal-based and GAN-based data synthesizers. While these two types of algorithms have been previously compared for utility [25], no such extensive comparative analysis exists for fairness. We are the first to extensively study the differences in applying data generated by these two families of algorithms in end-to-end machine learning pipelines for utility and multiple fairness metrics.

## 2 Preliminaries

In this section we introduce the concepts of differential privacy and algorithmic fairness. We refer the reader to [27, 28, 1] for a detailed explanation of these concepts. Additionally, we describe the synthetic data generation techniques and the data sets used in our experiments.
### Differential privacy

Differential privacy is a rigorous privacy notion used to protect an individual's data in a data set disclosure. We present in this section the notation and definitions that we will use to describe our privatization approach. We refer the reader to [29], [30] and [31] for detailed explanations of these definitions and theorems.

**Pure Differential Privacy.** A randomized mechanism \(\mathcal{M}:\mathcal{D}\rightarrow\mathcal{A}\) with database domain \(\mathcal{D}\) and output set \(\mathcal{A}\) is \(\epsilon\)-differentially private if, for any output set \(A\subseteq\mathcal{A}\) and neighboring databases \(D,D^{\prime}\in\mathcal{D}\) (i.e., \(D\) and \(D^{\prime}\) differ in at most one entry), we have

\[\Pr[\mathcal{M}(D)\in A]\leq e^{\epsilon}\Pr[\mathcal{M}(D^{\prime})\in A]\]

**Approximate Differential Privacy.** A randomized mechanism \(\mathcal{M}:\mathcal{D}\rightarrow\mathcal{A}\) with database domain \(\mathcal{D}\) and output set \(\mathcal{A}\) is \((\epsilon,\delta)\)-differentially private if, for any output set \(A\subseteq\mathcal{A}\) and neighboring databases \(D,D^{\prime}\in\mathcal{D}\), we have

\[\Pr[\mathcal{M}(D)\in A]\leq e^{\epsilon}\Pr[\mathcal{M}(D^{\prime})\in A]+\delta\]

The privacy loss of the mechanism is defined by the parameter \(\epsilon\geq 0\) in the case of 'pure' differential privacy and by the parameters \(\epsilon,\delta\geq 0\) in the case of 'approximate' differential privacy. The definition of neighboring databases used in this paper is user-level privacy: two databases are neighboring if they differ by the addition or deletion of a single user and all records of that user. Informally, the definition above states that the addition or removal of a single individual from the database does not provoke significant changes in the probability of any differentially private output. Therefore, differential privacy limits the amount of information that the output reveals about any individual. A function \(f\) (also called a query) from a data set \(D\in\mathcal{D}\) to a result set \(A\subseteq\mathcal{A}\) can be made differentially private by injecting random noise into its output. The amount of noise depends on the sensitivity of the query.

### Fairness Metrics

In this section we present the definitions of two fairness metrics: Equal Opportunity [27] and Statistical Parity [28]. Given a data set \(W=(X,Y,C)\) with binary protected attribute \(C\) (e.g., race, sex, religion), remaining decision variables \(X\), and true outcome \(Y\), let \(Y^{\prime}\) denote the model's predicted outcome. We define Equal Opportunity and Statistical Parity as follows.

Equal Opportunity (also referred to as Equality of Odds) requires an equal True Positive Rate (TPR) across subgroups:

\[\Pr(Y^{\prime}=1|Y=1,C=0)=\Pr(Y^{\prime}=1|Y=1,C=1)\]

Statistical Parity requires positive predictions to be unaffected by the value of the protected attribute, regardless of the true label:

\[\Pr(Y^{\prime}=1|C=0)=\Pr(Y^{\prime}=1|C=1)\]

We follow the approach of [32, 33] and utilize the difference in Equal Opportunity, DEO = \(|\Pr(Y^{\prime}=1|Y=1,C=0)-\Pr(Y^{\prime}=1|Y=1,C=1)|\), and the difference in Statistical Parity, DSP = \(|\Pr(Y^{\prime}=1|C=0)-\Pr(Y^{\prime}=1|C=1)|\), to measure model fairness.
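Both gap metrics can be computed directly from a model's binary predictions; a minimal numpy sketch (the arrays and the 0/1 group encoding are illustrative):

```python
import numpy as np

def fairness_gaps(y_true, y_pred, protected):
    """Compute DSP and DEO for binary labels/predictions and a binary
    protected attribute (0 = privileged group, 1 = minority group)."""
    g0, g1 = (protected == 0), (protected == 1)
    # DSP: gap in positive-prediction rates, regardless of the true label.
    dsp = abs(y_pred[g0].mean() - y_pred[g1].mean())
    # DEO: gap in true positive rates (recall) between the two groups.
    tpr0 = y_pred[g0 & (y_true == 1)].mean()
    tpr1 = y_pred[g1 & (y_true == 1)].mean()
    deo = abs(tpr0 - tpr1)
    return dsp, deo

y_true = np.array([1, 0, 1, 1, 0, 1])
y_pred = np.array([1, 0, 0, 1, 1, 0])
protected = np.array([0, 0, 0, 1, 1, 1])
dsp, deo = fairness_gaps(y_true, y_pred, protected)
print(f"DSP = {dsp:.2f}, DEO = {deo:.2f}")
```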
### Differentially Private Synthetic Data Generators

We use several differentially private (DP) synthetic data generators that have been specifically tailored for generating tabular data with the goal of enhancing their utility for learning tasks. We consider two broad categories of approaches: (i) marginal-based methods, and (ii) Generative Adversarial Network (GAN) based models.

#### 2.3.1 Marginal-based methods

**MWEM PGM** is a variation of the multiplicative weights with exponential mechanism algorithm (MWEM), which generates synthetic data based on linear queries. The algorithm aims to produce a data distribution whose query answers are similar to those obtained when querying the real data set. The MWEM PGM variation combines probabilistic graphical models with the MWEM algorithm. The structure of the graphical model is determined by the measurements, such that no information is lost relative to a full contingency table representation.

**MST** is a synthetic data generation algorithm that selects 2- and 3-way marginals for measurement. It combines one principled step, which is to find the maximum spanning tree (MST) on the graph where edge weights correspond to the mutual information between two attributes, with some additional heuristics to ensure that certain important attribute pairs are selected, and a final step to select triples while keeping the graph tree-like.

**PrivBayes** improves the utility of the generated synthetic data by approximating the actual distribution of the data: [12] constructs a Bayesian network using the correlations between the data attributes, which allows the joint distribution of the data to be factorized into marginal distributions. Next, to ensure differential privacy, noise is injected into each of the marginal distributions, and the synthetic data is sampled from the approximate joint distribution constructed from these noisy marginals.
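The principled step of the MST algorithm described above can be illustrated in a few lines: estimate the mutual information of every attribute pair from its 2-way marginal (in MST proper these marginals are measured under DP noise, which is omitted here) and grow a maximum spanning tree over those weights. A minimal sketch with a hand-rolled Prim's loop:

```python
import numpy as np
import pandas as pd
from itertools import combinations

def pairwise_mi(df: pd.DataFrame) -> np.ndarray:
    """Empirical mutual information for every attribute pair, computed from
    2-way marginals. (MST proper measures these marginals with DP noise.)"""
    cols, n = list(df.columns), len(df.columns)
    W = np.zeros((n, n))
    for i, j in combinations(range(n), 2):
        joint = pd.crosstab(df[cols[i]], df[cols[j]], normalize=True).values
        pi = joint.sum(axis=1, keepdims=True)
        pj = joint.sum(axis=0, keepdims=True)
        mask = joint > 0
        W[i, j] = W[j, i] = (joint[mask] * np.log(joint[mask] / (pi @ pj)[mask])).sum()
    return W

def max_spanning_tree(W: np.ndarray):
    """Prim's algorithm: greedily grow the tree along the heaviest edge."""
    n, in_tree, edges = W.shape[0], {0}, []
    while len(in_tree) < n:
        i, j = max(((i, j) for i in in_tree for j in set(range(n)) - in_tree),
                   key=lambda e: W[e])
        edges.append((i, j))
        in_tree.add(j)
    return edges

# Tiny illustrative example with three categorical attributes.
df = pd.DataFrame({"age": [1, 2, 2, 1], "edu": [0, 1, 1, 0], "inc": [0, 1, 0, 1]})
W = pairwise_mi(df)
print([(df.columns[i], df.columns[j]) for i, j in max_spanning_tree(W)])
```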
#### 2.3.2 GAN-based methods

Generative adversarial networks (GANs) are a type of artificial neural network used in machine learning for generating new data samples similar to a given training data set. GANs are based on a game, in the sense of game theory, between two machine learning models, a discriminator model \(D\) and a generator model \(G\). The goal of the generator is to learn to produce realistic samples that can fool the discriminator, while the goal of the discriminator is to distinguish generated samples from real ones [13].

**Conditional Tabular GAN (CTGAN)** [34] is an approach for generating tabular data. CTGAN adapts GANs by addressing issues that are unique to tabular data and that conventional GANs cannot handle, such as the modeling of multivariate discrete and mixed discrete-continuous distributions. It addresses these challenges by augmenting the training procedure with mode-specific normalization, and by employing a conditional generator and training-by-sampling that allow it to explore discrete values more evenly. When applying differentially private SGD (DP-SGD) [35] in combination with CTGAN, the result is a DP approach for generating tabular data.

The **PATE (Private Aggregation of Teacher Ensembles)** framework [36] protects the privacy of sensitive data during training by transferring knowledge from an ensemble of teacher models trained on partitions of the data to a student model. To achieve DP guarantees, only the student model is published, while the teachers are kept private. The framework adds Laplace noise to the aggregated answers from the teachers that are used to train the student model. CTGAN can provide differential privacy by applying the PATE framework. We call this combination PATE-CTGAN; it is similar to PATE-GAN [15], which targets images. The original data set is partitioned into \(k\) subsets, and a DP teacher discriminator is trained on each subset. Further, instead of using one generator to generate samples, \(k\) conditional generators are used, one for each subset of the data.

### Data sets

**Adult data set** In the Adult data set (32561 instances), the features were categorized as protected variable (C): gender (male, female); response variable (Y): income (binary); and decision variables (X): the remaining variables in the data set. We map all continuous variables into categorical variables.

**Prison Recidivism data set** From the COMPAS data set (7214 instances), we select severity of charge, number of prior crimes, and age category to be the decision variables (X). The outcome variable (Y) is a binary indicator of whether the individual recidivated (re-offended), and race is set to be the protected variable (C). We utilize a reduced set of features as proposed in [18].

**Fair Prison Recidivism data set** We construct a "fair" data set based on the COMPAS recidivism data set by employing a data preprocessing technique for learning non-discriminating classifiers from [37], which involves changing class labels in order to remove discrimination from the data set (a sketch of this relabeling step is given at the end of this section). This approach selects examples close to the decision boundary to be either 'promoted', i.e., label flipped to the desirable class, or 'demoted', i.e., label flipped to the undesirable class (e.g., the 'recidivate' label in the COMPAS data set is the undesirable class). By flipping an equal number of positive and negative class examples, the class skew in the data set is maintained.

## 3 Experimental Evaluation

One potential outcome of synthetic data sharing is the utilization of synthetic data for training and evaluating an ML model. The trained model could be deployed without assessing its performance on real data, due to lack of data access. However, it is important to acknowledge that these trained models are ultimately applied to real data. This scenario is illustrated in Figure 1. In our experiments, we address the concern that there may be substantial disparities in performance between the evaluation phase (employing synthetic data) and the deployment phase (utilizing real data). We compare the performance of logistic regression models trained with differentially private synthesizers, focusing on two performance dimensions: utility and fairness. We follow the approach of [20] and use logistic regression for downstream classification evaluation to avoid another layer of stochasticity. To assess utility, we employ the AUC-ROC metric, which quantifies the trade-off between the recall and the false positive rate. We examine fairness from three different perspectives. Previous research [17] has indicated that differentially private machine learning models tend to perform worse on minority groups. To this end, we evaluate the decay in accuracy for the different subgroups of the protected attribute. We also measure the difference in equality of odds (DEO) and the difference in statistical parity (DSP). These metrics allow us to assess any disparities or bias in the model's predictions across different groups.
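The label "massaging" used to construct the fair COMPAS data set can be sketched as follows. This is a minimal illustration of the preprocessing idea in [37]; the logistic-regression ranker and the caller-supplied number of flips are simplifying assumptions, not the exact configuration used in the paper:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def massage_labels(X, y, protected, n_flips):
    """Kamiran-Calders style 'massaging' (a sketch): flip the labels of the
    n_flips most borderline examples in each direction, so the class skew
    is preserved. `protected` is 1 for the deprived group, 0 otherwise."""
    ranker = LogisticRegression(max_iter=1000).fit(X, y)
    score = ranker.predict_proba(X)[:, 1]   # closeness to the decision boundary
    y_fair = y.copy()
    # Promote: deprived-group negatives closest to the boundary from below.
    cand = np.where((protected == 1) & (y == 0))[0]
    y_fair[cand[np.argsort(-score[cand])[:n_flips]]] = 1
    # Demote: favored-group positives closest to the boundary from above.
    cand = np.where((protected == 0) & (y == 1))[0]
    y_fair[cand[np.argsort(score[cand])[:n_flips]]] = 0
    return y_fair
```

In [37], the number of flips is chosen so that the measured discrimination in the relabeled data drops to zero; here it is left as a parameter.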
Furthermore, we also investigate the extent to which one can accurately assess a model utilizing synthetic data sets. Again, we evaluate two performance dimensions: utility and fairness.

Our experiments include two types of synthesizers: marginal-based and GAN-based synthesizers. We generate synthetic data using three differentially private marginal-based synthesizers: MST [10], MWEM-PGM [11] and PrivBayes [38]; and four GAN-based synthesizers: DP-GAN, DP-CTGAN, PATE-GAN and PATE-CTGAN [14]. For each synthetic data generation technique, we generate data sets utilizing four different privacy-loss budgets \(\epsilon=\{0.5,1.0,5.0,10.0\}\). We randomly divide the real data set into an 80/20 split, separating the data into generator and test data sets. We run 10 rounds of synthetic DP data generation on the 80% split (generator data), where we generate synthetic train and synthetic test data sets. We utilize the SmartNoise Library ([https://smartnoise.org](https://smartnoise.org)) implementation of the synthesizers, and approximate-DP approaches use the library's default value of \(\delta\). For experiments using the PrivBayes synthesizer, we use the DiffPrivLib implementation ([https://github.com/IBM/differential-privacy-library](https://github.com/IBM/differential-privacy-library)).

We train logistic regression models using the generated DP synthetic data sets. In experiments where we test the trained models on real data, model performance is evaluated on the real test data (the 20% test split from the real data). In experiments where we test the trained models on synthetic data, the models are evaluated using the synthetic test data sets. We report, for each technique and each value of the privacy-loss parameter, the mean across 10 rounds. Our experiments use three data sets: the UCI Adult data set [39], ProPublica's COMPAS recidivism data [40], and the fair COMPAS data set as defined in Section 2.4. The fair COMPAS data set provides a way to evaluate synthetic data generation performance on fair and biased versions of the same data set.
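Putting the pieces together, one round of this train-on-synthetic / evaluate-on-synthetic-and-real loop might look as follows. The `generate_synthetic(df, epsilon)` helper is a hypothetical stand-in for a SmartNoise or DiffPrivLib synthesizer call (their APIs are not reproduced here), and the sketch assumes all columns are already numerically encoded:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def run_pipeline(real_df, target, generate_synthetic, epsilon, rounds=10):
    """Split real data 80/20, then repeatedly: synthesize a train and a test
    set at the given epsilon, fit logistic regression on the synthetic train
    set, and score AUC on both the held-out real test set - AUC(R) - and
    the synthetic test set - AUC(S)."""
    gen_df, real_test = train_test_split(real_df, test_size=0.2, random_state=0)
    auc_r, auc_s = [], []
    for _ in range(rounds):
        synth_train = generate_synthetic(gen_df, epsilon)
        synth_test = generate_synthetic(gen_df, epsilon)  # budget accounting omitted
        model = LogisticRegression(max_iter=1000).fit(
            synth_train.drop(columns=[target]), synth_train[target])
        for test_df, out in ((real_test, auc_r), (synth_test, auc_s)):
            scores = model.predict_proba(test_df.drop(columns=[target]))[:, 1]
            out.append(roc_auc_score(test_df[target], scores))
    return np.mean(auc_r), np.mean(auc_s)
```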
### Utility analysis of synthetic data in machine learning pipelines

We evaluate the quality of models trained with synthetic data sets by measuring the AUC and the accuracy for the protected class. We consider privacy-loss budgets of 0.5, 1.0, 5.0 and 10.0. We compare the AUC obtained in our experiments with the AUC measured by training models on the real (non-synthetic) Adult, COMPAS, and fair COMPAS data sets. Figure 2 (a) shows the AUC for different privacy losses and different synthesizers. The plots show the variation of the AUC as a function of \(\epsilon\) for marginal-based and GAN-based synthesizers. The top row refers to marginal-based synthesizers. Overall, the performance of the models trained on marginal-based synthetic data is very close to the baseline model trained on real data. For all three synthesizers, we see an increase in AUC as we increase \(\epsilon\). For all data sets, Adult, COMPAS and fair COMPAS, the performance of MST and MWEM-PGM is similar across all values of \(\epsilon\). PrivBayes has a slightly lower performance. For \(\epsilon>5.0\), all three synthesizers presented very similar performance. For the COMPAS data set (which has low dimensionality), the performance of synthetic data sets as training data is very close to the performance of the real data. The bottom row of Figure 2 (a) presents the performance of GAN-based synthetic data. The overall performance of this type of synthesizer is worse than that of the marginal-based synthesizers. As noted by [25], models trained on GAN-based synthetic data perform worse than models trained on marginal-based synthetic data. With AUC \(\approx 0.5\), we can say that they do not do much better than random guessing. Additionally, we see a much greater variance in results for the same privacy-loss budget, reflected in the large error bars. Finally, as the privacy-loss budget increases, the utility does not necessarily increase.

Although several works have assessed the performance of machine learning models trained with synthetic data sets [20, 21, 25], this is the first study to analyze whether synthetic data sets can be used for model assessment, and how close to reality such an assessment is. In Figure 2 (b) we present the plots of the variation of AUC for different values of epsilon. The plots in the first row refer to the performance of models trained on marginal-based synthesizers, and the plots in the second row refer to GAN-based synthesizers. By comparing the evaluation of models trained with marginal-based data in Figure 2 (a) - assessment with real data - and in Figure 2 (b) - assessment with synthetic data - we see that the assessment is very similar in both cases when the synthesizers are MST and MWEM PGM. When assessing with synthetic data, we notice that PrivBayes presents a large difference in assessment results for models trained on Adult and fair COMPAS synthetic data. GAN-based synthetic data presents inconsistent behavior when used for model assessment. When comparing the assessments in Figure 2 (a) - assessment with real data - and (b) - assessment with synthetic data - we notice that using DP-GAN synthetic data for model assessment can overestimate model AUC. Overall, GAN-based synthetic data yields assessments that are no better than random guessing.

**Marginal-based synthetic data does better at training and assessing utility of models.** We ranked the utility performance of all synthesizers based on two criteria: the ability to generate synthetic data for model training and the ability to generate synthetic data for model assessment. Table 1 shows the ranking of synthesizers when generating training and assessment data for the Adult data and the COMPAS data. The table also shows model AUC metrics when measured with real data - AUC(R) - and model AUC when measured with synthetic data - AUC(S). All table results account for synthetic data generated with privacy-loss parameter \(\epsilon=5.0\). MWEM PGM synthetic data outperforms all other synthetic data for both tasks: utility as training data for machine learning models and utility as evaluation data for machine learning models.
The performance of synthetic data sets generated with MWEM PGM and MST is good, with only a small performance decay compared to real data, both when using the synthetic data for model training and for model assessment. For model training, when comparing the AUC achieved by the model trained with the real data set (AUC = 0.892) to the metrics achieved by models trained with MWEM PGM data (AUC = 0.850) and MST (AUC = 0.836), the decrease in performance is small. The synthetic data sets also present a good performance as assessment data.
Model assessment using MST data (AUC = 0.804) and MWEM PGM data (AUC = 0.820) yields consistent results with a small decay. Although PrivBayes data presents good performance in model training (AUC = 0.846), there is a significant discrepancy between assessment utilizing real data and assessment utilizing synthetic data. We reach similar conclusions when analyzing the results for the COMPAS data. Using GAN-based data as training data resulted in models with utility very close to random guessing, as already observed in the previous analysis, with DP-GAN synthetic data performing slightly better than the rest of the GAN-based data sets.

| Synthesizer | Rank (Adult) | Rank (COMPAS) | Adult AUC (R) | Adult AUC (S) | COMPAS AUC (R) | COMPAS AUC (S) |
|---|---|---|---|---|---|---|
| MWEM PGM | 1st | 1st | 0.850 | 0.820 | 0.684 | 0.671 |
| MST | 2nd | 2nd | 0.836 | 0.804 | 0.662 | 0.643 |
| PrivBayes | 3rd | 3rd | 0.846 | 0.544 | 0.668 | 0.629 |
| DP-GAN | 4th | 6th | 0.667 | 0.880 | 0.503 | 0.568 |
| PATE-CTGAN | 5th | 4th | 0.343 | 0.504 | 0.552 | 0.492 |
| DP-CTGAN | 6th | 5th | 0.284 | 0.485 | 0.504 | 0.502 |
| PATE-GAN | 7th | 7th | 0.210 | 0.597 | 0.362 | 0.587 |

Table 1: Synthesizer utility comparison. We compare and rank all synthesizers by their ability to generate quality training data and evaluation data for machine learning pipelines. The comparison accounts for synthetic data generated with privacy-loss parameter \(\epsilon=5.0\). In addition to a performance ranking for the Adult and COMPAS data, we show a comparison of model AUC measured with real data - AUC(R) - and model AUC measured with synthetic data - AUC(S).

### Fairness analysis of synthetic data in machine learning pipelines

**Impacts on subgroup accuracy** In the previous section, we showed that adding privacy by utilizing synthetic data sets in machine learning pipelines results in a utility decrease. We now proceed to a fairness analysis. In this experiment, presented in Table 2, we analyze model accuracy for the different groups of the protected class to understand whether the addition of privacy to the data pipeline harms model utility more for the minority class than it does for the privileged class. The results in Table 2 refer to the Adult data set. From a fairness perspective, the overall behavior of all synthesizers is to have less accuracy decay for the protected class than for the privileged class. As observed in the utility experiments, MWEM PGM and MST are the best performing synthetic data sets for both pipeline tasks: training and evaluation. MWEM PGM presents good results for both the minority and privileged classes, with model accuracy very close to the baseline model - captured by accuracy minority(R) and accuracy privileged(R) in Table 2. Additionally, evaluation with MWEM PGM synthetic data sets yielded accuracy metrics for both classes - accuracy minority(S) and accuracy privileged(S) - that are very close to model evaluation done with real data.

| Synthesizer | Minority (R) | Minority (S) | Privileged (R) | Privileged (S) |
|---|---|---|---|---|
| Real | 0.924 | – | 0.804 | – |
| MWEM PGM | 0.909 | 0.898 | 0.779 | 0.770 |
| MST | 0.914 | 0.895 | 0.756 | 0.765 |
| PrivBayes | 0.892 | 0.596 | 0.709 | 0.575 |
| DP-GAN | 0.733 | 0.929 | 0.585 | 0.855 |
| PATE-CTGAN | 0.892 | 0.938 | 0.695 | 0.942 |
| DP-CTGAN | 0.889 | 0.999 | 0.693 | 0.999 |
| PATE-GAN | 0.892 | 0.874 | 0.695 | 0.854 |

Table 2: Accuracy of different subgroups (Adult data). The comparison accounts for synthetic data generated with privacy-loss parameter \(\epsilon=5.0\). We show model accuracy for the different groups measured with real data (R) and with synthetic data (S).

**Impacts on statistical parity** A model presents statistical parity if the percentage of positive predictions is the same for all subgroups. The goal of the experiments in this section is to measure whether models trained with synthetic data preserve the characteristics of models trained on real data. Our experiments measure the difference in statistical parity (DSP) of models.
We measure the DSP of models using real data - DSP(R) - and using synthetic data - DSP(S). We present a detailed comparison of the DSP for all three data sets and all synthesizers in Table 3. We notice from our experiments that several models trained on synthetic data seem to be less biased than the model trained on real data. The MWEM PGM synthesizer presented the best utility overall, based on the results presented in the previous experiments. PATE-CTGAN, however, was ranked in 5th place in utility. To better understand what is behind the apparent fairness provided by PATE-CTGAN, we investigate the percentage of positively labelled samples in the training data, the evaluation data and the predictions. We present the percentages for the minority and privileged classes for the Adult data in Table 4.

We observe in Table 4 that synthetic data generated with PATE-CTGAN presents very similar percentages of samples with positive labels, of \(\approx 5\%\), for each group belonging to the protected attribute. At first sight, this seems like a data set with promising fairness capabilities. However, when training models with such data, no positive predictions result from the model scoring: the model trained with PATE-CTGAN data acts like a majority baseline classifier for all groups. The data sets generated with DP-CTGAN presented an accentuated disparity in positive label percentages between the minority and privileged classes. In the real data, 30% of the privileged class carries positive labels, while only 10% of the minority class does. Although the DP-GAN synthesizer generates data where 31% of the privileged class carries positive labels (a value similar to the one presented in the real data - 30%), there is a significant decrease in the percentage of positive labels in the minority class, which is \(\approx 6\%\). This imbalance is further accentuated by the models trained with DP-GAN synthetic data: model predictions resulted in over half of the samples from the privileged class being classified with positive labels (versus 20% of the minority class). MWEM PGM was once again the best performing model overall, as it preserves similar percentages of positive labels for all groups, 12% and 31% (compared to 11% and 30% in the real data). Models trained with MWEM PGM also presented metrics similar to those of models trained with real data, even showing a slight improvement in fairness.

| Generation algorithm | Generated data (Female) | Generated data (Male) | Predictions(R) (Female) | Predictions(R) (Male) | Predictions(S) (Female) | Predictions(S) (Male) |
|---|---|---|---|---|---|---|
| Real | 0.109 | 0.303 | 0.055 | 0.244 | – | – |
| MWEM PGM | 0.120 | 0.307 | 0.042 | 0.209 | 0.043 | 0.202 |
| MST | 0.123 | 0.297 | 0.032 | 0.115 | 0.031 | 0.102 |
| PrivBayes | 0.265 | 0.342 | 0.004 | 0.055 | 0.056 | 0.091 |
| PATE-GAN | 0.125 | 0.144 | ≈0 | ≈0 | ≈0 | ≈0 |
| PATE-CTGAN | 0.056 | 0.058 | ≈0 | ≈0 | ≈0 | ≈0 |
| DP-GAN | 0.061 | 0.307 | 0.199 | 0.545 | 0.016 | 0.269 |
| DP-CTGAN | ≈0 | 0.002 | 0.227 | 0.130 | ≈0 | ≈0 |

Table 4: Ratio of samples with positive labels for each subgroup of the protected class in the Adult data. We compare the percentages present in the true labels of the real data and in the predicted labels. Analogously, we measure the percentage of positively labelled samples present in the generated data and in the predicted labels for each synthesizer technique. Predictions(R) and Predictions(S) denote the labels predicted by a model trained on the row's data set and applied to real test data (R) or synthetic test data (S), respectively.

The DSP delta presented in Table 3 quantifies the difference in DSP observed during model evaluation with real data and model evaluation with synthetic data.
For the Adult data set, a positive DSP delta means that evaluation with synthetic data observed fairer results than evaluation with real data. For the COMPAS and fair COMPAS data, a negative DSP delta means that evaluation with synthetic data observed fairer results than evaluation with real data. Across all data sets, models trained with MWEM PGM presented DSP metrics very similar to models trained with real data; this is captured by the DSP(R) metric.

\begin{table}
\begin{tabular}{l l c c c}
Data & Synthesizer & DSP(R) & DSP(S) & DSP delta \\
\hline \hline
Adult & MST & 0.083 & 0.072 & 0.011 \\
 & MWEM PGM & 0.168 & 0.159 & 0.009 \\
 & PrivBayes & 0.051 & 0.035 & 0.016 \\
 & DP-CTGAN & -0.001 & 0.000 & -0.001 \\
 & DP-GAN & 0.346 & 0.253 & -0.093 \\
 & PATE-CTGAN & 0.000 & 0.000 & 0.000 \\
 & PATE-GAN & 0.000 & 0.000 & 0.000 \\
 & Real & **0.189** & & \\
COMPAS & MST & -0.182 & -0.101 & -0.082 \\
 & MWEM PGM & -0.218 & -0.190 & -0.028 \\
 & PrivBayes & -0.211 & -0.166 & -0.046 \\
 & DP-CTGAN & -0.034 & 0.001 & -0.034 \\
 & DP-GAN & 0.072 & -0.089 & 0.161 \\
 & PATE-CTGAN & -0.008 & -0.009 & 0.001 \\
 & PATE-GAN & 0.000 & -0.001 & 0.001 \\
 & Real & **-0.205** & & \\
COMPAS (fair) & MST & -0.185 & -0.090 & -0.095 \\
 & MWEM PGM & -0.018 & 0.015 & -0.032 \\
 & PrivBayes & -0.065 & 0.037 & -0.027 \\
 & DP-CTGAN & -0.034 & -0.004 & -0.030 \\
 & DP-GAN & 0.066 & 0.096 & -0.030 \\
 & PATE-CTGAN & 0.000 & 0.000 & 0.000 \\
 & PATE-GAN & 0.000 & 0.000 & 0.000 \\
 & Real & **-0.025** & & \\
\end{tabular}
\end{table}
Table 3: Difference in statistical parity (DSP) of models trained with synthetic data. We measure the DSP of models using real test data - DSP(R) - and synthetic test data - DSP(S). The DSP delta quantifies the difference between DSP(R) and DSP(S). All synthetic data were generated using privacy-loss parameter \(\epsilon=5.0\).

#### Impacts on equal opportunity

Equal opportunity requires an equal true positive rate (TPR) across subgroups. The difference in equal opportunity (DEO) measures the difference between the privileged group TPR and the minority group TPR. We perform a thorough analysis to understand two points. First, what is the DEO of models trained with synthetic data sets, and how does it compare with models trained with real data? Second, we investigate whether synthetic data preserves similar true positive rates across all subgroups. We present in Table 5 experiment results comparing the DEO of models trained with differentially private synthetic data sets (\(\epsilon=5.0\)). These experiments are similar to the statistical parity experiments: we use real data - DEO(R) - to measure the DEO of models trained on synthetic data, as well as synthetic data - DEO(S). The model trained with MWEM PGM synthetic data was the only one that presented a DEO similar to the baseline model, outperforming all other models trained with synthetic data. Note that our comparison, as in the DSP case, focuses on understanding which synthetic data sets can train models that behave as closely as possible to models trained with real data.
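Analogously, the per-subgroup true positive rates and the DEO can be computed as in the short sketch below, under the same encoding assumptions as the DSP sketch above (binary labels and predictions, protected attribute in {0, 1}).

```python
import numpy as np

def tpr(y_true, y_pred, mask):
    """True positive rate restricted to the samples selected by mask."""
    pos = mask & (y_true == 1)
    return float(np.mean(y_pred[pos])) if pos.any() else float("nan")

def deo(y_true, y_pred, group):
    """Difference in equal opportunity: privileged TPR minus minority TPR."""
    return tpr(y_true, y_pred, group == 1) - tpr(y_true, y_pred, group == 0)
```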
\begin{table}
\begin{tabular}{l cc cc cc}
 & \multicolumn{2}{c}{Generated data} & \multicolumn{2}{c}{Predictions(R)} & \multicolumn{2}{c}{Predictions(S)} \\
Generation algorithm & Female & Male & Female & Male & Female & Male \\
\hline \hline
Real & 0.109 & 0.303 & 0.055 & 0.244 & – & – \\
MWEM PGM & 0.120 & 0.307 & 0.042 & 0.209 & 0.043 & 0.202 \\
MST & 0.123 & 0.297 & 0.032 & 0.115 & 0.031 & 0.102 \\
PrivBayes & 0.265 & 0.342 & 0.004 & 0.055 & 0.056 & 0.091 \\
PATE-GAN & 0.125 & 0.144 & \(\approx\) 0 & \(\approx\) 0 & \(\approx\) 0 & \(\approx\) 0 \\
PATE-CTGAN & 0.056 & 0.058 & \(\approx\) 0 & \(\approx\) 0 & \(\approx\) 0 & \(\approx\) 0 \\
DP-GAN & 0.061 & 0.307 & 0.199 & 0.545 & 0.016 & 0.269 \\
DP-CTGAN & \(\approx\) 0 & 0.002 & 0.227 & 0.130 & \(\approx\) 0 & \(\approx\) 0 \\
\end{tabular}
\end{table}
Table 4: Ratio of samples with positive labels for each subgroup in the protected class in the Adult data. We compare the percentages present in the true labels of the real data and in the predicted labels. Analogously, we measure the percentage of samples with positive labels present in the training, testing, and predicted labels for the data sets generated by each synthesizer. Predictions(R) and Predictions(S) denote the prediction labels of experiments where the model was trained on the generated data and predictions were performed on real and synthetic test data, respectively.

\begin{table}
\begin{tabular}{l l c c c}
Data & Synthesizer & DEO (R) & DEO (S) & DEO delta \\
\hline \hline
Adult & MST & 0.038 & 0.076 & -0.037 \\
 & MWEM PGM & 0.206 & 0.200 & 0.006 \\
 & PrivBayes & 0.094 & 0.030 & 0.063 \\
 & DP-CTGAN & -0.002 & \(\approx\)0.00 & -0.002 \\
 & DP-GAN & 0.527 & 0.641 & -0.116 \\
 & PATE-CTGAN & 0.000 & 0.000 & 0.000 \\
 & PATE-GAN & 0.000 & 0.000 & 0.000 \\
 & Real & **0.173** & & \\
COMPAS & MST & -0.150 & -0.089 & -0.061 \\
 & MWEM PGM & -0.215 & -0.224 & 0.009 \\
 & PrivBayes & -0.177 & -0.158 & -0.020 \\
 & DP-CTGAN & -0.031 & -0.000 & -0.031 \\
 & DP-GAN & -0.075 & 0.020 & 0.055 \\
 & PATE-CTGAN & -0.011 & -0.009 & -0.002 \\
 & PATE-GAN & 0.000 & -0.001 & 0.001 \\
 & Real & **-0.204** & & \\
COMPAS (fair) & MST & -0.181 & -0.073 & -0.107 \\
 & MWEM PGM & -0.019 & 0.037 & -0.056 \\
 & PrivBayes & -0.057 & 0.003 & -0.054 \\
 & DP-CTGAN & -0.030 & -0.005 & -0.026 \\
 & DP-GAN & 0.097 & 0.087 & 0.010 \\
 & PATE-CTGAN & 0.000 & 0.000 & 0.000 \\
 & PATE-GAN & 0.000 & -0.001 & -0.000 \\
 & Real & **-0.027** & & \\
\end{tabular}
\end{table}
Table 5: Difference in equal opportunity (DEO) of models trained with synthetic data. We measure the DEO of models using real test data - DEO(R) - and synthetic test data - DEO(S). The DEO delta quantifies the difference between DEO(R) and DEO(S). All synthetic data were generated using privacy-loss parameter \(\epsilon=5.0\).

Models trained with MST, which presented promising utility metrics and subgroup accuracy, did not capture the difference in equal opportunity as well in the experiments with the Adult data. For the experiments with the COMPAS and fair COMPAS data, MST performs better, but still worse than MWEM PGM, as we can see in Table 5. As we investigate the details of the variation in TPR, it becomes clear that MWEM PGM is the best technique for training models that preserve the fairness characteristics of models trained with real data. Experiments with the Adult data (Figure 3) show that the difference between the privileged group TPR and the minority group TPR of models trained with MWEM PGM data is very similar to the difference between the subgroup TPRs of models trained with real data.
Experiments with the COMPAS data (Figure 4) are even more compelling. Not only is the difference between the subgroup TPRs of the model trained with MWEM PGM data close to that of the model trained with real data, but the true positive rates of the subgroups are also very similar to the TPRs of the model trained with real data. Figures 3 and 4 show that models trained with marginal-based synthetic data outperform models trained with GAN-based synthetic data for our tested data sets.

Fig 3: True positive rate (TPR) variation of different subgroups of the protected attribute of the Adult data. The top two rows show the TPR variation, for different values of the privacy-loss parameter \(\epsilon\), of models trained with synthetic data and evaluated with real data. The bottom two rows show the TPR variation, for different values of \(\epsilon\), of models trained with synthetic data and evaluated with synthetic data.

We make a similar analysis when evaluating how good synthetic data sets are for assessing TPRs. Figures 3 and 4 also present plots of the TPR when synthetic data is used during model assessment. Models trained with MWEM PGM data present very similar assessments when using either real or synthetic data as test data. Models trained on MST and PrivBayes present greater discrepancies. Models trained on GAN-based data present even greater discrepancies between the assessments made with real and synthetic test data.

Fig 4: TPR variation of different subgroups of the protected attribute of the COMPAS data, for different values of \(\epsilon\), of models trained with synthetic data and evaluated with real data. We also present the TPR variation, for different values of \(\epsilon\), of models trained with synthetic data and evaluated with synthetic data.

## 4 Limitations and Future Work

Although the data sets utilized in our analysis are commonly employed in the fairness literature, extending the validity of our findings to larger-scale data sets would provide a more comprehensive understanding of the generalizability and robustness of marginal-based synthetic data approaches. Future research should focus on exploring the performance of these frameworks in real-world scenarios with diverse and extensive data sets. This would contribute to the broader applicability and reliability of synthetic data methods in various domains and facilitate a more nuanced understanding of their limitations and capabilities. Finally, extending our analysis to non-tabular data would be an interesting sequel to this work.

## 5 Conclusion

Our research comprehensively evaluates the impact of synthetic data sets for training and testing in machine learning pipelines in the case of tabular data sets. Specifically, we compare the performance of marginal-based and GAN-based synthesizers within a machine-learning pipeline and analyze various utility and fairness metrics for tabular data sets. Our main findings are as follows. Marginal-based synthetic data demonstrated utility comparable to real data in end-to-end machine-learning pipelines: on the COMPAS data set, MWEM PGM (AUC = 0.684) provides utility very close to models trained on real data (AUC = 0.684). Furthermore, we show that, for tabular data, model evaluation using synthetic data also provides results similar to evaluation using real data: the metrics obtained when utilizing marginal-based synthetic data (AUC = 0.671) are comparable to real data (AUC = 0.684). Synthetic data sets trained with MWEM PGM do not increase model bias and can provide a realistic fairness evaluation.
Our study reveals that MWEM PGM synthetic data can train models that achieve utility and fairness characteristics similar to models trained with real data. Additionally, when used to evaluate the utility and fairness of machine learning models, the synthetic data generated by the MWEM PGM algorithm exhibits behavior very similar to real data. These findings highlight synthetic data's potential reliability and viability as a substitute for real data sets in end-to-end machine learning pipelines for tabular data. Furthermore, our research sheds light on the implications for model fairness when utilizing differentially private synthetic data for model training. One crucial observation is that synthetic data that does well in model training might perform differently when used as evaluation data. This was the case with PrivBayes and some of the GAN-based synthetic data. This observation is important as synthetic data techniques gain acceptance as the standard data publishing approach in domains such as healthcare, humanitarian action, education, and population studies.
2306.04281
HornFuzz: Fuzzing CHC solvers
Many advanced program analysis and verification methods are based on solving systems of Constrained Horn Clauses (CHC). Testing CHC solvers is very important, as correctness of their work determines whether bugs in the analyzed programs are detected or missed. One of the well-established and efficient methods of automated software testing is fuzzing: analyzing the reactions of programs to random input data. Currently, there are no fuzzers for CHC solvers, and fuzzers for SMT solvers are not efficient in CHC solver testing, since they do not consider CHC specifics. In this paper, we present HornFuzz, a mutation-based gray-box fuzzing technique for detecting bugs in CHC solvers based on the idea of metamorphic testing. We evaluated our fuzzer on one of the highest performing CHC solvers, Spacer, and found a handful of bugs in Spacer. In particular, some discovered problems are so serious that they require fixes with significant changes to the solver.
Anzhela Sukhanova, Valentyn Sobol
2023-06-07T09:35:59Z
http://arxiv.org/abs/2306.04281v2
# HornFuzz: Fuzzing CHC solvers

###### Abstract.
Many advanced program analysis and verification methods are based on solving systems of Constrained Horn Clauses (CHC). Testing CHC solvers is very important, as correctness of their work determines whether bugs in the analyzed programs are detected or missed. One of the well-established and efficient methods of automated software testing is fuzzing: analyzing the reactions of programs to random input data. Currently, there are no fuzzers for CHC solvers, and fuzzers for SMT solvers are not efficient in CHC solver testing, since they do not consider CHC specifics. In this paper, we present HornFuzz, a mutation-based gray-box fuzzing technique for detecting bugs in CHC solvers based on the idea of metamorphic testing. We evaluated our fuzzer on one of the highest performing CHC solvers, Spacer, and found a handful of bugs in Spacer. In particular, some discovered problems are so serious that they require fixes with significant changes to the solver.

Keywords: metamorphic testing, fuzzing, CHC solvers

## 1. Introduction
CHC solvers are widely used in static analysis. Constrained Horn Clauses (CHC) are logical implications in first-order theories, and programs can be modeled as systems of such formulae (Becker, 1988). There are high-performance CHC solvers that automatically solve these systems: Spacer (Petersburg, 1988), Eldarica (Becker, 1988), PCSat (Kurz, 1988), etc. Currently, the most well-known and efficient CHC solver is Spacer (which regularly performs well in the annual CHC solver competition, CHC-COMP (Schafer, 1988)). This solver is part of the Z3 (Friedman, 1988) project and uses Z3 for SMT solving and interpolation. If a CHC solver works incorrectly, it can lead to wrong conclusions during the analysis and, as a result, to undetected bugs. That is why it is necessary to look for bugs in CHC solvers. One of the fast and efficient approaches to finding bugs is fuzzing: an automated software testing technique that involves providing unexpected or random data as input to a computer program and analyzing the reaction of the program.
It is commonly used in the domains of software security and quality assurance (Becker, 1988; Schafer, 1988). To the best of our knowledge, there are currently no fuzzers for CHC solvers, and fuzzers for SMT solvers (SMT fuzzers) are not suitable for testing CHC solvers. Constrained Horn Clauses are formulae of a certain structure (Becker, 1988), and SMT fuzzers usually do not retain this structure when they generate new inputs. In this scenario, we get not a CHC system but an SMT formula; that is, we test not the CHC solver but its SMT part. If SMT fuzzers retain the CHC structure, then they generate formulae with little to no variability, which is suboptimal for the fuzzing process. Moreover, SMT fuzzers do not consider the peculiarities of CHC solver implementations. Thus, creating a fuzzer for testing CHC solvers is of great interest. Some of the serious bugs that can occur in solvers are incorrect satisfiability checks (Schafer, 1988) and the generation of an invalid model. In this work, we have focused on finding exactly these bugs. Our paper makes the following contributions:
1. We propose to use metamorphic testing as a basis for fuzzing CHC solvers.
2. We have designed and developed an open-source mutational fuzzer based on metamorphic testing for CHC solvers, HornFuzz1. Footnote 1: https://github.com/AnzhelaSukhanova/HornFuzz [accessed: June 9, 2023]
3. We have tested HornFuzz on the Spacer CHC solver and successfully found both CHC system satisfiability bugs and some cases of wrong model generation. Some of the bugs have already been fixed by the Spacer developers. A group of problems with model generation has not yet been fixed, since it requires significant changes in the solver.

The rest of the paper is organized as follows. In Sect. (2) we present the basic terminologies used throughout the paper and give an overview of our approach. In Sect. (3) we present the main ideas on which HornFuzz is based, its bug space, and its mutations. In Sect. (4) we explain the technical solutions and describe our implementation in detail. Then, in Sect. (5), we analyze the bugs discovered. Sect. (6) includes information about related work; we draw conclusions and briefly discuss plans for future work in Sect. (7).

## 2. Overview
This section gives a definition of Constrained Horn Clauses and discusses solvers of systems of such clauses. It also introduces fuzzing and metamorphic testing, and presents the idea on which HornFuzz is based.

### Constrained Horn Clauses
A Constrained Horn Clause (CHC) is a first-order logic formula of the form \(\forall V\,(\varphi\wedge p_{1}[X_{1}]\wedge\cdots\wedge p_{n}[X_{n}])\to h[X]\), where
* \(\varphi\) -- a constraint over some background theory;
* \(V\) -- variables;
* \(X_{1},\ldots,X_{n},X\) -- terms over \(V\);
* \(p_{1},\ldots,p_{n}\) -- uninterpreted fixed-arity predicates;
* \(h\) -- an uninterpreted fixed-arity predicate or \(\bot\) (Borde et al., 2017).

A Constrained Horn Clause is linear if its premise contains at most one uninterpreted predicate. A system of clauses is linear if every clause in it is linear. Accordingly, a system is non-linear if at least one of its clauses is non-linear. Such systems of Constrained Horn Clauses are more difficult to solve than linear systems (Kolmogorov, 1979). Rules are the Constrained Horn Clauses containing an uninterpreted predicate in the implication conclusion.
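To make the definition concrete, the toy system below (our illustration, not one of the paper's benchmarks) encodes two rules and a query over a single uninterpreted predicate `Inv`, using z3py's HORN logic, which Z3 dispatches to Spacer:

```python
from z3 import (Ints, Function, IntSort, BoolSort, ForAll, Implies, And,
                BoolVal, SolverFor)

x, y = Ints("x y")
Inv = Function("Inv", IntSort(), BoolSort())     # uninterpreted predicate

s = SolverFor("HORN")                            # Z3's CHC engine
s.add(ForAll([x], Implies(x == 0, Inv(x))))                        # rule
s.add(ForAll([x, y], Implies(And(Inv(x), y == x + 1), Inv(y))))    # rule
s.add(ForAll([x], Implies(And(Inv(x), x < 0), BoolVal(False))))    # query
print(s.check())   # sat: e.g. Inv(x) := x >= 0; s.model() contains it
```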
### CHC solvers
To efficiently solve CHC systems, CHC solvers rely on performing multiple specific SMT queries, and to do that they use SMT solvers. SMT solvers are complex tools for evaluating the satisfiability of SMT instances. Satisfiability Modulo Theories (SMT) is the problem of deciding the satisfiability of a first-order formula over some first-order theories. In this paper, we focus on the most widely used and efficient CHC solver -- Spacer. Spacer is part of the open-source project Z3, one of the highest performing SMT solvers (Borde et al., 2017), and uses its components for SMT solving and interpolation (Kolmogorov, 1979). Spacer supports linear real and integer arithmetic and array theory, and offers best-effort support for many other SMT theories: data structures, bit-vectors, and non-linear arithmetic (Kolmogorov, 1979). It is also able to solve non-linear clauses.

### Fuzzing
Fuzzing is a technique for automated software testing which is based on analyzing the program's reaction to random input data (Kolmogorov, 1979). A fuzzer can be generation-based or mutation-based, depending on whether inputs are generated from scratch or by modifying existing inputs (Kolmogorov, 1979). In the first case, the fuzzer can generate data, for example, according to a specified grammar. Mutational fuzzers start their work with a certain set of initial inputs (so-called seeds); as they work, they change the seeds through the use of mutations. Metamorphic testing is a variant of mutation-based fuzzing. It is a fuzzing technique that proposes to generate new test data while preserving some seed property (Borde et al., 2017). The expectation is that the seed and its mutant share a specific common property called the metamorphic relation; i.e., fuzzer mutations must retain this property.

## 3. Concepts
In this section, we present the main ideas on which HornFuzz is based. We describe the bug space that it considers and the mutations it uses.

### Main idea
It is difficult to create a variety of Constrained Horn Clauses with non-trivial solutions, so the mutational approach is more suitable for fuzzing CHC solvers than the generative one. This way of generating test data is the basis of HornFuzz. Additionally, there are many benchmarks with CHC systems from solver competitions and papers that can be used as seeds. At the moment, the capabilities of existing CHC solvers vary greatly, and there is no reference solver; this discourages the use of solvers as oracles. Given this, metamorphic testing is of particular interest for finding CHC satisfiability bugs. This testing technique does not require a complex oracle (in our case, another reference CHC solver) with which we would check the results of the solver under test. In the context of satisfiability check bugs, the metamorphic relation is satisfiability: if the seed is satisfiable, then its metamorphic mutants should be satisfiable too, and vice versa. Thus, using equivalent mutations, we can quickly check such a complex property as satisfiability. This idea formed the basis of our HornFuzz fuzzer.

### Bug space
HornFuzz targets two kinds of bugs. First, it tries to find satisfiability check bugs. Each mutant is checked by the solver, and the result is compared with the seed's satisfiability. If the satisfiability differs, we have found a satisfiability check bug. If both formulae are satisfiable, the fuzzer substitutes the mutant's model (its satisfying assignment) into its formula and checks whether it is correct: every mutant clause should be satisfied by its model. If this property is violated, we have found a model generation bug.
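Under some simplifying assumptions, these two checks can be sketched with z3py as follows; `seed` and `mutant` stand for lists of quantified clauses such as those built in the example above, and the model check naively evaluates every clause under the returned model and asks a plain solver whether a counterexample exists. HornFuzz's actual implementation is more elaborate than this sketch.

```python
from z3 import Solver, SolverFor, And, Not, sat, unknown

def solve(clauses):
    s = SolverFor("HORN")
    s.add(clauses)
    res = s.check()
    return res, (s.model() if res == sat else None)

def check_pair(seed, mutant):
    r_seed, _ = solve(seed)
    r_mut, model = solve(mutant)
    if unknown in (r_seed, r_mut):
        return "log info"                       # cf. Table 1
    if r_seed != r_mut:
        return "satisfiability bug"             # metamorphic relation violated
    if r_mut == sat:
        # Evaluate every clause under the returned model; if the negation of
        # their conjunction is satisfiable, some clause is not satisfied.
        evaluated = [model.eval(c, model_completion=True) for c in mutant]
        chk = Solver()
        chk.add(Not(And(evaluated)))
        if chk.check() == sat:
            return "model generation bug"
    return "pass"
```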
Additionally, HornFuzz collects statistics on cases where the CHC solver cannot solve an instance or times out when checking satisfiability or a model. We describe the decision process for the different cases in Table (1).

\begin{table}
\begin{tabular}{c|c|c|c}
\hline \hline
Truth \(\backslash\) Solver result & sat & unsat & unknown \\
\hline
sat & check model & handle bug & log info \\
\hline
unsat & handle bug & pass & log info \\
\hline \hline
\end{tabular}
\end{table}
Table 1. Problem handling.

### Mutations
Mutations used by HornFuzz can be divided into three types: Z3 rewrites (mutation 9), changing solver parameters (mutation 10), and our own CHC-specific mutations (mutations 1-8). Z3 rewrites and solver parameter transformations are complex satisfiability-preserving transformations which are already implemented in Z3. Mutations 1-4 and 6 can be applied only to the clause body.

1. SWAP_AND swaps two terms of a random conjunction: \[\varphi\wedge\psi\leadsto\psi\wedge\varphi\]
2. DUP_AND duplicates one term of a random conjunction: \[\varphi\wedge\psi\leadsto\varphi\wedge\psi\wedge\varphi\]
3. BREAK_AND splits a random conjunction into two: \[\varphi\wedge\psi\wedge\tau\leadsto\varphi\wedge(\psi\wedge\tau)\]
4. SWAP_OR swaps two terms of a random disjunction: \[\varphi\lor\psi\leadsto\psi\lor\varphi\]
5. MIX_BOUND_VARS shuffles the variables in the quantifier prefix: \[\forall\,(x,y,z).\,\psi(x,y,z)\leadsto\forall\,(y,z,x).\,\psi(x,y,z)\]
6. ADD_INEQ replaces a random inequality with a conjunction of the same inequality and a weaker one: \[x<c\leadsto(x<c)\land(x<c+1)\]
7. ADD_LIN_RULE adds a linear rule that can be simplified to \[\forall\,\tilde{x}.\,\bot\to P(\tilde{x})\] where \(P\) is a randomly chosen uninterpreted predicate from the initial formula. The premise of the implication is taken from a set of unsatisfiable formulae.
8. ADD_NONLIN_RULE adds a non-linear rule of the form: \[\forall\,\tilde{v}.\,(\exists\,\tilde{x}.\,(x_{1}>x_{2}\land P(\tilde{x},\tilde{v}))\land(x_{2}>x_{3}\land P(\tilde{x},\tilde{v}))\land\cdots\land(x_{n}>x_{1}\land P(\tilde{x},\tilde{v})))\to P(\tilde{x},\tilde{v})\] where \(\tilde{x}=(x_{1},x_{2},\ldots,x_{n})\), \(x_{i}\in\mathbb{Z}\), and \(n\) is a random number from 1 to 10; \(\tilde{v}=(v_{1},v_{2},\ldots,v_{m})\), where \(m\) is the arity of a randomly chosen uninterpreted predicate \(P\), and the types of the elements of \(\tilde{v}\) correspond to the argument types of \(P\). The notation \(P(\tilde{x},\tilde{v})\) actually means that \(P\) is applied to a random sequence of \(m\) arguments from the union of \(\tilde{x}\) and \(\tilde{v}\), with respect to the declared argument types (at least one such sequence always exists: it is \(\tilde{v}\)).
9. Equivalent rewrites offered by Z3, with or without parameters (see (A.1) for a complete list of parameters).
10. Changing the solver parameters (see (A.2) for a complete list of parameters) that affect the instance solving process (Brenker and Kern, 1996).

A CHC can also be represented as \(\forall V\,\neg(\varphi\wedge p_{1}[X_{1}]\wedge\cdots\wedge p_{n}[X_{n}])\lor h[X]\), and some seeds contain clauses in this form. We do not use mutations DUP_OR and BREAK_OR, analogous to DUP_AND and BREAK_AND, because duplicating any of these disjunction terms would break the CHC structure. The mutation MIX_BOUND_VARS can affect the order in which clauses are considered by the solver. We expect the mutation ADD_INEQ to affect the Model Based Projection (MBP) process for LIA (Kern, 1996). When MBP tries to eliminate integer variables from a formula, it builds upper and lower bounds for these variables. These bounds are based on the formula's inequalities, and the choice of a lower or an upper bound depends on the number of the corresponding inequalities.
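As an illustration of the simplest of these mutations, the sketch below applies DUP_AND to a quantifier-free clause body given as a z3 expression. The representation (a plain body, with the quantifier prefix and head handled elsewhere) and all names are our assumptions, not HornFuzz's code.

```python
import random
from z3 import And, is_and, substitute

def subconjunctions(e, acc=None):
    """Collect all conjunctions occurring anywhere in expression e."""
    acc = [] if acc is None else acc
    if is_and(e):
        acc.append(e)
    for child in e.children():
        subconjunctions(child, acc)
    return acc

def dup_and(body):
    """phi /\\ psi  ~>  phi /\\ psi /\\ phi, at a randomly chosen conjunction."""
    targets = subconjunctions(body)
    if not targets:
        return body                             # mutation not applicable
    conj = random.choice(targets)
    extra = random.choice(conj.children())
    return substitute(body, (conj, And(list(conj.children()) + [extra])))
```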
## 4. Implementation
In this section, we present the details of the HornFuzz implementation: we describe the HornFuzz workflow, the seed selection heuristics, and other implementation details. This section also gives an overview of the bug case reducer.

### HornFuzz process
Figure (1) presents an overview of the HornFuzz work process. HornFuzz works continuously, mutating and checking instances until it is stopped forcibly. Its work begins with seed preparation. Each seed defines a group consisting of the seed and its mutated versions (a seed group). After seed preprocessing, a seed group is selected. The last group element will be mutated: it can be either the seed or its last mutant. The next step is to select a mutation and apply it. Then the resulting mutant is checked for all types of handled bugs (Section (3.2)).

Figure 1. HornFuzz workflow.

If no bugs are found, the mutant is added to its seed group. This process is then repeated until it becomes necessary to change the seed group. This can happen in the following cases.
* The execution trace generated by the solver under test when solving mutants from the same seed group does not change for \(5\cdot n\) runs, where \(n\) is the number of clauses in the system.
* Once a bug has been detected, continuing to work with that mutant's seed group may result in the same bug being rediscovered. Therefore, we use a bug detection limit; upon reaching it, the fuzzer proceeds to the next instance.
* We have reached the limit on the number of "unknown" solver results. This is done because there is a high probability that the solver will not be able to solve further mutants of a seed that has already resulted in "unknown". For example, if an instance has become so complex that the solving timeout is reached, then most likely other mutations will also lead to a timeout.

Also, in cases where a formula takes too long to solve or the solver does not discover new traces for a long time, the mutants can be discarded. The seed group then returns to the state before the addition of the mutants, that is, when it contains only the seed. The highlighted blocks in Figure 1 are configurable. They may be skipped or realized in several different ways, depending on the options with which HornFuzz is launched.
* The choice of mutation can be weighted or equiprobable.
* Seed selection depends on the chosen heuristics, which attempt to predict which CHC instances are more likely to trigger a bug.

### Metrics
There are two main metrics for fuzzer performance: code coverage (Zhou et al., 2017) and unique execution traces (Beng et al., 2017). We use both metrics for different needs. To evaluate HornFuzz during the fuzzing process, we use the number of unique execution traces discovered when solving instances. This metric is good for tracking progress, since it allows us to understand whether the instance solving process has changed; code coverage does not provide such information. In addition, it makes sense to collect not the entire execution trace but only the sequence of the main steps of solving the Constrained Horn Clause system. When a trace is too detailed, small changes that do not affect the solution process (for example, the deletion of subexpressions or other auxiliary actions) can lead to an increase in the number of unique traces.
That is, with an insignificant change in the CHC system and its solution, the value of the metric would still increase. To avoid such irrelevant boosting of this metric, we use a selective collection of unique execution traces. An example of a fragment of an execution trace is presented in Listing (1). For evaluating the effectiveness of the fuzzer in general, code coverage is a good metric; that is why we collect coverage statistics too. Unique execution traces are the priority metric for us, but it is also important to consider coverage statistics.

### Seed selection
Most mutation-based fuzzers are sensitive to seed selection (Zhou et al., 2017). Therefore, it is more efficient to choose an instance for mutation not in random order but according to some heuristic. HornFuzz implements several ways to prioritize CHC systems. When starting the fuzzer, any of the following three heuristics can be selected, as well as their combination (the instances can be divided into groups according to one heuristic and ordered within each group according to the other).
1. Selection of instances that cover the rarest transitions when solving.
2. Selection of the most complex instances. According to this heuristic, non-linear systems are preferred to linear ones, and within these groups instances are ordered by the number of uninterpreted predicates.
3. Selection of the simplest instances, the inverse of heuristic (2).

The second heuristic is designed to test the solver behavior on complex inputs, and the third heuristic is focused on increasing the number of launches, because they will be faster. We now take a closer look at the first heuristic, since it seems to us to be the most efficient at finding bugs. The solver can be represented as a system that changes its state at discrete times, i.e., a discrete-time Markov chain (Koren and Koren, 2016). The states in this context are the steps of the CHC system solving recorded in the execution trace, and the transitions are given by two states following each other in the trace. That is, speaking of rare solver transitions, we mean pairs of states, not single states. It is important to understand how the solver got into a particular trace state, since this may indicate with what solver or memory state it arrived there. Therefore, information about a transition is more meaningful than information about visiting a trace state or line. Moreover, this heuristic is particularly interesting because the transitions that the solver takes least often correspond to parts of the solver that are executed rarely. Intuitively, rarely executed parts have a higher probability of containing a bug. Thus, this heuristic is not focused so much on opening new traces as on reaching rare transitions, which are more likely to lead to finding bugs. We find the rarest transitions by collecting statistics on solving the input formulae. HornFuzz builds a transition matrix for each instance and a combined matrix of all transitions using the execution traces. Thus, the probability of each transition can be calculated as the number of times this transition was made divided by the total number of transitions from its source state. Let each solver state have a number, and let \(n\) be the total number of states. To select seed groups with the rarest transitions, we compile a transition matrix \(P\): a stochastic matrix of transition probabilities \(p_{ij}\) from state \(i\) to state \(j\).
\[P=\begin{pmatrix}p_{11}&p_{12}&\ldots&p_{1n}\\ p_{21}&p_{22}&\ldots&p_{2n}\\ \vdots&\vdots&\ddots&\vdots\\ p_{n1}&p_{n2}&\ldots&p_{nn}\end{pmatrix}\]
From this matrix, a weight matrix \(W\) can be obtained, where the transition weight is inversely proportional to the transition probability:
\[w_{ij}=\begin{cases}\dfrac{1}{p_{ij}}&\text{if }p_{ij}>0,\\ 0&\text{otherwise.}\end{cases}\]
By element-wise multiplying the weight matrix with the matrix \(T_{m}\), which contains the transition counts of a particular instance \(m\), and taking the sum of all elements of the result, we get a value that indicates the priority of this instance:
\[k_{m}=\sum_{1\leqslant i,j\leqslant n}w_{ij}\cdot t_{ij}^{m}\]
where \(t_{ij}^{m}\) is an element of \(T_{m}\). Instance selection is determined by the value of \(k_{m}\).

### Mutation choice
Depending on the HornFuzz launch options, it can choose mutations equiprobably or in a weighted manner. In both cases, the fuzzer first selects the type of mutation: rewrites, solver parameters, or our own mutations. The probability of choosing each type is the same. The current mutation is then selected from the mutations of this type. If HornFuzz is run with the option of weighted mutation selection, the mutation weights are updated throughout the fuzzer's run. The change in a mutation's weight depends on whether the mutation opens a new execution trace. Initially, all mutations have the same weight of 0.1. The updated weight is calculated using the following formula, based on the golden ratio:
\[w=0.62\cdot w^{\prime}+0.38\cdot p\]
where \(w^{\prime}\) is the current weight and \(p\) is the probability of opening a new trace (the ratio of how often this mutation resulted in a new unique execution trace to the total number of applications of this mutation).
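A small numpy sketch of both weighting schemes above (in our notation): building the stochastic matrix \(P\) from transition counts, scoring an instance by \(k_{m}\), and applying the golden-ratio weight update.

```python
import numpy as np

def transition_matrix(counts):
    """Row-normalise global transition counts into the stochastic matrix P."""
    row = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row, out=np.zeros_like(counts, dtype=float),
                     where=row > 0)

def priority(T_m, P):
    """k_m = sum_ij w_ij * t_ij, with w_ij = 1/p_ij where p_ij > 0, else 0."""
    W = np.where(P > 0, 1.0 / np.where(P > 0, P, 1.0), 0.0)
    return float((W * T_m).sum())

def update_weight(w_prev, p_new_trace):
    """Golden-ratio update of a mutation weight."""
    return 0.62 * w_prev + 0.38 * p_new_trace
```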
### Reducer
Large chains of mutations and complex instances are not suitable for reporting bugs, as they are difficult for a human to understand. We implemented a custom test case reducer for our CHC fuzzing. It allows one to perform the following activities:
* search for the minimum subsequence of the mutation chain which still triggers a bug;
* simplify the CHC system that triggered a bug (the problem CHC system).

Minimizing chains of mutations helps to localize the bug. In particular, if one of the mutations changes a solver parameter associated with a certain transformation, then most likely the bug is in this transformation. Reducing the mutant greatly simplifies bug analysis. For example, a formula that combines several theories may, after reduction, contain operations of only one theory, which allows one to quickly localize the bug. For the mutation chain reduction, the Delta Debugging algorithm is used (Blek and Kresner, 2017), which can be described as follows. The mutation chain is divided into chunks of fixed size (initialized to half the size of the initial chain). The algorithm considers each chunk in turn, checking whether the bug still occurs when that chunk is removed, and eliminating it if so. Afterwards, the chunk size is halved and these steps are repeated. Reduction terminates when no chunk of size one can be removed. The problem CHC system is reduced by removing subexpressions using the Hierarchical Delta Debugging algorithm (HernFuzz, 2017). Instead of reducing subsequences like regular Delta Debugging, it removes subtrees of the CHC system AST. First, attempts are made to completely remove each clause from the system. Then the reducer goes through the AST of the remaining clauses. When excluding parts of an instance AST, the following must hold:
* the reduced system remains equivalent to its original formula;
* the bug remains reproducible.

Equivalence is checked as follows. Let \(\{f_{i}\}_{i=0}^{n}\) be the original system of clauses and \(\{r_{i}\}_{i=0}^{n}\) be the simplified one. Then the system \(\{f_{i}\neq r_{i}\}_{i=0}^{n}\) must be unsatisfiable.

### Details
We implemented the HornFuzz algorithm as a prototype fuzzer, written in Python. Below are some empirically derived constants used by the prototype.
* The number of times the fuzzer works with instances from one seed group in a row is limited to 100.
* The mutation weights are recalculated every 1000 runs.

The fuzzer can be launched with the following options.
* -mutations allows one to choose the mutation types; that is, HornFuzz can be configured to use only one or two mutation types (by default, HornFuzz uses all types). There are three mutation types: our own mutations, Z3 rewrites, and changing solver parameters.
* -heuristic allows one to choose the seed selection heuristic. One of the following four options can be chosen: seed selection by complex inputs, by simple inputs, by rare transitions, or in default order.
* -options allows one to run the fuzzer with an equiprobable choice of mutations.

## 5. Evaluation
In this section, we discuss the HornFuzz evaluation and analyze its results together with the bugs found in the Spacer CHC solver. We noticed that after adding mutations that use the solver parameters, the fuzzer began to detect more bugs. In addition, it is necessary to check the assumption about the efficiency of seed selection by rare transitions. Thus, we consider two main hypotheses: using solver parameters significantly increases the probability of finding a bug; using seed selection by rare transitions increases the fuzzer's efficiency. To test these hypotheses, we aimed to answer the following research questions.
**RQ 1:** Does using solver parameters and/or seed selection by rare transitions allow one to explore different scenarios of solving CHC systems?
**RQ 2:** Which mutations are most effective?
**RQ 3:** How effective are different HornFuzz configurations in finding bugs?
We ran all experiments on a machine with the following environment: Arch Linux x86_64 operating system, Intel Core i7-4790 CPU, 3.60GHz, 32Gb RAM.

### Seeds
Our experiment uses the following CHC systems as seeds:
* benchmarks of the CHC solver competition CHC-COMP for 2021 (Brandt et al., 2021);
* benchmarks of the international software verification competition SV-COMP (Brandt et al., 2021);
* benchmarks from papers (Brandt et al., 2021; Krizhevsky et al., 2021).

HornFuzz does not use formulae that Spacer cannot solve or that cause a timeout. Currently, the fuzzer uses 3404 Constrained Horn Clause systems in LIA, LRA, array theory, and their combinations.

### Experiments
To answer the proposed research questions, we compare four HornFuzz configurations. We can describe these configurations using the set of options with which the fuzzer was launched (Table 3).
* A "naive" HornFuzz configuration does not use solver parameters and does not prioritize instances.
* A "parameter" configuration uses all mutations, but does not prioritize instances.
* A "transition" configuration does not use solver parameters, but prioritizes instances by rare transitions.
* A "full" HornFuzz configuration works with all mutations and prioritizes instances by rare transitions.
All of these configurations used a weighted choice of mutations. We compare these HornFuzz versions on 10 runs, each of which lasted 24 hours.

**RQ 1.** The code coverage and unique trace statistics, averaged over all launches along with standard deviations, are presented in Table (2). As a baseline, we compare against a run with no mutations at all (i.e. against the basic unmutated seeds). The data show that the configuration with seed selection by rare transitions outperforms the naive one. The transition version has more coverage on average (relative to the baseline) and also discovers almost the same number of unique traces, while having 17% fewer executions. This can be explained by the fact that launching instances with rarer transitions is targeted at opening new unique traces. The connection between the focus on rare transitions and the coverage growth is not so obvious, but it is there: rare transitions often lead to previously unvisited solver code lines. The version using solver parameters also outperforms the naive version in fewer launches. On average, it requires 5.88 runs to find a new unique trace, compared to 11.9 runs for the naive configuration. Thus, although the naive version of HornFuzz is faster, the transition and parameter versions are better at analyzing the solver behavior. The full HornFuzz configuration explores the solver much more efficiently than the naive and transition configurations. Compared to the parameter version, the advantage of the full version is not so significant, but it is still there, and with fewer runs. Although the coverage of the parameter version may be larger (considering the deviation), the full version is more stable. Thus, we can conclude that the use of solver parameters and seed selection based on rare transitions increase the HornFuzz efficiency.

\begin{table}
\begin{tabular}{l|c|c|c|c|c}
\hline \hline
Data & Only seeds & Naive version & Transition version & Parameter version & Full version \\
\hline
Runs & 3404 & **146083** \(\pm\) 1781 & 120823 \(\pm\) 2828 & 52885 \(\pm\) 3942 & 47304 \(\pm\) 1968 \\
\hline
Line coverage, lines & 52257 & 52845 \(\pm\) 358 & 53380 \(\pm\) 553 & 56280 \(\pm\) 3100 & **57530** \(\pm\) 566 \\
\hline
Line coverage, \% & 18.5 & 18.71 \(\pm\) 0.0013 & 18.9 \(\pm\) 0.002 & 19.92 \(\pm\) 0.011 & **20.37** \(\pm\) 0.002 \\
\hline
Growth, lines & – & 588 \(\pm\) 26 & 1123 \(\pm\) 458 & 4023 \(\pm\) 2566 & **5273** \(\pm\) 486 \\
\hline
Growth, \% & – & 0.21 \(\pm\) 0.01 & 0.4 \(\pm\) 0.16 & 1.42 \(\pm\) 0.91 & **1.87** \(\pm\) 0.17 \\
\hline
Unique traces & 933 & **12273** \(\pm\) 138 & 12249 \(\pm\) 89 & 8991 \(\pm\) 1035 & 8461 \(\pm\) 265 \\
\hline
Number of runs to open a new trace & – & 11.9 & 9.86 & 5.88 & **5.59** \\
\hline \hline
\end{tabular}
\end{table}
Table 2. Coverage and unique trace statistics averaged over all runs, along with standard deviations.

\begin{table}
\begin{tabular}{c|c|c}
\hline \hline
-mutations \(\backslash\) -heuristic & default & rare transitions \\
\hline
our own mutations, Z3 rewrites & naive & transition \\
\hline
all mutations: our own mutations, Z3 rewrites, changing solver parameters & parameter & full \\
\hline \hline
\end{tabular}
\end{table}
Table 3. HornFuzz configurations.

**RQ 2:** To answer this question, we collected statistics on mutation weights. The mutation weight represents the probability of opening a new trace when using this mutation. Table (4) shows the solver parameter mutations with the highest weights for the HornFuzz versions that use this mutation group.
Table (5) shows the other mutations with the highest weights for all HornFuzz configurations. For each run, the final mutation weights were taken and then averaged over all runs. The data obtained confirm our hypothesis about the efficiency of the solver parameters. The high weights of the solver parameters mean that their use often leads to the discovery of a new trace. It is also shown that the proposed CHC solver-specific mutations have a significantly higher probability of opening a new trace than any simplifications (SMT-specific equivalent rewrites). Among the simplifications, empty_simplify has the highest probability of discovering a new trace; that is, simplification with the rewriting rules enabled by default, which mostly perform Boolean formula simplifications.

**RQ 3:** The considered runs revealed only cases of incorrect model generation. We are not yet able to classify all bugs, but we have done some work on bug localization. We manually divided the bugs found into several groups according to the parts of the solver to which they belong. These groups are presented in Table (6). The table shows bug numbers averaged over all launches, along with standard deviations. When analyzing the detected bugs, we noticed that many of them are caused by transformations of CHC systems: in particular, by linear and eager rule inlinings. In addition to the transformation groups, our attention was also attracted by bugs that affect the Spacer core. We also found bugs that belong to several parts of the solver and categorize them as "unclassified". The full version is shown to outperform all other configurations in terms of the number of bugs discovered on average. It is also shown that the use of solver parameter mutations or seed prioritization increases the number of bugs found compared to the naive version. Since many discovered bugs belong to linear and eager rule inlinings, it is interesting to compare the HornFuzz configurations in terms of the number of discovered bugs in other parts of the solver, as this shows how well the fuzzer explores different solver parts. In detecting bugs in the Spacer core, the parameter version is in the lead; however, in the other groups, the full fuzzer configuration outperforms all other configurations. In summary, the data obtained convince us that the use of solver parameters and instance selection by rare transitions improve the HornFuzz quality.

### Discovered bugs
During our experiments, HornFuzz found 2 confirmed satisfiability bugs and 13 confirmed model generation problems in Spacer. 11 of them have already been fixed by the Z3 developers, while the others are in the process of being fixed. The following issues have been resolved.
* Duplication of a conjunction element (mutation DUP_AND) changed the solving result from sat to unsat2. Footnote 2: https://github.com/Z3Prover/z3/issues/5714 [accessed: June 9, 2023]
* When solving an instance with the fp.xform.array_blast solver parameter, the solving result changed from sat to unsat3. When solving a system with fp.xform.array_blast, in instances in the theory of arrays, pairs of equalities of the form \((v_{1}=select\ A\ i_{1})\land(v_{2}=select\ A\ i_{2})\) were replaced by \((i_{1}=i_{2}\to v_{1}=v_{2})\) (Ackermann reduction (14)), where \(A\) is an array parameterized by the set of indices \(I\) and the set of values \(V\), \(i_{1},i_{2}\in I\), and \(v_{1},v_{2}\in V\).
* 9 cases of incorrect model generation4.
\begin{table}
\begin{tabular}{l|c|c}
\hline \hline
Parameter & Parameter version & Full version \\
\hline
xform.transform\_arrays & 0.967 & 0.964 \\
\hline
xform.slice & 0.957 & 0.960 \\
\hline
xform.inline\_eager & 0.750 & 0.761 \\
\hline
xform.inline\_linear & 0.455 & 0.477 \\
\hline
xform.tail\_simplifier\_pve & 0.376 & 0.399 \\
\hline
xform.elim\_term\_ite & 0.321 & 0.320 \\
\hline
xform.inline\_linear\_branch & 0.284 & 0.318 \\
\hline
spacer.eq\_prop & 0.284 & 0.305 \\
\hline
spacer.use\_inductive\_generalizer & 0.279 & 0.318 \\
\hline
other & \multicolumn{2}{c}{\(<0.3\)} \\
\hline \hline
\end{tabular}
\end{table}
Table 4. Probabilities of opening a new trace when changing solver parameters.

\begin{table}
\begin{tabular}{l|c|c|c|c}
\hline \hline
Mutation & Naive version & Transition version & Parameter version & Full version \\
\hline
SWAP\_OR & 0.282 & 0.344 & 0.350 & 0.380 \\
\hline
ADD\_INEQ & 0.226 & 0.270 & 0.262 & 0.279 \\
\hline
SWAP\_AND & 0.207 & 0.247 & 0.240 & 0.255 \\
\hline
DUP\_AND & 0.207 & 0.251 & 0.242 & 0.255 \\
\hline
MIX\_BOUND\_VARS & 0.191 & 0.230 & 0.219 & 0.234 \\
\hline
BREAK\_AND & 0.120 & 0.143 & 0.125 & 0.140 \\
\hline
empty\_simplify & 0.094 & 0.113 & 0.109 & 0.118 \\
\hline
elim\_and & 0.009 & 0.012 & 0.010 & 0.014 \\
\hline
other & \multicolumn{4}{c}{\(<0.01\)} \\
\hline \hline
\end{tabular}
\end{table}
Table 5. Probabilities of opening a new trace when applying simplifications and our mutations.

A group of cases of incorrect model generation is associated with bugs in clause transformations. Four such bugs have not been fixed5, since fixing them and preventing other bugs of this type requires significant and thoughtful changes in the solver. Footnote 5: https://github.com/Z3Prover/z3/issues/5874 [accessed: June 9, 2023] Footnote 6: https://github.com/Z3Prover/z3/issues/5882 [accessed: June 9, 2023] Footnote 7: https://github.com/Z3Prover/z3/issues/5892 [accessed: June 9, 2023]

Since bug deduplication has not yet been implemented in the fuzzer, the bugs found were reported sequentially: HornFuzz always tested the solver version in which the last detected bug was fixed.

## 6. Related Work
The closest related work for HornFuzz is research on fuzzing SMT solvers. There are many efficient SMT solver fuzzers: STORM (Han et al., 2017), BanditFuzz (Kang et al., 2017), FuzzSMT (FuzzSMT, 2017), Falcon (FuzzSMT, 2017), OpFuzz (Puzz, 2017), YinYang (Puzz, 2017), etc. STORM is an open-source black-box mutational fuzzer for detecting critical bugs in SMT solvers. STORM uses fragments of existing SMT instances to generate new inputs. The fuzzer generates instances that are satisfiable by construction, and thus it solves the oracle problem. BanditFuzz is a multiagent reinforcement-learning performance SMT fuzzer. BanditFuzz generates tests according to the grammar given to it and mutates them while maintaining the structure. The fuzzer examines which grammatical structures lead to bug occurrences. BanditFuzz is open-source. FuzzSMT is a grammar-based black-box SMT fuzzer. FuzzSMT randomly generates syntactically valid SMT formulae in array or bit-vector theory in order to detect critical defects. Falcon is a grammar-based generative fuzzer based on exploring the functionalities used by the SMT solver (the configuration space).
It learns the correlations between the generated inputs (the formula space) and the configuration space and proposes a feedback-driven mechanism. OpFuzz is a type-aware mutational SMT fuzzer. OpFuzz leverages type-aware operator mutation to generate test inputs and validates the results of the SMT solvers by comparing the results of two or more solvers and reporting their inconsistencies. YinYang is a mutational fuzzer for SMT solvers based on the Semantic Fusion methodology. It fuses two existing equisatisfiable formulae into a new formula that combines the structures of its ancestors in a novel manner and preserves satisfiability by construction. Unfortunately, none of the SMT solver fuzzers is suitable for fuzzing CHC solvers, e.g., for finding satisfiability bugs and model generation problems. STORM generates incorrect systems; that is, it does not preserve the CHC structure. This fuzzer creates input instances from fragments of formulae from the initial data. It generates a random assignment of all free variables in the formulae. Then, STORM randomly selects some parts of the original formulae and, in accordance with their values in the considered interpretation, composes instances from these subformulae. Thus, the probability that STORM will generate a Constrained Horn Clause is extremely small. BanditFuzz, FuzzSMT and Falcon can generate correct Constrained Horn Clause systems, but it is difficult to create varied Constrained Horn Clauses from scratch. The main problem is the satisfiability of the rule clause bodies and the query clause body. Firstly, at least one rule body for each uninterpreted predicate must be satisfiable, or we will end up with a trivial "false" interpretation. Secondly, if we want to generate a satisfiable CHC system, we must guarantee that all rule bodies are unsatisfiable together with the query clause body. Therefore, we need to synthesize such a query body. Such a synthesis problem is comparable in complexity to solving the CHC system itself. In addition, if a generation-based fuzzer cannot synthesize formulae that are known to be satisfiable, then it must check the results that the solver under test produces. Thus, another CHC solver is required to check the generated systems. Moreover, such a solver may also be needed in the case when the satisfiability is known, in order to make sure that the formula generator is correct. However, at the moment the capabilities of CHC solvers are very different; that is, not all systems can be checked. OpFuzz is also inefficient in testing CHC solvers, since it uses other solvers as oracles. YinYang is able to generate correct CHC systems. However, since it obtains them by combining other CHC systems, its mutants grow very quickly, which leads to timeouts when solving. In addition, when obtaining instances in this way, the solver still considers each subsystem independently, since there are no clauses with predicates from both systems. Thus, the fuzzer does not explore new solutions.

## 7. Conclusion
In this paper we present HornFuzz, the first fuzzer for testing CHC solvers.
HornFuzz is mutational and is based on metamorphic testing. It utilizes best practices from state-of-the-art fuzzing research: it has several seed selection heuristics and uses weighted selection for mutations. We also implemented a specialized reducer based on hierarchical delta debugging. HornFuzz has found bugs in the Spacer solver, and its developers acknowledged them as genuine and (in some cases) serious problems. Some bugs have already been fixed, while others are in the process of being fixed.

\begin{table}
\begin{tabular}{l|c|c|c|c}
\hline
Group & Naive version & Transition version & Parameter version & Full version \\
\hline
Linear rule inlining transformation & \(0\pm 0\) & \(0.17\pm 0.41\) & \(29.7\pm 10.59\) & \(48.5\pm 16.66\) \\
\hline
Eager rule inlining transformation & \(25\pm 12.33\) & \(31.67\pm 8.87\) & \(13.7\pm 6.52\) & \(19.75\pm 11.54\) \\
\hline
Other transformations & \(1.67\pm 1.61\) & \(3.17\pm 3.06\) & \(1.6\pm 1.07\) & \(5.25\pm 6.32\) \\
\hline
Spacer core & \(0\pm 0\) & \(0\pm 0\) & \(0.7\pm 2.21\) & \(0.13\pm 0.35\) \\
\hline
Unclassified & \(5.08\pm 3.99\) & \(5.17\pm 4.79\) & \(5.9\pm 3.25\) & \(12.13\pm 4.12\) \\
\hline \hline
Number of bugs discovered & \(31.75\pm 15.59\) & \(40.17\pm 9.75\) & \(51.6\pm 7.81\) & \(85.75\pm 21.35\) \\
\hline
\end{tabular}
\end{table}
Table 6. Bug classification.

While we were interested in validating Spacer as the most used CHC solver, testing other solvers may be future work. Currently, this requires high-level instrumentation: trace collection must be added to the solver under test. Mutations that change Z3 parameters also cannot be used, but the other fuzzer components can be used without modification. Moreover, an important task for the future is to add bug deduplication. Currently, HornFuzz cannot determine whether bugs have a common cause or not. One correction can fix many bugs, and if we report every bug we find, we will create a lot of inconvenience for the solver developers. Without bug deduplication, we have to wait until the bug we reported is fixed to see if the others are still reproducible, which slows down the bug reporting process. Also, it would be interesting to extend the fuzzer with new mutations.
2309.02075
CHEX-MATE: A non-parametric deep learning technique to deproject and deconvolve galaxy cluster X-ray temperature profiles
Temperature profiles of the hot galaxy cluster intracluster medium (ICM) have a complex non-linear structure that traditional parametric modelling may fail to fully approximate. For this study, we made use of neural networks, for the first time, to construct a data-driven non-parametric model of ICM temperature profiles. A new deconvolution algorithm was then introduced to uncover the true (3D) temperature profiles from the observed projected (2D) temperature profiles. An auto-encoder-inspired neural network was first trained by learning a non-linear interpolatory scheme to build the underlying model of 3D temperature profiles in the radial range of [0.02-2] R$_{500}$, using a sparse set of hydrodynamical simulations from the THREE HUNDRED PROJECT. A deconvolution algorithm using a learning-based regularisation scheme was then developed. The model was tested using high and low resolution input temperature profiles, such as those expected from simulations and observations, respectively. We find that the proposed deconvolution and deprojection algorithm is robust with respect to the quality of the data, the morphology of the cluster, and the deprojection scheme used. The algorithm can recover unbiased 3D radial temperature profiles with a precision of around 5\% over most of the fitting range. We apply the method to the first sample of temperature profiles obtained with XMM{\it -Newton} for the CHEX-MATE project and compared it to parametric deprojection and deconvolution techniques. Our work sets the stage for future studies that focus on the deconvolution of the thermal profiles (temperature, density, pressure) of the ICM and the dark matter profiles in galaxy clusters, using deep learning techniques in conjunction with X-ray, Sunyaev Zel'Dovich (SZ) and optical datasets.
A. Iqbal, G. W. Pratt, J. Bobin, M. Arnaud, E. Rasia, M. Rossetti, R. T. Duffy, I. Bartalucci, H. Bourdin, F. De Luca, M. De Petris, M. Donahue, D. Eckert, S. Ettori, A. Ferragamo, M. Gaspari, F. Gastaldello, R. Gavazzi, S. Ghizzardi, L. Lovisari, P. Mazzotta, B. J. Maughan, E. Pointecouteau, M. Sereno
2023-09-05T09:21:17Z
http://arxiv.org/abs/2309.02075v2
# CHEX-MATE: A non-parametric deep learning technique to deproject and deconvolve galaxy cluster X-ray temperature profiles

###### Abstract

Temperature profiles of the hot galaxy cluster intracluster medium (ICM) have a complex non-linear structure that traditional parametric modelling may fail to fully approximate. For this study, we made use of neural networks, for the first time, to construct a data-driven non-parametric model of ICM temperature profiles. A new deconvolution algorithm was then introduced to uncover the true (3D) temperature profiles from the observed projected (2D) temperature profiles. An auto-encoder-inspired neural network was first trained by learning a non-linear interpolatory scheme to build the underlying model of 3D temperature profiles in the radial range of [0.02-2] \(\mathrm{R}_{\mathrm{500}}\), using a sparse set of hydrodynamical simulations from the Three Hundred Project. A deconvolution algorithm using a learning-based regularisation scheme was then developed. The model was tested using high and low resolution input temperature profiles, such as those expected from simulations and observations, respectively. We find that the proposed deconvolution and deprojection algorithm is robust with respect to the quality of the data, the morphology of the cluster, and the deprojection scheme used. The algorithm can recover unbiased 3D radial temperature profiles with a precision of around 5% over most of the fitting range. We apply the method to the first sample of temperature profiles obtained with XMM-_Newton_ for the CHEX-MATE project and compare it to parametric deprojection and deconvolution techniques. Our work sets the stage for future studies that focus on the deconvolution of the thermal profiles (temperature, density, pressure) of the ICM and the dark matter profiles in galaxy clusters, using deep learning techniques in conjunction with X-ray, Sunyaev-Zel'dovich (SZ) and optical datasets.

## 1 Introduction

Galaxy clusters are ideal probes of the large-scale structure of the Universe (Holder et al., 2001; Planck Collaboration XXIV, 2016; Bocquet et al., 2015; Sereno et al., 2017; Abbott et al., 2022). X-ray observations of the hot gas in the ICM, which constitutes the dominant baryonic component in galaxy clusters, provide us with a useful tool for identifying and studying these objects. Shallow, wide-field X-ray surveys by ROSAT (ROentgen SATellite) and eROSITA (extended ROentgen Survey with an Imaging Telescope Array) have now discovered thousands of clusters (e.g. Piffaretti et al., 2011; Klein et al., 2022, and references therein). In recent years, the detailed X-ray follow-up of samples extracted from these surveys has exploited the high spatial resolution of _Chandra_ and the large field of view and sensitivity of the X-ray Multi-Mirror Mission (XMM-_Newton_) to investigate the morphological, structural, and scaling properties of the cluster population (e.g. Lovisari & Maughan, 2022; Kay & Pratt, 2022, and references therein). The X-ray-derived radial temperature and density profiles are key ingredients to derive the thermodynamic properties of the ICM and, under the assumption of hydrostatic equilibrium, the total mass profile in galaxy clusters (Böhringer et al., 2007; Pratt et al., 2010; Ettori et al., 2010, 2013; Eckert et al., 2022). These X-ray studies have revealed the presence of two distinct types of clusters: cool cores (CCs), characterised by dense, low-temperature cores, and non-cool cores (NCCs), which exhibit relatively flat central density and temperature profiles.
Various morphological parameters have been introduced to analyse X-ray images and to link these to the dynamical behaviour of galaxy clusters and to the presence or absence of low-temperature cores, providing insights into their structural characteristics, internal dynamics, and evolutionary stages (Rasia et al., 2013; Campitiello et al., 2022). Although it is now well-established that Active Galactic Nuclei (AGN) feedback plays a major role in suppressing the ICM cooling in cluster cores, the reason for the CC and NCC dichotomy is still not fully understood (Rasia et al., 2015; Barnes et al., 2018). X-ray observations give access to the projected (2D) density and temperature profiles of the ICM. The latter is obtained from fitting a thermal model to the spectra extracted in concentric annuli about a given centre (usually the X-ray peak or centroid). For further scientific applications, these must then be deprojected to obtain the 3D profiles. If needed, the effect of the instrumental point spread function (PSF) can be taken into account in the deprojection step. While the deprojected (3D) gas density in shells can be easily estimated from the X-ray surface brightness (Croston et al., 2006; Bartalucci et al., 2017; Ghirardini et al., 2019), the deprojection of ICM temperature profiles is not trivial. This is partly due to the need for sufficient photon counts to build and fit the spectrum, leading to the temperature profiles having significantly coarser angular resolution than the density. The relationship between the observed 2D temperature profile, \(\mathbf{T}_{\mathrm{2D}}\), and the originating 3D temperature profile, \(\mathbf{T}_{\mathrm{3D}}\), can be expressed in matrix form as \[\mathbf{T}_{\mathrm{2D}}=\mathbf{C}_{\mathrm{PSF}}\otimes\mathbf{C}_{\mathrm{proj}}\otimes\mathbf{T}_{\mathrm{3D}}=\mathbf{C}\otimes\mathbf{T}_{\mathrm{3D}}, \tag{1}\] where \(\otimes\) denotes the matrix product. Assuming a cluster is spherically symmetric and that the 3D temperature profile is defined in concentric spherical shells, the \((i,j)^{\rm th}\) element of the matrix \({\bf C}_{\rm proj}\) encodes the projection effect of the \(j^{\rm th}\) 3D shell onto the \(i^{\rm th}\) 2D annulus on the plane of the sky. The 2D annuli may have the same or different radii to the 3D shells. We note that \({\bf C}_{\rm PSF}\) is a second matrix that describes the effect of the finite instrumental PSF. Its \((k,i)^{\rm th}\) element contains the fraction of counts from the \(i^{\rm th}\) 2D annulus that are redistributed by the telescope into the \(k^{\rm th}\) observed 2D annulus. If there are \(n\) model 3D shells, and correspondingly \(n\) model 2D annuli, plus \(m\) observed annuli, then the dimensions of \({\bf C}_{\rm proj}\) and \({\bf C}_{\rm PSF}\) are \(n\times n\) and \(m\times n\), respectively. If the PSF is ignored, then the dimensions of \({\bf C}_{\rm proj}\) change to \(m\times n\). The fitting of projected parametric models of the 3D temperature profiles to both observed and simulated 2D data has been widely used in the literature (De Grandi & Molendi, 2002; Pizzolato et al., 2003; Ascasibar & Diego, 2008; Bulbul et al., 2010; Gaspari et al., 2012; Ghirardini et al., 2019).
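To make the projection step concrete, the following is a minimal numpy sketch of Eq. (1) for a spherically symmetric cluster, ignoring the PSF. The weights are shell-annulus intersection volumes multiplied by an emission-measure term; the binning, density, and toy temperature profile are illustrative stand-ins, not the pipeline actually used in this work.

```python
import numpy as np

def projection_matrix(r_ann, R_shell, ne=None):
    """Toy C_proj for Eq. (1): row i gives the emission-measure-weighted
    contribution of each 3D shell j to 2D annulus i, assuming spherical
    symmetry. r_ann (m+1 annulus edges) and R_shell (n+1 shell edges) are
    in the same units; ne is an optional electron density per shell."""
    n = len(R_shell) - 1
    if ne is None:
        ne = np.ones(n)
    # Spherical-cap helper: max(R^2 - r^2, 0)^(3/2), broadcast over edges.
    def caps(R, r):
        return np.clip(R[None, :]**2 - r[:, None]**2, 0.0, None)**1.5
    # V[i, j]: volume of shell j seen through annulus i (onion-skin geometry;
    # the constant 4*pi/3 cancels in the row normalisation below).
    V = (caps(R_shell[1:], r_ann[:-1]) - caps(R_shell[1:], r_ann[1:])
         - caps(R_shell[:-1], r_ann[:-1]) + caps(R_shell[:-1], r_ann[1:]))
    W = ne[None, :]**2 * V                   # emission-measure weights, w ~ ne^2 * V
    return W / W.sum(axis=1, keepdims=True)  # normalise each annulus (row)

# Example: project a declining 48-bin 3D profile onto 12 coarse annuli.
R_shell = np.geomspace(0.02, 2.0, 49)               # shell edges, units of R500
r_ann = np.geomspace(0.02, 1.0, 13)                 # observed annulus edges
T3d = 8.0 * (1.0 + (R_shell[:-1] / 0.3)**2)**-0.3   # toy temperature profile (keV)
C = projection_matrix(r_ann, R_shell)
T2d = C @ T3d                                       # Eq. (1), PSF ignored
```

A PSF could be folded in by left-multiplying with an \(m\times n\) redistribution matrix, exactly as in Eq. (1).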
Initially, these were polytropic models that assumed a simple relationship between the density and the temperature distribution (\(T\propto\rho^{\gamma-1}\)), but this does not fully capture all the complexities of real galaxy clusters, especially the central regions of CC clusters. The quality of recent data has necessitated more complicated models to be proposed, perhaps the most widely used being that proposed by Vikhlinin et al. (2006): \[{\rm T}_{\rm 3D}(r)=T_{0}\times\frac{x+\tau}{x+1}\times\frac{(r/R_{t})^{-a}}{\left(1+(r/R_{t})^{b}\right)^{c/b}}, \tag{2}\] where \(x=(r/R_{\rm cool})^{a_{\rm cool}}\) and \(\{T_{0},\tau,R_{t},R_{\rm cool},a_{\rm cool},a,b,c\}\) are the model parameters. In the framework of the _Representative XMM-Newton cluster structure survey_ (REXCESS; Böhringer et al., 2007) and _Following the most massive galaxy clusters over cosmic time_ (M2C; Bartalucci et al., 2019) projects, Démoclès et al. (2010) and Bartalucci et al. (2018) developed a non-parametric-like deconvolution approach. In this approach, the Vikhlinin et al. (2006) parametric model was used to perform the PSF correction and deprojection in order to estimate the temperature at the weighted radii of the 2D annular binning scheme. The 3D uncertainties were then computed consistently from the 2D errors, and random temperatures were drawn within these uncertainties to compute the temperature derivatives which were used in the hydrostatic equilibrium total mass computations. However, parametric approaches are not fully satisfactory since, with a limited number of parameters, they could fail to capture features in the temperature profile due to shock fronts, edges, mergers, and the presence of cool cores with a single model. Moreover, a high degree of degeneracy between the parameters could be present. The Vikhlinin et al. (2006) parametric temperature model, which was developed for cool core systems, is a complex eight-parameter model, four of which correspond to the cool-core component. It is therefore not well-suited to highly disturbed NCC clusters, which have flatter central temperature profiles instead of declining cool cores. Furthermore, for typical X-ray data quality, it exhibits a high degree of degeneracy between its parameters, leading to poorly constrained model parameters and results that depend on the prior choices in MCMC fitting schemes. Recently, Gianfagna et al. (2021), using a sample drawn from high resolution numerical simulations, found that the Vikhlinin et al. (2006) parametric model provided a good fit to only 50% of their sample in the range [0.1-1] \({\rm R}_{500}\)3. Footnote 3: The scaled radius \(R_{\Delta}\) is defined such that \(R_{\Delta}\) is the radius at which the mean matter density is \(\Delta\,\rho_{c}\), where \(\rho_{c}=3H^{2}(z)/8\pi G\) is the critical density of the universe at redshift \(z\). Model-independent direct spectral deprojection methods offer an alternative and are commonly used to deconvolve the 3D temperature profiles. This can involve the onion-skin technique (Fabian et al., 1981; David et al., 2001; Johnstone et al., 2005; Russell et al., 2008; Lakhchaura et al., 2016), where the 3D layers are successively built up from the outside in. However, this approach is strongly dependent on the choice of the outermost bin because it is necessary to take into account the contribution to the emission from the shells outside the outermost annulus used for the analysis.
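Although the discussion has moved on to model-independent methods, the Eq. (2) model above is straightforward to code, which is a convenient way to probe its parameter degeneracies numerically. A minimal sketch, with purely illustrative parameter values:

```python
import numpy as np

def vikhlinin_t3d(r, T0, tau, R_t, R_cool, a_cool, a, b, c):
    """Eq. (2): the Vikhlinin et al. (2006) 3D temperature model.
    The (x + tau)/(x + 1) factor models the cool-core decline; the
    second factor models the decrease at large radii."""
    x = (r / R_cool) ** a_cool
    cool = (x + tau) / (x + 1.0)
    outer = (r / R_t) ** (-a) / (1.0 + (r / R_t) ** b) ** (c / b)
    return T0 * cool * outer

# Example: a cool-core-like profile, radii in units of R500.
r = np.geomspace(0.02, 2.0, 48)
T = vikhlinin_t3d(r, T0=9.0, tau=0.4, R_t=0.6, R_cool=0.1,
                  a_cool=2.0, a=0.0, b=2.0, c=1.0)
```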
Alternatively, isothermal models can be fitted to each annular spectrum and then the matrix method (i.e. Eq. 1) can be used to deproject (e.g. Ettori et al., 2002). Ignoring the PSF effect, and assuming that the observed projected spectra consist of a linear combination of isothermal emission models weighted by the projected emission measure, the equation for the temperature profiles simplifies to \[T_{{\rm 2D},k}=\frac{\sum_{j=1}^{n}w_{k,j}\,T_{{\rm 3D},j}}{\sum_{j=1}^{n}w_{k,j}}. \tag{3}\] Here, \(T_{{\rm 3D},j}\) and \(T_{{\rm 2D},k}\) are the 3D and 2D temperatures at the \(j\)th 3D spherical shell and \(k\)th 2D observed annulus, respectively, and the weights \(w_{k,j}\) consist of the emission measure contribution of the spherical shells onto the observed annuli (e.g. Mathiesen & Evrard, 2001). However, such model-independent approaches are often unstable if the data are noisy because Eqn. 1 is an inverse problem, meaning that any noise becomes greatly amplified by the deconvolution procedure. In addition, the simplistic emission measure weighting has been found to be inaccurate when applied to X-ray observations. In particular, it has been demonstrated that in the presence of a multi-temperature gas, \(w\) is more appropriately expressed as a non-linear combination of density and temperature (Mazzotta et al., 2004; Vikhlinin, 2006), further complicating the deconvolution procedure. Machine Learning (ML) techniques have emerged as a powerful tool for predicting key features of data and for solving inverse problems to reconstruct (deconvolve) signals, images, etc., from observations. ML techniques have been applied to study galaxy clusters too. Ntampaka et al. (2015) developed an ML algorithm based on Support Distribution Machines to reconstruct dynamical cluster masses using the velocity distribution of cluster members from simulations, achieving a reduction in the scatter between the predicted and true mass by a factor of two compared to standard methods. More complex ML approaches have led to similar significant improvements in the mass estimates (Armitage et al., 2019; Calderon & Berlind, 2019). Using deep learning techniques, Convolutional Neural Network (CNN) models have also been used to infer the dynamical mass of galaxy clusters (Ho et al., 2019; Ramanah et al., 2020; Ho et al., 2021; de Andres et al., 2022). In particular, Yan et al. (2020) used mock datasets of stellar mass, soft X-ray flux, bolometric X-ray flux, and Compton y-parameter images as input to train a CNN model to infer the mass of galaxy clusters, and Gupta & Reichardt (2020, 2021) trained CNN models to estimate cluster masses using mock SZ and cosmic microwave background (CMB) lensing maps. Ferragamo et al. (2023), using a combination of an auto-encoder and a random forest regression technique on a sample of 73,138 mock Compton-y parameter maps from the hydrodynamical simulations of the Three Hundred Project (Cui et al., 2018), were able to reconstruct the 3D gas mass profile and total mass in galaxy clusters with a scatter of about 10% with respect to the true values. de Andres et al. (2022) and Ho et al. (2022) have used real observations to estimate the total mass profiles of galaxy clusters using deep learning models trained on mock simulations. While de Andres et al. (2022) used the _Planck_ SZ maps (Planck Collaboration et al., 2016) to determine the masses of Planck clusters, Ho et al.
(2022) used relative line-of-sight velocities and projected radial distances of galaxy pairs from Sloan Digital Sky Survey (SDSS) data (Alam et al., 2015) to determine the mass of the Coma cluster. In this work, we present the first use of neural networks, trained on numerical simulations, to deproject the X-ray temperature profiles of galaxy clusters. Our technique is based on that proposed by Bobin et al. (2019, 2023), where a so-called Interpolatory Autoencoder (**IAE**) neural network is built to model the 3D temperature profiles by learning a non-linear interpolatory scheme from a limited set of example profiles called 'anchor points'. The main advantage of the **IAE** neural network is that it is able to capture the intrinsic low-dimensional, non-linear nature of the profiles even when the training sample is not large in size. This is crucial as a small sample size can otherwise pose several challenges to the effectiveness of a deep learning algorithm. The model is trained and tested with a set of 315 simulated temperature profiles, in the radial range of [0.02-2] R\({}_{500}\), from the Three Hundred Project (Cui et al., 2018). A robust temperature deconvolution scheme is then introduced to fit the trained **IAE** model, which makes use of an efficient regularisation term in the likelihood, along with Markov chain Monte Carlo (MCMC) sampling. The technique is then applied to a pilot sample of X-ray temperature profiles from the CHEX-MATE project (Cluster HEritage project with XMM-_Newton_: Mass Assembly and Thermodynamics at the Endpoint of structure formation; CHEX-MATE Collaboration, 2021). The paper is organised as follows. Section 2 discusses in detail the simulations used in training the **IAE** model for temperature profiles. In Section 3 we present the IAE model, and Section 4 deals with model training and the learning-based deconvolution technique. The performance of the deconvolution algorithm is tested with simulations in Section 5, while in Section 6, we apply our approach for the first time to a representative sample of 28 galaxy clusters from the first data release (DR1 hereafter, Rossetti et al., 2023, in prep.) in the CHEX-MATE sample. Finally, in Section 7, we summarise our work. Throughout this work, we adopt a flat \(\Lambda\)CDM model with \(H_{0}=70\) km s\({}^{-1}\) Mpc\({}^{-1}\), \(\Omega_{\rm m}=0.3\) and \(\Omega_{\Lambda}=0.7\). Further, \(E(z)\) is the ratio of the Hubble constant at redshift \(z\) to its present value \(H_{0}\), and \(h_{70}=H_{0}/70=1\).

## 2 Simulations

In this work, training of the neural network is undertaken using the gas mass-weighted 3D temperature profiles, \(\mathbf{T}_{\rm 3D}\), of galaxy clusters from the Three Hundred Project (Cui et al., 2018; Ansarifard et al., 2020). These simulations are based on the 324 Lagrangian regions centred on the \(z=0\) most massive galaxy clusters selected from the MultiDark dark-matter-only MDPL2 simulation (Klypin et al., 2016), carried out with the cosmological parameters from the _Planck_ mission (Planck Collaboration XIII, 2016). MDPL2 is a periodic cube of comoving size equal to 1.48 Gpc containing \(3840^{3}\) dark matter particles. Figure 1: Comparison of the observed 2D temperature profiles, scaled as a function of \(R_{500}\) and \(T_{\rm X}\), the temperature in the [0.15-0.75] \(R_{500}\) region.
The thin grey lines show 50 randomly selected simulated 2D temperature profiles from the Three Hundred Project, extracted with an observation-like annular binning resolution, derived using emission measure (left panel) and spectroscopic-like (right panel) weighting schemes. The thin red lines show individual profiles in the Planck Collaboration XI (2011) sample. For better visibility, the error bars corresponding to the observed profiles are not shown. The regions enclosing thick black and red lines show the 1-\(\sigma\) dispersion (16th–84th percentile range) of the temperature profiles of the full simulated sample and the _Planck_ sample respectively. The regions enclosing the thick blue lines show the 1-\(\sigma\) dispersion of the CHEX-MATE DR1 sample. Scaled by R\({}_{500}\) and T\({}_{\rm X}\), both the emission measure and spectroscopic-like derived 2D simulated temperature profiles become somewhat self-similar. Figure 2: Number of clusters as a function of T\({}_{\rm X}\) in the Three Hundred Project sample, the _Planck_ SZ sample and the DR1 sample. The selected regions were resimulated with the inclusion of baryons and were carried out with the code GADGET-X (Beck et al., 2016). To treat the baryonic physics, several processes were included, such as: metallicity-dependent radiative cooling, the effect of a uniform time-dependent UV background, a sub-resolution model for star formation from a multi-phase interstellar medium, kinetic feedback driven by supernovae, metal production from SN-II, SN-Ia and asymptotic giant branch stars, and AGN feedback (Rasia et al., 2015). In the present work, we ignore the redshift dependence of the profiles, if any, and only consider the simulated sample at a fixed redshift of \(z=0.33\), which is the average redshift of the CHEX-MATE sample. However, we consider a mass range of M\({}_{500}>10^{14}\) M\({}_{\odot}\), allowing us to build a library covering the full mass range of the CHEX-MATE sample. This left us with 314 clusters in the simulated sample. The temperature profiles were derived in 48 fixed logarithmically spaced radial bins in the range [0.02-2] R\({}_{500}\) (Ansarifard et al., 2020). The lowest radial limit of 0.02 R\({}_{500}\) was chosen since it encloses approximately 100 gas particles for the simulated sample, which we call the precision threshold condition, thus ensuring that the analysis is statistically robust and that the results are not affected by numerical fluctuations in the gas properties at small radii (Rasia et al., 2015). The 3D mass-weighted temperature in a given shell \(i\), \(T_{{\rm 3D},i}\) (i.e. the \(i^{\rm th}\) element of the \(\mathbf{T}_{\rm 3D}\) vector), was calculated by weighting the temperature of the \(p^{\rm th}\) gas particle (\(T_{p}\)) using its gas mass (\(m_{p}\)) as a weighting function \(w\): \[T_{{\rm 3D},i}=\frac{\sum T_{p}m_{p}}{\sum m_{p}}\,, \tag{4}\] In this calculation, no attempt was made to exclude low-temperature sub-clumps in the outskirt regions of the clusters; however, only particles with temperature \(>\)0.3 keV were considered. We estimated the projected 2D temperature profiles (\(\mathbf{T}_{\mathrm{2D}}\)) along the line of sight (\(l\)) using the 3D gas density (\(\rho\)) and temperature profiles (\(\mathbf{T}_{\rm 3D}\)). The 2D temperature profiles were estimated in pre-defined logarithmically spaced annular bins by first considering the classical emission-measure weights (\(\mathbf{C}=\mathbf{C}_{\mathrm{proj}}\), see Eqn.
3): \[\mathbf{T}_{\mathrm{2D}}=\frac{\int w\mathbf{T}_{\mathrm{3D}}\,dl}{\int w\,dl}=\mathbf{C}\otimes\mathbf{T}_{\mathrm{3D}}, \tag{5}\] where \(w=\rho^{2}\) (e.g. Mathiesen and Evrard, 2001). We produced several versions of the \(\mathbf{T}_{\mathrm{2D}}\) profiles. First, the \(\mathbf{T}_{\mathrm{2D}}\) profiles were estimated in the same radial bins as those of the \(\mathbf{T}_{\mathrm{3D}}\) (48 bins) by using a matrix \(\mathbf{C}\) of dimension \(48\times 48\) (\(\mathbf{C}_{48,48}\)). We also estimated \(\mathbf{T}_{\mathrm{2D}}\) in a coarser binning scheme to reproduce typical radial sampling from present-day X-ray observatories such as _XMM-Newton_ and _Chandra_. These have either twelve or six logarithmic bins reaching only up to R\({}_{500}\), corresponding to matrices of dimension 12\(\times\)48 (\(\mathbf{C}_{12,48}\)) and 6\(\times\)48 (\(\mathbf{C}_{6,48}\)) respectively. We also considered a more complex case where we use the spectroscopic-like weighting proposed by Mazzotta et al. (2004) to generate the 2D temperature profiles using the binning schemes discussed above. In this case, apart from the normalisation, the matrix elements of \(\mathbf{C}\) simply change to \(C_{i,j}\rightarrow C_{i,j}\,T_{{\rm 3D},j}^{-3/4}\) (or equivalently, the weights change to \(w=\rho^{2}T_{\rm 3D}^{-3/4}\)), where \(T_{{\rm 3D},j}\) is the mass-weighted 3D temperature profile in the \(j^{\rm th}\) bin. In many clusters in the simulated sample, the temperature profiles in the first few inner bins (typically 0-13 radial bins corresponding to radii between \(\approx\) [0.02-0.07] R\({}_{500}\)) were noisy (i.e. having \(<\) 100 gas particles). For such systems, the 2D profiles were estimated without considering such bins. Figure 1 shows the observed scaled 2D temperature profiles of the _Planck_ SZ sample (Planck Collaboration XI 2011) and the XMM-_Newton_ DR1 sample (Rossetti et al., 2023, described in detail in Sect. 6.2). These are compared to 50 randomly drawn 2D temperature profiles from the Three Hundred Project using emission measure (left panel) and spectroscopic-like (right panel) weighting schemes and an observation-like convolution matrix, \(\mathbf{C}_{12,48}\). Both observational and simulated temperature profiles were scaled by the average 2D temperature (\(T_{\rm X}\)) in the radial range of [0.15-0.75] R\({}_{500}\). Figure 2 shows the distribution of the clusters in the simulated sample, _Planck_ sample and DR1 sample on the basis of \(T_{\rm X}\). These two figures illustrate three points that will be critical for the following study: 1. In common with a number of works over the last 20 years (e.g. De Grandi and Molendi, 2002; Vikhlinin et al., 2006; Pratt et al., 2007; Leccardi and Molendi, 2008; Ghirardini et al., 2019), the structural similarity in the observed temperature profiles is clearly visible in Fig. 1. The central regions are characterised by a large spread, due to a mixed population of cool core and disturbed systems, while beyond the central 0.15 R\({}_{500}\) the profiles all decline in a similar fashion. 2. The simulated profiles follow the same general trend as the observed profiles. The average trend and 1-\(\sigma\) dispersion of the simulations is very consistent with that of the CHEX-MATE DR1 sample. The simulated temperature profiles on average are slightly hotter in the centre compared to the _Planck_ SZ sample.
This may be related to the fact that there are more low-mass clusters in the simulated sample compared to the _Planck_ SZ sample. Such low-mass clusters are expected to be more strongly affected by AGN feedback, potentially leading to higher temperatures in the central region (Iqbal et al., 2018). Alternatively, the higher central temperatures in the simulations may simply be due to the fact that the sample has a large number of NCC clusters. 3. Overall, the observed temperature profiles are well represented by the simulated sample. This fact will be key to a successful training stage of the **IAE** model, which relies on identifying underlying trends in the data that would not otherwise be found. We note that the simulated profiles do not have to precisely match the observed data: as we will see, the most important point is that they reproduce the overall structure and diversity of the observed profiles, which is what our **IAE** model learns. We further classified the simulated clusters using three schemes. This is important to quantify how well the **IAE** model reconstructs the radial temperature distribution for different types of objects and profile shapes.

### CC and NCC classification

Firstly, we classify the profiles as CC and NCC by visual inspection. The objective here is simply to select simulated profiles that mimic those of observed cool-core-like clusters with a central temperature drop, and non cool-core clusters that display an almost isothermal central temperature profile. The profiles which show a decreasing trend towards the cluster centre (positive temperature gradient) were classified as CC clusters. We identify about one-third of the clusters as belonging to the CC class. In Fig. 3, grey lines in the left panels and right panels show the 3D temperature profiles (\(\rm{T_{3D}}\)) of CC and NCC clusters respectively.

### Dynamical classification

Clusters in these simulations were classified on the basis of their intrinsic dynamical state (relaxed or disturbed) using a variety of estimators (Rasia et al., 2013). The two important intrinsic estimators are \(f_{\rm{s}}=M_{\rm{sub}}/M_{\rm{tot}}\), the fraction of cluster mass (\(M_{\rm{tot}}\)) included in substructures (\(M_{\rm{sub}}\)), and \(\Delta_{\rm{r}}=|r_{\rm{s}}-r_{\rm{cm}}|/R_{\rm{ap}}\), which is the measure of the offset between the central density peak (\(r_{\rm{s}}\)) and the centre of mass (\(r_{\rm{cm}}\)) of the cluster, normalised to the aperture radius \(R_{\rm{ap}}\). Both of the estimators were computed at \(\rm{R_{500}}\). Both \(f_{\rm{s}}\) and \(\Delta_{\rm{r}}\) are expected to be lower than 0.1 for relaxed objects (Cialone et al., 2018; De Luca et al., 2021). These two dynamical parameters can be combined (Rasia et al., 2013) to give the so-called relaxation parameter \(\chi_{{}_{D}}\) \[\chi_{{}_{D}}=\frac{1}{2}\times\left(\frac{\Delta_{\rm{r}}-\Delta_{\rm{r,med}}}{|\Delta_{\rm{r,quart}}-\Delta_{\rm{r,med}}|}+\frac{f_{\rm{s}}-f_{\rm{s,med}}}{|f_{\rm{s,quart}}-f_{\rm{s,med}}|}\right). \tag{6}\] Here \(\Delta_{\rm{r,med}}\) and \(f_{\rm{s,med}}\) are the medians of the \(\Delta_{\rm{r}}\) and \(f_{\rm{s}}\) distributions, respectively, and \(\Delta_{\rm{r,quart}}\) and \(f_{\rm{s,quart}}\) are the first or the third quartiles, depending on whether the parameters of a specific cluster are smaller or larger than the median. According to this definition, clusters with \(\chi_{{}_{D}}<0\) are classified as relaxed, and clusters with \(\chi_{{}_{D}}>0\) are classified as disturbed. The left panel of Fig.
4 shows the histogram of \(\chi_{{}_{D}}\) values. The cyan and magenta hatched regions represent the 20 most relaxed clusters and 20 most disturbed clusters, respectively. We will refer to these sub-samples as MR20 and MD20 hereafter. In the top panel of Fig. 3, we show the corresponding temperature profiles of the MR20 clusters (left panel) and the MD20 clusters (right panel) with cyan and magenta lines, respectively. It is interesting to note that only a few of the most relaxed objects are also categorised as CC clusters. Visual inspection of emissivity maps shows, as expected, that \(\chi_{{}_{D}}\) is clearly linked to the overall gas morphology, as also found in Campitiello et al. (2022). Figure 3: Classification of temperature profiles in the Three Hundred Project. Left panel: Grey lines show the visually classified CC clusters. Cyan and green lines show the 20 most relaxed clusters (top panel) and 20 most smooth profiles (bottom panel). Right panel: Grey lines show the visually classified NCC clusters. Magenta and orange lines show the 20 most disturbed clusters (top panel) and irregular profiles (bottom panel).

### Structural classification

To enable a better assessment of the performance of the **IAE** model for temperature profile reconstruction, we also classified the 3D temperature profiles based directly on their smoothness. Bumps in the temperature profiles are usually associated with complex astrophysical processes such as merger shocks, gas condensation, the presence of cold substructures, sloshing, and turbulence, all of which affect the temperature in a given annulus. To measure the degree of the bumpiness of the 3D temperature profiles, we used the starlet wavelet transform, which is widely used in component separation in astrophysical images (Starck et al., 2007), to split each profile into its smooth and non-smooth components. Using this technique, the 3D temperature profile \(\mathbf{T}_{\mathrm{3D}}(r)\) can be decomposed into a \(J+1\) coefficient set \(\mathbf{W}=\{\mathbf{w}_{1},...,\mathbf{w}_{J},\mathbf{T}_{J}\}\), as a superposition of the form \[\mathbf{T}_{\mathrm{3D}}(r)=\mathbf{T}_{J}(r)+\sum_{j=1}^{J}\mathbf{w}_{j}(r)\,, \tag{7}\] where \(\mathbf{T}_{J}\) is a smooth (coarse resolution) version of the original temperature profile and \(\mathbf{w}_{j}\) represents the structure in the temperature profile on scale \(2^{-j}\). Figure 5 shows the starlet decomposition for one of the clusters in the Three Hundred Project which exhibits a complex shape in the range [0.5-1] R\({}_{500}\). The cluster is experiencing a major merger and there is an enhancement of the temperature due to the propagation of a shock in this region. We use the starlet transform with \(J=2\), which we have found to be the optimal configuration to measure the non-smoothness, yielding a decomposition into a smooth temperature component and two additional non-smooth components, \(\mathbf{w}_{1}(r)\) and \(\mathbf{w}_{2}(r)\). We then define the root mean square deviation, \(\chi_{s}\), of the difference between the true and smooth temperature profiles in the radial range of [0.08-1] R\({}_{500}\) as a measure of the non-smoothness of the temperature profiles. \[\chi_{s}=\sqrt{\frac{1}{u}\sum_{i=1}^{u}(T_{{\rm 3D},i}-T_{J,i})^{2}}\,, \tag{8}\] where \(u\) is the number of data points in the range of [0.08-1] R\({}_{500}\), and the lower limit of 0.08 R\({}_{500}\) corresponds to the radius at which all clusters satisfy the precision threshold condition.
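As an illustration of the smoothness measure of Eqs. (7)-(8), the following is a minimal Python sketch using the standard B3-spline à trous (starlet) transform. The toy profile and the simple mean normalisation are stand-ins for the actual sample and the [0.15-0.75] R\({}_{500}\) scaling described in the next paragraph:

```python
import numpy as np

B3 = np.array([1, 4, 6, 4, 1]) / 16.0  # B3-spline smoothing kernel

def starlet_smooth(signal, J=2):
    """A trous (starlet) transform of Eq. (7): returns the smooth component
    c_J and the detail layers w_1..w_J of a 1D profile."""
    c = signal.astype(float)
    details = []
    for j in range(J):
        step = 2 ** j                  # holes between B3 taps at scale j
        kernel = np.zeros(4 * step + 1)
        kernel[::step] = B3            # upsampled ("a trous") B3 kernel
        pad = 2 * step                 # half-width of the kernel
        padded = np.pad(c, pad, mode="edge")
        c_next = np.convolve(padded, kernel, mode="same")[pad:-pad]
        details.append(c - c_next)     # w_{j+1} = c_j - c_{j+1}
        c = c_next
    return c, details

def chi_s(T3d, r, rmin=0.08, rmax=1.0, J=2):
    """Eq. (8): rms of the non-smooth part over [rmin, rmax] (in R500)."""
    smooth, _ = starlet_smooth(T3d, J)
    sel = (r >= rmin) & (r <= rmax)
    return np.sqrt(np.mean((T3d[sel] - smooth[sel]) ** 2))

r = np.geomspace(0.02, 2.0, 48)
T = 1.2 / (1.0 + (r / 0.4) ** 1.5) + 0.05 * np.sin(40 * r)  # bumpy toy profile
print(chi_s(T / T.mean(), r))  # mean normalisation as a stand-in for T_X scaling
```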
The temperature profiles were first scaled (normalised) by the average mass-weighted temperature in the radial range of [0.15-0.75] R\({}_{500}\) before applying the decomposition operator to calculate \(\chi_{s}\). The right hand panel of Fig. 4 shows the distribution of \(\chi_{s}\) for the full sample, which follows an approximately log-normal distribution. The green and orange hatched regions represent the 20 most smooth profiles and 20 most irregular profiles, respectively, based on the \(\chi_{s}\) criterion. We will refer to these sub-samples as MS20 and MI20 henceforth. In the bottom panel of Fig. 3, we show the corresponding temperature profiles of the MS20 (left panel) and MI20 profiles (right panel) with green and orange lines respectively. Here also, only a few of the clusters with the most smooth profiles are categorised as CC clusters. The correlation between \(\chi_{{}_{D}}\) and \(\chi_{s}\) is shown in Fig. A.1. They are moderately correlated, with a Spearman's correlation coefficient of 0.42 and a \(P\) value of \(5\times 10^{-15}\). Figure 4: Distribution of clusters in the Three Hundred Project as a function of the \(\chi_{\mathrm{D}}\) (Eqn. 6) and \(\chi_{\mathrm{S}}\) (Eqn. 8) criteria. The hatched cyan and magenta regions show the 20 most relaxed clusters and the 20 most disturbed clusters respectively, based on the \(\chi_{\mathrm{D}}\) criterion. The hatched green and orange regions show the 20 most regular profiles and the 20 most irregular profiles respectively, based on the \(\chi_{\mathrm{S}}\) criterion. Figure 5: Smooth (coarse) component of a complex temperature profile derived from the application of the starlet transform with \(J=2\). The bottom panel shows the corresponding difference between true and smooth temperature profiles.

## 3 Neural network model for learning 3D temperature profiles

The deconvolved temperature profile can in principle be obtained by solving the following classical inverse problem \[\mathbf{T}_{\text{2D}}=\mathbf{C}\otimes\mathbf{T}_{\text{3D}}+\mathbf{N}\,, \tag{9}\] where \(\mathbf{C}\) is a non-linear operator (matrix) which represents the observational and instrumental effects (projection, PSF, etc.) and \(\mathbf{N}\) represents the statistical properties of the noise. The standard way of solving Eqn. 9 is to consider least squares regression with some regularisation \(\mathbf{R}\) \[\mathbf{T}_{\text{3D}}^{\text{fit}}=\min_{\mathbf{T}}\ \mathbf{R}(\mathbf{T})+\|\mathbf{T}_{\text{2D}}-\mathbf{C}\otimes\mathbf{T}\|^{2}\, \tag{10}\] where \(\mathbf{T}_{\text{3D}}^{\text{fit}}\) is the best-fitting model profile for \(\mathbf{T}_{\text{3D}}\), which is obtained by optimising the above relation with respect to \(\mathbf{T}\). However, Eqn. 9 is an ill-posed (non-linear) problem, and using standard non-parametric methods does not result in a unique and stable solution. Therefore, one has to resort to advanced deconvolution techniques. In this work, we propose one such algorithm that makes use of neural networks to model the temperature profiles, and whose framework will be explained below. A learning-based regularisation procedure for direct deconvolution using the trained neural network is discussed in Sect. 4. Our approach is based on manifold learning, which stems from the manifold hypothesis, which suggests the existence of a lower dimensional manifold on which real-world data lies (Fefferman et al. 2013). This is evidently the case for galaxy cluster temperature profiles, which clearly display some degree of regularity, as seen in Fig. 4.
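Before introducing the learned model, a small synthetic experiment illustrates why the inverse problem of Eq. (9) demands regularisation. Here \(\mathbf{R}(\mathbf{T})\) in Eq. (10) is taken to be the simplest quadratic (Tikhonov) penalty, and the smoothing operator and profile are toy stand-ins rather than a realistic projection matrix:

```python
import numpy as np

def deconvolve_tikhonov(T2d, C, lam):
    """Eq. (10) with a quadratic regulariser R(T) = lam * ||T||^2:
    the closed-form minimiser of lam*||T||^2 + ||T2d - C T||^2."""
    n = C.shape[1]
    return np.linalg.solve(C.T @ C + lam * np.eye(n), C.T @ T2d)

# Ill-conditioned toy operator: a Gaussian smoothing (projection-like) matrix.
rng = np.random.default_rng(1)
n = 48
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
C = np.exp(-0.5 * ((i - j) / 2.0) ** 2)
C /= C.sum(axis=1, keepdims=True)         # row-normalised smoother

T_true = 1.0 / (1.0 + (np.geomspace(0.02, 2.0, n) / 0.4) ** 1.5)
T2d = C @ T_true + 0.02 * rng.normal(size=n)  # 2% noise on the projected data

naive = np.linalg.solve(C, T2d)               # unregularised: noise blows up
reg = deconvolve_tikhonov(T2d, C, lam=1e-3)   # regularised: stable
print(np.abs(naive - T_true).max(), np.abs(reg - T_true).max())
```

The unregularised inverse amplifies the 2% data noise by many orders of magnitude, while even a crude quadratic penalty keeps the recovery bounded; the IAE approach described next replaces this generic penalty with a learned, physically motivated one.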
The goal is then to find the lower dimensional manifold by learning the underlying structure of the data. When one has access to a large training set (from observations and/or simulations), it may be possible to make use of machine learning (deep learning) methods to build an underlying manifold. However, this becomes quite difficult when the available training samples are sparse, as is the case for cluster temperature profiles. In such cases, rather than learning the underlying manifold structure, Bobin et al. (2023) proposed the Interpolatory Autoencoder (**IAE**), which learns to travel on the manifold by way of interpolation between a limited number of _anchor points_ that belong to it. We assume that any temperature profile in a training set \(\{\mathbf{T}^{i}\}^{i=1,\ldots,n}\), where \(n\) represents the total number of elements in the set, can be interpolated from a small set of \(d\) anchor points \(\{\mathbf{T}^{e}_{a}\}^{e=1,\ldots,d}\) using an appropriate metric \(\mathbf{\Pi}\): \[\mathbf{\Theta}(\mathbf{\Lambda}^{i})=\operatorname*{arg\,min}_{\mathbf{\Theta}}\sum_{e=1}^{d}\lambda_{e}^{i}\,\mathbf{\Pi}(\mathbf{\Theta},\mathbf{T}^{e}_{a})\,, \tag{11}\] where \(\mathbf{\Theta}\) is called the _barycentre_. The elements of the vector \(\mathbf{\Lambda}^{i}=\{\lambda_{1}^{i},\ldots,\lambda_{d}^{i}\}\) are the barycentric weights (\(\sum_{e=1}^{d}\lambda_{e}^{i}=1\)), which are optimised so that the barycentre reproduces \(\mathbf{T}^{i}\). If we consider the metric \(\mathbf{\Pi}\) to be Euclidean, then \[\mathbf{\Theta}(\mathbf{\Lambda}^{i})=\operatorname*{arg\,min}_{\mathbf{\Theta}}\sum_{e=1}^{d}\lambda_{e}^{i}\,\|\mathbf{\Theta}-\mathbf{T}^{e}_{a}\|^{2}. \tag{12}\] The above equation reduces \(\mathbf{\Theta}(\mathbf{\Lambda}^{i})\) to a weighted mean of the anchor points \(\mathbf{T}^{e}_{a}\), so that the best approximation of \(\mathbf{T}^{i}\) is its orthogonal projection onto their span, that is \[\mathbf{T}^{i}\equiv\mathbf{\Theta}(\mathbf{\Lambda}^{i})=\sum_{e=1}^{d}\lambda_{e}^{i}\mathbf{T}^{e}_{a}. \tag{13}\] The problem then reduces to finding (optimising) barycentric weights such that the barycentre \(\mathbf{\Theta}\) accurately reconstructs any input temperature profile in the training sample. However, if the profiles are non-linear, with varying amplitudes and shapes, as is the case with the temperature profiles in galaxy clusters, the standard metric \(\mathbf{\Pi}\) may not yield an appropriate barycentric representation. Our method, therefore, uses the approach proposed by Bobin et al. (2019, 2023), in which a data-driven metric is constructed using a deep learning neural network that is well adapted to build physically relevant barycentres of anchor points. We introduce an auto-encoder-inspired neural network model (Vincent et al. 2010) which learns to transport points (temperature profiles in our case) onto the underlying manifold using a non-linear interpolation scheme between the anchor points. The structure of the neural network we are considering is shown in the left hand panel of Fig. 6. It consists of an encoder (\(\mathbf{\Phi}\)), that takes an input, and a decoder (\(\mathbf{\Psi}\)), that generates the desired output. The role of the encoder is to transform the input data into a lower-dimensional representation, while the decoder is responsible for mapping the lower-dimensional data back into the original space. By performing these mappings, auto-encoders are able to learn the underlying structure of the data.
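One way to make the reduction from Eq. (12) to Eq. (13) explicit (a standard derivation, sketched here for completeness): setting the gradient of the Euclidean objective to zero and using the sum-to-one constraint gives the weighted mean directly,

\[
\nabla_{\mathbf{\Theta}}\sum_{e=1}^{d}\lambda_{e}\,\big\|\mathbf{\Theta}-\mathbf{T}^{e}_{a}\big\|^{2}=2\sum_{e=1}^{d}\lambda_{e}\,\big(\mathbf{\Theta}-\mathbf{T}^{e}_{a}\big)=0\;\Longrightarrow\;\mathbf{\Theta}=\frac{\sum_{e}\lambda_{e}\,\mathbf{T}^{e}_{a}}{\sum_{e}\lambda_{e}}=\sum_{e=1}^{d}\lambda_{e}\,\mathbf{T}^{e}_{a}\,.
\]

Optimising the weights so that \(\mathbf{\Theta}(\mathbf{\Lambda}^{i})\) best matches \(\mathbf{T}^{i}\) is then a linear least-squares problem over the span of the anchor points, i.e. an orthogonal projection.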
In contrast to standard auto-encoders, our model training is performed by minimising the error between the input and the reconstructed training sample according to the Euclidean distance on the manifold spanned by the anchor points in the encoder (feature) space. More precisely, for the encoder \(\mathbf{\Phi}\), the representation \(\mathbf{\Phi}(\mathbf{T}^{i})\) of the input profile \(\mathbf{T}^{i}\) (belonging to the training set) is expressed in terms of the barycentre, \(\mathbf{\Theta}\), in feature space, as an orthogonal projection onto the span of the anchor points \(\mathbf{\Phi}(\mathbf{T}^{e}_{a})\), as given in Eqn. 13: \[\mathbf{\Phi}(\mathbf{T}^{i})\equiv\mathbf{\Theta}(\mathbf{\Lambda}^{i})=\sum_{e=1}^{d}\lambda^{i}_{e}\mathbf{\Phi}(\mathbf{T}^{e}_{a}). \tag{14}\] The barycentric weights are constrained to sum to one so as to avoid certain scaling indeterminacies, and are not necessarily constrained to be positive like actual barycentric weights, which potentially allows us to extrapolate beyond the convex hull of the encoded anchor points. More precisely, the barycentric weights for the \(n\) elements in the training sample are computed as follows: \[\min_{\{\mathbf{\Lambda}^{i}\}}\sum_{i=1}^{n}\left\|\mathbf{\Phi}(\mathbf{T}^{i})-\sum_{e=1}^{d}\lambda^{i}_{e}\mathbf{\Phi}(\mathbf{T}^{e}_{a})\right\|^{2}\text{ s.t. }\sum_{e}\lambda^{i}_{e}=1, \tag{15}\] which can be approximated by taking the solution to the least-squares problem followed by a rescaling of the barycentric weights in order to make them sum to one. Once the optimal barycentric weights (\(\mathbf{\Lambda}^{i}\)) are computed for each element \(\mathbf{T}^{i}\) of the training sample, the approximations (i.e. the barycentres) go back through the decoder \(\mathbf{\Psi}\) to reproduce the input as \(\widetilde{\mathbf{T}}^{i}\!=\!\mathbf{\Psi}(\mathbf{\Theta})=\mathbf{\Psi}(\sum_{e=1}^{d}\lambda^{i}_{e}\mathbf{\Phi}(\mathbf{T}^{e}_{a}))\). The learning stage reduces to estimating the weights and biases of the layers of \(\mathbf{\Phi}\) and \(\mathbf{\Psi}\) using an appropriate cost function that minimises the error between the input, \(\mathbf{T}^{i}\), and the output, \(\mathbf{\Psi}(\sum_{e=1}^{d}\lambda^{i}_{e}\mathbf{\Phi}(\mathbf{T}^{e}_{a}))\), so that \[\min_{\mathbf{\Phi},\mathbf{\Psi}}\ \mu\sum_{i=1}^{n}\left\|\mathbf{T}^{i}-\widetilde{\mathbf{T}}^{i}\right\|^{2}+\sum_{i=1}^{n}\left\|\mathbf{\Phi}(\mathbf{T}^{i})-\mathbf{\Theta}(\mathbf{\Lambda}^{i})\right\|^{2}. \tag{16}\] In the training stage, we thus learn the non-linear interpolation scheme that best approximates the training samples in feature space, and the mapping between the barycentres and real space. The parameter \(\mu\) controls the trade-off between these two objectives. In the evaluation phase only the decoder \(\mathbf{\Psi}(\mathbf{\Theta})\), which embeds the mapping between the barycentric weights and 3D temperature profiles, is used. As shown in the right panel of Fig. 6, the decoder is used as a generative model that is parameterised by the barycentric weights, \(\mathbf{\Lambda}\) (for convenience we drop the index _'i'_ from now on). This model can easily be convolved to fit the observed 2D temperature profile so as to recover the true (3D) temperature profile. From now on, we refer to the decoder as the **IAE** model. The number and choice of anchor points and the model training will be discussed in the following Section.
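The least-squares-plus-rescaling approximation of Eq. (15) described above is simple enough to sketch in a few lines of numpy; the array names here are illustrative stand-ins for the encoded profiles, not part of the released code:

```python
import numpy as np

def barycentric_weights(phi_t, phi_anchors):
    """Approximate the constrained problem of Eq. (15) for one profile:
    solve the unconstrained least-squares fit of phi_t by the encoded
    anchor points, then rescale the weights so they sum to one."""
    # phi_anchors: (d, p) array, one encoded anchor point per row.
    # phi_t: (p,) encoded input profile.
    lam, *_ = np.linalg.lstsq(phi_anchors.T, phi_t, rcond=None)
    return lam / lam.sum()  # rescale so the weights sum to one

# Toy example: d = 5 anchor points in a p = 48 dimensional feature space.
rng = np.random.default_rng(0)
anchors = rng.normal(size=(5, 48))
lam_true = np.array([0.1, 0.4, -0.2, 0.5, 0.2])  # sums to one; signs free
target = lam_true @ anchors
print(barycentric_weights(target, anchors))       # recovers ~ lam_true
```

Note that, as in the text, the weights are allowed to be negative; only the sum-to-one constraint is enforced.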
## 4 Model training and fitting

### Model training

We use a JAX (Bradbury et al., 2018) implementation to develop and train the **IAE** model. As a training sample, we use 200 randomly drawn \(\mathbf{T}_{\text{3D}}\) profiles from the full sample of 315 extracted from the Three Hundred Project simulations. Each profile in the training sample is first normalised so that its entries sum to 1. The model is trained at the same fixed radial binning as that of the \(\mathbf{T}_{\text{3D}}\) profiles in the [0.02-2] \(\text{R}_{\text{500}}\) radial range. For the training stage, several configurations were tested, among which the following choices were found to perform the best: * **Network architecture:** Both the encoder and the decoder are multi-layer perceptron (MLP) neural networks, which are composed of 2 layers, each of which has a number of hidden units equal to the input signal dimension (i.e. 48). We employ a smooth and non-monotonic Mish2 activation function to introduce non-linearity and enhance the learning capacity of our deep neural network model. Since the **IAE** model employs a barycenter transformation of the training sample in encoder space to achieve dimensionality reduction, in this work, we only focus on a specific architecture with a fixed number of neurons per layer, corresponding to the dimension of the input samples. Further exploration of more general architectures is left for future work. For both encoder and decoder, the output \(\mathbf{Z}^{l+1}\) of layer \(l\) can be expressed as Footnote 2: Mish\((x)=x\times\tanh(\ln(1+e^{x}))\) \[\mathbf{Z}^{l+1}=\text{Mish}(\mathbf{W}^{l}\otimes\mathbf{Z}^{l}+\mathbf{b}^{l})+\varepsilon^{l}\mathbf{Z}^{l}. \tag{17}\] Here, the first term represents the standard output of the neural network, with \(\mathbf{W}\) and \(\mathbf{b}\) defined as the weight matrix and bias vector respectively. The second term represents skip connections (He et al., 2015; Huang et al., 2016), also known as residual connections. The skip connection acts by partially re-injecting \(\mathbf{Z}^{l}\), scaled by a layer-dependent factor \(\varepsilon^{l}\). In general, the residual injection factors are typically chosen to be small for low-level layers and larger for deeper layers. This approach helps mitigate the vanishing gradient phenomenon, which is commonly encountered during the training of deep networks. Figure 7: Five anchor points (example profiles), \(\mathbf{T}^{e}_{a}\), where \(e\) runs from 1 to 5, used in the IAE model. For each layer \(l\) of encoder and decoder, we consider the following functional form of \(\varepsilon^{l}\), as used in Bobin et al. (2023): \[\varepsilon^{l}=\varepsilon_{0}\left(2^{1/l}-1\right), \tag{18}\] where \(\varepsilon_{0}\) is a constant factor. By using skip connections with re-injection and layer-dependent scaling, the model can leverage both the direct information flow from earlier layers and the higher-level abstractions learned by the deeper layers, which can lead to improved performance and better training in deep neural networks. * **Cost function:** The cost function defined in Eqn. 16 is composed of two terms. The first term measures the reconstruction error in real space, and the second term defines the error in feature space. The parameter \(\mu\) allows one to tune the trade-off between these two terms. An accurate **IAE** model relies on both a low reconstruction error (i.e. first term of the training loss), and an efficient interpolation scheme in feature space. It has been emphasised in Bobin et al.
(2023) that the second term helps improve the training process by constraining the feature space. In addition, depending on the problem and data at hand, it can help to increase the model accuracy by reducing the interpolation error in feature space, which in turn can reduce noise propagation at inference. In the present case, we noted that the trained model is not particularly sensitive to \(\mu\), which we set to 10 000 to minimise the reconstruction error in real space. * **Training hyper-parameters:** The batch size (the number of training profiles processed together before updating the neural network weights) is fixed to 32. The optimisation is performed by back-propagation using the standard Adam solver (Kingma & Ba, 2014) with a step size of \(10^{-3}\) and a number of epochs equal to 25000. It is customary to further regularise the model by adding noise to the training samples, which limits over-fitting effects. To do that, Gaussian noise with mean zero and standard deviation of \(2\times 10^{-3}\) is added to the samples at the training stage. The batch normalisation was achieved by normalising the input batch using a global mean of 0 and a standard deviation of 1. Finally, we fix the residual parameter (\(\varepsilon_{0}\)) to 0.1. * **Number of anchor points:** Anchor points serve as the basis on which temperature profiles are reconstructed using barycentric weights. Training with a small number of anchor points results in smoother (more regular) profiles; conversely, a large number of anchor points increases the model-to-data fidelity. Thus, the choice of the number of anchor points used during the training stage is essentially equivalent to choosing a regularisation parameter. For our study, the number of anchor points is fixed at five. These are generated by first dividing the training sample into five groups using a \(k\)-means clustering algorithm. The anchor points are then assumed to be the central points (centroids) of these five groups. By using five anchor points, we can ensure that the model-to-data residual remains below 10% over the observable radial range of \(\approx\) [0.02-1] \(R_{500}\), as shown in Sect. 5, and at the same time, we can avoid any possible biases that could be introduced if the observations were shallow (bias-variance problem). Figure 7 shows the anchor points used in the neural network model. In Sect. 5.1.2, we will discuss the effect of increasing the number of anchor points. Table 1 provides a comprehensive summary of our neural network architecture, along with the optimal hyper-parameters used in the study. For our implementation, we used publicly available source code hosted on a GitHub repository3 (Bobin et al., 2019, 2023). Footnote 3: [https://github.com/jobin/IAE](https://github.com/jobin/IAE) Since our simulated sample is small, we use the term 'validation' to refer to testing of the model performance on simulated data (Sec. 5) before using it on real-world data, where the 3D temperature profiles are not directly available. We therefore used the training sample itself to evaluate the convergence of the cost function.
Specifically, we monitored the cost function during training and found that, after approximately 25000 iterations, the cost function reached a point where it became flat. At this stage, we considered the training process to be sufficiently converged, and we terminated the training. \begin{table} \begin{tabular}{c c c c} \hline \hline Layer type & Layer & Activation & Neurons \\ \hline \multirow{4}{*}{Encoder} & Input & - & 48 \\ & Layer 1 & Mish & 48 \\ & Layer 2 & Mish & 48 \\ & Output & - & 48 \\ \hline \multicolumn{4}{c}{Barycenter representation in \(\Lambda\)} \\ \hline \multirow{4}{*}{Decoder} & Input & - & 48 \\ & Layer 1 & Mish & 48 \\ \cline{1-1} & Layer 2 & Mish & 48 \\ \cline{1-1} & Output & - & 48 \\ \hline \hline \end{tabular} \end{table} Table 1: Details on the neural network architecture and hyper-parameters used in this work. \begin{table} \begin{tabular}{c c} \hline \hline Parameter & Range \\ \hline \(\lambda_{1}\) & \(-14;+22\) \\ \(\lambda_{2}\) & \(-10;+15\) \\ \(\lambda_{3}\) & \(-30;+17\) \\ \(\lambda_{4}\) & \(-3;+8\) \\ \(\lambda_{5}\) & \(-15;+10\) \\ \(\alpha\) & \(+100;+530\) \\ \hline \hline \end{tabular} \end{table} Table 2: Flat priors used for the IAE model parameters.

### Model fitting

The **IAE** model is tested/fitted on the validation sample consisting of the remaining 115 galaxy clusters in the sample which were not used in the training stage. We have verified that the validation sample is representative of the full sample: about one-third of the validation clusters have cool cores, and the fractions of relaxed/disturbed clusters and smooth/irregular profiles are similarly distributed in the training and validation samples. We employ Markov Chain Monte Carlo (MCMC) analysis to estimate the parameters of the IAE models and use the publicly available emcee python package (Foreman-Mackey et al., 2013) for this purpose. The parameter estimation is undertaken on all the IAE parameters: the five anchor point weights \(\boldsymbol{\Lambda}=[\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{4},\lambda_{5}]\), and the amplitude (normalisation) parameter \(\alpha\). The deconvolved temperature profile can be obtained from the trained non-parametric IAE model by minimising the following log-likelihood: \[\mathcal{L}(\boldsymbol{\Lambda},\alpha)=\Gamma\times\text{Tr}\left((\boldsymbol{\Lambda}-\bar{\boldsymbol{\Lambda}})^{\top}\otimes\boldsymbol{\Sigma}_{\text{t}}^{-1}\otimes(\boldsymbol{\Lambda}-\bar{\boldsymbol{\Lambda}})\right)+\frac{1}{2}\times\text{Tr}\left((\mathbf{T}^{\text{val}}-\mathbf{T}^{\text{IAE}})^{\top}\otimes\boldsymbol{\Sigma}_{\text{o}}^{-1}\otimes(\mathbf{T}^{\text{val}}-\mathbf{T}^{\text{IAE}})\right), \tag{19}\] where \(\mathbf{T}^{\text{val}}\) is a temperature profile (2D or 3D) in the validation sample to be fitted, \(\boldsymbol{\Sigma}_{\text{o}}\) is the error covariance matrix, \(\boldsymbol{\Sigma}_{\text{t}}\) is the covariance matrix of the training barycentric weights (defined below), and \(\mathbf{T}^{\text{IAE}}=\mathbf{C}\otimes\text{IAE}(\boldsymbol{\Lambda},\alpha)\) is the corresponding convolved IAE model predicted profile. Tr and \(\top\) represent the trace and transpose of the matrix respectively. The first term represents the mean proximity term, with \(\Gamma\) controlling its overall contribution to the likelihood. This enforces the solution to be a barycentre of the example profiles (i.e. it searches for the best approximation of the input signal with respect to the learned model/network).
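For illustration, a minimal emcee sketch of the Eq. (19) fit. The decoder, convolution matrix, data vector and covariances below are placeholder stand-ins for the trained network and real inputs (not the released pipeline), and the flat prior ranges follow Table 2:

```python
import numpy as np
import emcee

# Stand-ins (assumptions): a trained decoder mapping (weights, amplitude)
# to a 3D profile, a convolution matrix C, an observed profile T_val, and
# inverse covariances for the training weights and the data errors.
def iae_decoder(lam, alpha):
    return alpha * np.ones(48)  # placeholder for the trained IAE decoder

C = np.eye(48)
T_val = 5.0 * np.ones(48)
lam_bar = np.zeros(5)
Sigma_t_inv = np.eye(5)
Sigma_o_inv = np.eye(48)
lo = np.array([-14, -10, -30, -3, -15, 100.0])  # flat prior bounds (Table 2)
hi = np.array([22, 15, 17, 8, 10, 530.0])
Gamma = 1.0  # proximity-term weight, fixed to 1 in the text

def log_prob(theta):
    if np.any(theta < lo) or np.any(theta > hi):
        return -np.inf                       # outside the flat priors
    lam, alpha = theta[:5], theta[5]
    r_lam = lam - lam_bar
    r_dat = T_val - C @ iae_decoder(lam, alpha)
    # log posterior = -L of Eq. (19): proximity term + chi-square term
    return -(Gamma * r_lam @ Sigma_t_inv @ r_lam
             + 0.5 * r_dat @ Sigma_o_inv @ r_dat)

ndim, nwalkers = 6, 32
p0 = np.random.uniform(lo, hi, size=(nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 2000, progress=False)
```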
We find that \(\Gamma\) in the range 0.1-1 generally provides good results, and we therefore fix it to 1. \(\bar{\boldsymbol{\Lambda}}\) (the mean value of the \(\boldsymbol{\Lambda}\)s) and \(\boldsymbol{\Sigma}_{\text{t}}\) (the covariance matrix of the \(\boldsymbol{\Lambda}\)s) are computed from the training set by generating 100 Monte Carlo simulations for each cluster with log-normal noise, which are subsequently fitted to the **IAE** model using the Adam optimiser. This cost-effective regularisation strategy is introduced to avoid model extrapolation (physically unrealistic results), and enables us to have a robust and effective deconvolution algorithm. The second term is the standard likelihood related to some additive Gaussian noise perturbation. We have used flat prior distributions, and Tab. 2 shows the prior ranges of all parameters. We used GetDist (Lewis, 2019) with the chains generated by emcee to produce 2D contours and marginal posteriors. The **IAE** model testing was undertaken by fitting it to the \(\boldsymbol{\text{T}}_{\text{3D}}\) profiles and \(\boldsymbol{\text{T}}_{\text{2D}}\) profiles built in Sect. 2. For simplicity, we ignore the PSF in the testing phase. We tested and validated our model by considering three fitting cases: 1. **3D-3D fit with fine binning**: the \(\boldsymbol{\text{T}}_{\text{3D}}\) profiles are directly fitted to recover the best-fitting 3D profiles from the **IAE** model. The goal, in this case, is to assess the ability of the **IAE** model to reproduce the input 3D temperature profile shape. In this case, as there is no projection, \(\mathbf{C}\) in Eqn. 19 is simply an identity matrix of size \(48\times 48\) (\(\boldsymbol{\text{C}}_{48,48}\)). 2. **2D-3D fit with fine binning**: we fitted the 2D projected temperature profiles with the **IAE** model convolved with a projection matrix \(\mathbf{C}\). In this case, we wish to assess how well the **IAE** model recovers the intrinsic 3D temperature profile when only 2D projected data are available. We used the same 2D radial logarithmic binning as that of the \(\boldsymbol{\text{T}}_{\text{3D}}\) profiles, meaning that \(\mathbf{C}\) has dimensions of \(48\times 48\) (\(\boldsymbol{\text{C}}_{48,48}\)). For this testing phase, we assume standard emission measure weighting to calculate the elements of \(\mathbf{C}\). 3. **2D-3D fit with coarse binning**: the 2D projected temperature profiles having coarse logarithmic radial binning of twelve or six points up to R\({}_{500}\) were fitted to the **IAE** model convolved with matrix \(\mathbf{C}\). Here, the goal is to assess the ability of the IAE model to recover the intrinsic 3D temperature profile when only a coarse 2D projected profile, similar to that obtained from present-day observations, is available. In this case \(\mathbf{C}\) has dimensions of \(12\times 48\) (\(\boldsymbol{\text{C}}_{12,48}\)) and \(6\times 48\) (\(\boldsymbol{\text{C}}_{6,48}\)) for the 2D temperature profiles with twelve and six bins respectively. As above, we use standard emission measure weighting to calculate the elements of \(\mathbf{C}\). In Sect. 5.4, we will also consider the Mazzotta et al. (2004) temperature-dependent spectroscopic-like weights. For case 3, which seeks to mimic the typical characteristics of 2D temperature profiles measured with current X-ray satellites, we assume that the uncertainties increase linearly with radius.
Based on our previous experience with XMM-_Newton_ and _Chandra_ observations, we assume temperature profile uncertainties that increase from 5% to 25% in the [0.02-1] R\({}_{500}\) radial range for the 12-bin profiles, and from 10% to 30% for the 6-bin profiles. We built a diagonal error covariance matrix, i.e. \(\boldsymbol{\Sigma}_{\text{o}}\), using this approximation. This was then incorporated in the likelihood and acts as a weighting function, giving more weight to the inner regions in the fit. In general, regardless of whether the errors increase monotonically, the inclusion of errors in the likelihood leads to an overall improvement in the fit. For cases 1 and 2 (fine binning), we do not consider errors in the likelihood and as such \(\boldsymbol{\Sigma}_{\text{o}}\) is an identity matrix. Both model training and fitting a single profile with MCMC can be completed within a few minutes on a 16-core CPU. In the objects where the temperature profiles in the first few inner bins were not reliable (i.e. having \(<100\) gas particles), these bins were not considered in the fitting. However, no such constraint was applied during the training stage, as one expects the network to learn only the fundamental structure of the data rather than the noise.

## 5 Model evaluation

In this Section, we discuss the robustness of the non-parametric **IAE** model reconstruction using different schemes. We check the performance of our model with respect to the radial binning, which is important since the number of radial bins corresponding to the observations is much lower than the resolution of the temperature profiles in the simulated sample. We also consider different weighting schemes in the fit. The model is tested with the 115 temperature profiles in the validation sample. The performance of the model was evaluated by comparing the original 3D and 2D temperature profiles with those recovered from the **IAE** model. For each case, we calculated the median fractional residual and its associated 1-\(\sigma\) dispersion (16th-84th percentile range) at three scaled radii (0.02 R\({}_{500}\), R\({}_{500}\), and 2 R\({}_{500}\)), and over the full radial range. These results are presented in Table 4, and each case is discussed in more detail below.

### 3D-3D reconstruction of temperature profiles

#### 5.1.1 Overall performance

We first consider the simplest case, corresponding to the 3D-3D fit with fine binning, where we directly fitted the **IAE** model to the intrinsic 3D gas mass-weighted temperature profiles (\(\mathbf{T}_{\rm 3D}\)), ignoring projection effects. The left hand panel of Fig. 8 shows the fractional residuals (\(\Delta\mathbf{T_{3D}}/\mathbf{T_{3D}}\)) between the input (true) and recovered temperature profiles for all the individual clusters in the validation sample. The median fractional residual profile along with the 1-\(\sigma\) dispersion (16th-84th percentile range) are also plotted. The median fractional residual profile is found to be close to zero throughout the radial range: at radii 0.02 R\({}_{500}\), R\({}_{500}\), and 2 R\({}_{500}\), the values are \(-0.010\pm 0.060\), \(0.010\pm 0.051\) and \(-0.020\pm 0.120\) respectively. Moreover, the median fractional residual over the full radial range is found to be \(-0.001\pm 0.042\). The 1-\(\sigma\) dispersion in the fractional residuals is nearly constant at around \(\pm 5\%\), except beyond 1.5 R\({}_{500}\).
Within the validation sample, the fractional residuals of the 20 most relaxed / disturbed clusters (MR20 / MD20) are displayed at the top in the right panel of Fig. 8, while the 20 most smooth / irregular profiles (MS20 / MI20) are shown at the bottom. In all cases, the median fractional residuals are again consistent with zero. The 1-\(\sigma\) dispersion in fractional residuals over all radii for the MD20 (MI20) sub-sample is \(\pm 0.045\) (\(\pm 0.053\)), which is larger, as expected, compared to the dispersion of \(\pm 0.032\) (\(\pm 0.029\)) found in the MR20 (MS20) sub-sample. This conclusion is supported by the fact that the histogram of the residuals of the MR20 (MS20) sub-sample is more peaked at zero, and hence narrower, compared to the MD20 (MI20) sub-sample. In general, we find that for disturbed clusters and for irregular profiles, the **IAE** model smooths out the sharp small-scale variations in the 3D temperature profiles.

Figure 8: Fractional residuals for 115 clusters in the validation sample with **IAE** for the 3D-3D fit. The three horizontal dashed black lines represent zero and \(\pm 5\%\) fractional residuals; the vertical dashed black lines represent R\({}_{500}\). Left panel: The grey lines show the individual fractional residuals of all the clusters. The solid black line and shaded black region show the median and 1-\(\sigma\) dispersion of the fractional residual distribution, respectively. The histogram shows the distribution of fractional residuals over all radii. Right panel: The cyan and magenta lines in the top panel show the fractional residuals of the MR20 and MD20 sub-samples, respectively. The green and orange lines in the bottom panel show the fractional residuals of the MS20 and MI20 sub-samples, respectively. Shaded regions show the corresponding 1-\(\sigma\) dispersion of the fractional residual distribution. The histograms show the distribution of fractional residuals over all radii. Regions enclosed by the solid black lines show the 1-\(\sigma\) dispersion of the fractional residual of the full validation sample. The **IAE** model can reconstruct 3D temperature profiles with a fractional difference of about 5% across nearly the full radial range.

Figure 9: Results for the most relaxed / disturbed clusters and for the most smooth / irregular profiles with **IAE** for the 3D-3D fit. Left panel: Dashed cyan and magenta lines show the true 3D temperature profiles of the most relaxed and disturbed clusters, respectively, in the validation sample. Similarly, dashed green and orange lines show the most smooth and irregular true 3D temperature profiles, respectively. The solid lines and the corresponding shaded regions show the median and 1-\(\sigma\) dispersion of the reconstructed temperature profile obtained from the **IAE** model using MCMC.

#### 5.1.2 Anchor point weights, \(\lambda_{i}\)

We have shown above that the **IAE** model is able to recover the average shape of the 3D profiles with high accuracy. In this context, it is interesting to consider how the anchor point weights, \(\lambda\), change according to the characteristics of the profile under consideration; a schematic of the reconstruction is given at the end of this sub-section. Figure 9 shows the temperature profiles of the most relaxed / disturbed clusters in the validation sample, classified according to the \(\chi_{\rm D}\) criterion discussed in Sect. 2.2, and of the most regular / irregular profiles in the validation sample, classified according to the \(\chi_{\rm S}\) criterion introduced in Sect. 2.3.
The reconstructed median temperature profile and fractional residuals obtained with the **IAE** using MCMC are also shown. The **IAE** model produces smoother profiles on small scales by ignoring the fluctuations on such scales. At large scales, the **IAE** model is able to reproduce the underlying structure of the input temperature profiles. The bottom left hand panel shows the fractional residuals, which can be seen to be less than 5% over most of the radial range. In Appendix A.2, the top panel of Fig. A.2 shows the corresponding posterior distribution of the parameters of the **IAE** model obtained using MCMC. The parameters are seen to be well-constrained, and, as anticipated, the most relaxed cluster profile (or the most regular profile) has tighter constraints than the most disturbed cluster (or the most irregular profile), which has noticeably broader contours. Fig. A.3 of Appendix A.2 shows the comparison of the true and reconstructed temperature profiles of 20 example clusters in the validation sample.

We also tested the effect on the **IAE** model of increasing the number of anchor points. We found that the model fidelity can be improved in this way, and that the choice of 20 anchor points reduces the residuals significantly. In Appendix A.2, Fig. A.4, we show the recovered ensemble plot of fractional residuals using the **IAE** model with 20 anchor points for the full validation sample, and for the different sub-samples. There is a significant improvement in the average fractional residual in all the cases. The median of the fractional residuals for the full sample over the entire radial range is found to be \(0.002\pm 0.030\), about 25% smaller compared to the fiducial **IAE** model obtained with five anchor points. However, the usefulness of this higher dimensional model is limited to simulations only. The temperature profiles that can be obtained from current X-ray satellites generally have temperature data at around 8-15 points for typical deep observations. Use of the **IAE** model with 20 anchor points in cases such as this would result in over-fitting and/or large variance.
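As a schematic of what these weights do, the reconstruction can be thought of as a barycentric combination of the anchor profiles in latent space. The following is an illustrative sketch under that assumption; the encoder/decoder and the multiplicative role of the amplitude \(\alpha\) are placeholders, not the exact architecture:

```python
import numpy as np

def iae_generate(lambdas, alpha, anchors, encode, decode):
    """Schematic IAE generator: combine the encoded anchor profiles with
    barycentric weights lambda_i, decode back to real space, and scale.

    lambdas : (n_anchors,) weights (the lambda_i of Table 5)
    alpha   : overall normalisation (assumed multiplicative here)
    anchors : list of anchor temperature profiles
    encode, decode : trained IAE encoder / decoder (placeholders)
    """
    z = sum(lam * encode(a) for lam, a in zip(lambdas, anchors))
    return alpha * decode(z)
```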
### 2D-3D reconstruction of temperature profiles with fine 2D binning

We now discuss the efficiency of the **IAE** model when fitting the 2D (projected) temperature profiles, defined on the same radial grid as in the previous case and on which the **IAE** model is defined (2D-3D fit with fine binning). Here, the 3D **IAE** model is convolved with the standard emission-measure weighting matrix (an illustrative sketch of how such a matrix can be built is given below). The resulting projected model is then fitted to the input 2D temperature profiles, in order to reconstruct the 3D temperature profiles. Since projection results in smoother 2D temperature profiles, washing out fluctuations at small scales, one expects the 3D reconstruction obtained from the 2D profile to be more regular compared to what was found in the previous section. It is also important to note that projection effects are dominant in the inner regions (especially in CC clusters), which can introduce degeneracy into the reconstructed 3D temperature profiles in the central region. However, both the 2D and 3D profiles of CC clusters will always display a central temperature dip. Thus one can expect a larger scatter in the 3D reconstructed temperature profiles in the central regions, as compared to the 3D-3D fitting case.

In Fig. 10, we show the ensemble plot of fractional residuals of the 2D (top panel) and 3D (bottom panel) temperature profiles for the validation sample (left panel) and sub-samples (right panel). The fractional residuals in 2D space (where the fitting is actually performed) are smaller than the 3D temperature residuals, as expected. For the 2D fit, we find median fractional residuals at radii \(0.02\) R\({}_{500}\), R\({}_{500}\), and \(2\) R\({}_{500}\) of \(0.009\pm 0.027\), \(0.004\pm 0.040\) and \(-0.018\pm 0.095\), respectively. The median of the fractional residuals for the full sample over the entire radial range is found to be \(-0.002\pm 0.027\). Unlike in the 3D-3D case, where the dispersion around the median was slightly larger in the outer regions only, here it also increases towards the centre, as expected from the arguments given above. The dispersion is about \(\pm 10\%\) at the first bin. For the 3D reconstruction, we find median fractional residuals at \(0.02\) R\({}_{500}\), R\({}_{500}\), and \(2\) R\({}_{500}\) of \(0.021\pm 0.110\), \(0.014\pm 0.052\) and \(-0.018\pm 0.095\), respectively. The median of the fractional residuals for the full sample over the entire radial range is found to be \(-0.003\pm 0.045\). Moreover, as in the 3D-3D case, here too the histograms of the fractional residuals over all radii of the MR20 and MS20 sub-samples are narrowly peaked compared to the MD20 and MI20 sub-samples, indicating again that the profiles of more relaxed clusters, or intrinsically smoother 2D temperature profiles, are reconstructed with higher fidelity in general.
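The following is an illustrative sketch of how an emission-measure-weighted projection matrix of this kind can be assembled from shell/annulus intersection volumes. The function and array names are assumptions, and the paper's actual construction may differ in detail:

```python
import numpy as np

def sphere_cyl_volume(r, R):
    """Volume shared by a sphere of radius r and a coaxial cylinder of radius R."""
    if r <= R:
        return 4.0 / 3.0 * np.pi * r**3
    return 4.0 / 3.0 * np.pi * (r**3 - (r**2 - R**2) ** 1.5)

def projection_matrix(r_edges_2d, r_edges_3d, ne, weight):
    """Row-normalised projection matrix C with emission-measure weights
    w = ne^2 * V, where V is the volume of each 3D shell intersected by
    each 2D annulus (line-of-sight cylinder).

    ne     : electron density at the 3D shell centres (illustrative)
    weight : extra per-shell factor, e.g. np.ones(...) for plain
             emission-measure weighting, or T**-0.75 for the Mazzotta
             et al. (2004) spectroscopic-like variant (Sect. 5.4)
    """
    n2d, n3d = len(r_edges_2d) - 1, len(r_edges_3d) - 1
    C = np.zeros((n2d, n3d))
    for i in range(n2d):
        for j in range(n3d):
            # inclusion-exclusion over the shell and annulus edges
            v = (sphere_cyl_volume(r_edges_3d[j + 1], r_edges_2d[i + 1])
                 - sphere_cyl_volume(r_edges_3d[j], r_edges_2d[i + 1])
                 - sphere_cyl_volume(r_edges_3d[j + 1], r_edges_2d[i])
                 + sphere_cyl_volume(r_edges_3d[j], r_edges_2d[i]))
            C[i, j] = ne[j] ** 2 * weight[j] * max(v, 0.0)
    return C / C.sum(axis=1, keepdims=True)
```

With twelve 2D annuli and 48 3D shells, this yields a rectangular \(12\times 48\) matrix, consistent with the \(\boldsymbol{\text{C}}_{12,48}\) case described above.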
In the left panel of Fig. 11, we show the recovered temperature profiles for the extreme cases of the most relaxed / disturbed cluster and the most smooth / irregular profiles in the validation sample. As in the 3D-3D case, the difference between the input and recovered temperature profiles is less than 5% over most of the radial range. In Appendix A.2, the bottom panel of Fig. A.2 shows the corresponding posterior distribution of the **IAE** model parameters. Here also, all the parameters are well constrained. Comparison with the equivalent parameter contours for the 3D-3D case, also shown in the plot, shows that, understandably, the 2D-3D reconstruction has slightly broader contours than the 3D-3D one.

Figure 11: Results for the most relaxed / disturbed clusters and for the most smooth / irregular profiles with **IAE** for the 2D-3D fit (fine binning). Left panel: Dashed cyan and magenta lines show the true 3D temperature profiles of the most relaxed and disturbed clusters in the validation sample, respectively. Dashed green and orange lines show the most smooth and irregular true 3D temperature profiles, respectively. The solid lines and the corresponding shaded regions show the median and 1-\(\sigma\) dispersion of the reconstructed temperature profile obtained from the **IAE** model using MCMC. The dotted lines show the 2D temperature profiles actually used in the fitting.

### 2D-3D reconstruction of temperature profiles with an observation-like binning

So far we have tested the **IAE** model only with high resolution simulated temperature profiles. However, real observed 2D temperature profiles have much lower spatial resolution, fewer data points, and are generally measured out to R\({}_{500}\) only. In this Section, we test the accuracy of the **IAE** model in recovering simulated temperature profiles with resolutions similar to those found in current X-ray observations (2D-3D fit with coarse binning). First, we consider a case where we fitted 2D temperature profiles having resolutions similar to that expected from moderately deep X-ray observations. In such observations, we normally expect around twelve annular data points limited up to R\({}_{500}\). We also impose more realistic errors on the 2D temperature profiles: they are assumed to increase linearly with radius, from 5% in the innermost bin to 25% in the outermost bin. Later in this Section, we will also consider a fitting case with 2D temperature profiles defined at only six radial points within R\({}_{500}\), with errors ranging from 10% to 30% from the innermost to the outermost radial bin.

#### 5.3.1 Twelve bin case

In Fig. 12, we show the ensemble plot of the 2D and 3D fractional residuals for the 2D-3D fit with coarse binning, considering twelve 2D temperature data points within R\({}_{500}\). Even with the lower resolution, we find that within the 2D fitting range (i.e. up to R\({}_{500}\)), the 3D fractional residuals are still close to zero, with a 1-\(\sigma\) dispersion of about \(\pm 5\%\), as in the previous cases. The median 3D fractional residuals at radii 0.02 R\({}_{500}\), R\({}_{500}\), and 2 R\({}_{500}\) are found to be \(0.003\pm 0.071\), \(-0.010\pm 0.064\) and \(-0.070\pm 0.185\), respectively. The median of the fractional residuals for the full sample over the entire radial range is found to be \(-0.006\pm 0.051\). Beyond R\({}_{500}\), where no 2D temperature data were available to fit, and thus where the constraints on the 3D reconstruction are due only to projection effects, the scatter increases with radius, reaching a 1-\(\sigma\) dispersion of \(\pm 20\%\) at the last bin (2 R\({}_{500}\)). Moreover, beyond 1.5 R\({}_{500}\), the 3D temperature profiles are underestimated by about 7%. However, it is important to mention that the true 3D temperature profiles mostly lie within the 1-\(\sigma\) dispersion of the reconstructed temperature profiles.
As in the fine binning case, the dispersion in the 2D fractional residuals is much smaller than in the 3D reconstruction. For the 2D fit, we find median fractional residuals at radii 0.02 R\({}_{500}\) and R\({}_{500}\) of \(0.001\pm 0.008\) and \(-0.026\pm 0.073\), respectively. The median of the fractional residuals for the full sample over the entire radial range is found to be \(-0.002\pm 0.026\) for the 2D profiles, similar to that found in the 2D-3D fit with fine binning. Since we assumed that the errors increase radially outwards, as in real observations, more weight is put on the inner regions in the fit, and the constraints in the inner region are consequently better than in the 2D-3D fit with fine binning. For comparison, Fig. A.5 in Appendix A.3 shows the 3D fractional residuals for the case where we do not consider error bars in the fit. Here, we find that the scatter is increased in the inner regions compared to both the 2D-3D fit with fine binning (previous case) and with coarse binning (present case). As in the previous cases, the histogram of the residuals of the MR20 (MS20) sub-sample has a stronger peak around zero and reduced wings compared to the MD20 (MI20) sub-sample. For example, the 1-\(\sigma\) dispersion in 3D fractional residuals over all radii for the MD20 (MI20) sub-sample is found to be \(\pm 0.055\) (\(\pm 0.065\)), while for the MR20 (MS20) sub-sample it is \(\pm 0.041\) (\(\pm 0.036\)).

In the left hand panel of Fig. 13, we show the **IAE** recovered temperature profiles of the most relaxed and disturbed cluster and of the most regular and irregular profile in the validation sample. As in previous cases, here also the difference between the input and recovered temperature profiles is less than 5% in the 2D fitting range of [0.02-1] R\({}_{500}\). Beyond R\({}_{500}\), as expected, the residuals can be high. In Appendix A.3, the top panel of Fig. A.6 shows the corresponding posterior distribution of the parameters. One finds that the confidence intervals for the **IAE** model parameters are larger compared to the fine binning cases (i.e. cases 1 and 2). However, we were still able to put relatively good bounds on the parameters, which are represented by nearly Gaussian posterior distributions. Fig. A.7 in Appendix A.3 shows the comparison of the true 2D and 3D temperature profiles and the reconstructed temperature profiles of 20 example clusters in the validation sample for the twelve bin case.

Figure 12: Fractional residuals for 115 clusters in the validation sample with **IAE** for the 2D-3D fit (coarse binning) using 2D temperature profiles defined at twelve radial bins up to R\({}_{500}\). Colour coding is the same as in Fig. 10. When given 2D temperature profiles with a binning scheme typical for moderately deep X-ray observations, the **IAE** model can still reconstruct 3D temperature profiles with fractional differences of about 5% throughout the 2D fitting range (i.e. [0.02-1] R\({}_{500}\)).

#### 5.3.2 Six bin case

The 2D and 3D fractional residuals for a fit considering only six data points, with errors linearly increasing from 10% in the innermost bin to 30% in the outermost bin in the range [0.02-1] R\({}_{500}\), are shown in Fig. 14. We find that the median 2D and 3D fractional residuals are still consistent with zero in the 2D fitting range.
However, as expected, the 1-\(\sigma\) dispersion is larger compared to the previous cases, and the temperature profiles are underestimated by about 8% beyond 1.5 R\({}_{500}\) (where there are no 2D data). We find median 2D fractional residuals at radii 0.02 R\({}_{500}\) and R\({}_{500}\) of \(-0.006\pm 0.022\) and \(-0.022\pm 0.070\), respectively. For the 3D reconstruction, we find median fractional residuals at 0.02 R\({}_{500}\), R\({}_{500}\), and 2 R\({}_{500}\) of \(0.05\pm 0.128\), \(-0.004\pm 0.090\) and \(-0.080\pm 0.235\), respectively. The median of the fractional residuals for the full sample over the entire radial range is found to be \(-0.008\pm 0.038\) and \(-0.014\pm 0.075\) for the 2D and 3D profiles, respectively.

In the right panel of Fig. 13, we show the temperature profiles of the most relaxed and disturbed cluster and of the most regular and irregular profile in the validation sample. We find that, even with only six data points in the fit, the **IAE** is still able to recover the 3D temperature profiles with residuals of less than 10% over most of the cluster region. However, the confidence intervals of the reconstructed profiles and **IAE** parameters, shown in the bottom panel of Fig. A.6 in Appendix A.3, are larger compared to the previous cases. Finally, Fig. A.8 in Appendix A.3 shows the comparison of the true 2D and 3D temperature profiles and the reconstructed temperature profiles of 20 clusters in the validation sample for the six bin case.

Figure 14: Fractional residuals for 115 clusters in the validation sample with **IAE** for the 2D-3D fit (coarse binning) using 2D temperature profiles defined at six radial bins up to R\({}_{500}\). Colour coding is the same as in Fig. 10. For simplicity, we have not shown the sub-sample cases. Even when input 2D temperature profiles have a binning scheme typical for shallow X-ray observations, the **IAE** model can still reconstruct 3D temperature profiles with fractional differences of about 5% throughout the 2D fitting range (i.e. [0.02-1] R\({}_{500}\)).

Figure 13: Results for the most relaxed and disturbed clusters and for the most smooth and irregular profile with the 2D-3D fit (coarse binning) using 2D temperature profiles defined at twelve (left panel) and six radial bins (right panel) up to R\({}_{500}\). Errors in the 2D temperature profiles are assumed to increase linearly with radius, from 5% (10%) in the innermost bin to 25% (30%) in the outermost bin for the twelve (six) bin case. Colour coding is the same as in Fig. 11.

For comparison, Table 4 provides the median fractional residuals obtained for the different fitting schemes discussed in this Section. Similarly, Table 5 shows the best-fitting parameters of the **IAE** model for the different cases obtained with MCMC. One can see that, as we go from the high resolution simulated profiles to lower resolution observation-like profiles, the dispersion in the fractional residuals and parameter estimates increases. We also checked the model with other binning schemes and found the performance of the **IAE** model to be robust. In particular, we checked the performance by considering five 2D data points up to 0.5 R\({}_{500}\) in the fit. We find that the **IAE** model is able to reproduce the results with an average fractional difference of about 5% up to 0.5 R\({}_{500}\), which then increases with radius and becomes about 10% at R\({}_{500}\) and 25% at 2 R\({}_{500}\).
We also considered an **IAE** model with 20 anchor points, applied to the two observation-like cases, and found that its performance is very similar to that of our fiducial five anchor point **IAE** model, unlike in the 3D-3D case where it was found to perform better. This implies that increasing the number of anchor points does not necessarily increase the model fidelity for these cases, as one must also have higher resolution input 2D temperature profiles for the model to be fitted against.

### 2D-3D reconstruction of temperature profiles with spectroscopic-like weighting

In the previous Sections, we have only focused on 3D temperature reconstruction from the **IAE** model using 2D temperature profiles derived using standard emission-measure weights (Mathiesen & Evrard 2001). In this Section, we consider the more complex spectroscopic-like weighting (Mazzotta et al. 2004), which has a stronger dependence on the 3D temperature profiles (see the sketch at the end of this Section). This makes deconvolution a more complicated problem and, therefore, it is important to check the accuracy of the **IAE** model in this case. In Fig. 15, we show the fractional residuals between the input and **IAE** recovered 2D and 3D temperature profiles for the 2D-3D fit with twelve data points in the range [0.02-1] R\({}_{500}\). We find median fractional residuals at radii 0.02 R\({}_{500}\) and R\({}_{500}\) of \(0.002\pm 0.008\) and \(-0.027\pm 0.065\), respectively, for the 2D profiles. For the 3D reconstruction, we find median fractional residuals at 0.02 R\({}_{500}\), R\({}_{500}\), and 2 R\({}_{500}\) of \(0.040\pm 0.072\), \(-0.003\pm 0.065\) and \(-0.060\pm 0.180\), respectively. We see that on average there is a small but noticeable 4% overestimation of the 3D temperature profiles in the first four radial bins. This could be caused by the presence of dense and cold substructures, which in the simulated objects could lower the central value of the 3D spectroscopic-like temperature in the innermost region, where the impact of this formulation is the strongest (see e.g. Fig. 3 of Rasia et al. (2014)). Similarly, beyond R\({}_{500}\) the temperature profiles are underestimated by 8% on average. This effect could also play a role in the central mismatch: since the convolution is temperature dependent, the slight overestimation in the first few innermost bins may also be linked to the underestimation of the temperature profiles in the outermost bins. This underlines the importance of deriving accurate estimates of the temperature profiles beyond the 2D fitting range. A more detailed treatment is beyond the scope of this paper, and we leave the investigation of these possible explanations to future work. However, we do find that the median residual is consistent with zero over the full radial range of [0.02-2] R\({}_{500}\), and, as in the previous cases, for the majority of the clusters the true 3D temperature profiles lie within the 1-\(\sigma\) dispersion of the **IAE** recovered temperature profiles. The median of the fractional residuals for the full sample over the entire radial range is found to be \(-0.003\pm 0.038\) and \(-0.003\pm 0.075\) for the 2D and 3D profiles, respectively.

\begin{table}
\begin{tabular}{c c}
\hline \hline
Parameter & Range \\
\hline
\(T_{0}\) & \(0;+4\) \\
\(\tau\) & \(+0.2;+1\) \\
\(\log(R_{cool}/R_{500})\) & \(-2.5;0\) \\
\(a_{cool}\) & \(0;+4\) \\
\(\log(R_{t}/R_{500})\) & \(-1;+1\) \\
\(a\) & \(0;+0.6\) (\(0;+0.1\)) \\
\(b\) & \(+1;+4\) \\
\(c\) & \(0;+4\) (\(+1;+4\)) \\
\hline
\end{tabular}
\end{table}
Table 3: Flat priors used for the Vikhlinin et al. (2006) model parameters. Values in parentheses denote the optimal priors discussed in Sect. 5.5.

Figure 15: Fractional residuals for 115 clusters in the validation sample with **IAE** for the 2D-3D fit (coarse binning) using spectroscopic-like 2D temperature profiles defined at twelve radial bins up to R\({}_{500}\). For simplicity, we have not shown the sub-sample cases.
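The Mazzotta et al. (2004) weighting that drives the behaviour discussed above can be summarised in a few lines; this is an illustrative sketch with assumed per-shell array names:

```python
import numpy as np

def spectroscopic_like_T2d(T3d, ne, vol):
    """Spectroscopic-like projection of one annulus (Mazzotta et al. 2004):
    weights w = ne^2 * T^{-3/4}, instead of the emission-measure w = ne^2.

    T3d, ne, vol : per-shell temperature, density and intersected volume
    along the line of sight through the annulus (illustrative names).
    """
    w = ne**2 * vol * T3d**(-0.75)
    return np.sum(w * T3d) / np.sum(w)
```

Because the weights depend on the temperature being reconstructed, the effective matrix \(\mathbf{C}\) is no longer fixed: it must be re-evaluated for each candidate **IAE** profile during the fit, which is what makes this deconvolution more complicated than the emission-measure case.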
### Comparison of the **IAE** model to a parametric model

In this Section, we use the validation sample of 115 clusters to compare the non-parametric results from the **IAE** model to those obtained from a parametric temperature model. We first obtain the best-fitting 3D temperature from the Vikhlinin et al. (2006) model (Eqn. 2), considering the prior range on each parameter given in Table 3, and using the same binning schemes as used for the **IAE** model in the previous sections, assuming a spectroscopic-like weighting scheme. Temperature profiles were first scaled by \(\rm T_{X}\) before fitting them to the parametric model, so as to bring the parameter \(T_{0}\) to a comparable scale. We find that in the 2D-3D (or 3D-3D) fine binning case, the 3D reconstruction is poor compared to the observation-like cases, where the fitting is weighted according to the errors, which increase with radius. We also tried to fit the temperature profiles in log space, which could effectively address any heteroscedasticity issues and stabilise the variance over the large radial range. However, this still did not improve the model reconstruction in the 2D-3D (or 3D-3D) fine binning case. This indicates that such a parametric model struggles to accurately capture the true underlying patterns in the noiseless data, or when the noise covariance is negligible.

\begin{table}
\begin{tabular}{l c c c c c}
\hline \hline
Sample & Case & 0.02 \(\rm R_{500}\) & \(\rm R_{500}\) & \(\rm 2\,R_{500}\) & Full range \\
\hline
\multicolumn{6}{c}{3D-3D fit with fine binning (48 data points)} \\
\hline
Full sample & 3D & \(-0.010\pm 0.060\) & \(0.010\pm 0.051\) & \(-0.020\pm 0.120\) & \(-0.001\pm 0.042\) \\
MD20 & 3D & \(-0.018\pm 0.017\) & \(0.014\pm 0.085\) & \(0.093\pm 0.128\) & \(0.000\pm 0.048\) \\
MR20 & 3D & \(0.035\pm 0.065\) & \(0.014\pm 0.035\) & \(-0.060\pm 0.045\) & \(-0.002\pm 0.032\) \\
MI20 & 3D & \(0.003\pm 0.060\) & \(0.027\pm 0.061\) & \(-0.125\pm 0.132\) & \(0.001\pm 0.052\) \\
MS20 & 3D & \(-0.033\pm 0.048\) & \(0.002\pm 0.045\) & \(-0.037\pm 0.110\) & \(-0.001\pm 0.029\) \\
\hline
\multicolumn{6}{c}{2D-3D fit with fine binning (48 data points)} \\
\hline
Full sample & 2D & \(0.009\pm 0.027\) & \(0.004\pm 0.040\) & \(-0.018\pm 0.095\) & \(-0.002\pm 0.027\) \\
Full sample & 3D & \(0.021\pm 0.110\) & \(0.014\pm 0.052\) & \(-0.018\pm 0.095\) & \(-0.003\pm 0.045\) \\
MD20 & 2D & \(-0.001\pm 0.037\) & \(-0.009\pm 0.065\) & \(0.067\pm 0.105\) & \(0.000\pm 0.033\) \\
MD20 & 3D & \(-0.021\pm 0.120\) & \(0.021\pm 0.088\) & \(0.067\pm 0.105\) & \(-0.002\pm 0.058\) \\
MR20 & 2D & \(0.020\pm 0.035\) & \(0.016\pm 0.031\) & \(-0.060\pm 0.037\) & \(-0.003\pm 0.023\) \\
MR20 & 3D & \(0.089\pm 0.120\) & \(0.015\pm 0.035\) & \(-0.050\pm 0.037\) & \(-0.003\pm 0.035\) \\
MI20 & 2D & \(0.019\pm 0.015\) & \(0.014\pm 0.035\) & \(-0.106\pm 0.120\) & \(0.000\pm 0.038\) \\
MI20 & 3D & \(0.049\pm 0.135\) & \(0.025\pm 0.070\) & \(-0.106\pm 0.120\) & \(-0.001\pm 0.064\) \\
MS20 & 2D & \(0.010\pm 0.013\) & \(0.006\pm 0.040\) & \(-0.036\pm 0.065\) & \(-0.002\pm 0.018\) \\
MS20 & 3D & \(0.015\pm 0.080\) & \(0.012\pm 0.043\) & \(-0.036\pm 0.065\) & \(-0.002\pm 0.031\) \\
\hline
\multicolumn{6}{c}{2D-3D fit with coarse binning (12 data points)} \\
\hline
Full sample & 2D & \(0.001\pm 0.008\) & \(-0.002\pm 0.073\) & - & \(-0.002\pm 0.026\) \\
Full sample & 3D & \(0.003\pm 0.071\) & \(-0.010\pm 0.064\) & \(-0.070\pm 0.185\) & \(-0.006\pm 0.051\) \\
MD20 & 2D & \(-0.003\pm 0.010\) & \(-0.009\pm 0.065\) & - & \(-0.004\pm 0.028\) \\
MD20 & 3D & \(0.005\pm 0.045\) & \(-0.010\pm 0.080\) & \(0.113\pm 0.243\) & \(-0.006\pm 0.058\) \\
MR20 & 2D & \(-0.001\pm 0.007\) & \(-0.024\pm 0.056\) & - & \(-0.003\pm 0.021\) \\
MR20 & 3D & \(0.069\pm 0.063\) & \(0.006\pm 0.050\) & \(-0.138\pm 0.101\) & \(-0.006\pm 0.042\) \\
MI20 & 2D & \(-0.002\pm 0.007\) & \(-0.001\pm 0.051\) & - & \(-0.003\pm 0.030\) \\
MI20 & 3D & \(-0.014\pm 0.047\) & \(0.012\pm 0.050\) & \(-0.174\pm 0.175\) & \(-0.007\pm 0.065\) \\
MS20 & 2D & \(-0.000\pm 0.006\) & \(-0.025\pm 0.060\) & - & \(0.000\pm 0.018\) \\
MS20 & 3D & \(0.034\pm 0.055\) & \(0.005\pm 0.060\) & \(-0.080\pm 0.165\) & \(0.000\pm 0.036\) \\
\hline
\multicolumn{6}{c}{2D-3D fit with coarse binning (6 data points)} \\
\hline
Full sample & 2D & \(-0.006\pm 0.022\) & \(-0.002\pm 0.070\) & - & \(-0.008\pm 0.038\) \\
Full sample & 3D & \(0.050\pm 0.128\) & \(-0.001\pm 0.090\) & \(-0.080\pm 0.235\) & \(-0.014\pm 0.075\) \\
MD20 & 2D & \(-0.008\pm 0.027\) & \(-0.016\pm 0.070\) & - & \(-0.009\pm 0.040\) \\
MD20 & 3D & \(0.005\pm 0.060\) & \(0.002\pm 0.115\) & \(0.011\pm 0.255\) & \(-0.016\pm 0.062\) \\
MR20 & 2D & \(-0.016\pm 0.015\) & \(-0.018\pm 0.062\) & - & \(-0.009\pm 0.032\) \\
MR20 & 3D & \(0.073\pm 0.130\) & \(0.009\pm 0.075\) & \(-0.131\pm 0.140\) & \(-0.016\pm 0.060\) \\
\hline
\end{tabular}
\end{table}
Table 4: Median fractional residuals and associated 1-\(\sigma\) dispersions (16th-84th percentile range) at 0.02 R\({}_{500}\), R\({}_{500}\), 2 R\({}_{500}\), and over the full radial range, for the fitting cases and (sub-)samples considered in this Section. A dash indicates radii with no 2D data.
By weighting the fitting according to the errors, which reflect the inherent uncertainties in the data and which increase with radial distance, the model can better adapt to the complexities of the data, resulting in improved performance. The significant improvement achieved by incorporating the error covariance can be observed visually in Fig. 16. Even with coarse resolution, as discussed in the next paragraph, the fit shows a remarkable enhancement when a realistic error covariance is considered during the fitting process. Another reason for the sub-optimal performance of the parametric model is its highly non-linear nature and the strong degeneracy between its parameters. This results in poor constraints on the parameters, and the reconstructed 3D temperature profiles could depend strongly on the choice of fitting priors.

The arguments discussed above can be illustrated with Fig. 16. The top panel of Fig. 16 shows the dispersion for the 2D-3D fine and coarse binning cases with prior ranges of parameters \(a=0-0.6\) and \(c=0-4\), which have a significant effect on the profiles in the central and outer regions, respectively. We find, for the 2D-3D fine binning case, that the 3D reconstructed temperature profiles obtained from this parametric fitting have a large bias in both the central and outer regions, with median fractional residuals of about 30% and 11% at the first and last bin, respectively. For observation-like binning, with weighted fitting, the bias in the central regions becomes consistent with zero; however, there is still a bias beyond R\({}_{500}\), which increases with radius up to a median fractional residual of about 18%.
We find that the optimal priors for parameters \(a\) and \(c\) are \(a=0-0.1\) and \(c=1-4\), leading to a minimal bias in the central and outer regions, respectively. This is shown in the bottom panel of Fig. 16, where one finds a median consistent with zero, but with slightly larger dispersion compared to the **IAE** model for the observation-like cases. In the outer regions, however, the dispersion in the 2D-3D fine binning case is barely consistent with zero for the parametric model.

Figure 16: The 1-\(\sigma\) dispersion in the 3D fractional differences obtained with MCMC for the priors provided in Table 3 for the Vikhlinin et al. (2006) parametric model (Eqn. 2). In the figure, we consider the 2D-3D fine binning case and the 2D-3D observation-like coarse binning cases with twelve and six bins. The top panel shows the results with prior ranges \(a=0-0.6\) and \(c=0-4\), while the bottom panel presents the results with prior ranges \(a=0-0.1\) and \(c=1-4\). The regions enclosed by cyan and magenta lines in the bottom panel show the corresponding dispersion recovered with the **IAE** model for the observation-like cases with twelve and six bins, respectively.

Considering the optimal priors for the \(a\) and \(c\) parameters discussed above, the left panel of Fig. 17 shows the reconstruction of the 3D temperature profiles with the **IAE** and parametric models for typical CC and NCC clusters in the simulated sample with observation-like binning having twelve bins. While the CC profile is recovered well by both models, the reconstruction is poor in the central region for the parametric fit to the NCC case, and would require larger values of \(a\) to improve the fit in the central region. Similarly, in the right panel of Fig. 17, we show the 3D reconstruction of two complex profiles. These two clusters are experiencing ongoing merger shocks. Here one sees that, in such scenarios, the parametric model performs poorly compared to the **IAE** model, being unable to capture the true underlying structure of the data. We find that even widening the priors on \(a\) and \(c\) did not significantly improve the parametric fit for such complex profiles. The accurate estimation of the shape of the temperature profile is vital, since the estimation of the total mass profile depends on it.

\begin{table}
\begin{tabular}{l c c c c c c}
\hline \hline
Case & \(\lambda_{1}\) & \(\lambda_{2}\) & \(\lambda_{3}\) & \(\lambda_{4}\) & \(\lambda_{5}\) & \(\alpha\) \\
\hline
\multicolumn{7}{c}{3D-3D fit with fine binning (48 data points)} \\
\hline
Most disturbed cluster & \(0.0\pm 1.1\) & \(0.03\pm 0.77\) & \(-0.2\pm 1.4\) & \(0.34^{+0.23}_{-0.23}\) & \(0.91\pm 0.69\) & \(380.9\pm 7.0\) \\
Most relaxed cluster & \(3.0\pm 2.2\) & \(2.2\pm 1.5\) & \(-2.7\pm 2.9\) & \(0.54^{+0.23}_{-0.23}\) & \(-2.0\pm 1.5\) & \(143.6\pm 9.4\) \\
Most irregular profile & \(1.8\pm 1.0\) & \(1.43\pm 0.71\) & \(-1.3\pm 1.4\) & \(0.14\pm 0.33\) & \(-1.08\pm 0.66\) & \(284.5\pm 7.1\) \\
Most smooth profile & \(3.2^{+2.2}_{-2.4}\) & \(2.6\pm 1.8\) & \(-4.0\pm 3.5\) & \(0.81\pm 0.79\) & \(-1.5\pm 1.7\) & \(129^{+12}_{-12}\) \\
\hline
\multicolumn{7}{c}{2D-3D fit with fine binning (48 data points)} \\
\hline
Most disturbed cluster & \(0.2\pm 1.2\) & \(0.16\pm 0.82\) & \(-0.5\pm 1.6\) & \(0.40^{+0.35}_{-0.67}\) & \(0.78\pm 0.78\) & \(381.7\pm 8.0\) \\
Most relaxed cluster & \(2.8^{+1.2}_{-1.2}\) & \(-2.14\pm 0.2\) & \(-4.2\pm 0.2\) & \(-0.04^{+0.03}_{-0.04}\) & \(-1.8\pm 1.5\) & \(144^{+13}_{-15}\) \\
Most irregular profile & \(2.4^{+2.1}_{-1.3}\) & \(1.78\pm 0.93\) & \(-2.0\pm 2.0\) & \(0.44^{+0.12}_{-0.11}\) & \(-1.58^{+1.1}_{-1.7}\) & \(289\pm 11\) \\
Most smooth profile & \(3.0^{+2.0}_{-3.5}\) & \(2.5^{+0.1}_{-3.5}\) & \(-3.6^{+0.5}_{-1.9}\) & \(0.69^{+0.76}_{-0.80}\) & \(-1.6\pm 1.7\) & \(130^{+21}_{-21}\) \\
\hline
\multicolumn{7}{c}{2D-3D fit with coarse binning (12 data points)} \\
\hline
Most disturbed cluster & \(-0.6^{+3.0}_{-1.0}\) & \(-0.4^{+2.2}_{-0.1}\) & \(0.5^{+3.0}_{-0.3}\) & \(0.20\pm 0.77\) & \(1.3^{+1.7}_{-1.7}\) & \(378\pm 16\) \\
Most relaxed cluster & \(0.9^{+2.6}_{-2.9}\) & \(0.8^{+0.1}_{-0.0}\) & \(-0.3^{+1.8}_{-0.1}\) & \(0.15^{+0.81}_{-0.62}\) & \(-0.6^{+2.3}_{-2.3}\) & \(141^{+0.8}_{-1.1}\) \\
Most irregular profile & \(0.5^{+3.2}_{-2.8}\) & \(0.4^{+2.0}_{-1.2}\) & \(0.3^{+3.7}_{-3.7}\) & \(0.01\pm 0.87\) & \(-0.1\pm 2.1\) & \(282\pm 13\) \\
Most smooth profile & \(1.9^{+2.1}_{-2.8}\) & \(1.9^{+2.1}_{-1.8}\) & \(-2.6^{+2.7}_{-2.6}\) & \(0.56^{+0.53}_{-0.91}\) & \(-0.8\pm 2.0\) & \(131^{+11}_{-15}\) \\
\hline
\multicolumn{7}{c}{2D-3D fit with coarse binning (6 data points)} \\
\hline
Most disturbed cluster & \(-0.6^{+2.9}_{-1.5}\) & \(-0.5^{+2.1}_{-2.4}\) & \(0.5^{+3.2}_{-3.2}\) & \(0.31^{+0.24}_{-0.26}\) & \(1.3^{+1.7}_{-1.7}\) & \(380\pm 16\) \\
Most relaxed cluster & \(0.6^{+3.0}_{-1.5}\) & \(0.5^{+3.7}_{-2.4}\) & \(0.4\pm 0.7\) & \(0.08\pm 0.96\) & \(-0.5^{+3.7}_{-2.5}\) & \(141^{+15}_{-22}\) \\
Most irregular profile & \(-0.3\pm 3.7\) & \(-0.1\pm 2.7\) & \(1.0\pm 4.7\) & \(0.0\pm 1.0\) & \(0.4\pm 2.5\) & \(281\pm 24\) \\
Most smooth profile & \(0.8^{+3.4}_{-3.4}\) & \(0.9^{+2.0}_{-2.4}\) & \(-1.0^{+4.9}_{-4.0}\) & \(0.34\pm 0.96\) & \(-0.1\pm 2.4\) & \(129^{+12}_{-12}\) \\
\hline
\end{tabular}
\end{table}
Table 5: Best-fitting **IAE** parameters derived with MCMC for the fitting schemes and samples considered in Sects. 5.1, 5.2 and 5.3.

## 6 First application to CHEX-MATE X-ray data

### Modifications to the **IAE** model

Although the Three Hundred Project provides us with one of the highest resolution hydrodynamical simulation samples to date, due to numerical issues the thermal profiles could only be reliably estimated above 0.02 R\({}_{500}\) for most of the galaxy clusters in the sample. The number of available 2D annular temperature data points and their radial distribution will depend on the object mass and luminosity, the presence or absence of a cool core, and the depth of the observation (see footnote 4). From our experience of X-ray analysis of typical observations of local (\(z<0.5\)) massive (\(M_{500}>10^{14}\) M\({}_{\odot}\)) galaxy clusters available in the XMM-_Newton_ or _Chandra_ archives, we find that for many objects one is generally able to obtain some temperature data points interior to 0.02 R\({}_{500}\) (corresponding to \(20-40\arcsec\) at \(z=0.05\) and \(5-10\arcsec\) at \(z=0.3\) for typical cluster masses).

Footnote 4: See Chen et al. (2023) for a discussion of an optimal binning method.

Therefore, in order to make the best use of the available data, one needs to look for an optimal extrapolation of the **IAE** model that is able to reconstruct the temperature profiles robustly even in the very central regions. To build an **IAE** model that is suitable for application to such observations, we first extrapolated the simulated temperature profiles down to 0.005 R\({}_{500}\) by fitting a Vikhlinin et al. (2006) parametric model in the inner regions (up to 0.5 R\({}_{500}\)); a sketch of this parametric form is given below.
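For reference, a minimal sketch of the Vikhlinin et al. (2006) temperature model used for this extrapolation (and in Sect. 5.5), written in its standard form. The identification \(\tau=T_{\rm min}/T_{0}\) is an assumption of this sketch, and the paper's Eqn. 2 should be taken as authoritative:

```python
import numpy as np

def vikhlinin_T3d(r, T0, tau, r_cool, a_cool, r_t, a, b, c):
    """Vikhlinin et al. (2006) 3D temperature profile: a central cool-core
    term times an outer power-law decline. r and the scale radii are in
    units of R500 (cf. the parameters and priors of Table 3)."""
    x = (r / r_cool) ** a_cool
    t_cool = (x + tau) / (x + 1.0)                       # cool-core dip
    t_out = (r / r_t) ** (-a) / (1.0 + (r / r_t) ** b) ** (c / b)
    return T0 * t_cool * t_out

# Example: evaluate on a log grid reaching down to 0.005 R500
r = np.geomspace(0.005, 0.5, 30)
T = vikhlinin_T3d(r, T0=2.0, tau=0.5, r_cool=0.1, a_cool=2.0,
                  r_t=0.5, a=0.0, b=2.0, c=1.0)  # illustrative values
```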
We then re-trained the **IAE** model over the full radial range of [0.005-2] R\({}_{500}\) with the simulated dataset, augmented by the parametric model extrapolation in the very central regions.

### Observed sample

We then used this updated **IAE** model on the latest CHEX-MATE Data Release 1 (DR1) sample (Rossetti et al. 2023) to deconvolve the temperature profiles. The DR1 sample is a 'technical but representative' sub-sample, which was built to test our pipeline for the extraction and reconstruction of the radial temperature and density profiles. It is composed of 30 clusters, whose distributions in mass, redshift, and Planck signal-to-noise ratio (S/N) reflect the properties of the CHEX-MATE parent sample. In Appendix A.4, Table A.1 provides the details of all the clusters in the DR1 sample.

For data reduction and analysis, we used the XMM-Newton Science Analysis System (SAS), version 16.1. We refer to Bartalucci et al. (2023) for details on the data reduction procedures (calibration, standard pattern cleaning, removal of noisy MOS CCDs, and light-curve filtering) and on the detection of contaminating sources. From the EPIC images in the 0.7-1.2 keV band, we extracted both mean and median surface brightness radial profiles, centered on the peak and on the centroid within \(R_{500}\). For the temperature profiles, we extract spectra in concentric annuli centered on the surface brightness peak, using the MOS-spectra and PN-spectra ESAS tools (Snowden et al., 2008) embedded in SAS. For each region, we perform a joint fit of the MOS1, MOS2, and PN spectra with an absorbed thermal model, to which we add a model for all the background components (Galactic foregrounds, CXB, cosmic-ray particle background, residual soft protons). We estimate priors for the parameters of this background model, which are allowed to vary within their uncertainty during the joint fit with the cluster parameters, running the Markov Chain Monte Carlo method within XSPEC (see Rossetti et al., 2023, for more details). In this work, two clusters (PSZ2 G046.88+56.48 and PSZ2 G057.78+52.32) that require background treatment using offset observations were not considered in the analysis.

Figure 17: CC and NCC model recovery comparison. Left panel: Comparison of the 3D temperature profiles of typical CC and NCC clusters in the Three Hundred Project sample recovered with the **IAE** and parametric models using twelve 2D annuli within R\({}_{500}\) (points with error bars). The dashed lines show the true 3D temperature profiles. The solid lines and shaded regions show the reconstructed 3D temperature profiles with 1-\(\sigma\) dispersion obtained with the **IAE** model. The dotted lines are the 3D temperature profiles recovered with the Vikhlinin et al. (2006) parametric model. For better visibility, the 1-\(\sigma\) dispersion for the parametric model is not shown. Right panel: 3D temperature profile reconstruction with the **IAE** and parametric models for two complex cases in the Three Hundred Project. For better visibility, 2D profiles and the 1-\(\sigma\) dispersion are not shown. For both figures, the bottom panel shows the fractional difference between the true and recovered 3D profiles. For NCC and CC clusters, the reconstructions of the **IAE** model and of the parametric model with optimal priors are comparable, but the former exhibits slightly better performance. For the complex cases, the **IAE** model is more accurate in recovering the profile shapes.
### Method

For the deconvolution of these observed profiles, we assume that the 3D temperature profiles can be represented by the **IAE** model, convolved with a response matrix \(\mathbf{C}=\mathbf{C}_{\text{PSF}}\otimes\mathbf{C}_{\text{proj}}\), which simultaneously takes into account projection and PSF redistribution. The projection matrix, \(\mathbf{C}_{\text{proj}}\), is built using the DR1 density profiles from Duffy et al. (2023, in prep.), derived using the non-parametric deconvolution algorithm of Croston et al. (2006). More details of the derivation of the density profiles can be found in Croston et al. (2008) and Pratt et al. (2022). \(\mathbf{C}_{\text{PSF}}\) is constructed as in Croston et al. (2006), using the parametric PSF model of Ghizzardi (2001) as a function of energy and angular offset, the parameters of which can be found in EPIC-MCT-TN-011 and EPIC-MCT-TN-012 (see footnotes 5 and 6).

Footnote 5: [http://www.iasf-milano.inaf.it/~simona/pub/EPIC-MCT/EPIC-MCT-TN-011.pdf](http://www.iasf-milano.inaf.it/~simona/pub/EPIC-MCT/EPIC-MCT-TN-011.pdf)

Footnote 6: [http://www.iasf-milano.inaf.it/~simona/pub/EPIC-MCT/EPIC-MCT-TN-012.pdf](http://www.iasf-milano.inaf.it/~simona/pub/EPIC-MCT/EPIC-MCT-TN-012.pdf)

The **IAE** model was then projected, taking into account the spectroscopic-like weighting scheme proposed by Mazzotta et al. (2004), and fitted to the observed 2D profiles. In future work, we will examine the more complex Vikhlinin (2006) weighting scheme, which is more robust for lower temperature clusters/groups, and compare the results to other weighting schemes.
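Schematically, each MCMC step then evaluates the following forward model; this is an illustrative sketch (all names are placeholders, and the spectroscopic-like weights inside \(\mathbf{C}_{\text{proj}}\) are treated here as precomputed for simplicity):

```python
import numpy as np

def forward_model(theta, decode, C_proj, C_psf):
    """Sketch of the DR1 forward model: IAE latent parameters -> 3D profile
    -> projection to 2D annuli -> PSF redistribution between annuli."""
    T3d = decode(theta)        # IAE model on the [0.005-2] R500 grid
    T2d = C_proj @ T3d         # projection (weighted as described above)
    return C_psf @ T2d         # PSF mixing between annuli
```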
### Results

#### 6.4.1 Estimation of profiles

In Fig. 18, we show the 3D temperature profiles reconstructed using the **IAE** model and the Vikhlinin et al. (2006) 8-parameter parametric model for a typical NCC and a typical CC cluster in the DR1 sample. In general, we find that, with the annular resolution of the present 2D profiles, both models produce similar reconstructed 3D temperature profiles. However, the parameters of the Vikhlinin et al. (2006) model are poorly constrained, and the final reconstructed temperature profiles (especially in the inner and outer regions) may depend on the chosen priors.

Figure 18: Comparison of the scaled 2D and 3D temperature profiles of a typical NCC (PSZ2 G050.40+31.17) and CC (PSZ2 G057.92+27.64) cluster in the DR1 sample recovered with the **IAE** and parametric models. Solid lines and the associated shaded regions show the median and 1-\(\sigma\) dispersion of the reconstructed 3D temperature profile obtained with MCMC. Regions enclosed by the dashed lines represent the corresponding 1-\(\sigma\) dispersion of the 2D temperature profiles fitted to the observed 2D data (black dots). In line with our results with simulations for observation-like cases, we find that both the **IAE** model and the parametric model with optimal priors generate comparable profiles for NCC and CC clusters.

Figure 19 shows the 3D temperature profiles of the clusters in the DR1 sample obtained with the **IAE** model, scaled by the average temperature (T\({}_{\rm X}\)) in the [0.15-0.75] R\({}_{500}\) region. We find that the fractional dispersion is about 22% in the inner region; it first decreases with radius, attaining a minimum value of 3% at around 0.5 R\({}_{500}\), and then starts to increase with radius, reaching a maximum value of 22% in the outer regions. Also plotted in the sub-panel is the ratio of the 3D temperature profiles recovered with the **IAE** and parametric models. One finds that within the radial range of [0.1-1] R\({}_{500}\), the difference between the **IAE** and Vikhlinin et al. (2006) models is less than 10%. The difference between them can be as high as 25% in the inner and outer regions. However, on average both models predict very similar profiles, with a difference of less than 2% over the entire radial range of [0.005-2] R\({}_{500}\).

Figure 19: Scaled 3D temperature profiles of the DR1 sample recovered with the **IAE** model. Also shown in the bottom panel is the ratio of the 3D temperature profiles recovered with the **IAE** model to the parametric models. For better visibility, the error bars corresponding to the individual profiles are not shown. The black lines and grey shaded regions represent the median and 1-\(\sigma\) dispersion of the sample. The difference between the **IAE** model and the parametric model can be as high as 20%, although the average ratio between them remains close to unity.

As a consistency check, we compared the values of the average temperature in the [0.15-0.75] R\({}_{500}\) region. Figure 20 shows the observed T\({}_{\rm X}\) compared to T\({}_{\rm X,model}\), the temperature derived from a projection of the 3D non-parametric **IAE** and parametric models in the same annulus. Fitting a straight line to the (T\({}_{\rm X,model}\), T\({}_{\rm X}\)) relation, one finds slopes for the **IAE** and parametric models of 1.01 \(\pm\) 0.01 and 1.01 \(\pm\) 0.02, respectively.

Figure 20: Left Panel: Comparison of the observed T\({}_{X}\) and the best-fit T\({}_{\rm X,model}\) obtained with the non-parametric **IAE** and parametric Vikhlinin et al. (2006) models. Solid lines show the best fit to the data. We see that both our non-parametric and parametric approaches provide tight and accurate constraints on the average temperature of clusters.

#### 6.4.2 Estimation of derivatives

While non-parametric models offer greater flexibility in modelling complex patterns and relationships, one requires a large amount of data to accurately estimate derivatives. Small irregularities in the profiles often amplify the noise in the derivatives. Therefore, it is often desirable to apply some degree of smoothing to the profiles in order to obtain accurate derivatives in non-parametric approaches. As can be seen from Fig. 19, the reconstructed 3D temperature profiles from the **IAE** model have a reasonably smooth underlying structure.
We find that the direct computation of numerical derivatives of individual profiles derived from the MCMC chains using spline interpolation, without applying any smoothing, usually provides a good estimate of the logarithmic derivatives and the corresponding 1-\(\sigma\) interval. Nonetheless, we sometimes found the derivative estimates to be noisy, particularly beyond the 2D fitting range. This noise can be attributed to the logarithmic binning, which can create sparsity in the outer regions. Another potential cause is small spikes in the temperature profiles between consecutive radii, inherited by the model from the simulations themselves in the inner regions due to the limited resolution there. We therefore choose to apply very minimal smoothing, such that only the sharp discontinuities, if any (usually small in magnitude), on local scales (2-3 radial bins) are corrected, and the general non-linear structure is preserved. We use the algorithm developed by Cappellari et al. (2013), which implements the one-dimensional locally linear weighted regression of Cleveland (1979) (see footnote 7). It uses a tri-cube weighting function with weights \((1-u^{3})^{3}\), where \(u\) is the normalised distance from the local point R under consideration, and a smoothing parameter \(f\), which is the fraction of neighbourhood points to be considered in the local fit around R. Increasing the value of \(f\) increases the neighbourhood of influential points, leading to smoother profiles. For our case, we apply modest smoothing with \(f=0.15\); a minimal sketch of this smoothing step is given at the end of this Section.

Footnote 7: [https://pypi.org/project/loess/](https://pypi.org/project/loess/)

Figure 21 shows the corresponding logarithmic derivatives of the temperature profiles of the two clusters discussed in the previous sub-section. Here also, both the **IAE** and the parametric models produce consistent profiles. Furthermore, for the **IAE** model, the profiles obtained with and without applying the smoothing to the temperature profiles are consistent with each other. This can also be seen in the bottom panel, where the ratio between the reconstructed 3D temperature profiles with and without smoothing is seen to be less than 1% over most of the radial range.

Figure 21: Comparison of the logarithmic derivatives of the 3D temperature profiles of a typical NCC (PSZ2 G050.40+31.17) and CC (PSZ2 G057.92+27.64) cluster in the DR1 sample recovered with the **IAE** and parametric models. Solid lines and the associated shaded regions show the median and 1-\(\sigma\) dispersion obtained with MCMC. The region enclosed by the dashed lines represents the 1-\(\sigma\) dispersion if no smoothing is applied to the profiles derived from the MCMC chain. The bottom panel shows the ratio of the median 3D temperature profiles obtained using the **IAE** with and without smoothing.

Figure 22 shows the logarithmic derivatives of the 3D temperature profiles of the clusters in the DR1 sample obtained with the **IAE** model. In the bottom panel, we also show the difference in logarithmic derivatives derived from the **IAE** and parametric models (\(\Delta\)). We find that, although the dispersion in the difference increases with radius, the difference is consistent with zero throughout the radial range. While it is difficult to quantify this difference in the inner region, since the logarithmic derivatives are close to zero there, in the range [0.5-2] R\({}_{500}\) the difference in logarithmic derivatives between the **IAE** and the parametric model can be more than 20%. The impact of this on the total mass estimate is not straightforward, but is expected to be about 5%-30%.

Figure 22: Logarithmic derivatives of the 3D temperature profiles of the DR1 sample recovered with the **IAE** model. Also shown in the bottom panel is the difference between the profiles recovered with the **IAE** model and the parametric model. For better visibility, the error bars corresponding to the individual profiles are not shown. The black lines and grey shaded regions represent the median and 1-\(\sigma\) dispersion of the sample.
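For reference, the smoothing step described above can be written as a few lines of locally linear regression with tri-cube weights. This is a minimal, self-contained sketch rather than the loess package's actual implementation:

```python
import numpy as np

def loess_tricube(x, y, f=0.15):
    """Locally linear regression with tri-cube weights (1 - u^3)^3,
    following Cleveland (1979); f is the fraction of neighbouring points
    used in each local fit, as in the loess package of Cappellari et al.
    (2013). Minimal sketch, not the package's actual code."""
    n = len(x)
    k = max(int(f * n), 3)
    y_smooth = np.empty(n)
    for i in range(n):
        d = np.abs(x - x[i])
        idx = np.argsort(d)[:k]            # k nearest neighbours of x[i]
        u = d[idx] / d[idx].max()
        w = (1.0 - u**3) ** 3              # tri-cube weights
        # weighted linear fit around x[i]; polyfit squares its weights,
        # hence the sqrt
        p = np.polyfit(x[idx], y[idx], 1, w=np.sqrt(w))
        y_smooth[i] = np.polyval(p, x[i])
    return y_smooth
```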
## 7 Discussion and conclusions

Classical statistical modelling techniques can be sensitive to inaccuracies and may lead to poor performance if the data are complex (non-linear) and/or have a dynamic structure. Data-driven (model-agnostic) deep-learning techniques are now becoming increasingly popular. They make use of the topology to learn the underlying structure of the data, and have often been found to give superior performance in terms of accuracy and precision when the underlying structure of the data is non-linear. However, one typically requires a massive dataset and vast computational resources to train a neural network, limiting their applicability in some scenarios. In this paper, we demonstrate the first use of deep learning techniques to build a model of galaxy cluster temperature profiles and apply this model to the problem of temperature profile deprojection. Using a non-linear interpolatory scheme with five anchor points (temperature profiles) allows us to perform frugal learning with a sparse training set, and the neural network is able to uncover the lower dimensional non-linear manifold of the data by mapping between latent space and real space. The resulting Interpolatory Auto-Encoder (**IAE**) model is trained and evaluated in the radial range of [0.02-2] R\({}_{500}\) using a simulated dataset of 315 temperature profiles from the Three Hundred Project. We then implement a new deconvolution scheme using efficient and cost-effective learning-based regularisation to achieve a stable and accurate reconstruction of the 3D temperature profiles, by optimising the latent parameters (barycentric weights) of the anchor points using MCMC. Moreover, the deconvolution algorithm can easily be extended to include the instrumental PSF effect. We test the **IAE** with a set of different deconvolution schemes with respect to the resolution, projection, and quality of the data. We find that, in general, the **IAE** model can recover unbiased 3D temperature profiles in the fitting range. The performance of the **IAE** model in recovering the true temperature profiles can be summarised as follows:

* We first considered the simplest case, where we tested the efficiency of the **IAE** model in directly fitting the high resolution simulated 3D temperature profiles, defined in 48 fixed radial bins in the range [0.02-2] R\({}_{500}\), the resolution with which the **IAE** model is trained. We find that in this case the reconstruction of temperature profiles from the **IAE** model is robust, with the median fractional residuals centered around zero and a 1-\(\sigma\) dispersion (determined by the 16th and 84th percentile range of fractional residuals) of about \(\pm\)5% over most of the radial range. The dispersion in the outskirts is somewhat larger (about \(\pm\)10%). This can be interpreted as being due to the complex nature of the ICM as a result of the merging/accretion processes that are dominant there. Moreover, the dispersion in the fractional residuals for the sub-samples of the 20 most relaxed clusters (MR20) and most smooth temperature profiles (MS20) is about 35% smaller than for the sub-samples of the 20 most disturbed clusters (MD20) and most irregular temperature profiles (MI20). We find that the model fidelity can be further improved by increasing the number of anchor points in the **IAE** model. However, since observed temperature profiles are generally of much lower resolution, increasing the complexity of the model is undesirable, as it could lead to overfitting.
* We then considered a case where we fitted the high resolution simulated 2D temperature profiles to the **IAE** model using classical emission measure weights. Here too we find that the median fractional residual is centered around zero, with a 1-\(\sigma\) dispersion of about \(\pm\)5% over most of the radial range. In the first few innermost bins, however, we find that the dispersion increases to about \(\pm\)10%.
This is understandable, since the projection operation introduces a degeneracy in the 3D temperature profiles which is significant in the inner regions, i.e. the mapping between input 2D temperature profiles and **IAE** reconstructed 3D temperature profiles is not as tight as the mapping between input 3D temperature profiles and **IAE** reconstructed 3D temperature profiles. However, this degeneracy can be mitigated to a large extent in the observation-like cases, since the 2D temperature profiles in the inner bins have relatively smaller errors associated with them compared to the rest of the radial bins. Moreover, as in the previous case, the distribution of the fractional residuals over all radii for the MR20 (MS20) sub-sample is narrowly peaked compared to the MD20 (MI20) sub-sample.
* We next considered observation-like fitting cases, with typical temperature profile data quality such as would be obtained from the _XMM-Newton_ or _Chandra_ satellites. We first considered a case where we fit 2D temperature profiles defined at twelve radial points and up to R\({}_{500}\) only, mimicking the profile expected from moderately deep X-ray exposures. We find that in the 2D fitting range, i.e. [0.02-1] R\({}_{500}\), even with the relatively low resolution input 2D temperature profiles, the performance of the **IAE** model is only negligibly degraded. However, beyond R\({}_{500}\), where we do not consider any 2D data in the fit, the 1-\(\sigma\) dispersion in the 3D reconstruction increases with radius and becomes about \(\pm\)20% in the last bin. The 3D median fractional residual is found to be close to zero over most of the radial range, except beyond 1.5 R\({}_{500}\), where the profile is underestimated by about 7%. We also considered a case where we use only six 2D temperature data points in the fit, and find that the **IAE** is still able to provide an unbiased estimate of the reconstructed temperature profile, albeit with a slightly larger uncertainty.
* We considered a more realistic temperature-dependent spectroscopic-like weighting scheme (Mazzotta et al., 2004) in the deprojection. We find that there is a small bias of about 4% excess in the fractional residual in the innermost few bins, in addition to an underestimation in the outer regions, as in the previous case.
* We also compared the **IAE** model with a parametric temperature model. With the high resolution hydrodynamical simulated temperature profiles, the parametric model based on Vikhlinin et al. (2006) showed poor performance when the realistic error covariance matrix is ignored in the fit. Including the error covariance matrix improved the fit. The non-linearity and parameter degeneracy of the parametric model also contributed to its sub-optimal performance, making the 3D reconstruction dependent on the choice of priors. In contrast, the **IAE** model performed better, particularly in complex cases with ongoing merger shocks, demonstrating its superior adaptability to diverse data scenarios.
* Finally, in a first application to X-ray data, we built an augmented version of the **IAE** model in the radial range [0.005-2] R\({}_{500}\). The data augmentation was necessary because the simulated profiles did not have sufficient resolution to probe the very core regions that are accessible to good quality X-ray data. The augmentation step was achieved by extrapolating the simulated profiles to lower radii (below \(\approx\)0.02 R\({}_{500}\)) by fitting them to the Vikhlinin et al.
(2006) parametric model in the range \(\approx\) [0.02-0.5] R\({}_{500}\). We then used this updated **IAE** model to reconstruct the 3D temperature profiles and logarithmic derivatives of the representative (DR1) sample galaxy clusters drawn from the CHEX-MATE project. The resulting non-parametric **IAE** profiles were compared to those derived from parametric deprojection and deconvolution. We find that, in such observational cases where the typical number of annular data points is much smaller than in the simulations, the difference between the **IAE** and the parametric model is less than 10% over most of the observed region. However, in the inner and outer regions, the difference between them can be as high as 25%. Moreover, the results from the Vikhlinin et al. (2006) parametric model, especially in the inner and outer regions, depend on the priors chosen for the parameters, as the latter are very poorly constrained by the fit. It should be noted that the inner regions of the clusters, which involve processes such as AGN feeding/feedback, gas condensation, sloshing, etc., are complex and may not be accurately represented by current state-of-the-art cosmological simulations. Moreover, the augmentation of the central regions of the training set using the extrapolation of a parametric model could potentially introduce bias in the underlying model recovered from the **IAE**. Despite these limitations, we believe that the **IAE** model provides higher-fidelity results compared to traditional parametric modelling, as demonstrated in this study. As the size and quality of both X-ray observations and simulations are set to improve in the coming years, the robustness of the **IAE** will also be enhanced, resulting in a much lower scatter. Our future plan is to perform network training and testing on different sets of simulations so as to have a larger training and validation sample. This will potentially also help us to understand the systematics, if any, in the **IAE** model inherited from the particular set of numerical simulations used for training. For example, De Luca et al. (2021) showed that the dynamical state of clusters in the Three Hundred Project varies with redshift: the relaxed clusters decrease in number from redshift \(z=0\) to \(z=1\). It remains to be seen if issues such as possible redshift dependence have any impact on learning. This effect, in principle, can be taken into account by training the model using simulated clusters across a large redshift range. Another important step in improving the deconvolution scheme will be to force the neural network model to learn the features shared between simulations and real data using transfer/adversarial learning (Ganin et al., 2016). This will essentially mitigate the biases inherited by the neural network model from simulations. Moreover, we expect that, with an upgraded **IAE** model, the reconstruction of 3D temperature profiles beyond the observational range of R\({}_{500}\) will be significantly improved due to the increase in the size of the training sample. We further plan to implement a more robust model extrapolation technique in future work. The usefulness of the **IAE** is not limited to the estimation of the temperature of galaxy clusters. We further plan to use the **IAE** interpolatory technique to recover the underlying density, pressure and hence dark matter profiles in the galaxy clusters.
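As a concrete illustration of the latent-space reconstruction idea referenced above, the following minimal Python sketch parametrises a temperature profile by barycentric weights over a handful of anchor profiles and samples those weights with a plain Metropolis MCMC. It is a toy stand-in only: the anchor profiles, the noise level, and the log-space combination used as a "decoder" are illustrative assumptions, whereas the actual **IAE** decoder is a trained neural network and the data enter through projected (2D) spectroscopic profiles rather than the noisy 3D profile used here.

```python
import numpy as np

rng = np.random.default_rng(42)

# Radial grid (in units of R500) and five hypothetical anchor temperature
# profiles; in the real IAE the anchors are simulated 3D profiles and the
# decoder is a trained network, not this log-space barycentric combination.
r = np.logspace(np.log10(0.02), np.log10(2.0), 48)
anchors = np.array([t0 * (r / rc) ** -0.1 / (1 + (r / rc) ** 2) ** 0.3
                    for t0, rc in [(8, .4), (6, .3), (9, .6), (5, .5), (7, .2)]])

def decode(w):
    """Map (positive) barycentric weights to a 3D temperature profile."""
    w = np.asarray(w) / np.sum(w)                # normalise onto the simplex
    return np.exp(w @ np.log(anchors))           # geometric interpolation

# Mock "observed" profile: a noisy mixture of the anchors (5% errors).
w_true = np.array([0.4, 0.1, 0.2, 0.2, 0.1])
sigma = 0.05 * decode(w_true)
data = decode(w_true) + rng.normal(0.0, sigma)

def log_post(w):
    if np.any(w <= 0):
        return -np.inf                           # positivity prior
    return -0.5 * np.sum(((decode(w) - data) / sigma) ** 2)

# Plain Metropolis sampler over the latent weights (any MCMC engine would do;
# the step size 0.02 is an illustrative tuning choice).
w = np.full(5, 0.2)
lp = log_post(w)
chain = []
for _ in range(20000):
    prop = w + rng.normal(0.0, 0.02, size=5)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        w, lp = prop, lp_prop
    chain.append(w / w.sum())

chain = np.array(chain[5000:])                   # discard burn-in
print("posterior mean weights:", chain.mean(axis=0).round(3))
print("true weights          :", w_true)
```

In the observational setting, `decode(w)` would additionally be projected along the line of sight with emission-measure or spectroscopic-like (Mazzotta et al., 2004) weights, and convolved with the PSF, before comparison with the annular 2D data.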
An important extension of this interpolatory technique will be to train a neural network to estimate the total mass profiles of the galaxy clusters directly from the thermal profiles of the ICM without considering the hydrostatic equation. Another interesting prospect for our work will be to implement the deconvolution technique on SZ and lensing data, to recover a robust model of the galaxy clusters. This will further help us to understand the biases introduced in calibrating the mass and scaling relations for cosmological studies. Such studies might also be used to assess more robustly relative density/temperature fluctuations, hence constraining turbulence and related parameters (Mach number, injection scale, etc.). Our methodology can also be implemented in other areas of astrophysics and cosmology. In fact, the **IAE** scheme has already been implemented in a source separation algorithm to tackle physical hyper-spectral data (Gertosio et al., 2023). One of our immediate plans is to apply the proposed deconvolution technique to the most recent high quality CHEX-MATE X-ray sample of clusters (CHEX-MATE Collaboration, 2021), and compare to other approaches such as those used in Bartalucci et al. (2018) (semi-parametric reconstruction) and Eckert et al. (2022) (multi-scale non-parametric reconstruction). The comparison of the estimated logarithmic derivatives will be instructive since these are highly related to the shape of the mass profiles of clusters. Our ultimate goal will be to test the \(\Lambda\)CDM predictions on the total mass distribution in galaxy clusters using a new and sophisticated fully non-parametric approach. ###### Acknowledgements. The work of AI was supported by CNES, the French space agency. SE, LL and FG acknowledge the financial contribution from the contracts ASI-INAF Athena 2019-27-HH.0, "Attività di Studio per la comunità scientifica di Astrofisica Spaziale e Fisica Astroparticellare" (Accordo Attuativo ASI-INAF n. 2017-14-H.0), and from the European Union's Horizon 2020 Programme under the AHEAD2020 project (grant agreement n. 871158). This research was supported by the International Space Science Institute (ISSI) in Bern, through ISSI International Team project #565 (Multi-Wavelength Studies of the Culmination of Structure Formation in the Universe). MS acknowledges the financial contribution from contract ASI-INAF n. 2017-14-H.0, and from contract INAF mainstream project #1.05.01.86.10. EP acknowledges the financial support of CNRS/INSU and of CNES, the French space agency. MED acknowledges partial financial support from a NASA ADAP award/SAO subaward SV9-89010. MDP and AF acknowledge financial contribution from Sapienza Università di Roma, thanks to Progetti di Ricerca Medi 2020, RM12017283205BE2. AF thanks financial support from Universidad de La Laguna (ULL), NextGenerationEU/PRTR and Ministerio de Universidades (MIU) (UNI/551/2021) through grant "Margarita Salas". HB, DGL, and PM acknowledge support from the Spoke 3 Astrophysics and Cosmos Observations, National Recovery and Resilience Plan (Piano Nazionale di Ripresa e Resilienza, PNRR) Project ID CN_00000013 "Italian Research Center on High-Performance Computing, Big Data and Quantum Computing" funded by MUR Mission 4 Component 2 Investment 1.4: "Potenziamento strutture di ricerca e creazione di campioni nazionali di R&S (M4C2-19)" - Next Generation EU (NGEU), and from the European Union's Horizon 2020 Programme under the AHEAD2020 project (grant agreement n. 871158).
The authors would like to thank the reviewer for his/her careful, constructive and insightful comments in relation to this work.
2304.13184
Tensor network variational optimizations for real-time dynamics: application to the time-evolution of spin liquids
Within the Projected Entangled Pair State (PEPS) tensor network formalism, a simple update (SU) method has been used to investigate the time evolution of a two-dimensional U(1) critical spin-1/2 spin liquid under Hamiltonian quench [Phys. Rev. B 106, 195132 (2022)]. Here we introduce two different variational frameworks to describe the time dynamics of SU(2)-symmetric translationally-invariant PEPS, aiming to improve the accuracy. In one approach, after using a Trotter-Suzuki decomposition of the time evolution operator in terms of two-site elementary gates, one considers a single bond embedded in an environment approximated by a Corner Transfer Matrix Renormalization Group (CTMRG). A variational update of the two tensors on the bond is performed under the application of the elementary gate and then, after symmetrization of the site tensors, the environment is updated. In the second approach, a cluster optimization is performed on a finite (periodic) cluster, maximizing the overlap of the exact time-evolved state with a symmetric finite-size PEPS ansatz. Observables are then computed on the infinite lattice contracting the infinite-PEPS (iPEPS) by CTMRG. We show that the variational schemes outperform the SU method and remain accurate over a significant time interval before hitting the entanglement barrier. Studying the spectrum of the transfer matrix, we find that the asymptotic correlations are very well preserved under time evolution, including the critical nature of the singlet correlations, as expected from the Lieb-Robinson (LR) bound theorem. Consistently, the system (asymptotic) boundary is found to be described by the same Conformal Field Theory of central charge c = 1 during time evolution. We also compute the time-evolution of the short distance spin-spin correlations and estimate the LR velocity.
Ravi Teja Ponnaganti, Matthieu Mambrini, Didier Poilblanc
2023-04-25T22:41:00Z
http://arxiv.org/abs/2304.13184v3
**Tensor network variational optimizations for real-time dynamics: application to the time-evolution of spin liquids** ## Abstract Within the Projected Entangled Pair State (PEPS) tensor network formalism, a simple update (SU) method has been used to investigate the time evolution of a two-dimensional U(1) critical spin-1/2 spin liquid under Hamiltonian quench [Phys. Rev. B **106**, 195132 (2022)]. Here we introduce two different variational frameworks to describe the time dynamics of SU(2)-symmetric translationally-invariant PEPS, aiming to improve the accuracy. In one approach, after using a Trotter-Suzuki decomposition of the time evolution operator in terms of two-site elementary gates, one considers a single bond embedded in an environment approximated by a Corner Transfer Matrix Renormalization Group (CTMRG). A variational update of the two tensors on the bond is performed under the application of the elementary gate and then, after symmetrization of the site tensors, the environment is updated. In the second approach, a cluster optimization is performed on a finite (periodic) cluster, maximizing the overlap of the exact time-evolved state with a symmetric finite-size PEPS ansatz. Observables are then computed on the infinite lattice contracting the infinite-PEPS (iPEPS) by CTMRG. We show that the variational schemes outperform the SU method and remain accurate over a significant time interval before hitting the entanglement barrier. Studying the spectrum of the transfer matrix, we find that the asymptotic correlations are very well preserved under time evolution, including the critical nature of the singlet correlations, as expected from the Lieb-Robinson (LR) bound theorem. We also compute the time-evolution of the short distance spin-spin correlations and estimate the LR velocity. ###### Contents * 1 Introduction * 2 Methods * 2.1 Symmetric PEPS ansatz and summary of SU results * 2.2 Embedded-bond variational optimization * 2.2.1 Trotter Suzuki decomposition * 2.2.2 Bond optimization * 2.2.3 Symmetrization * 2.3 Cluster Variational Optimization * 2.3.1 Optimization * 2.3.2 Physical observables * 3 Results * 3.1 Energy density * 3.2 Transfer matrix spectrum and asymptotic properties * 3.2.1 Lieb-Robinson bound * 3.2.2 Transfer matrix spectrum * 3.2.3 Criticality * 3.2.4 Finite correlation lengths * 3.3 Finite distance correlations * 4 Conclusions * A Transfer matrix spectra in the CVO methods * B Short-time expansion of the spin-spin correlations ## 1 Introduction The search for spin liquids in condensed matter materials is a very rapidly developing area of quantum magnetism [1]. In a classical magnetic system, the magnetic moments of the constituent particles align to form a well-defined pattern or order, such as ferromagnetism or antiferromagnetism. In contrast, in a spin liquid, the magnetic moments are highly entangled and do not exhibit any long-range order, despite being at low temperatures. This gives rise to a state of matter that is neither a solid, liquid, nor gas, but rather a "quantum spin liquid." The entangled magnetic moments in a spin liquid are "frustrated," meaning that they are unable to achieve their lowest energy state due to the geometry of the system. This leads to a highly degenerate manifold of many possible configurations of the magnetic moments, as in the prototypical Resonating Valence Bond (RVB) state proposed by Anderson [2], which shows no symmetry breaking, even down to zero temperature, due to enhanced zero-point quantum fluctuations.
The highly entangled nature of the spin liquids leads to unique exotic properties [3, 4, 5] such as fractionalization of excitations, emergent gauge fields, topological properties, etc. Spin liquids hold promise for applications in quantum computing and information processing [1, 6, 7]. However, they are also challenging to study experimentally due to their elusive and highly entangled nature [8, 9]. A quantum quench refers to a sudden change in the parameters of the quantum system, leading to a rapid and non-equilibrium evolution of the system. Quantum quenches have been studied extensively in recent years [10, 11, 12, 13, 14, 15], both theoretically and experimentally, because they provide a way to probe the non-equilibrium dynamics of quantum systems. They are also relevant to a wide range of physical systems, including condensed matter systems, ultracold atomic gases, and quantum field theories. Nowadays, quantum simulators based on cold atoms on two-dimensional (2D) optical lattices are being realized experimentally with the realistic perspective of emulating simple models of condensed matter physics [16, 17, 18, 19, 20]. Rydberg atom platforms are also used to realize dimer liquids or spin liquids [21, 22]. It is therefore necessary to develop new theoretical tools to compute faithfully the time evolution of 2D spin systems in order to address e.g. adiabatic evolutions, quench or Floquet dynamics in various experimental set-ups. In the following we shall address the quantum dynamics of an RVB spin liquid after a quantum quench. Here the quench will be implemented by a sudden change in the Hamiltonian that governs the system's behavior, leading to a rapid change in the system state. Then, the system is driven out of its equilibrium state, and the time evolution of the system becomes very complex and difficult to predict. The variational methods we describe here apply to simple quench set-ups starting from an initial quantum state \(\ket{\Psi_{0}}\) preserving lattice and SU(2)-rotation symmetries, namely a spin liquid. Here, as a simple example, we shall consider a nearest neighbor (NN) Resonating Valence Bond (RVB) spin liquid on an infinite square lattice. The NN RVB state consists of resonating NN singlets and is a special point of an enlarged spin liquid family [4] including longer-range singlet bonds as well. When only NN bonds are present, the RVB state shows long-range dimer correlations originating from a local U(1) gauge symmetry. Recent work [4, 23] suggested that topological order appears immediately whenever longer-range singlets are present, breaking the U(1) gauge symmetry to \(\mathbb{Z}_{2}\). Our methods deal with quench Hamiltonians which preserve both lattice and spin symmetries. Such symmetries in the initial state and in the quench Hamiltonian will be used explicitly in the procedure. However, the preservation of the U(1) gauge symmetry depends on the method used, as discussed later. To illustrate the methods we shall consider a simple quench protocol, \[\mathcal{H}(t)=\left\{\begin{array}{ll}0,&\text{for }t\leq 0\\ H=J\sum_{\left\langle x,y\right\rangle}\mathbf{S}_{x}\cdot\mathbf{S}_{y},& \text{for }t>0,\end{array}\right. \tag{1}\] where \(H\) is the NN Heisenberg model on the square lattice, and take advantage of the small entanglement of our initial state as well as the full lattice and spin symmetries to study the time evolution \(\ket{\Psi(t)}=\exp{(-iHt)}\ket{\Psi_{0}}\) over a small time \(t>0\) interval.
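For readers who wish to experiment with the quench protocol of Eq. (1), the sketch below builds the NN Heisenberg Hamiltonian on a small periodic cluster as a sparse matrix over bit-encoded \(S=1/2\) configurations. This is generic exact-diagonalisation boilerplate written for this illustration (not the authors' code); the \(4\times 4\) torus used later in the paper fits in memory this way, though building it takes a little while in pure Python.

```python
import numpy as np
from scipy.sparse import lil_matrix, csr_matrix
from scipy.sparse.linalg import eigsh

def heisenberg_torus(Lx, Ly, J=1.0):
    """Sparse H = J sum_<x,y> S_x . S_y on an Lx x Ly torus.
    Basis states are bit strings: bit i set <=> spin up on site i = x + Lx*y.
    Periodic bonds that coincide (e.g. for Lx = 2) are counted once."""
    n = Lx * Ly
    bonds = set()
    for x in range(Lx):
        for y in range(Ly):
            i = x + Lx * y
            bonds.add(tuple(sorted((i, (x + 1) % Lx + Lx * y))))    # horizontal
            bonds.add(tuple(sorted((i, x + Lx * ((y + 1) % Ly)))))  # vertical
    dim = 1 << n
    H = lil_matrix((dim, dim))
    for s in range(dim):
        for (i, j) in bonds:
            if ((s >> i) & 1) == ((s >> j) & 1):
                H[s, s] += 0.25 * J                        # Sz Sz, aligned
            else:
                H[s, s] -= 0.25 * J                        # Sz Sz, anti-aligned
                H[s ^ (1 << i) ^ (1 << j), s] += 0.5 * J   # (S+S- + S-S+)/2
    return csr_matrix(H)

H = heisenberg_torus(2, 3)                                 # 64-dimensional toy
print(eigsh(H, k=1, which='SA', return_eigenvectors=False))  # ground-state energy
```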
Note that, hereafter, the time \(t\) will be expressed in units of the inverse-coupling \(1/J\). In recent years, progress has been made in developing 2D tensor network methods for real and imaginary time evolution. In particular, Projected Entangled Pair States (PEPS) on the infinite lattice (infinite-PEPS or iPEPS) have been used to study the dynamics in the 2D quantum Ising model after a quench from a fully polarized (product) state [24, 25], with the goal of (approximately) maximizing the overlap of the PEPS with the exact time-evolved state. Very recently, in order to go beyond the previous simplified schemes, the optimization was performed in a tangent space of the iPEPS variational manifold [26]. Our current developments follow the same conceptual ideas but take into account explicitly all the symmetries in the problem to improve the accuracy and efficiency of the variational optimization scheme. Note also that our initial state is a correlated entangled state, in contrast to most studies starting from a product state. The new schemes will be compared to the simplest Simple Update (SU) scheme, which will serve later on as a reference. ## 2 Methods ### 2.1 Symmetric PEPS ansatz and summary of SU results In a recent paper [27] we used the SU method to investigate the time evolution of the NN RVB state under the Hamiltonian quench defined in Eq. (1) using symmetric PEPS. The method is based on: (i) a classification of SU(2) invariant and \(C_{4v}\)-symmetric tensors [28], (ii) the identification of a tensor manifold (PEPS ansatz) shown to be relevant to capture the quench dynamics at small times, (iii) an SU procedure allowing one to compute the time evolution within the PEPS manifold defined by the site tensor class. In this section, we briefly recall these three steps. For a more complete description the reader may refer to reference [27]. _Symmetric tensors classification_ - Both the initial state and the Hamiltonian governing the dynamics of our problem are SU(2) symmetric and transform according to the trivial representation of the square lattice point group \(C_{4v}\). It is therefore crucial to enforce these key properties at every step of our scheme. The trivial representation of \(C_{4v}\) is simply obtained by choosing a uniform site tensor ansatz on the lattice. As explained in detail in Ref. [28], the continuous SU(2) symmetry of PEPS can be implemented at the level of the local tensors by imposing that the \(D\)-dimensional subspace of each virtual leg has the structure of a reducible representation of SU(2), namely a direct sum of SU(2) irreducible representations. As an example, the NN RVB state has a simple representation using \(\nu=0\oplus 1/2\) virtual bonds (\(D=3\)). More generally for any given \(\nu\), it is possible to classify tensors and define manifolds in which all tensors are linearly independent and mutually orthogonal. Working in a given manifold not only allows one to fix the PEPS symmetry but also greatly reduces the number of parameters describing the PEPS family. This last point is a major advantage for computations based on optimization. Figure 1: Embedded-bond variational optimization (EBVO): the overlaps \(\Omega_{\alpha}\), \(\alpha=0,1,2\), are obtained by embedding the 2-site double layer tensors \(\omega_{\alpha}\) (generically represented as a light blue parallelogram) in a fixed-point environment (in yellow) obtained by CTMRG. \(\omega_{1}\) includes the unitary (two-site) gate. _Local tensor ansatz and SU method_ - As explained in Ref.
[27] and recalled in subsection 2.2.1, the time evolution unitary operator is split using a Trotter-Suzuki (TS) decomposition into a product of 2-site unitary gates. Once applied to two neighboring tensors, the virtual bond dimension is no longer \(D=3\) but still has the structure of an SU(2) reducible representation. In order to elucidate the SU(2) content of this updated virtual bond, this 2-site object can be reinterpreted as a _symmetric complex_ matrix that, in turn, has to be reduced. To keep a uniform PEPS representation (i.e. with the same tensor on every lattice site), a symmetric reduction is required. This leads to a technical difficulty: the non-Hermiticity of the matrix prevents diagonalization, and on the other hand conventional SVD leads to a non-uniform representation. This can be circumvented by using a symmetric SVD (Autonne-Takagi decomposition [29, 30, 31]). For a small TS time step, the analysis reveals that the relevant virtual subspace [27] is \(0\oplus 1/2\oplus 1\), which corresponds to \(D=6\). This subspace defines an 11-dimensional \(C_{4v}\)-symmetric tensor manifold that generalizes and includes the initial \(D=3\) tensor manifold. Due to SU(2) fusion rules the number of half-integer spins hosted on the four virtual bonds has to be odd. In this case, only tensors with one or three \(1/2\) spins are allowed. This defines a \(\mathbb{Z}_{2}\) symmetry. Interestingly, the dynamics computed in the SU scheme [27] selects an 8-tensor submanifold corresponding to a U(1) gauge symmetry where the total number of \(1/2\) spins hosted on the four virtual bonds is conserved and fixed to one. This fact can be understood from a Projected Entangled Pair Operator (PEPO) representation of the 2-site Heisenberg gate whose virtual bond (\(D=4\)) has the structure \(0\oplus 1\) involving only integer spins. Such a gate cannot change the integer (resp. half-integer) spin nature of the updated bond, and hence conserves the number of integer (resp. half-integer) spins hosted by the site tensor virtual bonds. ### 2.2 Embedded-bond variational optimization #### 2.2.1 Trotter Suzuki decomposition In the embedded-bond variational optimization (EBVO) scheme we start, as for the simple update (SU) scheme, from a usual Trotter-Suzuki (TS) decomposition [32, 33] of the time evolution operator \(\exp{(-iHt)}\) in terms of elementary gates \[\exp{(-iHt)} = \prod_{1}^{N_{\tau}}\exp{(-iH\tau)} \simeq \prod_{1}^{N_{\tau}}\mathcal{G}^{D}(\tau)\mathcal{G}^{C}(\tau)\mathcal{G}^{B}(\tau)\mathcal{G}^{A}(\tau) \tag{2}\] where \(\tau=t/N_{\tau}\) is a small time step (\(\tau\ll 1\)) and the Heisenberg Hamiltonian has been split into four parts, \(H=H^{A}+H^{B}+H^{C}+H^{D}\), each acting on one of the four staggered configurations of _disconnected_ horizontal or vertical bonds labelled by \(\alpha=A,B,C,D\). Note that the second identity involves the standard systematic TS error vanishing in the limit \(\tau\to 0\). The four unitaries \(\mathcal{G}^{\alpha}(\tau)\) can then be naturally decomposed in terms of commuting two-site gates acting on nearest-neighbor (NN) \(\langle x,y\rangle\) bonds, \[\mathcal{G}^{\alpha}(\tau) = \exp{(-iH^{\alpha}\tau)} = \prod_{\langle\mathrm{x},\mathrm{y}\rangle\in\mathrm{C}_{\alpha}}\mathcal{G}^{\alpha}_{xy}(\tau)\,, \tag{3}\] where \(\mathcal{G}^{\alpha}_{xy}(\tau)=\exp{(-iH^{\alpha}_{xy}\tau)}\). In the current method, at every time step \(t\), we focus on a particular \(\langle x,y\rangle\) bond (e.g.
on the \(A\) staggered bond configuration) and update the coefficients \(\mu_{a}(t)\rightarrow\mu_{a}(t+\tau)\) of the tensors on sites \(x\) and \(y\) under the action of \(\mathcal{G}^{A}_{xy}(\tau)\), defining two new tensors \(\mathcal{A}^{\prime}\) and \(\mathcal{A}^{\prime\prime}\) on sites \(x\) and \(y\), respectively, related by \(180^{\circ}\) rotation. To do so one explicitly takes into account the environment of the infinite lattice around the active bond, introducing some non-locality (in contrast to SU), using the optimization algorithm described below. #### 2.2.2 Bond optimization To update the \(\mathcal{A}\) tensor one defines fidelities, i.e. overlaps between ket and bra states, as \(\Omega_{0}=\langle\Psi_{\mathcal{A}}|\Psi_{\mathcal{A}}\rangle\), \(\Omega_{1}=\langle\Psi_{\mathcal{A}^{\prime}}|\mathcal{G}^{A}_{xy}(\tau)|\Psi_{\mathcal{A}}\rangle\) and \(\Omega_{2}=\langle\Psi_{\mathcal{A}^{\prime}}|\Psi_{\mathcal{A}^{\prime}}\rangle\), depicted in Fig. 1. Here the two \(\mathcal{A}^{\prime}\) tensors on sites \(x\) and \(y\) are SU(2)-symmetric tensors exhibiting mirror symmetry w.r.t. the \(xy\) axis (\(C_{s}\subset C_{4v}\) point group) and related by inversion symmetry w.r.t. the bond center. Outside of the active 2-site region, all fidelities involve the same uniform tensor network of on-site \(C_{4v}\)-symmetric double-layer \(\mathcal{A}^{\dagger}\mathcal{A}\) tensor contracted over physical degrees of freedom. This approximation is very well justified when the tensor \(\mathcal{A}^{\prime}\) is close to the exact solution realizing the two-site evolution. We have used a Corner Transfer Matrix Renormalization Group (CTMRG) [34, 35, 36, 37], more specifically its single-site symmetric version [28], to contract the network around the active bond, resulting in a converged (so-called "fixed-point") SU(2)-symmetric environment of adjustable bond dimension \(\chi=\chi_{\rm opt}\). Note however that the corner transfer matrix \(\mathcal{C}_{\rm TM}\) is here a (non-Hermitian) complex symmetric matrix so that, instead of the usual SVD decomposition, an orthogonal factorization is used, \[\mathcal{C}_{\rm TM}=OWO^{T}, \tag{4}\] where \(O\) is a (non-unitary) complex orthogonal matrix (\(OO^{T}=\rm I_{d}\)) and \(W\) is a complex (eigenvalue) diagonal matrix (see details in Ref. [27]). Interestingly, the symmetric character of the transfer matrix is preserved under truncation \(\mathcal{C}_{\rm trunc}=OW_{\rm trunc}O^{T}\), keeping in \(W_{\rm trunc}\) the \(\chi\) largest (in modulus) eigenvalues of \(W\). Using a conjugate gradient method, the new \(C_{s}\)-symmetric tensor \(\mathcal{A}^{\prime}\) is obtained by maximizing (following the lines of the variational optimization scheme [38, 39]) the normalized fidelity \(\omega=\Omega_{1}/(\Omega_{0}\Omega_{2})^{1/2}\) over the parameters \(\{\mu^{\prime}_{a}\}\) defining its expansion \(\mathcal{A}^{\prime}=\sum_{a=1}^{P}\mu^{\prime}_{a}T^{\prime}_{a}\) in terms of the \(P\simeq 4M\) elements of the \(C_{s}\)-symmetric tensor basis \(\{T^{\prime}_{a}\}\) (a short numerical illustration of the factorization (4) is given just below). #### 2.2.3 Symmetrization Since the NN bonds of configuration \(A\) are disconnected, the same increment of the tensors can be performed on all the A bonds simultaneously.
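Before detailing the symmetrization, the factorization of Eq. (4) deserves a concrete illustration. A minimal sketch is given below, assuming a non-degenerate, non-defective spectrum: for a complex symmetric matrix, eigenvectors belonging to distinct eigenvalues are orthogonal with respect to the bilinear product \(v^{T}w\) (not the Hermitian one), so normalising each eigenvector by \(\sqrt{v^{T}v}\) yields the complex orthogonal \(O\). This is a generic linear-algebra illustration, not the authors' implementation.

```python
import numpy as np

def sym_eig_factor(C):
    """Factor a complex symmetric C as C = O W O^T with O complex orthogonal
    (O O^T = Id). Assumes a non-degenerate, non-defective spectrum."""
    w, V = np.linalg.eig(C)
    # columns are bilinear-orthogonal; normalise by sqrt(v^T v) (complex sqrt)
    O = V / np.sqrt(np.einsum('ij,ij->j', V, V))
    return w, O

def truncate(C, chi):
    """Keep the chi largest-modulus eigenvalues, preserving the symmetric
    character of C, as in Eq. (4): C_trunc = O W_trunc O^T."""
    w, O = sym_eig_factor(C)
    keep = np.argsort(-np.abs(w))[:chi]
    return O[:, keep] @ np.diag(w[keep]) @ O[:, keep].T

# self-check on a random complex symmetric matrix
rng = np.random.default_rng(0)
A = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
C = A + A.T
w, O = sym_eig_factor(C)
print(np.allclose(O @ np.diag(w) @ O.T, C))   # reconstruction: True
print(np.allclose(O.T @ O, np.eye(8)))        # complex orthogonality: True
```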
The action of the 3 remaining sets of gates \(\mathcal{G}^{B}_{xz}(\tau)\), \(\mathcal{G}^{C}_{xu}(\tau)\) and \(\mathcal{G}^{D}_{xv}(\tau)\) leads approximately to the same increment of the tensor \(\mathcal{A}\) at site \(x\) along the \(xz\), \(xu\) and \(xv\) bonds obtained by 90-, 180- and 270-degree rotations of the original \(xy\) bond with respect to \(x\), respectively. More precisely, we can write for the \(\mathcal{A}^{\prime}\) tensor on site \(x\): \[\mathcal{A}^{\prime} = \mathcal{A}^{\prime}_{\parallel}+\delta\mathcal{A},\quad\text{where}\quad\mathcal{A}^{\prime}_{\parallel}(t)=\frac{(\mathcal{A}|\mathcal{A}^{\prime})}{(\mathcal{A}|\mathcal{A})}\,\mathcal{A}(t), \tag{5}\] and \((\mathcal{A}|\mathcal{B})\) is the scalar product of \(\mathcal{A}\) and \(\mathcal{B}\) tensors (contracting over both physical and virtual links). The updated \(C_{4v}\)-symmetric on-site tensor then becomes \[\mathcal{A}(t+\tau)\simeq\mathcal{A^{\prime}}_{\parallel}(t)+\sum_{n=0}^{3}R^{n}(\delta\mathcal{A}), \tag{6}\] where \(R\) is the 90-degree rotation of the local tensor (and \(R^{0}=Id\)). The same procedure applied to the \(\mathcal{A}^{\prime\prime}\) tensor on site \(y\) leads to the same tensor so that the updated PEPS remains uniform and symmetric. Note that, to compute observables with the optimized site tensor, a different CTMRG environment can be used with a larger bond dimension \(\chi\). Before moving further to the second variational optimization scheme, it is instructive to discuss the possible sources of error in this method. i) As for all other methods discussed here, the fundamental limitation is the truncation of the PEPS ansatz to \(D=6\), which cannot accommodate the rapid growth of the entanglement of the wavefunction beyond some typical time; one would then hit the "entanglement barrier". ii) Another source of error is the optimization procedure of the local tensor which, although variational, is done locally. In other words, despite the fact that information about the whole system is included via the environment, the EBVO is intrinsically a local update, the optimization being carried out at _fixed_ environment. iii) Lastly, a significant source of error may also come from the simplified symmetrization procedure (6). ### 2.3 Cluster Variational Optimization #### 2.3.1 Optimization We now move to the Cluster Variational Optimization (Cluster VO or CVO) algorithm. The idea is to consider a finite periodic cluster and optimize the local tensor of a uniform PEPS defined on that cluster to get the largest fidelity with the time-evolved state. In fact, for a small-size cluster like a 16-site \(4\times 4\) torus, the time-evolved state \(\ket{\Psi(t)}\), \(t>0\), can be obtained exactly (i.e. up to machine precision) by a series expansion \(\sum_{n=0}^{n_{\text{max}}}\frac{(-it)^{n}}{n!}\ket{\Psi_{n}}\), where \(\ket{\Psi_{n}}=H^{n}\ket{\Psi_{0}}\), which converges rapidly as a function of \(n_{\text{max}}\). Note that, since the time-evolved state is a spin singlet, the calculation can be performed in the reduced \(S_{Z}=0\) sector. Note also that the calculation is made simple using the recurrence relation \(\ket{\Psi_{n}}=H\ket{\Psi_{n-1}}\) (i.e. avoiding computing the operators \(H^{n}\)).
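This exact short-time evolution reduces to a few lines of code. The sketch below reuses the `heisenberg_torus` builder from the snippet in the Introduction (any sparse Hermitian Hamiltonian and normalised start vector would do) and implements the truncated series with the recurrence \(\ket{\Psi_{n}}=H\ket{\Psi_{n-1}}\), adding terms until they drop below a tolerance.

```python
import numpy as np
# assumes heisenberg_torus() from the earlier sketch is in scope

def evolve_series(H, psi0, t, tol=1e-12, n_max=200):
    """|psi(t)> = sum_n (-i t)^n / n!  H^n |psi0>, accumulated term by term
    via the recurrence |psi_n> = H |psi_{n-1}> (the operators H^n are never
    formed explicitly)."""
    psi = psi0.astype(complex).copy()
    term = psi0.astype(complex).copy()
    for n in range(1, n_max):
        term = (H @ term) * (-1j * t / n)    # builds (-it)^n/n! H^n |psi0>
        psi += term
        if np.linalg.norm(term) < tol:
            return psi
    raise RuntimeError("series not converged; reduce t or increase n_max")

H = heisenberg_torus(2, 3)                          # 64-dimensional toy cluster
psi0 = np.zeros(H.shape[0]); psi0[0b010101] = 1.0   # a simple product state
psi_t = evolve_series(H, psi0, t=0.5)
print(np.linalg.norm(psi_t))                  # unitarity: stays 1 up to ~tol
print(np.vdot(psi_t, H @ psi_t).real)         # equals <psi0|H|psi0>: energy conserved
```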
To compute the overlap \(\Omega_{\mathcal{A}}(t)=\big{|}\big{\langle}\Psi(t)\ket{\Psi_{\text{PEPS}}(\mathcal{A})}\big{\rangle}\) one needs to first compute the PEPS \(\ket{\Psi_{\text{PEPS}}(\mathcal{A})}\) expressed in the \(S_{Z}=0\) basis (of dimension 12 870 for the 16-site cluster). The optimization of \(\Omega_{\mathcal{A}}(t)\) with respect to the coefficients of \(\mathcal{A}\) is done using a conjugate-gradient method which requires recomputing \(\ket{\Psi_{\text{PEPS}}(\mathcal{A})}\) and \(\ket{\Psi_{\text{PEPS}}(\mathcal{A}+\partial\mathcal{A})}\) (to get the numerical gradient of the overlap) along the minimization path in the multi-dimensional parameter space. From this procedure, one eventually gets the optimized tensor \(\mathcal{A}^{*}(t)\) and hence the corresponding optimized PEPS \(\ket{\Psi_{\text{PEPS}}(\mathcal{A}^{*})}\). Fig. 2 shows the infidelity \(\mathcal{I}(t)=1-\Omega_{\mathcal{A}}(t)\) defined by the deviation from 1 of the overlap of the optimized PEPS with the exact time-evolved state. Physically \(\mathcal{I}(t)\) can be seen as a quantitative measure of the "distance" of the best PEPS from the exact time-evolved state. This quantity should be compared to a reference \(\mathcal{I}_{0}(t)\) obtained by replacing the optimized PEPS by the initial state, i.e. \(\mathcal{I}_{0}(t)=1-\big{|}\big{\langle}\exp\left(-iHt\right)\big{\rangle}_{\text{RVB}}\big{|}\). A fit of the numerical data shows that \(\mathcal{I}_{0}(t)\simeq\exp\left(-Ct^{-2}\right)\), becoming exponentially small at small time. Nevertheless, the Cluster VO enables one to gain several orders of magnitude in \(\mathcal{I}(t)\), e.g. for \(t=0.1\) we get \(\mathcal{I}(t)\sim 10^{-6}\,\mathcal{I}_{0}(t)\). In fact we expect the optimization to become exact (on the cluster) in the \(t\to 0\) limit, meaning \(\mathcal{I}(t)/\mathcal{I}_{0}(t)\to 0\). At the same time finite-size effects (FSE) should also disappear asymptotically since the Lieb-Robinson theorem [40] states that, after the quench, correlations propagate with a bounded velocity. So we believe the cluster method becomes conceptually exact in the small-time limit. For increasing time, the optimization deteriorates a bit (as expected since the entanglement of the time-evolved state grows) but still remains quantitatively very good, e.g. we get \(\mathcal{I}(t)\sim 10^{-3}\,\mathcal{I}_{0}(t)\) for \(t=0.5\). For comparison, we also show the infidelity for finite-size PEPS on the \(4\times 4\) torus using the site tensors obtained in the SU and EBVO methods (with the same virtual space). The infidelity of the SU PEPS is clearly a few orders of magnitude higher than that of the CVO. The EBVO is doing better than SU for \(t>0.15\) but still with an infidelity an order of magnitude (i.e. roughly \(\times 10\)) higher than the CVO for \(t\sim 0.4-0.5\). However, this does not necessarily imply any hierarchy in the relevance of the various frameworks for the _infinite lattice_ since, from this perspective, the CVO is subject to finite-size effects. Note that the CVO has been performed within two different PEPS manifolds using either U(1) or \(\mathbb{Z}_{2}\) gauge-symmetric tensors. It has been argued (using the TS decomposition) that, in the thermodynamic limit, the U(1)-gauge symmetry of the initial state is preserved during time evolution provided the quench Hamiltonian only acts on NN bonds. Figure 2: Infidelity (i.e. deviation from 1 of the overlap with the exact time-evolved state) in log scale of various finite-PEPS on a \(4\times 4\) cluster, as a function of time \(t\). The reference \(\mathcal{I}_{0}(t)\) (green open circles) is fitted using \(\exp{(-Ct^{-2})}\) (dashed green curve). The Cluster VO has been performed within the U(1) and the enlarged \(\mathbb{Z}_{2}\) PEPS manifolds (filled circles). The EBVO finite-PEPS has been obtained using the U(1) site tensor optimized on the infinite lattice with \(\chi=D^{2}\) and \(\tau=0.025\).
The \(\mathbb{Z}_{2}\) gauge-symmetric PEPS are obtained by adding 3 extra tensors to the U(1) local tensor basis. Hence, the \(\mathbb{Z}_{2}\) manifold is larger and includes the U(1) manifold, and we expect the optimization to give a larger overlap (smaller infidelity). We remind however that, in the SU and EBVO frameworks, the weights of the 3 additional tensors were systematically found to vanish [27]. In contrast, Fig. 2 shows that, on a finite cluster, enlarging the PEPS manifold from U(1) to \(\mathbb{Z}_{2}\) always leads, after optimization, to a larger overlap (or smaller infidelity) with the exact finite-size time-evolved state. Whether this is a finite-size effect or whether the \(\mathbb{Z}_{2}\) tensors are also relevant in the thermodynamic limit shall be discussed later on. The two procedures will be denoted as U(1)-CVO and \(\mathbb{Z}_{2}\)-CVO in the following, for convenience. #### 2.3.2 Physical observables The above cluster optimization scheme provides a site tensor which can be used to construct a translationally invariant iPEPS. Using the CTMRG algorithm described above, this enables the computation of various physical quantities on the infinite lattice (such as the energy density or the entanglement entropy) to be compared directly to the two other methods. As for all methods, the CVO is fundamentally limited by the truncation of the PEPS ansatz to \(D=6\) whenever hitting the "entanglement barrier" after some typical time. Another source of error is the finite-size effects, which limit the accuracy of the optimization procedure. The computation of the observables however is not subject to further FSE since it is performed in the thermodynamic limit using the optimized U(1) or \(\mathbb{Z}_{2}\) site tensors. ## 3 Results ### 3.1 Energy density We now turn to the results obtained by using the two optimization methods, which we would like to compare to our previous SU results. The evolution of our closed system is unitary so that its energy should be a constant of motion, hence offering a simple test of the accuracy of our procedures. The energy deviation (per site) w.r.t. its initial \(t=0\) value plotted in Fig. 3 shows that the EBVO scheme provides a very significant improvement w.r.t. the simple SU scheme. However such an improvement is obtained at the price of being more expensive in computer time. Also we observe that the symmetric CTMRG (used to compute the environment at each TS step) becomes unstable (or is subject to small spontaneous SU(2)-symmetry breaking) above some characteristic time, \(t\gtrsim 0.55\) (\(t\gtrsim 0.31\)) for \(\chi_{\rm opt}=36\) (\(\chi_{\rm opt}=72\)). Hereafter, EBVO computations will be done using \(\tau=0.025\), \(\chi_{\rm opt}=36\) and \(\chi=144\), providing the best results. Let us now move to the CVO results, also reported in Fig. 3. Here we have performed optimizations within the \(4\times 4\) PEPS manifolds constructed from both U(1) and \(\mathbb{Z}_{2}\) classes of site tensors. From the optimized site tensor one can then construct the iPEPS to compute the energy density in the thermodynamic limit.
We find that the deviation of the energy density w.r.t. its initial value is remarkably small both in the U(1)-CVO and \(\mathbb{Z}_{2}\)-CVO procedures, but with a different sign. While the optimization itself is always well behaved, the symmetric CTMRG using the optimized site tensor to compute observables seems again to become unstable beyond some typical time of order \(t\sim 0.55\). In any case, we find that all our variational methods show a much smaller energy deviation compared to the SU procedure, a first sign pointing to a better accuracy. ### 3.2 Transfer matrix spectrum and asymptotic properties In this subsection we investigate the long-distance \(r\gg 1\) asymptotic limit. We argue that physical properties (like correlations) should not be modified in this limit, i.e. outside of some "light cone", during time evolution. We confront this statement with our calculations, using the transfer matrix (TM) as a tool to access the asymptotic limit. We show that the deviations from the expected time-invariance of the asymptotic correlations remain very small in the VO methods. #### 3.2.1 Lieb-Robinson bound It is known from Lieb and Robinson's work [40] that, after the quench, the rate at which the information can propagate is bounded by the Lieb-Robinson (LR) velocity \(v_{\rm LR}\), providing the notion of a light-cone. As a consequence, there is a finite speed at which correlations and entanglement can be distributed [41, 42, 43]. These notions have been extended to the case where the initial state has power-law decaying correlations [44, 45]. The existence of a LR bound means that, after a finite time \(t\), any correlation function \(C(r,t)\) in the time-evolved state \(|\Psi(t)\rangle\) can only be modified (in comparison to the \(t=0\) initial state) at finite distance \(r\lesssim v_{\rm LR}t\), apart from exponentially small tails. It means that the correlation \(C(r,t)\) should retain its character, critical or short-range. In the second case, the finite correlation length characterizing the asymptotic behavior \(r\gg v_{\rm LR}t\) should be time-independent. Figure 3: Energy difference (w.r.t. the \(t=0\) initial value) in the thermodynamic limit obtained with the embedded-bond and cluster VO methods and compared to the SU results. Different iPEPS environment dimensions \(\chi\) have been used as indicated in the legend. For the EBVO method, \(\chi_{\rm opt}=36\) and \(\chi_{\rm opt}=72\) have been used in the optimizations with a TS step \(\tau=0.025\) and \(\tau=0.05\), respectively (and the initial tensor is chosen as the SU tensor obtained at \(t=\tau\)). Computations have been done with \(\chi=\chi_{\rm opt}\) or \(\chi=144\). Figure 4: TM spectra in the initial RVB state **(a)** and at time \(t=0.5\) **(b-c)** for different values of \(\chi\). The finite-time spectrum has been computed using the SU and EBVO methods. Only the largest 40 eigenvalues are shown, normalized such that the leading one is set to 1. Different symbols are used to highlight particular levels, the subleading eigenvalue (\(g=1\)), and the largest eigenvalues with degeneracy \(g=4\) and \(g=3\). Note that the \(g=3\) multiplet of the NN RVB state is exactly degenerate with \(g=2,4\) and 6 multiplets. Note also that some SU data have already been reported in [27].
#### 3.2.2 Transfer matrix spectrum The double layer \(\left\langle\Psi(t)|\Psi(t)\right\rangle\) TN on the (let's say) upper infinite half-plane leads to a one-dimensional (1d) boundary which can be approximated by a Matrix Product State (MPS) defined by a single site tensor \(T\) of finite bond dimension \(\chi\) (same as the environment tensor obtained by CTMRG). Useful information - like any correlation function at all distances - is encoded in the \(\chi^{2}\times\chi^{2}\) transfer matrix \(\mathcal{T}=T^{\otimes 2}\) obtained by contracting over the \(D^{2}\) virtual indices (up to finite-\(\chi\) errors). However, the _spectrum_ of \(\mathcal{T}\) only provides information on the correlations in the asymptotic \(r\to\infty\) limit, which should not be affected by finite-time propagation, given the existence of a bound on the velocity of information spreading. Therefore, the invariance of the spectrum with time provides an additional precise test of the different methods. In the case of SU(2) symmetry, the TM eigenvalues can be labeled by their degeneracy \(g=2S+1\), defining spin sectors. Investigating the deviations of the leading eigenvalues of the various SU(2) sectors, compared to their initial values, offers a stringent test. #### 3.2.3 Criticality From the existence of the LR bound, we expect the critical (dimer) correlations in the asymptotic limit (\(r\to\infty\)) to be robust during the time evolution. This can be tested by examining the gap between the leading and subleading eigenvalues of the TM. The TM spectra obtained in the initial NN RVB state and, for \(t=0.5\), in the SU and EBVO methods are shown in Fig. 4 for increasing environment dimension \(\chi\). Additional data for spectra obtained using CVO are shown in Appendix A. All results look very much alike and are consistent with a vanishing singlet gap (defined by the difference between the leading and subleading \(g=1\) eigenvalues) in the limit \(\chi\to\infty\), characteristic of a critical phase. For a more quantitative analysis we have computed the maximum correlation length from the ratio of the first two leading eigenvalues \(\lambda_{0}=1\) and \(\lambda_{1}<1\), \(\xi_{\rm max}=-1/\ln\left(\lambda_{1}/\lambda_{0}\right)\). The results are shown in Fig. 5 and in Appendix A, revealing the absence of saturation with \(\chi\) for all methods. A diverging (maximum) correlation length is indeed consistent with the preservation of the asymptotic critical nature of the (dimer) correlations under time evolution. We observe that, quite generically, \(\xi_{\rm max}(t,\chi)\propto\chi\). However we note that the slope \(s_{\xi}(t)=\partial\xi_{\rm max}(t,\chi)/\partial\chi\) is significantly reduced for increasing time. E.g., at \(\chi=144\), maximum correlation lengths \(\sim 20\) are obtained for \(t=0.5\), compared to \(\sim 60\) at \(t=0\). We believe the change with time of the finite-\(\chi\) corrections is not inconsistent with the LR bound argument. In addition to the critical nature of the bulk, we expect the boundary MPS to be described, for any finite time, by the same Conformal Field Theory of central charge \(c=1\) as for the initial NN RVB state. To verify this important feature we have computed the MPS von Neumann entanglement entropy (partitioning the infinite chain into two halves). A selection of the data is plotted in Fig. 6 as a function of the logarithm of \(\xi_{\rm max}\).
The data are found to be consistent with a linear scaling \(S_{\rm vN}=\frac{c}{6}\ln\xi_{\rm max}\) [46, 47], even though the range of variation of \(\ln\xi_{\rm max}\) shrinks significantly for increasing time, making the comparison to the CFT prediction less precise. #### 3.2.4 Finite correlation lengths Despite the existence of critical dimer correlations, many correlations remain short-ranged, like the spin-spin correlations. The LR velocity bound implies that such correlations should not depend on time asymptotically at long distance \(r>v_{\rm LR}t\). In other words, the maximum correlation length associated to a given observable should be time invariant. Interestingly, the TM provides information about these asymptotic (finite) correlation lengths which can be distinguished by the degeneracy \(g\) of the corresponding eigenvalues. As shown in Appendix A, the spectra of all \(g>1\) eigenvalues are gapped in the \(\chi\to\infty\) limit, in contrast to the gapless \(g=1\) (singlet) spectrum (associated to the dimer critical correlations). Since in that limit all spectra become dense, one way to compare TM spectra at different times is to compare their gaps \(1-\lambda^{(g)}(t)\) or, equivalently, their associated leading correlation lengths \(\xi^{(g)}(t)=-1/\ln{(\lambda^{(g)}(t))}\), where \(\lambda^{(g)}\) is the leading eigenvalue of the \(g\)-degenerate eigenvalue spectrum. Let us first start with the reference NN RVB state (at \(t=0\)). In the top panel of Fig. 4, we have identified in the TM spectrum (apart from the gapless singlet spectrum discussed above) the smallest gaps of the \(g=4\) and \(g=15\) eigenvalues, connected to the longest-range correlations in the system (apart from the critical dimer correlations). The large \(g=15\) degeneracy reflects the very special fine-tuned nature of the NN RVB with different operators exhibiting identical (asymptotic) correlations. In fact, at small time \(t\), the \(g=15\) levels are split into four almost degenerate levels with \(g=2,3,4,6\), as a result of the approximate nature of the time-evolved state. Figure 5: Maximum correlation length plotted as a function of the environment dimension \(\chi\), obtained in the SU method (panel **(a)** - data from Ref. [27]) and in the EBVO method (panel **(b)**). The different symbols correspond to different values of the time \(t\) (see legend). Figure 6: von Neumann entanglement entropy of the 1d boundary theory as a function of \(\ln\xi_{\rm max}\) for various values of the time \(t\) as shown in the legend. The dashed line is a guide to the eye showing the expected CFT scaling with central charge \(c=1\). The SU (**a**) and the EBVO (**b**) schemes have been used. Figure 7: Asymptotic correlation lengths computed at \(\chi=144\) are plotted versus time. All correlation lengths are labeled by the degeneracy of the corresponding TM eigenvalues and are associated to different operators. SU (crosses), EBVO (diagonal crosses), U(1)-CVO (circles) and \(\mathbb{Z}_{2}\)-CVO (squares) results are shown. Panel **(a)**: leading correlation length \(\xi^{(4)}\) associated to the gap of the \(g=4\) TM spectrum and corresponding to the spinon correlations [27]. Panel **(b)**: leading correlation lengths \(\xi^{(3)}\) associated to the gaps of the \(g=3\) TM spectra (originating from the \(g=15\) largest eigenvalue of the \(t=0\) TM spectrum). Note that \(\xi^{(3)}\) corresponds to the spin-spin correlation length [27].
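The transfer-matrix diagnostics used in this section are straightforward to reproduce. The sketch below builds the \(\chi^{2}\times\chi^{2}\) transfer matrix from a boundary-MPS tensor, groups its eigenvalues into (quasi-)degenerate multiplets, and converts the leading gap of each multiplet into a correlation length \(\xi^{(g)}=-1/\ln|\lambda^{(g)}|\). The random tensor is only a stand-in for the converged CTMRG boundary tensor, and the degeneracy tolerance is an illustrative choice.

```python
import numpy as np
from scipy.sparse.linalg import eigs

def tm_correlation_lengths(T, n_ev=40, tol=1e-6):
    """T: boundary MPS tensor of shape (chi, D^2, chi).
    Returns (degeneracy g, xi) pairs for the leading transfer-matrix
    multiplets, with eigenvalues normalised so that lambda_0 = 1."""
    chi = T.shape[0]
    # double-layer transfer matrix, contracted over the D^2 index
    TM = np.einsum('iak,jal->ijkl', T, T.conj()).reshape(chi * chi, chi * chi)
    lam = eigs(TM, k=min(n_ev, chi * chi - 2), return_eigenvectors=False)
    lam = lam[np.argsort(-np.abs(lam))]
    lam = lam / lam[0]                        # normalise leading eigenvalue to 1
    mults, start = [], 1                      # skip the leading eigenvalue
    for i in range(2, len(lam) + 1):
        if i == len(lam) or abs(abs(lam[i]) - abs(lam[start])) > tol:
            mults.append((i - start, abs(lam[start])))   # (degeneracy, |lambda|)
            start = i
    return [(g, -1.0 / np.log(m)) for g, m in mults if m < 1]

rng = np.random.default_rng(1)
T = rng.normal(size=(20, 9, 20))              # stand-in: chi = 20, D^2 = 9
for g, xi in tm_correlation_lengths(T)[:5]:
    print(f"g = {g}, xi = {xi:.3f}")
```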
Note that, from previous work [27], we know that the \(g=3\) spectrum corresponds to the spin-spin correlation function. From the bottom panels of Fig. 4 we see that the TM spectra, in particular the \(g=4\) and \(g=3\) gaps, do not change very much at finite time, \(t=0.5\). The deviations of the corresponding leading correlation lengths \(\xi^{(g)}(t)\), \(g=4,3\) - shown in Fig. 7 using \(\chi=144\) - w.r.t. the ones of the NN RVB state, \(\xi^{(g)}(0)\), give another quantitative measure of the accuracy of our methods, in addition to the energy conservation check. Again, we observe that the VO methods give rise to smaller deviations than the SU method, consistently with the analysis of the energy density. Interestingly, we note that the sign of the (small) correlation length deviations, an artifact of our approximate treatments, depends on the method. ### 3.3 Finite distance correlations Although the asymptotic behavior should not be affected for increasing time \(t\), correlations at finite distances should get stronger. As an example we investigate the spin-spin correlations \(C_{S}(d,t)=\left\langle\Psi(t)|\mathbf{S}_{i}\cdot\mathbf{S}_{i+d}|\Psi(t)\right\rangle\) between two sites along one of the crystal axes of the square lattice, separated by a short distance \(d\). Figure 8(a) shows the variations of the correlations \(C_{S}(d,t)-C_{S}(d,0)\) for increasing time up to \(t=0.5\). Here an EBVO optimization procedure followed by an iPEPS/CTMRG computation using \(\chi=144\) was used. We observe a rapid increase of the antiferromagnetic short-distance correlations, reaching, at \(t=0.5\) and distances \(d\sim 3-5\), values of the order of \(150\) to \(300\,\%\) of their initial values. Similar numerical studies have also been performed in the context of experiments on 2D arrays of Rydberg atoms [48]. We find that correlations increase at short times as \(t^{2}\) for all distances \(d>1\), in contrast to the \(t^{2+4d}\) behavior found in Ref. [48]. We argue in Appendix B that this qualitative difference is due to the existence of finite correlations in the initial state, while a product state was considered instead in Ref. [48]. To be more quantitative we have defined the relative increase as \(\Delta_{S}(d,t)=(\tilde{C}_{S}(d,t)-\tilde{C}_{S}(d,0))/\tilde{C}_{S}(d,0)\), where the antiferromagnetic oscillations of \(C_{S}(d,t)\) are absorbed by defining \(\tilde{C}_{S}(d,t)=(-1)^{d}C_{S}(d,t)>0\). \(\Delta_{S}\) is plotted in Fig. 8(b) as a function of time, using logarithmic scales on both axes. Note that we restrict here to short distances, typically below \(d\sim 5\) since, beyond that, we enter the asymptotic regime \(d\gg 1\) which is governed primarily by the TM spin-spin correlation length, \(C_{S}(d,t)\propto\exp{(-d/\xi^{(3)}(t))}\). Figure 8: (a) Spin-spin correlations \(C_{S}(d,t)=\left\langle\mathbf{S}_{0}.\mathbf{S}_{d}\right\rangle_{t}\) (w.r.t. the \(t=0\) values) along the horizontal (or vertical) axis up to distance \(d=5\), for \(t=0.1,0.2,0.3,0.4,0.5\). Tensor optimizations have been done in the EBVO scheme (\(\tau=0.025\), \(\chi_{\rm opt}=36\)) and the correlations have been computed in the thermodynamic limit with \(\chi=144\). (b) Relative change of the _staggered_ spin-spin correlation \(\Delta_{S}(d,t)\) (see text for exact definition) versus time (log-log plot) for \(d=2,3,4,5\). The dashed line corresponds to the \(t^{2}\) behavior obtained for \(d=2\) on the \(4\times 4\) cluster: \(\Delta_{S}(d=2,t)\sim 1.385\,t^{2}\).
Since (i) due to our approximate scheme, \(\xi^{(3)}(t)\) deviates slightly from the initial value \(\xi^{(3)}(0)\) of the NN RVB state and (ii) it concerns a regime of very small magnitudes of the correlations, \(\tilde{C}_{S}(d,t)<10^{-5}\), no quantitative analysis can be done for \(d\geq 5\). The \(t^{2}\) behavior can be understood from a simple (crude) argument based on the LR bound, from which we can also estimate some LR velocity. For a given distance \(d>\xi^{(3)}\), at (sufficiently) short times one can assume to be outside of the LR light-cone, where correlations are non-zero from the very beginning of the time evolution. In that region, the correlation function in fact exhibits "leaks" with spatio-temporal exponential decay [45], \[C_{S}(d,t)\simeq K(\exp{[-(d-v_{LR}t)/\xi^{(3)}]}+\exp{[-(d+v_{LR}t)/\xi^{(3)}]}).\] Hence, in that limit, the relative increase of the correlations becomes \[\Delta_{S}(d,t)\simeq\cosh{(v_{LR}t/\xi^{(3)})}-1\sim\frac{1}{2}(v_{LR}t/\xi^{(3)})^{2}.\] From the numerical estimation of the \(t^{2}\) coefficient for \(d=3\) we estimate \(v_{LR}\simeq 4.1\) in units of \(J\). ## 4 Conclusions In summary, we have applied tensor network techniques to investigate the time evolution of a 2D (critical) spin liquid on the square lattice after a sudden quench. The quench Hamiltonian is the simple NN Heisenberg model so that all symmetries of the initial state (invariance under space group and spin-rotation symmetries) are preserved during the time evolution. Practically, this allows one to represent the time-evolving state by a translationally invariant singlet PEPS defined by a single symmetric site tensor. The time-evolution of the state is therefore simply encoded in the time-evolution of the site tensor. Using a basis of the local site tensors (of a given virtual bond dimension \(D=6\)), the (highly non-linear) optimization problem hence translates into finding the best linear combination of the basis tensors. The U(1) gauge symmetry of the initial state also plays a special role, automatically conferring critical dimer correlations on the time-evolving state. The U(1) gauge symmetry is enforced by construction in the U(1)-CVO method and preserved by the application of the two-site gate in the SU and EBVO frameworks. Nevertheless, it is no longer explicit at the level of the on-site tensor in the \(\mathbb{Z}_{2}\)-CVO method. In that case, since the PEPS ansatz seems to also remain critical under time evolution, one possibility is that the U(1) gauge symmetry is somehow "hidden". Another possibility would be that the breaking of the gauge symmetry from U(1) to \(\mathbb{Z}_{2}\) is a finite-size effect of the optimization on a finite cluster. We have tested the respective accuracy of our methods by different means. First, as expected in the case of a unitary evolution, the energy (i.e. the expectation value of the quench Hamiltonian in the time-evolving state) should be conserved. The observed deviation of the energy (per site) remains quite small in the VO methods - typically smaller than \(1.5\%\) for \(t\leq 0.5\) - while it rises to around \(8\%\) in the SU method at \(t=0.5\). We have also considered the consequences of the Lieb-Robinson theorem stating an upper bound on the velocity at which information propagates. From this theorem, there exists a "light-cone" beyond which correlations should remain unchanged (apart from exponentially small tails), imposing some constraints on the properties of the time-dependent TM which governs all asymptotic properties.
In particular, we have investigated the criticality of the system, finding that it is preserved under time evolution in all methods. To be more quantitative, we have also studied the finite correlation lengths associated to spinon and spin correlations. In our approximate schemes small deviations are of course expected, but we found that the latter remain quite small, especially in the VO methods, up to accessible times of the order of \(0.5-0.6\). To investigate how correlations develop at finite distances we have considered spin-spin correlations which are short-ranged in the initial NN RVB state and, therefore, are simpler to analyse. Considering spatio-temporal parameters outside of the (potential) light-cone, the rate of spreading of correlations is estimated. Finally, let us describe the pros and cons of the various methods we have used. For all of them the main limitation is of course the finite virtual space dimension \(D=6\) that limits the bond entanglement to \(\ln 6\), while the latter is known to increase indefinitely with time. For longer times than studied here, it would therefore be necessary to increase the bond dimension, adding some extra SU(2) multiplet(s) to the virtual space. Even though the entanglement barrier limits all methods, it is nevertheless meaningful to compare the methods at small times. We have provided arguments that the VO methods, although much more costly in CPU time, provide a higher accuracy, according to the criteria mentioned above. Note that the sources of error of the different VO methods are clearly different: while the optimization suffers from finite-size effects in the CVO, the tensor update remains essentially local in the EBVO (although taking into account the environment). Let us also mention that, for all methods, the computation of observables on the infinite 2D lattice relies on a (symmetric) CTMRG procedure. Generically we experience some instability issues for \(t>0.5\) (typically spin-rotation symmetry breaks down spontaneously). It may signal the fact that the iPEPS deviates too much from the true time-evolved state and is no longer physical. Lastly, we would like to point out that our methods are quite versatile and can be applied to other lattices/types of spin liquids/quench Hamiltonians. In particular, topological short-range NN RVB states on non-bipartite lattices could be studied with the same methods. Note that adding longer-range interactions (like a next-NN frustrating Heisenberg coupling) to the quench Hamiltonian is an easy task in the CVO framework. Interestingly, the CVO method will also enable the investigation of Floquet dynamics at long times. ## Appendix A Transfer matrix spectra in the CVO methods For completeness we provide in this Appendix results on the TM spectra obtained using the CVO method - to get the PEPS site tensor - followed by a CTMRG procedure to obtain the fixed-point MPS boundary. We focus here on the finite-\(\chi\) effects. A comparison of the \(t=0.5\) spectra using Cluster Variational Optimization within the restricted U(1) **(a)** or full \(\mathbb{Z}_{2}\) tensor basis **(b)** is shown in Fig. 9, for increasing \(\chi\) values. These spectra resemble very much those obtained in the SU and in the EBVO methods shown in the main text. In particular, the data are consistent with a vanishing gap in the \(\chi\to\infty\) limit, although with finite gaps associated to spinon (leading \(g=4\) multiplet) and spin-spin (leading \(g=3\) multiplet) correlations.
The maximum correlation lengths extracted from the TM gaps plotted in Fig. 10 as a function of the environment dimension \(\chi\) show the same linear behaviors as those obtained within the SU or EBVO methods, suggesting again a critical state at all times. It is natural to expect such a critical behavior in the U(1)-CVO method since the ansatz bears the same U(1) gauge symmetry as the initial NN RVB state. However, interestingly enough, this gauge symmetry is absent at the level of the \(\mathbb{Z}_{2}\) site basis tensors and, therefore, may be hidden in the ansatz obtained by the \(\mathbb{Z}_{2}\)-CVO method. For completeness we also show in Fig. 11 the behavior of the boundary MPS entanglement entropy versus the logarithm of the maximum correlation length. The results are again very similar to the ones of the SU and EBVO methods, in agreement with the linear scaling expected in a \(c=1\) central charge CFT. Lastly, we show in Fig. 12 the leading correlation lengths of the NN RVB state and of the time-evolved state at \(t=0.5\) plotted as a function of \(1/\chi\). The time-evolved state is computed here using the U(1)-CVO scheme. The data reveal moderate finite-\(\chi\) effects with small linear \(1/\chi\) corrections in the asymptotic \(\chi\to\infty\) limit corresponding to the exact contraction of the iPEPS tensor network. We see that the data obtained with the largest available \(\chi=144\), as mostly reported in the text, are already accurate. The same conclusion applies to the data obtained by all other methods. Figure 9: TM spectra at time \(t=0.5\) for different values of \(\chi\) computed using Cluster Variational Optimization within the restricted U(1) **(a)** or full \(\mathbb{Z}_{2}\)**(b)** tensor basis. Only the largest 40 eigenvalues are shown, normalized such that the leading one is set to 1. Different symbols are used to highlight particular levels, the subleading eigenvalue (\(g=1\)), and the largest eigenvalues with degeneracy \(g=4\) and \(g=3\). Figure 10: Maximum correlation length plotted as a function of the environment dimension \(\chi\), obtained in the U(1) (panel **(a)**) and \(\mathbb{Z}_{2}\) (panel **(b)**) CVO methods. The different symbols correspond to different values of the time \(t\) (see legend). Figure 11: von Neumann entanglement entropy of the 1d boundary theory as a function of \(\ln\xi_{\rm max}\) for various values of the time \(t\) as shown in the legend. The dashed line is a guide to the eye showing the expected CFT scaling with central charge \(c=1\). The U(1) (**a**) and the \(\mathbb{Z}_{2}\) (**b**) CVO schemes have been used. Figure 12: Scaling of various (finite) correlation lengths versus \(1/\chi\). Data are obtained at \(t=0.5\) with the Cluster VO-U(1) method. The data obtained in the reference NN RVB state are shown as filled symbols. All correlation lengths \(\xi^{(g)}\) are labeled by the degeneracy \(g\) of the corresponding TM eigenvalues and, hence, correspond to different types of correlation functions (e.g. \(g=3\) corresponds to spin-spin correlations). ## Appendix B Short-time expansion of the spin-spin correlations Here we estimate the short-time behavior of the spin-spin correlation \(C_{S}(d,t)\) where \(d\) is the linear (or Manhattan) distance between two sites \(i\) and \(j\), \(d>1\). 
Expanding the correlation function \(C_{S}(d,t)=\big{\langle}\exp\left(iHt\right)\mathbf{S}_{i}\cdot\mathbf{S}_{j}\,\exp\left(-iHt\right)\big{\rangle}_{0}\) to second order in \(t\), using \(e^{iHt}A\,e^{-iHt}=A+it[H,A]-\frac{t^{2}}{2}[H,[H,A]]+O(t^{3})\) together with \([H,[H,A]]=[[A,H],H]\), so that \(C_{S}(d,t)\sim C_{S}(d,0)+C_{S}^{(1)}(d,t)+C_{S}^{(2)}(d,t)\), one gets \[C_{S}^{(1)}(d,t) = -i\,t\,\big{\langle}[\mathbf{S}_{i}\cdot\mathbf{S}_{j},H]\big{\rangle}_{0}, \tag{7}\] \[C_{S}^{(2)}(d,t) = -\frac{t^{2}}{2}\,\big{\langle}[[\mathbf{S}_{i}\cdot\mathbf{S}_{j},H],H]\big{\rangle}_{0}, \tag{8}\] where \(\big{\langle}\dots\big{\rangle}_{0}\) is the expectation value taken in the initial state \(\ket{\Psi_{0}}\). Using the expression of the Heisenberg Hamiltonian \(H\), the commutator \(O_{ij}^{(1)}=[\mathbf{S}_{i}\cdot\mathbf{S}_{j},H]\) can be written as, \[O_{ij}^{(1)}=\sum_{k(i)}\mathbf{S}_{j}\cdot(\mathbf{S}_{i}\times\mathbf{S}_{k})+\sum_{p(j)}\mathbf{S}_{i}\cdot(\mathbf{S}_{j}\times\mathbf{S}_{p}),\] where \(k(i)\) and \(p(j)\) are NN sites of \(i\) and \(j\), respectively. Note that this operator has a zero expectation value in the RVB state (which is invariant under time-reversal) so that the \(t\)-linear term vanishes in the short-time expansion. Although the exact analytic expression becomes complicated, the double-commutator in \(C_{S}^{(2)}(d,t)\) has the following structure, \[O_{ij}^{(2)}=G_{i}^{(3)}G_{j}^{(1)}+G_{i}^{(1)}G_{j}^{(3)}+G_{i}^{(2)}G_{j}^{(2)},\] where \(G_{k}^{(n)}\) is an \(n\)-spin operator defined on a finite support of size \(2\) including site \(k\). Since all terms are invariant under time-reversal, \(O_{ij}^{(2)}\) generically has a finite expectation value. This property has been checked numerically on the \(4\times 4\) torus for all available distances between \(i\) and \(j\). Note also that this property will cease to be true if the expectation value is taken in a product state (like the classical Néel state) and the supports of the operator do not overlap, i.e. for \(d>4\). _Acknowledgments--_ We acknowledge inspiring discussions with Mari-Carmen Bañuls, Andreas Läuchli, Norbert Schuch, Luca Tagliacozzo, and Frank Verstraete and support from the TNTOP ANR-18-CE30-0026-01 grant awarded by the French Research Council. This work was granted access to the HPC resources of CALMIP center under the allocation 2022-P1231.
2305.15804
Smoothed Complexity of SWAP in Local Graph Partitioning
We give the first quasipolynomial upper bound $\phi n^{\text{polylog}(n)}$ for the smoothed complexity of the SWAP algorithm for local Graph Partitioning (also known as Bisection Width), where $n$ is the number of nodes in the graph and $\phi$ is a parameter that measures the magnitude of perturbations applied on its edge weights. More generally, we show that the same quasipolynomial upper bound holds for the smoothed complexity of the 2-FLIP algorithm for any binary Maximum Constraint Satisfaction Problem, including local Max-Cut, for which similar bounds were only known for $1$-FLIP. Our results are based on an analysis of cycles formed in long sequences of double flips, showing that it is unlikely for every move in a long sequence to incur a positive but small improvement in the cut weight.
Xi Chen, Chenghao Guo, Emmanouil-Vasileios Vlatakis-Gkaragkounis, Mihalis Yannakakis
2023-05-25T07:37:00Z
http://arxiv.org/abs/2305.15804v1
# Smoothed Complexity of SWAP in Local Graph Partitioning ###### Abstract We give the first quasipolynomial upper bound \(\phi n^{\mathrm{polylog}(n)}\) for the smoothed complexity of the SWAP algorithm for local Graph Partitioning (also known as Bisection Width), where \(n\) is the number of nodes in the graph and \(\phi\) is a parameter that measures the magnitude of perturbations applied on its edge weights. More generally, we show that the same quasipolynomial upper bound holds for the smoothed complexity of the 2-FLIP algorithm for any binary Maximum Constraint Satisfaction Problem, including local Max-Cut, for which similar bounds were only known for 1-FLIP. Our results are based on an analysis of cycles formed in long sequences of double flips, showing that it is unlikely for every move in a long sequence to incur a positive but small improvement in the cut weight. ## 1 Introduction _Local search_ has been a powerful machinery for a plethora of problems in combinatorial optimization, from the classical Simplex algorithm for linear programming to the gradient descent method for modern machine learning problems, to effective heuristics (e.g. Kernighan-Lin) for basic combinatorial problems such as the Traveling Salesman Problem and Graph Partitioning. A local search algorithm begins with an initial candidate solution and then follows a path by iteratively moving to a better neighboring solution until a local optimum in its neighborhood is reached. The quality of the obtained solutions depends, of course, on how rich the neighborhood structure explored by the algorithm is. Local search is a popular approach to optimization because of the general applicability of the method and the fact that the algorithms typically run fast in practice. In contrast to their empirical fast convergence, however, many local search algorithms have exponential running time in the worst case due to delicate pathological instances that one may never encounter in practice. To bridge this striking discrepancy, Spielman and Teng [1] proposed the framework of _smoothed analysis_, a hybrid of the classical worst-case and average-case analyses. They used it to provide rigorous justifications for the empirical performance of the Simplex algorithm by showing its smoothed complexity to be polynomial. Since then, the smoothed analysis of algorithms and problems from combinatorial optimization [2, 3, 4], among many other research areas such as numerical methods [5, 6, 4, 7], machine learning [8, 9, 10, 11] and algorithmic game theory [12, 13, 14, 15], has been studied extensively. In this paper we study the smoothed complexity of local search algorithms for the classical problem of _Graph Partitioning_ (also known as _Bisection Width_ in the literature). In the problem we are given edge weights \(X=(X_{e}:e\in E_{2n})\) of a complete graph \(K_{2n}=(V_{2n},E_{2n})\) with \(X_{e}\in[-1,1]\), and the goal is to find a _balanced_ partition \((U,V)\) of \(V_{2n}\) into two equal-size subsets \(U\) and \(V\) to minimize the weight of the corresponding cut (i.e., the sum of weights of edges with one node in \(U\) and the other node in \(V\)). Graph Partitioning has been studied extensively, especially in practice. It forms the basis of divide and conquer algorithms and is used in various application domains, for example in laying out circuits in VLSI. It has also served as a test bed for algorithmic ideas [16]. Given its NP-completeness [17], heuristics have been developed to solve Graph Partitioning in practice. 
A commonly used approach is based on local search: starting with an initial balanced partition, local improvements on the cut are made iteratively until a balanced partition that minimizes the cut within its _neighborhood_ is reached. The simplest neighborhood is the SWAP neighborhood, where two balanced partitions are neighbors if one can be obtained from the other by swapping two nodes, one from each part. A locally optimal solution under the SWAP neighborhood can be found naturally by the SWAP algorithm, which keeps swapping two nodes as long as the swap improves the cut. A more sophisticated neighborhood structure, which yields much better locally optimal solutions in practice, is that of the Kernighan-Lin (KL) algorithm which performs in each move a sequence of swaps [18]. These local search algorithms for Graph Partitioning typically converge fast in practice. (For a thorough experimental analysis of their performance, and comparison with simulated annealing, regarding both the quality of solutions and the running time, see [16].) In contrast, it is also known that the worst-case complexity is exponential. (Finding a locally optimal solution for Graph Partitioning under the sophisticated Kernighan-Lin neighborhood, and even under the SWAP neighborhood, is complete in PLS [19, 20]. The hardness reductions give instances on which these algorithms take exponential time to converge.) _This significant gap in our understanding motivates us to work on the smoothed complexity of the SWAP algorithm for Graph Partitioning in this paper._ We work with the full perturbation model, under which edge weights are drawn independently from a collection of distributions \(\mathcal{X}=(\mathcal{X}_{e}:e\in E_{2n})\). Each \(\mathcal{X}_{e}\) is supported on \([-1,1]\), and has its density function bounded from above by a parameter \(\phi>0\). Our goal is to understand the expected number of steps the SWAP algorithm takes, as a function of \(n\) and \(\phi\), against any edge weight distributions \(\mathcal{X}\).1 Note that the SWAP algorithm, similar to the Simplex algorithm, is a family of algorithms since one can implement it using different pivoting rules, deterministic or randomized, to pick the next pair of nodes to swap when more than one pair can improve the cut. We would like to establish upper bounds that hold for any implementation of the SWAP algorithm. ### Related work: Smoothed analysis of 1-FLIP for Max-Cut There has not been any previous analysis on SWAP under the smoothed setting, as far as we are aware. In contrast, much progress has been made on the smoothed analysis of the _1-FLIP algorithm for Max-Cut_ [21, 22, 23, 24, 25]. The major challenge for the analysis of SWAP, as we discuss in more detail in Section 1.3, is to overcome substantial new obstacles posed by the richer neighborhood structure of SWAP, which are not present in the simpler \(1\)-_change neighborhood_ behind \(1\)-FLIP. Recall that in Max-Cut, we are given edge weights \(X=(X_{e}:e\in E_{n})\) of a complete graph \(K_{n}=(V_{n},E_{n})\) with \(X_{e}\in[-1,1]\) and the goal is to find a (_not necessarily balanced_) partition of \(V_{n}\) to maximize the cut. 2 The simplest neighborhood structure for local search on Max-Cut is the so-called _\(1\)-change neighborhood_, where two partitions are neighbors if one can be obtained from the other by moving a single node to the other side. The \(1\)-FLIP algorithm finds such a locally optimal solution by repeatedly moving nodes, one by one, as long as each move improves the cut. 
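To make the perturbation model and the SWAP dynamics concrete, here is a minimal sketch (an illustration only, not the object of the analysis). The weight generator lets an adversary pick, for each edge, an interval of length \(1/\phi\) inside \([-1,1]\) and draws the weight uniformly from it, so each density is exactly \(\phi\); the local search uses a greedy pivoting rule, which is just one of the many implementations covered by the results.

```python
import itertools
import random

def smoothed_weights(n2, phi, rng):
    """phi-bounded edge weights for the complete graph on n2 nodes: each
    weight is uniform on an adversarial interval of length 1/phi in [-1, 1]
    (here the adversary's choice is itself randomized for simplicity)."""
    assert phi >= 0.5, "need 1/phi <= 2 so the interval fits in [-1, 1]"
    w = {}
    for e in itertools.combinations(range(n2), 2):
        lo = rng.uniform(-1.0, 1.0 - 1.0 / phi)   # adversary's choice
        w[e] = rng.uniform(lo, lo + 1.0 / phi)    # the random perturbation
    return w

def cut_weight(w, side):
    return sum(x for (u, v), x in w.items() if side[u] != side[v])

def swap_local_search(w, side):
    """Greedy SWAP: repeatedly swap the pair (one node per part) that
    decreases the cut the most, until no swap improves it."""
    steps = 0
    while True:
        base = cut_weight(w, side)
        best, best_pair = 0.0, None
        part0 = [u for u in side if side[u] == 0]
        part1 = [u for u in side if side[u] == 1]
        for u in part0:
            for v in part1:
                side[u], side[v] = 1, 0            # trial swap
                gain = base - cut_weight(w, side)  # minimization gain
                side[u], side[v] = 0, 1            # undo trial
                if gain > best:
                    best, best_pair = gain, (u, v)
        if best_pair is None:
            return side, steps
        u, v = best_pair
        side[u], side[v] = 1, 0
        steps += 1

rng = random.Random(0)
n2 = 10                                # 2n nodes
w = smoothed_weights(n2, phi=2.0, rng=rng)
side = {u: u % 2 for u in range(n2)}   # initial balanced partition
side, steps = swap_local_search(w, side)
print("local optimum reached after", steps, "swaps")
```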
For the structured perturbation model, where a graph \(G\) (not necessarily a complete graph) is given and only weights of edges in \(G\) are perturbed, [22] showed that the expected number of steps \(1\)-FLIP takes to terminate is at most \(\phi n^{\log n}\). Subsequently, the bound was improved by [23] to \(\phi\cdot\operatorname{poly}(n)\) for the full perturbation model, with further improvements in [24] on the polynomial dependence on \(n\). The upper bound of [22] for the structured model was recently improved to \(\phi n^{\sqrt{\log n}}\) in [25]. Footnote 2: Since we allow weights in \([-1,1]\), maximizing the cut is the same as minimizing the cut after negating all edge weights. Hence the only difference of Max-Cut from Graph Partitioning is that the partition does not have to be balanced. ### Our Contributions We present the first smoothed analysis of the SWAP algorithm for Graph Partitioning. Our main result for SWAP is a quasipolynomial upper bound on its expected running time:3 Footnote 3: We did not make an attempt to optimize the constant \(10\) in the polylog exponent. **Theorem 1.1**.: _Let \(\mathcal{X}=(\mathcal{X}_{e}:e\in E_{2n})\) be distributions of edge weights such that each \(\mathcal{X}_{e}\) is supported on \([-1,1]\) and has its density function bounded from above by a parameter \(\phi>0\). Then with probability at least \(1-o_{n}(1)\) over the draw of edge weights \(X\sim\mathcal{X}\), any implementation of SWAP takes at most \(\phi n^{O(\log^{10}n)}\) steps to terminate._ The proof of Theorem 1.1 for SWAP is based on techniques we develop for a more challenging problem: the smoothed analysis of \(2\)-_FLIP for Max-Cut_. Starting with an initial partition (not necessarily balanced), in each round, \(2\)-FLIP can move either one node (like \(1\)-FLIP) or two nodes (not necessarily in different parts) as long as the cut is improved. If we restrict the algorithm to only use double flips in every move, then we call this variant _pure_ \(2\)-FLIP. Feasible moves in SWAP are clearly feasible in pure \(2\)-FLIP as well but not vice versa. Thus, an improving sequence of SWAP in the Graph Partitioning problem is also an improving sequence of pure \(2\)-FLIP in the Max-Cut problem on the same instance. Again, we do not make any assumption on the pivoting rule used by \(2\)-FLIP (i.e., which move is selected in each step if there are multiple improving moves), except that if both single and double flips are allowed, then the algorithm never moves a pair of nodes when moving only one of the two nodes would yield a better cut. Clearly, any reasonable implementation of \(2\)-FLIP satisfies this property. Our main result on \(2\)-FLIP is a similar quasipolynomial upper bound on its expected running time. The same result holds also for any implementation of the pure \(2\)-FLIP algorithm that performs only \(2\)-flips. This is the first smoothed analysis of \(2\)-FLIP: **Theorem 1.2**.: _Let \(\mathcal{X}=(\mathcal{X}_{e}:e\in E_{n})\) be distributions of edge weights such that each \(\mathcal{X}_{e}\) is supported on \([-1,1]\) and has its density function bounded from above by a parameter \(\phi>0\). Then with probability at least \(1-o_{n}(1)\) over the draw of edge weights \(X\sim\mathcal{X}\), any implementation of the 2-FLIP algorithm takes at most \(\phi n^{O(\log^{10}n)}\) steps to terminate._ A more general class of problems that is related to Max-Cut is the class of _Maximum Binary Constraint Satisfaction Problems_ (MAX 2-CSP). 
In a general Max 2-CSP, the input consists of a set of Boolean variables and a set of weighted binary constraints over the variables; the problem is to find an assignment to the variables that maximizes the weight of the satisfied constraints. Max-Cut is the special case when every constraint is a \(\neq\) (XOR) constraint. Other examples are Max 2SAT and Max Directed Cut (i.e., the Max Cut problem on weighted directed graphs). More generally, in a _Binary Function Optimization Problem_ (BFOP), instead of binary constraints the input has a set of weighted binary functions on the variables, and the objective is to find an assignment that maximizes the sum of the weights of the functions (see Section 7 for the formal definitions). It was shown in [25] that the results for 1-FLIP for Max-Cut generalize to all Max 2-CSP and BFOP problems. We prove that this is the case also with 2-FLIP. We say an instance of Max 2-CSP or BFOP is _complete_ if it includes a constraint or function for every pair of variables. **Theorem 1.3**.: _Let \(\mathcal{I}\) be an arbitrary complete instance of a MAX 2-CSP (or BFOP) problem with \(n\) variables and \(m\) constraints (functions) with independent random weights in \([-1,1]\) with density at most \(\phi>0\). Then, with probability at least \(1-o_{n}(1)\) over the draw of the weights, any implementation of 2-FLIP takes at most \(m\phi n^{O(\log^{10}n)}\) steps to terminate._ For all the aforementioned problems, by controlling the tail-bound of the failure probability, we can strengthen our analysis to derive the same bound for the expected number of steps needed to terminate as in the standard smoothed analysis prototype (see Corollary 3.5). ### Our Approach Here, we give an overview of our proof approach, focusing on the analysis of the 2-FLIP algorithm for Max-Cut (Theorem 1.2). Many details are omitted in this subsection, to help the reader get an overall view of some of the key ideas and the structure of the proof. Note that 2-FLIP clearly subsumes 1-FLIP, since it explores a much larger neighborhood structure. For example, a 2-FLIP algorithm could apply improving 1-flips as long as possible, and only when the partition is locally optimal with respect to the 1-flip neighborhood apply an improving 2-flip. Therefore, the complexity (whether smoothed or worst-case) of 2-FLIP is clearly at least as large as the complexity of 1-FLIP, and could potentially be much larger. Similarly, the analysis of 2-FLIP has to subsume the analysis of 1-FLIP, but it needs to address many more challenges, in view of the larger space of possible moves in each step (quadratic versus linear). In a sense, it is analogous to the difference between a two-dimensional and a one-dimensional problem. First, let's briefly review the approach of previous work [22] on the simpler 1-FLIP problem. Since the edge weights are in \([-1,1]\), the weight of any cut is in \([-n^{2},n^{2}]\). For the execution of the FLIP algorithm to be long, it must have many moves where the gain in the cut weight is very small, in \((0,\epsilon]\) for some small \(\epsilon>0\). It is easy to see that any single move by itself has small probability (\(\phi\epsilon\)) of this being the case. If different moves were uncorrelated, then the probability that every move of a sequence increases the weight of the cut by no more than \(\epsilon\) would go down exponentially with the length of the sequence. Of course, different moves are correlated. 
However, the same effect holds if the improvements of the moves are linearly independent in the following sense. For any sequence of the FLIP algorithm, the _improvement vector_ of one move is the vector indexed by the edges with entries in \(\{-1,0,1\}\) indicating whether each edge is added or removed from the cut as a result of the move. Most work along this line of research is based on the following fact (see Corollary 2.1 for the formal statement): If the rank of the set of improvement vectors is \(\mathsf{rank}\), then every move of the sequence has improvement at most \(\epsilon\) with probability at most \((\phi\epsilon)^{\mathsf{rank}}\). On the other hand, if every sequence of \(\Theta(n)\) consecutive moves improves the cut by at least \(\epsilon\) in total, then the number of steps of FLIP is bounded by \(\Theta(n)\cdot(2n^{2}/\epsilon)=\mathrm{poly}(n)/\epsilon\), as the total improvement cannot exceed \(2n^{2}\). So a natural approach is to union bound over all possible sequences of length \(\Theta(n)\) and all \(2^{n}\) possible initial configurations, which yields a probability upper bound of \(2^{n}n^{\Theta(n)}(\phi\epsilon)^{\mathsf{rank}}\). Getting a quasi-polynomial complexity bound using the union bound above requires the \(\mathsf{rank}\) of any sequence of length \(\Theta(n)\) to be at least \(\Omega(n/\log n)\). However, this is not always true (consider, e.g., a sequence in which only \(n^{0.1}\) distinct nodes moved). One key idea of [22] is to avoid a union bound over all initial configurations and to union bound only over initial configurations of _active_ nodes (nodes that move at least once in the sequence) by looking at _arcs_. An arc is defined to be two adjacent moves of the same node. By taking the sum of improvement vectors of the two moves of an arc, edges that involve inactive nodes are cancelled, so the union bound over sequences of length \(\ell\) becomes \(2^{\ell}n^{\ell}(\phi\epsilon)^{\mathsf{rank}_{\mathsf{arcs}}}\). To lower bound the rank of arcs of sequence \(S\), \(\mathsf{rank}_{\mathsf{arcs}}(S)\), they proved it is at least half of the number of nodes that appear more than once in the sequence, denoted \(V_{2}(S)\). The essential combinatorial claim made by [22] is that for any sequence of length \(\Omega(n)\), there exists a substring of length \(\ell\) with \(V_{2}(S)\) at least \(\Omega(\ell/\log n)\). This can be shown by bucketing arcs by length into buckets \([2^{i},2^{i+1})\) and picking the largest bucket as the length of the substring. On average, a random substring would contain \(\Omega(\ell/\log n)\) arcs with similar length, and therefore, \(\Omega(\ell/\log n)\) arcs with distinct nodes. A similar idea is used in Case 1 of our Section 6 to handle 1-moves (moves that flip a single node). Now let's return to the case of the 2-FLIP algorithm. A step now can move two nodes at the same time, and this fact poses qualitatively new challenges to the proof framework. Now we have to deal not just with sets (e.g., the set of nodes that move more than once) but instead with relations (graphs). Define an _auxiliary graph_ \(H\) for the sequence of moves, whose vertex set is that of \(K_{n}\) and which contains an edge for each 2-move of the sequence. If we still want to eliminate the influence of inactive nodes in the improvement vector by summing or subtracting two moves as in the 1-FLIP case, the moves have to contain the _exact_ same pair of nodes. This happens too sparsely in the improving sequence of 2-FLIP to provide enough rank. 
To this end, we generalize the notion of arcs to _cycles_. A cycle is a set of 2-moves of the sequence whose corresponding edges form a cycle in \(H\). But not all cycles of \(H\) are useful. We are interested only in cycles for which there is a linear combination of the improvement vectors of the moves of the cycle that cancels all edges of \(K_{n}\) that involve an inactive node (i.e., the corresponding entry in the linear combination is 0); these are the cycles that are useful to the rank and we call them _dependent cycles_. So the goal is to find a substring \(S\) of length \(\ell\) where we can lower bound \(\mathsf{rank}_{\mathsf{cycles}}(S)\) by \(\ell/\mathrm{polylog}(n)\). The ideal case would be the one where all nodes have \(O(\mathrm{polylog}(n))\) but at least 2 appearances in the substring, i.e., all nodes have degree between 2 and \(O(\mathrm{polylog}(n))\) in \(H\). In this case, we can repeat the following process to find enough cycles. Find a dependent cycle in \(H\), pick an edge in \(K_{n}\) that is non-zero in the improvement vector of the cycle (we call this the _witness_ of the cycle) and delete both nodes of the witness from \(H\). This way, the improvement vectors of cycles of \(H\) we pick in the future will not contain witnesses from previous cycles, and the improvement vectors of the cycles we pick form a triangular matrix that has full rank. Since any node has \(O(\operatorname{polylog}(n))\) degree in \(H\), each iteration deletes \(O(\operatorname{polylog}(n))\) edges. So the process can be repeated at least \(\Omega(\ell/\operatorname{polylog}(n))\) times. However, it is not hard to construct sequences of polynomial length such that any substring consists mostly of moves involving one high-degree node (with degree even \(\Omega(\ell)\)) and one degree-\(1\) node, so deleting the high-degree node would have a significant impact on the graph and the process can be repeated only for a few rounds, leading to only a few cycles. So the challenge is to run a similar process, but reuse high-degree nodes carefully without repeating witnesses found in previous cycles. Suppose we find a cycle \(C\) with witness edge \((u,v)\). To avoid including the edge in another cycle \(C^{\prime}\), a sufficient condition is that: (1) \(u\) is not included in \(C^{\prime}\). (2) For any two adjacent edges (edges in \(H\), not \(K_{n}\)) of \(v\) in \(C^{\prime}\), \(u\) never moves between the two corresponding moves of the edges. To meet condition 1, we can delete \(u\) from \(H\). To meet condition 2, we can make multiple copies of \(v\) in \(H\) where each copy corresponds to moves in \(S\) where \(u\) doesn't move between them. We call this operation _splitting_ since the new graph is generated by deleting and splitting the original \(H\). The new graph after splitting is called the _split auxiliary graph_. Our algorithm for finding a large number of linearly independent cycles can be described as repeatedly performing the following process. Find a cycle in the split auxiliary graph with witness \((u,v)\) by a tree-growing argument, delete \(u\) and split the graph by creating multiple copies of \(v\). We have to choose the witness edges \((u,v)\) and perform the splitting carefully, so that the number of nodes does not proliferate in this process. 
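For intuition about the auxiliary graph, note that the number of independent cycles available in a window equals the cyclomatic number \(|E|-|V|+\#\text{components}\) of \(H\), which a single union-find pass computes. The sketch below counts graph cycles only; whether a cycle is _dependent_ in the sense used above is a separate condition, and the witness selection and splitting are not reproduced here.

```python
def independent_cycle_count(two_moves):
    """Cyclomatic number of the auxiliary (multi)graph H whose edges are
    the 2-moves of a window: every edge joining two already-connected
    endpoints closes one more independent cycle."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    cycles = 0
    for u, v in two_moves:
        ru, rv = find(u), find(v)
        if ru == rv:
            cycles += 1                     # edge closes a cycle in H
        else:
            parent[ru] = rv
    return cycles

# Toy window: the triangle (1,2), (2,3), (3,1) plus the repeated pair
# (1,2) give two independent cycles; (4,5) gives none.
print(independent_cycle_count([(1, 2), (2, 3), (3, 1), (1, 2), (4, 5)]))  # 2
```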
Compared to the original auxiliary graph, the number of edges deleted and the number of new nodes introduced is proportional to the degree of \(u\), so the number of cycles for a sequence of length \(\ell\) we can find in the algorithm is bounded by \(\ell/\deg(u)\). To find \(\ell/poly(\log n)\) cycles, we need a window where a decent number of moves involve a node \(u\) whose degree \(\deg(u)\) is bounded by \(poly(\log n)\). The existence of such a window in an arbitrary sequence that is long enough can be proven via a sophisticated bucketing and counting argument. The overall argument then for 2-FLIP is that, given a sufficiently long sequence of improving moves (specifically, of length \(n\cdot poly(\log n)\)), we can find a window (a substring) such that the rank of the arcs and cycles in the window is within a \(poly(\log n)\) factor of the length of the window. As a consequence, with high probability the weight of the cut improves by a nontrivial amount \(\epsilon\) (\(1/\text{quasi-polynomial}\)) during this window. This can happen at most \(n^{2}/\epsilon\) times, hence the length of the execution sequence of 2-FLIP is at most quasi-polynomial. ### Organization of the paper. The rest of the paper is organized as follows. Section 2 gives basic definitions of the problems and the smoothed model, defines the central concepts of arcs and cycles, their improvement vectors, and proves a set of basic lemmas about them that are used throughout the subsequent analysis. Section 3 states the main lemma on the existence of a nice window in the move sequence such that the arcs and cycles in the window have high rank, and shows how to derive the main theorem from this lemma. Sections 4 and 5 prove the main lemma in the case that all the moves are 2-moves (this is the more challenging case). First we show in Section 4 the existence of a nice window (in fact a large number of nice windows, since this is needed in the general case) such that many moves in the window have the property that both nodes of the move appear a substantial number of times in the window (at least \(\operatorname{polylog}(n)\) times), and one of them does not appear too many times (at most a higher \(\operatorname{polylog}(n)\)). In Section 5 we show how to find in such a nice window a large number of cycles whose improvement vectors are linearly independent. Section 6 extends the proof of the main lemma to the general case where the sequence of moves generated by 2-FLIP contains both 1- and 2-moves. Finally, in Section 7 we extend the results to the class of Maximum Binary Constraint Satisfaction and Function Optimization problems. ## 2 Preliminaries We write \([n]\) to denote \(\{1,\ldots,n\}\). Given two integers \(a\leq b\), we write \([a:b]\) to denote \(\{a,\ldots,b\}\). Given \(\gamma,\gamma^{\prime}\in\{\pm 1\}^{n}\) we use \(d(\gamma,\gamma^{\prime})\) to denote the Hamming distance between \(\gamma\) and \(\gamma^{\prime}\), i.e., the number of entries \(i\in[n]\) such that \(\gamma_{i}\neq\gamma_{i}^{\prime}\). ### Local Max-Cut and the FLIP Algorithm Let \(K_{n}=(V_{n},E_{n})\) with \(V_{n}=[n]\) be the complete undirected graph over \(n\) nodes. Given edge weights \(X=(X_{e}:e\in E_{n})\) with \(X_{e}\in[-1,1]\), the \(k\)_-local Max-Cut problem_ is to find a partition of \(V_{n}\) into two sets \(V_{1}\) and \(V_{2}\) such that the weight of the corresponding cut (the sum of weights of edges with one node in \(V_{1}\) and the other in \(V_{2}\)) cannot be improved by moving no more than \(k\) nodes to the other set. 
Formally, the objective function of our interest is defined as follows: Given any _configuration_\(\gamma\in\{\pm 1\}^{n}\) (which corresponds to a partition \(V_{1},V_{2}\) with \(V_{1}=\{u\in V_{n}:\gamma(u)=-1\}\) and \(V_{2}=\{u\in V_{n}:\gamma(u)=1\}\)), the objective function is \[\operatorname{\mathsf{obj}}_{X}(\gamma):=\sum_{(u,v)\in E_{n}}X_{(u,v)}\cdot \mathbf{1}\{\gamma(u)\neq\gamma(v)\}=\frac{1}{2}\sum_{(u,v)\in E_{n}}X_{(u,v) }\cdot\big{(}1-\gamma(u)\gamma(v)\big{)}. \tag{1}\] Our goal is to find a configuration \(\gamma\in\{\pm 1\}^{n}\) that is a \(k\)-local optimum, i.e., \(\operatorname{\mathsf{obj}}_{X}(\gamma)\geq\operatorname{\mathsf{obj}}_{X}( \gamma^{\prime})\) for every configuration \(\gamma^{\prime}\in\{\pm 1\}^{n}\) with Hamming distance no more than \(k\) from \(\gamma\). A simple local search algorithm for \(k\)-Local Max-Cut is the following \(k\)-FLIP algorithm: _Start with some initial configuration \(\gamma=\gamma_{0}\in\{\pm 1\}^{n}\). While there exists a configuration \(\gamma^{\prime}\) with \(d(\gamma^{\prime},\gamma)\leq k\) such that \(\operatorname{\mathsf{obj}}_{X}(\gamma^{\prime})>\operatorname{\mathsf{obj}} _{X}(\gamma)\), select one such configuration \(\gamma^{\prime}\) (according to some pivoting criterion), set \(\gamma=\gamma^{\prime}\) and repeat, until no such configuration \(\gamma^{\prime}\) exists._ The execution of \(k\)-FLIP on \(K_{n}\) with edge weights \(X\) depends on both the initial configuration \(\gamma_{0}\) and the pivoting criterion used to select the next configuration in each iteration. The larger the value of \(k\), the larger the neighborhood structure that is being explored, hence the better the expected quality of the solutions generated. However, the time complexity of each iteration grows rapidly with \(k\): there are \(\Theta(n^{k})\) candidate moves, and with suitable data structures we can determine in \(O(n^{k})\) time if there is an improving move and select one. Thus, the algorithm is feasible only for small values of \(k\). For \(k=1\), it is the standard FLIP algorithm. Here we are interested in the case \(k=2\). We will not make any assumption on the pivoting criterion in our results, except that we assume that the algorithm does not choose to flip in any step two nodes when flipping only one of them would produce a strictly better cut. This is a natural property satisfied by any reasonable implementation of 2-FLIP. For example, one approach (to optimize the time of each iteration) is to first check if there is an improving 1-flip (\(n\) possibilities), and only if there is none, proceed to search for an improving 2-flip (\(O(n^{2})\) possibilities). Clearly any implementation that follows this approach satisfies the above property. Also, the greedy approach, that examines all \(O(n^{2})\) possible 1-flips and 2-flips and chooses one that yields the maximum improvement, obviously satisfies the above property. Our results hold also for the variant of 2-FLIP that uses only 2-flips (no 1-flips). We refer to this variant as _Pure 2-FLIP_. ### Graph Partitioning and the SWAP Algorithm In the Graph Partitioning (or Bisection Width) problem, we are given a graph \(G\) on \(2n\) nodes with weighted edges; the problem is to find a partition of the set \(V\) of nodes into two equal-sized subsets \(V_{1},V_{2}\) to minimize the weight of the cut.4 As in the Max Cut problem, in this paper we will assume the graph is complete and the edge weights are in \([-1,1]\). 
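Before turning to the SWAP algorithm, here is a direct transcription of the objective (1) and a brute-force test of \(k\)-local optimality (a sketch only, reflecting the \(\Theta(n^{k})\) neighborhood size and hence practical only for small \(n\) and \(k\); the names are illustrative):

```python
import itertools

def obj(weights, gamma):
    """Cut weight of configuration gamma in {-1,+1}^n, as in equation (1).
    weights maps edges (u, v) with u < v to their real weight."""
    return sum(x for (u, v), x in weights.items() if gamma[u] != gamma[v])

def is_k_local_opt(weights, gamma, k):
    """True iff no flip of at most k nodes strictly improves the cut."""
    n, base = len(gamma), obj(weights, gamma)
    for size in range(1, k + 1):
        for nodes in itertools.combinations(range(n), size):
            g = list(gamma)
            for u in nodes:
                g[u] = -g[u]
            if obj(weights, g) > base:
                return False
    return True
```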
A simple local search algorithm is the SWAP algorithm: Starting from some initial partition \((V_{1},V_{2})\) with \(n\) nodes in each part, while there is a pair of nodes \(u\in V_{1},v\in V_{2}\) whose swap (moving to the other part) decreases the weight of the cut, swap \(u\) and \(v\). We do not make any assumption on the pivoting rule, i.e. which pair is selected to swap in each iteration if there are multiple pairs whose swap improves the cut. At the end, when the algorithm terminates it produces a locally optimal balanced partition, i.e. one that cannot be improved by swapping any pair of nodes. The SWAP algorithm is clearly a restricted version of Pure 2-FLIP (restricted because the initial partition is balanced, and in each step the 2-flip must involve two nodes from different parts of the partition). Footnote 4: Since the weights can be positive or negative, there is no difference between maximization and minimization. The Graph Partitioning problem is usually stated as a minimization problem. The SWAP algorithm is the simplest local search algorithm for the Graph Partitioning problem, but it is a rather weak one, in the sense that the quality of the locally optimal solutions produced may not be very good. For this reason, more sophisticated local search algorithms have been proposed and are typically used, most notably the Kernighan-Lin algorithm [18], in which a move from a partition to a neighboring partition involves a sequence of swaps. If a partition has a profitable swap, then Kernighan-Lin (KL) will perform the best swap; however, if there is no profitable swap then KL explores a sequence of \(n\) greedy steps, selecting greedily in each step the best pair of nodes to swap that have not changed sides before in the current sequence, and if this sequence of swaps eventually produces a better partition, then KL moves to the best such partition generated during this sequence. A related variant, to reduce the time cost of each iteration, was proposed by Fiduccia and Mattheyses [26]. This idea of guided deep neighborhood search is a powerful method in local search that was first introduced in the paper of Kernighan and Lin [18] on Graph Partitioning, and was subsequently applied successfully to the Traveling Salesman Problem and other problems. ### Smoothed Analysis We focus on the 2-FLIP algorithm from now on. Under the smoothed complexity model, there is a family \(\mathcal{X}=(\mathcal{X}_{e}:e\in E_{n})\) of probability distributions, one for each edge in \(K_{n}=(V_{n},E_{n})\). The edge weights \(X=(X_{e}:e\in E_{n})\) are drawn independently with \(X_{e}\sim\mathcal{X}_{e}\). We assume that each \(\mathcal{X}_{e}\) is a distribution supported on \([-1,1]\) and its density function is bounded from above by a parameter \(\phi>0\). (The assumption that the edge weights are in \([-1,1]\) is no loss of generality, since they can always be scaled to lie in that range.) Our goal is to bound the number of steps the 2-FLIP algorithm takes to terminate when running on \(K_{n}\) with edge weights \(X\sim\mathcal{X}\), in terms of \(n\) and the parameter \(\phi\). ### Move Sequences We introduce some of the key definitions that will be used in the smoothed analysis of 2-FLIP. A _move sequence_\(\mathcal{S}=(\mathcal{S}_{1},\ldots,\mathcal{S}_{\ell})\) is an \(\ell\)-tuple for some \(\ell\geq 1\) such that \(\mathcal{S}_{i}\) is a subset of \(V_{n}\) of size either one or two. 
We will refer to the \(i\)-th move in \(\mathcal{S}\) as a 1-_move_ if \(|\mathcal{S}_{i}|=1\) and a 2-_move_ if \(|\mathcal{S}_{i}|=2\), and write \(\mathsf{len}(\mathcal{S}):=\ell\) to denote its length. Additionally, let 1-_move_\((\mathcal{S})\) and 2-_move_\((\mathcal{S})\) denote the subsequences of single flips and double flips, respectively. We say a node \(u\in V_{n}\) is _active_ in \(\mathcal{S}\) if \(u\) appears in \(\mathcal{S}_{i}\) for some \(i\), and is _inactive_ otherwise. We write \(V(\mathcal{S})\subseteq V_{n}\) to denote the set of active nodes in \(\mathcal{S}\). Given \(\gamma_{0}\in\{\pm 1\}^{n}\) as the initial configuration, a move sequence \(\mathcal{S}=(\mathcal{S}_{1},\ldots,\mathcal{S}_{\ell})\) naturally induces a sequence of configurations \(\gamma_{0}\), \(\gamma_{1},\ldots,\gamma_{\ell}\in\{\pm 1\}^{n}\), where \(\gamma_{i+1}\) is obtained from \(\gamma_{i}\) by flipping the nodes in \(\mathcal{S}_{i+1}\). We say \((\gamma_{0},\mathcal{S})\) is _improving_ with respect to edge weights \(X\) if \[\mathsf{obj}_{X}(\gamma_{i})>\mathsf{obj}_{X}(\gamma_{i-1}),\quad\text{for all $i\in[\ell]$}\] and is \(\epsilon\)-_improving_ with respect to edge weights \(X\), for some \(\epsilon>0\), if \[\mathsf{obj}_{X}(\gamma_{i})-\mathsf{obj}_{X}(\gamma_{i-1})\in(0,\epsilon], \quad\text{for all $i\in[\ell]$}.\] For each \(i\in[\ell]\), the change \(\mathsf{obj}_{X}(\gamma_{i})-\mathsf{obj}_{X}(\gamma_{i-1})\) from the \(i\)-th move \(\mathcal{S}_{i}\) can be written as follows: 1. When \(\mathcal{S}_{i}=\{u\}\), \[\mathsf{obj}_{X}(\gamma_{i})-\mathsf{obj}_{X}(\gamma_{i-1})=\sum_{w\in V_{n}:w\neq u}\gamma_{i-1}(w)\gamma_{i-1}(u)X_{(u,w)}.\] (2) 2. When \(\mathcal{S}_{i}=\{u,v\}\), \[\mathsf{obj}_{X}(\gamma_{i})-\mathsf{obj}_{X}(\gamma_{i-1})=\sum_{w\in V_{n}:w\notin\{u,v\}}(\gamma_{i-1}(w)\gamma_{i-1}(u)X_{(w,u)}+\gamma_{i-1}(w)\gamma_{i-1}(v)X_{(w,v)}).\] (3) Figure 1: Example of a 1-_move_, showing edges in the cut only. For each \(i\in[\ell]\), we write \(\mathsf{imprv}_{\gamma_{0},\mathcal{S}}(i)\) to denote the improvement vector in \(\{0,\pm 1\}^{E_{n}}\) such that \[\mathsf{obj}_{X}(\gamma_{i})-\mathsf{obj}_{X}(\gamma_{i-1})=\mathsf{imprv}_{ \gamma_{0},\mathcal{S}}(i)\cdot X. \tag{4}\] Next, let \(E(\mathcal{S})\) denote the set of edges \((u,v)\in E_{n}\) such that both \(u\) and \(v\) are active in \(\mathcal{S}\). We write \(\mathsf{imprv}^{*}_{\gamma_{0},\mathcal{S}}(i)\in\{0,\pm 1\}^{E(\mathcal{S})}\) to denote the projection of \(\mathsf{imprv}_{\gamma_{0},\mathcal{S}}(i)\) on entries that correspond to edges in \(E(\mathcal{S})\). We note that \(\mathsf{imprv}^{*}_{\gamma_{0},\mathcal{S}}(i)\) only depends on the initial configuration of active nodes \(V(\mathcal{S})\) in \(\gamma_{0}\). Given a (partial) configuration \(\tau_{0}\in\{\pm 1\}^{V(\mathcal{S})}\) of \(V(\mathcal{S})\), we let \[\mathsf{imprv}_{\tau_{0},\mathcal{S}}(i):=\mathsf{imprv}^{*}_{\gamma_{0}, \mathcal{S}}(i)\in\{0,\pm 1\}^{E(\mathcal{S})},\] where \(\gamma_{0}\in\{\pm 1\}^{n}\) is an arbitrary (full) configuration that is an extension of \(\tau_{0}\). (To aid the reader we will always use \(\gamma\) to denote a full configuration and \(\tau\) to denote a partial configuration in the paper.) Note that if \(\mathcal{S}\) is a sequence of moves generated by an execution of the 2-FLIP algorithm then \(\mathcal{S}\) must be improving, because every move must increase the weight of the cut and therefore every 1- or 2-move is improving. 
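To see equations (2)-(4) in action, the following sketch (an illustration of the definitions, not code from the paper) builds the improvement vector of a single move as a sparse dictionary over edges, with absent edges standing for entry \(0\), so that its inner product with the edge weights equals the change in the cut:

```python
def improvement_vector(gamma, move):
    """Improvement vector of a 1- or 2-move from configuration gamma
    (a dict node -> +-1), following equations (2) and (3); the gain of the
    move equals sum(imprv[e] * X[e] for e in imprv), as in (4)."""
    imprv = {}
    if len(move) == 1:
        (u,) = move
        for w in gamma:
            if w != u:
                imprv[frozenset((u, w))] = gamma[w] * gamma[u]
    else:
        u, v = move
        for w in gamma:
            if w not in (u, v):
                imprv[frozenset((w, u))] = gamma[w] * gamma[u]
                imprv[frozenset((w, v))] = gamma[w] * gamma[v]
    return imprv

def apply_move(gamma, move):
    """Flip the nodes of the move in place, producing the next gamma."""
    for u in move:
        gamma[u] = -gamma[u]
```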
On the other hand, if every move in \(\mathcal{S}\) increases the cut weight by no more than \(\epsilon\) then we cannot directly guarantee that the algorithm terminates after \(poly(|\mathcal{S}|,n,1/\epsilon)\) steps. From a probabilistic perspective, in order to provide a smoothed upper bound on the running time of the 2-FLIP method, it suffices to show that the probability that every move in a long enough sequence incurs only a \(o(1/poly(n))\) improvement in our objective is exponentially small. Indeed, in an idealized scenario where the improvements of different moves of a sequence were disentangled, the event that a linear-length sequence is \(\epsilon\)-improving would have exponentially small probability. Unfortunately, going back to the 2-FLIP algorithm, there could be improving steps that are strongly correlated (as an extreme situation there could be two flips with almost the same improvement vector). Thus, as one may expect, the exponential decay of the probability holds only for linearly independent \(\mathsf{imprv}_{\tau_{0},\mathcal{S}}(\cdot)\), which makes it necessary to analyze \(\mathsf{rank}\left(\{\mathsf{imprv}_{\tau_{0},\mathcal{S}}(i)\,|\,i\in\mathcal{S}^{\prime}\}\right)\) for some carefully chosen subset \(\mathcal{S}^{\prime}\) of moves from the sequence \(\mathcal{S}\). **Corollary 2.1** ([22]).: _Let \(X_{1},...,X_{m}\) be independent real random variables and let \(f_{i}:\mathbb{R}\rightarrow[0,\phi]\) for some \(\phi>0\) denote the density of \(X_{i}\) for each \(i\in[m]\). Additionally, let \(\mathcal{C}\) be a collection of \(k\) not necessarily linearly independent integer row vectors, namely \(\mathcal{C}=\{V_{1},\cdots,V_{k}\}\). Then it holds that for any interval \(I\subset\mathbb{R}\)_ \[\Pr\left[\bigcap_{i\in[k]}\{V_{i}\cdot X\in I\}\right]\leq(\phi\,\mathsf{len}(I))^{\mathsf{rank}(\mathcal{C})}\] However, one standard issue, which typically occurs with the direct usage of the improvement vectors of a sequence's moves, is that they also depend on the initial configuration \(\gamma\) of the inactive nodes, which do not appear in the sequence \(\mathcal{S}\). Their number may be much larger than the rank of the active nodes, and thus considering all their possible initial values in a union bound will overwhelm the probability \((\phi\epsilon)^{\mathsf{rank}}\). Figure 2: Example of a _2-move_, showing edges in the cut only. For these reasons, in the literature [22, 24, 25] more complex combinatorial structures have been proposed, like pairs of (consecutive) moves of the same node. Interestingly, for the case of \(2\)-FLIP, new challenges have to be overcome due to the presence of \(2\)-moves. To overcome these hurdles, we introduce the idea of dependent cycles whose role will be revealed in the case that our sequence abounds with \(2\)-moves. ### Arcs and Cycles **Definition 2.2**.: _An arc \(\alpha\) in a move sequence \(\mathcal{S}=(\mathcal{S}_{1},\ldots,\mathcal{S}_{\ell})\) is an ordered pair \((i,j)\) with \(i<j\in[\ell]\) such that \(\mathcal{S}_{i}=\mathcal{S}_{j}=\{u\}\) for some node \(u\in V_{n}\) and for any \(i<k<j\), \(\mathcal{S}_{k}\neq\{u\}\)._ Let \(\tau_{0}\in\{\pm 1\}^{V(\mathcal{S})}\) be a configuration of active nodes in \(\mathcal{S}\), and let \(\tau_{0},\tau_{1},\ldots,\tau_{\ell}\in\{\pm 1\}^{V(\mathcal{S})}\) be the sequence of configurations induced by \(\mathcal{S}\), i.e., \(\tau_{i}\) is obtained from \(\tau_{i-1}\) by flipping nodes in \(\mathcal{S}_{i}\). 
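Definition 2.2 can be transcribed directly (a sketch; moves are tuples of nodes and indices are 0-based here, whereas the text indexes from 1):

```python
def arcs(moves):
    """All arcs (i, j): consecutive occurrences of the same 1-move {u}.
    Intervening 2-moves containing u are allowed by Definition 2.2."""
    last_single = {}          # node u -> index of its most recent 1-move
    out = []
    for j, move in enumerate(moves):
        if len(move) == 1:
            (u,) = move
            if u in last_single:
                out.append((last_single[u], j))
            last_single[u] = j
    return out

# ({1}, {2}, {1}, {1}) has arcs (0, 2) and (2, 3) for node 1.
print(arcs([(1,), (2,), (1,), (1,)]))
```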
We make the following observation: **Lemma 2.3**.: _Let \((i,j)\) be an arc of a node \(u\in V(\mathcal{S})\). For any configuration \(\gamma_{0}\in\{\pm 1\}^{n}\) that is an extension of \(\tau_{0}\), letting \(\gamma_{0},\gamma_{1},\ldots,\gamma_{\ell}\in\{\pm 1\}^{n}\) be the sequence of configurations induced by \(\mathcal{S}\) and letting \(w[u,i,j]:=\gamma_{i}(u)\cdot\mathsf{imprv}_{\gamma_{0},\mathcal{S}}(i)- \gamma_{j}(u)\cdot\mathsf{imprv}_{\gamma_{0},\mathcal{S}}(j)\), we have that_ \[(w[u,i,j]_{e})_{e\in E_{n}}=\begin{cases}(\tau_{i}(u)\cdot\mathsf{imprv}_{ \tau_{0},\mathcal{S}}(i)-\tau_{j}(u)\cdot\mathsf{imprv}_{\tau_{0},\mathcal{S} }(j))_{e}&\text{for every entry $e\in E(\mathcal{S})$,}\\ 0&\text{otherwise.}\end{cases}\] Motivated by Lemma 2.3, we define for an arc \(\alpha=(i,j)\) of a node \(u\), \[\mathsf{imprv}_{\tau_{0},\mathcal{S}}(\alpha):=\tau_{i}(u)\cdot\mathsf{imprv}_ {\tau_{0},\mathcal{S}}(i)-\tau_{j}(u)\cdot\mathsf{imprv}_{\tau_{0},\mathcal{S }}(j)\in\mathbb{Z}^{E(\mathcal{S})}. \tag{5}\] Let \(\mathsf{arcs}(\mathcal{S})\) denote the set of all arcs in \(\mathcal{S}\). We will be interested in the rank of \[Q_{\mathsf{arcs}}:=\left\{\mathsf{imprv}_{\tau_{0},\mathcal{S}}(\alpha): \alpha\in\mathsf{arcs}(\mathcal{S})\right\} \tag{6}\] It is easy to show that the rank does not depend on the choice of \(\tau_{0}\) so we will denote it by \(\mathsf{rank}_{\mathsf{arcs}}(\mathcal{S})\). **Lemma 2.4**.: _The rank of the set of vectors in (6) does not depend on the choice of \(\tau_{0}\)._ **Definition 2.5**.: _A cycle \(C\) in a move sequence \(\mathcal{S}=(\mathcal{S}_{1},\ldots,\mathcal{S}_{\ell})\) is an ordered tuple \(C=(c_{1},\ldots,c_{t})\) for some \(t\geq 2\) such that \(c_{1},\cdots,c_{t}\) are distinct, and \(\mathcal{S}_{c_{j}}=\{u_{j},u_{j+1}\}\) for all \(j\in[t-1]\) and \(\mathcal{S}_{c_{t}}=\{u_{t},u_{1}\}\) for some nodes \(u_{1},\ldots,u_{t}\in V_{n}\). (Every \(\mathcal{S}_{c_{j}}\) is a \(2\)-move. The same vertex may appear in multiple \(\mathcal{S}_{c_{j}}\)s)._ **Definition 2.6**.: _Given a configuration \(\tau_{0}\in\{\pm 1\}^{V(\mathcal{S})}\), we say a cycle \(C=(c_{1},\ldots,c_{t})\) in \(\mathcal{S}\) is dependent with respect to \(\tau_{0}\) if there exists \(b\in\{\pm 1\}^{t}\) such that_ _For all \(j\in[t-1]\) we have that \(b_{j}\cdot\tau_{c_{j}}(u_{j+1})+b_{j+1}\cdot\tau_{c_{j+1}}(u_{j+1})=0\) and \(b_{t}\cdot\tau_{c_{t}}(u_{1})+b_{1}\cdot\tau_{c_{1}}(u_{1})=0\),_ _where \(\tau_{0},\tau_{1},\ldots,\tau_{\ell}\) are configurations induced by \(\mathcal{S}\) starting from \(\tau_{0}\)._ We note that such a vector \(b\), if it exists, has the form \(b=b_{1}\cdot\big{(}1,\cdots,(-1)^{k-1}\prod_{i\in[2:k]}\tau_{c_{i-1}}(u_{i})\tau_{ c_{i}}(u_{i}),\cdots\big{)}^{\top}\) and hence it is unique if we further require \(b_{1}=1\). After eliminating the above equations, we obtain the following equivalent criterion: **Remark 2.7** (Dependence Criterion).: _A cycle \(C\) is dependent \(\Leftrightarrow(-1)^{t}=\tau_{c_{t}}(u_{1})\tau_{c_{1}}(u_{1})\cdot\prod_{i=2}^{t} \tau_{c_{i-1}}(u_{i})\tau_{c_{i}}(u_{i})\)._ We will refer to the unique vector \(b\in\{\pm 1\}^{t}\) as the _cancellation vector_ of \(C\). Notice that whether a cycle \(C\) in \(\mathcal{S}\) is dependent or not actually does not depend on the choice of \(\tau_{0}\). 
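Remark 2.7 makes dependence easy to test. A sketch follows, with the cycle given by its node sequence \(u_{1},\ldots,u_{t}\) and the configurations \(\tau_{c_{j}}\) supplied as dictionaries (0-based indexing, so `taus[j]` is the configuration reached right after move \(c_{j+1}\) in the text's 1-based notation):

```python
def is_dependent(cycle_nodes, taus):
    """Check the Dependence Criterion of Remark 2.7.
    cycle_nodes = [u_1, ..., u_t], where move c_j flips {u_j, u_{j+1}}
    (indices mod t); taus[j] is tau_{c_j} as a dict node -> +-1."""
    t = len(cycle_nodes)
    prod = taus[t - 1][cycle_nodes[0]] * taus[0][cycle_nodes[0]]
    for i in range(1, t):
        prod *= taus[i - 1][cycle_nodes[i]] * taus[i][cycle_nodes[i]]
    return prod == (-1) ** t
```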
**Lemma 2.8**.: _If a cycle \(C\) of \(\mathcal{S}\) is dependent with respect to some \(\tau_{0}\in\{\pm 1\}^{V(\mathcal{S})}\) using \(b\) as its cancellation vector, then it is dependent with respect to every configuration \(\tau_{0}^{\prime}\in\{\pm 1\}^{V(\mathcal{S})}\) using the same \(b\) as its cancellation vector._ As a result, we can refer to cycles of \(\mathcal{S}\) as dependent cycles without specifying a configuration \(\tau_{0}\); the same holds for cancellation vectors. Next we prove a lemma that is similar to Lemma 2.3 for arcs: **Lemma 2.9**.: _Let \(C=(c_{1},\ldots,c_{t})\) be a dependent cycle of \(\mathcal{S}\) and let \(b\) be its cancellation vector. Then for any configurations \(\tau_{0}\in\{\pm 1\}^{V(\mathcal{S})}\) and \(\gamma_{0}\in\{\pm 1\}^{n}\) such that \(\gamma_{0}\) is an extension of \(\tau_{0}\), letting \(w[C]:=\sum_{j\in[t]}b_{j}\cdot\mathsf{imprv}_{\gamma_{0},\mathcal{S}}(c_{j})\), we have that_ \[(w[C]_{e})_{e\in E_{n}}=\begin{cases}(\sum_{j\in[t]}b_{j}\cdot\mathsf{imprv}_{ \tau_{0},\mathcal{S}}(c_{j}))_{e}&\text{ for every entry }e\in E(\mathcal{S}),\\ 0&\text{ otherwise.}\end{cases}\] Given \(\tau_{0}\in\{\pm 1\}^{V(\mathcal{S})}\) and a dependent cycle \(C\) of \(\mathcal{S}\) with \(b\) as its cancellation vector, we define \[\mathsf{imprv}_{\tau_{0},\mathcal{S}}(C):=\sum_{j\in[t]}b_{j}\cdot\mathsf{imprv }_{\tau_{0},\mathcal{S}}(c_{j}). \tag{7}\] Let \(\mathsf{cycles}(\mathcal{S})\) denote the set of all dependent cycles in \(\mathcal{S}\). We will be interested in the rank of \[Q_{\mathsf{cycles}}:=\big{\{}\mathsf{imprv}_{\tau_{0},\mathcal{S}}(C):C\in \mathsf{cycles}(\mathcal{S})\big{\}} \tag{8}\] Similarly we note that the rank does not depend on the choice of \(\tau_{0}\) so we denote it by \(\mathsf{rank}_{\mathsf{cycles}}(\mathcal{S})\). **Lemma 2.10**.: _The rank of the set of vectors in (8) does not depend on the choice of \(\tau_{0}\)._ For the sake of readability we defer the proofs of initial configuration invariance for the rank of improvement vectors of arcs and cycles to Appendix B. Having defined the sets of \(\mathsf{arcs}(\mathcal{S})\) and \(\mathsf{cycles}(\mathcal{S})\), we conclude this section by showing that for a fixed parameter \(\epsilon>0\), a move sequence \(\mathcal{S}\) and an initial configuration \(\tau_{0}\in\{\pm 1\}^{V(\mathcal{S})}\), if either \(\mathsf{rank}_{\mathsf{arcs}}(\mathcal{S})\) or \(\mathsf{rank}_{\mathsf{cycles}}(\mathcal{S})\) is high, then most likely (over \(X\sim\mathcal{X}\)) (\(\gamma_{0},\mathcal{S}\)) is not \(\epsilon\)-improving for every \(\gamma_{0}\in\{\pm 1\}^{n}\) that is an extension of \(\tau_{0}\). **Lemma 2.11**.: _Let \(\epsilon>0\). With probability at least_ \[1-\big{(}2\mathsf{len}(\mathcal{S})\cdot\phi\epsilon\big{)}^{\max\big{(} \mathsf{rank}_{\mathsf{arcs}}(\mathcal{S}),\,\mathsf{rank}_{\mathsf{cycles}}( \mathcal{S})\big{)}}\] _over \(X\sim\mathcal{X}\), we have that \((\gamma_{0},\mathcal{S})\) is not \(\epsilon\)-improving for every \(\gamma_{0}\in\{\pm 1\}^{n}\) that is an extension of \(\tau_{0}\)._ Proof.: Let \(\mathcal{E}_{moves}\) be the event of a given \((\gamma_{0},\mathcal{S})\) being \(\epsilon\)-improving with respect to edge weights \(X\), for some fixed \(\epsilon>0\): \[\mathcal{E}_{moves}:\left\{\mathsf{imprv}_{\gamma_{0},\mathcal{S}}(i)\cdot X \in(0,\epsilon],\quad\text{for all }i\in[\ell]\right\}\] where \(\mathsf{imprv}_{\gamma_{0},\mathcal{S}}(i)\) corresponds to the improvement vector of the move \(\mathcal{S}_{i}\) (see (2), (3)). 
Now, notice that the improvement vector of an arc (see (5)) or of a dependent cycle (see (7)) can be written as a \(\{-1,0,1\}\)-weighted sum of the improvement vectors of the 1- and 2-moves in \(\mathcal{S}\). Thus, we define the corresponding event for cycles and arcs for a given sequence \((\gamma_{0},\mathcal{S})\) with respect to edge weights \(X\): \[\mathcal{E}_{arcs/cycles}:\left\{\mathsf{imprv}_{\gamma_{0},\mathcal{S}}( \beta)\cdot X\in[-\mathsf{len}(\mathcal{S})\epsilon,\mathsf{len}(\mathcal{S}) \epsilon],\quad\text{for any }\beta\in\mathsf{arcs}(\mathcal{S})/\mathsf{cycles}( \mathcal{S})\right\}\] So it is easy to see that \(\mathcal{E}_{moves}\) implies \(\mathcal{E}_{arcs/cycles}\), or equivalently \(\Pr[\mathcal{E}_{moves}]\leq\min\{\Pr[\mathcal{E}_{arcs}],\Pr[\mathcal{E}_{ cycles}]\}\). Thus, by leveraging Corollary 2.1 for vectors in \(Q_{\mathsf{arcs}}\) and \(Q_{\mathsf{cycles}}\), we get that: \[\Pr[(\gamma_{0},\mathcal{S})\text{ being an $\epsilon$-improving sequence}]\leq(2\mathsf{len}(\mathcal{S})\cdot\phi \epsilon)^{\max\big{(}\mathsf{rank}_{\mathsf{arcs}}(\mathcal{S}),\,\mathsf{rank}_{\mathsf{cycles}}(\mathcal{S})\big{)}}\] This finishes the proof of the lemma. ## 3 Main Lemma and the Proof of Theorem 1.2 and Theorem 1.1 We start with the definition of _valid_ move sequences: **Definition 3.1**.: _We say a move sequence \(\mathcal{S}=(\mathcal{S}_{1},\ldots,\mathcal{S}_{\ell})\) is valid if it satisfies the following property: For every \(i<j\in[\ell]\), at least one node \(w\notin\mathcal{S}_{i}\) appears an odd number of times in \(\mathcal{S}_{i},\ldots,\mathcal{S}_{j}\)._ **Lemma 3.2**.: _The move sequence generated by 2-FLIP (or by pure 2-FLIP), for any pivoting rule and any instance, is valid._ Proof.: Let \(\mathcal{S}\) be a move sequence generated by 2-FLIP (or pure 2-FLIP). If there are two moves \(\mathcal{S}_{i},\mathcal{S}_{j}\), \(i\leq j\), such that no node appears an odd number of times in \(\mathcal{S}_{i},\ldots,\mathcal{S}_{j}\), then the configurations before \(\mathcal{S}_{i}\) and after \(\mathcal{S}_{j}\) are the same, contradicting the fact that all the moves increase the weight of the cut. Therefore, the set \(O\) of nodes that appear an odd number of times in \(\mathcal{S}_{i},\ldots,\mathcal{S}_{j}\) is nonempty. Suppose that \(i<j\) and \(O\subseteq\mathcal{S}_{i}\). If \(O=\mathcal{S}_{i}\), then the set of nodes that appear an odd number of times in \(\mathcal{S}_{i+1},\ldots,\mathcal{S}_{j}\) would be empty, a contradiction to the above property. Therefore, \(O\neq\mathcal{S}_{i}\). In the case of pure 2-FLIP, since all moves are 2-flips, \(O\) has even size, and hence \(O\neq\emptyset\) and \(O\neq\mathcal{S}_{i}\) imply the claim. In the case of 2-FLIP, \(O\neq\emptyset\), \(O\neq\mathcal{S}_{i}\) and \(O\subseteq\mathcal{S}_{i}\) imply that \(\mathcal{S}_{i}\) has size 2, say \(\mathcal{S}_{i}=\{u,v\}\) and \(O=\{u\}\) or \(O=\{v\}\). If \(O=\{u\}\) then the configuration \(\gamma_{j}\) differs from \(\gamma_{i-1}\) only in that node \(u\) is flipped. Thus, at configuration \(\gamma_{i-1}\), flipping node \(u\) results in configuration \(\gamma_{j}\) which has strictly greater cut than the configuration \(\gamma_{i}\) that results by flipping the pair \(\{u,v\}\), contradicting our assumption about 2-FLIP. A similar argument holds if \(O=\{v\}\). In either case we have a contradiction to \(O\subseteq\mathcal{S}_{i}\). The claim follows. 
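Definition 3.1 can also be checked directly; the following quadratic-time sketch (for small sequences only, not part of the analysis) represents moves as tuples of nodes:

```python
from collections import Counter

def is_valid(moves):
    """Definition 3.1: for every i < j, some node not in moves[i] appears
    an odd number of times in moves[i..j]."""
    ell = len(moves)
    for i in range(ell):
        counts = Counter()
        for j in range(i, ell):
            counts.update(moves[j])
            if j > i and not any(c % 2 == 1 and u not in moves[i]
                                 for u, c in counts.items()):
                return False
    return True

# ({1,2}, {1,2}) is invalid: no node appears an odd number of times.
print(is_valid([(1, 2), (1, 2)]), is_valid([(1, 2), (2, 3)]))
```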
Given a move sequence \(\mathcal{S}\), a _window_\(W\) of \(\mathcal{S}\) is a substring of \(\mathcal{S}\), i.e., \(W=(\mathcal{S}_{i},\ldots,\mathcal{S}_{j})\) for some \(i<j\in[\ell]\) (so \(W\) itself is also a move sequence). Our main technical lemma below shows that every long enough valid move sequence has a window \(W\) such that either \(\mathsf{rank}_{\mathsf{arcs}}(W)\) or \(\mathsf{rank}_{\mathsf{cycles}}(W)\) is large relative to \(\mathsf{len}(W)\). **Lemma 3.3**.: _Let \(\mathcal{S}\) be a valid move sequence with \(\mathsf{len}(\mathcal{S})\geq n\log^{10}n\). Then \(\mathcal{S}\) has a window \(W\) such that_ \[\max\left(\mathsf{rank}_{\mathsf{arcs}}(W),\mathsf{rank}_{\mathsf{cycles}}(W )\right)\geq\Omega\left(\frac{\mathsf{len}(W)}{\log^{10}n}\right). \tag{9}\] We prove Lemma 3.3 when \(\mathcal{S}\) consists of \(2\)-moves only in Sections 4 and 5, and then generalize the proof to work with general move sequences in Section 6. Assuming Lemma 3.3, we use it to establish our main theorem, restated below: **Theorem 1.2**.: _Let \(\mathcal{X}=(\mathcal{X}_{e}:e\in E_{n})\) be distributions of edge weights such that each \(\mathcal{X}_{e}\) is supported on \([-1,1]\) and has its density function bounded from above by a parameter \(\phi>0\). Then with probability at least \(1-o_{n}(1)\) over the draw of edge weights \(X\sim\mathcal{X}\), any implementation of the \(2\)-FLIP algorithm takes at most \(\phi n^{O(\log^{10}n)}\) steps to terminate._ Proof.: Let \(\epsilon>0\) be specified as follows: \[\epsilon:=\frac{1}{\phi n^{c_{1}\log^{10}n}}\] for some large enough constant \(c_{1}>0\) to be specified later. We write \(\mathcal{F}\) to denote the following event on the draw of edge weights \(X\sim\mathcal{X}\): Event \(\mathcal{F}\): For every move sequence \(W\) of length at most \(n\log^{10}n\) such that (letting \(a>0\) be the constant hidden in (9)) \[\max\left(\mathsf{rank}_{\mathsf{arcs}}(W),\mathsf{rank}_{\mathsf{cycles}}(W )\right)\geq\frac{a}{\log^{10}n}\cdot\mathsf{len}(W). \tag{10}\] and every configuration \(\gamma_{0}\in\{\pm 1\}^{n}\), \((\gamma_{0},W)\) is not \(\epsilon\)-improving with respect to \(X\). We break the proof of the theorem into two steps. First we show that \(\mathcal{F}\) occurs with probability at least \(1-o_{n}(1)\). Next we show that when \(\mathcal{F}\) occurs, any implementation of \(2\)-FLIP must terminate in at most \(\phi n^{O(\log^{10}n)}\) many steps. For the first step, we apply Lemma 2.11 on every move sequence \(W\) of length at most \(n\log^{10}n\) that satisfies (10) and every configuration \(\tau_{0}\in\{\pm 1\}^{V(W)}\). It then follows from a union bound that \(\mathcal{F}\) occurs with probability at least \[1-\sum_{\ell\in[n\log^{10}n]}n^{2\ell}\cdot 2^{2\ell}\cdot(\ell\phi\epsilon)^{\frac{a\ell}{\log^{10}n}}=1-\sum_{\ell\in[n\log^{10}n]}\left((2n)^{\frac{2\log^{10}n}{a}}\cdot\ell\phi\epsilon\right)^{\frac{a\ell}{\log^{10}n}}=1-o_{n}(1),\] where the factor \(n^{2\ell}\) is an upper bound for the number of \(W\) of length \(\ell\) and \(2^{2\ell}\) is an upper bound for the number of configurations \(\tau_{0}\) since \(|V(W)|\leq 2\ell\). The last equation follows by setting the constant \(c_{1}\) sufficiently large. For the second step, we assume that the event \(\mathcal{F}\) occurs, and let \(\gamma_{0},\ldots,\gamma_{N}\in\{\pm 1\}^{n}\) be a sequence of \(N\) configurations that is the result of the execution of some implementation of \(2\)-FLIP under \(X\). 
Let \(\mathcal{S}=(\mathcal{S}_{1},\ldots,\mathcal{S}_{N})\) denote the move sequence induced by \(\gamma_{0},\ldots,\gamma_{N}\). So \((\gamma_{0},\mathcal{S})\) is improving with respect to edge weights \(X\). By Lemma 3.2, \(\mathcal{S}\) is a valid sequence. We use the event \(\mathcal{F}\) to bound the length of \(\mathcal{S}\). Because of \(\mathcal{F}\) and the fact that \(\mathcal{S}\) is a valid move sequence, it follows from Lemma 3.3 that the objective function improves by at least \(\epsilon\) over every \(n\log^{10}n\) consecutive moves in \(\mathcal{S}\). Given that the objective function lies in \([-n^{2},n^{2}]\), we have

\[\mathsf{len}(\mathcal{S})\leq n\log^{10}n\cdot\frac{2n^{2}}{\epsilon}\leq\phi n^{O(\log^{10}n)}. \tag{11}\]

This finishes the proof of the theorem.

**Corollary 3.4**.: _Under the same setting as Theorem 1.2, the same result holds for Pure 2-FLIP._

Proof.: The only property of the sequence of moves used in the proof of Theorem 1.2 is that it is a valid sequence, and this property holds for Pure 2-FLIP as well.

Notice that by tuning the constant \(c_{1}\) in the exponent, we can control the tail bound on the failure probability. Thus, we can strengthen our proof to get the same bound for the expected number of steps needed to terminate, as in the standard smoothed analysis paradigm:

**Corollary 3.5**.: _Under the same setting as Theorem 1.2, any implementation of the 2-FLIP algorithm (or Pure 2-FLIP) takes at most \(\phi n^{O(\log^{10}n)}\) many steps to terminate in expectation._

Proof.: We let \(\mathcal{F}_{\epsilon}\) denote the event \(\mathcal{F}\) in the proof of Theorem 1.2 with a specified \(\epsilon>0\). Let \(\epsilon_{0}=1/\left(\phi\cdot n^{c_{1}\log^{10}n}\right)\), where \(c_{1}>0\) is a constant to be fixed shortly. For any \(\epsilon<\epsilon_{0}\), writing \(\ell\phi\epsilon=(\ell\phi\epsilon_{0})\cdot(\epsilon/\epsilon_{0})\) in each term, we have that

\[\Pr[\neg\mathcal{F}_{\epsilon}]\leq\sum_{\ell\in[n\log^{10}n]}n^{2\ell}\cdot 2^{2\ell}\cdot(\ell\phi\epsilon)^{\lceil\frac{a\ell}{\log^{10}n}\rceil}=\sum_{\ell\in[n\log^{10}n]}n^{2\ell}\cdot 2^{2\ell}\cdot(\ell\phi\epsilon_{0})^{\lceil\frac{a\ell}{\log^{10}n}\rceil}\cdot\left(\frac{\epsilon}{\epsilon_{0}}\right)^{\lceil\frac{a\ell}{\log^{10}n}\rceil}\leq\sum_{\ell\in[n\log^{10}n]}\frac{1}{n^{3}}\left(\frac{\epsilon}{\epsilon_{0}}\right)^{\lceil\frac{a\ell}{\log^{10}n}\rceil}\leq\frac{\epsilon}{c_{2}\,n\,\epsilon_{0}},\]

where \(c_{1}=2+6/a\) (letting \(a>0\) be the constant hidden in (9)) and \(c_{2}=10^{8}\). From the proof of Theorem 1.2, conditioned on the event \(\mathcal{F}_{\epsilon}\) for any \(\epsilon\leq\epsilon_{0}\), we have \(\mathsf{len}(\mathcal{S})\leq L(\epsilon):=\frac{2n^{3}\log^{10}n}{\epsilon}\). Notice that for \(\epsilon(\rho):=\epsilon_{0}/\rho\) we have \(L(\epsilon(\rho))=\rho L(\epsilon_{0})\). Thus, the probability that \(\mathsf{len}(\mathcal{S})\) is larger than \(\rho L(\epsilon_{0})\), for any \(\rho\geq 1\), is

\[\Pr\big[\mathsf{len}(\mathcal{S})>\rho L(\epsilon_{0})\big]\leq\Pr\big[\neg\mathcal{F}_{\epsilon_{0}/\rho}\big]\leq\frac{1/\rho}{c_{2}\cdot n}.\]

Note that \(\mathsf{len}(\mathcal{S})\) is always trivially bounded by the total number of configurations, \(2^{n}\).
Therefore, we have

\[\mathbb{E}[\mathsf{len}(\mathcal{S})]=\sum_{s=1}^{2^{n}}\Pr[\mathsf{len}(\mathcal{S})\geq s]\leq L(\epsilon_{0})+\sum_{s=\lceil L(\epsilon_{0})\rceil}^{2^{n}}\Pr[\mathsf{len}(\mathcal{S})\geq s]=L(\epsilon_{0})+\sum_{s=\lceil L(\epsilon_{0})\rceil}^{2^{n}}\Pr\Big[\mathsf{len}(\mathcal{S})\geq\tfrac{s}{L(\epsilon_{0})}\cdot L(\epsilon_{0})\Big]\leq L(\epsilon_{0})+\sum_{s=\lceil L(\epsilon_{0})\rceil}^{2^{n}}\frac{L(\epsilon_{0})/s}{c_{2}\,n}=O(n)\cdot L(\epsilon_{0})=\phi n^{O(\log^{10}n)}.\]

This finishes the proof of Corollary 3.5.

The same results hold for the Graph Partitioning problem and the SWAP neighborhood.

**Theorem 1.1**.: _Let \(\mathcal{X}=(\mathcal{X}_{e}:e\in E_{2n})\) be distributions of edge weights such that each \(\mathcal{X}_{e}\) is supported on \([-1,1]\) and has its density function bounded from above by a parameter \(\phi>0\). Then with probability at least \(1-o_{n}(1)\) over the draw of edge weights \(X\sim\mathcal{X}\), any implementation of SWAP takes at most \(\phi n^{O(\log^{10}n)}\) steps to terminate._

Proof.: Every move sequence \(\mathcal{S}\) generated by SWAP (for any pivoting rule, any weights, and any initial balanced partition) is also a legal move sequence for Pure 2-FLIP on the same instance, except that the sequence may be incomplete for Pure 2-FLIP; that is, the final partition may not be locally optimal for Pure 2-FLIP, since there may be a 2-move (but not a swap) that improves the weight of the cut (the resulting partition would not be balanced), and Pure 2-FLIP would continue and produce a longer sequence. Hence, the number of steps of SWAP is upper bounded by the number of steps of Pure 2-FLIP, and thus it is at most \(\phi n^{O(\log^{10}n)}\) with probability \(1-o_{n}(1)\), as well as in expectation.

## 4 Windows in a Valid Sequence of 2-Moves

We will start with the proof of Lemma 3.3 for the case when \(\mathcal{S}\) consists of 2-moves only in Sections 4 and 5, and generalize it to deal with general move sequences in Section 6.

We start with a combinatorial argument about sets and subsequences of \([N]\), where \(N=\mathrm{poly}(n)\) is polynomially bounded in \(n\). Let \(I\) be a subset of \([N]\) with \(|I|\geq\log^{10}n\). Intuitively, later in this section \(I\) will be chosen to be the set \(I_{u}\) recording the appearances of some frequently appearing active node \(u\in V(\mathcal{S})\) in a move sequence \(\mathcal{S}\). We will write \(\mathsf{order}(i)\) to denote the order of \(i\in I\). In other words, the smallest index in \(I\) has order \(1\) and the largest index in \(I\) has order \(|I|\). To give an example, if \(I=\{2,5,9,11\}\) then \(\mathsf{order}(2)=1\), \(\mathsf{order}(5)=2\), and so on. Let \(\delta=0.01\). We start by quantifying how large a window centered around an index \(i\in I\) should be in order to cover a prescribed portion of the set \(I\). Afterwards, we present the combinatorial lemmas about the subset \(I\).

**Definition 4.1**.: _Let \(I\subseteq[N]\). We say an index \(i\in I\) is \(\ell\)-good for some positive integer \(\ell\) if_

\[\left[i-\lceil(1+2\delta)L^{\prime}\rceil:i+\lceil(1+2\delta)L^{\prime}\rceil\right]\subseteq[N],\quad\text{where }L^{\prime}=\lceil(1+\delta)^{\ell-1}\rceil\]

_and \(I\) satisfies_

\[\left|I\cap[i-L^{\prime}:i+L^{\prime}]\right|\geq\log^{3}n\quad\text{and}\quad\left|I\cap\left[i-\lceil(1+2\delta)L^{\prime}\rceil:i+\lceil(1+2\delta)L^{\prime}\rceil\right]\right|\leq\log^{7}n. \tag{12}\]
_If no such positive integer \(\ell\) exists, we call the index \(i\) bad._

Figure 3: Two examples of windows whose intersection with \(I\) has size in \([\log^{3}n,\log^{7}n]\).

**Remark 4.2**.: _Some motivation behind Definition 4.1: when \(i\in I\) is \(\ell\)-good (letting \(L=L^{\prime}+\lceil(1+2\delta)L^{\prime}\rceil+1\) and \(L^{\prime}=\lceil(1+\delta)^{\ell-1}\rceil\)), all of the \(\lceil(1+2\delta)L^{\prime}\rceil-L^{\prime}\geq 2\delta L^{\prime}=\Omega(L)\) windows \(W\) of length \(L\), i.e., those starting at \(i-\lceil(1+2\delta)L^{\prime}\rceil,\ldots,i-L^{\prime}\), satisfy_

\[i\in W,\quad|I\cap W|\geq\log^{3}n\quad\text{and}\quad|I\cap W|\leq\log^{7}n.\]

**Remark 4.3**.: _By Definition 4.1, \(\ell\) can be at most \(\log_{1+\delta}N=\Theta(\log n)\), for any \(N=\mathrm{poly}(n)\)._

**Lemma 4.4**.: _Suppose \(I\) is a subset of \([N]\) with \(|I|\geq\log^{8}n\). Then at least a \((1-O(1/\log n))\)-fraction of the indices \(i\in I\) are \(\ell_{i}\)-good for some nonnegative integer \(\ell_{i}\)._

Proof.: We start by defining an \(\ell_{i}\) for each \(i\in I\) (except for the smallest \(\lceil\log^{7}n\rceil\) indices and the largest \(\lceil\log^{7}n\rceil\) indices in \(I\), which are negligible since \(|I|\geq\log^{8}n\)) and then show that most \(i\in I\) are \(\ell_{i}\)-good. Let \(I^{\prime}\) be the subset of \(I\) after removing the smallest \(\lceil\log^{7}n\rceil\) indices and the largest \(\lceil\log^{7}n\rceil\) indices in \(I\). For each \(i\in I^{\prime}\), let

* \(j\in I^{\prime}\) be the index in \(I^{\prime}\) of order \(\textsf{order}(i)-\lfloor\log^{7}n/2\rfloor+1\),
* \(k\in I^{\prime}\) be the index in \(I^{\prime}\) of order \(\textsf{order}(i)+\lfloor\log^{7}n/2\rfloor-1\),
* \(\Delta\) be the minimum distance between index \(i\) and indices \(j,k\), i.e., \(\Delta=\min(i-j,k-i)\), and
* \(\ell_{i}\) be the largest integer such that \(\lceil(1+2\delta)\cdot(1+\delta)^{\ell_{i}-1}\rceil\leq\Delta-2\).

Using the fact that for any positive real number \(x\) it holds that \(0\leq\lceil(1+2\delta)\cdot\lceil x\rceil\rceil-\lceil(1+2\delta)\cdot x\rceil\leq 2\), we get that

\[\begin{cases}\lceil(1+2\delta)\cdot\lceil(1+\delta)^{\ell_{i}-1}\rceil\rceil\leq\Delta\\ \lceil(1+2\delta)\cdot(1+\delta)^{\ell_{i}}\rceil>\Delta-2.\end{cases}\]

For the rest of the proof, let \(L^{\prime}_{i}=\lceil(1+\delta)^{\ell_{i}-1}\rceil\). It follows from the choice of \(\ell_{i}\) that

\[|I\cap[i-\lceil(1+2\delta)L^{\prime}_{i}\rceil:i+\lceil(1+2\delta)L^{\prime}_{i}\rceil]|\leq 2(\lfloor\log^{7}n/2\rfloor-1)+1\leq\log^{7}n \tag{13}\]
\[|I\cap[i-\lceil(1+\delta)(1+2\delta)L^{\prime}_{i}\rceil:i+\lceil(1+\delta)(1+2\delta)L^{\prime}_{i}\rceil]|\geq(\lfloor\log^{7}n/2\rfloor-1)+1-4 \tag{14}\]

For (14), we use the observation that the left-hand side is at least \(|I\cap[i-(\Delta-2):i+(\Delta-2)]|\geq|I\cap[i-\Delta:i+\Delta]|-4\). Using \((1+\delta)(1+2\delta)\leq 1+4\delta\) with \(\delta=0.01\), the second inequality implies

\[|I\cap[i-\lceil(1+4\delta)L^{\prime}_{i}\rceil:i+\lceil(1+4\delta)L^{\prime}_{i}\rceil]|\geq\lfloor\log^{7}n/2\rfloor-4. \tag{15}\]

On the other hand, (13) implies that \(i\in I^{\prime}\) is \(\ell_{i}\)-good unless

\[|I\cap[i-L^{\prime}_{i}:i+L^{\prime}_{i}]|\leq\log^{3}n. \tag{16}\]

Assume now, for the sake of contradiction, that the number of \(i\in I^{\prime}\) that are bad is at least \(|I|/\log n\).
Additionally, for each possible exponent \(\ell\in[\log_{1+\delta}N]\), let \(\mathcal{R}_{\ell}\) be the set of bad indices whose associated exponent equals \(\ell\):

\[\mathcal{R}_{\ell}:=\{i\in I^{\prime}\ \text{s.t.}\ i\text{ is bad and }\ell_{i}=\ell\}\quad\text{and}\quad\ell^{*}=\operatorname*{argmax}_{\ell\in[\log_{1+\delta}N]}|\mathcal{R}_{\ell}|.\]

Since the bad indices are partitioned among the sets \(\mathcal{R}_{\ell}\), it holds that

\[|\mathcal{R}_{\ell^{*}}|\geq\frac{|I|/\log n}{\log_{1+\delta}N}=\Omega(|I|/\log^{2}n),\]

where we use the facts that \(|I|\geq\log^{8}n\) and \(N=\mathrm{poly}(n)\). We then define \(L^{*}=\lceil(1+\delta)^{\ell^{*}-1}\rceil\), and for each \(\rho\in\mathcal{R}_{\ell^{*}}\) we let

\[B_{\rho}=\big(I\cap[\rho-\lceil(1+4\delta)L^{*}\rceil:\rho-L^{*}]\big)\cup\big(I\cap[\rho+L^{*}:\rho+\lceil(1+4\delta)L^{*}\rceil]\big).\]

Note now that when an index \(i\) is bad, we have from (16) and (15) that

\[\Big|\big(I\cap[i-\lceil(1+4\delta)L^{\prime}_{i}\rceil:i-L^{\prime}_{i}]\big)\cup\big(I\cap[i+L^{\prime}_{i}:i+\lceil(1+4\delta)L^{\prime}_{i}\rceil]\big)\Big|\geq\lfloor\log^{7}n/2\rfloor-4-\log^{3}n=\Omega(\log^{7}n).\]

Since \(L^{\prime}_{\rho}=L^{*}\) for every \(\rho\in\mathcal{R}_{\ell^{*}}\), we have that \(|B_{\rho}|\geq\Omega(\log^{7}n)\) for every \(\rho\in\mathcal{R}_{\ell^{*}}\), and thus

\[\sum_{\rho\in\mathcal{R}_{\ell^{*}}}|B_{\rho}|\geq\Omega\left(\frac{|I|}{\log^{2}n}\right)\cdot\Omega(\log^{7}n)=\Omega\big(|I|\log^{5}n\big).\]

On the other hand, we can prove the following claim:

**Claim 4.5**.: _For any \(i\in I\), the number of \(\rho\in\mathcal{R}_{\ell^{*}}\) such that \(i\in B_{\rho}\) is at most \(O(\log^{3}n)\)._

It then follows from the claim that, for any \(i\in I\), \(|\{\rho\in\mathcal{R}_{\ell^{*}}:i\in B_{\rho}\}|=O(\log^{3}n)\), and hence

\[\sum_{\rho\in\mathcal{R}_{\ell^{*}}}|B_{\rho}|\leq|I|\cdot O(\log^{3}n),\]

which leads to a contradiction.

Proof of Claim 4.5.: Fix any \(i\in I\). We prove that the number of \(\rho\in\mathcal{R}_{\ell^{*}}\) with \(\rho>i\) and \(i\in B_{\rho}\) is at most \(O(\log^{3}n)\); the case \(\rho<i\) is symmetric. If no such \(\rho\) exists, the claim is trivially true, so assume that one exists. Given that \(\rho>i\) and \(i\in B_{\rho}\), we have \(i\in[\rho-\lceil(1+4\delta)L^{*}\rceil:\rho-L^{*}]\), and since \(\rho\) is bad with \(L^{\prime}_{\rho}=L^{*}\), (16) gives

\[|I\cap[\rho-L^{*}:\rho+L^{*}]|\leq\log^{3}n. \tag{17}\]

On the other hand, every other \(\rho^{\prime}\in\mathcal{R}_{\ell^{*}}\) that satisfies \(\rho^{\prime}>i\) and \(i\in B_{\rho^{\prime}}\) also has the property that \(i\in[\rho^{\prime}-\lceil(1+4\delta)L^{*}\rceil:\rho^{\prime}-L^{*}]\), and thus \(\rho^{\prime}\in[i+L^{*}:i+\lceil(1+4\delta)L^{*}\rceil]\). Combining this with \(i\in[\rho-\lceil(1+4\delta)L^{*}\rceil:\rho-L^{*}]\), we have

\[\rho^{\prime}\leq i+\lceil(1+4\delta)L^{*}\rceil\leq\rho+\lceil(1+4\delta)L^{*}\rceil-L^{*}\]
\[\rho^{\prime}\geq i+L^{*}\geq\rho-\lceil(1+4\delta)L^{*}\rceil+L^{*}.\]

So \(\rho^{\prime}\in[\rho-\lceil(1+4\delta)L^{*}\rceil+L^{*}:\rho+\lceil(1+4\delta)L^{*}\rceil-L^{*}]\subseteq[\rho-L^{*}:\rho+L^{*}]\), and by (17) the number of such \(\rho^{\prime}\) is no more than \(\log^{3}n\).

Now we return to our problem and an arbitrary move sequence \(\mathcal{S}=(\mathcal{S}_{1},\ldots,\mathcal{S}_{N})\). Let \(W\) be a window (move sequence) of \(\mathcal{S}\). For each active node \(u\in V(W)\), we write \(\#_{W}(u)\) to denote the number of occurrences of \(u\) in \(W\).
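Definition 4.1 and Remark 4.3 can also be made concrete with a short sketch. The following brute-force checker uses our own illustrative names, with `logn` passed explicitly (the base of the logarithm is not fixed by the analysis and only shifts constants); it tests whether an index is \(\ell\)-good, and whether it is good for some \(\ell\):

```python
import math

def is_ell_good(I, i, ell, N, logn, delta=0.01):
    """Brute-force check of Definition 4.1 for an index i of I, I a subset of [1..N]."""
    Lp = math.ceil((1 + delta) ** (ell - 1))      # L' = ceil((1+delta)^(ell-1))
    R = math.ceil((1 + 2 * delta) * Lp)           # outer radius ceil((1+2*delta)*L')
    if i - R < 1 or i + R > N:                    # the outer window must fit inside [N]
        return False
    inner = sum(1 for j in I if i - Lp <= j <= i + Lp)
    outer = sum(1 for j in I if i - R <= j <= i + R)
    return inner >= logn ** 3 and outer <= logn ** 7   # condition (12)

def is_good(I, i, N, logn, delta=0.01):
    """i is good iff it is ell-good for some ell; by Remark 4.3 the search
    up to log_{1+delta} N is exhaustive."""
    max_ell = math.ceil(math.log(N, 1 + delta))
    return any(is_ell_good(I, i, ell, N, logn, delta)
               for ell in range(1, max_ell + 1))
```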
The main result in this section is the following lemma:

**Lemma 4.6**.: _Let \(\mathcal{S}\) be a move sequence of length \(N=n\log^{10}n\) that consists of \(2\)-moves only. There exists a positive integer \(L\) such that \(\mathcal{S}\) has at least \(\Omega((N-L+1)/\log n)\) many windows \(W=(W_{1},\ldots,W_{L})\) of length \(L\) such that at least \(\Omega(L/\log n)\) moves \(W_{i}=\{u,v\}\) of \(W\) satisfy_

\[\log^{3}n\leq\#_{W}(u)\leq\log^{7}n\quad\text{and}\quad\#_{W}(v)\geq\log^{3}n \tag{18}\]

Proof.: For each node \(u\in V(\mathcal{S})\), we write \(I_{u}\subseteq[N]\) to denote the set of \(i\in[N]\) with \(u\in\mathcal{S}_{i}\). We say the \(i\)-th move \(\mathcal{S}_{i}=\{u,v\}\) is \(\ell\)-good for some positive integer \(\ell\) if \(i\) is \(\ell_{1}\)-good in \(I_{u}\) and \(i\) is \(\ell_{2}\)-good in \(I_{v}\) for some positive integers \(\ell_{1},\ell_{2}\) such that \(\ell=\max(\ell_{1},\ell_{2})\). Let \(\mathcal{S}_{i}=\{u,v\}\). Then we consider the following cases:

1. Either \(|I_{u}|\) or \(|I_{v}|\) is smaller than \(\log^{8}n\): Given that no more than \(n\log^{8}n\) moves can contain a vertex that appears fewer than \(\log^{8}n\) times in the sequence, the number of such \(i\) is at most \[n\log^{8}n=o(N/\log n);\]
2. \(|I_{u}|,|I_{v}|\geq\log^{8}n\) but either \(i\) is not \(\ell_{1}\)-good in \(I_{u}\) for any \(\ell_{1}\), or \(i\) is not \(\ell_{2}\)-good in \(I_{v}\) for any \(\ell_{2}\): By Lemma 4.4, the number of such \(i\) is at most (using \(\sum_{u}|I_{u}|=2N\)) \[\sum_{u:|I_{u}|\geq\log^{8}n}\frac{|I_{u}|}{\log n}\leq\frac{2N}{\log n}.\]
3. Otherwise, \(\mathcal{S}_{i}\) is \(\ell\)-good with \(\ell=\max(\ell_{1},\ell_{2})\).

Thus, the number of \(i\in[N]\) such that \(\mathcal{S}_{i}\) is \(\ell\)-good for some \(\ell\) is at least \((1-3/\log n)N\). Given that \(\ell\) is at most \(O(\log N)=O(\log n)\), there exists a positive integer \(\ell\) such that the number of moves in \(\mathcal{S}\) that are \(\ell\)-good is at least \(\Omega(N/\log n)\). Let \(L^{\prime}=\lceil(1+\delta)^{\ell-1}\rceil\) and

\[L=L^{\prime}+\lceil(1+2\delta)L^{\prime}\rceil+1.\]

For any move \(\mathcal{S}_{i}=\{u,v\}\) that is \(\ell\)-good, it is easy to verify that there are \(\Omega(L)\) windows \(W\) of length \(L\) that contain \(i\) and satisfy (18) (see Remark 4.2). Let's pick a window \(W\) of \(\mathcal{S}\) of length \(L\) uniformly at random; note that there are \(N-L+1\) such windows in total. Let \(X\) be the random variable that denotes the number of moves in \(W\) that satisfy (18). Given that the number of moves that are \(\ell\)-good is at least \(\Omega(N/\log n)\), we have

\[\operatorname{\mathbf{E}}\!\left[X\right]\geq\Omega\left(\frac{N}{\log n}\right)\cdot\frac{\Omega(L)}{N}=\Omega\left(\frac{L}{\log n}\right).\]

Let \(a\) be the constant hidden above. Given that we always have \(X\leq L\), we have

\[\Pr\left[X\geq\frac{aL}{2\log n}\right]\geq\frac{a}{2\log n} \tag{19}\]

since otherwise

\[\operatorname{\mathbf{E}}\!\left[X\right]\leq\frac{a}{2\log n}\cdot L+\left(1-\frac{a}{2\log n}\right)\cdot\frac{aL}{2\log n}<\frac{aL}{\log n},\]

a contradiction. The lemma then follows directly from (19).

## 5 Finding Cycles

Let \(\mathcal{S}=(\mathcal{S}_{1},\ldots,\mathcal{S}_{N})\) be a valid move sequence of length \(N=n\log^{10}n\) that consists of 2-moves only. By Lemma 4.6, \(\mathcal{S}\) has a window \(W=(W_{1},\ldots,W_{L})\) of length \(L\) such that the number of moves in \(W\) that satisfy (18) is at least \(\Omega(L/\log n)\).
We show in this section that such a \(W\) satisfies

\[\mathsf{rank}_{\mathsf{cycles}}(W)=\Omega\left(\frac{L}{\log^{10}n}\right). \tag{20}\]

This will finish the proof of Lemma 3.3 when \(\mathcal{S}\) consists of 2-moves only. To this end, let \(\tau_{0}\in\{\pm 1\}^{V(W)}\) be the configuration with \(\tau_{0}(u)=-1\) for all \(u\in V(W)\), so that we can work with the vectors \(\mathsf{imprv}_{\tau_{0},W}(i)\) and \(\mathsf{imprv}_{\tau_{0},W}(C)\) for dependent cycles \(C\) of \(W\) (at the same time, recall from Lemma 2.10 that \(\mathsf{rank}_{\mathsf{cycles}}(W)\) does not depend on the choice of \(\tau_{0}\)). Let \(\tau_{0},\ldots,\tau_{L}\) denote the sequence of configurations induced by \(W\).

Next, let us construct an auxiliary graph \(H=(V(W),E)\), where every move \(W_{i}=\{u,v\}\) adds an edge between \(u\) and \(v\) in \(E\). Note that we allow parallel edges in \(H\), so \(|E|=L\) and \(\#_{W}(u)\) is exactly the degree of \(u\) in \(H\). There is also a natural one-to-one correspondence between cycles of \(W\) and cycles of \(H\). The following lemma shows the existence of a nice-looking bipartite graph in \(H\):

**Lemma 5.1**.: _There are two disjoint sets of nodes \(V_{1},V_{2}\subset V(W)\) and a subset of edges \(E^{\prime}\subseteq E\) such that_

1. _Every edge in_ \(E^{\prime}\) _has one node in_ \(V_{1}\) _and the other node in_ \(V_{2}\)_;_
2. \(|V_{1}\cup V_{2}|=O(L/\log^{3}n)\) _and_ \(|E^{\prime}|=\Omega(L/\log n)\)_;_
3. \(\#_{W}(u)\leq\log^{7}n\) _for every node_ \(u\in V_{1}\)_._

Proof.: Let \(V\) be the set of vertices \(v\) such that \(\#_{W}(v)\geq\log^{3}n\). We start our proof by bounding the size of \(V\):

\[|V|\log^{3}n\leq|V|\min_{v\in V}\#_{W}(v)=|V|\min_{v\in V}\deg_{H}(v)\leq\sum_{v\in V}\deg_{H}(v)\leq 2|E(H)|\leq 2L.\]

We further partition \(V\) into \(V_{\ell}\) and \(V_{h}\), where \(V_{\ell}\) contains those \(v\in V\) with \(\#_{W}(v)\leq\log^{7}n\) and \(V_{h}\) contains those with \(\#_{W}(v)>\log^{7}n\). By Lemma 4.6, we can assume that the number of edges incident to at least one vertex in \(V_{\ell}\) (that is, edges in \(V_{\ell}\times V_{\ell}\cup V_{\ell}\times V_{h}\)) is at least \(\Omega(L/\log n)\). Suppose we construct \(V_{1}\) and \(V_{2}\) by randomly putting each node of \(V_{\ell}\) in \(V_{1}\) or \(V_{2}\) and putting all nodes of \(V_{h}\) in \(V_{2}\). Any edge in \(V_{\ell}\times V_{\ell}\) or \(V_{\ell}\times V_{h}\) is between \(V_{1}\) and \(V_{2}\) with probability \(1/2\), so the expected number of these edges that cross between \(V_{1}\) and \(V_{2}\) is half their number, which is \(\Omega(L/\log n)\). Thus, by a standard probabilistic argument, there exists at least one assignment of \(V_{\ell}\) to \(V_{1}\) and \(V_{2}\) such that at least half of the edges in \(V_{\ell}\times V_{\ell}\cup V_{\ell}\times V_{h}\) are included. Hence, we get a bipartite graph between \(V_{1}\) and \(V_{2}\) with at least \(\Omega(L/\log n)\) edges, and any node \(v\in V_{1}\) satisfies \(\log^{3}n\leq\#_{W}(v)\leq\log^{7}n\). Finally, since \(V_{1}\cup V_{2}\subseteq V=V_{\ell}\cup V_{h}\) and \(|V|\leq 2L/\log^{3}n\), we get that \(|V_{1}\cup V_{2}|=O(L/\log^{3}n)\).

Recall the definition of dependent cycles of \(W\) (and their cancellation vectors) from Section 2.
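Before turning to dependent cycles, we note that the splitting step in the proof of Lemma 5.1 is easy to make concrete. The sketch below uses our own illustrative naming, represents the window as a list of node pairs, and replaces the existence argument with rejection sampling (each round succeeds with positive probability, since the expected number of crossing edges is exactly half):

```python
import random
from collections import Counter

def split_window(window, logn):
    """Sketch of the splitting step in the proof of Lemma 5.1.

    `window` is a list of 2-moves, each a pair (u, v) of nodes. Returns the
    sets V1, V2 and the crossing edges E' with the stated degree bounds."""
    deg = Counter(u for move in window for u in move)          # deg[u] = #_W(u)
    V_ell = {u for u, d in deg.items() if logn ** 3 <= d <= logn ** 7}
    V_h = {u for u, d in deg.items() if d > logn ** 7}
    # edges incident to at least one node of V_ell, as counted in the proof
    E = [m for m in window if m[0] in V_ell or m[1] in V_ell]
    while True:
        V1 = {u for u in V_ell if random.random() < 0.5}       # random half of V_ell
        V2 = (V_ell - V1) | V_h                                # the rest, plus V_h
        Ep = [(u, v) for (u, v) in E
              if (u in V1 and v in V2) or (u in V2 and v in V1)]
        if 2 * len(Ep) >= len(E):          # at least half the edges cross
            return V1, V2, Ep
```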
Since we only care about the rank of vectors induced by dependent cycles, we give the following definition, which classifies each edge of \(H\) into two types, and then use it to give a sufficient condition for a cycle of \(W\) to be dependent:

**Definition 5.2**.: _We say the \(i\)-th move \(W_{i}=\{u,v\}\) of \(W\) is of the same sign if \(\tau_{i}(u)=\tau_{i}(v)\), and is of different signs if \(\tau_{i}(u)\neq\tau_{i}(v)\)._

**Lemma 5.3**.: _Let \(C=(c_{1},\ldots,c_{t})\) be a cycle of \(W\) and assume that \(t\) is even. If all of \(W_{c_{1}},\ldots,W_{c_{t}}\) are of different signs, then \(C\) is a dependent cycle of \(W\); if all of \(W_{c_{1}},\ldots,W_{c_{t}}\) are of the same sign, then \(C\) is also a dependent cycle of \(W\)._

Proof.: Recall the Dependence Criterion (Remark 2.7):

\[C\text{ is a dependent cycle of }W\ \Leftrightarrow\ (-1)^{t}=\tau_{c_{t}}(u_{1})\tau_{c_{1}}(u_{1})\cdot\prod_{i=2}^{t}\tau_{c_{i-1}}(u_{i})\tau_{c_{i}}(u_{i}).\]

Grouping the factors on the right-hand side by moves instead of by nodes, the product equals \(\prod_{j=1}^{t}\tau_{c_{j}}(u_{j})\tau_{c_{j}}(u_{j+1})\), where \(u_{t+1}\) denotes \(u_{1}\). If all of \(W_{c_{1}},\ldots,W_{c_{t}}\) are of different signs, then \(\tau_{c_{j}}(u_{j})\tau_{c_{j}}(u_{j+1})=-1\) for every \(j\in[t]\), so the product equals \((-1)^{t}\), as required. If all of \(W_{c_{1}},\ldots,W_{c_{t}}\) are of the same sign, then \(\tau_{c_{j}}(u_{j})\tau_{c_{j}}(u_{j+1})=1\) for every \(j\in[t]\), so the product equals \(1=(-1)^{t}\), which holds since \(t\) is even.

We assume in the rest of the proof that at least half of the edges in \(E^{\prime}\) are of the same sign; the case when at least half of \(E^{\prime}\) are of different signs can be handled similarly. Let \(E^{\prime\prime}\) be the subset of \(E^{\prime}\) that consists of edges of the same sign, with \(|E^{\prime\prime}|\geq|E^{\prime}|/2\). In the following discussion, cycles in \(E^{\prime\prime}\) always refer to cycles that do not use the same edge twice (parallel edges are counted as different edges, since they correspond to different moves in the window \(W\)). The discussion above leads to the following corollary, which reduces the existence of a dependent cycle of \(W\) to that of a simple cycle in the auxiliary graph \(H\):

**Corollary 5.4**.: _Since every cycle in a bipartite graph has even length, every cycle in \(E^{\prime\prime}\) corresponds to a dependent cycle of \(W\). For convenience, given any cycle \(C\) of \(E^{\prime\prime}\) we will write \(\mathsf{imprv}_{\tau_{0},W}(C)\) to denote the vector of its corresponding dependent cycle of \(W\)._

We first deal with the case when \(E^{\prime\prime}\) contains many parallel edges:

**Lemma 5.5**.: _Let \(D\) be the subset of nodes in \(V_{1}\) that have parallel edges in \(E^{\prime\prime}\). Then \(\mathsf{rank}_{\mathsf{cycles}}(W)\geq|D|/2\)._

Proof.: We prove the lemma even if the sequence contains both 1-moves and 2-moves, so that we can use it also in the general case in the next section. We note first that if \(\mathcal{S}_{i}=\{u,v\}\) and \(\mathcal{S}_{j}=\{u,v\}\), \(i<j\), are two moves that involve the same two nodes, then there is at least one node \(z\neq u,v\) that appears an odd number of times between the two moves. This follows from the definition of a valid move sequence. We will construct a set \(Q\) of at least \(|D|/2\) 2-cycles, where each 2-cycle consists of two parallel edges in \(E^{\prime\prime}\). We use the following procedure:
1. While there is a 2-cycle \((u,v)\) with \(u\in D\), \(v\in V_{2}\), such that some node \(z\neq u\) of \(V_{1}\) moves an odd number of times between the two \(\{u,v\}\) moves of the 2-cycle, pick any such 2-cycle \((u,v)\) and add it to our set \(Q\), pick any such node \(z\neq u\) that moves an odd number of times between the two \(\{u,v\}\) moves, and delete \(u\) and \(z\) from \(D\) (if \(z\) is in \(D\)).
2. Suppose now that there are no more 2-cycles as in step 1. While \(D\) is not empty, let \(u\) be any remaining node in \(D\), take any two incident parallel edges \(\{u,v\}\) in \(E^{\prime\prime}\), add the corresponding 2-cycle to \(Q\), and delete \(u\) from \(D\).

First, notice that for every entry added to \(Q\), we delete at most two nodes from \(D\). Hence, this procedure clearly generates a set \(Q\) of at least \(|D|/2\) 2-cycles. Let \((u_{1},v_{1}),(u_{2},v_{2}),\ldots,(u_{k},v_{k})\) be the sequence of 2-cycles selected, where the first \(d\) were selected in step 1 and the rest in step 2. The nodes \(u_{i}\) are distinct, while the nodes \(v_{i}\) may not be distinct. For each \(i=1,\ldots,d\), let \(z_{i}\) be the node in \(V_{1}\) that appears an odd number of times between the two \(\{u_{i},v_{i}\}\) moves that was selected by the algorithm. Note that \(z_{i}\neq u_{j}\) for all \(j\geq i\): we have \(z_{i}\neq u_{i}\) by construction, and \(z_{i}\) was deleted from \(D\) (if it was in \(D\)) at the step when \(u_{i}\) was selected, so it cannot be selected later. For each \(i=d+1,\ldots,k\), let \(z_{i}\) be any node, other than \(u_{i},v_{i}\), that appears an odd number of times between the two \(\{u_{i},v_{i}\}\) moves. Then \(z_{i}\) is not in \(V_{1}\), because in step 2 no node of \(V_{1}\) moves an odd number of times between the two moves of any remaining 2-cycle.

For each \(i=1,\ldots,k\), we view the edge \(\{u_{i},z_{i}\}\) of the complete graph as a witness for the 2-cycle \(C_{i}=(u_{i},v_{i})\). Consider the matrix \(\mathcal{M}\) with columns corresponding to the selected 2-cycles \(C_{i}\), \(i=1,\ldots,k\), and rows corresponding to the witness edges \(\{u_{i},z_{i}\}\). The diagonal entry, of the column of \(C_{i}\) at the row of its own witness edge \(\{u_{i},z_{i}\}\), is nonzero. Indeed, writing \(c_{1}<c_{2}\) for the positions of the two \(\{u_{i},v_{i}\}\) moves and \(b=(b_{1},b_{2})\) for the cancellation vector of \(C_{i}\), Definition 2.6 gives

\[\mathsf{imprv}_{\tau_{0},W}(C_{i})_{\{u_{i},z_{i}\}}=-b_{1}\big(\tau_{c_{1}}(u_{i})\tau_{c_{1}}(z_{i})\big)-b_{2}\big(\tau_{c_{2}}(u_{i})\tau_{c_{2}}(z_{i})\big);\]

since \(z_{i}\) moves an odd number of times between the two moves, \(\tau_{c_{1}}(z_{i})=-\tau_{c_{2}}(z_{i})\), and since \(b\) is a cancellation vector, \(b_{1}\tau_{c_{1}}(u_{i})+b_{2}\tau_{c_{2}}(u_{i})=0\), so the entry equals \(-2b_{1}\tau_{c_{1}}(u_{i})\tau_{c_{1}}(z_{i})\neq 0\). On the other hand, for \(j>i\), the entry of the column of \(C_{j}\) at the row \(\{u_{i},z_{i}\}\) is \(0\): such an entry can be nonzero only if \(u_{j}\) or \(v_{j}\) belongs to \(\{u_{i},z_{i}\}\) and the other endpoint of the edge moves an odd number of times between the two \(\{u_{j},v_{j}\}\) moves. Now \(u_{j}\neq u_{i}\) (the \(u\)'s are distinct) and \(u_{j}\neq z_{i}\) (as noted above), while \(v_{j}\in V_{2}\) can only equal \(z_{i}\) when \(i>d\); in that case \(j>d\) as well, and the other endpoint \(u_{i}\in V_{1}\) does not move an odd number of times between the two moves of \(C_{j}\), since \(C_{j}\) was selected in step 2. In block form,

\[\mathcal{M}=\begin{bmatrix}\mathcal{M}^{(1)}&\boldsymbol{0}\\ *&\mathcal{M}^{(2)}\end{bmatrix},\qquad\mathcal{M}^{(1)}=\begin{bmatrix}\mathsf{imprv}_{\tau_{0},W}(C_{1})_{\{u_{1},z_{1}\}}&&\boldsymbol{0}\\ *&\ddots&\\ *&*&\mathsf{imprv}_{\tau_{0},W}(C_{d})_{\{u_{d},z_{d}\}}\end{bmatrix},\]

where \(\mathcal{M}^{(2)}=\operatorname{diag}\big(\mathsf{imprv}_{\tau_{0},W}(C_{i})_{\{u_{i},z_{i}\}}\big)_{i=d+1,\ldots,k}\). Thus, the matrix with columns corresponding to the selected 2-cycles \((u_{i},v_{i})\) and rows corresponding to their witness edges \(\{u_{i},z_{i}\}\) is a lower triangular matrix with nonzero diagonal entries. It follows that the columns are linearly independent.

As a result, it suffices to deal with the case when \(|D|\) is \(o(L/\log^{8}n)\). Let \(E^{*}\) denote the subset of edges obtained from \(E^{\prime\prime}\) after deleting all nodes of \(D\) and their incident edges. The remaining bipartite graph has no parallel edges.
Then we have

\[|E^{*}|\geq|E^{\prime\prime}|-|D|\cdot\log^{7}n=\Omega(L/\log n).\]

We list all the properties of the bipartite graph \(H^{*}=(V_{1}\cup V_{2},E^{*})\) we need as follows:

1. \(H^{*}\) is a bipartite graph with no parallel edges;
2. \(|V_{1}\cup V_{2}|\leq O(L/\log^{3}n)\) and \(|E^{*}|\geq\Omega(L/\log n)\);
3. \(\#_{W}(u)\leq\log^{7}n\) for every node \(u\in V_{1}\); and
4. every edge \(e=\{u,v\}\in E^{*}\) corresponds to a move \(W_{i}=\{u,v\}\) which is of the same sign.

Recall that \(E(W)\) denotes the set of edges in \(K_{n}\) which have both nodes in \(V(W)\). These edges are the indices of \(\mathsf{imprv}_{\tau_{0},S}(i)\) and \(\mathsf{imprv}_{\tau_{0},S}(C)\) for a given dependent cycle \(C\) of \(W\). Our main lemma is the following:

**Lemma 5.6**.: _Fix an arbitrary \(s\in[0:L/\log^{10}n]\). Assume additionally that there exists a set of edges \(\mathcal{E}_{s}=\{\{x_{1},y_{1}\},\ldots,\{x_{s},y_{s}\}\}\subseteq E(W)\) such that \(x_{i}\in V_{1}\) for all \(i\in[s]\). Then there exists a cycle \(C\) in \(H^{*}\) and an edge \(\{u,v\}\in E(W)\) with \(u\in V_{1}\setminus\{x_{1},y_{1},\ldots,x_{s},y_{s}\}\) such that_

\[\big(\mathsf{imprv}_{\tau_{0},W}(C)\big)_{\{u,v\}}\neq 0\quad\text{and}\quad\big(\mathsf{imprv}_{\tau_{0},W}(C)\big)_{\{x_{i},y_{i}\}}=0,\quad\text{for all }i\in[s]. \tag{21}\]

Proof of (20) assuming Lemma 5.6.: Start with \(\mathcal{E}_{0}=\emptyset\). For \(s\) going from \(0\) to \(\lfloor L/\log^{10}n\rfloor-1\), use Lemma 5.6 to find a cycle \(C_{s+1}\) and an edge \(\{u,v\}\) satisfying (21), set \(\{x_{s+1},y_{s+1}\}:=\{u,v\}\) and \(\mathcal{E}_{s+1}:=\mathcal{E}_{s}\cup\{\{x_{s+1},y_{s+1}\}\}\), and repeat the above process. In the end, we get a set of cycles \(C_{1},\ldots,C_{k}\), where \(k=\lfloor L/\log^{10}n\rfloor\), such that for any \(j\in[k]\) we have

\[\big(\mathsf{imprv}_{\tau_{0},W}(C_{j})\big)_{\{x_{j},y_{j}\}}\neq 0\quad\text{and}\quad\big(\mathsf{imprv}_{\tau_{0},W}(C_{j})\big)_{\{x_{i},y_{i}\}}=0,\quad\text{for all }i\in[j-1].\]

Let \(\mathcal{M}\) be the \(k\times k\) square matrix where \(\mathcal{M}_{ij}=\big(\mathsf{imprv}_{\tau_{0},W}(C_{j})\big)_{\{x_{i},y_{i}\}}\):

\[\mathcal{M}=\begin{bmatrix}\mathsf{imprv}_{\tau_{0},W}(C_{1})_{\{x_{1},y_{1}\}}\neq 0&0&\cdots&0\\ *&\mathsf{imprv}_{\tau_{0},W}(C_{2})_{\{x_{2},y_{2}\}}\neq 0&0&\cdots&0\\ \vdots&*&\mathsf{imprv}_{\tau_{0},W}(C_{3})_{\{x_{3},y_{3}\}}\neq 0&\cdots&0\\ \vdots&\vdots&*&\ddots&0\\ *&*&*&*&\mathsf{imprv}_{\tau_{0},W}(C_{k})_{\{x_{k},y_{k}\}}\neq 0\end{bmatrix}\]

As we can see, the matrix is lower triangular with nonzero diagonal entries, so it has full rank \(k\). Note that \(\mathcal{M}\) is a submatrix of the matrix formed by taking the vectors \(\mathsf{imprv}_{\tau_{0},W}(C_{j})\) as columns; therefore we have \(\mathsf{rank}_{\mathsf{cycles}}(W)\geq k=\Omega(L/\log^{10}n)\), which establishes (20).

### Proof of Lemma 5.6

Given a cycle \(C\) in \(H^{*}\), we say \(\{u,v\}\in E(W)\) is a _witness_ of \(C\) if

\[\big(\mathsf{imprv}_{\tau_{0},W}(C)\big)_{\{u,v\}}\neq 0.\]

So the goal of Lemma 5.6 is to find a cycle \(C\) of \(H^{*}\) such that none of the edges \(\{x_{i},y_{i}\}\in\mathcal{E}_{s}\) is a witness of \(C\) and, at the same time, \(C\) has a witness edge \(\{u,v\}\) with \(u\) being a new node in \(V_{1}\) not seen in \(\mathcal{E}_{s}\) before. The proof consists of two steps. First we introduce a so-called _split auxiliary graph_ \(G\) using \(H^{*}\) and \(\mathcal{E}_{s}\), obtained by deleting certain nodes and creating extra copies of certain nodes of \(H^{*}\).
We show in Lemma 5.7 that certain simple cycles in \(G\) correspond to cycles in \(H^{*}\) that do not have any edge in \(\mathcal{E}_{s}\) as a witness. Next we show in Lemma 5.9 how to find such a simple cycle in \(G\) that has a new witness \((u,v)\) such that \(u\in V_{1}\) and \(u\) does not appear in \(\mathcal{E}_{s}\).

Let \(\mathsf{wit}_{1}(\mathcal{E}_{s})\) be the set of \(u\in V_{1}\) that appear in \(\mathcal{E}_{s}\) and let \(\mathsf{wit}_{2}(\mathcal{E}_{s})\) be the set of \(v\in V_{2}\) that appear in \(\mathcal{E}_{s}\). For each \(v\in\mathsf{wit}_{2}(\mathcal{E}_{s})\), we write \(\mathsf{wit}_{1}(v)\) to denote the (nonempty) set of nodes \(u\in\mathsf{wit}_{1}(\mathcal{E}_{s})\) such that \(\{u,v\}\in\mathcal{E}_{s}\), and let \(k_{v}\) denote the number of moves in \(W\) that involve at least one node in \(\mathsf{wit}_{1}(v)\). We have \(k_{v}\leq|\mathsf{wit}_{1}(v)|\cdot\log^{7}n\), since \(\#_{W}(u)\leq\log^{7}n\) for all \(u\in V_{1}\). An example of an auxiliary graph \(H^{*}\) together with its split graph \(G\) is shown in Figure 4.

We now define our split auxiliary (bipartite) graph \(G\). We start with its set of nodes \(V_{1}^{\prime}\cup V_{2}^{\prime}\):

1. \(V_{1}^{\prime}=V_{1}\setminus\mathsf{wit}_{1}(\mathcal{E}_{s})\); and
2. \(V_{2}^{\prime}=\cup_{v\in V_{2}}C(v)\), where \(C(v)=\{v^{(0)}\}\) if \(v\notin\mathsf{wit}_{2}(\mathcal{E}_{s})\) and \(C(v)=\{v^{(0)},v^{(1)},\ldots,v^{(k_{v})}\}\) if \(v\in\mathsf{wit}_{2}(\mathcal{E}_{s})\).

So we deleted the nodes of \(\mathsf{wit}_{1}(\mathcal{E}_{s})\) from \(V_{1}\) and replaced each node \(v\in\mathsf{wit}_{2}(\mathcal{E}_{s})\) by \(k_{v}+1\) new nodes. Next we define the edge set \(E(G)\) of \(G\). Every move \(W_{i}=\{u,v\}\) in \(W\) that corresponds to an edge \((u,v)\) in \(H^{*}\) with \(u\in V_{1}\setminus\mathsf{wit}_{1}(\mathcal{E}_{s})\) and \(v\in V_{2}\) adds an edge to \(G\) as follows:

1. If \(v\notin\mathsf{wit}_{2}(\mathcal{E}_{s})\), then we add \((u,v^{(0)})\) to \(G\); and
2. otherwise (\(v\in\mathsf{wit}_{2}(\mathcal{E}_{s})\)), letting \(\mu_{i}\in[0:k_{v}]\) be the number of moves before \(W_{i}\) that contain at least one node in \(\mathsf{wit}_{1}(v)\) (note that \(W_{i}\) contains no node of \(\mathsf{wit}_{1}(v)\); in fact, \(W_{i}\) cannot contain any node of \(\mathsf{wit}_{1}(\mathcal{E}_{s})\)), we add \((u,v^{(\mu_{i})})\) to \(G\).

Therefore, every edge in \(G\) corresponds to a move in \(W\), which in turn corresponds to an edge in \(H^{*}\) that does not contain a node of \(\mathsf{wit}_{1}(\mathcal{E}_{s})\). It is clear that each simple cycle of \(G\) corresponds to a cycle of \(H^{*}\), which in turn corresponds to a dependent cycle of \(W\) (since we assume w.l.o.g. that all edges of the auxiliary graph, and of its split version, correspond to moves of the same sign; see Corollary 5.4). So \(\mathsf{imprv}_{\tau_{0},W}(C)\) is well defined for simple cycles \(C\) of \(G\). Our motivation for constructing and working with \(G\) is the following lemma:

**Lemma 5.7**.: _Let \(C\) be a simple cycle of \(G\). Then none of the edges in \(\mathcal{E}_{s}\) is a witness of \(C\)._

Proof.: Let \(e=\{u,v\}\in\mathcal{E}_{s}\) with \(u\in\mathsf{wit}_{1}(\mathcal{E}_{s})\). By the definition of \(G\), \(u\) has no copy in \(G\), so \(u\) does not appear on the cycle. Let \(C\) be a cycle in \(G\) with nodes \((w_{1}^{(i_{1})},w_{2}^{(i_{2})},\cdots,w_{t}^{(i_{t})},w_{1}^{(i_{1})})\) (we use \(w_{j}^{(0)}\) to denote \(w_{j}\) for \(w_{j}\in V_{1}^{\prime}\)).
If the cycle does not contain any vertex in \(C(v)\), then \(\mathsf{imprv}_{\tau_{0},W}(C)_{e}=0\). Now suppose the cycle contains nodes in \(C(v)\); specifically, \(w_{j_{1}}=\cdots=w_{j_{m}}=v\). Let the corresponding cycle \(C\) on \(W\) be \((c_{1},\cdots,c_{t})\), where \(W_{c_{i}}=\{w_{i},w_{i+1}\}\) if \(i<t\) and \(W_{c_{t}}=\{w_{t},w_{1}\}\), and let \(b\) be the cancellation vector of \(C\). We can write down the value of the improvement vector on edge \(e\):

\[\begin{split}\mathsf{imprv}_{\tau_{0},W}(C)_{e}&=\sum_{k=1}^{m}\left(b_{j_{k}-1}\mathsf{imprv}_{\tau_{0},W}(c_{j_{k}-1})_{e}+b_{j_{k}}\mathsf{imprv}_{\tau_{0},W}(c_{j_{k}})_{e}\right)\\ &=-\sum_{k=1}^{m}\left(b_{j_{k}-1}\tau_{c_{j_{k}-1}}(v)\tau_{c_{j_{k}-1}-1}(u)+b_{j_{k}}\tau_{c_{j_{k}}}(v)\tau_{c_{j_{k}}-1}(u)\right)\end{split} \tag{22}\]

By the construction of \(G\), \(u\) does not appear in any move between \(c_{j_{k}-1}\) and \(c_{j_{k}}\) (otherwise, in \(G\), the edge corresponding to \(W_{c_{j_{k}-1}}\) and the edge corresponding to \(W_{c_{j_{k}}}\) would not be connected to \(w_{j_{k}}^{(i_{j_{k}})}\) with the same superscript \(i_{j_{k}}\); for an illustrative explanation, see Figures 5 and 6). So \(\tau_{c_{j_{k}-1}-1}(u)=\tau_{c_{j_{k}}-1}(u)\). By the definition of the cancellation vector,

\[b_{j_{k}-1}\tau_{c_{j_{k}-1}}(v)+b_{j_{k}}\tau_{c_{j_{k}}}(v)=0.\]

So each term in (22) is \(0\), and \(\mathsf{imprv}_{\tau_{0},W}(C)_{e}=0\).

Figure 4: An exemplifying case of an auxiliary graph \(H^{*}\) and its split graph \(G(H^{*},\mathcal{E}_{s})\).

To finish the proof, it now suffices to find a simple cycle \(C\) of \(G\) that has a witness \((u,v)\in E(W)\) with one of its vertices \(u\in V^{\prime}_{1}\). We start by checking that all the conditions for \(H^{*}\) still hold for \(G\). It is clear that \(G\) is a bipartite graph with no parallel edges. By the definition of \(\mathsf{wit}_{1}(v)\) for each \(v\in\mathsf{wit}_{2}(\mathcal{E}_{s})\), we have \(\sum_{v\in\mathsf{wit}_{2}(\mathcal{E}_{s})}|\mathsf{wit}_{1}(v)|\leq 2s\), and also \(|\mathsf{wit}_{1}(\mathcal{E}_{s})|\leq 2s\). The number of nodes \(|V^{\prime}_{1}\cup V^{\prime}_{2}|\) in \(G\) is at most

\[O(L/\log^{3}n)+\sum_{v\in\mathsf{wit}_{2}(\mathcal{E}_{s})}k_{v}\leq O(L/\log^{3}n)+\sum_{v\in\mathsf{wit}_{2}(\mathcal{E}_{s})}|\mathsf{wit}_{1}(v)|\cdot\log^{7}n=O(L/\log^{3}n),\]

where the last equality uses that \(s\leq L/\log^{10}n\). The number of edges in \(G\) is at least

\[\Omega(L/\log n)-|\mathsf{wit}_{1}(\mathcal{E}_{s})|\cdot\log^{7}n=\Omega(L/\log n).\]

We perform one more preprocessing step on \(G\) to simplify the proof. Note that the average degree of the nodes in \(G\) is at least \(\Omega\left((L/\log n)/(L/\log^{3}n)\right)=\Omega(\log^{2}n)\). The following simple lemma shows that one can clean up \(G\) to get a bipartite graph \(G^{*}\) such that every node has degree at least \(100\log n\) while the number of edges in \(G^{*}\) remains \(\Omega(L/\log n)\):

**Lemma 5.8**.: _There is a bipartite graph \(G^{*}=(V^{*}_{1}\cup V^{*}_{2},E(G^{*}))\) with \(V^{*}_{1}\subseteq V^{\prime}_{1}\), \(V^{*}_{2}\subseteq V^{\prime}_{2}\) and \(E(G^{*})\subseteq E(G)\) such that every node in \(G^{*}\) has degree at least \(100\log n\) and \(|E(G^{*})|=\Omega(L/\log n)\)._

Proof.: Keep deleting nodes of \(G\) with degree less than \(100\log n\) one by one (together with their incident edges) until no such node exists.
The number of edges we delete during the whole process is no more than

\[(|V_{1}^{\prime}|+|V_{2}^{\prime}|)\cdot 100\log n\leq O(L/\log^{2}n).\]

So the remaining graph (which trivially has minimum degree at least \(100\log n\)) has at least \(\Omega(L/\log n)\) many edges.

Let us list the properties of \(G^{*}=(V_{1}^{*}\cup V_{2}^{*},E(G^{*}))\) we will use in the rest of the proof:

1. \(V_{1}^{*}\subseteq V_{1}^{\prime}=V_{1}\setminus\mathsf{wit}_{1}(\mathcal{E}_{s})\) and \(V_{2}^{*}\subseteq V_{2}^{\prime}\), so each node in \(V_{2}^{*}\) is in \(C(v)\) for some \(v\in V_{2}\);
2. the degree of every node is at least \(100\log n\);
3. for any \(u\in V_{1}^{*}\) and \(v\in V_{2}\), the number of neighbors of \(u\) in \(V_{2}^{*}\cap C(v)\) is at most one; and
4. \(E(G^{*})\) has no parallel edges, \(|E(G^{*})|\geq\Omega(L/\log n)\), and w.l.o.g. each edge in \(E(G^{*})\) corresponds to a move of the same sign.

We prove the following lemma to finish the proof:

**Lemma 5.9**.: _Let \(u\in V_{1}^{*}\) and \(v\neq v^{\prime}\in V_{2}\) be such that \((u,v^{(j)}),(u,v^{\prime(j^{\prime})})\in E(G^{*})\) for some \(j\) and \(j^{\prime}\), and the corresponding moves \(W_{i}=\{u,v\}\) and \(W_{i^{\prime}}=\{u,v^{\prime}\}\) in \(W\) are not consecutive. Then the graph \(G^{*}\) has a simple cycle \(C\) such that \(C\) has a witness \(e=\{u,w\}\in E(W)\) with \(u\in V_{1}^{*}\)._

Footnote 6: It is worth mentioning that \(V_{2}^{*}\) always includes at least two vertices which are copies of different initial nodes \(v,v^{\prime}\). Indeed, if \(G^{*}\) were a star graph around \(V_{2}^{*}=\{v^{*}\}\), then \(\Omega(L/\log n)=|E(G^{*})|=\Theta(|V(G^{*})|)=O(L/\log^{3}n)\), which leads to a contradiction. Additionally, notice that \(v^{(j)}\) and \(v^{\prime(j^{\prime})}\) correspond to different nodes of the initial graph, since otherwise the initial auxiliary graph \(H^{*}\) would have parallel edges.

Proof.: We begin with a simple sufficient condition for a simple cycle of \(G^{*}\) to satisfy the above condition. First, let \(u\in V_{1}^{*}\) and \(v\neq v^{\prime}\in V_{2}\) be such that \((u,v^{(j)}),(u,v^{\prime(j^{\prime})})\in E(G^{*})\) for some \(j\) and \(j^{\prime}\), and the corresponding moves \(W_{i}=\{u,v\}\) and \(W_{i^{\prime}}=\{u,v^{\prime}\}\) in \(W\) are not consecutive. Assume that \(i<i^{\prime}\) without loss of generality; then \(i+1<i^{\prime}\). The following claim shows that there must be a node \(w\notin\{u,v,v^{\prime}\}\) that moves an odd number of times in \(W_{i+1},\ldots,W_{i^{\prime}-1}\):

**Claim 5.10**.: _There is a node \(w\notin\{u,v,v^{\prime}\}\) that appears in an odd number of moves in \(W_{i+1},\ldots,W_{i^{\prime}-1}\)._

Proof.: This follows from the fact that \(W\) is a valid move sequence. We distinguish two cases. If \(v^{\prime}\) appears an even number of times in \(W_{i+1},\ldots,W_{i^{\prime}-1}\), then use the condition of validity on the subsequence \(W_{i},\ldots,W_{i^{\prime}-1}\): there is at least one node \(w\notin W_{i}=\{u,v\}\) that appears an odd number of times in \(W_{i},\ldots,W_{i^{\prime}-1}\). Since \(w\notin W_{i}\), node \(w\) appears an odd number of times in \(W_{i+1},\ldots,W_{i^{\prime}-1}\). Hence \(w\neq v^{\prime}\), and thus \(w\notin\{u,v,v^{\prime}\}\) and the claim follows. If \(v^{\prime}\) appears an odd number of times in \(W_{i+1},\ldots,W_{i^{\prime}-1}\), then \(v^{\prime}\) appears an even number of times in \(W_{i},\ldots,W_{i^{\prime}}\).
Use the condition of validity on the subsequence \(W_{i},\ldots,W_{i^{\prime}}\): there is at least one node \(w\notin W_{i}=\{u,v\}\) that appears an odd number of times in \(W_{i},\ldots,W_{i^{\prime}}\). Then \(w\neq v^{\prime}\), and since also \(w\notin W_{i}=\{u,v\}\), it follows that \(w\) appears an odd number of times in \(W_{i+1},\ldots,W_{i^{\prime}-1}\), and the claim follows again.

We remark that Claim 5.10 holds even when \(W\) is a mixture of 1-moves and 2-moves. This will be important when we deal with the general case in Section 6. We write \(w^{*}(u,v^{(j)},v^{\prime(j^{\prime})})\in V(W)\) to denote such a node \(w\) promised by the above claim (if more than one exists, pick one arbitrarily). The next claim gives us a sufficient condition for a simple cycle \(C\) of \(G^{*}\) to satisfy the condition of the lemma:

**Claim 5.11**.: _Let_

\[C=u_{1}v_{1}^{(j_{1})}u_{2}v_{2}^{(j_{2})}\cdots u_{k}v_{k}^{(j_{k})}u_{1}\]

_be a simple cycle of \(G^{*}\) for some nonnegative integers \(j_{1},\ldots,j_{k}\). Suppose that for some \(i\in[k]\) the node \(w:=w^{*}(u_{i},v_{i-1}^{(j_{i-1})},v_{i}^{(j_{i})})\in V(W)\) does not appear in \(C\) (where \(v_{i-1}^{(j_{i-1})}\) denotes \(v_{k}^{(j_{k})}\) if \(i=1\)), i.e., \(w\notin\{u_{1},\ldots,u_{k},v_{1},\ldots,v_{k}\}\). Then \((u_{i},w)\in E(W)\) must be a witness of \(C\)._

Proof.: Let the corresponding cycle in \(W\) be \((c_{1},\cdots,c_{2k})\), where edge \(\{u_{l},v_{l}^{(j_{l})}\}\) corresponds to move \(c_{2l-1}\) and edge \((v_{l}^{(j_{l})},u_{l+1})\) corresponds to \(c_{2l}\) (when \(l=k\), \(u_{l+1}\) denotes \(u_{1}\)). Let \(b\) be its cancellation vector. Recall

\[\big(\mathsf{imprv}_{\tau_{0},W}(C)\big)_{\{w,u_{i}\}}=\sum_{l=1}^{2k}b_{l}\big(\mathsf{imprv}_{\tau_{0},W}(c_{l})\big)_{\{w,u_{i}\}}.\]

Since \(w\) does not appear in \(C\), \(\big(\mathsf{imprv}_{\tau_{0},W}(c_{l})\big)_{\{w,u_{i}\}}\neq 0\) only when \(u_{i}\in W_{c_{l}}\), i.e., when \(l=2i-2\) or \(l=2i-1\) (if \(i=1\), it is \(l=2k\) or \(l=1\)). So

\[\big(\mathsf{imprv}_{\tau_{0},W}(C)\big)_{\{w,u_{i}\}}=-b_{2i-2}\tau_{c_{2i-2}}(u_{i})\tau_{c_{2i-2}}(w)-b_{2i-1}\tau_{c_{2i-1}}(u_{i})\tau_{c_{2i-1}}(w).\]

By the definition of \(w^{*}\), \(w\) moves an odd number of times between moves \(c_{2i-2}\) and \(c_{2i-1}\), so \(\tau_{c_{2i-2}}(w)=-\tau_{c_{2i-1}}(w)\). Also, by property 4 of \(G^{*}\), Corollary 5.4 and the definition of a dependent cycle, \(b_{2i-2}\tau_{c_{2i-2}}(u_{i})+b_{2i-1}\tau_{c_{2i-1}}(u_{i})=0\). So

\[\big(\mathsf{imprv}_{\tau_{0},W}(C)\big)_{\{w,u_{i}\}}=-2b_{2i-2}\tau_{c_{2i-2}}(u_{i})\tau_{c_{2i-2}}(w)\neq 0.\]

This finishes the proof of the claim.

Finally, we prove the existence of a simple cycle \(C\) of \(G^{*}\) that satisfies the condition of the above claim. To this end, we first review a simple argument showing that any bipartite graph with \(n\) nodes and minimum degree at least \(100\log n\) must have a simple cycle; later we modify it to our needs. The argument goes by picking an arbitrary node in the graph as the root and growing a binary tree of \(\log n\) levels as follows:

1. In the first round we just add two distinct neighbors of the root in the graph as its children.
2. Then in each round, we grow the tree by one level by going through its current leaves one by one and adding two children to each leaf. For each leaf \(u\) of the current tree, we just pick two of its neighbors in the graph that do not appear among the ancestors of \(u\) and add them as children of \(u\).
Such neighbors always exist, since the tree will have no more than \(\log n\) levels and each node has degree at least \(100\log n\) in the graph. Given that there are only \(n\) nodes in the graph, there must be a node that appears more than once in the tree at the end. Let's consider the first moment when we grow a leaf by adding one child (labelled by \(u\)) such that \(u\) already appeared in the tree. Note that the two nodes labelled by \(u\) are not related in the tree, since we maintain the invariant that the label of a node does not appear among its ancestors. Combining the paths from these two nodes up to the first node at which the two paths diverge, we get a simple cycle of the graph.

We now adapt the above argument to prove the existence of a simple cycle \(C\) of \(G^{*}\) that satisfies the condition of Claim 5.11 by building a binary tree of \(2\log n\) levels as follows. We start with an arbitrary node \(u_{\mathsf{root}}\in V_{1}^{*}\) as the root of the tree and expand the tree level by level, leaf by leaf, as follows:

1. Case 1: The leaf we would like to grow is labelled by a node \(u\in V_{1}^{*}\). In this case we add two children as follows. Let \(u_{1}v_{1}^{(j_{1})}\cdots u_{k-1}v_{k-1}^{(j_{k-1})}u\) be the path from the root \((u_{1})\) to \(u\) in the tree, where \(u_{1},\ldots,u_{k-1},u\in V_{1}^{*}\) and \(v_{1}^{(j_{1})},\ldots,v_{k-1}^{(j_{k-1})}\in V_{2}^{*}\). We pick two neighbors \(v^{(j)},v^{\prime(j^{\prime})}\) of \(u\) in \(G^{*}\) with distinct \(v,v^{\prime}\in V_{2}\) as its children in the tree. We would like \(v\) and \(v^{\prime}\) to satisfy the following three properties: (1) \(v\) and \(v^{\prime}\) do not lie in \(\{v_{1},v_{2},\cdots,v_{k-1}\}\); (2) \(v\) and \(v^{\prime}\) are different from \(w^{*}(u_{i},v_{i}^{(j_{i})},v_{i}^{\prime(j_{i}^{\prime})})\) for every \(i=1,\ldots,k-1\), where \(v_{i}^{\prime(j_{i}^{\prime})}\) denotes the other child of \(u_{i}\) in the tree; and (3) the move corresponding to \(\{u,v^{(j)}\}\) in \(W\) and the move corresponding to \(\{u,v^{\prime(j^{\prime})}\}\) in \(W\) are not consecutive moves. The existence of \(v^{(j)}\) and \(v^{\prime(j^{\prime})}\) satisfying (1), (2) and (3) follows easily from the fact that every node (in particular, \(u\) here) has degree at least \(100\log n\) in \(G^{*}\). Indeed, each item of (1) and (2) rejects at most two candidate children, and (3) rejects at most two more. Given that the tree only grows for \(2\log n\) levels (half of them with leaves in \(V_{1}^{*}\) and the other half with leaves in \(V_{2}^{*}\)), we have \(k\leq\log n\), so at most \(2k\leq 2\log n\) edges of \(u\) need to be avoided. Moreover, no two edges from \(u\) go to the same \(C(v)\) for any \(v\in V_{2}\) (because we do not allow parallel edges).
2. Case 2: The leaf we would like to grow is labelled by a node \(v^{(j)}\in V_{2}^{*}\). In this case we just add one neighbor \(u\in V_{1}^{*}\) of \(v^{(j)}\) as its only child. Let \(u_{1}v_{1}^{(j_{1})}\cdots v_{k-1}^{(j_{k-1})}u_{k}v^{(j)}\) be the path from the root to \(v^{(j)}\). We pick a neighbor \(u\in V_{1}^{*}\) of \(v^{(j)}\) in \(G^{*}\) that satisfies: (1) \(u\notin\{u_{1},\cdots,u_{k}\}\); and (2) \(u\) is different from \(w^{*}(u_{i},v_{i}^{(j_{i})},v_{i}^{\prime(j_{i}^{\prime})})\) for every \(i=1,\ldots,k-1\) and different from \(w^{*}(u_{k},v^{(j)},v^{\prime(j^{\prime})})\), where \(v_{i}^{\prime(j_{i}^{\prime})}\) denotes the other child of \(u_{i}\) and \(v^{\prime(j^{\prime})}\) denotes the other child of \(u_{k}\) in the tree.
The existence of such a \(u\) follows from the same argument as in Case 1.

Given that the tree has \(2\log n\) levels and there are only \(n\) nodes, there must be a node that appears more than once in the tree at the end; let's consider the first moment when we grow a leaf by adding a child whose label already appeared in the tree. Similarly, we trace the two paths and let \(u\in V_{1}^{*}\) be the node where the two paths diverge; note that, given the construction of the tree, this node must be a node in \(V_{1}^{*}\), since nodes in \(V_{2}^{*}\) have only one child in the tree. On the one hand, the way we construct the tree makes sure that combining the two paths leads to a simple cycle \(C\) of \(G^{*}\). On the other hand, let \(v^{(j)},v^{\prime(j^{\prime})}\in V_{2}^{*}\) be the two children of \(u\) (which are next to \(u\) on the cycle). Then it is easy to verify that \(w^{*}(u,v^{(j)},v^{\prime(j^{\prime})})\) does not appear on the cycle we just found. This ends the proof of the lemma.

Figure 7: Example of the finding-cycle process.

## 6 General Case

We prove Lemma 3.3 for the general case. Let \(\mathcal{S}=(\mathcal{S}_{1},\ldots,\mathcal{S}_{N})\) be a valid move sequence of length \(N=n\log^{10}n\) that consists of both 1-moves and 2-moves. We will consider two cases and deal with them separately: (1) the number of 1-moves in \(\mathcal{S}\) is at least \(N/\log^{5}n\); and (2) the number of 1-moves is at most \(N/\log^{5}n\).

### Case 1

We consider the case when there are at least \(N/\log^{5}n\) many 1-moves. In this case we show that there is a window \(W\) of \(\mathcal{S}\) such that \(\mathsf{rank}_{\mathsf{arcs}}(W)\) is large. The arguments used in this case are similar to those used in [27, 24, 13]. Given a window \(W\) of \(\mathcal{S}\), we write \(V_{2}(W)\) to denote the set of nodes \(u\in V(W)\) such that at least two 1-moves in \(W\) are \(\{u\}\).

**Lemma 6.1**.: _There is a window \(W\) of \(\mathcal{S}\) such that_

\[|V_{2}(W)|=\Omega\left(\frac{\mathsf{len}(W)}{\log^{6}n}\right).\]

Proof.: Any 1-move that is not the first 1-move of its vertex generates a new arc of \(\mathcal{S}\), so the total number of arcs is at least \(|\mathsf{arcs}(\mathcal{S})|\geq N/\log^{5}n-n\). Define the length of an arc \(\alpha=(i,j)\), \(\mathsf{len}(\alpha)\), to be \(j-i\). Partition all arcs based on their length: for any integer \(i\) with \(0\leq i\leq\lfloor\log_{2}N\rfloor\), define

\[\mathsf{arcs}_{i}(\mathcal{S}):=\left\{\alpha:\alpha\in\mathsf{arcs}(\mathcal{S}),\ \mathsf{len}(\alpha)\in[2^{i},2^{i+1})\right\}.\]

Since \(\sum_{i=0}^{\lfloor\log_{2}N\rfloor}|\mathsf{arcs}_{i}(\mathcal{S})|\geq N/\log^{5}n-n\), there exists \(i^{*}\) such that

\[|\mathsf{arcs}_{i^{*}}(\mathcal{S})|\geq\frac{N/\log^{5}n-n}{\log_{2}N+1}\geq\frac{N}{10\log^{6}n}.\]

Let \(W^{\prime}_{r}\) be a window of length \(2^{i^{*}+2}\) starting at a uniformly random position in \(\{-2^{i^{*}+2}+1,\cdots,N\}\), and let \(W_{r}=W^{\prime}_{r}\cap[N]\). For any arc \(\alpha\in\mathsf{arcs}_{i^{*}}(\mathcal{S})\), there are \(\mathsf{len}(W^{\prime}_{r})-\mathsf{len}(\alpha)\) possible starting points for \(W^{\prime}_{r}\) to contain \(\alpha\).
So

\[\Pr\big[\alpha\in\mathsf{arcs}(W_{r})\big]\geq\frac{\mathsf{len}(W_{r}^{\prime})-\mathsf{len}(\alpha)}{N+2^{i^{*}+2}-1}\geq\frac{2^{i^{*}+1}}{N+2^{i^{*}+2}}.\]

By linearity of expectation, and using \(2^{i^{*}+2}\leq 4N\),

\[\mathbb{E}\big[|\mathsf{arcs}_{i^{*}}(W_{r})|\big]\geq|\mathsf{arcs}_{i^{*}}(\mathcal{S})|\cdot\frac{2^{i^{*}+1}}{N+2^{i^{*}+2}}\geq\frac{2^{i^{*}+1}}{50\log^{6}n}\geq\frac{\mathsf{len}(W_{r})}{100\log^{6}n}.\]

We can therefore pick \(W\) so that \(|\mathsf{arcs}_{i^{*}}(W)|\geq\mathsf{len}(W)/(100\log^{6}n)\). By the definition of an arc, any vertex with an arc in \(\mathsf{arcs}_{i^{*}}(W)\) must be in \(V_{2}(W)\). On the other hand, any arc \(\alpha\in\mathsf{arcs}_{i^{*}}(W)\) has length at least \(2^{i^{*}}\geq\mathsf{len}(W)/4\), so any vertex can have at most \(4\) arcs in \(\mathsf{arcs}_{i^{*}}(W)\). We have

\[|V_{2}(W)|\geq\#\text{ vertices with arcs in }\mathsf{arcs}_{i^{*}}(W)\geq|\mathsf{arcs}_{i^{*}}(W)|/4\geq\frac{\mathsf{len}(W)}{400\log^{6}n}.\]

This finishes the proof of the lemma.

**Lemma 6.2**.: _We have \(\mathsf{rank}_{\mathsf{arcs}}(W)\geq\Omega(|V_{2}(W)|)\)._

Proof.: Let \(u_{1},u_{2},\cdots,u_{k}\) be the vertices in \(V_{2}(W)\), and let \(\alpha_{j}=(s_{j},e_{j})\) be the arc of \(u_{j}\) formed by its first and second 1-moves. Since \(W\) is a valid move sequence, there exists a vertex \(v_{j}\neq u_{j}\) that moves an odd number of times between \(s_{j}\) and \(e_{j}\), i.e., \(\tau_{s_{j}-1}(v_{j})=-\tau_{e_{j}-1}(v_{j})\). Pick an arbitrary such \(v_{j}\) for each \(j\). Take a subset \(U\) of \(V_{2}(W)\) by the following process:

* \(V\gets V_{2}(W)\)
* \(U\leftarrow\emptyset\)
* For \(j\) from \(1\) to \(k\): if \(u_{j}\in V\), then \(V\gets V\setminus\{u_{j},v_{j}\}\) and \(U\gets U\cup\{u_{j}\}\).

In each step we delete at most two elements from \(V\) and add one element to \(U\), so \(|U|\geq|V_{2}(W)|/2\). Let \(U=\{u_{i_{1}},u_{i_{2}},\cdots,u_{i_{m}}\}\), ordered by the sequence in which they were added. By the process, for any \(u_{i_{j}}\in U\) and any \(j^{\prime}>j\), we have \(v_{i_{j}}\neq u_{i_{j^{\prime}}}\) (since \(v_{i_{j}}\) was removed from \(V\) at the step when \(u_{i_{j}}\) was added). Recall

\[\mathsf{imprv}_{\tau_{0},W}(\alpha_{i_{j}})_{(u_{i_{j}},v_{i_{j}})}=-\tau_{s_{i_{j}}-1}(v_{i_{j}})+\tau_{e_{i_{j}}-1}(v_{i_{j}})=2\tau_{e_{i_{j}}-1}(v_{i_{j}})\neq 0.\]

Moreover, for any \(j^{\prime}>j\), we have \(u_{i_{j^{\prime}}}\neq u_{i_{j}}\) and \(u_{i_{j^{\prime}}}\neq v_{i_{j}}\); since the improvement vector of the arc \(\alpha_{i_{j^{\prime}}}\) is supported on edges incident to \(u_{i_{j^{\prime}}}\), this gives \(\mathsf{imprv}_{\tau_{0},W}(\alpha_{i_{j^{\prime}}})_{(u_{i_{j}},v_{i_{j}})}=0\). Consider the matrix formed by taking the \(j\)-th column to be \(\mathsf{imprv}_{\tau_{0},W}(\alpha_{i_{j}})\). The row indexed by \((u_{i_{j}},v_{i_{j}})\) is of the form

\[(\underbrace{*,*,\cdots,*}_{j-1\text{ unknown numbers}},\,2\tau_{e_{i_{j}}-1}(v_{i_{j}})\neq 0,\,0,\cdots,0).\]

This means the matrix has a lower triangular square submatrix of size at least \(m\geq|V_{2}(W)|/2\), so we have \(\mathsf{rank}_{\mathsf{arcs}}(W)\geq|V_{2}(W)|/2\).

### Case 2

Let \(\mathcal{S}\) be a valid move sequence of length \(N\) with no more than \(N/\log^{5}n\) many 1-moves. Let \(W\) be a window of \(\mathcal{S}\). We write \(\#_{W}(u)\) to denote the number of moves (including both 1-moves and 2-moves) in which \(u\) appears in \(W\), and write \(\#^{2}_{W}(u)\) to denote the number of 2-moves in which \(u\) appears in \(W\). We start by showing a lemma similar to Lemma 4.6 in Section 4.

**Lemma 6.3**.: _Let \(\mathcal{S}\) be a valid move sequence of length \(N=n\log^{10}n\) with no more than \(N/\log^{5}n\) many 1-moves._
_Then there exists a window \(W\) of \(\mathcal{S}\) such that at least \(\Omega(\mathsf{len}(W)/\log n)\) many moves of \(W\) are 2-moves \(W_{i}=\{u,v\}\) that satisfy_

\[\log^{3}n\leq\#^{2}_{W}(u)\leq\#_{W}(u)\leq 2\log^{7}n\quad\text{and}\quad\#^{2}_{W}(v)\geq\log^{3}n \tag{23}\]

Proof.: We would like to apply Lemma 4.6 (which works on move sequences that consist of 2-moves only). To this end, we let \(\mathcal{S}^{\prime}\) be the move sequence obtained from \(\mathcal{S}\) by removing all of its 1-moves, and let \(N^{\prime}:=\mathsf{len}(\mathcal{S}^{\prime})\geq(1-1/\log^{5}n)N\). Applying Lemma 4.6 on \(\mathcal{S}^{\prime}\) (note that \(\mathcal{S}^{\prime}\) has length \(N^{\prime}\) rather than exactly \(N\), but the statement of Lemma 4.6 still holds), there must exist a positive integer \(L\) such that, among the \(N^{\prime}-L+1\) windows \(W^{\prime}\) of \(\mathcal{S}^{\prime}\) of length \(L\), at least an \(\Omega(1/\log n)\)-fraction of them contain \(\Omega(L/\log n)\) many 2-moves \(\{u,v\}\) satisfying

\[\log^{3}n\leq\#^{2}_{W^{\prime}}(u)\leq\log^{7}n\quad\text{and}\quad\#^{2}_{W^{\prime}}(v)\geq\log^{3}n. \tag{24}\]

Let us denote these windows of \(\mathcal{S}^{\prime}\) by \(W^{\prime}_{1},\ldots,W^{\prime}_{s}\) for some \(s=\Omega((N^{\prime}-L+1)/\log n)\). For each \(W^{\prime}_{i}\), we let \(\mathcal{S}_{k_{i}}\) (or \(\mathcal{S}_{\ell_{i}}\)) denote the move in \(\mathcal{S}\) that corresponds to the first (or last, respectively) move in \(W^{\prime}_{i}\), and let \(W_{i}\) denote the window \((\mathcal{S}_{k_{i}},\ldots,\mathcal{S}_{\ell_{i}})\) of \(\mathcal{S}\).

If \(L\geq N^{\prime}/2\), we can just take \(W\) to be \(W_{1}\). We note that the number of 2-moves in \(W_{1}\) that satisfy (24) is at least \(\Omega(L/\log n)=\Omega(\mathsf{len}(W_{1})/\log n)\), given that \(L\geq N^{\prime}/2\). On the other hand, the number of \(u\in V(W_{1})\) that appear in at least \(\log^{7}n\) 1-moves of \(W_{1}\) is at most \((N/\log^{5}n)/\log^{7}n=O(N/\log^{12}n)\). Thus, the number of 2-moves \(\{u,v\}\) in \(W_{1}\) that satisfy (24) but not \(\#_{W_{1}}(u)\leq 2\log^{7}n\) is at most

\[\log^{7}n\cdot O\left(\frac{N}{\log^{12}n}\right)=o\left(\frac{L}{\log n}\right).\]

So we assume below that \(L<N^{\prime}/2\). We claim that \(W_{i}\) satisfies the condition of the lemma if \(\mathsf{len}(W_{i})\leq(1+1/\log^{2}n)L\). To see that this is the case, note that then \(W_{i}\) contains at most \(L/\log^{2}n\) many 1-moves, so the number of nodes \(u\in V(W_{i})\) that appear in at least \(\log^{7}n\) many 1-moves of \(W_{i}\) is at most \((L/\log^{2}n)/\log^{7}n=O(L/\log^{9}n)\). Thus, the number of 2-moves \(\{u,v\}\) in \(W_{i}\) that satisfy (24) but not \(\#_{W_{i}}(u)\leq 2\log^{7}n\) is at most

\[\log^{7}n\cdot O\left(\frac{L}{\log^{9}n}\right)=o\left(\frac{L}{\log n}\right).\]

So it suffices to show that \(\mathsf{len}(W_{i})\leq(1+1/\log^{2}n)\cdot\mathsf{len}(W^{\prime}_{i})\) for some window \(W_{i}\). Assume this is not the case. Then the total number of 1-moves in \(W_{1},\ldots,W_{s}\), counted with multiplicity, is at least

\[\frac{1}{\log^{2}n}\cdot\sum_{i\in[s]}\mathsf{len}(W^{\prime}_{i})\geq\Omega\left(\frac{(N^{\prime}-L+1)L}{\log^{3}n}\right)=\Omega\left(\frac{NL}{\log^{3}n}\right),\]

using \(L<N^{\prime}/2\). However, each 1-move can appear in at most \(L\) of the windows \(W_{i}\). Given that there are only \(N/\log^{5}n\) many 1-moves, the same count is upper bounded by

\[\frac{N}{\log^{5}n}\cdot L,\]

a contradiction. This finishes the proof of the lemma.
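The reduction in the proof of Lemma 6.3 amounts to simple bookkeeping, sketched below with our own illustrative names: delete the 1-moves, remember where each surviving 2-move came from, and lift windows of \(\mathcal{S}^{\prime}\) back to windows of \(\mathcal{S}\):

```python
def strip_one_moves(S):
    """Drop the 1-moves from the move sequence S (a list of sets of nodes),
    remembering for each surviving 2-move its original position in S."""
    S_prime, pos = [], []
    for idx, move in enumerate(S):
        if len(move) == 2:          # keep 2-moves only
            S_prime.append(move)
            pos.append(idx)
    return S_prime, pos

def lift_window(pos, k, l):
    """Map the window (S'_k, ..., S'_l) of the 2-move sequence S' back to the
    window (S_{k'}, ..., S_{l'}) of S spanning the same moves."""
    return pos[k], pos[l]
```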
So we now have a valid move sequence \(W\) of length \(L\), as a window of the original valid sequence \(\mathcal{S}\), such that the number of \(2\)-moves in \(W\) that satisfy (23) is at least \(\Omega(L/\log n)\). The rest of the proof follows the same arguments used in Section 5. We give a sketch below.

First we define the same auxiliary graph \(H=(V(W),E)\) such that there is a one-to-one correspondence between \(E\) and the \(2\)-moves in \(W\). Note that the degree of a node \(u\) in \(H\) is the same as \(\#^{2}_{W}(u)\). We then show that there are disjoint sets of nodes \(V_{1},V_{2}\subset V(W)\) and a subset of edges \(E^{\prime}\subseteq E\) that satisfy conditions similar to those of Lemma 5.1:

**Lemma 6.4**.: _There are two disjoint sets of nodes \(V_{1},V_{2}\subset V(W)\) and a subset of edges \(E^{\prime}\subseteq E\) such that_

1. _Every edge in_ \(E^{\prime}\) _has one node in_ \(V_{1}\) _and the other node in_ \(V_{2}\)_;_
2. \(|V_{1}\cup V_{2}|=O(L/\log^{3}n)\) _and_ \(|E^{\prime}|=\Omega(L/\log n)\)_;_
3. \(\#_{W}(u)\leq 2\log^{7}n\) _for every node_ \(u\in V_{1}\)_._

The proof is exactly the same as that of Lemma 5.1, except that we define \(V\) to be the set of nodes \(v\) with \(\#^{2}_{W}(v)\geq\log^{3}n\) and \(V_{h}\) to be the set of nodes \(v\) with \(\#_{W}(v)\geq 2\log^{7}n\).

Next we focus on \(E^{\prime\prime}\), which contains all edges in \(E^{\prime}\) of the same sign (or edges in \(E^{\prime}\) of different signs, whichever contains more edges). Similarly every cycle in \(E^{\prime\prime}\) corresponds to a dependent cycle of \(W\). The case when \(E^{\prime\prime}\) contains many parallel edges can be handled exactly the same way as in Lemma 5.5. So we may delete all parallel edges from \(E^{\prime\prime}\), and finish the proof using Lemma 5.6. The proof of Lemma 5.6 for the general case is very similar, with the following changes:

1. In the definition of \(k_{v}\) for each \(v\in\mathsf{wit}_{2}(\mathcal{E}_{s})\), we need it to be the number of moves (including both \(1\)-moves and \(2\)-moves) in \(W\) that involve at least one node in \(\mathsf{wit}_{1}(v)\). This can still be bounded from above by \(|\mathsf{wit}_{1}(v)|\cdot 2\log^{7}n\) since we have \(\#_{W}(u)\leq 2\log^{7}n\) for all \(u\in V_{1}\) as promised in Lemma 6.4 above.
2. As we commented earlier, Claim 5.10 works even when \(W\) consists of both \(1\)-moves and \(2\)-moves.

This finishes the proof of Lemma 3.3 for the general case.

## 7 Binary Max-CSP and Function Optimization Problems

We recall the definition of binary maximum constraint satisfaction problems, and more generally function optimization problems.

**Definition 7.1**.: _An instance of Binary Max-CSP (Constraint Satisfaction Problem), or MAX 2-CSP, consists of a set \(V=\{x_{1},\ldots,x_{n}\}\) of variables that can take values over \(\{0,1\}\) and a set \(C=\{c_{1},\ldots,c_{m}\}\) of constraints with given respective weights \(w_{1},\ldots,w_{m}\), where each constraint is a predicate on a pair of variables. The MAX 2-CSP problem is: given an instance, find an assignment that maximizes the sum of the weights of the satisfied constraints._

Several problems can be viewed as special cases of Binary Max-CSP where the predicates of the constraints are restricted to belong to a fixed family \(\mathcal{P}\) of predicates; this restricted version is denoted Max-CSP(\(\mathcal{P}\)).
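For illustration, the following Python sketch evaluates a MAX 2-CSP instance in this form and runs a naive 2-FLIP local search; the encoding of constraints and the improving-move loop are our own simplification, and no particular pivoting rule is implied:

```python
import itertools, random

def value(assign, constraints):
    """Total weight of satisfied constraints; a constraint is a tuple
    (i, j, predicate, weight) with a predicate on two 0/1 variables."""
    return sum(w for i, j, pred, w in constraints if pred(assign[i], assign[j]))

def two_flip(assign, constraints):
    """Naive 2-FLIP: repeatedly flip one or two variables while it improves."""
    n, improved = len(assign), True
    while improved:
        improved = False
        flips = itertools.chain(((i,) for i in range(n)),
                                itertools.combinations(range(n), 2))
        for flip in flips:
            trial = assign[:]
            for i in flip:
                trial[i] ^= 1
            if value(trial, constraints) > value(assign, constraints):
                assign, improved = trial, True
                break
    return assign

# Max Cut on a triangle, i.e. Max-CSP({not-equal}), with random positive weights
cons = [(i, j, lambda x, y: x != y, random.random())
        for i, j in [(0, 1), (1, 2), (0, 2)]]
print(two_flip([0, 0, 0], cons))
```

The loop terminates because the objective strictly increases at each step and there are finitely many assignments; the question studied here is how many such steps perturbed weights allow.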
For example, the Max Cut problem in graphs is equivalent to Max-CSP(\(\mathcal{P}\)) where \(\mathcal{P}\) contains only the "not-equal" predicate (\(x\neq y\), where \(x,y\) are the two variables). The Max Directed Cut problem, where the input graph is directed and we seek a partition of the nodes into two parts \(N_{1},N_{2}\) that maximizes the total weight of the edges directed from \(N_{1}\) to \(N_{2}\), corresponds to the case that \(\mathcal{P}\) contains only the \(<\) predicate (i.e. \(x<y\)). MAX 2SAT corresponds to the case that \(\mathcal{P}\) consists of all 4 possible clauses on two variables. A generalization of MAX 2-CSP is the class of _Binary function optimization problems_ (BFOP) where instead of constraints (predicates) we have functions on two arguments that take values in \(\{0,1,\ldots,d\}\) instead of \(\{0,1\}\), where \(d\) is a fixed constant (or even is polynomially bounded). For convenience and consistency with the notation of configurations in the Max Cut problem, we will use in the following \(\{-1,1\}\) as the domain of the variables instead of \(\{0,1\}\). That is, the problem is: Given a set \(V=\{x_{1},\ldots,x_{n}\}\) of variables with domain \(D=\{-1,1\}\), a set \(F=\{f_{1},\ldots,f_{m}\}\) of functions, where each \(f_{i}\) is a function of a pair \((x_{i_{1}},x_{i_{2}})\) of variables, and given respective weights \(w_{1},\ldots,w_{m}\), find an assignment \(\tau:V\to D\) to the variables that maximizes \(\sum_{i=1}^{m}w_{i}\cdot f_{i}(\tau(x_{i_{1}}),\tau(x_{i_{2}}))\). Even though a function in BFOP (or a constraint in Max-2CSP) has two arguments, its value may depend on only one of them, i.e. it may be essentially a unary function (or constraint). More generally, it may be that the two arguments of the function can be decoupled and the function can be separated into two unary functions. We say that a binary function \(f(x,y)\) is _separable_ if there are unary functions \(f_{1},f_{2}\) such that \(f(x,y)=f_{1}(x)+f_{2}(y)\) for all values of \(x,y\); otherwise \(f\) is _nonseparable_. For binary domains there is a simple criterion for separability: a function \(f(x,y)\) is separable if and only if \(f(-1,-1)+f(1,1)=f(-1,1)+f(1,-1)\)[25]. If in a given BFOP instance some binary functions are separable, then we can decompose them into the equivalent unary functions. Thus, we may assume, without loss of generality, that a given BFOP instance has unary and binary functions, where all the binary functions are nonseparable. We say that an instance is _complete_, if every pair of variables appear as the arguments of a (nonseparable) binary function in the instance. The 2-FLIP local search algorithm can be applied to a MAX 2-CSP or BFOP problem to compute a locally optimal assignment that cannot be improved by flipping the value of any one or two variables. We will show that the smoothed complexity of 2-FLIP for any complete MAX 2-CSP or BFOP instance is (at most) quasipolynomial. **Theorem 1.3**.: _Let \(I\) be an arbitrary complete instance of a MAX 2-CSP (or BFOP) problem with \(n\) variables and \(m\) constraints (functions) with independent random weights in \([-1,1]\) with density at most \(\phi>0\). 
Then, with probability at least \(1-o_{n}(1)\) over the draw of the weights, any implementation of 2-FLIP takes at most \(m\phi n^{O(\log^{10}n)}\) steps to terminate._

Proof.: Consider a (complete) instance \(I\) of a BFOP problem with \(n\) variables and \(m\) functions, and a sequence \(\mathcal{S}\) of moves of 2-FLIP starting from an initial configuration. The proof follows the same structure as the proof for Max Cut. The only thing that changes is the improvement vector in each step, which depends on the specific functions of the instance: the vector has one coordinate for each function \(f_{i}\) in the instance and the entry is equal to the change in the value of the function resulting from the move. Arcs and cycles of \(\mathcal{S}\) are defined in the same way as in Max Cut, and the improvement vectors of arcs and cycles are defined in an analogous way from the improvement vectors of the moves.

The heart of the proof for Max Cut is Lemma 3.3 which showed that there is a window \(\mathcal{W}\) and a set of arcs or a set of cycles of \(\mathcal{W}\) whose improvement vectors have rank \(\Omega(\frac{\mathsf{len}(\mathcal{W})}{\log^{10}n})\). We will show that the lemma holds for any BFOP problem.

We associate with the BFOP instance \(I\) the graph \(G\) where the nodes correspond to the variables of \(I\) and the edges correspond to the binary functions of \(I\); since \(I\) is a complete instance, the graph \(G\) is the complete graph, possibly with multiple edges connecting the same pair of nodes (if there are multiple functions with the same pair of arguments). We will identify the variables of \(I\) with the nodes of \(G\) and the functions of \(I\) with the edges of \(G\).

In the general case of the Max Cut problem, in Case 1 where there is a large number of 1-moves, we identified a window \(\mathcal{W}\) and a large set \(A^{\prime}\) of arcs in the window whose set of improvement vectors are linearly independent. The argument relied only on the zero-nonzero structure of the improvement vectors: it showed that the matrix \(M\) formed by these vectors and a set of rows corresponding to a certain set \(E^{\prime}\) of witness edges is a lower triangular matrix with nonzero diagonal. Take a set \(F^{\prime}\) of functions of \(I\) that contains for each edge \(\{u,v\}\in E^{\prime}\) a function \(f_{k}(u,v)\) with this pair as arguments (it exists because the instance \(I\) is complete), and form the matrix \(M^{\prime}\) with the set \(F^{\prime}\) as rows and the set \(A^{\prime}\) of arcs as columns. We will show that the matrix \(M^{\prime}\) has the same zero-nonzero structure as \(M\), thus it also has full rank.

Consider an arc of the move sequence \(\mathcal{S}\) corresponding to two moves \(\mathcal{S}_{i}=\{u\}\), \(\mathcal{S}_{j}=\{u\}\), \(i<j\), and a function \(f_{k}\) of \(I\). If \(u\) is not one of the arguments of the function, then the corresponding entry of the improvement vector of the arc is obviously \(0\). If \(u\) is one of the arguments, i.e. the \(k\)-th function is \(f_{k}(u,v)\) (similarly if it is \(f_{k}(v,u)\)), then the corresponding entry of the improvement vector of the arc is \(\gamma_{i}(u)[f_{k}(\gamma_{i}(u),\gamma_{i}(v))-f_{k}(-\gamma_{i}(u),\gamma_{i}(v))]-\gamma_{j}(u)[f_{k}(\gamma_{j}(u),\gamma_{j}(v))-f_{k}(-\gamma_{j}(u),\gamma_{j}(v))]\).
If \(v\) moves an even number of times between \(\mathcal{S}_{i}\) and \(\mathcal{S}_{j}\), then \(\gamma_{i}(v)=\gamma_{j}(v)\) and it follows that the entry is \(0\), both in the case that \(\gamma_{i}(u)=\gamma_{j}(u)\) and in the case that \(\gamma_{i}(u)=-\gamma_{j}(u)\). On the other hand, if \(v\) moves an odd number of times between \(\mathcal{S}_{i}\) and \(\mathcal{S}_{j}\), then \(\gamma_{i}(v)=-\gamma_{j}(v)\) and it follows that the \(k\)-th entry of the improvement vector is \(\gamma_{i}(u)[f_{k}(\gamma_{i}(u),\gamma_{i}(v))-f_{k}(-\gamma_{i}(u),\gamma_ {i}(v))]-\gamma_{j}(u)[f_{k}(\gamma_{j}(u),-\gamma_{i}(v))-f_{k}(-\gamma_{j}( u),-\gamma_{i}(v))]\). Letting \(\gamma_{i}(u)=a,\gamma_{i}(v)=b\), the entry is \(a[f_{k}(a,b)+f_{k}(-a,-b)-f_{k}(-a,b)-f_{k}(a,-b)]\) (both when \(\gamma_{i}(u)=\gamma_{j}(u)\) and when \(\gamma_{i}(u)=-\gamma_{j}(u)\)); this quantity is nonzero because \(f_{k}\) is nonseparable. Thus, the entry for \(f_{k}(u,v)\) of the improvement vector of the arc is nonzero exactly when the entry of the arc in the Max Cut problem for the edge \((u,v)\) is nonzero. It follows that the matrix \(M^{\prime}\) has the same zero-nonzero structure as \(M\), thus it also has full rank. In Case 2 of the Max Cut problem, where the number of 2-moves is very large, there were two subcases. In the first subcase, where there are many parallel edges in the graph that we associated with the window of the move sequence, we found a large set of 2-cycles whose improvement vectors were linearly independent. In the other case, where there are "few" parallel edges, we constructed a large set of cycles (of length \(O(\log n)\)), again with linearly independent improvement vectors. In both cases, the proof of linear independence relied again only on the zero-nonzero structure of the vectors, and not on the precise value of the entries. We will argue that in both cases, the corresponding vectors of these cycles in the BFOP instance \(I\) have the same zero-nonzero structure. In the first subcase we found many 2-cycles \((u_{1},v_{1}),\ldots,(u_{k},v_{k})\), and corresponding "witness" edges \((u_{1},z_{1}),\ldots,(u_{k},z_{k})\) such that the matrix \(M\) with rows corresponding to the witness edges and columns corresponding to the 2-cycles in the Max Cut problem is lower triangular with nonzero diagonal. The nodes \(u_{i}\) are distinct (the \(v_{i}\) and the \(z_{i}\) may not be distinct) and \(z_{i}\neq u_{i},v_{i}\) for all \(i\). For each witness pair \((u_{i},z_{i})\) pick a function \(f_{r_{i}}\) of instance \(I\) with this pair of variables as arguments, in either order, say wlog the function is \(f_{r_{i}}(u_{i},z_{i})\). Consider the matrix \(M^{\prime}\) with rows corresponding to the functions \(f_{r_{i}}(u_{i},z_{i})\) and columns corresponding to the 2-cycles \((u_{i},v_{i})\). Note that the entry \(M(j,i)\) is nonzero if one of the nodes \(u_{j},z_{j}\) is in \(\{u_{i},v_{i}\}\) and the other node appears an odd number of times between the two moves \(\{u_{i},v_{i}\}\), and it is 0 otherwise, i.e. if \(\{u_{j},z_{j}\}\cap\{u_{i},v_{i}\}=\emptyset\), or if one of \(u_{j},z_{j}\) is in \(\{u_{i},v_{i}\}\) and the other node appears an even number of times between the two moves \(\{u_{i},v_{i}\}\). Importantly it cannot be that \(\{u_{j},z_{j}\}=\{u_{i},v_{i}\}\) because \(u_{j}\neq u_{i},v_{i}\). 
Examining the value \(M^{\prime}(j,i)\) in the same way as in the case of arcs above, we observe that if \(M(j,i)=0\) then also \(M^{\prime}(j,i)=0\), and if \(M(j,i)\neq 0\) then also \(M^{\prime}(j,i)\neq 0\). Thus, \(M^{\prime}\) has the same zero-nonzero structure as \(M\) and hence it also has full rank.

In the second subcase of Case 2, we found many cycles \(C_{1},\ldots,C_{k}\) and corresponding witness edges \(\{u_{i},v_{i}\}\) such that for every \(i\), (1) \(C_{i}\) does not contain any \(u_{j}\) for \(j<i\), nor \(v_{i}\), (2) \(C_{i}\) has exactly two edges incident to \(u_{i}\) and node \(v_{i}\) appears an odd number of times between the two moves corresponding to these two edges, (3) if \(C_{i}\) contains \(v_{j}\) for some \(j<i\) (the cycle \(C_{i}\) may go more than once through \(v_{j}\)), then \(u_{j}\) does not appear between any pair of moves that correspond to consecutive edges of the cycle \(C_{i}\) incident to \(v_{j}\). We used these properties in Max Cut to show that the matrix \(M\) whose rows correspond to the witness edges and the columns correspond to the cycles \(C_{i}\) is lower triangular with nonzero diagonal.

As before, for each witness pair \((u_{i},v_{i})\) pick a function \(f_{r_{i}}\) of instance \(I\) with this pair of variables as arguments, and let \(M^{\prime}\) be the matrix with these functions as rows and the cycles \(C_{i}\) as columns. We can use the above properties to show that the matrix \(M^{\prime}\) is also lower triangular with nonzero diagonal. Property (2) and the fact that \(v_{i}\notin C_{i}\) (from property (1)) imply that \(M^{\prime}(i,i)\neq 0\) for all \(i\). Properties (1) and (3) can be used to show that \(M^{\prime}(j,i)=0\) for all \(j<i\). Therefore, \(M^{\prime}\) has full rank.

Once we have Lemma 3.3 for the BFOP instance \(I\), the rest of the proof is the same as for Max Cut. The only difference is that, if the maximum value of a function in \(I\) is \(d\) (a constant, or even polynomial in \(n\)), then the maximum absolute value of the objective function is \(md\) instead of the \(n^{2}\) that it was in Max Cut.

## Conclusions

We analyzed the smoothed complexity of the SWAP algorithm for Graph Partitioning and the 2-FLIP algorithm for Max Cut and showed that with high probability the algorithms terminate in quasi-polynomial time for any pivoting rule. The same result holds more generally for the class of maximum binary constraint satisfaction problems (like Max-2SAT, Max Directed Cut, and others). We have not made any attempt currently to optimize the exponent of \(\log n\) in the bound, but we believe that with a more careful analysis the true exponent will be low.

There are several interesting open questions raised by this work. We list some of them below.

1. Can our bounds be improved to polynomial? In the case of the 1-FLIP algorithm in the full perturbation model (i.e. when all edges of \(K_{n}\) are perturbed) a polynomial bound was proved in [23]. Can a similar result be shown for 2-FLIP and SWAP?
2. Can our results be extended to the structured smoothed model, i.e., when we are given a graph \(G\) and only the edges of \(G\) are perturbed? In the case of 1-FLIP we know that this holds [22, 25], but 2-FLIP is much more challenging.
3. We saw in this paper how to analyze local search when one move flips simultaneously two nodes. This is a qualitative step up from the case of single flips, which creates a number of obstacles that had to be addressed.
This involved the introduction of nontrivial new techniques in the analysis of the sequence of moves, going from sets to graphs. Dealing with local search that flips 3 or more nodes will require extending the methods further to deal with hypergraphs. We hope that our techniques will form the basis for handling local search algorithms that flip multiple nodes in one move, e.g. \(k\)-FLIP for higher \(k\), and even more ambitiously powerful methods like Kernighan-Lin that perform a deep search in each iteration and flip/swap an unbounded number of nodes.
4. Can our results be extended to Max \(k\)-Cut or \(k\)-Graph Partitioning where the graph is partitioned into \(k>2\) parts? In the case of 1-FLIP for Max \(k\)-Cut quasi-polynomial bounds were shown in [24].
5. Can similar results be shown for Max-CSP with constraints of higher arities, for example Max 3SAT? No bounds are known even for 1-FLIP. In fact, analyzing 1-FLIP for Max 3SAT seems to present challenges that have similarities with those encountered in the analysis of 2-FLIP for Max 2SAT and Max Cut, so it is possible that the techniques developed in this paper will be useful also in addressing this problem.

## Appendix A Missing Proofs from Section 2

**Lemma A.1**.: _For any configuration \(\gamma_{0}\in\{\pm 1\}^{n}\) that is an extension of \(\tau_{0}\), letting \(\gamma_{0},\gamma_{1},\dots,\gamma_{\ell}\in\{\pm 1\}^{n}\) be the sequence of configurations induced by \(\mathcal{S}\) and letting \(w[u,i,j]:=\gamma_{i}(u)\cdot\mathsf{imprv}_{\gamma_{0},\mathcal{S}}(i)-\gamma_{j}(u)\cdot\mathsf{imprv}_{\gamma_{0},\mathcal{S}}(j)\), we have that_
\[(w[u,i,j]_{e})_{e\in E_{n}}=\begin{cases}(\tau_{i}(u)\cdot\mathsf{imprv}_{\tau_{0},\mathcal{S}}(i)-\tau_{j}(u)\cdot\mathsf{imprv}_{\tau_{0},\mathcal{S}}(j))_{e}&\text{ for every entry }e\in E(\mathcal{S}),\\ 0&\text{ otherwise,}\end{cases}\]
_for any arbitrary choice of \(u\in V(\mathcal{S})\)._

Proof.: Note that \(\tau_{i}(u)=\gamma_{i}(u)\), \(\tau_{j}(u)=\gamma_{j}(u)\), and by definition, \(\mathsf{imprv}_{\tau_{0},\mathcal{S}}(i)=\mathsf{imprv}_{\gamma_{0},\mathcal{S}}(i)^{*}\) is the projection of \(\mathsf{imprv}_{\gamma_{0},\mathcal{S}}(i)\) on \(E(\mathcal{S})\). Thus, for any edge \(e\in E(\mathcal{S})\), \(w[u,i,j]_{e}\) is the same as \((\tau_{i}(u)\cdot\mathsf{imprv}_{\tau_{0},\mathcal{S}}(i)-\tau_{j}(u)\cdot\mathsf{imprv}_{\tau_{0},\mathcal{S}}(j))_{e}\). For any edge \(e\notin E(\mathcal{S})\), if \(e=\{v_{1},v_{2}\}\) doesn't contain \(u\), the improvement vectors are \(0\) on \(e\) and correspondingly \(w[u,i,j]_{e}=0\). For the last case, let us assume \(e=\{u,v\}\) where \(v\) is inactive. We have that
\[\mathsf{imprv}_{\gamma_{0},\mathcal{S}}(i)_{e}=\gamma_{i-1}(v)\gamma_{i-1}(u)=-\gamma_{i-1}(v)\gamma_{i}(u)\;\;\&\;\;\mathsf{imprv}_{\gamma_{0},\mathcal{S}}(j)_{e}=-\gamma_{j-1}(v)\gamma_{j}(u).\]
Since \(v\) is not active, \(\gamma_{i}(v)=\gamma_{0}(v)\) for any \(i\in[\ell]\). So, we get that
\[(\gamma_{i}(u)\cdot\mathsf{imprv}_{\gamma_{0},\mathcal{S}}(i)-\gamma_{j}(u)\cdot\mathsf{imprv}_{\gamma_{0},\mathcal{S}}(j))_{e}=-\gamma_{i-1}(v)+\gamma_{j-1}(v)=0.\]
This finishes the proof of the lemma.

**Lemma A.2**.: _Let \(\mathcal{C}=(c_{1},\dots,c_{t})\) be a dependent cycle of \(\mathcal{S}\) and let \(b\) be its cancellation vector.
Then for any configurations \(\tau_{0}\in\{\pm 1\}^{V(\mathcal{S})}\) and \(\gamma_{0}\in\{\pm 1\}^{n}\) such that \(\gamma_{0}\) is an extension of \(\tau_{0}\), letting \(w[\mathcal{C}]:=\sum_{j\in[t]}b_{j}\cdot\mathsf{imprv}_{\gamma_{0},\mathcal{S}}(c_{j})\), we have that_
\[(w[\mathcal{C}]_{e})_{e\in E_{n}}=\begin{cases}(\sum_{j\in[t]}b_{j}\cdot\mathsf{imprv}_{\tau_{0},\mathcal{S}}(c_{j}))_{e}&\text{ for every entry }e\in E(\mathcal{S}),\\ 0&\text{ otherwise.}\end{cases}\]

Proof.: Again recall that \(\tau_{i}(u)=\gamma_{i}(u)\), \(\tau_{j}(u)=\gamma_{j}(u)\), and by definition, \(\mathsf{imprv}_{\tau_{0},\mathcal{S}}(i)=\mathsf{imprv}_{\gamma_{0},\mathcal{S}}(i)^{*}\) is the projection of \(\mathsf{imprv}_{\gamma_{0},\mathcal{S}}(i)\) on \(E(\mathcal{S})\). Thus, for an edge \(e\in E(\mathcal{S})\), \(w[\mathcal{C}]_{e}\) is the same as \(\sum_{j\in[t]}b_{j}\cdot\mathsf{imprv}_{\tau_{0},\mathcal{S}}(c_{j})_{e}\). For an edge \(e\notin E(\mathcal{S})\), if \(e\) doesn't contain \(u_{i}\) for any \(i\in[t]\), the improvement vectors are \(0\) on \(e\) and correspondingly \(w[\mathcal{C}]_{e}=0\). For the last case, let us assume \(e=\{u,v\}\) where \(v\) is inactive. Then we have (where index \(0\) corresponds to \(t\))
\[\sum_{j\in[t]}b_{j}\cdot\mathsf{imprv}_{\gamma_{0},\mathcal{S}}(c_{j})_{e}=\sum_{i\in[t]:u_{i}=u}\big(b_{i-1}\mathsf{imprv}_{\gamma_{0},\mathcal{S}}(c_{i-1})_{e}+b_{i}\mathsf{imprv}_{\gamma_{0},\mathcal{S}}(c_{i})_{e}\big)=\sum_{i\in[t]:u_{i}=u}\big(-b_{i-1}\gamma_{c_{i-1}-1}(v)\gamma_{c_{i-1}}(u_{i})-b_{i}\gamma_{c_{i}-1}(v)\gamma_{c_{i}}(u_{i})\big).\]
Since \(v\) is inactive, \(\gamma_{c_{i-1}-1}(v)=\gamma_{c_{i}-1}(v)=\gamma_{0}(v)\). Each term above is equal to
\[-\gamma_{0}(v)(b_{i-1}\gamma_{c_{i-1}}(u_{i})+b_{i}\gamma_{c_{i}}(u_{i}))=0.\]
This finishes the proof of the lemma.

## Appendix B Rank Invariance of Improvement Vectors over the Initial Configuration

In this section we prove that the rank of the improvement vectors for the sets _1-move_(\(\mathcal{S}\)), _2-move_(\(\mathcal{S}\)), \(\mathsf{arcs}(\mathcal{S})\) and \(\mathsf{cycles}(\mathcal{S})\) is independent of the initial configuration \(\gamma_{0}\) of the vertices.

Proof of Lemmas 2.4, 2.8 & 2.10.: We start by recalling the following useful facts:

**Fact B.1**.: _Let \(\gamma_{0},\gamma_{0}^{\prime}\in\{\pm 1\}^{n}\) be two arbitrary initial configurations. Then, it holds that_
\[\gamma_{0}(v)\gamma_{0}^{\prime}(v)=\gamma_{i}(v)\gamma_{i}^{\prime}(v)\quad\text{for any $i\in[\ell]$,}\]
_where \(\gamma_{i},\gamma_{i}^{\prime}\) are obtained from \(\gamma_{i-1},\gamma_{i-1}^{\prime}\) by flipping the nodes in \(\mathcal{S}_{i}\), for a move sequence \(\mathcal{S}=(\mathcal{S}_{1},\dots,\mathcal{S}_{\ell})\)._

**Fact B.2**.: _Let \(A\) be a \((k_{1}\times k_{2})\) real-valued matrix and let \(B,C\) be full-rank \((k_{1}\times k_{1})\) and \((k_{2}\times k_{2})\) square matrices, respectively.
Then, it holds that \(\mathsf{rank}(A)=\mathsf{rank}(A^{\top})\) and \(\mathsf{rank}(A)=\mathsf{rank}(BAC)\)._

Let \(\mathcal{M}_{\text{1-move}}(\gamma_{0},\mathcal{S})\) and \(\mathcal{M}_{\text{2-move}}(\gamma_{0},\mathcal{S})\) be the matrices whose columns are the improvement vectors of the \(1\)-moves and \(2\)-moves, respectively, for a given initial configuration \(\gamma_{0}\), and let \(\mathcal{M}_{\text{1-move}}(\tau_{0},\mathcal{S})\) and \(\mathcal{M}_{\text{2-move}}(\tau_{0},\mathcal{S})\) be their submatrices including only the rows which correspond to \(E(\mathcal{S})\). Schematically, we have for the _1-move_ case:
\[\mathcal{M}_{\text{1-move}}(\tau_{0},\mathcal{S})=\begin{bmatrix}&\vdots&\\ \cdots&(\mathsf{imprv}_{\tau_{0},\mathcal{S}}(\mathcal{S}_{k}=\{u\}))_{e=\{u,v\}}=\tau_{k-1}(u)\tau_{k-1}(v)&\cdots\\ &\vdots&\\ \cdots&(\mathsf{imprv}_{\tau_{0},\mathcal{S}}(\mathcal{S}_{k}=\{u\}))_{e=\{z,w\}}=0&\cdots\\ &\vdots&\end{bmatrix}_{|E(\mathcal{S})|,|\text{1-move}(\mathcal{S})|}\]
and for the _2-move_ case:
\[\mathcal{M}_{\text{2-move}}(\tau_{0},\mathcal{S})=\begin{bmatrix}&\vdots&\\ \cdots&(\mathsf{imprv}_{\tau_{0},\mathcal{S}}(\mathcal{S}_{k^{\prime}}=\{u,v\}))_{e=\{u,v\}}=0&\cdots\\ \cdots&(\mathsf{imprv}_{\tau_{0},\mathcal{S}}(\mathcal{S}_{k^{\prime}}=\{u,v\}))_{e=\{u,z\}}=\tau_{k^{\prime}-1}(u)\tau_{k^{\prime}-1}(z)&\cdots\\ \cdots&(\mathsf{imprv}_{\tau_{0},\mathcal{S}}(\mathcal{S}_{k^{\prime}}=\{u,v\}))_{e=\{v,w\}}=\tau_{k^{\prime}-1}(v)\tau_{k^{\prime}-1}(w)&\cdots\\ &\vdots&\\ \cdots&(\mathsf{imprv}_{\tau_{0},\mathcal{S}}(\mathcal{S}_{k^{\prime}}=\{u,v\}))_{e=\{z,w\}}=0&\cdots\\ &\vdots&\end{bmatrix}_{|E(\mathcal{S})|,|\text{2-move}(\mathcal{S})|}\]
Notice that we can derive \(\mathcal{M}_{\text{1-move}}(\tau^{\prime}_{0},\mathcal{S})\) and \(\mathcal{M}_{\text{2-move}}(\tau^{\prime}_{0},\mathcal{S})\) by multiplying \(\mathcal{M}_{\text{1-move}}(\tau_{0},\mathcal{S})\) and \(\mathcal{M}_{\text{2-move}}(\tau_{0},\mathcal{S})\) from the left by the square diagonal \(|E_{n}|\times|E_{n}|\) matrix \(\mathcal{D}[\tau_{0},\tau^{\prime}_{0}]\) such that
\[\mathcal{D}[\tau_{0},\tau^{\prime}_{0}]_{(e,e)=((u,v),(u,v))}=\tau_{0}(u)\tau^{\prime}_{0}(u)\tau_{0}(v)\tau^{\prime}_{0}(v)\text{ and }\mathcal{D}[\tau_{0},\tau^{\prime}_{0}]_{(e,e^{\prime})}=0\text{ for }e\neq e^{\prime}.\]
Indeed, we have that for an entry \((e,k)\) representing an edge \(e=(u,v)\) and the \(k\)-th \(\mu\)-move
\[(\mathcal{D}[\tau_{0},\tau^{\prime}_{0}]\mathcal{M}_{\mu\text{-move}}(\tau_{0},\mathcal{S}))_{e,k}=\begin{cases}\tau_{0}(u)\tau^{\prime}_{0}(u)\tau_{0}(v)\tau^{\prime}_{0}(v)\tau_{k}(u)\tau_{k}(v)&\text{if }(\mathcal{M}_{\mu\text{-move}}(\tau_{0},\mathcal{S}))_{e,k}=\tau_{k}(u)\tau_{k}(v)\\ 0&\text{if }(\mathcal{M}_{\mu\text{-move}}(\tau_{0},\mathcal{S}))_{e,k}=0\end{cases}\overset{\text{Fact B.1}}{=}\begin{cases}\tau_{k}(u)\tau^{\prime}_{k}(u)\tau_{k}(v)\tau^{\prime}_{k}(v)\tau_{k}(u)\tau_{k}(v)&\text{if }(\mathcal{M}_{\mu\text{-move}}(\tau_{0},\mathcal{S}))_{e,k}=\tau_{k}(u)\tau_{k}(v)\\ 0&\text{if }(\mathcal{M}_{\mu\text{-move}}(\tau_{0},\mathcal{S}))_{e,k}=0\end{cases}\]
Since \(\tau_{i}(v)^{2}\) equals \(1\) for any \(v\in V_{n}\) and every \(i\in[\ell]\), we get that
\[(\mathcal{D}[\tau_{0},\tau^{\prime}_{0}]\mathcal{M}_{\mu\text{-move}}(\tau_{0},\mathcal{S}))_{e,k}=\begin{cases}\tau^{\prime}_{k}(u)\tau^{\prime}_{k}(v)&\text{if }(\mathcal{M}_{\mu\text{-move}}(\tau_{0},\mathcal{S}))_{e,k}=\tau_{k}(u)\tau_{k}(v)\\ 0&\text{if }
(\mathcal{M}_{\mu\text{-move}}(\tau_{0},\mathcal{S}))_{e,k}=0\end{cases}=(\mathcal{M}_{\mu\text{-move}}(\tau^{\prime}_{0},\mathcal{S}))_{e,k}\]
for any \(\mu\in\{1,2\}\). Since \(\mathcal{D}[\tau_{0},\tau^{\prime}_{0}]\) is a full-rank matrix, leveraging Fact B.2, the above argument proves that \(\mathsf{rank}(\mathcal{M}_{\mu\text{-move}}(\tau_{0},\mathcal{S}))\) is independent of the initial configuration.

More interestingly, in order to prove Lemma 2.4, it suffices to prove that the rank of the column matrix with the improvement vectors of arcs is independent of the initial configuration. In fact, let
\[M_{\mathsf{arcs}(\mathcal{S})}(\tau_{0},\mathcal{S})=\begin{bmatrix}\vdots&&\vdots&&\vdots\\ \mathsf{imprv}_{\tau_{0},\mathcal{S}}(\alpha_{1})&\cdots&\mathsf{imprv}_{\tau_{0},\mathcal{S}}(\alpha_{k})&\cdots&\mathsf{imprv}_{\tau_{0},\mathcal{S}}(\alpha_{|\mathsf{arcs}(\mathcal{S})|})\\ \vdots&&\vdots&&\vdots\end{bmatrix}\]
By definition, we have that for any \(\alpha\in\mathsf{arcs}(\mathcal{S})\):
\[\mathsf{imprv}_{\tau_{0},\mathcal{S}}(\alpha)=\tau_{i}(u)\cdot\mathsf{imprv}_{\tau_{0},\mathcal{S}}(i)-\tau_{j}(u)\cdot\mathsf{imprv}_{\tau_{0},\mathcal{S}}(j)\in\mathbb{Z}^{E(\mathcal{S})},\]
or equivalently \(M_{\mathsf{arcs}(\mathcal{S})}(\tau_{0},\mathcal{S})=\mathcal{M}_{\text{1-move}}(\tau_{0},\mathcal{S})\cdot\mathcal{T}(\tau_{0},\mathcal{S})\), where \(\mathcal{T}(\tau_{0},\mathcal{S})\) is a sparse (\(|\text{1-move}(\mathcal{S})|\times|\mathsf{arcs}(\mathcal{S})|\)) rectangular matrix such that
\[\mathcal{T}(\tau_{0},\mathcal{S})_{k\text{-th }1\text{-move},\,\alpha=(i,j)}=\begin{cases}(-1)^{\frac{(k-i)}{(j-i)}}\tau_{k}(u)&k\in\{i,j\}\\ 0&\text{otherwise}\end{cases},\text{ where }u\text{ is the corresponding node of arc }\alpha.\]
Schematically, the matrix \(\mathcal{T}(\tau_{0},\mathcal{S})\) includes the \((\tau_{i}(node(\alpha)),-\tau_{j}(node(\alpha)))_{\alpha=(i,j)\in\mathsf{arcs}(\mathcal{S})}\) pairs expanded to \(\{0,\pm 1\}^{\text{1-move}(\mathcal{S})}\).
\[\mathcal{T}(\tau_{0},\mathcal{S})=\begin{bmatrix}0&\tau_{i}(node(\alpha_{1}=(i,j)))&0&\cdots&-\tau_{j}(node(\alpha_{1}=(i,j)))&0&\cdots&0\\ \vdots&&\vdots&&\vdots&&\vdots\\ 0&\cdots&\tau_{i^{\prime}}(node(\alpha_{|\mathsf{arcs}(\mathcal{S})|}=(i^{\prime},j^{\prime})))&0&\cdots&-\tau_{j^{\prime}}(node(\alpha_{|\mathsf{arcs}(\mathcal{S})|}=(i^{\prime},j^{\prime})))&0&0\end{bmatrix}^{\top}.\]
Again, notice that we can derive \(\mathcal{T}(\tau^{\prime}_{0})\) by multiplying \(\mathcal{T}(\tau_{0})\) from the right by the square diagonal \(|\mathsf{arcs}(\mathcal{S})|\times|\mathsf{arcs}(\mathcal{S})|\) matrix \(\mathcal{D}^{\prime}\) such that
\[\mathcal{D}^{\prime}[\tau_{0},\tau^{\prime}_{0}]_{(\alpha,\alpha)=((i,j),(i,j))}=\tau_{0}(u)\tau^{\prime}_{0}(u)\text{, where }u\text{ is the node of }\alpha\text{, and }\mathcal{D}^{\prime}[\tau_{0},\tau^{\prime}_{0}]_{(\alpha,\alpha^{\prime})}=0\text{ for }\alpha\neq\alpha^{\prime}.\]
Indeed, we have that for an entry \((k,\alpha)\) representing an arc \(\alpha=(i,j)\) and the \(k\)-th \(1\)-move
\[(\mathcal{T}(\tau_{0})\mathcal{D}^{\prime}[\tau_{0},\tau^{\prime}_{0}])_{k,\alpha}=\begin{cases}\tau_{0}(u)\tau^{\prime}_{0}(u)\tau_{i}(u)=\tau^{\prime}_{i}(u)\tau_{i}(u)\tau_{i}(u)=\tau^{\prime}_{i}(u)&\text{if }(\mathcal{T}(\tau_{0}))_{k,\alpha}=\tau_{i}(u)\\ -\tau_{0}(u)\tau^{\prime}_{0}(u)\tau_{j}(u)=-\tau^{\prime}_{j}(u)\tau_{j}(u)\tau_{j}(u)=-\tau^{\prime}_{j}(u)&\text{if }(\mathcal{T}(\tau_{0}))_{k,\alpha}=-\tau_{j}(u)\\ 0&\text{if }(\mathcal{T}(\tau_{0}))_{k,\alpha}=0\end{cases}\]
where the first equality leverages Fact B.1 and the second one uses the fact that \(\tau_{i}(v)^{2}\) equals \(1\) for any \(v\in V_{n}\) and every \(i\in[\ell]\). To sum up, it holds that
\[M_{\mathsf{arcs}(\mathcal{S})}(\tau^{\prime}_{0},\mathcal{S})=\mathcal{M}_{\text{1-move}}(\tau^{\prime}_{0},\mathcal{S})\cdot\mathcal{T}(\tau^{\prime}_{0},\mathcal{S})=\big(\mathcal{D}[\tau_{0},\tau^{\prime}_{0}]\mathcal{M}_{\text{1-move}}(\tau_{0},\mathcal{S})\big)\cdot\big(\mathcal{T}(\tau_{0},\mathcal{S})\mathcal{D}^{\prime}[\tau_{0},\tau^{\prime}_{0}]\big)=\mathcal{D}[\tau_{0},\tau^{\prime}_{0}]\big(\mathcal{M}_{\text{1-move}}(\tau_{0},\mathcal{S})\cdot\mathcal{T}(\tau_{0},\mathcal{S})\big)\mathcal{D}^{\prime}[\tau_{0},\tau^{\prime}_{0}]=\mathcal{D}[\tau_{0},\tau^{\prime}_{0}]\,M_{\mathsf{arcs}(\mathcal{S})}(\tau_{0},\mathcal{S})\,\mathcal{D}^{\prime}[\tau_{0},\tau^{\prime}_{0}].\]
Given that \(\mathcal{D}^{\prime}[\tau_{0},\tau^{\prime}_{0}]\) and \(\mathcal{D}[\tau_{0},\tau^{\prime}_{0}]\) are full-rank matrices, by Fact B.2 the above argument proves that \(\mathsf{rank}(\mathcal{M}_{\mathsf{arcs}(\mathcal{S})}(\tau_{0},\mathcal{S}))\) is independent of the initial configuration.

Now, in order to prove Lemma 2.8, we recall the definition of a dependent cycle \(\mathcal{C}\) of size \(t\), namely \(\mathcal{C}=\big(c_{1}=\{u_{1},u_{2}\},c_{2}=\{u_{2},u_{3}\},\ldots,c_{t}=\{u_{t},u_{1}\}\big)\), in a move sequence \(\mathcal{S}\).
For an initial configuration \(\tau_{0}\in\{\pm 1\}^{V(\mathcal{S})}\), we say that the cycle is _dependent_ with respect to \(\tau_{0}\) if there exists \(b\in\{\pm 1\}^{t}\) such that
\[\begin{bmatrix}\tau_{c_{1}}(u_{1})&0&\cdots&\cdots&0&\tau_{c_{t}}(u_{1})\\ \tau_{c_{1}}(u_{2})&\tau_{c_{2}}(u_{2})&0&\cdots&\cdots&0\\ 0&\tau_{c_{2}}(u_{3})&\tau_{c_{3}}(u_{3})&0&\cdots&0\\ \vdots&\vdots&\ddots&\ddots&\ddots&\vdots\\ 0&0&0&0&\tau_{c_{t-1}}(u_{t})&\tau_{c_{t}}(u_{t})\end{bmatrix}\begin{bmatrix}b_{1}\\ b_{2}\\ \vdots\\ b_{t}\end{bmatrix}=\begin{bmatrix}0\\ 0\\ \vdots\\ 0\end{bmatrix}\equiv\Delta(\tau_{0})\cdot b=0_{t}.\]
Again, notice that we can derive \(\Delta(\tau^{\prime}_{0})\) by multiplying \(\Delta(\tau_{0})\) from the left by the square diagonal \(t\times t\) matrix \(\mathcal{D}^{\prime\prime}\) such that
\[\mathcal{D}^{\prime\prime}[\tau_{0},\tau^{\prime}_{0}]_{(u_{k},u_{k^{\prime}})}=\tau_{0}(u_{k})\tau^{\prime}_{0}(u_{k})\text{ for }k=k^{\prime}\text{ and }\mathcal{D}^{\prime\prime}[\tau_{0},\tau^{\prime}_{0}]_{(u_{k},u_{k^{\prime}})}=0\text{ for }k\neq k^{\prime}.\]
Indeed, we have that for an entry \((k,c_{i})\) representing a \(2\)-_move_ \(c_{i}=\{u_{i},u_{(i\bmod t)+1}\}\) and the \(u_{k}\)-th node of the cycle \(\mathcal{C}\):
\[(\mathcal{D}^{\prime\prime}[\tau_{0},\tau^{\prime}_{0}]\Delta(\tau_{0}))_{k,c_{i}}=\begin{cases}\tau_{0}(u_{k})\tau^{\prime}_{0}(u_{k})\tau_{c_{i}}(u_{k})=\tau_{c_{i}}(u_{k})\tau^{\prime}_{c_{i}}(u_{k})\tau_{c_{i}}(u_{k})=\tau^{\prime}_{c_{i}}(u_{k})&\text{if }(\Delta(\tau_{0}))_{k,c_{i}}=\tau_{c_{i}}(u_{k})\\ 0&\text{if }(\Delta(\tau_{0}))_{k,c_{i}}=0\end{cases}\]
Since \(\mathcal{D}^{\prime\prime}[\tau_{0},\tau^{\prime}_{0}]\) is a full-rank matrix, we get that \(\Delta(\tau^{\prime}_{0})=\mathcal{D}^{\prime\prime}[\tau_{0},\tau^{\prime}_{0}]\,\Delta(\tau_{0})\). Therefore if there exists a non-zero vector \(b\) such that \(\Delta(\tau_{0})\cdot b=0\) then \(\Delta(\tau^{\prime}_{0})\cdot b=0\) as well, completing the proof for Lemma 2.8.

For the last case of Lemma 2.10, we start with the following observations:

1. If \(\Delta(\tau_{0})\cdot b=0\), then for any \(b^{\prime}=\lambda b\), it also holds that \(\Delta(\tau_{0})\cdot b^{\prime}=0\) for any non-zero constant \(\lambda\).
2. If \(\Delta(\tau_{0})\cdot b=0\) and \(b_{k}=0\) for some \(k\in[t]\), then \(b\) is the zero vector, \(b=0\).
3. More precisely, the vectors that belong to the (right) null space of \(\Delta(\tau_{0})\), i.e., all the vectors \(b\) such that \(\Delta(\tau_{0})\cdot b=0\), are of the following form: \[\Delta(\tau_{0})\cdot b=0\Leftrightarrow b=b_{1}\cdot\left(1,-\frac{\tau_{c_{1}}(u_{2})}{\tau_{c_{2}}(u_{2})},\cdots,(-1)^{k-1}\frac{\prod_{i\in[2:k]}\tau_{c_{i-1}}(u_{i})}{\prod_{i\in[2:k]}\tau_{c_{i}}(u_{i})},\cdots,(-1)^{t-1}\frac{\prod_{i\in[2:t]}\tau_{c_{i-1}}(u_{i})}{\prod_{i\in[2:t]}\tau_{c_{i}}(u_{i})}\right)^{\top}\]
4. The term \(\frac{\prod_{i\in[2:k]}\tau_{c_{i-1}}(u_{i})}{\prod_{i\in[2:k]}\tau_{c_{i}}(u_{i})}\) is independent of the initial configuration.

Items (1)-(3) are simple linear-algebra derivations.
Item (4) holds since
\[\frac{\prod_{i\in[2:k]}\tau_{c_{i-1}}(u_{i})}{\prod_{i\in[2:k]}\tau_{c_{i}}(u_{i})}=\frac{\prod_{i\in[2:k]}\tau_{c_{i-1}}(u_{i})}{\prod_{i\in[2:k]}\tau_{c_{i}}(u_{i})}\times\underbrace{\frac{\prod_{i\in[2:k]}\tau^{\prime}_{c_{i-1}}(u_{i})\tau_{c_{i-1}}(u_{i})}{\prod_{i\in[2:k]}\tau^{\prime}_{c_{i}}(u_{i})\tau_{c_{i}}(u_{i})}}_{=1\text{ from Fact B.1}}=\frac{\prod_{i\in[2:k]}\tau^{\prime}_{c_{i-1}}(u_{i})}{\prod_{i\in[2:k]}\tau^{\prime}_{c_{i}}(u_{i})},\]
where the last equality uses \(\tau_{c}(u)^{2}=1\).
Finally, for Lemma 2.10, recall that the improvement vector of a dependent cycle is the linear combination of the improvement vectors of its \(2\)-moves with coefficients given by its cancellation vector. By Items (1)-(4), the cancellation vectors can be chosen independently of the initial configuration, so the matrix \(\mathcal{B}(\mathcal{S})\) below, whose columns are the cancellation vectors of the cycles in \(\mathsf{cycles}(\mathcal{S})\) expanded to coordinates indexed by _2-move_(\(\mathcal{S}\)), does not depend on \(\tau_{0}\):
\[\mathcal{B}(\mathcal{S})=\begin{bmatrix}0&b_{1}(C_{1})&0&b_{\rho}(C_{1})&0&b_{|C_{1}|}(C_{1})&0&\cdots&0\\ b_{1}(C_{2})&0&\cdots&b_{\rho^{\prime}}(C_{2})&\vdots&0&\cdots&b_{|C_{2}|}(C_{2})&0\\ 0&\cdots&&\vdots&&\cdots&&0\\ b_{1}(C_{|\mathsf{cycles}(\mathcal{S})|})&b_{\rho^{\prime\prime}}(C_{|\mathsf{cycles}(\mathcal{S})|})&0&\cdots&\cdots&0&\cdots&b_{|C_{|\mathsf{cycles}(\mathcal{S})|}|}(C_{|\mathsf{cycles}(\mathcal{S})|})&0\end{bmatrix}^{\top}\]
Given the above observations, it is easy to see that \(\mathsf{rank}(\mathcal{M}_{\mathsf{cycles}(\mathcal{S})}(\tau_{0},\mathcal{S}))\) is independent of the initial configuration, since
\[\mathcal{M}_{\mathsf{cycles}(\mathcal{S})}(\tau^{\prime}_{0},\mathcal{S})=\mathcal{M}_{\text{2-move}}(\tau^{\prime}_{0},\mathcal{S})\cdot\mathcal{B}(\mathcal{S})=\big(\mathcal{D}[\tau_{0},\tau^{\prime}_{0}]\cdot\mathcal{M}_{\text{2-move}}(\tau_{0},\mathcal{S})\big)\mathcal{B}(\mathcal{S})=\mathcal{D}[\tau_{0},\tau^{\prime}_{0}]\cdot\big(\mathcal{M}_{\text{2-move}}(\tau_{0},\mathcal{S})\mathcal{B}(\mathcal{S})\big)=\mathcal{D}[\tau_{0},\tau^{\prime}_{0}]\cdot\mathcal{M}_{\mathsf{cycles}(\mathcal{S})}(\tau_{0},\mathcal{S}).\]
Thus, by Fact B.2 we get that \(\mathsf{rank}(\mathcal{M}_{\mathsf{cycles}(\mathcal{S})}(\tau_{0},\mathcal{S}))=\mathsf{rank}(\mathcal{M}_{\mathsf{cycles}(\mathcal{S})}(\tau^{\prime}_{0},\mathcal{S}))\), concluding also the proof of Lemma 2.10.
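As a small numerical sanity check of this invariance, here is a toy Python sketch; the encoding of the \(1\)-move improvement vectors follows the schematic for \(\mathcal{M}_{\text{1-move}}\) above, and all names in it are ours:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n = 6
moves = [int(rng.integers(0, n)) for _ in range(10)]   # ten 1-moves on n nodes
edges = list(combinations(range(n), 2))

def one_move_matrix(tau0):
    """Columns are 1-move improvement vectors: the entry at edge {a,b} of the
    k-th move (flipping node u) is tau_{k-1}(a)*tau_{k-1}(b) if u in {a,b}, else 0."""
    tau, cols = np.array(tau0), []
    for u in moves:
        cols.append([tau[a] * tau[b] if u in (a, b) else 0 for a, b in edges])
        tau[u] *= -1                                   # apply the move
    return np.array(cols).T

t0 = rng.choice([-1, 1], size=n)
t1 = rng.choice([-1, 1], size=n)
# The ranks agree because M(t1) = D * M(t0) for a full-rank diagonal D,
# exactly the mechanism of Fact B.1 combined with Fact B.2.
assert np.linalg.matrix_rank(one_move_matrix(t0)) == np.linalg.matrix_rank(one_move_matrix(t1))
```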
2303.07156
Some quaternary additive codes outperform linear counterparts
Additive codes may have better parameters than linear codes. However, it is still a challenging problem to efficiently construct additive codes that outperform linear codes, especially those with greater distances than linear codes of the same lengths and dimensions. This paper focuses on constructing additive codes that outperform linear codes based on quasi-cyclic codes and combinatorial methods. Firstly, we propose a lower bound on the symplectic distance of 1-generator quasi-cyclic codes of even index. Secondly, we get many binary quasi-cyclic codes with large symplectic distances utilizing computer-supported combination and search methods, all of which correspond to good quaternary additive codes. Notably, some additive codes have greater distances than the best-known quaternary linear codes in Grassl's code table (bounds on the minimum distance of quaternary linear codes http://www.codetables.de) for the same lengths and dimensions. Moreover, employing a combinatorial approach, we partially determine the parameters of optimal quaternary additive 3.5-dimensional codes with lengths from $28$ to $254$. Finally, as an extension, we also construct some good additive complementary dual codes with larger distances than the best-known quaternary linear complementary dual codes in the literature.
Chaofeng Guan, Ruihu Li, Yiting Liu, Zhi Ma
2023-03-13T14:30:22Z
http://arxiv.org/abs/2303.07156v4
# Some good quaternary additive codes

###### Abstract

It is well known that additive codes may have better parameters than linear codes. However, it is still a challenging problem to efficiently construct additive codes that outperform linear codes, especially those with greater distance than linear codes of the same length and dimension. To advance this problem, this paper focuses on constructing additive codes that outperform linear codes using quasi-cyclic codes and combinatorial methods. Firstly, we propose a lower bound on the minimum symplectic distance of 1-generator quasi-cyclic codes of even index. Further, we get many binary quasi-cyclic codes with large symplectic distances utilizing computer-supported combination and search methods, all corresponding to good quaternary additive codes. Notably, 15 additive codes have greater distances than the best-known quaternary linear codes in Grassl's code table (bounds on the minimum distance of quaternary linear codes [http://www.codetables.de](http://www.codetables.de)) for the same lengths and dimensions. Moreover, employing a combinatorial approach, we partially determine the parameters of optimal quaternary additive 3.5-dimensional codes with lengths from \(28\) to \(254\). Finally, as an extension, we also construct some good additive complementary dual codes with larger distances than the best-known quaternary linear complementary dual codes in the literature.

quasi-cyclic codes, symplectic bound, additive codes, optimal, additive complementary dual codes.

## I Introduction

One of the most significant problems in coding theory is constructing good error-correcting codes. After decades of effort, scholars have constructed a large number of linear codes with suitable parameters; Grassl et al. summarized those results and established an online code table [1] of best-known linear codes over small finite fields \(\mathbb{F}_{q}\), \(q\leq 9\). Unlike linear codes, additive codes are closed under vector addition but not necessarily closed under scalar multiplication. All linear codes can also be considered additive codes, but additive codes are not necessarily linear. Therefore, theoretically, additive codes may have better parameters than linear codes. In addition, additive codes also have critical applications in quantum information [2, 3], computer memory systems [4, 5, 6], deep space communication [7], and secret sharing [8, 9]. Thus, it is crucial to construct good additive codes, especially ones with better performance than the best linear codes.

Quaternary additive codes were the first to receive scholarly attention, given the links to communications, electronic devices, computers, etc. In [10], Blokhuis and Brouwer determined the parameters of optimal quaternary additive codes of lengths not more than 12, some of which have higher information rates than the optimal linear ones. Afterward, much work has been done on quaternary additive codes with small lengths [11, 12, 13, 14] or low dimensions [15, 16], resulting in a general determination of the parameters of quaternary additive codes of lengths up to 15 and a complete determination of the parameters of 2.5-dimensional optimal quaternary additive codes. Meanwhile, additive complementary dual codes1 (ACD codes) have also attracted a great deal of scholars' attention owing to their utility in constructing maximal-entanglement entanglement-assisted quantum codes [17, 18, 19], and their application in resisting side-channel attacks [20, 21, 22, 23, 24].
In general, an additive code with parameters \((n,\frac{k}{2},d_{s})_{q^{2}}\) for odd \(k\) can have better parameters than linear codes, i.e., a higher information rate. However, there is also a particular case in which the corresponding best linear codes are \([n,\lfloor\frac{k}{2}\rfloor,d_{1}]_{q^{2}}\) and \([n,\lceil\frac{k}{2}\rceil,d_{2}]_{q^{2}}\), and \(d_{2}<d_{s}<d_{1}\). In this case, we can consider that \((n,\frac{k}{2},d_{s})_{q^{2}}\) fills the distance gap of the best linear codes.

### _Cyclic codes and quasi-cyclic codes_

Let \(\mathscr{C}\) be an \([n,k]_{q}\) code over \(\mathbb{F}_{q}\); \(\mathscr{C}\) is cyclic provided that for all \(\mathbf{c}=(c_{0},c_{1},\cdots,c_{n-1})\in\mathscr{C}\), the cyclic shift \(\mathbf{c}^{\prime}=(c_{n-1},c_{0},\cdots,c_{n-2})\in\mathscr{C}\). Considering each codeword \(\mathbf{c}\) as the coefficient vector of the polynomial \(\mathbf{c}(x)=\sum_{i=0}^{n-1}c_{i}x^{i}\) in \(\mathbb{F}_{q}[x]\), \(\mathscr{C}\) corresponds to a principal ideal in the quotient ring \(\mathbb{R}_{q,n}=\mathbb{F}_{q}[x]/\left\langle x^{n}-1\right\rangle\), which is generated by a unique monic non-zero polynomial \(g(x)\) of degree \(n-k\). We call \(g(x)\) the generator polynomial of the cyclic code \(\mathscr{C}\), and \(\mathscr{C}\) can also be denoted as \(\left\langle g(x)\right\rangle\). The parity check polynomial of \(\mathscr{C}\) is \(h(x)=\left(x^{n}-1\right)/g(x)\). The Euclidean dual code \(\mathscr{C}^{\perp_{\varepsilon}}\) of \(\mathscr{C}\) is also a cyclic code with generator polynomial \(g^{\perp_{\varepsilon}}(x)=\tilde{h}(x)=x^{\deg(h(x))}h(x^{-1})\).

A linear code \(\mathscr{C}\) of length \(n\ell\) over \(\mathbb{F}_{q}\) is called a quasi-cyclic code of index \(\ell\) if, whenever \(\mathbf{c}=(c_{0},c_{1},\ldots,c_{n\ell-1})\) is a codeword of \(\mathscr{C}\), then \(\mathbf{c}^{\prime}=(c_{n-1},c_{0},\ldots,c_{n-2},c_{2n-1},c_{n},\ldots,c_{2n-2},\ldots,c_{n\ell-1},c_{(\ell-1)n},\ldots,c_{n\ell-2})\) is also a codeword. Circulant matrices are basic components in the generator matrices of quasi-cyclic codes. An \(n\times n\) circulant matrix \(M\) is defined as
\[M=\left(\begin{array}{ccccc}m_{0}&m_{1}&m_{2}&\ldots&m_{n-1}\\ m_{n-1}&m_{0}&m_{1}&\ldots&m_{n-2}\\ \vdots&\vdots&\vdots&\vdots&\vdots\\ m_{1}&m_{2}&m_{3}&\ldots&m_{0}\end{array}\right). \tag{1}\]
If the first row of \(M\) is mapped onto the polynomial \(m(x)\), then the circulant matrix \(M\) is isomorphic to the polynomial \(m(x)=m_{0}+m_{1}x+\cdots+m_{n-1}x^{n-1}\in\mathbb{R}_{q,n}\). So \(M\) can be determined by the polynomial \(m(x)\). The generator matrix of an \(h\)-generator quasi-cyclic code with index \(\ell\) has the following form:
\[G=\left(\begin{array}{cccc}M_{1,0}&M_{1,1}&\cdots&M_{1,\ell-1}\\ M_{2,0}&M_{2,1}&\cdots&M_{2,\ell-1}\\ \vdots&\vdots&\ddots&\vdots\\ M_{h,0}&M_{h,1}&\cdots&M_{h,\ell-1}\end{array}\right), \tag{2}\]
where the \(M_{i,j}\) are circulant matrices generated by the polynomials \(m_{i,j}(x)\in\mathbb{R}_{q,n}\), with \(1\leq i\leq h\) and \(0\leq j\leq\ell-1\).

## III Bound on symplectic weight of 1-generator quasi-cyclic codes of index even

For narrative convenience, and throughout this paper, we fix \(\ell\) as an even number and \(m=\ell/2\).
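As a quick illustration of the circulant construction in (1) and (2), and of the 1-generator form in Definition 1 below, here is a minimal Python sketch for the binary case; the helper names `circulant` and `qc_generator` are ours:

```python
import numpy as np

def circulant(first_row):
    """Circulant matrix as in (1): row i is the first row cyclically shifted i steps."""
    n = len(first_row)
    return np.array([np.roll(first_row, i) for i in range(n)])

def qc_generator(g, fs, n):
    """Generator matrix (G_0 | G_1 | ...) of a 1-generator quasi-cyclic code over
    F_2: block G_j is the circulant of g(x) * f_j(x) reduced mod 2 and x^n - 1."""
    blocks = []
    for f in fs:
        prod = [0] * n
        for i, gi in enumerate(g):
            for j, fj in enumerate(f):
                prod[(i + j) % n] ^= gi & fj   # multiply mod 2, reduce mod x^n - 1
        blocks.append(circulant(prod))
    return np.hstack(blocks)

# toy example: n = 7, g(x) = 1 + x + x^3 (the [7,4,3] cyclic Hamming code),
# f_0(x) = 1 + x, f_1(x) = 1, giving a 7 x 14 generator matrix whose row space
# over F_2 has dimension n - deg(g) = 4
g  = [1, 1, 0, 1, 0, 0, 0]
f0 = [1, 1, 0, 0, 0, 0, 0]
f1 = [1, 0, 0, 0, 0, 0, 0]
G = qc_generator(g, [f0, f1], 7)
```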
Let \([s,t]\) (\(s\leq t\)) denote the set \(\{s,s+1,\cdots,t\}\). For \(g(x)=g_{0}+g_{1}x+g_{2}x^{2}+\cdots+g_{n-1}x^{n-1}\in\mathbb{R}_{q,n}\), let \([g(x)]\) denote the vector of coefficients of \(g(x)\) in \(\mathbb{F}_{q}^{n}\), i.e., \([g(x)]=[g_{0},g_{1},g_{2},\cdots,g_{n-1}]\).

Before bounding the symplectic distance of 1-generator quasi-cyclic codes, we introduce the relationship between symplectic and Hamming weights, as shown in Lemma 1.

**Lemma 1**: _([38]) If \(\vec{x}\), \(\vec{y}\) are two vectors in \(\mathbb{F}_{q}^{n}\), then_
\[q\cdot\mathrm{w}_{s}(\vec{x}\mid\vec{y})=\mathrm{w}_{H}(\vec{x})+\mathrm{w}_{H}(\vec{y})+\sum_{\alpha\in\mathbb{F}_{q}^{*}}\mathrm{w}_{H}(\vec{x}+\alpha\vec{y}). \tag{3}\]

**Definition 1**: _Let \(g(x)\) and \(f_{j}(x)\) be polynomials in \(\mathbb{R}_{q,n}\) with \(g(x)\mid(x^{n}-1)\), where \(j\in[0,\ell-1]\). If \(\mathscr{C}\) is a quasi-cyclic code generated by \(\big([g(x)f_{0}(x)]\), \([g(x)f_{1}(x)]\), \(\cdots\), \([g(x)f_{\ell-1}(x)]\big)\), then \(\mathscr{C}\) is called a \(1\)-generator quasi-cyclic code with index \(\ell\). The generator matrix \(G\) of \(\mathscr{C}\) has the following form:_
\[G=\left(G_{0},G_{1},\cdots,G_{\ell-1}\right), \tag{4}\]
_where the \(G_{j}\) are \(n\times n\) circulant matrices generated by \([g(x)f_{j}(x)]\)._

As a special class of quasi-cyclic codes, \(1\)-generator quasi-cyclic codes can be regarded as linear codes generated by juxtaposing multiple cyclic codes. The following theorem determines a lower bound on the symplectic distance of \(1\)-generator quasi-cyclic codes with even index.

_Theorem 1:_ Suppose \(\mathscr{C}\) is a \(1\)-generator quasi-cyclic code as in Definition 1 of index \(\ell\). If \(\gcd(f_{j}(x)+\alpha f_{j+m}(x),\frac{x^{n}-1}{g(x)})=1\) and \(\deg(f_{j}(x)f_{j+m}(x))\geq 1\) for all \(\alpha\in\mathbb{F}_{q}\), \(i\in[0,\ell-1]\), \(j\in[0,m-1]\), then the following holds:
\[d_{s}(\mathscr{C})\geq m\cdot\left\lceil\frac{q+1}{q}d(g(x))\right\rceil, \tag{5}\]
where \(d_{s}(\mathscr{C})\) is the minimum symplectic distance of \(\mathscr{C}\) and \(d(g(x))\) is the minimum Hamming distance of the cyclic code \(\langle g(x)\rangle\).

_Proof:_ Given that \(a(x)\) is any polynomial in \(\mathbb{R}_{q,n}\), any codeword of \(\mathscr{C}\) can be denoted as \(\mathbf{c}=([a(x)f_{0}(x)g(x)],[a(x)f_{1}(x)g(x)],\;\cdots,\;[a(x)f_{\ell-1}(x)g(x)])\). Let \(\mathbf{c_{1}}=([a(x)f_{0}(x)g(x)],\;[a(x)f_{1}(x)g(x)],\;\cdots,\;[a(x)f_{m-1}(x)g(x)])\), \(\mathbf{c_{2}}=([a(x)f_{m}(x)g(x)]\), \([a(x)f_{m+1}(x)g(x)]\), \(\cdots\), \([a(x)f_{\ell-1}(x)g(x)])\), and \(\mathbf{c_{3}}=\mathbf{c_{1}}+\alpha\mathbf{c_{2}}=([a(x)g(x)(f_{0}(x)+\alpha f_{m}(x))],\;[a(x)g(x)(f_{1}(x)+\alpha f_{m+1}(x))],\;\cdots,\;[a(x)g(x)(f_{m-1}(x)+\alpha f_{\ell-1}(x))])\), respectively. By Lemma 1, the symplectic weight of \(\mathbf{c}\) is
\[\begin{array}{l}\mathrm{w}_{s}(\mathbf{c})=\left(\mathrm{w}_{H}(\mathbf{c_{1}})+\mathrm{w}_{H}(\mathbf{c_{2}})+\sum\limits_{\alpha\in\mathbb{F}_{q}^{*}}\mathrm{w}_{H}(\mathbf{c_{3}})\right)/q\\ =\left(\mathrm{w}_{H}(\mathbf{c_{1}})+\mathrm{w}_{H}(\mathbf{c_{2}})\right)/q+\sum\limits_{i=0}^{m-1}\sum\limits_{\alpha\in\mathbb{F}_{q}^{*}}\mathrm{w}_{H}([a(x)g(x)(f_{i}(x)+\alpha f_{m+i}(x))])/q.\end{array}\]
For the reason that \(\gcd(f_{i}(x),\frac{x^{n}-1}{g(x)})=1\), \(i\in[0,\ell-1]\), there are \(\mathrm{w}_{H}(\mathbf{c_{1}})\geq m\cdot d(g(x))\) and \(\mathrm{w}_{H}(\mathbf{c_{2}})\geq m\cdot d(g(x))\).
In addition, it is easy to verify that when \(\gcd(f_{j}(x)+\alpha f_{j+m}(x),\frac{x^{n}-1}{g(x)})=1\), we have \(\mathrm{w}_{H}([a(x)g(x)(f_{i}(x)+\alpha f_{m+i}(x))])\geq d(g(x))\). Therefore, the following holds:
\[\begin{array}{l}\mathrm{w}_{s}(\mathbf{c})\geq 2m\cdot d(g(x))/q+(q-1)m\cdot d(g(x))/q\\ \geq m\cdot\left\lceil\frac{q+1}{q}d(g(x))\right\rceil.\end{array}\]
\(\square\)

_Lemma 2:_ Let \(\gcd(n,q)=1\), \(g(x)\mid(x^{n}-1)\), and let \(f(x)\) be a polynomial in \(\mathbb{R}_{q,n}\). If \(f(x)\mid g(x)\), then \(\gcd(f(x)+\alpha,\frac{x^{n}-1}{g(x)})=1\) for all \(\alpha\in\mathbb{F}_{q}\).

_Proof:_ For \(\gcd(n,q)=1\), \(x^{n}-1\) has no repeated irreducible factors over its splitting field, so if \(f(x)\mid g(x)\), then \(\gcd(f(x),\frac{x^{n}-1}{g(x)})=1\). In addition, as \(f(x)+\alpha\) is not a factor of \(\frac{x^{n}-1}{g(x)}\), \(\gcd(f(x)+\alpha,\frac{x^{n}-1}{g(x)})=1\). \(\square\)

_Corollary 1:_ For \(\gcd(n,q)=1\), if there exists a cyclic code over \(\mathbb{F}_{q}\) with parameters \([n,k,d]_{q}\), \(k<\frac{n}{2}\), then there also exist additive codes over \(\mathbb{F}_{q^{2}}\) with parameters \((mn,\frac{k}{2},\geq m\cdot\left\lceil\frac{q+1}{q}d\right\rceil)_{q^{2}}\).

_Proof:_ It is easy to conclude that Corollary 1 holds by combining Theorem 1 and Lemma 2. \(\square\)

The following lemma will help to determine the optimality of low-dimensional additive codes.

_Lemma 3:_ Let \(\mathscr{C}_{a}\) be a quaternary additive \((n,k,d)_{4}\) code with \(k\geq 1\). Then,
\[3n\geq\sum\limits_{i=0}^{2k-1}\left\lceil\frac{d}{2^{i-1}}\right\rceil. \tag{6}\]

_Proof:_ The concatenation of a quaternary additive \((n,k,d)_{4}\) code with the binary linear \([3,2,2]_{2}\) code is a \([3n,2k,2d]_{2}\) code. By the Griesmer bound, Equation (6) holds, so this lemma holds. \(\square\)

_Example 1:_ Let \(q=2\), \(n=31\). Taking the generator polynomial \(g(x)=x^{26}+x^{24}+x^{22}+x^{21}+x^{20}+x^{18}+x^{17}+x^{13}+x^{12}+x^{11}+x^{10}+x^{9}+x^{6}+x^{5}+x^{3}+1\) from Chen's database [39] generates a binary cyclic code \(\mathscr{C}\) with parameters \([31,5,16]_{2}\). Through Theorem 1 and Lemma 2, choosing \(f_{j}(x)=x+1\) and \(f_{j+m}(x)=1\), we can obtain quaternary additive codes with parameters \((31m,2.5,\geq 24m)_{4}\). By virtue of Lemma 3, \((31m,2.5,24m)_{4}\) is an optimal additive code, which also has better performance than the optimal quaternary linear codes \([31m,2,24m]_{4}\). It should be noted that similar results were also obtained by Bierbrauer et al. [16]. However, the approach in this paper is more concise, and our codes have a cyclic (or quasi-cyclic) structure, which makes them easier to encode and decode.

**Theorem 2**: _Let \(\mathscr{C}\) be a \(1\)-generator quasi-cyclic code with parameters \([tn,k,d]_{q}\) of index \(t\), and let polynomials \(f_{l}(x),f_{r}(x)\) satisfy \(\gcd(f_{l}(x)+\alpha f_{r}(x),\frac{x^{n}-1}{g(x)})=1\) for all \(\alpha\in\mathbb{F}_{q}\), and \(\deg(f_{l}(x)f_{r}(x))\geq 1\). Then there also exists an additive code with parameters \((tn,\frac{k}{2},\geq\left\lceil\frac{q+1}{q}d\right\rceil)_{q^{2}}\)._

_Proof:_ Suppose the generator of \(\mathscr{C}\) is \(\mathbf{g}(\mathbf{x})=([g(x)f_{0}(x)]\), \([g(x)f_{1}(x)]\), \(\cdots\), \([g(x)f_{t-1}(x)])\). Let \(\mathscr{C}^{\prime}\) be the \(1\)-generator quasi-cyclic code with generator \(\mathbf{g^{\prime}}(\mathbf{x})=(\mathbf{g}(\mathbf{x})f_{l}(x)\mid\mathbf{g}(\mathbf{x})f_{r}(x))\).
Let \(a(x)\) be any polynomial in \(\mathbb{R}_{q,n}\); then any codeword in \(\mathscr{C}^{\prime}\) can be denoted as \(\mathbf{c^{\prime}}=(a(x)\mathbf{g}(\mathbf{x})f_{l}(x)\mid a(x)\mathbf{g}(\mathbf{x})f_{r}(x))\). Let \(\mathbf{c_{1}}^{\prime}=(a(x)\mathbf{g}(\mathbf{x})f_{l}(x))\), \(\mathbf{c_{2}}^{\prime}=(a(x)\mathbf{g}(\mathbf{x})f_{r}(x))\), and \(\mathbf{c_{3}}^{\prime}=\mathbf{c_{1}}^{\prime}+\alpha\mathbf{c_{2}}^{\prime}\), respectively. With the help of Lemma 1, the symplectic weight of \(\mathbf{c^{\prime}}\) is given by
\[\mathrm{w}_{s}(\mathbf{c^{\prime}})=\left(\mathrm{w}_{H}(\mathbf{c_{1}}^{\prime})+\mathrm{w}_{H}(\mathbf{c_{2}}^{\prime})+\sum\limits_{\alpha\in\mathbb{F}_{q}^{*}}\mathrm{w}_{H}(\mathbf{c_{3}}^{\prime})\right)/q.\]
Since \(\mathbf{c_{1}}^{\prime},\mathbf{c_{2}}^{\prime}\in\mathscr{C}\), we have \(\mathrm{w}_{H}(\mathbf{c_{1}}^{\prime})\geq d\) and \(\mathrm{w}_{H}(\mathbf{c_{2}}^{\prime})\geq d\). In addition, for the reason that \(\gcd(f_{l}(x)+\alpha f_{r}(x),\frac{x^{n}-1}{g(x)})=1\) for all \(\alpha\in\mathbb{F}_{q}\), and \(\mathbf{c_{3}}^{\prime}=a(x)(f_{l}(x)+\alpha f_{r}(x))(\mathbf{g}(\mathbf{x}))\), we have \(\sum\limits_{\alpha\in\mathbb{F}_{q}^{*}}\mathrm{w}_{H}(\mathbf{c_{3}}^{\prime})\geq(q-1)d\). Therefore, the following holds:
\[\mathrm{w}_{s}(\mathbf{c^{\prime}})\geq 2d/q+(q-1)d/q\geq\left\lceil\frac{q+1}{q}d\right\rceil.\]
Then it is clear that \(\mathscr{C}^{\prime}\) is a symplectic code with parameters \([2tn,k,\geq\left\lceil\frac{q+1}{q}d\right\rceil]_{q}^{s}\), which corresponds to an additive \(\left(tn,\frac{k}{2},\geq\left\lceil\frac{q+1}{q}d\right\rceil\right)_{q^{2}}\) code. \(\square\)

**Example 2**: _Let \(q=2\), \(n=127\). Taking the generator polynomial \(g(x)=x^{120}+x^{119}+x^{117}+x^{114}+x^{113}+x^{111}+x^{108}+x^{106}+x^{104}+x^{102}+x^{101}+x^{99}+x^{98}+x^{97}+x^{94}+x^{89}+x^{85}+x^{82}+x^{79}+x^{78}+x^{77}+x^{76}+x^{74}+x^{72}+x^{66}+x^{64}+x^{63}+x^{62}+x^{61}+x^{58}+x^{57}+x^{56}+x^{54}+x^{53}+x^{52}+x^{51}+x^{50}+x^{49}+x^{48}+x^{46}+x^{45}+x^{42}+x^{40}+x^{39}+x^{35}+x^{34}+x^{31}+x^{30}+x^{28}+x^{27}+x^{25}+x^{21}+x^{20}+x^{19}+x^{18}+x^{17}+x^{12}+x^{11}+x^{10}+x^{6}+x^{4}+x+1\) generates an optimal binary cyclic code \(\mathscr{C}\) with parameters \([127,7,64]_{2}\). Through Lemma 2, choosing \(f_{0}(x)=x+1\) and \(f_{1}(x)=1\), \(([g(x)f_{0}(x)],[g(x)f_{1}(x)])\) generates a quasi-cyclic code \(\mathscr{C}_{l}\) with parameters \([254,7,128]_{2}\). By Theorem 1 and Lemma 3, \(\Phi(\mathscr{C}_{l})\) is an optimal quaternary \((127,3.5,96)_{4}\) code2, which has better performance than the optimal quaternary linear codes \([127,3,96]_{4}\) in [1]. In addition, with Theorem 2, taking \(f_{l}(x)=f_{0}(x)\) and \(f_{r}(x)=1\), \(([g(x)f_{0}(x)f_{l}(x)]\), \([g(x)f_{1}(x)f_{l}(x)]\), \([g(x)f_{0}(x)f_{r}(x)]\), \([g(x)f_{1}(x)f_{r}(x)])\) generates a \([508,7,\geq 192]_{2}^{s}\) symplectic code. So, by Lemma 3, there exists an optimal \((254,3.5,192)_{4}\) additive code, which also outperforms the optimal quaternary linear codes \([254,3,192]_{4}\)._

Footnote 2: This code is also obtained by Guo et al. in [15], but our construction is simpler and has a cyclic structure.

**Remark 1**: _Lemma 2 gives only one way to select \(f_{i}(x)\) satisfying Theorems 1 and 2. This ensures that the distance of the resulting additive code is greater than or equal to the lower bound, but it is not necessarily the best.
## IV Good quaternary additive codes outperform best-known linear codes

This section focuses on constructing good additive codes. We construct many additive codes superior to their linear counterparts; in particular, some perform better than the best-known quaternary linear codes in [1]. Moreover, employing a combinatorial approach, we partially determine the parameters of optimal quaternary additive \(3.5\)-dimensional codes with lengths from \(28\) to \(254\). Denote the finite field of order \(2\) by \(\mathbb{F}_{2}\) and the finite field of order \(4\) by \(\mathbb{F}_{4}=\{0,1,w,w^{2}\}\), where \(w^{2}+w+1=0\). In addition, \(\mathbf{1_{n}}\), \(\mathbf{w_{n}}\) and \(\mathbf{w_{n}^{2}}\) denote the all-\(1\), all-\(w\) and all-\(w^{2}\) vectors of length \(n\), respectively. Before starting the constructions, we introduce the basic derivation and augmentation operations for additive codes.

**Lemma 4**: _If \(\mathscr{C}_{a}\) is an additive code with parameters \((n,k,d)_{q^{2}}\), then the following additive codes also exist:_

_(1) For \(i\geq 1\), \((n+i,k,\geq d)_{q^{2}}\) (Additive Extend);_

_(2) For \(i\leq d\), \((n-i,k,\geq d-i)_{q^{2}}\) (Additive Puncture);_

_(3) For \(i\leq k\), \((n-i,k-i,\geq d)_{q^{2}}\) (Additive Shorten)._

_Proof:_ The generator matrix of an additive code differs from that of a linear code only in that it has \(2k\) rows. Extension and puncture amount to appending symbols to and deleting symbols from the codewords, so they behave for additive codes just as for linear codes; hence (1) and (2) hold. Since the proof of Theorem 1.5.7 (i) in [35] is also valid for additive codes, puncturing \(\mathscr{C}_{a}\) is equivalent to shortening \(\mathscr{C}_{a}^{\perp}\). By puncturing \(\mathscr{C}_{a}^{\perp}\), one gets an additive \((n-1,k)_{q^{2}}\) code in which any \(d-1\) columns are linearly independent. Hence, there exist \((n-i,k-i,\geq d)_{q^{2}}\) additive codes, and (3) also holds. \(\square\)

**Lemma 5**: _(Additive Augmentation) If \(\mathscr{C}_{a}\) is an additive code with parameters \((n,k,d)_{q^{2}}\) and \(\mathscr{C}_{a}\) contains no codeword of weight \(n\), then the following additive codes also exist:_

_(1) \((n,k+0.5,\leq d)_{q^{2}}\);_

_(2) \((n,k+1,\leq d)_{q^{2}}\)._

_Proof:_ In \(\mathbb{F}_{q^{2}}^{n}\) there are three special vectors of weight \(n\), namely \(\mathbf{1_{n}}\), \(\mathbf{w_{n}}\) and \(\mathbf{w_{n}^{2}}\), all of which can be expressed in terms of the two basis vectors \(\mathbf{1_{n}}\) and \(\mathbf{w_{n}}\). Since \(\mathscr{C}_{a}\) contains no codeword of weight \(n\), neither \(\mathbf{1_{n}}\) nor \(\mathbf{w_{n}}\) can be expressed in terms of a basis of \(\mathscr{C}_{a}\). Therefore, adding \(\mathbf{1_{n}}\) yields an \((n,k+0.5)_{q^{2}}\) additive code, and adding both \(\mathbf{1_{n}}\) and \(\mathbf{w_{n}}\) to \(\mathscr{C}_{a}\) yields an \((n,k+1)_{q^{2}}\) additive code; both have minimum distance no more than \(d\). \(\square\)
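For concreteness, the sketch below (ours, not the authors') realizes these derivation operations on an additive code represented by its \(2k\) binary generators, each stored as a list of \(\mathbb{F}_{4}\) symbols; the symbol encoding (0, 1, 'w', 'w2') is an arbitrary choice for illustration.

```python
# Sketch of the operations in Lemmas 4 and 5 on a generator matrix G,
# stored as a list of rows; each row is a list of F_4 symbols.

def extend(G):
    """Additive Extend: (n, k, d) -> (n+1, k, >= d), append a zero column."""
    return [row + [0] for row in G]

def puncture(G, i):
    """Additive Puncture: (n, k, d) -> (n-1, k, >= d-1), delete column i."""
    return [row[:i] + row[i + 1:] for row in G]

def augment(G):
    """Additive Augmentation (Lemma 5): add the all-one word, k -> k + 0.5."""
    n = len(G[0])
    return G + [[1] * n]

def double_augment(G):
    """Add both the all-one and all-w words, k -> k + 1."""
    n = len(G[0])
    return augment(G) + [["w"] * n]
```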
**Example 3**: _Let \(q=2\), \(n=63\). Taking \(g(x)=x^{53}+x^{52}+x^{51}+x^{50}+x^{48}+x^{47}+x^{45}+x^{43}+x^{42}+x^{40}+x^{38}+x^{31}+x^{28}+x^{25}+x^{24}+x^{21}+x^{20}+x^{19}+x^{17}+x^{14}+x^{13}+x^{9}+x^{8}+x^{5}+x+1\), \(\langle g(x)\rangle\) generates a \([63,10,27]_{2}\) cyclic code. Select \(f_{0}(x)=x^{61}+x^{59}+x^{58}+x^{54}+x^{52}+x^{50}+x^{45}+x^{44}+x^{43}+x^{41}+x^{39}+x^{33}+x^{32}+x^{31}+x^{26}+x^{24}+x^{23}+x^{22}+x^{20}+x^{18}+x^{13}+x^{12}+x^{11}+x^{10}+x^{9}+x^{6}+x^{4}+x^{2}+x\) and \(f_{1}(x)=1\). Since \(gcd(f_{0}(x)+f_{1}(x),\frac{x^{n}-1}{g(x)})=1\) and \(deg(f_{0}(x)f_{1}(x))\geq 1\), \(([g(x)f_{0}(x)],[g(x)])\) generates a symplectic \([126,10,\geq 41]_{2}^{s}\) code \(\mathscr{C}_{s}\). Using Magma [34], the true distance of \(\mathscr{C}_{a}=\Phi(\mathscr{C}_{s})\) is computed to be \(45\). Therefore, we obtain an additive code with parameters \((63,5,45)_{4}\), which has a larger minimum distance than the best-known linear code \([63,5,44]_{4}\) in [1]. In addition, augmenting \((63,5,45)_{4}\) gives \((63,5.5,45)_{4}\); extending \((63,5,45)_{4}\) gives \((64,5,46)_{4}\); and extending \((63,5.5,45)_{4}\) gives \((64,5.5,46)_{4}\). The best-known linear codes in [1] are \([63,5,44]_{4}\) and \([64,5,45]_{4}\), so \((63,5.5,45)_{4}\), \((64,5,46)_{4}\) and \((64,5.5,46)_{4}\) all outperform their best-known linear counterparts._

**Lemma 6**: _(Additive Construction X) If there are two additive codes \(\mathscr{C}_{a2}\subset\mathscr{C}_{a1}\) with parameters \((n,k_{2},d_{2})_{q^{2}}\subset(n,k_{1},d_{1})_{q^{2}}\), where \(d_{2}>d_{1}\), and \(\mathscr{C}_{a3}\) is an additive code with parameters \((l,k_{1}-k_{2},\delta)_{q^{2}}\), then there exists an \((n+l,k_{1},\min\{\delta+d_{1},d_{2}\})_{q^{2}}\) additive code._

_Proof:_ Denote the generator matrices of \(\mathscr{C}_{a1}\), \(\mathscr{C}_{a2}\) and \(\mathscr{C}_{a3}\) by \(G_{a1}\), \(G_{a2}\) and \(G_{a3}\), respectively. Since \(\mathscr{C}_{a2}\subset\mathscr{C}_{a1}\), we can write \(G_{a1}=\begin{pmatrix}G_{a2}\\ G_{ax}\end{pmatrix}\). Then the matrix \(G_{X}=\begin{pmatrix}G_{ax}&G_{a3}\\ G_{a2}&0\end{pmatrix}\) generates an additive code with parameters \((n+l,k_{1},\min\{\delta+d_{1},d_{2}\})_{q^{2}}\). \(\square\)

**Example 4**: _Let \(q=2\), \(n=35\). Take \(g(x)=x^{28}+x^{25}+x^{24}+x^{21}+x^{20}+x^{19}+x^{18}+x^{17}+x^{15}+x^{13}+x^{12}+x^{11}+x^{9}+x^{8}+x^{6}+x^{2}+x+1\), \(f_{0}(x)=x^{34}+x^{33}+x^{30}+x^{28}+x^{27}+x^{25}+x^{23}+x^{22}+x^{18}+x^{17}+x^{16}+x^{11}+x^{8}+x^{7}+x^{6}+x^{5}+x^{4}\), and \(f_{1}(x)=x^{34}+x^{27}+x^{23}+x^{21}+x^{20}+x^{19}+x^{18}+x^{16}+x^{15}+x^{13}+x^{11}+x^{10}+x^{5}+x^{4}+x\). Then \(([g(x)f_{0}(x)],[g(x)f_{1}(x)])\) generates a symplectic \([70,7,26]_{2}^{s}\) code, which corresponds to an additive code with parameters \((35,3.5,26)_{4}\). Further, we can get a subcode with parameters \((35,1.5,30)_{4}\) by taking out the codewords of weight \(30\) from \((35,3.5,26)_{4}\). Selecting \((40,3.5,30)_{4}\) and \((43,3.5,32)_{4}\) as auxiliary codes \(\mathscr{C}^{\prime}_{aux}\), Lemma 6 then gives optimal additive codes with parameters \((160,4.5,120)_{4}\), \((163,4.5,122)_{4}\), \((168,4.5,126)_{4}\) and \((171,4.5,128)_{4}\)._
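The block assembly in Lemma 6 is mechanical; a minimal sketch (ours, reusing the row-list representation introduced above):

```python
# Sketch of Additive Construction X (Lemma 6): build
#   G_X = [ G_ax  G_a3 ]
#         [ G_a2   0   ]
# from the row decomposition G_a1 = [G_a2 ; G_ax].

def construction_x(G_a2, G_ax, G_a3):
    assert len(G_ax) == len(G_a3)          # 2(k1 - k2) rows each
    l = len(G_a3[0])
    top = [rx + r3 for rx, r3 in zip(G_ax, G_a3)]
    bottom = [r2 + [0] * l for r2 in G_a2]
    return top + bottom
```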
By virtue of Lemma 3, \((43,3.5,32)_{4}\), \((160,4.5,120)_{4}\), \((163,4.5,122)_{4}\), \((168,4.5,126)_{4}\) and \((171,4.5,128)_{4}\) are all optimal additive codes and perform better than the optimal linear codes in [1]. A combination method yields further optimal \(3.5\)-dimensional additive codes of lengths ranging from \(28\) to \(254\), as shown in Table I. For clarity of presentation, we let \(\mathscr{C}_{t}\) denote a quaternary additive \((n,3.5,n-t)_{4}\) code, and \((\mathscr{C}_{t_{1}}\mid\mathscr{C}_{t_{2}})\) the juxtaposition code of \(\mathscr{C}_{t_{1}}\) and \(\mathscr{C}_{t_{2}}\). Moreover, when \(\mathscr{C}_{t_{1}}\) and \(\mathscr{C}_{t_{2}}\) are combined, both are taken at their maximum length by default. The fourth column of Table I shows the range of the optimal additive codes derived with Lemma 3. In addition, most perform better than the optimal \(3\)-dimensional linear codes in [1], but we do not list them here due to space limitations. Furthermore, we also construct a number of good additive codes, all of which are better than their linear counterparts in [1]. We give their specific constructions in Table V in the Appendix and compare their parameters with the best-known linear codes of [1] in Tables II and III to illustrate the effectiveness of the methods in this paper. In particular, Table II compares codes of the same length \(n\) and dimension \(k\); our additive codes have a greater distance. Listed in Table III are additive codes with higher information rates than the best-known linear codes, i.e., they have twice as many codewords as the corresponding linear codes for the same code length \(n\) and distance \(d\). Some of the additive codes in Tables II, III, IV, V and VI are derived codes and are marked with the following abbreviations to save space:

* P: Puncture Code;
* Ex: Extend Code;
* S: Shorten Code;
* D: Dual Code;
* Au: Augment Code (add \(\mathbf{1_{n}}\));
* DoubleAu: Augment Twice Code (add \(\mathbf{1_{n}}\) and \(\mathbf{w_{n}}\));
* X: Additive Construction X.

## V Some quaternary ACD codes have larger distance than LCD codes

This section focuses on constructing good ACD codes with greater distance than the best-known quaternary LCD codes in [30, 31, 32, 33]. In [29], the authors identify necessary and sufficient conditions for \(1\)-generator quasi-cyclic codes to be symplectic LCD, as shown in Lemma 7.

**Lemma 7**: _([29]) Let \(\mathscr{C}\) be a \(1\)-generator quasi-cyclic code as in Definition 1 of index \(\ell\). Taking \(\Lambda=\sum\limits_{j=0}^{m-1}(f_{j}(x)\bar{f}_{m+j}(x)-f_{m+j}(x)\bar{f}_{j}(x))\), \(\mathscr{C}\) is a symplectic LCD code if and only if the following equations hold:_

\[\begin{array}{c}g(x)=\tilde{g}(x),\\ gcd(\Lambda,\frac{x^{n}-1}{g(x)})=1.\end{array} \tag{7}\]

**Corollary 2**: _Let \(\mathscr{C}\) be a \(1\)-generator quasi-cyclic code as in Definition 1 of index \(\ell\). If the generator of \(\mathscr{C}\) satisfies Theorem 1 and Lemma 7, then there exists a symplectic LCD code with parameters_

\[\left[\ell n,n-deg(g(x)),\geq m\cdot\left\lceil\frac{q+1}{q}d(g(x))\right\rceil\right]_{q}^{s}.\]

**Remark 3**: _Symplectic LCD codes are equivalent to ACD codes, so Corollary 2 also reveals the existence of ACD codes with parameters \(\left(mn,(n-deg(g(x)))/2,\geq m\cdot\left\lceil\frac{q+1}{q}d(g(x))\right\rceil\right)_{q^{2}}\)._
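Checking the coprimality conditions in Lemmas 2 and 7 only requires polynomial division and gcd; a binary sketch (ours, with the same bitmask encoding as before) is:

```python
# Sketch: Euclidean gcd for F_2[x] polynomials as int bitmasks, used to test
# conditions such as gcd(f0 + f1, (x^n - 1)/g) = 1 in Lemmas 2 and 7.

def poly_divmod(a: int, b: int):
    """Return (quotient, remainder) of a / b over F_2[x]; b must be nonzero."""
    q, db = 0, b.bit_length() - 1
    while a and a.bit_length() - 1 >= db:
        shift = a.bit_length() - 1 - db
        q |= 1 << shift
        a ^= b << shift
    return q, a

def poly_gcd(a: int, b: int) -> int:
    while b:
        a, b = b, poly_divmod(a, b)[1]
    return a

def check_condition(g: int, h: int, n: int) -> bool:
    """True iff gcd(h, (x^n - 1)/g) = 1, assuming g divides x^n - 1."""
    quotient, rem = poly_divmod((1 << n) | 1, g)   # x^n - 1 = x^n + 1 over F_2
    assert rem == 0
    return poly_gcd(h, quotient) == 1
```

For Example 3, for instance, one would call `check_condition(g, f0 ^ f1, 63)`, since addition of \(f_{0}\) and \(f_{1}\) over \(\mathbb{F}_{2}\) is a bitwise XOR.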
**Example 7**: _Let \(q=2\), \(n=13\). Take \(g(x)=x+1\), which generates an optimal binary LCD code with parameters \([13,12,2]_{2}\). Selecting \(f_{1}(x)=x^{12}+x^{9}+x^{8}+x^{7}+x^{6}+x^{5}+x^{4}+x^{3}\), \(f_{2}(x)=x^{12}+x^{9}+x^{8}+x^{6}+x\), and \(f_{3}(x)=x^{10}+x^{9}+x^{8}+x^{7}+x^{6}+x^{3}+x^{2}+x+1\), \(([g(x)],[g(x)f_{1}(x)],[g(x)f_{2}(x)],[g(x)f_{3}(x)])\) generates a symplectic \([52,12,\geq 6]_{2}^{s}\) LCD code. Using Magma [34], one can compute the true symplectic distance of this code to be \(15\). Therefore, we obtain an ACD code with parameters \((26,6,15)_{4}\), which has a larger minimum distance than the best-known LCD code \([26,6,14]_{4}\) in [33]._

The following lemma helps to construct new ACD codes by combining trace Hermitian self-orthogonal codes with ACD codes.

**Lemma 8**: _If there exist an \((n_{1},k,d_{1})_{q^{2}}\) ACD code \(\mathscr{C}_{a1}\) and an \((n_{2},k,d_{2})_{q^{2}}\) additive trace Hermitian self-orthogonal code \(\mathscr{C}_{a2}\), then there also exists an \((n_{1}+n_{2},k,\geq d_{1}+d_{2})_{q^{2}}\) ACD code._

_Proof:_ Denote the generator matrices of \(\mathscr{C}_{a1}\) and \(\mathscr{C}_{a2}\) by \(G_{a1}\) and \(G_{a2}\), respectively. Let \(\Phi^{-1}(G_{a1})=(A\mid B)\), \(\Phi^{-1}(G_{a2})=(C\mid D)\), and \(G_{a}=(A,C,B,D)\); then

\[G_{a}\begin{pmatrix}0&I_{n_{1}+n_{2}}\\ -I_{n_{1}+n_{2}}&0\end{pmatrix}G_{a}^{T}=\begin{pmatrix}-B,-D,A,C\end{pmatrix}\begin{pmatrix}A^{T}\\ C^{T}\\ B^{T}\\ D^{T}\end{pmatrix}=(-BA^{T}+AB^{T})+(-DC^{T}+CD^{T}).\]

Since \(\mathscr{C}_{a1}\) is an ACD code and \(\mathscr{C}_{a2}\) is a trace Hermitian self-orthogonal code, we have \(Rank(-BA^{T}+AB^{T})=2k\) and \(-DC^{T}+CD^{T}=\mathbf{0}\). Therefore, \(\Phi(G_{a})\) generates an ACD code with parameters \((n_{1}+n_{2},k,\geq d_{1}+d_{2})_{q^{2}}\). \(\square\)

_Example 8:_ Let \(q=2\), \(n_{1}=12\), \(n_{2}=17\). Take \(g_{1}(x)=x^{4}+1\), \(f_{1,0}(x)=x^{11}+x^{8}+x^{6}+x^{5}+x^{4}+1\), \(f_{1,1}(x)=x^{11}+x^{8}+x^{4}\), and \(g_{2}(x)=x^{9}+x^{8}+x^{6}+x^{3}+x+1\), \(f_{2,0}(x)=x^{16}+x^{14}+x^{13}+x^{12}+x^{10}+x^{8}+x^{7}+x^{5}+x^{4}+x^{3}+x^{2}+x\), \(f_{2,1}(x)=x^{16}+x^{12}+x^{5}+x^{4}+x^{3}+x^{2}+x\). Then \(([g_{1}(x)f_{1,0}],[g_{1}(x)f_{1,1}])\) and \(([g_{2}(x)f_{2,0}],[g_{2}(x)f_{2,1}])\) generate a symplectic LCD \([24,8,7]_{2}^{s}\) code \(\mathscr{C}_{s1}\) and a symplectic self-orthogonal \([34,8,12]_{2}^{s}\) code \(\mathscr{C}_{s2}\), respectively. Therefore, \(\Phi(\mathscr{C}_{s1})\) is a \((12,4,7)_{4}\) ACD code and \(\Phi(\mathscr{C}_{s2})\) is an additive \((17,4,12)_{4}\) trace Hermitian self-orthogonal code. In accordance with Lemma 8, \((\Phi(\mathscr{C}_{s1})\mid\Phi(\mathscr{C}_{s2}))\) is an ACD code with parameters \((29,4,\geq 19)_{4}\), which outperforms the \([29,4,18]_{4}\) LCD code in [33]. Further, choosing \(g_{3}(x)=x^{2}+1\), \(f_{3,0}(x)=x^{7}+x^{6}+x^{5}+x^{2}\), and \(f_{3,1}(x)=x^{9}+x^{6}+x^{5}+x^{3}+1\), \(([g_{3}(x)f_{3,0}],[g_{3}(x)f_{3,1}])\) generates a symplectic LCD \([20,8,6]_{2}^{s}\) code \(\mathscr{C}_{s3}\). By Lemma 8, \((\Phi(\mathscr{C}_{s2})\mid\Phi(\mathscr{C}_{s3}))\) is a \((27,4,\geq 18)_{4}\) ACD code, which also performs better than the \([27,4,17]_{4}\) LCD code in [33].
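The two hypotheses of Lemma 8 can be tested numerically: a code is symplectic self-orthogonal when the symplectic Gram matrix of its generators vanishes, and symplectic LCD when that Gram matrix has full rank over \(\mathbb{F}_{2}\). A sketch (ours), with each length-\(2n\) generator stored as an int whose low \(n\) bits are the \(a\)-part and high \(n\) bits the \(b\)-part:

```python
# Sketch: symplectic Gram matrix over F_2 and its rank, for testing the
# hypotheses of Lemma 8 (rows given as ints in an (a | b) bit layout).

def symplectic_pair(u: int, v: int, n: int) -> int:
    """<u, v>_s = a_u . b_v + b_u . a_v (mod 2)."""
    mask = (1 << n) - 1
    au, bu = u & mask, u >> n
    av, bv = v & mask, v >> n
    return (bin(au & bv).count("1") + bin(bu & av).count("1")) & 1

def gram_rank(rows, n: int) -> int:
    """Rank over F_2 of the symplectic Gram matrix of the given generators."""
    gram = [sum(symplectic_pair(u, v, n) << j for j, v in enumerate(rows))
            for u in rows]
    rank = 0
    for col in range(len(rows)):
        pivot = next((i for i in range(rank, len(gram)) if (gram[i] >> col) & 1),
                     None)
        if pivot is None:
            continue
        gram[rank], gram[pivot] = gram[pivot], gram[rank]
        for i in range(len(gram)):
            if i != rank and (gram[i] >> col) & 1:
                gram[i] ^= gram[rank]
        rank += 1
    return rank

# A code is symplectic self-orthogonal iff gram_rank(...) == 0, and
# symplectic LCD iff gram_rank(...) == len(rows) (= 2k).
```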
_Lemma 9:_ (Construction X of ACD codes) If there are two ACD codes \(\mathscr{C}_{a2}\subset\mathscr{C}_{a1}\) with parameters \((n,k_{2},d_{2})_{q^{2}}\subset(n,k_{1},d_{1})_{q^{2}}\), where \(d_{2}>d_{1}\), and \(\mathscr{C}_{a3}\) is an additive trace Hermitian self-orthogonal code with parameters \((l,k_{1}-k_{2},\delta)_{q^{2}}\), then there exists an \((n+l,k_{1},\min\{\delta+d_{1},d_{2}\})_{q^{2}}\) ACD code.

_Proof:_ Denote the generator matrices of \(\mathscr{C}_{a1}\), \(\mathscr{C}_{a2}\) and \(\mathscr{C}_{a3}\) by \(G_{a1}\), \(G_{a2}\) and \(G_{a3}\), respectively. Since \(\mathscr{C}_{a2}\subset\mathscr{C}_{a1}\), we can write \(G_{a1}=\begin{pmatrix}G_{a2}\\ G_{ax}\end{pmatrix}\). Then construct \(G_{X}=\begin{pmatrix}G_{ax}&G_{a3}\\ G_{a2}&0\end{pmatrix}\). By Lemma 8, \(G_{X}\) generates an ACD code with parameters \((n+l,k_{1},\min\{\delta+d_{1},d_{2}\})_{q^{2}}\). \(\square\)

_Example 9:_ Let \(q=2\), \(n=12\). Take \(g(x)=x^{4}+1\), and select \(f_{1}(x)=x^{7}+x^{6}+x^{5}+x^{4}+x^{3}+x^{2}+x+1\), \(f_{2}(x)=x^{9}+x^{8}+x^{5}\), \(f_{3}(x)=x^{10}+x^{9}+x^{6}+x^{5}+x^{4}+x^{3}+x^{2}\). Then \(([g(x)],[g(x)f_{1}(x)],[g(x)f_{2}(x)],[g(x)f_{3}(x)])\) generates a symplectic LCD \([48,8,16]_{2}^{s}\) code, which corresponds to an ACD code with parameters \((24,4,16)_{4}\). Using Magma [34], one easily gets a subcode with parameters \((24,1,22)_{4}\) by taking out two codewords of weight \(22\) from \((24,4,16)_{4}\). The generator matrix of the additive \((24,1,22)_{4}\) code is as follows.

\[G_{sub}=\begin{pmatrix}1\,ww^{2}w\;w\;0\;1\,ww^{2}w\;w\;0\;0\,w^{2}w^{2}\;1\;\;1\;wwww^{2}w^{2}\;1\;\;1\;ww\\ w\,0\;1\;\;ww^{2}ww\;0\;\;1\;ww^{2}w\;w\;w\;w^{2}w^{2}\,1\;1\;\;w\;w\;w^{2}w^{2}\,1\end{pmatrix}.\]

According to Lemma 9, selecting the trace Hermitian self-dual \((6,3,4)_{4}\) code3 as the auxiliary code leads to an ACD code with parameters \((30,4,20)_{4}\), which has a larger minimum distance than \((30,4,19)_{4}\) in [33].

Footnote 3: This code can be derived from the \([[6,0,4]]_{2}\) quantum code in [1].

_Lemma 10:_ If \(\mathscr{C}_{a}\) is an ACD code with parameters \((n,k,d)_{q^{2}}\), then the following ACD codes also exist:

(1) For \(i\geq 1\), \((n+i,k,\geq d)_{q^{2}}\) (ACD Extend);

(2) For \(i\leq k\), \((n-i,k-i,\geq d)_{q^{2}}\) (ACD Shorten);

(3) For \(i\leq d\), \((n-i,k,\geq d-i)_{q^{2}}\) (ACD Puncture).

_Proof:_ For (1), it suffices to juxtapose \(\mathscr{C}_{a}\) with all-zero columns, or with an additive self-orthogonal code. For (2), let \(G_{a}\) denote the generator matrix of \(\mathscr{C}_{a}\), and let \(G_{a}^{\prime}\) be obtained by deleting the first row and column of \(G_{a}\). Since ACD codes must have integer dimension, the generator matrix of an ACD code has an even number of rows; therefore, \(G_{a}^{\prime}\) generates an additive code \(\mathscr{C}_{a}^{\prime}\) with a \(0.5\)-dimensional hull \(\mathcal{H}\). Removing \(\mathcal{H}\) from \(\mathscr{C}_{a}^{\prime}\) gives an ACD code \(\mathscr{C}_{a}^{\prime\prime}\). As the two-step operation from \(\mathscr{C}_{a}\) to \(\mathscr{C}_{a}^{\prime\prime}\) is equivalent to shortening, the parameters of \(\mathscr{C}_{a}^{\prime\prime}\) are \((n-1,k-1,\geq d)_{q^{2}}\); repeating this process yields ACD codes with parameters \((n-i,k-i,\geq d)_{q^{2}}\). Finally, since shortening \(\mathscr{C}_{a}\) is equivalent to puncturing \(\mathscr{C}_{a}^{\perp}\), (3) also holds. \(\square\)

With a computer-aided search, we also construct some good ACD codes of lengths ranging from \(22\) to \(30\). We give their generators in Table VI in the Appendix, and compare them with the best quaternary LCD codes of [30, 31, 32, 33] in Table IV. The results show that our ACD codes have greater distances for the same length and dimension. The symbols in Tables IV and VI are labeled as in Tables II and III.

## VI Conclusion

In this work, we determine a lower bound on the symplectic distance of \(1\)-generator quasi-cyclic codes of even index and give several combination and derivation constructions for additive codes. To verify the applicability of our methods, with the help of Magma [34], we construct many good additive codes and ACD codes that are better than the best-known linear codes in [1] and the best LCD codes in [30, 31, 32, 33], respectively. Our results show that additive codes can improve on linear codes of the same length and dimension whenever the latter are not optimal. Notably, most of the additive codes in this paper can also be considered additive cyclic codes. Therefore, it will be a fascinating problem to study the construction of additive codes using additive cyclic codes [41, 42, 43] in the future.

## Appendix

In order to save space, we give the generators of quasi-cyclic codes in abbreviated form in Tables V and VI, presenting the coefficient polynomials in ascending order, with an exponent indicating a run of identical consecutive coefficients. For example, the polynomial \(1+x^{2}+x^{3}+x^{4}\) over \(\mathbb{F}_{2}\) is denoted as \(101^{3}\).
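To make the abbreviation concrete, here is a small parser (ours; it assumes an ASCII rendering in which the superscript run length is written after a caret, e.g. \(101^{3}\) as `"101^3"`):

```python
# Sketch: expand the run-length coefficient notation of the Appendix,
# e.g. "101^3" -> [1, 0, 1, 1, 1], i.e. 1 + x^2 + x^3 + x^4 over F_2.

def expand(notation: str) -> list:
    coeffs, i = [], 0
    while i < len(notation):
        c = int(notation[i])
        i += 1
        run = 1
        if i < len(notation) and notation[i] == "^":
            j = i + 1
            while j < len(notation) and notation[j].isdigit():
                j += 1
            run = int(notation[i + 1:j])
            i = j
        coeffs.extend([c] * run)
    return coeffs

assert expand("101^3") == [1, 0, 1, 1, 1]
```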
## Acknowledgments

The authors would like to thank Markus Grassl for earlier discussions that greatly inspired our thinking about additive codes.
2301.06431
Broadband plasmonic nanoantennas for multi-color nanoscale dynamics in living cells
Recently, the implementation of plasmonic nanoantennas has opened new possibilities to investigate the nanoscale dynamics of individual biomolecules in living cells. However, studies have so far been restricted to single molecular species, as the narrow wavelength resonance of gold-based nanostructures precludes the simultaneous interrogation of different fluorescently labeled molecules. Here we exploited broadband aluminum-based nanoantennas carved at the apex of near-field probes to resolve nanoscale dynamic molecular interactions on intact living cell membranes. Through multicolor excitation, we simultaneously recorded fluorescence fluctuations of dual-color labeled transmembrane receptors known to form nanoclusters in living cells. Fluorescence cross-correlation studies revealed transient interactions between individual receptors in regions of ~60 nm. Moreover, the high signal-to-background ratio provided by the antenna illumination allowed us to directly detect fluorescent bursts arising from the passage of individual receptors underneath the antenna. Remarkably, by reducing the illumination volume below the characteristic receptor nanocluster sizes, we resolved molecular diffusion within nanoclusters and distinguished it from nanocluster diffusion. Spatiotemporal characterization of transient interactions between molecules is crucial to understand how they communicate with each other to regulate cell function. Our work demonstrates the potential of broadband photonic antennas to study multi-molecular events and interactions in living cell membranes with unprecedented spatiotemporal resolution.
Maria Sanz-Paz, Thomas S. van Zanten, Carlo Manzo, Mathieu Mivelle, Maria F. Garcia-Parajo
2023-01-16T13:41:16Z
http://arxiv.org/abs/2301.06431v2
# Broadband plasmonic nanoantennas for multi-color nanoscale dynamics in living cells

_Maria Sanz-Paz\({}^{{\dagger},\S,\#}\), Thomas S. van Zanten\({}^{{\dagger},\ddagger,\#}\), Carlo Manzo\({}^{{\dagger},\xi}\), Mathieu Mivelle\({}^{*}\), Maria F. Garcia-Parajo\({}^{{\dagger},\bot}\)_

\({}^{{\dagger}}\)ICFO-Institut de Ciencies Fotoniques, The Barcelona Institute for Science and Technology, 08860 Barcelona, Spain; \({}^{\S}\)Department of Physics, University of Fribourg, Chemin du Musee 3, Fribourg CH-1700, Switzerland; \({}^{\ddagger}\)National Centre for Biological Sciences, Bangalore, India; \({}^{\xi}\)Facultat de Ciencies, Tecnologia i Enginyeries, Universitat de Vic - Universitat Central de Catalunya, C. de la Laura 13, 08500 Vic, Spain; \({}^{*}\)Sorbonne Universite, CNRS, Institut des NanoSciences de Paris, UMR 7588, 75005 Paris, France; \({}^{\bot}\)ICREA, Pg. Lluis Companys 23, 08010 Barcelona, Spain; \({}^{\#}\)Equally contributing authors

###### Abstract

Recently, the implementation of plasmonic nanoantennas has opened new possibilities to investigate the nanoscale dynamics of individual biomolecules in living cells. However, studies have so far been restricted to single molecular species, as the narrow wavelength resonance of gold-based nanostructures precludes the simultaneous interrogation of different fluorescently labeled molecules. Here we exploited broadband aluminum-based nanoantennas carved at the apex of near-field probes to resolve nanoscale dynamic molecular interactions on intact living cell membranes. Through multicolor excitation, we simultaneously recorded fluorescence fluctuations of dual-color labeled transmembrane receptors known to form nanoclusters in living cells. Fluorescence cross-correlation studies revealed transient interactions between individual receptors in regions of \(\sim\)60 nm. Moreover, the high signal-to-background ratio provided by the antenna illumination allowed us to directly detect fluorescent bursts arising from the passage of individual receptors underneath the antenna. Remarkably, by reducing the illumination volume below the characteristic receptor nanocluster sizes, we resolved molecular diffusion within nanoclusters and distinguished it from nanocluster diffusion. Spatiotemporal characterization of transient interactions between molecules is crucial to understand how they communicate with each other to regulate cell function. Our work demonstrates the potential of broadband photonic antennas to study multi-molecular events and interactions in living cell membranes with unprecedented spatiotemporal resolution.

Bowtie nanoaperture antennas, Photonic antennas, Fluorescence cross-correlation spectroscopy, Nanoscale lipid dynamics, Plasma membrane organization, Receptor nanoclustering.

In recent years, the compartmentalization of biomolecules in space and time has emerged as a primary mechanism that regulates cellular function[1, 2, 3, 4, 5].
At the plasma membrane level, extensive research has demonstrated that multiple molecules, such as proteins and lipids, interact in a dynamic fashion, creating transient nanoscale compartments of functional activity[6, 7, 8, 9, 10]. Such findings were revealed owing to the recent development of different optical techniques aimed at improving both spatial and temporal resolution beyond that of conventional diffraction-limited optical methods[11, 12, 13, 14, 15, 10]. Yet, monitoring dynamic multi-molecular interactions in living cells at the nanoscale remains challenging. With the advancement of single-molecule and super-resolution approaches, multiple techniques have been implemented aiming to reduce the illumination volume set by diffraction, thus enabling single-molecule dynamic studies at high labeling conditions in living cells on the nanoscale. For instance, stimulated emission depletion microscopy (STED)[16] and metallic nanoapertures[17, 18, 19] reduce the illumination area down to 50-200 nm in diameter. However, in the case of STED, both the high laser powers required and the increased photobleaching constitute significant drawbacks for its routine application in living cells[20]. Moreover, although dual-color STED is nowadays widely used for super-resolution imaging in fixed cells[21], its extension to living cells for multi-molecular dynamic studies using fluorescence cross-correlation spectroscopy (FCCS) remains highly challenging. In the case of subwavelength apertures, there is a compromise between volume confinement and light throughput, since the effective power density decays as the fourth power of the aperture size. This severely limits their practical use in nanoscale studies, since aperture dimensions must be kept around 150-200 nm to provide sufficient excitation power. Nanoapertures and zero-mode waveguides (ZMW) have been used to perform simultaneous two-color FCCS in solution[22], on supported lipid bilayers[18] and in live cells[18]. However, they suffer from the same throughput drawbacks as the single-color approaches. Photonic antennas take advantage of electromagnetic resonances to enhance the optical field at nanometric dimensions[23], reducing the observation volume to a few zeptoliters and thus enabling the detection of single molecules in highly concentrated solutions[23, 24]. These exciting results have prompted the search for different antenna nanofabrication strategies and geometries for nanoscale studies under biologically relevant scenarios[25, 26, 27, 28, 29, 30, 31]. For instance, using nanoantenna arrays, we have been able to follow lipid diffusion in both model membranes and living cells[32, 33, 34]. Importantly, such measurements proved the existence of transient nanodomains in the membrane of living cells as small as 10 nm in size[32], demonstrating the great potential of photonic antennas to measure molecular diffusion and unravel nanoscale heterogeneities in intact living cell membranes. Nevertheless, despite their advantages, several challenges still limit the broad application of these devices for biological membrane studies. Because of the strong field gradients of the antenna near-field, the antenna needs to be positioned close to the fluorescent molecules (\(\sim\)10 nm) such that the largest enhancement and spatial confinement are reached.
This is commonly achieved by preparing lipid bilayers or seeding the cells on top of the antenna substrate, requiring careful sample preparation to minimize unwanted interactions between the sample and the underlying substrate that can potentially affect the diffusion of the molecules. In addition, most antenna designs are only resonant in a narrow wavelength range, restricting experiments to a single color. Here, we address these two limitations by implementing self-standing broadband photonic antennas fabricated at the apex of near-field scanning probes. We show that, by maintaining the antenna stationary within 10 nm above intact living cell membranes, lipid diffusion can be recorded in regions of \(\sim\)50 nm in size. We further demonstrate the capability of broadband antennas for FCCS at the nanoscale in living cells. Using this approach we resolve receptor interactions within nanoclusters and, importantly, discriminate molecular diffusion within nanoclusters from nanocluster diffusion in the plasma membrane of living cells.

## Results and Discussion

In our experiments, photonic antennas were fabricated at the apex of tapered near-field probes and relied on a near-field scanning optical microscope (NSOM) for 3D positioning control of the antenna over the living cell membrane (Figure 1A). Laser light (\(\lambda=488\) nm and/or \(\lambda=633\) nm) was coupled to the back end of the near-field probe and guided toward the antenna. The near-field light exiting the antenna excited the sample, which was maintained stationary with respect to the antenna. Fluorescence intensity fluctuations arising from the passage of molecules diffusing through the antenna illumination volume were collected using a high-NA objective (NA = 1.3), filtered out from the excitation light, and sent to two single-photon counting avalanche photodiodes (APDs) arranged for dual-color spectral detection. Two photon-counting units were used to record the fluorescent photon arrival times, which were then processed by a software correlator[35]. For the antenna design, we chose bowtie nanoaperture antennas (BNAs) carved on aluminum-coated optical fibers using a focused ion beam (FIB)[36]. Our fabrication approach allows for extreme reproducibility of BNA probes, with a gap between the metallic arms of \(\sim\)50 nm[36]. Moreover, BNAs provide optical throughputs of \(\sim\)10\({}^{-3}\), three orders of magnitude larger than circular nanoaperture probes of similar dimensions[36]. For accurate position control of the antenna over the cell membrane surface, we relied on a shear-force feedback loop with high sensitivity under liquid conditions[37]. The feedback loop maintained the antenna-sample distance at \(\sim\)10 nm with an error of \(\pm\)1 nm under liquid conditions (Figure S1). This approach has two advantages as compared to antennas fabricated on substrates. First, because of the controlled distance separation, it minimizes unwanted sticky interactions between the antenna and the membrane that might alter the diffusion of molecules. Second, diffusing fluorescent molecules experience the same degree of near-field enhancement and confinement by the antenna, as opposed to antenna substrates, in which distance variations might occur due to sample preparation imperfections and/or axial membrane fluctuations. Indeed, we occasionally observed membrane movements as high as 25 nm over 10 s (Figure S1) that our feedback loop was able to track and compensate for with an accuracy of \(\pm\)1 nm.
This shows that our overall approach maintains a constant axial distance between the membrane and the antenna despite potential membrane fluctuations during the measurements.

Figure 1: **Operation and assessment of the optical performance of a photonic antenna probe to measure molecular diffusion on living cell membranes.****(A)** Schematics of the experimental setup. The BNA is engineered at the apex of an NSOM probe whose 3D position with respect to the sample is controlled with nanometer precision. The antenna is kept stationary with respect to the cell membrane and illuminates a nanoscopic area. Fluorescence fluctuations from diffusing molecules are collected via the objective and sent to detectors for multicolor detection. The top inset shows a representative SEM image of a BNA probe (scale bar: 100 nm). The bottom inset displays, as an example, a dual-color confocal image of a living CHO cell attached to the substrate, with surface receptors labeled with two different fluorophores (scale bar: 5 \(\upmu\)m). **(B)** FDTD simulations of the total electric field at \(\lambda=488\) nm (upper panel set) and \(\lambda=633\) nm (lower panel set) excitation, together with the experimentally obtained fluorescence intensity maps from two spectrally different 20 nm beads excited by a BNA under the two main orthogonal excitations. For both wavelengths, the field is maximally confined and enhanced for excitation polarization transversal to the BNA gap. Scale bar: 100 nm. **(C)** Experimentally measured fluorescence intensity from 20 nm beads excited by a 50 nm gap BNA antenna probe as a function of the axial probe-sample distance under different wavelength excitations and for a polarization transversal to the BNA. The solid lines represent the mean, and the shadowed areas correspond to the standard deviation from multiple retraction curves.

To assess the spectral response of the BNAs, we performed FDTD simulations for different antenna gap sizes. Consistent with our earlier simulations[36, 38], the response of the BNA is broadband over the whole range of the visible spectrum regardless of the gap size (Figure S2), showing its potential for multicolor excitation with comparable enhancement and confinement, which opens the possibility of FCCS experiments at the nanoscale. We further performed FDTD simulations to determine the (_x,y_) near-field intensity distributions of the BNAs for different wavelength excitations and optical fields transversally or longitudinally polarized with respect to the BNA gap (Figure 1B). The simulations assume a BNA size of 300x300 nm\({}^{2}\) and a gap size of 30 nm. For both ends of the visible spectrum (\(\lambda=488\) and \(\lambda=633\) nm), the field is highly confined and enhanced in the gap region of the BNA for a transversally polarized optical field (Figure 1B). However, for longitudinally polarized excitation and regardless of the wavelength used, the resonance is lost, and the field spatially delocalizes away from the gap (Figure 1B). To experimentally validate the simulations, we imaged 20 nm beads embedded in a thin polymer layer using a 50 nm gap BNA probe for the two excitation polarizations and wavelengths. In agreement with our simulations, (_x,y_) fluorescence distributions obtained from individual beads showed a larger enhancement and confinement for transversally polarized excitation regardless of the wavelength used (Figure 1B), confirming the broadband resonant character of the BNA.
In addition, the field enhancement at both excitation wavelengths is directly estimated by calculating the ratio of the field detected for transversal and longitudinal polarizations, resulting in a 3.3-fold increase for \(\lambda=633\) nm and a 1.7-fold increase for \(\lambda=488\) nm. This larger enhancement measured at \(\lambda=633\) nm agrees well with the expected spectral response of our BNA, which is about two-fold higher at \(\lambda=633\) nm as compared to \(\lambda=488\) nm (see Figure S2 for a 50 nm gap). To evaluate the degree of confinement of the electric field in the axial direction for the two excitation wavelengths, the fluorescence intensity of individual beads _vs._ the antenna-sample axial distance was recorded (Figure 1C and Figure S3). A single exponential fitting of the transversally excited BNA (Figure 1C) yields an axial field penetration (1/e) of (\(66.6\pm 0.6\)) nm at \(\lambda=633\) nm and (\(72.0\pm 1.0\)) nm at \(\lambda=488\) nm. Notably, the standard deviation obtained from multiple approach-retraction experiments is very small, demonstrating the accurate axial control of the antenna position. Furthermore, taking the experimentally obtained confinement dimensions, we estimate the excitation volume of the BNA probe to be \(\sim\)5\(\cdot\)10\({}^{5}\) nm\({}^{3}\), more than two orders of magnitude smaller than the typical confocal volume. Overall, these results confirm the broadband response and nanoscale confinement of BNAs, highlighting their suitability for nanoscale multicolor experiments. To first demonstrate the feasibility of BNA probes for studying lateral mobility in the membrane of living cells, we used Atto647N-conjugated phosphoethanolamine (PE) lipids [16, 19]. CHO cells were allowed to spontaneously adhere to a glass coverslip for 48 hours, and the fluorescent PE lipids were incorporated into the cell membrane (100-300 nM PE/BSA; two orders of magnitude higher labeling concentrations as compared to confocal) as described previously [16, 19]. Fluorescence fluctuations from diffusing PE lipids were recorded using a BNA probe with a nominal gap size of 50 nm, excited at \(\lambda=633\) nm. For longitudinal BNA excitation, intensity bursts below 100 kHz (background \(\sim\)30 kHz) were typically recorded (Figure 2A). In strong contrast, for transversal BNA excitation, intensity bursts of up to 600 kHz were detected (Figure 2B). Moreover, burst durations, i.e., the passage time of the lipids through the antenna illumination area, were much shorter for transversally polarized excitation as compared to longitudinal. These two effects (increased fluorescence and shorter burst duration) confirm the polarization-dependent enhancement and spatial confinement provided by BNAs directly measured on living cells. Fluorescence time traces of at least 5 seconds in length were autocorrelated for different experiments using \(G(\tau)=\langle F(t)\cdot F(t+\tau)\rangle/\langle F(t)\rangle^{2}\), where \(\tau\) is the delay (lag) time and \(\langle\) \(\rangle\) indicates time averaging. Multiple autocorrelation functions (ACFs) were averaged and normalized for each excitation condition, i.e., confocal and the two BNA excitations (Figure 2C). A clear shift of the ACF curves towards shorter time lags was obtained when going from confocal to BNA transversal excitation.
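For reference, a minimal numpy sketch of this estimator (our illustration; the paper itself uses a dedicated software correlator on photon arrival times[35]) applied to a binned intensity trace:

```python
# Sketch: intensity autocorrelation G(tau) = <F(t) F(t+tau)> / <F(t)>^2
# computed on a binned fluorescence trace (numpy array of counts per bin).
import numpy as np

def acf(trace: np.ndarray, max_lag: int) -> np.ndarray:
    mean_sq = trace.mean() ** 2
    return np.array([
        (trace[: trace.size - lag] * trace[lag:]).mean() / mean_sq
        for lag in range(1, max_lag + 1)
    ])
```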
Since PE diffuses randomly within the membrane [19, 32], the shortening in the diffusion times obtained upon BNA transversal excitation results from the reduced illumination area provided by the antenna gap. To statistically confirm these results, we generated ACF curves from individual fluorescence traces and measured the amplitude of the ACF at \(G(0)\) and the time at which the ACF decays to half of its amplitude, \(\tau_{1/2}\). \(G(0)\) inversely scales with the apparent number of fluorescing molecules in the illumination area \(N\), while \(\tau_{1/2}\) reports on the characteristic diffusion time of the lipids through the illumination area [16, 19]. The distributions of \(N\) and \(\tau_{1/2}\) values over multiple ACFs are shown in Figure 2D. For a longitudinally excited BNA, the mean number of PE lipids and average transit time are \(N=8.5\pm 1.8\) and \(\tau_{1/2}=(12\pm 3)\) ms, respectively. For transversal BNA excitation, the PE diffusion times become much shorter, with a mean of \(\tau_{1/2}=(1.8\pm 0.5)\) ms (Figure 2D). Moreover, for randomly diffusing molecules such as PE [19, 32], a reduction in the illumination area should translate into a proportional reduction in the number of molecules, \(N\). Using the measured transit times we expect a 12/1.8 = 6.7-fold reduction in the number of molecules. Nevertheless, the average number of molecules obtained for transversal excitation is \(N\) = 4\(\pm\)2 (Figure 2D), i.e., a reduction of only 2.1-fold with respect to longitudinal excitation. This apparent discrepancy can be well understood by the fact that, for transversal BNA excitation, the antenna is resonant and the field is enhanced, leading to higher intensity emission from the fluorophores and, thus, an effectively higher \(N\). The field enhancement can thus be directly estimated from the ratio between the expected reduction of \(N\) and the experimentally obtained values, yielding a 6.7/2.1 = 3.2-fold fluorescence enhancement. This value agrees excellently with the 3.3-fold enhancement obtained from measuring beads (Figure 1B), demonstrating that the antenna performance is fully maintained even in complex environments such as living cell membranes.

Figure 2: **Single lipid mobility in living cell membranes at the nanometer scale recorded with a BNA probe.****(A, B)** Representative fluorescence time traces (1 ms bin) of Atto647N-conjugated PE diffusing in a living cell membrane for the two orthogonal excitation polarization conditions (shown in the insets) of the BNA at \(\lambda=633\) nm. Both traces were recorded at the same membrane position. The insets display 200 ms zoom-ins of different representative bursts with a 50 \(\upmu\)s bin. Note that the overall constant background in both fluorescence time traces further indicates the fixed distance separation between the antenna and the cell membrane maintained by the feedback loop. **(C)** Normalized ACF curves of PE-Atto647N diffusing on the cell membrane under confocal (black squares), longitudinally excited BNA (blue triangles), and transversally excited BNA gap (magenta circles) illumination. The dashed lines correspond to the \(\tau_{1/2}\) values. **(D)** Distributions of the apparent number of molecules \(N\) and characteristic diffusion times obtained from individual traces. **(E)** 2D plot together with population distributions of burst duration _vs._ burst brightness (average background corrected) directly extracted from multiple fluorescence traces for longitudinal (blue, over 1488 bursts) and transversal (magenta, over 3643 bursts) excitation relative to the BNA gap.
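Extracting the two summary quantities from an ACF is straightforward; the sketch below (ours) assumes the fluctuation-normalized convention in which \(G(0)-1=1/N\), and takes \(\tau_{1/2}\) as the first lag at which the curve falls to half its zero-lag amplitude:

```python
# Sketch: apparent molecule number N and half-decay time tau_1/2 from an ACF
# sampled at lags `taus` (same convention as acf() above).
import numpy as np

def fcs_summary(taus: np.ndarray, g: np.ndarray):
    amplitude = g[0] - 1.0                 # shortest-lag value approximates G(0);
    n_molecules = 1.0 / amplitude          # assumes G(0) - 1 = 1/N
    half = 1.0 + amplitude / 2.0
    tau_half = taus[np.argmax(g <= half)]  # first lag where G falls to half
    return n_molecules, tau_half
```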
The large signal-to-background ratio afforded by BNA antennas allowed us to additionally perform burst analysis[32, 39] over multiple fluorescence trajectories. Here, individual bursts likely correspond to the passage of single PE lipids transiting through the BNA illumination area. We determined each burst's duration, i.e., transit time, and its background-subtracted intensity. Two discrete populations with different burst durations and intensities were unambiguously recovered for the two polarization excitation modes of the BNA (Figure 2E). Furthermore, a clear correlation between both parameters was obtained: shorter and brighter bursts for transversal BNA excitation, signatures of higher field confinement and enhancement. Indeed, a \(\sim\)30-fold shortening of the burst duration was obtained for transversal (10\({}^{-2}\)-10\({}^{1}\) ms, peak at 0.7 ms) _vs._ longitudinal (10\({}^{-1}\)-10\({}^{2}\) ms, peak at 20 ms) BNA excitation (Figure 2E). The peak values of the histograms are within the range of the \(\tau_{1/2}\) values derived from the ACFs (Figure 2D), although the latter correspond to average diffusion times over individual trajectories, whereas the burst duration histograms reveal the entire distribution of individually diffusing lipids. Furthermore, the effective confinement area provided by the BNA gap can be calculated directly from the recovered transit times, considering the reported diffusion coefficient of PE (0.5 \(\upmu\)m\({}^{2}\)/s)[16, 19]. We obtain a confinement diameter (assuming a circular illumination profile) between 42 nm (taking 0.7 ms from the burst analysis) and 68 nm (taking 1.8 ms as derived from the ACF curves), which is within the range of the 50 nm gap size as measured by SEM. Finally, the burst brightness provided a direct measure of the increased excitation intensity given by the BNA. The background-corrected burst intensity increased 11-fold, from a maximum value of 100 kHz to 1100 kHz (Figure 2E), upon changing the BNA excitation from longitudinal to transversal, similar to earlier reports[36, 40, 41]. In summary, these results demonstrate the well-maintained optical performance of photonic antennas for studies in living cell membranes.
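The arithmetic behind these diameters can be reproduced in a few lines (assuming free 2D diffusion, \(A=4D\tau\), and a circular spot, \(A=\pi d^{2}/4\)):

```python
# Worked check of the confinement diameters quoted above.
from math import pi, sqrt

D = 500.0                      # PE diffusion coefficient in nm^2/ms (0.5 um^2/s)
for tau in (0.7, 1.8):         # transit times in ms (burst analysis / ACF)
    area = 4 * D * tau         # A = 4*D*tau for free 2D diffusion, in nm^2
    d = sqrt(4 * area / pi)    # diameter of an equivalent circular spot
    print(f"tau = {tau} ms -> d = {d:.0f} nm")   # ~42 nm and ~68 nm
```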
We further explored the potential of our broadband antennas for multi-color fluorescence autocorrelation spectroscopy (FCS) and FCCS studies at the nanoscale in living cells. Here, we focused on the transmembrane receptor DC-SIGN, a pathogen recognition receptor that forms nanoclusters ranging from 100-400 nm on both dendritic and CHO cells[42, 43, 44, 45, 46, 47]. DC-SIGN nanoclustering has been reported to play a key role in the capture of a large variety of nanometric-sized viruses [43, 45, 46]. Although DC-SIGN nanoclustering has been studied using a broad range of techniques, including TEM[43, 46], super-resolution microscopy[42], fluorescence recovery after photobleaching (FRAP)[47] and single-particle tracking (SPT)[43], it is not yet clear whether DC-SIGN nanoclusters are stable in time or assemble/disassemble transiently. Moreover, it is still an open question whether DC-SIGN receptors are mobile within nanoclusters or whether the reported diffusion of DC-SIGN corresponds to the mobility of entire nanoclusters[43, 47]. To address some of these questions, we labeled DC-SIGN on stably transfected CHO cells at saturating conditions, using equimolar concentrations of Atto520- and Atto647N-conjugated single-chain antibodies. Dual-color FCS and FCCS experiments at the nanoscale were performed using a BNA probe (50 nm gap size) simultaneously excited with \(\lambda=488\) nm and \(\lambda=633\) nm (transversal polarization for both excitation wavelengths). Representative fluorescence traces recorded simultaneously in the two detection channels are shown in Figure 3A and Figure S4. Zoom-ins of coincident bursts indicate receptor co-diffusion within the same nanometric illumination area. Multiple single-color ACFs were normalized and averaged (Figure 3B). The shape of the ACF curves and the \(\tau_{1/2}\) values (17\(\pm\)6 ms and 10\(\pm\)2 ms for Atto647N and Atto520, respectively) are comparable, confirming similar confinement areas for both excitation wavelengths. Taking into account the transit times and the excitation area estimated above from the ACF curves for \(\lambda=633\) nm excitation (3600 nm\({}^{2}\)), we calculate an average diffusion coefficient for DC-SIGN of \(D=A/(4\tau)=5.3\cdot 10^{-2}\)\(\upmu\)m\({}^{2}\)/s, which is within the range of our previously published values[43]. We then generated cross-correlation functions (CCFs) (Figure 3C) for the traces shown in Figure 3A and Figure S4. Non-negligible cross-correlation amplitudes \(G_{x}\) were retrieved for both traces, indicating co-diffusion of receptors in the same nanoscale volume and, thus, nanoscale interaction. In addition, average transit times \(\tau_{x,1/2}\) of 10.2\(\pm\)1.8 ms and 19\(\pm\)8 ms were obtained for the two different curves, indicating heterogeneity in DC-SIGN diffusion, in agreement with earlier SPT results[43]. As a control, we also recorded time traces of DC-SIGN labeled with Atto520 together with the Atto647N-conjugated PE lipid analog. As expected, these CCF curves are almost flat and remain close to zero, consistent with a lack of cross-correlation between DC-SIGN and lipid diffusion, validating our FCCS measurements at the nanoscale.

Figure 3: **Simultaneous dual-color detection of receptor diffusion and FCCS at the nanoscale.****(A)** Representative fluorescent time trace (1 ms bin) of Atto520 (green) and Atto647N (red) conjugated to single-chain antibodies and bound to DC-SIGN, expressed on CHO cells. Trajectories were generated using transversally polarized excitation of a BNA probe simultaneously excited with \(\lambda=488\) nm and \(\lambda=633\) nm. Zoom-ins of some of the coincident bursts are shown in the insets. **(B)** Normalized ACFs corresponding to the fluorescent intensity traces of dual-labeled DC-SIGN. **(C)** CCFs of two different dual-color DC-SIGN traces (pink and purple diamonds). The orange curve corresponds to the cross-correlation of Atto520-DC-SIGN with Atto647N-PE, and is used as a negative control to show no specific co-diffusion. **(D, E)** Histograms of the co-diffusion times **(D)** and the cross-correlation amplitudes **(E)**, obtained by finding \(\tau_{x,1/2}\) from the curves in Figure S5A.

Multiple CCF curves of DC-SIGN obtained from different cells and/or membrane regions are shown in Figure S5A.
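The cross-correlation estimator parallels the single-color one; a sketch (ours, with the same binned-trace convention as `acf()` above):

```python
# Sketch: dual-color cross-correlation
#   G_x(tau) = <F_a(t) F_b(t+tau)> / (<F_a> <F_b>)
import numpy as np

def ccf(trace_a: np.ndarray, trace_b: np.ndarray, max_lag: int) -> np.ndarray:
    norm = trace_a.mean() * trace_b.mean()
    return np.array([
        (trace_a[: trace_a.size - lag] * trace_b[lag:]).mean() / norm
        for lag in range(1, max_lag + 1)
    ])
```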
The curves exhibit a broad distribution of cross-correlation times \(\tau_{x,1/2}\) and cross-correlation amplitudes at \(G_{x}(0)\) (Figure 3D, E). For comparison, the amplitude histogram of CCF curves from traces lacking cross-correlation is shown in Figure S5B. The broad range of characteristic \(\tau_{x,1/2}\) times obtained from the CCF curves can arise from both correlated motions of individual DC-SIGN receptors within the same nanocluster or correlated motion between different nanoclusters. In either case, the broad distribution of \(\tau_{x,1/2}\) indicates a range of interaction strengths of DC-SIGN receptors with their nano-environment that directly impinges on their diffusion behavior as two receptors, or receptor nanoclusters, co-diffuse. In the first case, the nano-environment constitutes the intra-nanocluster milieu, whereas, in the second case, it corresponds to the inter-nanocluster surrounding. Surprisingly, only a modest percentage (\(<30\%\)) of all the recorded trajectories exhibited cross-correlation amplitudes above the background. These results are at first sight unexpected, considering that a large majority of the DC-SIGN receptors partition in nanoclusters[42], so a high level of receptor co-diffusion was anticipated. To get more insight into these intriguing results and into the nature of the broad range of \(\tau_{x,1/2}\) values obtained from the FCCS curves, we performed burst analysis on individual DC-SIGN trajectories. We first measured the duration of individual bursts and calculated the diffusion coefficient (\(D\)) of each (Figure 4A). A broad range of \(D\) values was obtained, as expected for most transmembrane receptors[43, 48]. However, in contrast to our earlier results obtained using SPT[43], the \(D\) values obtained here are significantly higher (peak \(\sim\)0.5 \(\upmu\)m\({}^{2}\)/s here _vs._\(\sim\)0.1 \(\upmu\)m\({}^{2}\)/s from SPT measurements[43]). Moreover, while the \(D\) distribution from SPT measurements showed a long tail towards values \(<0.1\)\(\upmu\)m\({}^{2}\)/s[43], the \(D\) distribution obtained from burst analysis at the nanoscale shows a tail towards \(D\) values \(>0.5\)\(\upmu\)m\({}^{2}\)/s. Since our FCS experiments are performed with much higher temporal resolution (\(\sim\)3 \(\upmu\)s) as compared to SPT (typically 20-50 ms), the \(D\) values reported here most likely correspond to the diffusion of individual DC-SIGN molecules rather than to nanocluster diffusion as reported in Ref. [43]. With most DC-SIGN receptors contained within nanoclusters[42], these results therefore imply that DC-SIGN can diffuse _inside_ nanoclusters. Earlier experiments by Jacobson and co-workers indeed suggested that DC-SIGN nanoclusters are not fully packed and that certain lipids could freely diffuse within these nanoclusters[45]. To further support these observations, we measured the distribution of lag times between the end of one burst and the onset of the next, \(\tau_{off}\). We reasoned that if DC-SIGN is mostly organized within nanoclusters, its relative molecular density would be much larger inside nanoclusters than outside, which should be reflected in two markedly different inter-burst timescales. In agreement with this hypothesis, the histograms of \(\tau_{off}\) show a clear bimodal distribution (Figure 4B), with one population peaking at \(\sim\)5\(\cdot\)10\({}^{-3}\) s and a second one at \(\sim\)2 s, regardless of the label used.
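Given detected burst boundaries (e.g., from the likelihood analysis of [39]; the start/end lists here are hypothetical inputs), the off-time statistic is simply:

```python
# Sketch: off-times between consecutive bursts, tau_off = start_{i+1} - end_i.
def off_times(burst_starts, burst_ends):
    return [s - e for e, s in zip(burst_ends[:-1], burst_starts[1:])]
```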
We interpret these results based on the fact that the illumination area provided by the BNA (\(\sim\)50 nm) is much smaller than the reported average cluster size for DC-SIGN (\(\sim\)180 nm)[43]. Thus, when a nanocluster is present below the BNA, individual molecules inside the nanocluster diffuse quickly through the BNA gap, with short off-intervals between them, giving rise to the population of small \(\tau_{off}\) values. On the other hand, individual nanoclusters have a lower density on the membrane and diffuse much more slowly, so it takes a much longer time for a new nanocluster to arrive at the BNA illumination region, resulting in the population with longer \(\tau_{off}\) values (Figure 4C). The random diffusion of individual receptors within large nanoclusters also explains the low occurrence of cross-correlation curves obtained in our measurements. To the best of our knowledge, these experiments resolve, for the first time, molecular diffusion inside receptor nanoclusters and distinguish it from the diffusion of the nanoclusters themselves.

Figure 4: **Burst analysis reveals molecular diffusion of receptors inside nanoclusters as well as diffusion of nanoclusters on the plasma membrane of living cells.****(A)** Histogram of the diffusion coefficients of DC-SIGN calculated from the burst length duration for multiple Atto520 (green, 454 bursts) and Atto647N (red, 364 bursts) trajectories. **(B)** Distribution of \(\tau_{off}\) for the same trajectories as analyzed in **(A)**. **(C)** Schematic depiction of molecular diffusion inside nanoclusters (left) that gives rise to the short \(\tau_{off}\) timescales and the nanocluster diffusion (right) that leads to the long \(\tau_{off}\) timescale as probed by the BNA. The grey circle illustrates a nanocluster containing multiple DC-SIGN molecules (green and red dots) that diffuse within the illumination area provided by the BNA gap (yellow circle). An experimentally obtained trajectory is shown below (1 ms binning, \(\sim\)1 s length), denoting the two different \(\tau_{off}\) regimes.

In summary, we have demonstrated that probe-based broadband BNA photonic antennas are highly effective in obtaining the fast dynamics of membrane components, i.e., lipids and proteins, at the nanoscale in living cells. This is owing both to their large optical throughput and to the spatial confinement of the excitation field. In addition, we showed that BNAs provide comparable enhancement and spatial confinement to dimensions as small as \(\sim\)50 nm for different excitation wavelengths. We demonstrated FCCS at the nanoscale in living cells and exploited the broadband behavior of these photonic antennas to investigate nanoscale co-diffusion of membrane receptors in living cells. Importantly, because of the nanometric dimensions of the BNA gaps relative to the sizes of the nanoclusters under study, we reveal for the first time intermolecular diffusion inside nanoclusters as well as receptor nanocluster diffusion. Altogether, our work shows that broadband photonic antennas are promising candidates for quantitative multi-color studies in living cell membranes with ultra-high spatiotemporal resolution.

## Materials and Methods

_Antenna probe fabrication._ The BNA probes were fabricated as described previously[36]. Briefly, heat-pulled optical fibers were coated with 5 nm Ti and 150 nm Al. The coated fibers were milled by focused ion beam (FIB) to obtain a 500-700 nm diameter opening. This end-facet was subsequently coated with a high-quality Al layer of about 120 nm.
Finally, the BNA was directly milled face-on into the Al-coated end-facet. This fabrication method allowed for extreme reproducibility of BNA probes with gap regions around 50 nm[36].

_Optical setup._ For excitation in our combined NSOM/confocal setup, we used two lasers: a He-Ne laser at \(\lambda=633\) nm and an argon-krypton laser (Model 3060; Spectra-Physics, Santa Clara, CA) at \(\lambda=488\) nm. A combination of a polarizer and a \(\lambda/2\) waveplate for each separate excitation wavelength ensured control of the incoming polarization to the antenna. The fluorescence from single molecules diffusing through the excitation volume of the antenna was collected by a high-NA objective (oil, NA 1.3), split into two branches using a dichroic mirror (600 LP), individually filtered from the excitation light (Semrock 536/40 and 675/67) and detected by two single-photon counting avalanche photodiodes (APDs). A photon-counting unit (NI BCN-800) was used to record photon arrival time traces that were subsequently processed by a software correlator. A shear-force feedback system on the antenna probe, together with the piezo-electric sample stage, guaranteed close and constant distance regulation between the antenna and the cell membrane. The feedback system is based on a piezo-electric tuning fork that is operated in air while the end facet of the probe containing the BNA is immersed in liquid using the diving bell concept[37].

_FDTD simulations._ 3D numerical modeling of the antenna was based on finite-difference time-domain (FDTD) simulations. The simulations consider a volume spanning \(\pm 2.6\)\(\upmu\)m in \(x\) and \(y\) around the BNA end face. The refraction index and taper angle of the dielectric body of the probe were chosen to be 1.448 and 32\({}^{\circ}\), respectively, and the aluminum dielectric constant is given by the Drude model adapted for each wavelength considered. The BNA is located at x = y = z = 0. In the z direction, the simulation extends to 1 \(\upmu\)m in air and terminates at 7 \(\upmu\)m into the body of the probe. All six boundaries of the computation volume are terminated with convolutional perfectly matched layers to avoid parasitic unphysical reflections around the probe. The non-uniform grid resolution varies from 25 nm for portions at the periphery of the simulation to 5 nm for the region in the immediate vicinity of the BNA (\(\pm 200\) nm in x and y and \(-\)200 to 100 nm in z). Excitation was done by a linearly polarized Gaussian beam launched 7 \(\upmu\)m away from the tip body and propagating towards the BNA.

_Cells and sample preparation._ Chinese hamster ovary (CHO) cells were cultured in phenol-red free Dulbecco's Modified Eagle Medium (DMEM) with nutrient mixture F-12 (1:1) supplemented with 10% fetal calf serum and Antibiotic Antimycotic Solution (Gibco). For incorporation of the Atto647N-conjugated phosphoethanolamine lipid (PE) into the cell membrane, we followed previously published protocols[16, 19]. Bovine Serum Albumin (BSA)/PE analog complexes were prepared by dissolving the lipid analogs in CHCl\({}_{3}\)/MeOH (3:1). From the stock solution, 100 nM of lipid analog were dried under a stream of nitrogen and re-dissolved in 20 \(\upmu\)l of absolute ethanol. After adding 1 ml of defatted BSA (0.2 mM in DMEM), the solution was vigorously vortexed. Cells spontaneously adhered to a glass coverslip after 48 hours of incubation at 37 \({}^{\circ}\)C.
For PE analog incorporation, cells were washed with DMEM and incubated with BSA/PE analog complexes for 10 min at 25 \({}^{\circ}\)C, washed with DMEM, and prepared for observation. Typical concentrations of BSA/PE complexes were 100-300 nM. For the experiments on DC-SIGN, CHO cell lines stably expressing DC-SIGN wild type, containing a short C-terminal AU1 tag as already published[43], were cultured in Ham's F-12 medium (LabClinics) supplemented with 10% fetal calf serum and Antibiotic Antimycotic Solution (Gibco). Monovalent single-chain anti-human AU1 antibodies (mAbs) were generated from AU1 Ab (Covance) by reduction with Dithiothreitol (DTT, Invitrogen) according to the manufacturer's instructions. Reduced Abs were then labeled with either Atto520 or Atto647N according to standard protocols provided by the manufacturer. Glass coverslip-adhered CHO cells were incubated for 30 min at room temperature using equimolar concentrations of Atto520- and Atto647N-conjugated single-chain mAbs at saturating conditions to label all DC-SIGN receptors. Before imaging, extensive washing with serum-free medium was performed to remove non-bound mAbs.

_Burst analysis._ Fluorescence bursts were detected and quantified from unfiltered photon arrival-time recordings as previously described[39]. In brief, a likelihood-based algorithm was used to sequentially analyze photon recordings to test the null hypothesis (no burst, recording compatible with background noise) against the hypothesis that a fluorescence burst arises as a consequence of a fluorophore crossing the excitation volume. Background level and typical fluorophore intensity were estimated from the trace to be analyzed. Probabilities associated with false positives and missed-event errors were both set to 0.001[32, 49]. From all detected bursts, intensity and length were measured, as well as the time between consecutive bursts for the DC-SIGN experiments.

**Supporting Information** Supporting Information is available from the Wiley Online Library or from the author.

**Acknowledgements** The research leading to these results has received funding from the European Commission H2020 Program under grant agreement ERC Adv788546 (NANO-MEMEC), Government of Spain (Severo Ochoa CEX2019-000910-S), State Research Agency (AEI) PID2020-113068RB-I00 / 10.13039/501100011033 (to M.F.G.-P.), BES-2015-072189 (to M.S.-P.), grant RYC-2015-17896 funded by MCIN/AEI/10.13039/501100011033 and "El FSE invierte en tu futuro" (to C.M.), grants BFU2017-85693-R and PID2021-125386NB-I00 funded by MCIN/AEI/10.13039/501100011033/ and FEDER "Una manera de hacer Europa" (to C.M.), Fundacio CELLEX (Barcelona), Fundacio Mir-Puig and the Generalitat de Catalunya through the CERCA program and AGAUR (Grant No. 2017SGR1000 to M.F.G.-P. and 2017SGR940 to C.M.).

**Conflict of Interest** The authors declare no conflict of interest.

**Data Availability Statement** The data that support the findings of this study are available from the corresponding author upon reasonable request.
2303.08265
The imprint of clump formation at high redshift. II. The chemistry of the bulge
In Paper I we showed that clumps in high-redshift galaxies, having a high star formation rate density (\Sigma_SFR), produce disks with two tracks in the [Fe/H]-[\alpha/Fe] chemical space, similar to that of the Milky Way's (MW's) thin + thick disks. Here we investigate the effect of clumps on the bulge's chemistry. The chemistry of the MW's bulge is comprised of a single track with two density peaks separated by a trough. We show that the bulge chemistry of an N-body + smoothed particle hydrodynamics clumpy simulation also has a single track. Star formation within the bulge is itself in the high-\Sigma_SFR clumpy mode, which ensures that the bulge's chemical track follows that of the thick disk at low [Fe/H] and then extends to high [Fe/H], where it peaks. The peak at low metallicity instead is comprised of a mixture of in-situ stars and stars accreted via clumps. As a result, the trough between the peaks occurs at the end of the thick disk track. We find that the high-metallicity peak dominates near the mid-plane and declines in relative importance with height, as in the MW. The bulge is already rapidly rotating by the end of the clump epoch, with higher rotation at low [\alpha/Fe]. Thus clumpy star formation is able to simultaneously explain the chemodynamic trends of the MW's bulge, thin + thick disks and the Splash.
Victor P. Debattista, David J. Liddicott, Oscar A. Gonzalez, Leandro Beraldo e Silva, Joao A. S. Amarante, Ilin Lazar, Manuela Zoccali, Elena Valenti, Deanne B. Fisher, Tigran Khachaturyants, David L. Nidever, Thomas R. Quinn, Min Du, Susan Kassin
2023-03-14T22:38:32Z
http://arxiv.org/abs/2303.08265v1
# The imprint of clump formation at high redshift. II. The chemistry of the bulge

###### Abstract

In Paper I we showed that clumps in high-redshift galaxies, having a high star formation rate density (\(\Sigma_{\rm SFR}\)), produce disks with two tracks in the [Fe/H]-[\(\alpha\)/Fe] chemical space, similar to that of the Milky Way's (MW's) thin+thick disks. Here we investigate the effect of clumps on the bulge's chemistry. The chemistry of the MW's bulge is comprised of a single track with two density peaks separated by a trough. We show that the bulge chemistry of an \(N\)-body+smoothed particle hydrodynamics clumpy simulation also has a single track. Star formation within the bulge is itself in the high-\(\Sigma_{\rm SFR}\) clumpy mode, which ensures that the bulge's chemical track follows that of the thick disk at low [Fe/H] and then extends to high [Fe/H], where it peaks. The peak at low metallicity instead is comprised of a mixture of in-situ stars and stars accreted via clumps. As a result, the trough between the peaks occurs at the end of the thick disk track. We find that the high-metallicity peak dominates near the mid-plane and declines in relative importance with height, as in the MW. The bulge is already rapidly rotating by the end of the clump epoch, with higher rotation at low [\(\alpha\)/Fe]. Thus clumpy star formation is able to simultaneously explain the chemodynamic trends of the MW's bulge, thin+thick disks and the Splash.

Galactic bulge (2041) -- Milky Way formation (1053) -- Milky Way evolution (1052) -- Milky Way dynamics (1051) -- Galaxy bulges (578)

## 1 Introduction

Surveys such as ARGOS (Freeman et al., 2013), GIBS (Zoccali et al., 2014) and APOGEE (Majewski et al., 2016) have mapped the chemistry across the bulge (e.g. Ness et al., 2013; Gonzalez et al., 2015; Zoccali et al., 2017; Queiroz et al., 2021), generally finding that its [Fe/H]-[\(\alpha\)/Fe] plane exhibits a single track, with two peaks and a trough between them. In contrast, in the Solar Neighborhood, two tracks1 are evident: at fixed [Fe/H], a high-[\(\alpha\)/Fe] track corresponds to the thick disk and a low-[\(\alpha\)/Fe] track corresponds to the thin disk. The bulge chemistry follows the thick disk track at low metallicity (Melendez et al., 2008; Bensby et al., 2010; Alves-Brito et al., 2010; Hill et al., 2011; Bensby et al., 2013), but then extends to the most metal-rich thin-disk stars. The location of the knee in the [Fe/H]-[\(\alpha\)/Fe] plane has generally been found to be identical between the bulge and thick disk (Jonsson et al., 2017; Zasowski et al., 2019), with perhaps minor differences (Johnson et al., 2014; Bensby et al., 2017; Schultheis et al., 2017), which may be partly attributed to comparing bulge giants with local thick disk dwarfs. Williams et al. (2016) found bimodalities in the bulge's [Fe/H] and [\(\alpha\)/Fe] in the _Gaia_-ESO data, with the metal-rich stars exhibiting lower velocity dispersions than the metal-poor ones. The advent of the large APOGEE DR17 dataset, and matching data from _Gaia_ Data Release 2 (DR2), have permitted more detailed studies of the bulge chemistry. Lian et al.
(2020) used the bulge's chemistry to model its star formation history (SFH) and concluded that it is comprised of three phases: an early high star formation rate (SFR) phase, which is interrupted by a quenched phase, which produces a gap in the chemistry, followed by a later secular phase of low SFR. Footnote 1: Different authors prefer either the term _tracks_ or _sequences_ to refer to the same thing. Throughout we will refer to _tracks_. The chemistry of the disk(s) differs from these trends. Many explanations have been advanced for the disk \(\alpha\)-bimodality. The "two-infall" model of Chiappini et al. (1997) (see also Chiappini, 2009; Bekki and Tsujimoto, 2011; Tsujimoto and Bekki, 2012; Grisoni et al., 2017; Khoperskov et al., 2021; Spitoni et al., 2021) suggests that a high SFR episode formed the high-\(\alpha\) track, followed, around 8 Gyr ago, by a drop in the SFR and then the infall of pristine gas that diluted the overall metallicity of the MW, giving rise to the low-\(\alpha\) population. Recent work has focused on forming multiple chemical tracks via some variant of accretion events (Snaith et al., 2016; Grand et al., 2017; Mackereth et al., 2018; Buck, 2020), including those of stars born out of the plane of the disk (Agertz et al., 2021). In Clarke et al. (2019, hereafter Paper I) we presented a simulation of an isolated galaxy that produced a disk chemical dichotomy similar to the MW's chemical thin+thick disks. At early times (largely over the first 2 Gyr, but continuing to 4 Gyr at a lower rate) the model develops clumps with high SFR densities, \(\Sigma_{\rm SFR}\). The masses and SFRs of the clumps in this model are comparable to those observed in high-redshift galaxies (e.g. Guo et al., 2015; Dessauges-Zavadsky et al., 2017; Guo et al., 2018; Cava et al., 2018; Huertas-Company et al., 2020). The clumps represent a second mode of star formation, separate from the usual distributed star formation, with high \(\Sigma_{\rm SFR}\), leading to two tracks in the chemical, [Fe/H]-[\(\alpha\)/Fe], plane. The rate of clump formation declines rapidly as the gas fraction drops, thereby resembling the two-infall model. In agreement with Bournaud et al. (2009), Paper I showed that clumps produce a geometric thick disk. The chemical and geometric properties of the thick disk formed this way are consistent with those of the MW (Beraldo e Silva et al., 2020). Moreover, Amarante et al. (2020) showed that the resulting low angular momentum tail of the old stars is consistent with the "Splash" population in the MW (Di Matteo et al., 2019; Belokurov et al., 2020). Paper I showed that some of the clumps sink to the center of the galaxy, where they contribute to the formation of a bulge. While definitively determining if clumps are long lived enough to build bulges is challenging due to observational systematics (see, for instance, the discussion in Bournaud et al., 2014), observations of the stellar populations (e.g. Guo et al., 2018; Lenkic et al., 2021) and gradients of clump mass (Huertas-Company et al., 2020; Ambachew et al., 2022) suggest that at least some fraction of clumps likely do survive long enough to fall into the bulge. The chemistry of bulges formed with a significant contribution from clumps has not been studied extensively in the literature, despite frequent suggestions that bulges, including the MW's, may be partly built from clumps (e.g. Nataf, 2017; Queiroz et al., 2021). Interestingly, Immeli et al. 
(2004) found a bimodal distribution of [Mg/Fe] within the bulge of their clumpy chemodynamical model. Inoue and Saitoh (2012) found a metal-rich bulge formed from clumps but did not study the chemistry in greater detail. Therefore in this paper we study the consequences of star formation in a clumpy mode on the chemistry of the bulge. The paper is organized as follows. Section 2 presents the simulations used in this paper. The chemistry, star formation, kinematics, and spatial variation of the model bulges are presented in Section 3. We discuss our results, and give a brief summary of the main results, in Section 4.

## 2 The simulations

We use the clumpy simulation of Paper I, as well as a control simulation that fails to produce long-lived clumps; both models are described in Beraldo e Silva et al. (2020). The subgrid physics of the two models differs only in the strength of the feedback employed. Both models are evolved from the same initial conditions, comprised of a cospatial hot gas corona and dark matter halo with Navarro-Frenk-White (Navarro et al., 1997) profiles. The dark matter halo has a virial mass of \(10^{12}\,\mathrm{M_{\odot}}\) and a virial radius \(r_{200}\simeq 200\,\mathrm{kpc}\). The gas corona, which constitutes 10% of the mass within the virial radius, starts with spin \(\lambda=0.065\) (Bullock et al., 2001), and as it cools, via metal-line cooling (Shen et al., 2010), it settles into a disk. Stars form from dense gas (density \(>1\,\mathrm{cm^{-3}}\)) when the temperature drops below 15,000 K and the flow is convergent. Gas particles are prevented from collapsing below the resolution limit by a pressure floor \(p_{floor}=3G\epsilon^{2}\rho^{2}\), where \(G\) is Newton's gravitational constant, \(\epsilon\) is the softening length, set at 50 pc, and \(\rho\) is the gas particle's density (Agertz et al., 2009). Feedback via Type Ia and Type II supernovae uses the blastwave prescription of Stinson et al. (2006). In the clumpy model, we couple 10% of the \(10^{51}\) erg per supernova to the interstellar medium as thermal energy. In contrast, in the high-feedback model, 80% of the feedback energy is coupled to the gas. As shown in previous studies (Hopkins et al., 2012; Genel et al., 2012; Buck et al., 2017; Oklopcic et al., 2017), high feedback coupling inhibits the clumps, and Beraldo e Silva et al. (2020) show that in that case the geometric properties of the disk(s) do not resemble those of the MW. Feedback via asymptotic giant branch stars is also included. Gas chemical and thermal diffusion uses the method of Shen et al. (2010). We evolve the models in isolation using a smoothed particle hydrodynamics + \(N\)-body tree-code based on gasoline (Wadsley et al., 2004). The initial models are comprised of \(10^{6}\) particles in both the dark matter and gas components; both models form \(\sim 2\times 10^{6}\) stars. The clumpy model forms clumps during the first 2 Gyr, continuing at a lower rate to 4 Gyr, as shown in Paper I. The final disk galaxy has a rotational velocity of \(242\,\mathrm{km\,s^{-1}}\) at the Solar Neighborhood, making it comparable to the MW (see fig. 2 of Paper I). The high-feedback model evolves without forming any significant long-lived clumps. Henceforth we refer to the two models as the clumpy and high-feedback models. Neither of these two models forms a bar. The formation of a bar quenches star formation within most of the body of the bar (e.g. Khoperskov et al., 2018).
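For concreteness, the star formation eligibility criteria and pressure floor quoted above can be summarized in a short schematic sketch. This is a minimal illustration, not the gasoline implementation: the function names are ours, constants are in cgs units, and the conversion from hydrogen number density to mass density assumes a pure-hydrogen gas.

```python
import numpy as np

G = 6.674e-8           # Newton's constant [cm^3 g^-1 s^-2]
EPS = 50 * 3.086e18    # softening length: 50 pc in cm
M_P = 1.673e-24        # proton mass [g]

def star_forming(n_h, temp, div_v):
    """Eligibility mask for star formation: dense (n > 1 cm^-3),
    cool (T < 15,000 K) gas in a convergent flow (div v < 0)."""
    return (n_h > 1.0) & (temp < 1.5e4) & (div_v < 0.0)

def pressure_floor(n_h):
    """p_floor = 3 G eps^2 rho^2 (Agertz et al. 2009); rho is
    approximated as n_h * m_p under the pure-hydrogen assumption."""
    rho = n_h * M_P
    return 3.0 * G * EPS**2 * rho**2
```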
In order to compare with the MW, we assume that the MW's bar formed at \(t=6\) Gyr (which would make it \(\sim 8\) Gyr old now). ## 3 Bulge stellar populations ### The chemistry of the bulge The top left panel of Fig. 1 presents the chemistry of the stars within a galactocentric radius \(R=1\) kpc at 10 Gyr in the clumpy model. As in Paper I, we apply Gaussian measurement uncertainties of \(\sigma_{\mathrm{[Fe/H]}}=0.1\) and \(\sigma_{\mathrm{[O/Fe]}}=0.03\) to mimic the measurement errors in APOGEE (Nidever et al., 2014). The chemical space has a single track, with the density peaked at two locations: one metal-rich at \(\mathrm{[Fe/H]}\simeq 0.55\) and a broader metal-poor peak at \(\mathrm{[Fe/H]}\simeq-0.1\). The bottom left panel of Fig. 1 presents the chemistry of the clumpy model's thin+thick disks at \(R>5\) kpc, and compares this with the chemistry of the model's bulge (the red points represent a random selection of 1000 bulge particles). The chemistry of the bulge follows that of the thick disk at \(\mathrm{[Fe/H]}\lesssim 0\), and then continues to more metal-rich than the thin disk. The MW's bulge exhibits the same trend (e.g. Melendez et al., 2008; Bensby et al., 2010; Alves-Brito et al., 2010; Hill et al., 2011; Bensby et al., 2013; Lian et al., 2020). We have verified that the trends in Fig. 1 are already in place by \(t=6\) Gyr. The right panels of Fig. 1 present the chemistry of the high-feedback model. A number of important differences between the clumpy and high-feedback models are evident. The first difference is that the track of the bulge in chemical space no longer has two peaks. Instead the bulge has a single sharp peak at \(\mathrm{[Fe/H]}\simeq 0.6\) with a long tail to lower metallicities. Moreover, this model does not have a bimodal chemical distribution in the disk (see also Beraldo e Silva et al., 2020), which happens because the high-\(\alpha\) stars form only via the clumpy star formation mode in these simulations. As a consequence, the bulge chemical distribution is offset vertically in \(\mathrm{[O/Fe]}\) relative to the disk. While the bulge has a high SFR and can therefore reach a high \(\mathrm{[O/Fe]}\), this is not the case in the disk, and the bulge ends up more \(\alpha\)-rich than the disk. The lack of a trough in the bulge's chemistry and the difference between the bulge's peak \(\alpha\) and that of the disk are different from the trends observed in the MW. In spite of these differences in chemical space, the overall SFH of these two models is very similar, as seen in Fig. 2. The main difference is at early times, when the presence of the clumps briefly raises the overall peak SFR by \(\sim 20\%\). In the high-feedback model, these clumps are short-lived (Genel et al., 2012; Hopkins et al., 2012; Buck et al., 2017; Oklopcic et al., 2017), and the SFR is therefore briefly lower. ### Evolution of the bulge's chemistry The fact that the clumpy model's bulge chemistry has a single, double-peaked track that matches that of the thick disk at \([\mathrm{Fe/H}]\lesssim 0\) is strikingly similar to what is observed in the MW. Understanding this trend therefore can help unravel the formation of the MW's bulge. Thus we next explore the evolution of the bulge chemistry to understand how clumpy star formation produces these trends. Fig. 3 shows the chemical evolution of the bulge inside \(R=1\,\mathrm{kpc}\) for both models. We show the MDF and the \(\alpha\) distribution function (\(\alpha\)DF) for all bulge stars formed up to 2, 4, 6, 8 and 10 Gyr. 
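To make the smoothing procedure of the previous subsection concrete, the following is a minimal sketch of how simulated abundances can be convolved with APOGEE-like errors and binned for maps such as Fig. 1. This is an illustrative reconstruction, not the authors' analysis code; the array and function names are ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def mock_apogee(feh, ofe, sig_feh=0.1, sig_ofe=0.03):
    """Add Gaussian scatter to simulated [Fe/H] and [O/Fe] to mimic
    APOGEE-like measurement uncertainties (values quoted in Sec. 3.1)."""
    return (feh + rng.normal(0.0, sig_feh, feh.size),
            ofe + rng.normal(0.0, sig_ofe, ofe.size))

def chemical_map(feh, ofe, nbins=80, min_count=100):
    """2D histogram of the [Fe/H]-[O/Fe] plane, suppressing bins with
    fewer than min_count stars, as in Fig. 1."""
    H, feh_edges, ofe_edges = np.histogram2d(feh, ofe, bins=nbins)
    H[H < min_count] = np.nan  # blank poorly populated bins
    return H, feh_edges, ofe_edges
```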
The clumpy model, at \(t=4\,\mathrm{Gyr}\), when clump formation fully ceases, has a bulge MDF which is bimodal (top left panel), with a low-metallicity peak at \([\mathrm{Fe/H}]\simeq-0.1\) and a small peak at \([\mathrm{Fe/H}]\simeq 0.4\). The high-metallicity peak grows in importance as subsequent in-situ star formation adds a population of high-metallicity stars. The trough between the two peaks falls at \([\mathrm{Fe/H}]\simeq 0.25\). In the MW's bulge, the metallicity of the trough varies with position in the range \([\mathrm{Fe/H}]\sim-0.2\) to 0.2 (Zoccali et al., 2017). After \(t=4\) Gyr, the \(\alpha\)DF of the clumpy model (bottom left panel) has a fixed peak at high \([\mathrm{O/Fe}]\) (at \(\approx 0\), but we caution that \([\mathrm{O/Fe}]\) values often have significant offsets in simulations compared to observations, as we also found in Paper I). At \(t=4\,\mathrm{Gyr}\), the \(\alpha\)DF has a point of inflection at low \([\mathrm{O/Fe}]\), where a pronounced second peak later develops. A double-peaked \(\alpha\)DF is similarly present in the MW's bulge (e.g. Lian et al., 2020). In contrast, the chemical evolution of the high-feedback model (right panels) results in only a single peak in the bulge's MDF, and only a weak double peak in the bulge \(\alpha\)DF. At best a weak trough is visible in the chemistry of its bulge. The two models differ at the low-\([\mathrm{Fe/H}]\) peak (_i.e._ at the high-\([\mathrm{O/Fe}]\) peak), which must therefore be where clump formation plays an important role in one model and is absent from the other. Small differences between the clumpy and the high-feedback models are already present at 2 Gyr, when, as Fig. 2 shows, the global SFRs of the two models differ the most. At 2 Gyr the MDF of the clumpy bulge has
At the same metallicities as the low-[Fe/H] peak of the clumpy model, the bulge of the high-feedback model barely changes during this time. In the high-feedback bulge, the \(\alpha\)DF begins to develop a peak at low [\(\alpha\)/Fe], while in the clumpy bulge the low-[\(\alpha\)/Fe] peak has not yet started to be visible, but the high-\(\alpha\) peak continues to grow while shifting to lower [\(\alpha\)/Fe]. As we show below, the driver of these differences is the infall of clumps into the bulge of the clumpy model between 2 and 4 Gyr. After 4 Gyr, when no further clumps form in the disk of the clumpy model, the chemical evolution of the two bulges proceeds very similarly, with an increasing numbers of stars at the high-[Fe/H], low-[\(\alpha\)/Fe] peaks. During this time, the clumpy model develops a second peak at low [\(\alpha\)/Fe], which had formed earlier in the high-feedback model. In summary, it is not the differences in their SFRs that give rise to the different chemistries of the two bulges, but the infall of clumps onto the bulge of the clumpy model, which drives the continued growth of the low-[Fe/H] peak in its chemistry. ### Formation location Paper I showed that the clumps in the clumpy simulation often fall to the center. If clumps are disrupted before they can reach the bulge, then they may play a less prominent role in the formation of the bulge. We therefore consider the formation location of bulge stars to test the effect of the infalling clumps on the chemistry of the bulge. Figure 2: The overall star formation history of the clumpy and high-feedback models. The top left panel of Fig. 4 shows the distribution of the formation radius, \(\langle R_{\rm form}\rangle\), in the chemical space. An important conclusion from this plot is the different origins of the two MDF peaks. The stars at the low-[Fe/H] peak in the chemical track have large \(\langle R_{\rm form}\rangle\), indicating that many of them are forming outside the bulge and reaching it via clumps. The high-[Fe/H] peak instead is produced by in-situ2 star formation (as in the high-feedback model, seen in the top right panel of Fig. 3). The bottom left panel of Fig. 4 shows the fraction of bulge stars that are ex-situ. This is high at intermediate [Fe/H] and low at high [Fe/H], closely mirroring the top left panel. Footnote 2: Here we use the terms in situ and ex situ to refer to formation inside or outside the bulge, but within the galaxy. When we plot the distribution of stars in the bulge's chemical space excluding those stars that formed outside \(R_{\rm form}=2\) kpc, which we do in the top right panel of Fig. 4, we find that the low-[Fe/H] peak is substantially reduced, with the track resembling somewhat the distribution of the high-feedback model in Fig. 1. (The small peaks remaining after this subtraction are caused by stars formed in clumps that are still star forming inside \(R=2\) kpc.) This explains why the bulge chemistry at low [Fe/H] is such a good match to the chemistry of the thick disk: _many of these stars share a similar origin_. Figure 3: The evolution of the MDF (top) and \(\alpha\)DF (bottom) of stars within the bulge (\(R\leq 1\) kpc) between \(t=2\) Gyr and \(t=10\) Gyr. At left is the clumpy model, and at right is the high-feedback one. In the clumpy model, a bimodality is present in the MDF at \(t=10\) Gyr with a broad, low peak at [Fe/H] \(\sim-0.1\) and a narrow, high peak at [Fe/H] \(\sim 0.5\). 
The bimodality is already evident, although weaker, at \(t=2\) Gyr, when clump formation has started to die down, and is well established at 4 Gyr. A bimodality is also present in the \(\alpha\)DF at \(t=10\) Gyr, with a broad, low peak at [O/Fe] \(\sim 0\) and a narrow, high peak at [O/Fe] \(\sim-0.3\). This bimodality is significantly weaker and/or absent at \(t=4\) Gyr. In the high-feedback model, instead, only a single peak develops in the MDF although the \(\alpha\)DF still has a weak second peak. All distributions have been normalized to the corresponding peak at 10 Gyr. In the top row, the vertical dotted lines indicate the regions around the peaks where we define MDF peaks discussed in Section 3.5. The bottom right panel of Fig. 4 shows the distribution of those stars excluded from the second panel, _i.e._ the stars that end within the bulge that formed at \(R_{\rm form}>2\) kpc. This shows that the bulk of these ex-situ stars arriving within clumps settle along the bulge track, with their highest density at the location of the low-metallicity peak. A number of additional conclusions can be drawn from the top left panel of Fig. 4. First is the fact that clumps bring with them a small population of low-\(\alpha\) stars, which settle below the low-[Fe/H] peak around [Fe/H] \(\sim-0.4\) and [O/Fe] \(\sim-0.1\). As shown in Paper I (in figures 15 and 17), some of the stars formed in clumps have low [\(\alpha\)/Fe]. This population of stars is relatively small and does not contaminate the chemical distribution significantly. The second point is that, to a large extent, the chemistry of the bulge, at the high- and low-metallicity ends, is dominated by stars formed in situ, and is contaminated by clumps only at \(-0.5\lesssim{\rm[Fe/H]}\lesssim 0.0\). ### The link between the single track and the star formation mode In the left panels of Fig. 5, we plot the density of stars in the space of final versus formation radii (\(R_{\rm final}\) versus \(R_{\rm form}\)). The stars that form during the clump epoch, \(t_{\rm form}<4\) Gyr, and that end within the inner 1 kpc (top left panel) form at a range of radii, including a significant contribution forming in-situ. For the stars formed after the clump epoch, \(4\leq t_{\rm form}/\,{\rm Gyr}\leq 6\), (bottom left panel) star formation occurs in situ, resulting in the diagonal distribution in the \(R_{\rm final}\)-\(R_{\rm form}\) space. In the absence of clumps, stars only reach the bulge from farther out via eccentric orbits; the bottom left panel shows that the fraction of such stars is low. Figure 4: Top left: distribution of \(\langle R_{\rm form}\rangle\) in the chemical space of stars formed in the first 4 Gyr that end within the inner 1 kpc of the clumpy model. The accreted clumps are responsible for the low-[Fe/H] peak while in-situ star formation produces the high-[Fe/H] peak. The contours indicate the density of particles; the 5 contour levels span a factor of 10. Bottom left: the fraction of ex-situ stars (those with \(R_{\rm form}>2\) kpc) that end up in the bulge (\(R_{\rm final}<1\) kpc). Top right: the in situ bulge, showing the distribution of stars contained within \(R_{\rm final}<1\) kpc when stars with \(R_{\rm form}>2\) kpc are excluded. While the distribution is not completely smooth, no prominent peak at low-[Fe/H] is evident. Bottom right: the ex situ bulge, defined as those stars within \(R_{\rm final}<1\) kpc with \(R_{\rm form}>2\) kpc. 
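The in-situ/ex-situ decomposition underlying Fig. 4 is straightforward to express in code. The following is a minimal sketch, not the authors' actual analysis scripts; the array names are ours and a NumPy environment is assumed:

```python
import numpy as np

def exsitu_fraction(feh, ofe, r_form, r_final, nbins=40):
    """Fraction of bulge stars (R_final < 1 kpc) that formed ex situ
    (R_form > 2 kpc) in each [Fe/H]-[O/Fe] bin, mirroring the bottom
    left panel of Fig. 4. Radii are in kpc."""
    bulge = r_final < 1.0
    exsitu = bulge & (r_form > 2.0)
    n_all, feh_e, ofe_e = np.histogram2d(feh[bulge], ofe[bulge], bins=nbins)
    n_ex, _, _ = np.histogram2d(feh[exsitu], ofe[exsitu], bins=(feh_e, ofe_e))
    with np.errstate(invalid="ignore"):
        frac = n_ex / n_all  # NaN where a bin contains no bulge stars
    return frac, feh_e, ofe_e
```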
The stars in the bulge therefore are a mix of those formed in situ and those accreted in clumps. Paper I showed that there are two modes of star formation: a high \(\Sigma_{\rm SFR}\) and a low \(\Sigma_{\rm SFR}\) one. Clumps are associated with high \(\Sigma_{\rm SFR}\) (\(\Sigma_{\rm SFR}\gtrsim 1\) M\({}_{\odot}\) yr\({}^{-1}\) kpc\({}^{-2}\)) while lower \(\Sigma_{\rm SFR}\) is typical of distributed (nonclumpy) star formation (see Figure 15 of Paper I). The right panels of Fig. 5 show the distribution of \(\langle\Sigma_{\rm SFR}\rangle\) in the same \(R_{\rm final}\) versus \(R_{\rm form}\) space. For stars with \(t_{\rm form}<4\) Gyr (top right panel), the high \(\langle\Sigma_{\rm SFR}\rangle\) at \(R_{\rm final}<0.5\) kpc is produced by the full range of \(R_{\rm form}\), which therefore must include stars formed in clumps that have fallen in, as well as those formed in situ. This is made clearer by comparing with stars that form after 4 Gyr (bottom right panel) when clump formation has ceased; now bulge stars have relatively low \(\langle\Sigma_{\rm SFR}\rangle\simeq 3\) M\({}_{\odot}\) yr\({}^{-1}\) kpc\({}^{-2}\) (except at the very center), and \(R_{\rm form}\simeq R_{\rm final}\). These stars clearly are forming in situ rather than falling in as clumps. Figure 5: The number of stars (left column) and \(\langle\Sigma_{\rm SFR}\rangle\) in the \(R_{\rm form}\)-\(R_{\rm final}\) space for stars at the center of the clumpy model. The top row shows the distributions for \(t_{\rm form}\leq 4\) Gyr while the bottom row is for \(4<t_{\rm form}/\,{\rm Gyr}\leq 6\). The diagonal structure in the bottom panels shows predominantly in-situ star formation after \(t_{\rm form}=4\) Gyr, whereas the upper panels show a significant population of bulge stars brought in by clumps. (Note the different scales on the two axes. The diagonal dashed green line indicates \(R_{\rm form}=R_{\rm final}\).) Figure 6: Star formation modes at the center of the clumpy model for stars with \(t_{\rm form}\leq 4\,\)Gyr. Top: The normalized distribution of the star formation rate density for different final radii. Middle: The cumulative formation radius of stars that end up at different radii. Bottom: The normalized distribution of star formation rate density for different formation radii. In the top and bottom panels, the black histograms refer to all the stars within the model. All distributions use kernel density estimates (KDEs) with a Gaussian kernel and window width satisfying Silverman’s rule (Silverman, 1986). Therefore stars born outside the bulge in the clumpy high-\(\Sigma_{\rm SFR}\) mode are reaching the bulge. In the top panel of Fig. 6 we plot \(\Sigma_{\rm SFR}\) for stars forming before 4 Gyr that end at different radii within the inner galaxy. From \(R_{\rm final}\leq 2\) kpc (red curve) to \(R_{\rm final}\leq 1\) kpc (green curve), the contribution of the high-\(\Sigma_{\rm SFR}\) mode of star formation rises, and overwhelmingly dominates at \(R_{\rm final}<0.5\) kpc (blue curve). This would seem to imply that infalling clumps dominate the inner galaxy. However, the middle panel of Fig. 6, which shows the cumulative distribution of \(R_{\rm form}\) for stars at different \(R_{\rm final}\), shows that a significant in-situ population is also present. Indeed the fraction of stars that formed within \(R_{\rm form}=2\) kpc _rises_ as \(R_{\rm final}\) decreases, although it never exceeds \(\sim 60\%\). Roughly half the stars that end up at \(R_{\rm final}\leq 2\) kpc were born outside this region. 
Thus clumps are delivering a significant fraction of the bulge's mass, but in-situ star formation is equally important. Why then does the in-situ star formation not produce a separate track in the bulge chemistry like the disk's low-[\(\alpha\)/Fe] track? The bottom panel of Fig. 6 plots \(\langle\Sigma_{\rm SFR}\rangle\) for stars that were born within a given radius by \(t_{\rm form}=4\) Gyr. The vast majority of early stars formed within 2 kpc formed via the high-\(\Sigma_{\rm SFR}\) mode. Therefore, the early bulge itself acts as a clump of high \(\Sigma_{\rm SFR}\), as first shown by Mandelker et al. (2014). Thus the bulge never gets to form a low-\(\alpha\) track: even in the absence of clumps falling into the bulge, for instance because they are disrupted before they reach the center, the bulge chemistry will still lack metal-poor low-\(\alpha\) stars. Indeed even the high-feedback model has only a single track in the bulge, and is \(\alpha\)-rich (compared with the disk), as can be seen in the top right panel of Fig. 1. ### Bulge ages and quenching If star formation continues in the bulge after the clump epoch ends, then the younger stars will necessarily be at the high-[Fe/H] peak. The high-[Fe/H] peak then would be younger, on average, than the low-[Fe/H] peak. In the MW, the difference in mean age between the two peaks would be governed by when the bar forms, because bars generally quench star formation within most of their radius (including the vertically thickened part that forms the bulge). Fig. 7 shows the SFH up to 6 Gyr for the stars that end within \(R=1\) kpc (we have checked that the result does not change qualitatively if we consider stars inside \(R=2\) kpc). This shows that the high-metallicity population ([Fe/H] \(\geq 0.25\)) overlaps in age with the low-[Fe/H] population. However no new stars with low-[Fe/H] form after \(\sim 4\) Gyr. Lian et al. (2020) interpreted the trough between the two peaks in the MW bulge's chemical track as an episode of quenching in its SFH, before star formation restarted and produced the high-[Fe/H] peak. The top panel of Fig. 7 shows that the star formation in the clumpy model's bulge never drops to zero, although its chemical track in Fig. 1 develops a trough between the two peaks. The bottom panel of Fig. 7 presents the SFH of the high-feedback model. Despite the similarity in the SFH of the two models, only the clumpy model develops a trough between two peaks in the bulge chemical track (as can be seen in Figure 1), suggesting that the SFH need not be responsible for the trough. The evolution of the bulge MDF in the clumpy model, seen in Fig. 3, shows that the bulge reaches the high-[Fe/H] regime already by 2 Gyr, and is bimodal already by that point. The bimodality increases at later times, particularly after 4 Gyr, but the trough is present before the clumpy episode is over. Thus the high-[Fe/H] peak contains old stars and represents the ordinary chemical evolution of a rapidly star-forming system. If we understand the [Fe/H]-enrichment as developing smoothly, then stars in the trough will be, on average, slightly older than those in the high-[Fe/H] peak. The stars at the low-[Fe/H] peak, because they are a mix of in-situ stars and stars accreted via clumps, represent a range of ages, from older than the trough (from the in-situ evolution) to stars younger than the old stars in the high-[Fe/H] peak (from the later stages of clump accretion). Fig. 
8 shows the distribution of ages at the two peaks, within the [Fe/H] limits indicated by the vertical dotted lines in the MDFs of Fig. 3. At \(2\lesssim t_{\rm form}/\) Gyr \(\lesssim 4\), the ages of stars at the high-[Fe/H] peak significantly overlap those of the youngest stars at the low-[Fe/H] peak. We show, as dashed lines, the age distributions for the high-feedback model in the same metallicity ranges. While the age distribution at the high-[Fe/H] peak is comparable to that of the clumpy model, the region where the low-[Fe/H] peak would be has predominantly older stars and only overlaps the high-[Fe/H] peak's ages in the exponential wing of the distribution. We conclude that the trough between the high- and low-[Fe/H] peaks in the clumpy model is not due to a quenching of in-situ star formation but rather due to the end of clumps delivering stars to the low-[Fe/H] peak of the bulge. A possible diagnostic of this scenario is the age distribution of the low-[Fe/H] peak compared with that in the trough: stars at the low-[Fe/H] peak should include younger stars than those in the trough. We explore this prediction for the clumpy model in the top panel of Fig. 9, where we plot the mean time of formation, \(\langle t_{\rm form}\rangle\), of stars in the chemical space of the bulge stars formed by \(t=6\) Gyr. _Along the ridge_ of the chemical track from metal-poor to metal-rich, we reach a local maximum in \(\langle t_{\rm form}\rangle\) at the location of the low-[Fe/H] peak, while the high-[Fe/H] peak is the location of late star formation and has the largest \(\langle t_{\rm form}\rangle\) (_i.e._, youngest stars). In between, at the trough, \(\langle t_{\rm form}\rangle\) has a local minimum, meaning the stars in this region are older. Observationally, this dip gives the appearance of a drop in the SFR of the bulge and thus resembles a quenching episode. However this is clearly not the case in the evolution of the clumpy model. In contrast, the bottom panel shows the mean age of the high-feedback model, which shows that the mean age increases monotonically along the ridge in this case. A final noteworthy property of the clumpy bulge's chemistry is that the trough occurs just beyond the highest metallicity of the thick disk track (see Fig. 1). This happens because the trough is not significantly polluted by stars formed in the same clumps that produced the thick disk. Figure 7: Star formation history to 6 Gyr, _i.e._, 2 Gyr after the end of the clump era in the clumpy model, for stars that end up within \(R_{\rm final}<1\) kpc, separated by metallicity. The full distribution is shown in black. The separation into metal-rich and metal-poor is at [Fe/H] \(=0.25\), which marks the minimum between the two peaks in the MDF of the clumpy model. The stars in the low-[Fe/H] and high-[Fe/H] populations are shown in blue and red, respectively. The top panel shows the clumpy model while the bottom panel shows the high-feedback model split at the same metallicity. The overall similarity of the SFHs of the two models suggests that quenching is not responsible for the trough in the chemical space of the clumpy model, seen in the top left panel of Fig. 1. ### Dependence of kinematics on chemistry The top panel of Fig. 10 shows profiles of the radial velocity dispersion, \(\sigma_{R}\), of stars at \(t=4\) Gyr separated into [O/Fe] bins. The high-[O/Fe] stars are generally hotter, by \(20-30\)\(\rm\ km\,s^{-1}\), than the low-[O/Fe] stars even just at the end of the clumpy epoch. 
This reflects the chaotic interaction of clumps near the center of the galaxy, which heats the high-[O/Fe] populations at birth. The bottom panel of Fig. 10 shows profiles of the mean streaming velocity, \(\langle V_{\phi}\rangle\); although clumps are falling to the center, the bulge remains rotationally supported, because the clumps are on in-plane, prograde orbits, which are known to produce rapidly rotating remnants even when the resulting mergers are collisionless (e.g. Read et al., 2008; Hartmann et al., 2011). In cosmological simulations, Inoue and Saitoh (2012) also found rapidly rotating bulges forming from clumps. The low-[O/Fe] stars are more rapidly rotating, by \(\sim 50-100\,\rm km\,s^{-1}\), as expected given their lower velocity dispersion. We measure the actions of stars using agama (Vasiliev, 2019), which uses the Stäckel fudge of Binney (2012), assuming a flattened axisymmetric potential for the disk and a spherical potential for the halo. Fig. 11 shows the mean radial action, \(\langle J_{R}\rangle\), in the chemical space, for stars in the inner 1 kpc at 6 Gyr. Bearing in mind that Debattista et al. (2020) found that bar formation substantially steepens the vertical gradient of \(\langle J_{R}\rangle\), we anticipate that, had a bar formed, stars at the high-[Fe/H] peak would dominate near the mid-plane while large heights would be dominated by the low-[Fe/H] peak.

### Comparison with the Milky Way

Figure 8: The SFH of stars centered at the clumpy model's two MDF peaks within the inner 1 kpc. The clumpy model is shown by the solid lines while the high-feedback model at the same [Fe/H] ranges is shown by the dashed lines. The two metallicity ranges selected are indicated by the vertical dotted lines in the top row of Fig. 3.

Figure 9: The distribution of \(\langle t_{\rm form}\rangle\) in the chemical space of stars formed in the first 6 Gyr that end within the inner 1 kpc of the clumpy (top) and high-feedback (bottom) models. The location of the trough in the chemical space of the clumpy model corresponds to a local minimum in \(\langle t_{\rm form}\rangle\). The contours indicate the density of particles; the 5 contour levels span a factor of 10.

Despite the absence of a bar in the clumpy model, we can compare the vertical distribution of the MDF with the MW's. In order to do this, we use the model at 6 Gyr, assuming that the bulge is quenched by bar formation at this time. We apply a coordinate transformation of the model's Cartesian coordinates to Galactic coordinates, \((l,b,d)\), after placing the Galactic center at 8 kpc from the Sun. We select particles across constant longitude stripes (\(-6.5^{\circ}<l<6.5^{\circ}\)) at different latitudes (\(1.5^{\circ}<|b|<2.5^{\circ}\) and \(5.5^{\circ}<|b|<6.5^{\circ}\)), restricted to distances \(7<d/\,{\rm kpc}<9\). This represents a typical spectroscopic selection of giant stars in the MW bulge (e.g. Wylie et al., 2021) with which variations as a function of latitude are studied. Fig. 12 shows the resulting distribution of the selected stars in chemical space; these display two over-densities that change their relative contributions as a function of Galactic latitude, as in the MW. The chemical track in Fig. 12 is comprised of a sequence of [Fe/H]-poor stars, followed by a trough and then a shorter sequence of [Fe/H]-rich stars whose relative contribution decreases with increasing height from the plane.
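For concreteness, the geometric selection just described can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' code; the axis convention (Sun on the negative \(x\)-axis) and the function names are our assumptions:

```python
import numpy as np

def galactic_lbd(x, y, z, r_sun=8.0):
    """Galactocentric Cartesian positions [kpc] -> heliocentric Galactic
    (l, b) in degrees and distance d [kpc], with the Sun at (-r_sun, 0, 0)."""
    dx, dy, dz = x + r_sun, y, z
    d = np.sqrt(dx**2 + dy**2 + dz**2)
    l = np.degrees(np.arctan2(dy, dx))
    b = np.degrees(np.arcsin(dz / d))
    return l, b, d

def bulge_stripe(l, b, d, b0):
    """Selection used above: |l| < 6.5 deg, a 1-deg-wide latitude stripe
    centered on |b| = b0, and heliocentric distances 7 < d/kpc < 9."""
    return (np.abs(l) < 6.5) & (np.abs(np.abs(b) - b0) < 0.5) & (d > 7) & (d < 9)
```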
Without any scaling applied to the simulation, the [Fe/H]-rich population is no longer present at a latitude of \(b=6^{\circ}\). Since the simulated galaxy has not formed a bar, the detailed properties of the two populations and their spatial variations are not directly comparable to those in the MW. The specific distributions seen in the MW would depend on many details, such as the epoch of bar formation, the vertical thickening of the bar, and the star and clump formation histories. However, the presence of this overall bimodality and trend in the simulation is consistent with the observations of Rojas-Arriagada et al. (2019) and Wylie et al. (2021), based on [Mg/Fe] abundances from APOGEE and ARGOS data. They showed, from a large number of stars, that the [Fe/H] bimodality is produced by a low-\(\alpha\) sequence of stars over a range of \(\sim 0.5\) dex around solar metallicity that merges with the main high-\(\alpha\) sequence. In the simulation, there is a third, much smaller component in Fig. 12 that appears as a lower-[O/Fe] overdensity in the metal-poor regime. This population becomes more important at larger distances from the plane but clearly remains a minor component throughout. This component is comprised of lower-[O/Fe] stars formed in clumps at large radii (see Fig. 4). Their appearance suggests that the clumpy simulation underestimates the evaporation rate of the clumps, possibly because the feedback should be higher. We note, however, that a hint of such a low-[O/Fe], metal-poor population can be seen in Wylie et al. (2021), where an increasing width of the low-[Fe/H] sequence as a function of height is evident in their figure 25, but further studies with higher number statistics are needed to confirm this. Although the simulation has not formed a bar, its trends suggest that the chemical bimodality in the bulge produced by clumps is plausibly able to account for the spatial variations of the chemistry of the MW's bulge.

Figure 10: Radial velocity dispersions, \(\sigma_{R}\), (top) and mean rotational velocity, \(\langle V_{\phi}\rangle\), (bottom) of stars at 4 Gyr, in the inner 2 kpc of the clumpy model, as a function of [O/Fe]. Low-[O/Fe] stars have lower radial velocity dispersions and higher rotation.

## 4 Discussion

The single track in the chemical space of the bulge, _i.e._, the absence of an [\(\alpha\)/Fe] bimodality for a fixed [Fe/H], in the clumpy simulation is similar to that observed in the MW, including the fact that it has two density peaks along the track. Together with the clumpy model's two tracks in the chemical space of the disk (Paper I), this is a striking agreement with the trends seen in the MW, and suggests that the model is capturing a generic behavior. Altogether, these results demonstrate that a holistic view of the chemistry of the entire MW (both bulge and thin+thick disks) provides a more stringent constraint on how the early MW formed (see also Di Matteo, 2016).

### Comparison with other scenarios

We have shown that an episode of star formation in clumps is able to explain the twin peaks in the bulge's single track in the chemical space. The bulge track follows that of the thick disk at low metallicity but then continues to the thin disk and beyond at high metallicity, as observed in the MW. In Paper I and Beraldo e Silva et al.
(2020) we showed that the chemical thick and thin disks produced via clumps have similar properties (scale-lengths and scale-heights, kinematics, and MDFs) as found in the MW.

Figure 11: The mean radial action, \(\langle J_{R}\rangle\), in the chemical space of the clumpy model. The contours indicate the density of particles; the 5 contour levels span a factor of 10.

Figure 12: Gaussian kernel estimate of the density in the [Fe/H]-[O/Fe] chemical space for stars in the clumpy model. Ten equally spaced contours show the density distribution of stars selected to be within the volume bounded by \(-6.5^{\circ}<l<6.5^{\circ}\) and \(7<d/\,{\rm kpc}<9\), in \(1^{\circ}\) stripes at \(b=2^{\circ}\) (bottom) and \(6^{\circ}\) (top). The histogram at the top shows the full distribution of [Fe/H]; the minimum between the two [Fe/H] peaks, indicated by the dotted line in the central panel, splits the distribution into the low- and high-[Fe/H] populations. The [O/Fe] histograms of these two populations are shown at the right for the full (black), low-[Fe/H] (red) and high-[Fe/H] (dotted blue) samples.

Amarante et al. (2020) showed that clumps also produce the relatively metal-rich population that bridges the thick disk and the inner halo, which has been termed 'the Splash' (Di Matteo et al., 2019; Belokurov et al., 2020)3. The clump scenario predicts that the thin and thick disks were forming at the same time (Paper I), which appears to be consistent with the presence of RR Lyrae in the thin disk as well as an age overlap between the chemical thin and thick disks (Beraldo e Silva et al., 2021). Footnote 3: Di Matteo et al. (2019) refer to this feature as "The Plume". Since _Gaia_'s confirmation of the _Gaia_-Sausage-Enceladus (hereafter GSE) (Belokurov et al., 2018; Helmi et al., 2018) merger remnant, the chemodynamics of the early MW have been interpreted as products solely of this merger. Numerous cosmological simulations have indeed shown that disk chemical bimodalities can arise from gas-rich mergers (e.g. Brook et al., 2005; Snaith et al., 2016; Grand et al., 2018; Mackereth et al., 2018; Buck, 2020). Mackereth et al. (2018) found that such outcomes only occur in about 5% of the EAGLE simulation galaxies, but Buck (2020) found them to be more common in the NIHAO simulation suite. Likewise, the Splash has been interpreted as a combination of accreted material and the kinematically heated disk after the GSE merger (Di Matteo et al., 2019; Belokurov et al., 2020; Mackereth et al., 2019; Gallart et al., 2019). The bulge is an important test of the hypothesis that the GSE merger is exclusively responsible for the chemodynamics of the MW, because the merger must simultaneously avoid leaving a classical bulge more massive than \(\sim 8\%\) of the total stellar mass (Shen et al., 2010; Bland-Hawthorn & Gerhard, 2016; Debattista et al., 2017) and produce a bulge chemistry with a single track with two peaks. To date, cosmological simulations that produce the chemical thin and thick disks appear to produce two, or more, tracks in the chemistry of the bulge (e.g. Grand et al., 2018; Buck, 2020). We have shown here that the chemistry of the bulge can largely, and very naturally, be produced by clumpy star formation (including within the bulge itself). Thus most of the chemodynamics of the early MW, excluding the accreted halo, can now be explained by clumps.
However, because the GSE merger certainly happened, it is important to understand to what extent a merger in the presence of clumps is able to explain the details of the MW's chemodynamics. We will be exploring exactly this with project GASTRO (Amarante et al., 2022). Further complicating matters, besides the GSE, there have been suggestions of at least one other equally massive merger in the MW during its early evolution (Massari et al., 2019; Forbes, 2020; Horta et al., 2021). Horta et al. (2021) used APOGEE DR16 and Gaia DR2 to characterize the stars of this merger event, which they called "Heracles"4. They estimated its stellar mass as \(\sim 5\times 10^{8}\) M\({}_{\odot}\), _i.e._ as massive as GSE (see also Kruijssen et al., 2020). The stars associated with Heracles are located at \(R<5\) kpc, and are thus more bound to the Galactic potential than the GSE remnant (but see Lane et al., 2022, for a discussion of whether Heracles could be an artifact of APOGEE's selection function in the \(E-L_{z}\) plane). These stars are also chemically distinct from the GSE (Horta et al., 2021; Naidu et al., 2022) and would imprint as bursts in the SFH of the inner MW (Orkney et al., 2022). Naidu et al. (2022) estimated it was accreted \(\sim 1.7\) Gyr before GSE. Recently, Myeong et al. (2022) argued for an in-situ origin of Heracles. More recently, this population has been interpreted as the first stars that formed in the MW, based on _Gaia_, APOGEE DR17, and H3 survey data (Belokurov & Kravtsov, 2022; Conroy et al., 2022; Rix et al., 2022). Footnote 4: Massari et al. (2019) and Forbes (2020) dubbed this remnant "Kraken" and "Koala", respectively. An alternative, popular model for the formation of the geometric thick disk posits that it formed in situ, already thick, in an "upside-down, inside-out" manner (Bird et al., 2013, 2021). Support for this model includes the short scale-length of the (chemical) thick disk (Bovy et al., 2012; Hayden et al., 2015) and the homogeneity of the high-\(\alpha\) population. This scenario is also supported by the high gas velocity dispersions and star formation rates in high-redshift galaxies (e.g. Kassin et al., 2012; Wisnioski et al., 2015). The lack of flaring in the high-\(\alpha\) populations is also consistent with the upside-down scenario (Bovy et al., 2016; Mackereth et al., 2017; but see also Lian et al., 2022). In general, however, studies of the upside-down formation scenario have provided no explanation for the disk chemical bimodality, or the chemistry of the bulge. An alternative flavor of the upside-down formation scenario is based on misaligned star formation. Meng & Gnedin (2021) showed that, in their cosmological simulations, stars always form in a thin disk, even at \(z>1.5\), and only give the appearance of an upside-down formation because disks tilt rapidly at early times, which leaves the star-forming plane misaligned (warped) with respect to the main disk plane. The subsequent precession of the stars formed off the plane continuously inflates the height of the main disk (see also Khachaturyants et al., 2021). More recently, Tamfal et al. (2022) used a high-resolution (\(\sim 10^{9}\) particles) zoom-in cosmological simulation to show that the disk is already forming thin as early as \(z\sim 7-8\), with no upside-down formation. This rotationally supported disk thickens slowly due to internal instabilities and external perturbations, with stellar accretion from satellites providing the main geometric thick disk.
In a similar vein, Agertz et al. (2021) (see also Renaud et al., 2021a,b) proposed that the origin of the chemical bimodality of the thin+thick disks is due to different chemistry in the inner and outer disks, which accreted their gas from separate filaments. Early rapid star formation and mergers in the inner disk gave rise to the high-\(\alpha\) thick disk population, while star formation in the outer misaligned disk was inhibited by the low density of the gas until the last major merger triggered star formation in the outer disk, which became the metal-poor, low-\(\alpha\) thin disk. The continuing star formation then builds the metal-rich, low-\(\alpha\) thin disk we see today. Renaud et al. (2021) presented the chemistry of this simulation; the bimodal tracks extend to the inner galaxy, contrary to what is observed in the MW. It is unclear whether this outcome can be avoided in this scenario. The classical two-infall scenario of Chiappini et al. (1997) (see also Chiappini, 2009; Bekki and Tsujimoto, 2011; Tsujimoto and Bekki, 2012; Grisoni et al., 2017; Spitoni et al., 2021) proposes that the formation of two sequences in the disk chemistry results from two infall episodes, with high SFR during the first infall, producing the high-\(\alpha\) sequence, followed by a second infall with low SFR, producing the low-\(\alpha\) sequence. As noted in Paper I, the clump model is similar, in terms of SFR, to this model, and the outcome may be indistinguishable. However, our results for the nonclumpy model, which has an SFH not much different from that of the clumpy model, but which fails to form a disk chemical bimodality, are at odds with a pure early high SFR producing a disk chemical bimodality. Clumps produce the chemical bimodality by boosting the star formation rate _density_ by a factor of \(\sim 100\) compared to distributed star formation (Clarke et al., 2019). Khoperskov et al. (2021) presented several simulations which produced a thin+thick disk chemical bimodality, which they attributed to the rapidly dropping SFR, coupled with outflows (see also Vincenzo and Kobayashi, 2020), similar to the two-infall model. The authors also noted that their models undergo a period of clump formation, with comparable clump masses to what we found in Paper I.

### Observational tests

Clumps are observed in more than half of high-redshift MW progenitors (e.g. Elmegreen and Elmegreen, 2005; Ravindranath et al., 2006; Elmegreen et al., 2007; Forster Schreiber et al., 2011; Genzel et al., 2011; Guo et al., 2012, 2015). Observed at high resolution, clumps are found to have sizes of order \(100-500\) pc, average masses of \(\sim 10^{8}\) M\({}_{\odot}\) (Livermore et al., 2012, 2015; Fisher et al., 2017; Cava et al., 2018) and contribute about 7% of the ongoing star formation rate (Wuyts et al., 2012). Aside from the formation of a geometric thick disk (Bournaud et al., 2009; Clarke et al., 2019; Beraldo e Silva et al., 2020), the presence of clumps does not lead to substantial differences in the morphological properties of galaxies. Indeed the cosmological zoom-in simulations of Inoue and Yoshida (2019), with identical initial conditions but varying gas physics, found a strong dependence of clump formation on the equation of state of the gas, but very little effect on the global properties of the galaxies.
The signatures of clumps are therefore primarily chemical, because the masses of the clumps are modest (Livermore et al., 2012; Fisher et al., 2017; Cava et al., 2018; Benincasa et al., 2019), and the clump epoch lasts only a brief time, until the gas mass fraction declines (Cacciato et al., 2012). We showed in Paper I that the clumps that form in the clumpy simulation are comparable to the ones found in high-redshift galaxies and predicted that chemical bimodalities in disks should be common. Using MUSE spectroscopy, Scott et al. (2021) showed that the MW analog UGC 10738 has an \(\alpha\)-rich geometric thick disk, from which they concluded that accretion events are unlikely sources of thick disks. More studies such as this can help establish whether geometric thick disks are \(\alpha\)-enhanced. This will be particularly important for exploring the merger hypothesis, since the merger histories of galaxies are very variable (e.g. Lacey and Cole, 1993; Stewart et al., 2008; Boylan-Kolchin et al., 2010). Upcoming James Webb Space Telescope observations will measure the chemistry of the Andromeda galaxy's disk from resolved stellar spectroscopy. Andromeda is known to have had a much more active merger history than that of the MW (e.g. McConnachie et al., 2010; Weisz et al., 2014; McConnachie et al., 2018; D'Souza and Bell, 2018; Hammer et al., 2018). If clumps played an important role in its chemical evolution, we expect the chemistry of Andromeda's old disks to be comprised of, at least, a high-\(\alpha\) and a low-\(\alpha\) track somewhat similar to the MW's, with possibly additional merger-induced tracks. However more detailed tests must necessarily come from the MW since we can study it in much greater detail than any other galaxy. Understanding to what extent the outcome of the GSE merger is degenerate with the clump scenario is an important ingredient in unravelling the formation of the MW. A holistic approach, considering the properties of the bulge, the thin+thick disks, and the Splash, is vital to this enterprise. However efforts thus far have been hampered by the relatively small datasets comprising thousands of stars. Future surveys from space-based (e.g., _Gaia_) and ground-based (e.g., the Vera C. Rubin Observatory) facilities will permit proper-motion measurements of large samples of bulge stars to help unravel the formation of the bulge (Gough-Kelly et al., 2022). A possible test is the distribution of ages at the bulge's low-[Fe/H] peak versus that of the trough between the two peaks. Most stars in the MW's bulge will now be old; measuring an age difference of \(\sim 2\,\)Gyr in a present-day \(\sim 10\,\)Gyr-old bulge (Ortolani et al., 1995; Kuijken & Rich, 2002; Zoccali et al., 2003; Ferreras et al., 2003; Sahu et al., 2006; Clarkson et al., 2008, 2011; Brown et al., 2010; Valenti et al., 2013; Calamida et al., 2014; Renzini et al., 2018; Surot et al., 2019) is challenging. Nonetheless, the chemical thin and thick disks do appear to overlap in age, as seen by the existence of RR Lyrae with small vertical excursions and low [\(\alpha\)/Fe] (Prudil et al., 2020). An age overlap between the MW's thin and thick disk, which was predicted in Paper I, has also been demonstrated by Beraldo e Silva et al. (2021) using the stellar ages of turnoff and giant stars from the Sanders & Das (2018) catalog. Likewise, Silva Aguirre et al. (2018) find an age overlap between high-\(\alpha\) and low-\(\alpha\) disk stars from asteroseismic ages. Gent et al.
(2022) reach a similar conclusion based on data from the _Gaia_-ESO survey together with _Gaia_ EDR3 data. Thus it may well be possible to measure the mean age difference between stars at the trough and those in the low-[Fe/H] peak to test whether clumps have contributed to the bulge.

### Clumps as probes of feedback implementations

Clumps were first proposed to play a role in the formation of bulges by Noguchi (1999). Following this suggestion, several works explored the role of clumps in bulge formation (Immeli et al., 2004; Bournaud et al., 2007; Elmegreen et al., 2008; Aumer et al., 2010; Inoue & Saitoh, 2012). When the cosmological setting is also included, the possibility of "ex-situ" clumps forming directly in the cold gas streaming in before reaching the disk was also recognized (Dekel et al., 2009; Ceverino et al., 2010). Clumps can even be excited by external perturbations (Inoue et al., 2016). The cosmological simulations of Dubois et al. (2021) find that \(\sim 10\%\) of the stellar mass of \(z=4\) galaxies may be in the form of clumps, while those of Mandelker et al. (2014) resulted in \(60\%\) of galaxies hosting an in-situ clump population. Meanwhile, Mandelker et al. (2017) showed that bulges can host their own clump, which is more robust to feedback; they further showed that radiation pressure increases the cold gas fraction (by delaying star formation), increasing the lifetime of low-mass clumps. Inoue & Saitoh (2012) showed that bulges formed from clump mergers are rapidly rotating, exponential, and comprised of old, metal-rich stars, similar to the bulge of the MW. In addition, clumps may further affect the formation of the bulge by funnelling gas to the center, leading to further star formation and compaction (Dekel & Burkert, 2014). However, other studies have questioned the importance of clumps for the evolution of galaxies. Efficient coupling of feedback energy to gas destroys clumps (Elmegreen et al., 2008; Hopkins et al., 2012) and many simulations that employ high feedback prescriptions have failed to find significant clumps or have found ones that do not contribute much to bulges (e.g. Tamburello et al., 2015). The short-lived clumps in the FIRE simulations do not manage to migrate to the bulge (Oklopčić et al., 2017), and may not even have been bound. Similarly, in the NIHAO simulation suite, Buck et al. (2017) found that clumps are only present in the light, not in the mass, and therefore have minimal contribution to bulge growth. The detailed treatment of various forms of feedback (e.g. Fensch & Bournaud, 2021) therefore plays an important role in the ease with which clumps form in simulations. Moreover, in simulations of single giant molecular clouds (GMCs), the energy imparted by photoionization, winds, and supernova feedback can be channeled along preferred directions, thereby preserving the GMC for a longer time than would otherwise be expected (Rogers & Pittard, 2013; Dale, 2017; Howard et al., 2017). Thus tests of what role, if any, clumps have played in the evolution of galaxies like the MW can inform improvements in subgrid implementations of feedback on the smallest scales, perhaps by accounting for feedback channeling. Further study of the impact of clumps may therefore have a much broader impact on the study of galaxy formation (e.g. Dekel et al., 2022). Recently, Marasco et al.
(2022) found that the observed outflows from a sample of starbursting dwarf galaxies are lower than predicted by cosmological simulations that employ high feedback. They find mass loading factors of warm gas outflows more than 2 orders of magnitude lower than predicted, providing strong support for the need for gentler feedback prescriptions.

### Caveats

The simulation presented in this paper is clearly idealized and lacks some of the ingredients that have been suggested to have mattered in the MW's chemical evolution. Of these the most important is the merger of the _Gaia_-Sausage-Enceladus progenitor. The effect of the GSE merger will be explored in future papers (e.g., Amarante et al., 2022). Our simulations place the initial gas in a hot corona. This is appropriate for a galaxy of the MW's mass since redshift \(z\sim 1\) (Birnboim & Dekel, 2003; Kereš et al., 2005), but is less appropriate before then. More realistically, the gas should flow in along filaments (cold-mode accretion). Unfortunately, setting up such initial conditions for high-resolution simulations is difficult. However, cosmological simulations have found clumps in galaxies still in the filamentary cold accretion mode (Dekel et al., 2009; Ceverino et al., 2010); provided that the inflow rate of the gas, and the resulting clump and star formation rates, are realistic, it does not matter how gas reaches the disk. If gas stalls in the outer disk (for instance as some of the gas does in Agertz et al., 2021), then it may be that gas surface densities are never high enough for clumps to form. We speculate that if this inhibits the flow of gas to the bulge then the bulge itself may never reach the same high-[\(\alpha\)/Fe] state as the thick disk. However, in general, filamentary, cold-mode accretion need not alter the general picture much so long as realistic clumps form. One limitation of the clumpy model presented here is that the clump population in this particular simulation may be too large. This is suggested by the high rotation velocity at the center (Clarke et al., 2019), and the failure to form a bar (although bars often fail to form at this mass resolution). These effects may be improved in models with higher feedback that still permit long-lived, but lower-mass, clumps, which are still able to produce a high-\(\alpha\) population (e.g. Garver et al. submitted). Despite the absence of a bar, we may anticipate what the influence of a bar might be. Debattista et al. (2017) showed that many of the trends with metallicity observed in the MW's bulge can be explained by the secular evolution of the bar, via a mechanism they termed _kinematic fractionation_. In this mechanism, different populations are separated by the bar formation on the basis of their radial random motion. Populations with large radial random motions are lifted by the bar to large heights, ending as a spheroidal population and forming a weaker bar, whereas populations that start with lower radial random motion do not rise to as large heights but end with a more strongly peanut-shaped distribution and a stronger bar. As a result, in general the X-shape is better traced by the metal-rich stars, which are younger and start out cooler, while metal-poor stars, which are older and thus kinematically hotter, trace a more boxy structure (Debattista et al., 2017; Athanassoula et al., 2017; Buck et al., 2018; Debattista et al., 2019; Fragkoudi et al., 2020). Subsequently, Debattista et al.
(2020) demonstrated that the vertical thickening of stellar populations increases monotonically with the radial action of stars from _before_ the bar formed. Since stars with larger radial random motion are typically older, and usually more metal-poor, kinematic fractionation results in a vertical metallicity gradient. In addition, the X-shape ends up better traced by metal-rich stars, as is observed in the MW (Ness et al., 2012; Uttenthaler et al., 2012; Rojas-Arriagada et al., 2014). We have shown here that the radial action, \(J_{R}\), decreases along the bulge's chemical track with increasing metallicity. Thus kinematic fractionation would raise stars at the metal-poor peak to larger heights than those of the metal-rich peak. This would further enhance the trends of Section 3.7, which already match those observed in the MW. Recently, Queiroz et al. (2021) derived distances of a large sample of bulge stars with StarHorse using APOGEE DR16 and _Gaia_ EDR3 parallaxes. They argued that the chemistry of the bulge is comprised of not one but two tracks, contrary to earlier studies. The two tracks do not overlap in metallicity (unlike the thin+thick disks), but are separated by a gap and have different slopes. After accounting for the stellar population-dependent selection function of APOGEE, Eilers et al. (2022) also found tracks with different slopes, although the tracks still do not overlap in metallicity. If these trends are confirmed by imminent large surveys with instruments such as MOONS (Cirasuolo et al., 2014) and 4MOST (de Jong et al., 2014), this may suggest that the clump scenario needs alteration or is perhaps wrong.

### Summary

The principal results of this paper are as follows:

1. A single track with two peaks in the bulge's [Fe/H]-[\(\alpha\)/Fe] space results when clump formation can occur in the early evolution. Clumps sink to the center, contributing to the bulge. The bulge is later populated by more metal-rich, \(\alpha\)-poor stars that form in situ after the epoch of clump formation. Such twin peaks are not present when the feedback suppresses clump formation. The relative mass in the high- and low-[Fe/H] peaks constrains the epoch when star formation in the bulge is quenched (see Sections 3.1, 3.2, and 3.3).
2. Star formation within the bulge occurs in the high-\(\Sigma_{\rm SFR}\) clump mode. This ensures that a separate low-[\(\alpha\)/Fe] track never forms (see Section 3.4).
3. The metal-rich bulge population, while on average younger than the metal-poor population, overlaps with it in age because the latter population is partly built from stars that came in with clumps after the chemical evolution of in-situ star formation in the bulge had moved to higher metallicities (see Section 3.5).
4. By the end of the clump epoch, the bulge is already rapidly rotating. The high-[\(\alpha\)/Fe], low-[Fe/H] bulge population is kinematically hotter than the low-[\(\alpha\)/Fe], high-[Fe/H] one (see Section 3.6).
5. The population at the metal-rich peak is prominent at low latitudes but declines with distance from the mid-plane, as observed in the MW (see Section 3.7).
6. A test of the role of clumps on the MW's bulge comes from a comparison of the age distributions of the low-[Fe/H] peak and the trough between the two peaks. In the presence of clumps, the age distributions overlap significantly, with the mean age higher in the trough than at the low-[Fe/H] peak, contrary to the usual expectation of increasing metallicity with age (see Section 3.5).
This paper, together with Paper I, presents idealized simulations that demonstrate that clump formation provides a very direct and natural way of producing chemical trends observed not only in the MW's thin+thick disk, but also in the bulge. The simulations are by no means wholly realistic, but the ease with which they produce the trends observed in the MW encourages us to explore further the role of clumps in the early history of galaxies. In contrast, satisfying both constraints in other scenarios of thick disk formation may require a more specific set of circumstances, which would mean the MW is unusual. The clump model makes some important predictions that can be verified with future facilities, including that chemical thick disks should be common in MW-mass galaxies and that a population of chemical thin-disk stars of comparable age to the thick disk should exist in the MW. Studying the consequences of the clumps in such simulations may also provide a useful probe of feedback implementations. **Acknowledgements.** V.P.D., L.B.S., and T.K. were supported by STFC Consolidated grant ST/R000786/1. D.J.L. was supported for part of this project by a UCLan UURIP internship. L.B.S. acknowledges the support of NASA-ATP award 80NSSC20K0509 and National Science Foundation AAG grant AST-2009122. J.A.S.A. acknowledges funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No. 852839). M.Z. acknowledges support from the ANID BASAL Center for Astrophysics and Associated Technologies (CATA) through grants AFB170002, ACE210002, and FB210003, the ANID Millennium Institute of Astrophysics (MAS) ICN12_009 and ANID Fondecyt Regular grant 1191505. E.V. acknowledges the Excellence Cluster ORIGINS funded by the Deutsche Forschungsgemeinschaft (DFG; German Research Foundation) under Germany's Excellence Strategy - EXC-2094-390783311. S.A.K. would like to acknowledge support from NASA's Astrophysics Data Analysis Program (ADAP) grant number 80NSSC20K0760. We thank the anonymous referee for comments that helped improve this paper. An important part of the methodology for the stellar population modeling used in this paper was worked out in 2018 at the Aspen Center for Physics, which is supported by National Science Foundation grant PHY-1607611. The visit of V.P.D. was partially supported by a grant from the Simons Foundation. The simulations in this paper were run at the DiRAC Shared Memory Processing system at the University of Cambridge, operated by the COSMOS Project at the Department of Applied Mathematics and Theoretical Physics on behalf of the STFC DiRAC HPC Facility: www.dirac.ac.uk. This equipment was funded by BIS National E-infrastructure capital grant ST/J005673/1, STFC capital grant ST/H008586/1 and STFC DiRAC Operations grant ST/K00333X/1. DiRAC is part of the National E-Infrastructure. This paper is dedicated to the memory of George Lake, for whose support and inspiration V.P.D. is deeply indebted.
2304.03397
High-Dimensional Quantum Certified Deletion
Certified deletion is a protocol which allows two parties to share information, from Alice to Bob, in such a way that if Bob chooses to delete the information, he can prove to Alice that the deletion has taken place by providing a verification key. It is not possible for Bob to both provide this verification, and gain information about the message that was sent. This type of protocol is unique to quantum information and cannot be done with classical approaches. Here, we expand on previous work to outline a high-dimensional version of certified deletion that can be used to incorporate multiple parties. We also experimentally verify the feasibility of these protocols for the first time, demonstrating the original 2-dimensional proposal, as well as the high-dimensional scenario up to dimension 8.
Felix Hufnagel, Anne Broadbent, Ebrahim Karimi
2023-04-06T22:02:33Z
http://arxiv.org/abs/2304.03397v1
# High-Dimensional Quantum Certified Deletion

###### Abstract

Certified deletion is a protocol which allows two parties to share information, from Alice to Bob, in such a way that if Bob chooses to delete the information, he can prove to Alice that the deletion has taken place by providing a verification key. It is not possible for Bob to both provide this verification and gain information about the message that was sent. This type of protocol is unique to quantum information and cannot be done with classical approaches. Here, we expand on previous work to outline a high-dimensional version of certified deletion that can be used to incorporate multiple parties. We also experimentally verify the feasibility of these protocols for the first time, demonstrating the original 2-dimensional proposal, as well as the high-dimensional scenario up to dimension 8.

_Introduction_ - In the current climate of remote services and mass data storage, the ability to know if someone has deleted information that you have sent to them or asked them to hold onto for some period of time may be as important as communication security. Going forward, a verifiable proof that a company has deleted personal data may be integral to our continued faith in cloud storage and data-collecting companies. The inability to make copies of a general quantum state, described by the no-cloning theorem [1; 2], is a fundamental aspect of many proposed quantum technologies, including quantum key distribution (QKD) [3] and blind quantum computing [4]. Such technological solutions use the physical properties of quantum mechanical systems to gain a security advantage over the previously used digital approaches. QKD has become a frontrunning solution to secure communication in a future where access to quantum computing resources will render current security protocols useless. This field has received significant attention both from the theoretical physics and mathematics community, which has developed new protocols and security proofs to optimise the original communication protocol proposed in 1984 [3], and from experimental physics, which has pushed the bounds of what can actually be achieved with fibre channels [5; 6; 7], underwater channels [8], and free-space channels [9; 10] linking line-of-sight stations within cities and from ground to satellite [11]. Furthermore, this technology is beginning to enter the commercially available phase of development, with pioneering companies such as ID Quantique, Toshiba, and MagiQ Technologies Inc. producing specific-use products. Eventually, it is expected that a large infrastructure of quantum communication channels will be used to allow secure communication around the world. On the back of this infrastructure, one can begin to consider other applications for the quantum channels such as blind quantum computation, quantum money, and the quantum internet. A more recent proposal, again resting on the no-cloning theorem for quantum states, has defined a protocol for certified deletion [12]. The certified deletion protocol allows the receiving party to prove to the sender that information sent to them has in fact been deleted and that no copy has been retained. Such a protocol is not possible in the world of digital communication, where a person can always hold on to the raw bits that have been sent and recreate a copy.
This has motivated protocols using certified deletion for data security and privacy, software licensing, and new public encryption schemes using quantum resources [13; 14; 15]. There have also been similar ideas around proving erasure of quantum data stored at some remote location [16]. This allows for one party to store a backup of their data at some location while being able to request a proof that this data is deleted at a later date. Here, we experimentally demonstrate the certified deletion protocol proposed in [12], using the orbital angular momentum degree of freedom of photons. In addition, we extend the protocol beyond the qubit, aiming to develop the certified deletion protocol to utilise high-dimensional quantum communications systems.

_Protocol_ - A fundamental aspect of certified deletion is shared with QKD, that of conjugate coding using mutually unbiased bases (MUBs). By encoding information into conjugate bases, we are able to take advantage of the quantum no-cloning principle, leading to mathematical limits on the eavesdropper's abilities in QKD or to Bob's ability to _both_ convince Alice of deletion _and_ extract information about the message in certified deletion.

Figure 1: Here we describe the relationship between BB84 and certified deletion. In the certified deletion protocol the malicious Bob, who is trying to determine both the deletion key and the secret message, maps onto Eve in the BB84 protocol. For the security proof, we can describe the certified deletion protocol as Alice sending the quantum state to Malicious Bob and then on to the Honest Bob.
For certified deletion, the string of bits composing the message from Alice are encoded in the computational basis, \(\Pi_{1}=\left\{\widehat{M}_{A}^{x}=\left|\psi_{x}\right\rangle\left\langle \psi_{x}\right|\right\}\), and the deletion key is encoded in the Hadamard basis, \(\Pi_{2}=\left\{\widehat{N}_{A}^{\gamma}=\left|\phi_{y}\right\rangle\left\langle \phi_{y}\right|\right\}\). The choice of ordering for sending in the computational and Hadamard basis is randomised, determined by a random number generator. Thus due to the use of the mutually unbiased bases, Bob can only get information either about the message, by measuring in the computational basis, or the deletion key, by measuring in the Hadamard basis. We show that an attempt from Bob to obtain information about both the key and message is equivalent to an eavesdropper attack on a QKD scheme, and thus, our security proof can use proofs developed for quantum communication. We can draw a picture here relating certified deletion to BB84, Fig. 1. In the certified deletion case we must consider as an adversary a malicious Bob who is trying to determine both the deletion key and secret message, and an honest Bob. The malicious Bob here maps onto Eve in BB84, where Eve is trying to find out information without introducing error, which can be generallized to Eve trying to find out information about one basis without introducing errors into the conjugate basis. To frame it in another way, Bob must provide Alice with the deletion key, meaning he makes each measurement in the Hadamard basis. While Eve tries to determine Alice's states in the computational basis only to gain information on the secret message. As in QKD protocols, a certain upper bound is established for the number of errors allowable, here in the proof of deletion Bob provides to Alice. The mutual information between Alice and Bob is given by \[I_{AB}=\sum_{ij}P(x_{i},y_{j})\log_{2}\left(\frac{P(x_{i},y_{j})}{P(x_{i})P(y _{j})}\right), \tag{1}\] where the \(P(x_{i},y_{j})\) are the probabilities of the outcome \(x_{i}\) for Alice and \(y_{i}\) for Bob, and \(P(x_{i})\) and \(P(y_{i})\) are the independent probabilities of each outcome for Alice and Bob. Thus in the high dimensional case, _i.e._, qudits, we consider a uniform probability of detection errors in Bob's measurement, and can give the mutual information for Alice and Bob in terms of Bob's state fidelity \(F\) by, \[I_{AB}=\log_{2}(d)+F\log_{2}\left(F\right)+\left(1-F\right)\log_{2}\left(\frac {1-F}{d-1}\right). \tag{2}\] Bob's state fidelity is given by \(F=\left\langle\psi_{i}|\widetilde{\rho}_{B}\left|\psi_{i}\right\rangle\) in the computational basis and by \(F=\left\langle\phi_{i}|\widetilde{\rho}_{B}\left|\phi_{i}\right\rangle\) in the Hadamard basis where \(\widetilde{\rho}_{B}\) is the quantum state received by Bob via the quantum channel. This fidelity also corresponds to the trace of the probability of detection matrix that will be measured in the experimental section to characterise the quantum channel and yields the quantum bit error rate (QBER) through \(\text{QBER}=1-F\). It has been shown that a limit on Eve's, _i.e._, malicious Bob's, information can be derived from an uncertainty principle approach which limits the information that can be gained by Eve and Bob from making measurements \(\widehat{E}_{A}\) and \(\widehat{B}_{A}\) respectively on a quantum system \(A\)[18]. 
It has been shown that a limit on Eve's, _i.e._, malicious Bob's, information can be derived from an uncertainty-principle approach, which limits the information that can be gained by Eve and Bob from making measurements \(\widehat{E}_{A}\) and \(\widehat{B}_{A}\), respectively, on a quantum system \(A\) [18]. Using the eigenstates \(\left|e_{j}\right\rangle\) and \(\left|b_{i}\right\rangle\) for \(\widehat{E}_{A}\) and \(\widehat{B}_{A}\) respectively, this limit is given by \[I_{AB}+I_{AE}\leq 2\log_{2}\left(d\max_{i,j}\left|\left\langle b_{i}\left|e_{j}\right\rangle\right|\right). \tag{3}\] From here, we know that a measurement by Eve in the complementary MUB to Bob will give \(\left|\left\langle b_{i}\left|e_{j}\right\rangle\right|^{2}=1/d\) and thus \(I_{AB}+I_{AE}\leq\log_{2}(d)\). Finally, the proposition \(R\geq\max\{I_{AB}-I_{AE},I_{AB}-I_{BE}\}\), detailing the necessary restriction on the mutual information for generating a message, gives the result that we can establish a non-zero message rate for \(I_{AB}>I_{AE}\). Here, \(I_{ij}\) defines the mutual information between \(i\) and \(j\), where \(A\), \(B\), and \(E\) represent Alice, Bob, and Eve, respectively [19]. We can then determine that we must have \(I_{AB}>\log_{2}(d)/2\) to guarantee a positive message rate. Combining this with Eq. (2), we reach a lower bound on the fidelity required to give a positive message rate: \[F\log_{2}\left(\frac{1}{F}\right)+\left(1-F\right)\log_{2}\left(\frac{d-1}{1-F}\right)<\frac{\log_{2}d}{2}. \tag{4}\] Once it is established that Alice and Bob have a sufficiently high mutual information (fidelity, corresponding to a limit on Eve's information), it has been shown that hash functions can be used to further reduce Eve's information all the way to zero, though this privacy amplification comes at the expense of reducing Alice and Bob's message length. Equation (4) allows us to determine the maximum tolerable QBER, beyond which point a secret message cannot be established. For example, the error thresholds for dimensions 2, 4, and 8 are 11.00%, 18.93%, and 24.70%, respectively [18].
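These thresholds can be recovered numerically from Eq. (4); a small sketch (assuming only that the left-hand side decreases monotonically with \(F\) on \((1/d,1)\), which it does) uses bisection on the fidelity:

```python
import numpy as np

def lhs(F, d):
    """Left-hand side of Eq. (4) at fidelity F in dimension d."""
    return F * np.log2(1 / F) + (1 - F) * np.log2((d - 1) / (1 - F))

def max_qber(d, tol=1e-12):
    """Largest tolerable QBER = 1 - F*, where F* solves lhs(F*, d) = log2(d)/2."""
    lo, hi = 1.0 / d + 1e-9, 1.0 - 1e-9
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if lhs(mid, d) > np.log2(d) / 2:
            lo = mid   # too much error here; the fidelity threshold is higher
        else:
            hi = mid
    return 1.0 - hi

for d in (2, 4, 8):
    print(d, round(100 * max_qber(d), 2))  # -> 11.0, 18.93, 24.7 (%)
```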
_High-dimensional photonic states_ - In our protocol, we will use orbital angular momentum (OAM) states to encode our quantum information. OAM of photons is defined by its characteristic property of having an azimuthally dependent phase \(\left\langle\mathbf{r}\right|\ell\rangle:=e^{i\ell\phi}/\sqrt{2\pi}\), where \(\ell\) is an integer from \(-\infty\) to \(+\infty\), and \(\phi\) is the azimuthal angle in the polar coordinates \(\mathbf{r}\). Such photonic states carry angular momentum of magnitude \(\ell\hbar\) per photon along the propagation direction. These photonic quantum states form complete and orthogonal bases, which we represent with \(\left|\ell\right\rangle\). The OAM states for \(\ell=\{-4,-3,\ldots,3,4\}\), with the computational and the corresponding Hadamard states, are shown in Fig. 2. In two dimensions the protocol is similar to that used in BB84 QKD. One first takes the computational basis given by \(\{\left|\psi_{0}\right\rangle:=\left|-1\right\rangle;\,\left|\psi_{1}\right\rangle:=\left|+1\right\rangle\}\). The conjugate basis is then taken as the Hadamard, \(\left\{\left|\phi_{1}\right\rangle:=\left(\left|-1\right\rangle+i\left|+1\right\rangle\right)/\sqrt{2};\left|\phi_{2}\right\rangle:=\left(\left|-1\right\rangle-i\left|+1\right\rangle\right)/\sqrt{2}\right\}\). As these states form a pair of MUBs, a projective measurement of a state in the correct basis gives a certain result, due to the orthogonality of the states in each basis, while measurement in the wrong basis gives no information, or a 50% probability of detection for each state in the 2-dimensional case. When we move to higher dimensions we again will use two MUBs. However, each MUB will now have \(d\) different states. In the higher dimensions, the states for the Hadamard basis are given by \(\left|\phi_{i}\right\rangle=\frac{1}{\sqrt{d}}\sum_{j=1}^{d}e^{2\pi i\,ij/d}\left|\psi_{j}\right\rangle\). For dimension 4, the states would be \(\ell=\{-2,-1,+1,+2\}\) and for dimension 8, the states would be \(\ell=\{-4,-3,-2,-1,+1,+2,+3,+4\}\). More generally, the computational basis will be the pure OAM states with \(\ell\in\{-(d-1)/2,\ldots,0,\ldots,(d-1)/2\}\) for odd dimensions, and \(\ell\in\{-d/2,\ldots,-1,+1,\ldots,+d/2\}\) excluding \(\ell=0\) for even \(d\).

Figure 2: The logical and Hadamard bases for dimensions 2, 4 and 8 are shown. The pure OAM states are shown on the left, where \(\ell=\{-1,+1\}\) make up the logical basis for \(d=2\); \(\ell=\{-2,-1,+1,+2\}\) for \(d=4\); \(\ell=\{-4,-3,-2,-1,+1,+2,+3,+4\}\) for \(d=8\). The Hadamard basis states for dimensions 2, 4, and 8 are shown on the right side. The states are plotted with intensity and phase, where the colour ranging from red through the colors and back to red represents a phase of 0 to 2\(\pi\).

_Experiment_ - The experimental setup consists of a single photon source and OAM state preparation for Alice, and an OAM measurement system for Bob. Single photons are produced by pumping a BBO crystal with a 405 nm UV laser. Degenerate photon pairs are selected using a 10 nm bandpass filter centred at 810 nm wavelength. A knife edge mirror is placed after the filter to separate the photon pairs, which are anti-correlated in momentum. Each of these halves of the beam are then coupled to a single mode fibre, where the signal photon is sent to Alice's state preparation system and the idler photon is sent to a detector. When coupled to the single mode fibres, the source has a rate of approximately 22 kHz, which reduces to 1 kHz after propagation through the experimental setup. The losses are mostly due to the spatial light modulators (SLMs) used for state preparation and detection. Alice uses an SLM to generate her desired OAM states. SLMs are not 100% efficient, thus a holography approach is used, adding a diffraction grating to the phase mask, which results in the desired OAM mode being formed in the 1st order of diffraction with very high state purity. After the SLM, a 4-f lens system with a pinhole at the focus is used to filter out all diffraction orders except for the first order. Bob also has an SLM and uses the phase flattening technique to measure the states sent by Alice. This technique involves applying the conjugate phase of the OAM mode that is to be measured and can be seen as removing any transverse phase from the incoming OAM-carrying photon. This allows the photon to be coupled to a single mode fibre upon propagation, as the flat phase corresponds to a Gaussian mode in the far field. It must be stated that this is a projective measurement in which Bob must choose which state he will measure and only measures a single state at a time, thus limiting the efficiency of this particular measurement technique. There do, however, exist different approaches which can sort OAM modes upon choosing the basis in which one wants to measure. At present, many of these approaches suffer from efficiency problems which limit the eventual transmission rates. The idler photon is sent to Bob to perform a coincidence measurement with the signal photon, reducing noise from background light and detector dark counts. Both photons are detected using single-photon avalanche diode (SPAD) detectors.
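The mutual unbiasedness of the two bases used above can be checked numerically. In this sketch (ours; it assumes the discrete-Fourier form of the generalized Hadamard basis as reconstructed above), every cross-basis overlap comes out as \(1/d\):

```python
import numpy as np

def mub_pair(d):
    """Computational basis and its Fourier-conjugate ("Hadamard") basis,
    |phi_i> = d**-0.5 * sum_j exp(2*pi*1j*i*j/d) |psi_j>."""
    psi = np.eye(d, dtype=complex)  # columns are the |psi_j>
    phi = np.exp(2j * np.pi * np.outer(np.arange(d), np.arange(d)) / d) / np.sqrt(d)
    return psi, phi

d = 8
psi, phi = mub_pair(d)
overlaps = np.abs(psi.conj().T @ phi) ** 2
print(np.allclose(overlaps, 1.0 / d))  # True: the bases are mutually unbiased
```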
Bob makes all measurements, in the computational and Hadamard basis, using the SLM with different phase patterns. The experimental probability of detection matrices for the \(d=2\), 4, and 8 QKD setups are shown in Fig. 4. The columns correspond to the different states sent by Alice, while the rows correspond to the choice of measurement made by Bob. Thus we expect to find 100% detection probability on the diagonal elements, where Bob's measurement setting is the same state as that sent by Alice. When Bob measures Alice's states in the incorrect basis, he can gain no information and we expect to see a uniform probability of \(1/d\) for all measurements in the wrong basis. When Bob reads the message, he performs all measurements in the logical basis, giving the measurement results shown on the left. This results in no information being gained about the deletion key, which can be seen by the incoherence of Alice's deletion key states. When Bob chooses to delete the message, he will instead choose to measure every state using the Hadamard basis, giving the results shown on the right. In this case we can see that the message states sent by Alice do not give any information. The error rates obtained are \(\text{QBER}=0.96\%,2.4\%\), and \(7.2\%\) for dimension 2, 4, and 8, respectively. We can calculate the achievable message rate from our QBER (\(Q\)) using \(K^{(d)}(Q)=\log_{2}(d)-2\,h^{(d)}(Q)\), where the \(d\)-dimensional Shannon entropy is given by \(h^{(d)}(x)=-x\log_{2}(x/(d-1))-(1-x)\log_{2}(1-x)\). We have obtained message rates of 0.84, 1.60, and 1.85 per message photon for dimension 2, 4, and 8, respectively. Here, we see the advantage that can be achieved by moving to higher dimensional protocols, as the message rates here increase as the dimension of the protocol increases.

Figure 3: Experimental Setup: Alice and Bob's measurement and detection setups are shown. Alice pumps a BBO crystal with a UV diode laser at 405 nm resulting in 810 nm pairs of single photons through SPDC, and the pump is filtered out using a 10 nm width bandpass filter centred at 810 nm. The entangled pairs are separated using a knife edge mirror as shown in inlay **a**. Entangled pairs are anti-correlated in their momentum and thus one photon from each pair falls on each side of the knife edge, as shown in the inlay. The single photons are coupled to a single mode fibre (SMF). The idler photon is then sent directly to Bob and the signal photon is sent to Alice's state preparation stage, which uses a spatial light modulator (SLM) to prepare the OAM carrying beams. A diffraction grating is applied to the phase hologram, which results in the formation of the desired mode in the first order of diffraction. A 4-f lens system with a pinhole at the focus then removes all the diffraction orders other than the first order. The holograms used for the dimension 4 states along with the corresponding modes are shown in inlay **b**. Bob measures the incoming photon's state using an SLM and SMF, again using a 4-f system to remove all of the diffraction orders other than the first. The collected photons and idler photons are detected using single-photon avalanche diode (SPAD) detectors which then trigger a coincidence box to measure the coincidences.
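The quoted message rates follow directly from the measured QBERs; a minimal sketch (ours) of the calculation:

```python
import numpy as np

def shannon_d(x, d):
    """d-dimensional Shannon entropy h^(d)(x) from the text."""
    if x == 0:
        return 0.0
    return -x * np.log2(x / (d - 1)) - (1 - x) * np.log2(1 - x)

def message_rate(Q, d):
    """Message rate per sifted photon: K^(d)(Q) = log2(d) - 2 h^(d)(Q)."""
    return np.log2(d) - 2 * shannon_d(Q, d)

for d, Q in [(2, 0.0096), (4, 0.024), (8, 0.072)]:
    print(d, round(message_rate(Q, d), 2))  # -> 0.84, 1.6, 1.85
```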
_Discussion_ - A practical quantum information processing hardware called quantum memory, capable of arbitrarily storing/releasing quantum states, has not yet been fully realised. Though the deletion concept may seem incomplete without a quantum memory, we can still conceive of other near-term applications in which Bob decides when the information is transmitted and whether he will keep or delete the information. For instance, Alice may continually send out an encryption key or software license, for which Bob must continue to verify deletion until the time he would like to use it. Therefore, providing schemes to confirm a message has not been read will be vital for a quantum communication network. In this work, we have experimentally demonstrated the certified deletion protocol for a qubit QKD system as proposed by Broadbent and Islam [12]. We have also extended the protocol to include high-dimensional quantum states and have demonstrated this high-dimensional protocol, gaining an advantage in the message rate per sifted photon. The increase in message rate at higher dimensions also comes with a higher tolerance for errors (and therefore noise), which can be valuable for establishing communication in certain noisy environments. Another property of high dimensional states is that there are more than two MUBs that can be used to encode information. In fact, in dimensions where \(d\) is a power of a prime number, _e.g._, \(d=2,3,4,5,8,\ldots\), there exist (\(d+1\)) MUBs. Unique QKD protocols, _e.g._ six-state [20], using these extra MUBs have been created that have additional key rate and security benefits over BB84. For certified deletion, one could come up with interesting new protocols involving multiple parties with messages or deletion keys encoded in the different bases. This work was supported by Canada Research Chairs; Canada First Research Excellence Fund (CFREF); National Research Council of Canada High-Throughput and Secure Networks (HTSN) Challenge Program; and Natural Sciences and Engineering Research Council of Canada (NSERC).
2306.16685
Constrained RS coding for Low Peak to Average Power Ratio in FBMC -- OQAM Systems
Multi-carrier modulation techniques have now become a standard in many communication protocols. Filter bank based multi-carrier (FBMC) generation techniques have been discussed in the literature as a means for overcoming the shortcomings of IFFT/FFT based OFDM system. The Peak to Average Power Ratio (PAPR) is a problem faced by all multi-carrier techniques. This paper discusses the methods for reducing PAPR in a FBMC system while maintaining acceptable Bit Error Rate (BER). A new PAPR minimizing scheme called Constrained Reed Solomon (CRS) coding is proposed. The hybrid techniques using coding and companding are tested for different channel models and is found to yield promising results.
Job Chunkath, V. S. Sheeba, Nisha Varghese
2023-06-29T05:11:55Z
http://arxiv.org/abs/2306.16685v2
# Constrained RS coding for Low Peak to Average Power Ratio in FBMC - OQAM Systems

###### Abstract

Multi-carrier modulation techniques have now become a standard in many communication protocols. Filter bank based multi-carrier (FBMC) generation techniques have been discussed in the literature as a means for overcoming the shortcomings of IFFT/FFT based OFDM systems. The Peak to Average Power Ratio (PAPR) is a problem faced by all multi-carrier techniques. This paper discusses methods for reducing PAPR in an FBMC system while maintaining acceptable Bit Error Rate (BER). A new PAPR minimizing scheme called Constrained Reed-Solomon (CRS) coding is proposed. The hybrid techniques using coding and companding are tested for different channel models and are found to yield promising results.

BER; Block codes; Companding; Fading; PAPR.

## I Introduction

The excessive growth of wireless communication services has resulted in an ever-increasing demand for higher data rates in many application areas. The high data rate requirement has resulted in the exploration of new methods for implementing wireless communication. The present-day data rate requirement can be supported by the use of multi-carrier (MC) based systems, instead of the single carrier (SC) systems prevalent in earlier days. The ability to perform well in multi-path propagation and frequency-selective fading environments was considered the major advantage of this concept. The multi-carrier communication scheme considers a wideband frequency-selective channel as a number of narrowband sub-channels with flat fading characteristics, which can be rectified by using simple equalization techniques at the receiver. One of the popular multi-carrier systems presently being used is the Orthogonal Frequency Division Multiplexing (OFDM) technique [1]. Many wideband applications like Digital Audio Broadcast (DAB), Digital Video Broadcast (DVB), and Wi-Fi make use of OFDM techniques. The main advantage of OFDM is its high spectral efficiency. It makes use of a cyclic prefix (CP) to overcome the Inter-Symbol Interference (ISI) caused by delay spread. This redundancy in data results in a decrease of the overall data rate. The stop-band attenuation of the OFDM system is -13dB, which causes leakage into adjacent sub-bands. The above drawbacks of OFDM have resulted in the search for a better system. This has resulted in the study of the filter bank multi-carrier (FBMC) system as a possible alternative candidate for the generation of orthogonal multi-carriers. In an FBMC system it is possible to have prototype filters with large filter orders independent of the number of allotted sub-channels. The filters for different sub-channels are combined together with the help of a transmultiplexer [2]. Thus this system can have better stop-band attenuation, which results in lower frequency leakage between adjacent sub-channels. The improvement in spectral shaping leads to the use of simpler equalization techniques at the receiver, avoiding the use of a CP. The FBMC scheme discussed in this paper makes use of Offset Quadrature Amplitude Modulation (OQAM) as the digital modulation technique for the data stream which is provided to the filter bank system. The OQAM modulation helps in maintaining orthogonality between the adjacent sub-channels, thus facilitating data recovery even during adjacent channel interference. The system discussed in this paper makes use of OQAM digital modulation and filter bank based multi-carrier generation, hence it can be called an FBMC-OQAM system [3].
The FBMC system is also affected by the PAPR problem. The conventional solution to this problem is amplifier '_back off_', i.e., the amplifier operating characteristics are set below the desirable optimum values. Hence the amplifier will be able to operate in the linear region for a limited dynamic range of the signal, but this results in limited stage gain. Diverse PAPR reduction techniques for OFDM systems are described in the literature. The difference between the OFDM system and the FBMC system is that overlapping adjacent symbols are present in an FBMC system [3]. Hence all the PAPR minimizing techniques used for OFDM are not applicable in an FBMC system. The clipping and filtering method in conjunction with Tone Reservation (TR), Active Constellation Extension (ACE), and both these methods together (TRACE) are discussed in [4] by Eldessoki _et al_. The PAPR is reported at 5dB for the combined methods with a BER of \(10^{-6}\) for SNR greater than 14dB. The method discussed in [4] makes use of multiple stages to achieve PAPR reduction, and its performance in fading channels is not discussed. Techniques like Multi-Block Joint Optimization (MBJO) [5] and sliding window tone reservation (SWTR) are used for PAPR reduction in FBMC-OQAM systems in [6] by Shixian Lu _et al_. The BER performance of these systems is not reported; moreover, the overlapping structure of FBMC-OQAM symbols leads to high system complexity. Clipping and its iterative compensation is proposed for the FBMC-OQAM system by Zsolt Kollar and Peter Horvath in [7], where the complex receiver design needed to compensate for clipping noise is the major drawback. PAPR reduction methods for an FBMC system with PAM symbols are discussed in [8], but this is limited only to PAM symbols. The \(A\)-law and \(\mu\)-law companding methods are considered in [9]; it is observed that the BER performance is better when \(\mu\)-law companding is used. PAPR reduction using DFT spreading and Identically Time Shifted Multicarrier (ITSM) conditioning for the FBMC waveform is discussed by Dongjun Na and Kwonhue Choi in [10]. The PAPR value obtained is around 8dB. Junhui Zhao _et al_ [11] discuss a joint optimization of Partial Transmit Sequence (PTS) and Improved Bi-Layer Partial Transmit Sequence with Iterative Clipping and Filtering (IBPTS-ICF) scheme for reducing PAPR to 4dB. A hybrid PAPR reduction scheme involving multi-data block Partial Transmit Sequence (PTS) and tone reservation (TR) is described in [12] by Han Wang _et al_. The lowest value of PAPR obtained is 6dB, but the BER performance in various channels is not discussed in [10], [11], and [12]. The reliability of a communication system depends on data integrity, which is to be safeguarded by an efficient error correction scheme [13]. In this paper, block coding techniques like Bose, Chaudhuri, and Hocquenghem (BCH) coding and Reed-Solomon (RS) coding are implemented. The RS code is a sub-class of BCH codes; it is a non-binary block code that can correct multiple random as well as sporadic errors [14]. The non-binary nature of the RS code can be effectively utilized for the symbol transmission used in multi-carrier communication. The RS coding scheme is finding increased popularity in mobile communication scenarios due to the efficient decoder implementation. In this paper, we present an FBMC-OQAM system with different PAPR reduction methods.
The multi-carrier signals from the transmitter are subjected to the impairments of different channel models like the ITU Vehicular A channel and Pedestrian B channel. Various PAPR reduction methods like non-linear companding are carried out along with error control coding techniques like BCH and RS coding. The different PAPR reduction methods are compared using CCDF plots. The system performance is evaluated using BER plots in different channel models. The typical performance of FBMC-OQAM with constrained RS coding and \(\mu\)-law companding results in a PAPR of 4.6dB, which is lower than the values claimed in recent papers [4], [7], and [10]. The proposed scheme confines PAPR to a narrow range of 0.55dB for a wide dynamic input channel load. Thus this method is helpful in preventing problems related to signal amplification due to the non-linearity of the final-stage power amplifier. Thus the method proposed in this paper is capable of delivering a low PAPR and BER using the same coding scheme. The paper is organized into five sections. Section II describes the system model; it deals with the design and implementation of the FBMC-OQAM system. The coding and companding techniques for PAPR reduction are discussed in section III. The simulation results are presented in section IV. Section V summarizes the paper and its contributions.

## II System Model

The block schematic of the complete system is given in Fig. 1. The transmitter section consists of the input data block. This block applies the necessary reformatting to the data as required by the error control coding scheme. The encoded stream of bits is then given to the FBMC-OQAM transmitter. The \(\mu\)-law companding is applied on the bit stream before transmission. At the receiver section, the first stage applies the inverse \(\mu\)-law transform. The subsequent FBMC-OQAM receiver is used to recover the data. The error correction on the recovered data is done using the error control stage. The decoded data is then sent towards the data output block.

Fig. 1: Block schematic of the proposed system.

### _FBMC-OQAM System_

One of the major concepts in the FBMC system is the use of a transmultiplexer. The transmultiplexer is used for converting the signals from the time division multiplexed (TDM) version into frequency division multiplexed (FDM) form and vice versa [15]. The FBMC-OQAM system is shown in Fig. 2. The main processing blocks in this direct form representation of an FBMC-OQAM system are OQAM pre-processing, synthesis filter bank, analysis filter bank, and OQAM post-processing [9].

### _OQAM Pre/Post Processing_

The first operation in OQAM pre-processing is the conversion from complex values to real values. The real and imaginary parts, which are the constituents of the complex-valued QAM symbol \(c_{k,l}\), where \(k=0,1,\ldots,M-1\), are separated and time-staggered by half the symbol period. The conversion from complex-to-real increases the sampling rate by a factor of 2. The interference in adjacent sub-channels is avoided as the adjacent values in a single sub-channel and in the next sub-channel are multiplied by powers of \(j\) (by \(\theta_{k,n}=j^{(k+n)}\)), hence they will be orthogonal to each other, thus ensuring interference-free transmission.
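As an illustration of the pre-processing just described, the following minimal sketch (ours, not the authors' implementation; the choice of putting the real part first on every sub-channel is a simplifying assumption) splits an \(M\times L\) block of QAM symbols and applies the \(\theta_{k,n}\) weights:

```python
import numpy as np

def oqam_preprocess(c):
    """OQAM pre-processing sketch for an (M x L) block of complex QAM
    symbols c[k, l]: split real/imaginary parts (doubling the rate) and
    multiply by theta[k, n] = j**(k + n) for inter-channel orthogonality."""
    M, L = c.shape
    d = np.empty((M, 2 * L))
    d[:, 0::2] = c.real  # real part first on each sub-channel (a simplification)
    d[:, 1::2] = c.imag  # imaginary part delayed by half a symbol period
    k = np.arange(M)[:, None]
    n = np.arange(2 * L)[None, :]
    return d * (1j ** (k + n))

qam = (2 * np.random.randint(0, 2, (64, 4)) - 1) + 1j * (2 * np.random.randint(0, 2, (64, 4)) - 1)
print(oqam_preprocess(qam).shape)  # (64, 8): twice the symbol rate
```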
The first operation in OQAM post-processing is the multiplication of the sequence by \(\theta_{k,n}^{*}\) (where \(\theta^{*}\) is the conjugate of \(\theta\)), followed by the operation of separating the real part. The second operation is real-to-complex conversion, which decreases the sample rate by a factor of 2 [9].

### _Synthesis & Analysis Filter Banks_

All the sub-channel filters in the synthesis filter bank \(G_{k}(Z)\) with near perfect reconstruction (NPR) characteristics are formed from a single real-valued, linear-phase FIR prototype filter \(G_{0}(Z)\) with impulse response \(p(m)\), by exponential modulation. The \(k^{th}\) synthesis filter is defined by

\[g_{k}(m)=p(m)\exp\left(j\frac{2\pi k}{M}\Big{(}m-\frac{L_{p}-1}{2}\Big{)}\right) \tag{1}\]

where \(m=0,1,\ldots,L_{p}-1\) and \(L_{p}\) is the filter length. The \(k^{th}\) analysis filter is the time-reversed, conjugated version of the synthesis filter [9], defined by

\[f_{k}(m)=g_{k}^{*}(L_{p}-1-m)\]

In this paper, the prototype filter is designed using the frequency sampling method [9].

## III Coding & Companding

One of the techniques considered for PAPR reduction in the literature is block coding. Block coding results in spreading out the sequence of bits in such a way that the orthogonal signals will not have the same phase, which prevents high signal peaks. This paper discusses the application of block coding followed by companding as an effective method for reducing PAPR. The block schematic of the system discussed is shown in Fig. 1. The companding transform is helpful in curtailing high peak amplitudes in the modulated output signal, thus resulting in reduced PAPR. In this paper, the \(\mu\)-law companding method is implemented and is found to deliver lower PAPR at acceptable levels of BER.

### _\(\mu\)-Law Companding_

The output of \(\mu\)-law companding is given by the expression

\[F(y)=\mathrm{sgn}(y)\,\frac{\ln(1+\mu|y|)}{\ln(1+\mu)},\quad-1\leq y\leq 1 \tag{2}\]

where \(y\) is the input signal to the compander. The companding transform is found to increase the dynamic range of the signal, but it has a lesser effect on smaller amplitudes of the signals. The companding is carried out at the transmitter with \(\mu=25\) [9]. At the receiver, the inverse of the companding transform is carried out on the received signal \(r\). The expression of the inverse transform is given below.

\[F^{-1}(r)=\mathrm{sgn}(r)\left(\frac{1}{\mu}\right)\left((1+\mu)^{|r|}-1\right),\quad-1\leq r\leq 1 \tag{3}\]
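Equations (2) and (3) form an exact inverse pair (the \(\ln(1+\mu)\) normalisation in Eq. (2) is what makes Eq. (3) its exact inverse); a small numerical sketch (ours) verifies the round trip:

```python
import numpy as np

MU = 25  # companding parameter used in this paper

def compand(y):
    """mu-law companding, Eq. (2); y is the signal normalised to [-1, 1]."""
    return np.sign(y) * np.log(1 + MU * np.abs(y)) / np.log(1 + MU)

def expand(r):
    """Inverse transform, Eq. (3), applied at the receiver."""
    return np.sign(r) * ((1 + MU) ** np.abs(r) - 1) / MU

x = np.linspace(-1, 1, 11)
print(np.allclose(expand(compand(x)), x))  # True: the round trip is exact
```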
### _Bose, Chaudhuri, and Hocquenghem Coding_

The ability to detect and correct errors is an essential requirement for the successful performance of any communication system. The BCH codes are error-correcting codes used to correct random errors occurring during data transmission. The BCH codes are linear, cyclic codes capable of correcting several errors in a block of message bits. Typically, for any positive integer \(m\geq 3\) and Hamming distance \(d_{min}\geq 2t+1\), a binary BCH code having the following parameters is found to exist.

\[\begin{array}{l}\text{Block length: }n=2^{m}-1\\ \text{Number of parity check digits: }r=n-k\leq mt\\ \text{Minimum distance: }d_{min}\geq 2t+1\end{array}\]

where \(n\) is the length of the encoded message bits, \(k\) is the input vector length, and \(t\) is the number of errors that can be corrected by the BCH code. The BCH encoding results in the introduction of \(n-k\) parity bits in the encoded message. This sequence of bits increases the Hamming distance between successive message bits. The increased distance and orthogonality of the encoded message prevent the accumulation of same-phase signals, which reduces the PAPR. The BER performance of BCH coding is also good, due to its inherent error-correcting capability. In this paper a BCH code with the parameters \((n,k)=(127,85)\) is chosen.

### _Reed-Solomon (RS) Coding_

The Reed-Solomon (RS) code is a block code scheme that utilizes groups of bits, or symbols, instead of individual bits as in the case of the BCH code. If \(k\) symbols of a message are to be encoded with \(r\) parity symbols, it will form a codeword of length \(n=k+r\). As the RS code utilizes symbol grouping, the number of symbols in a codeword is fixed as \(n=2^{m}-1\). The symbol error correcting capability of the code is \(t=r/2\). In this paper we also evaluate the suitability of RS (25, 16), which is a punctured form of RS (31, 19), for PAPR reduction and error correction.

### _Constrained Reed-Solomon (CRS) Coding_

The RS coding scheme suitable for an FBMC system can be developed as follows. Consider an \(M\)-channel FBMC system, where \(M=2^{N}\). For an OQAM scheme, 2-bit symbols can be applied to a channel; hence the total number of bits that can be sent using \(M\) channels is \(2^{N+1}\) bits. Assume that an error control coding scheme having a total size of \(\approx 2^{N+1}\) bits, i.e., message and parity bits together, exists. Let \(R_{b}\) be the number of parity bits of this coding scheme, such that \(R_{b}\leq 2^{N}\). Suppose a parity symbol is formed by \(q\) bits from the available parity bits \(R_{b}\). A maximum of \(r\) parity symbols can be formed with \(q\) bits each if

\[r\times q\leq 2^{N} \tag{4}\]

The total number of possible message bits, \(K_{b}\), is

\[K_{b}=2^{N+1}-r\times q \tag{5}\]

Assume that each message symbol is formed by \(p\) bits from the available \(K_{b}\) message bits. A maximum of \(k^{\prime}\) message symbols can be formed with \(p\) bits each, as

\[k^{\prime}\times p\leq 2^{N+1}-r\times q \tag{6}\]

\[\text{i.e.}\quad k^{\prime}\times p+r\times q\leq 2^{N+1} \tag{7}\]

Now consider an \((n,k)\) RS code, with a total symbol size

\[n=k+r \tag{8}\]

A constraint \(p<q\) is applied on equation (7), keeping both \(p\) and \(q\) as integers. Then the upper limit for \(p\) can be found from the relation

\[p\leq\frac{2^{N+1}-r\times q}{k^{\prime}}\quad\text{where }k^{\prime}\leq k \tag{9}\]

The lower limit of \(p\) can be found by applying the condition \(k^{\prime}=k\) in equation (9). Applying the constraint \(p<q\) on an RS (31, 19) code with \(N=6\), \(r=12\), \(q=5\), \(k^{\prime}=16\) and evaluating equation (9), the value of \(p\) lies in the interval \(3.58<p\leq 4.25\); hence the integer value \(p=4\) can be accepted. The application of \(p<q\) and \(k^{\prime}>r\) in the proposed constrained RS coding results in

\[k^{\prime}\times p\cong r\times q \tag{10}\]
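The parameter constraints above can be checked with a few lines (ours; the values are taken from the text):

```python
# Checking the CRS parameter constraints of Eqs. (4)-(10) for RS(31, 19)
# on a 64 sub-channel FBMC-OQAM system.
N, k, r = 6, 19, 12            # M = 2**N = 64 sub-channels; n = k + r = 31
q, k_prime = 5, 16             # 5-bit parity symbols; k' message symbols used

assert r * q <= 2 ** N                     # Eq. (4): 60 <= 64
p_hi = (2 ** (N + 1) - r * q) / k_prime    # Eq. (9) upper limit: 4.25
p_lo = (2 ** (N + 1) - r * q) / k          # lower limit with k' = k: ~3.58
print(round(p_lo, 2), p_hi)                # 3.58 4.25 -> the only integer p is 4

p = 4
assert p < q and k_prime * p + r * q <= 2 ** (N + 1)  # Eq. (7): 124 <= 128
print(k_prime * p, r * q)                  # 64 vs 60, i.e. k'p ~= rq as in Eq. (10)
```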
The conventional RS coding cannot be directly applied to the above FBMC system; hence a punctured and shortened RS (25, 16) is found suitable. This can be analyzed by substituting the values \(k=16\), \(r=9\), \(p=q=5\), which results in

\[k\times p\gg r\times q \tag{11}\]
14.226378pt\hskip 14.226378pt \hskip 14.226378pt\hskip 14. distribution of \(10^{6}\) random binary bits are generated and used as test data. The simulation is carried out on different ITU channel models. The FBMC-OQAM system with 64 sub-channels is tested on all three channel models and baseline performance is verified. The PAPR reduction methods like \(\mu\)-law companding, BCH and CRS coding jointly with \(\mu\)-law companding are tested. The system performance is evaluated by plotting the Bit Error Rate (BER) versus Signal to Noise Ratio (SNR) curve for different schemes. In this paper performance under six different methodologies are compared. ### _Peak to Average Power Ratio_ The block coding methods investigated in this paper are BCH, punctured RS and the proposed CRS schemes. The extent of error correction and its impact on PAPR is also investigated for all these methods. In this paper a 64 sub-channel system is considered. One of the error correction method used in this paper is the BCH (127, 85) coding. The message when encoded in this method produces 127 output bits which can be suffixed with one bit, so that 128 bits can be made available for OQAM modulation. It can be observed from Fig. 3 that the PAPR for random load is 4.2dB for BCH (127, 85) with \(\mu\)-law companding method. The Fig. 4 shows the PAPR at full channel load, under this condition the BCH coding scheme has PAPR of 20.41dB. Thus for a varying channel load the output PAPR widely varies for BCH coding scheme, hence Reed- Solomon coding scheme is explored. The conventional RS (31, \(k\)) coding cannot be adapted directly to the problem as a 64 sub-channel system is to be realized; hence a punctured and shortened RS (25,16) code is implemented in this paper. This coding scheme with \(\mu\)-law companding offers a PAPR value of 4.2dB for random load and 5.99dB at full load, which is a variation of 1.79dB only. The capability of CRS (31, \(k\)) code for reducing PAPR is also investigated. The number of sub-channels utilized in this paper is met by CRS (31, \(k\)) coding scheme with \(k\geq 19\). The CRS (31, 19) code is used in further studies due to its better error correction capability. To investigate the performance of different schemes discussed, the PAPR is also obtained with all channels fully loaded and the values obtained under these conditions are given in Table II. In the Fig. 3, the PAPR obtained by different methods are compared with the values claimed in recent papers [4], [7], and [10]. It is observed that PAPR reduction obtained by IBPTS-ICF [11] method is the only method that has lower value than the methods proposed in this paper. The Improved Bi-Layer Partial Transmit Sequence and Iterative Clipping and Filtering (IBPTS-ICF) scheme presented in [11] is a complicated approach when compared to the proposed CRS coding scheme which reduces the PAPR and also does error correction. The PAPR values obtained at full load condition can be considered as the upper bound of the above system, for random load condition this value may be lower as shown in column two of Table II. The PAPR is observed to decrease when CRS coding and CRS coding with \(\boldsymbol{\mu}\)-law companding are applied. The PAPR obtained under full load and random load conditions are 5.15dB and 4.6dB respectively. Thus the proposed CRS (31, 19) coding scheme has the least variation in PAPR irrespective of dynamic input channel load. 
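As an aside for readers who want to reproduce the trend, the two operations at the heart of this comparison, the PAPR metric itself and \(\mu\)-law companding of the signal envelope, can be stated compactly. The Python sketch below is illustrative only: it uses a plain IFFT multicarrier symbol as a stand-in for the full FBMC-OQAM filter bank, and the value \(\mu=255\), the 4x oversampling factor, and the function names are our assumptions, not parameters from the paper.

```python
import numpy as np

def papr_db(s):
    """PAPR of a complex baseband signal, in dB."""
    power = np.abs(s) ** 2
    return 10 * np.log10(power.max() / power.mean())

def mu_law_compand(s, mu=255.0):
    """mu-law companding of the envelope; the phase is preserved."""
    peak = np.max(np.abs(s))
    mag = np.abs(s) / peak                      # normalize to [0, 1]
    out = np.log1p(mu * mag) / np.log1p(mu)     # compress large peaks
    return peak * out * np.exp(1j * np.angle(s))

# Toy 64-channel multicarrier symbol with random QPSK sub-channel loads.
rng = np.random.default_rng(1)
x = rng.choice([-1.0, 1.0], 64) + 1j * rng.choice([-1.0, 1.0], 64)
s = np.fft.ifft(x, 64 * 4)                      # 4x oversampled waveform
print(papr_db(s), papr_db(mu_law_compand(s)))   # companding lowers PAPR
```

Because the companding function is monotone, it can be inverted at the receiver before decoding; the coding schemes compared above then determine how much of the residual distortion and channel noise can be corrected.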
This narrow range of variation in PAPR prevents the RF power amplifier non-linearity from affecting the signal integrity.

### _Bit Error Rate_

The BER performance of the system for the various ITU channels is shown in Figs. 5 and 6. It can be observed from Figs. 5 and 6 that CRS (31, 19) coding with \(\mu\)-law companding gives lower BER at lower SNR values for the Pedestrian B and Vehicular A channel models. It is evident that the BER performance is the best when CRS (31, 19) coding is applied to the FBMC-OQAM system. Thus, it can be inferred that the proposed CRS (31, 19) code performs better than the conventional punctured RS (25, 16) and BCH (127, 85) codes.

\begin{table}
\begin{tabular}{|l|r|r|r|}
\hline
Methodology & Random load PAPR\({}_{\theta}\) (dB) & Full load PAPR\({}_{\theta}\) (dB) & Range (dB) \\
\hline
FBMC-OQAM & 14.4 & 18.92 & 4.52 \\
\hline
BCH (127,85) & 10.53 & 20.41 & 9.88 \\
\hline
RS (25,16) & 10.24 & 15.97 & 5.73 \\
\hline
CRS (31,19) & 11.74 & 12.76 & 1.02 \\
\hline
\(\mu\)-law companding & 6.78 & 18.92 & 12.14 \\
\hline
BCH + \(\mu\)-law companding & 4.2 & 20.41 & 16.21 \\
\hline
RS (25,16) + \(\mu\)-law companding & 4.2 & 5.99 & 1.79 \\
\hline
CRS (31,19) + \(\mu\)-law companding & 4.6 & 5.15 & 0.55 \\
\hline
\end{tabular}
\end{table}
TABLE II: Typical values of PAPR\({}_{\theta}\) for various channel loads.

Fig. 3: Comparison of PAPR obtained by different techniques for a random bit stream.

Fig. 4: Comparison of PAPR obtained by different techniques at full channel load.

## 5 Conclusion

The two important parameters of a multi-carrier communication system are its BER and PAPR. This paper investigated the joint application of block coding and companding as a means of decreasing BER and PAPR. The results obtained using the various methods were compared, and the BER performance of the CRS scheme is found to be better for the Pedestrian B and Vehicular A channel models. The proposed constrained RS coding scheme discussed in this paper has the following results:

* Forward error correction and PAPR minimization are achieved together using the same constrained RS coding scheme.
* The worst-case PAPR values of different coding schemes are compared for the first time.
* The PAPR of a system is proportional to the amount of information present at the encoder output.
* The proposed method has resulted in a PAPR of 4.6dB for the FBMC-OQAM system, which is lower than the values claimed in recent papers [4], [7], and [10].
* The constrained RS coding method has confined the PAPR to a narrow range of 0.55dB for a wide dynamic input channel load, thus preventing the power amplifier non-linearities from affecting the signal amplification.

Thus the method presented in this paper is suitable for enhancing the performance of multi-carrier systems.
2302.04776
Isolating clusters of zeros of analytic systems using arbitrary-degree inflation
Given a system of analytic functions and an approximation to a cluster of zeros, we wish to construct two regions containing the cluster and no other zeros of the system. The smaller region tightly contains the cluster while the larger region separates it from the other zeros of the system. We achieve this using the method of inflation which, counterintuitively, relates it to another system that is more amenable to our task but whose associated cluster of zeros is larger.
Michael Burr, Kisun Lee, Anton Leykin
2023-02-09T17:10:12Z
http://arxiv.org/abs/2302.04776v1
# Isolating Clusters of Zeros of Analytic Systems Using Arbitrary-Degree Inflation

###### Abstract.

Given a system of analytic functions and an approximation to a cluster of zeros, we wish to construct two regions containing the cluster and no other zeros of the system. The smaller region tightly contains the cluster while the larger region separates it from the other zeros of the system. We achieve this using the method of inflation which, counterintuitively, relates it to another system that is more amenable to our task but whose associated cluster of zeros is larger.

## 1. Introduction

Suppose that \(\mathcal{F}\) is a system of \(m\) _analytic functions_ in \(n\) unknowns, where \(m\geq n\), and \(z^{*}\in\mathbb{C}^{n}\) is a point near several isolated zeros of \(\mathcal{F}\), i.e., \(z^{*}\) approximates a _cluster_ of zeros of \(\mathcal{F}\). The _zero cluster isolation problem_ is to compute two closed regions \(R_{-}\) and \(R_{+}\) and a positive integer \(c\) such that

1. \(z^{*}\in R_{-}\subseteq R_{+}{}^{\circ}\), where \(R_{+}{}^{\circ}\) is the interior of \(R_{+}\), and
2. the number of zeros of \(\mathcal{F}\) is the same in both \(R_{-}\) and \(R_{+}\) and equals \(c\).

In other words, \(R_{-}\) encircles a cluster of \(c\) zeros of \(\mathcal{F}\), and this cluster of zeros is isolated from the other zeros of \(\mathcal{F}\) by \(R_{+}\setminus R_{-}\). We also consider the relaxation where \(c\) is an upper bound on the number of zeros in \(R_{-}\) and \(R_{+}\). We develop the method of _inflation_, which applies in the square system case (\(m=n\)) and gives the exact count \(c\) when it succeeds. When inflation fails, and in the overdetermined case, we provide a method that yields an upper bound on the size of the cluster. At a high level, we have the following steps:

1. From the given system \(\mathcal{F}\), find a _nearby_ system \(\mathcal{G}\) with a singularity at \(z^{*}\),
2. compute the structure of the singularity of \(\mathcal{G}\) at \(z^{*}\), and
3. use the relationship between \(\mathcal{F}\) and \(\mathcal{G}\) to infer the location and count of the zeros of \(\mathcal{F}\) near \(z^{*}\) from the structure of the singularity of \(\mathcal{G}\) at \(z^{*}\).

The word nearby should only be used in a colloquial and motivational sense, since we do not provide a metric for identifying nearness. We consider both numerical and symbolic perturbations of \(\mathcal{F}\) to generate \(\mathcal{G}\), but we require the final computation to be _certified_. In other words, as part of their computations, our algorithms not only generate both the integer \(c\) and the regions \(R_{-}\) and \(R_{+}\), but they also provide a proof of correctness, showing that \(R_{-}\), \(R_{+}\), and \(c\) have the required properties.

Since all of our constructions and computations pertain to a small neighborhood of one point and tolerate small perturbations of functions in that neighborhood, one may replace analytic functions with _polynomials_ as long as there is an effective way to estimate the difference with the original functions. Hence, we focus on the polynomial case throughout the remainder of the paper.

### Motivation and contribution

Many numeric and symbolic algorithms struggle with computing or approximating zeros of zero-dimensional systems of polynomials that are either singular or clustered.
For some algorithms, however, providing information about the clusters, such as their sizes, locations, and distances from the other zeros, can be used to restore the efficiency of these algorithms [7, 10]. In addition, data about these clusters can also be used to derive more precise estimates on the complexity of algorithms, see, for example, [14, 3, 4, 1].

Our main contribution is in generalizing the technique dubbed _inflation_, introduced by the first and third authors in [6]. Counterintuitively, the inflation procedure transforms a square system with a multiple zero into a square system with the same multiple zero but of _higher_ multiplicity. In [6], a notion of a _regular zero of order \(d\)_ is defined. In this paper we define a _regular zero of order \(d\) and breadth \(\kappa\)_, where:

* a regular zero of order \(d\) in the sense of [6] corresponds to a regular zero of order \(d\) and breadth \(n\), and
* a regular zero (in the usual sense) is a regular zero of order \(1\).

In this new terminology, the original inflation procedure of [6] attempts to create a regular zero of order \(2\) and breadth \(n\) from a regular zero of order \(2\) and arbitrary breadth. Here we develop _inflation of arbitrary order_, a routine to create a regular zero of order \(d\) and breadth \(n\) from a regular zero of order \(d\) and arbitrary breadth. This turns out to be much more subtle and intricate than the approach in [6]. In addition, for systems where inflation cannot be applied directly, we develop new methods to isolate the cluster and provide upper bounds on the size of the cluster.

The shape of the isolating regions is dictated by the type of the singularity the input system is close to. Although these regions may in turn be easily bounded by Euclidean balls, this would be an unnecessary relaxation: the region \(R_{-}\) that we construct (see, for instance, Figure 1) is natural and encapsulates the cluster much more closely than the ball in which it may be inscribed.

The _symbolic_ procedure of inflation is carried out for \(\mathcal{G}\) in view of _numerical_ applications. Namely, the transformations that we use are applied to a nearby polynomial system \(\mathcal{F}\) with a cluster of zeros. At the end, the effect of the transformations on the difference between \(\mathcal{F}\) and \(\mathcal{G}\) must be small enough to apply the multivariate version of _Rouche's theorem_ [5, Theorem 2.12]. We note that our certification step is similar to the certification in [2], but the goals of the papers are different, and the use of inflation to regularize the system is one of the novel contributions of the current paper.

We note that the paradigm in which we operate does not distinguish between scenarios where there is only one singular zero and scenarios where several simple or singular zeros are tightly clustered. We aim to produce the isolating regions as described in the introduction. We point out that our procedures to construct a nearby system with a singular zero do not work universally. Producing a nearby singular system in a more general setting is the focus of [12], for instance. We also assume that an approximation \(z^{*}\) is given to us. There is more focused work on algorithms to approximate a cluster in the case of embedding dimension one [9] or to restore convergence of Newton's method around a singular solution via deflation [11], for example. Isolating clusters in cases not covered by our technique and finding new algorithms to approximate clusters are worth future exploration.
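For orientation, here is a small one-variable illustration of the two-region format; this is our own example, not one from the paper. Take \(\mathcal{F}=\{z^{2}-\delta^{2}\}\) with small \(\delta\neq 0\), so that \(z^{*}=0\) approximates the cluster \(\{\pm\delta\}\), and compare with the nearby singular system \(\mathcal{G}=\{z^{2}\}\). On a circle \(|z|=\varepsilon\),

\[\big|(z^{2}-\delta^{2})-z^{2}\big|=|\delta|^{2}<\varepsilon^{2}=|z^{2}|\qquad\text{whenever }\varepsilon>|\delta|,\]

so by Rouche's theorem every disk \(|z|\leq\varepsilon\) with \(\varepsilon>|\delta|\) contains exactly \(c=2\) zeros of \(\mathcal{F}\). Any two radii \(|\delta|<\varepsilon_{-}<\varepsilon_{+}\), chosen below the distance to any remaining zeros of the system, then give valid regions \(R_{-}\subseteq R_{+}{}^{\circ}\).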
### Outline

In Section 2, we consider a square system with a singular zero and, first, introduce the necessary transformations to put the system in pre-inflatable shape with a regular zero of breadth \(\kappa\) and order \(d\), and then inflate in order to isolate the original singular zero. In Section 3, we demonstrate that the same procedure applied to a nearby system succeeds in isolating a cluster of roots. In Section 4, we consider systems that are hard or impossible to put in inflatable shape and show that, after symbolic manipulation, it is still possible to isolate a cluster, and the size of the cluster can be bounded from above. Section 5 is devoted to proofs of our results.

## 2. Inflation

The first case we consider is a square system which has a singularity at \(z^{*}\). This case is a main step in our general case in Section 3, since there we replace the given system with a nearby singular system. For simplicity, we assume \(z^{*}\) is the origin in many of our computations. Since the point \(z^{*}\) is explicitly given or computed as a rational point, no heavy symbolic techniques are needed to perform this translation.

### Regular breadth-\(\kappa\) systems of order \(d\)

Consider a _graded local order_ \(>\) on \(\mathbb{C}[x_{1},\ldots,x_{n}]\), i.e., the order \(>\) respects multiplication, and if the total degrees of two exponent vectors \(\alpha\) and \(\beta\) satisfy \(|\alpha|>|\beta|\), then \(x^{\alpha}<x^{\beta}\). For a polynomial, we use the phrase _initial term_ to denote the largest nonzero monomial under the order \(>\), and we use _initial form_ to denote the homogeneous polynomial formed from the terms of the polynomial with smallest total degree. We define the _breadth_ \(\kappa\) of the polynomial system to be the nullity of its Jacobian at the zero in question. For an ideal \(I=\langle\mathcal{F}\rangle\subseteq\mathbb{C}[x_{1},\ldots,x_{n}]\), the _standard monomials_ are the monomials that do not appear as initial terms of polynomials in \(I\). For each \(i\), we define the (local) _Hilbert function_ evaluated at \(i\), denoted by \(h_{\mathcal{F}}(i)\), to be the number of monomials of total degree \(i\) appearing as standard monomials. The corresponding (local) _Hilbert series_ is defined to be \(HS_{\mathcal{F}}(t)=\sum_{i\geq 0}h_{\mathcal{F}}(i)t^{i}\). We note that \(h_{\mathcal{F}}(1)=\kappa\).

**Definition 2.1**.: Suppose that \(\mathcal{P}=\{p_{1},\ldots,p_{n}\}\) is a square polynomial system in \(\mathbb{C}[x_{1},\ldots,x_{n}]\) such that the origin is an isolated zero of \(\mathcal{P}\) of breadth \(\kappa\). We say that the origin is a _regular zero of breadth \(\kappa\) and order \(d\)_ if the Hilbert series for \(\langle\mathcal{P}\rangle\) at the origin is \((1+t+\cdots+t^{d-1})^{\kappa}\).

We note that when the origin is a regular zero of breadth \(\kappa\) and order \(d\) of a system \(\mathcal{P}\), the multiplicity of the zero at the origin is \(d^{\kappa}\). Throughout the remainder of this section, we provide Algorithm 1, which converts any square polynomial system into a standardized form, called the pre-inflatable form.

**Definition 2.2**.: Suppose that \(\mathcal{P}=\{p_{1},\ldots,p_{n}\}\) is a square polynomial system in \(\mathbb{C}[x_{1},\ldots,x_{n}]\) such that the origin is an isolated zero of \(\mathcal{P}\). We say that \(\mathcal{P}\) is a _\((\kappa,k,\ell)\)-pre-inflatable system_ if

1.
\(\mathcal{P}\) has breadth \(\kappa\) and the kernel of the Jacobian is \(\langle e_{1},\ldots,e_{\kappa}\rangle\), where \(e_{i}\) denotes the \(i\)-th standard basis vector, 2. the only terms in \(p_{1},\ldots,p_{\kappa}\) involving \(x_{\kappa+1},\ldots,x_{n}\) have degree greater than \(k\), and 3. the only terms in \(p_{\kappa+1},\ldots,p_{n}\) involving only \(x_{1},\ldots,x_{\kappa}\) have degree greater than \(\ell\). In the case where our algorithm is applied to a square system with a regular zero of breadth \(\kappa\) and order \(d\), we prove in Section 5 that the resulting system is particularly well-structured. In particular, when the parameters to the pre-inflatable algorithm are \(k=\ell=d\), the resulting system is described as in the following theorem: **Theorem 2.1**.: Let \(\mathcal{G}\) be a square system in \(n\) variables with a zero at \(z^{*}\). Suppose that \(z^{*}\) is a zero of breadth \(\kappa\) and order \(d\). Then there is a locally invertible transformation that realizes \(z^{*}\) as a regular zero of breadth \(\kappa\) and order \(d\) at the origin of a polynomial system \(\mathcal{P}=\{p_{1},\ldots,p_{n}\}\) which is \((\kappa,d,d)\)-pre-inflatable such that 1. the initial degree of each \(p_{i}\) is equal to \(d\) for \(1\leq i\leq\kappa\), 2. the initial forms of \(p_{1},\ldots,p_{\kappa}\) do not vanish on the unit sphere in \(x_{1},\ldots,x_{\kappa}\), and 3. the initial form of \(p_{i}\) is \(x_{i}\) for \(\kappa+1\leq i\leq n\). We observe that when the second condition holds, the initial forms of \(p_{1},\ldots,p_{\kappa}\) form a regular sequence. Systems of the form described in Theorem 2.1 are ideal for applying inflation. The _inflation operator of order \(d\) and breadth \(\kappa\)_ is defined to be \[S_{\kappa}^{d}(x_{i})=\begin{cases}x_{i}&1\leq i\leq\kappa\\ x_{i}^{d}&\kappa+1\leq i\leq n\end{cases}.\] The inflation operator in [6] is of order \(2\) and breadth \(\kappa\). ### Constructing regular zeros Suppose that a given system \(\mathcal{G}\) has a singular zero at the origin of breadth \(\kappa\). We present a sequence of transformations to construct an equivalent system that is \((\kappa,k,\ell)\)-pre-inflatable for any given \(k,\ell\in\mathbb{N}\), see Algorithm 1. First, since \(\mathcal{G}\) has breadth \(\kappa\), there is a linear transformation \(A:\mathbb{C}^{n}\to\mathbb{C}^{n}\) so that the kernel of the Jacobian of \(\mathcal{A}:=\mathcal{G}\circ A\) is spanned by \(e_{1},\ldots,e_{\kappa}\), where \(e_{i}\) denotes the \(i\)-th standard basis vector. This implies that the linear parts of the polynomials in \(\mathcal{A}\) only involve \(x_{\kappa+1},\ldots,x_{n}\). Second, there is a linear map \(B:\mathbb{C}[x_{1},\ldots,x_{n}]^{n}\to\mathbb{C}[x_{1},\ldots,x_{n}]^{n}\) such that \(\mathcal{B}:=B\circ\mathcal{A}=\{b_{1},\ldots,b_{n}\}\) is a square system of polynomials where \(b_{1},\ldots,b_{\kappa}\) do not have any linear terms while the linear part of \(b_{i}\) is \(x_{i}\) for \(i>\kappa\). The map \(B\) can be chosen to implement row reduction on the linear parts of the polynomials of \(\mathcal{A}\). 
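The first two transformations can be carried out with standard linear algebra. As an illustration, the following SymPy sketch is our own (not the paper's implementation); it hard-codes the running system of Example 2.3 below, computes \(A\) from the kernel of the Jacobian, and then row-reduces the linear parts.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
G = sp.Matrix([2*x1 + x2 + x1**2, 8*x1 + 4*x2 + x2**2])

# Step A: a unitary change of coordinates sending e_1 into ker DG(0).
J = G.jacobian([x1, x2]).subs([(x1, 0), (x2, 0)])
k0 = sp.Matrix([1, -2]) / sp.sqrt(5)      # normalized kernel vector
assert J * k0 == sp.zeros(2, 1)
A = sp.Matrix.hstack(k0, sp.Matrix([2, 1]) / sp.sqrt(5))  # orthonormal
u1, u2 = sp.symbols('u1 u2')              # temporaries, so that the
new = A * sp.Matrix([u1, u2])             # substitution x -> A x is
GA = sp.expand(G.subs([(x1, new[0]), (x2, new[1])])       # simultaneous
                .subs([(u1, x1), (u2, x2)]))
# Linear parts of GA now involve only x2: sqrt(5)*x2 and 4*sqrt(5)*x2.

# Step B: row-reduce the linear parts so that, for i > kappa, the linear
# part of the i-th polynomial is exactly x_i (kappa = 1 in this example).
B = sp.Matrix([[1, sp.Rational(-1, 4)], [0, 1/(4*sp.sqrt(5))]])
print(sp.expand(B * GA))  # [x1*x2 + 3*x2**2/4, x2 + x1**2/(5*sqrt(5)) - ...]
```

The remaining transformations \(C_{k}\) and \(D_{\ell}\), described next, are symbolic eliminations of low-degree terms and can be carried out in the same way.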
Next, for the given \(k\), there is an invertible linear map \(C_{k}:\mathbb{C}[x_{1},\ldots,x_{n}]^{n}\to\mathbb{C}[x_{1},\ldots,x_{n}]^{n}\) such that \(\mathcal{C}_{k}:=C_{k}\circ\mathcal{B}=\{c_{1},\ldots,c_{n}\}\) is a square system of polynomials with the same properties as \(\mathcal{B}\), and, in addition, in \(c_{1},\ldots,c_{\kappa}\), the smallest total degree of a term involving \(x_{\kappa+1},\ldots,x_{n}\) is greater than \(k\). This transformation can be achieved by using the initial terms of \(b_{\kappa+1},\ldots,b_{n}\) to eliminate monomials involving \(x_{\kappa+1},\ldots,x_{n}\) of small degree.

Finally, for the given \(\ell\), there is an invertible change of variables, denoted by \(D_{\ell}\), such that \(\mathcal{P}_{k,\ell}:=\mathcal{C}_{k}\circ D_{\ell}=\{p_{1},\ldots,p_{n}\}\) is a square system of polynomials with the same properties as \(\mathcal{C}_{k}\), and the smallest degree of a term in \(p_{\kappa+1},\ldots,p_{n}\) involving only \(x_{1},\ldots,x_{\kappa}\) is greater than \(\ell\). This change of variables can be achieved by a sequence of transformations of the form \(x_{i}\mapsto x_{i}+q_{i}(x_{1},\ldots,x_{\kappa})\) for some polynomial \(q_{i}\), leaving the remaining variables unchanged.

The key property of this series of transformations is a consequence of Lemma 5.1, which proves that the resulting system is \((\kappa,k,\ell)\)-pre-inflatable. The following example explicitly illustrates this construction:

**Example 2.3**.: [13, Example 4.1] Consider the polynomial system

\[\mathcal{G}=\left\{\begin{matrix}2x_{1}+x_{2}+x_{1}^{2}\\ 8x_{1}+4x_{2}+x_{2}^{2}\end{matrix}\right\}.\]

This system has a zero at the origin and its Jacobian is \(\left(\begin{matrix}2&1\\ 8&4\end{matrix}\right)\), which has a one-dimensional nontrivial kernel spanned by \(\langle 1,-2\rangle\). Therefore, this system is breadth-one. We construct a \((1,3,3)\)-pre-inflatable system from \(\mathcal{G}\). For the linear transform \(A\), we use the matrix \(\frac{1}{\sqrt{5}}\left(\begin{matrix}1&2\\ -2&1\end{matrix}\right)\), which is the unitary matrix mapping the first standard basis vector to a nonzero element of the kernel of the Jacobian and the second standard basis vector to a vector perpendicular to the kernel. The resulting system is

\[\mathcal{A}=\mathcal{G}\circ A=\left\{\begin{matrix}\sqrt{5}x_{2}+\frac{x_{1}^{2}}{5}+\frac{4x_{1}x_{2}}{5}+\frac{4x_{2}^{2}}{5}\\ 4\sqrt{5}x_{2}+\frac{4x_{1}^{2}}{5}-\frac{4x_{1}x_{2}}{5}+\frac{x_{2}^{2}}{5}\end{matrix}\right\}.\]

Next, row reduction on the linear part of this system, expressed via the matrix \(\left(\begin{matrix}1&-\frac{1}{4}\\ 0&\frac{1}{4\sqrt{5}}\end{matrix}\right)\), results in the system

\[\mathcal{B}=B\circ\mathcal{A}=\left\{\begin{matrix}x_{1}x_{2}+\frac{3x_{2}^{2}}{4}\\ x_{2}+\frac{x_{1}^{2}}{5\sqrt{5}}-\frac{x_{1}x_{2}}{5\sqrt{5}}+\frac{x_{2}^{2}}{20\sqrt{5}}\end{matrix}\right\}.\]

Next, the transformation for \(k=3\) is the symbolic transformation that uses the initial term \(x_{2}\) of the second polynomial to eliminate monomials involving \(x_{2}\) in the first equation.
The transformation is given by the matrix

\[C_{3}=\left(\begin{matrix}-5\sqrt{5}&5\sqrt{5}x_{1}+\frac{15\sqrt{5}x_{2}}{4}+\frac{x_{1}^{2}}{4}+\frac{x_{1}x_{2}}{2}-\frac{3x_{2}^{2}}{16}\\ 0&1\end{matrix}\right),\]

which arrives at the system

\[\mathcal{C}_{3}=C_{3}\circ\mathcal{B}=\left\{\begin{matrix}x_{1}^{3}+\frac{x_{1}^{4}}{20\sqrt{5}}+\frac{x_{1}^{3}x_{2}}{20\sqrt{5}}-\frac{x_{1}^{2}x_{2}^{2}}{8\sqrt{5}}+\frac{x_{1}x_{2}^{3}}{16\sqrt{5}}-\frac{3x_{2}^{4}}{320\sqrt{5}}\\ x_{2}+\frac{x_{1}^{2}}{5\sqrt{5}}-\frac{x_{1}x_{2}}{5\sqrt{5}}+\frac{x_{2}^{2}}{20\sqrt{5}}\end{matrix}\right\}.\]

Finally, the change of variables for \(\ell=3\) absorbs the unwanted terms involving \(x_{1}\) into \(x_{2}\) via the transformation where \(x_{2}\) is replaced by \(x_{2}-\frac{x_{1}^{2}}{5\sqrt{5}}-\frac{x_{1}^{3}}{125}\). This results in rather long polynomials, but many of the coefficients are quite small in absolute value. The resulting system starts with

\[\mathcal{P}_{3,3}=\mathcal{C}_{3}\circ D_{3}=\left\{\begin{array}{l}x_{1}^{3}+\frac{x_{1}^{4}}{20\sqrt{5}}+\frac{x_{1}^{3}x_{2}}{20\sqrt{5}}-\frac{x_{1}^{2}x_{2}^{2}}{8\sqrt{5}}+\frac{x_{1}x_{2}^{3}}{16\sqrt{5}}-\frac{3x_{2}^{4}}{320\sqrt{5}}+\ldots\\ x_{2}-\frac{x_{1}x_{2}}{5\sqrt{5}}+\frac{x_{2}^{2}}{20\sqrt{5}}-\frac{x_{1}^{2}x_{2}}{250}+\frac{x_{1}^{4}}{500\sqrt{5}}-\frac{x_{1}^{3}x_{2}}{1250\sqrt{5}}+\ldots\end{array}\right\}\]

Since the transformations \(B\) and \(C_{k}\) are both invertible for any \(k\), we observe that \(\mathcal{A}\), \(\mathcal{B}\), and \(\mathcal{C}_{k}\) all generate the same ideal. The construction of \(\mathcal{P}_{k,\ell}\) from \(\mathcal{G}\), as in Example 2.3, forms a preprocessing step so that the resulting system is pre-inflatable. Unfortunately, not all pre-inflatable systems can be successfully inflated so that their zeros can be isolated: only those where the \((\kappa,d,d)\)-pre-inflatable system has a regular zero of breadth \(\kappa\) and order \(d\) can be inflated with our techniques.

### Applying inflation

Suppose that \(\mathcal{P}\) is a \((\kappa,d,d)\)-pre-inflatable system where the origin is a regular zero of breadth \(\kappa\) and order \(d\). The system \(\mathcal{P}\circ S_{\kappa}^{d}\) is then a square system where the initial forms are all of degree \(d\) and form a regular sequence, see Section 5.2. Therefore, \(\mathcal{P}\circ S_{\kappa}^{d}\) has a zero of multiplicity \(d^{n}\) at the origin. Let \((\mathcal{P}\circ S_{\kappa}^{d})_{d}\) denote the square homogeneous system of degree \(d\) consisting of the initial forms of \(\mathcal{P}\circ S_{\kappa}^{d}\). Since this system does not vanish on the unit sphere, let \(M\) be a positive lower bound on \(\|(\mathcal{P}\circ S_{\kappa}^{d})_{d}\|\) over the (Hermitian) unit sphere. Since all of the terms of \(\mathcal{P}\circ S_{\kappa}^{d}-(\mathcal{P}\circ S_{\kappa}^{d})_{d}\) are of degree greater than \(d\), there is a constant \(C>0\) such that for all \(\|x\|\leq 1\), \(\|\mathcal{P}\circ S_{\kappa}^{d}(x)-(\mathcal{P}\circ S_{\kappa}^{d})_{d}(x)\|\leq C\|x\|^{d+1}.\) Then, for any \(0<\varepsilon<\min(1,M/C)\), Rouche's theorem, applied as in Lemma 5.6, shows that both \((\mathcal{P}\circ S_{\kappa}^{d})_{d}\) and \(\mathcal{P}\circ S_{\kappa}^{d}\) have \(d^{n}\) zeros in the ball of radius \(\varepsilon\).
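The ingredients of this Rouche argument, applying \(S_{\kappa}^{d}\) and bounding the non-initial part, are mechanical. The following SymPy sketch is our own (the function names are ours, and the crude coefficient-sum bound merely stands in for the constant \(C\)):

```python
import sympy as sp

def inflate(system, xs, kappa, d):
    """Inflation operator S_kappa^d: substitute x_i -> x_i**d for i > kappa."""
    sub = [(v, v**d) for v in xs[kappa:]]
    return [sp.expand(f.subs(sub)) for f in system]

def noninitial_coeff_sum(f, xs, d):
    """Sum of |coefficients| over all terms of total degree != d; on the
    unit ball this crudely bounds the non-initial part of f."""
    poly = sp.Poly(f, *xs)
    return sum(abs(c) for m, c in zip(poly.monoms(), poly.coeffs())
               if sum(m) != d)
```

For Example 2.4 below, `inflate` reproduces \(\mathcal{P}_{3,3}\circ S_{1}^{3}\), and the coefficient sums total roughly \(0.252\), below the lower bound \(M=\tfrac{1}{2}\) for \(\|(x_{1}^{3},x_{2}^{3})\|\) on the unit sphere used there.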
While it is straightforward to observe that the origin is a zero of multiplicity \(d^{n}\) for \(\mathcal{P}\circ S_{\kappa}^{d}\), the content of this computation is that there are no additional zeros in the ball of radius \(\varepsilon\). The process established is summarized in Algorithm 2.

**Example 2.4**.: Continuing Example 2.3, the polynomial system \(\mathcal{P}_{3,3}\) is \((1,3,3)\)-pre-inflatable, and the origin is a regular zero of breadth \(1\) and order \(3\). The inflation step replaces \(x_{2}\) with \(x_{2}^{3}\). The resulting system is

\[\mathcal{P}_{3,3}\circ S_{1}^{3}=\left\{\begin{array}{l}x_{1}^{3}+\frac{x_{1}^{4}}{20\sqrt{5}}-\frac{x_{1}^{5}}{5000}-\frac{x_{1}^{6}}{5000\sqrt{5}}+\frac{x_{1}^{3}x_{2}^{3}}{20\sqrt{5}}-\frac{x_{1}^{7}}{10000}+\cdots\\ x_{2}^{3}+\frac{x_{1}^{4}}{500\sqrt{5}}-\frac{x_{1}x_{2}^{3}}{5\sqrt{5}}+\frac{x_{1}^{5}}{31250}-\frac{x_{1}^{2}x_{2}^{3}}{250}+\frac{x_{1}^{6}}{312500\sqrt{5}}+\ldots\end{array}\right\}.\]

In this example, \((\mathcal{P}_{3,3}\circ S_{1}^{3})_{3}=\{x_{1}^{3},x_{2}^{3}\}\), and \(\|(\mathcal{P}_{3,3}\circ S_{1}^{3})_{3}(x)\|\geq\frac{1}{2}\) for \(x\) of norm \(1\). We observe that the sum of the absolute values of the noninitial coefficients of \(\mathcal{P}_{3,3}\circ S_{1}^{3}\) is \(\frac{39599350003}{78125000000\sqrt{5}}+\frac{98831503}{3906250000}\approx 0.251981\). Since this is less than \(\frac{1}{2}\), for any fixed \(0<\varepsilon\leq 1\) and \(\|x\|=\varepsilon\),

\[\|(\mathcal{P}_{3,3}\circ S_{1}^{3})_{3}(x)\|>\|\mathcal{P}_{3,3}\circ S_{1}^{3}(x)-(\mathcal{P}_{3,3}\circ S_{1}^{3})_{3}(x)\|. \tag{1}\]

Hence, Rouche's theorem applies and \(\mathcal{P}_{3,3}\circ S_{1}^{3}\) has \(3^{2}\) zeros in the ball of radius \(\varepsilon\). Since the ball containing the \(9\) zeros is defined by \(|x_{1}|^{2}+|x_{2}|^{2}\leq\varepsilon^{2}\), we can apply the inverses of the changes of variables \(A\), \(D_{3}\), and \(S_{1}^{3}\) to compute the following region in the original coordinates,

\[\frac{1}{5}\left|x_{1}-2x_{2}\right|^{2}+\frac{1}{5^{1/3}}\left|(2x_{1}+x_{2})+\frac{\left(x_{1}-2x_{2}\right)^{2}}{25}+\frac{\left(x_{1}-2x_{2}\right)^{3}}{625}\right|^{\frac{2}{3}}\leq\varepsilon^{2},\]

containing the triple zero of \(\mathcal{G}\). We observe that the inflation map creates a three-to-one cover of the zeros of \(\mathcal{P}_{3,3}\circ S_{1}^{3}\) over those of \(\mathcal{P}_{3,3}\), which confirms the root count of \(\mathcal{G}\).

**Theorem 2.2**.: Suppose that \(\mathcal{G}\) is a square polynomial system where \(z^{*}\) is a regular zero of breadth \(\kappa\) and order \(d\) of \(\mathcal{G}\). Algorithm 2 produces a region containing \(z^{*}\) and no other zeros of \(\mathcal{G}\). Moreover, the multiplicity of the zero at \(z^{*}\) is \(d^{\kappa}\).

We note that when considering one (exact) singular zero, we produce only the large region \(R_{+}\), since the small region \(R_{-}\) can be taken to be trivial, i.e., \(R_{-}=\{z^{*}\}\).

## 3. Clusters of Zeros

In the intended application of our approach, we do not expect to be given a system that has a multiple zero, as explored in Section 2. Instead, we expect to be given a system that has a cluster of zeros, each with multiplicity one. Suppose that \(\mathcal{F}\) is a square system of polynomials and \(z^{*}\) approximates the center of a cluster of zeros of \(\mathcal{F}\). Our approach is to find a nearby singular system and use that system to inform about the zeros of \(\mathcal{F}\).
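One concrete way to start, following the singular-value-decomposition construction recalled in Section 3.2 below, is to read off the breadth \(\kappa\) from the Jacobian at \(z^{*}\). A minimal NumPy sketch of our own (the tolerance is an assumption, not a value from the paper):

```python
import numpy as np

def breadth_estimate(jac, tol=1e-4):
    """Number of singular values of DF(z*) that are small relative to
    the largest one; this is the breadth kappa of a nearby singular system."""
    s = np.linalg.svd(jac, compute_uv=False)
    return int(np.sum(s < tol * s[0]))

# Jacobian of the system of Example 3.1 at z* = (-0.0001, -0.0001).
z = np.array([-0.0001, -0.0001])
jac = np.array([[2 + 2 * z[0], 1.0],
                [8.0, 4 + 2 * z[1]]])
print(breadth_estimate(jac))  # 1: one nearly-zero singular value
```

Zeroing out the small singular values then yields a candidate for the nearby singular system \(\mathcal{G}\), as described next.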
### Isolating clusters

Suppose that a system \(\mathcal{G}\), whose coefficients are close to those of \(\mathcal{F}\), has a (singular) zero at \(z^{*}\). Suppose also that there exist invertible maps \(T:\mathbb{C}^{n}\to\mathbb{C}^{n}\) and \(U:\mathbb{C}[x_{1},\ldots,x_{n}]^{n}\to\mathbb{C}[x_{1},\ldots,x_{n}]^{n}\) such that the origin is a regular zero of breadth \(\kappa\) and order \(d\) of \(U\circ\mathcal{G}\circ T\). One candidate for \(U\) and \(T\) is presented in Section 2.2. We then apply these maps and inflation to the original system to get the system \(U\circ\mathcal{F}\circ T\circ S^{d}_{\kappa}\). Let \((U\circ\mathcal{F}\circ T\circ S^{d}_{\kappa})_{d}\) denote the homogeneous part of this system of degree \(d\). Similarly, we write \((U\circ\mathcal{F}\circ T\circ S^{d}_{\kappa})_{>d}\) and \((U\circ\mathcal{F}\circ T\circ S^{d}_{\kappa})_{<d}\) for the terms of degree greater than or less than \(d\).

Let \(M\) be a positive lower bound on \(\|(U\circ\mathcal{F}\circ T\circ S^{d}_{\kappa})_{d}\|\) over the (Hermitian) unit sphere. Since all the terms of \((U\circ\mathcal{F}\circ T\circ S^{d}_{\kappa})_{>d}\) are of degree greater than \(d\), there is a constant \(M_{1}>0\) such that for all \(\|x\|\leq 1\), \(\|(U\circ\mathcal{F}\circ T\circ S^{d}_{\kappa})_{>d}\|\leq M_{1}\|x\|^{d+1}\). Similarly, since all terms of \((U\circ\mathcal{F}\circ T\circ S^{d}_{\kappa})_{<d}\) have degree less than \(d\), there is a constant \(M_{2}>0\) such that for all \(\|x\|\leq 1\), \(\|(U\circ\mathcal{F}\circ T\circ S^{d}_{\kappa})_{<d}\|<M_{2}\). If \(\left(\frac{2M_{2}}{M}\right)^{1/d}<\frac{M}{2M_{1}}\), then for any \(\varepsilon\) between \(\varepsilon_{-}=\left(\frac{2M_{2}}{M}\right)^{1/d}\) and \(\varepsilon_{+}=\frac{M}{2M_{1}}\), \(\|(U\circ\mathcal{F}\circ T\circ S^{d}_{\kappa})_{d}(x)\|\) dominates the other parts of \(U\circ\mathcal{F}\circ T\circ S^{d}_{\kappa}\) and, by Rouche's theorem, see Lemma 5.6, \(U\circ\mathcal{F}\circ T\circ S^{d}_{\kappa}\) and \((U\circ\mathcal{F}\circ T\circ S^{d}_{\kappa})_{d}\) have the same number of zeros in the ball of radius \(\varepsilon\). The smaller region \(R_{-}\) corresponds to the lower bound \(\varepsilon_{-}\), and the larger region \(R_{+}\) corresponds to the upper bound \(\varepsilon_{+}\) on \(\varepsilon\).

**Example 3.1**.: Consider the polynomial system

\[\mathcal{F}=\left\{\begin{matrix}2x_{1}+x_{2}+x_{1}^{2}+0.001\\ 8x_{1}+4x_{2}+x_{2}^{2}+0.001\end{matrix}\right\}\]

with approximate zero \(z^{*}=(-0.0001,-0.0001)\). This system is a perturbation of our running example from Example 2.3. The three zeros in the cluster are approximately \((-0.043-0.082i,0.091+0.158i)\), \((-0.043+0.082i,0.091-0.158i)\), and \((0.086,-0.181)\), and \(z^{*}\) approximates their average. First, we shift the system \(\mathcal{F}\) so that \(z^{*}\) is at the origin. The resulting system is very close to the system \(\mathcal{G}\) from Example 2.3. After applying the same transformations from Examples 2.3 and 2.4, the resulting system is (after rounding)

\[\left\{\begin{matrix}-0.0084+0.0013x_{1}+0.000078x_{1}^{2}+x_{1}^{3}+0.0016x_{2}^{3}+\ldots\\ -0.000022+0.00002x_{1}+0.00000089x_{1}^{2}+0.00000008x_{1}^{3}+x_{2}^{3}+\ldots\end{matrix}\right\}.\]

In this case, the cubic part of the system is bounded from below on the unit sphere by \(0.4984\). On the other hand, the sum of the absolute values of the coefficients of degree less than \(3\) is less than \(0.009757\), which can be used for \(M_{2}\).
Finally, the sum of the absolute values of the coefficients of degree greater than \(3\) is less than \(0.2746\), which can be used for \(M_{1}\). Therefore, we may choose \(\varepsilon_{+}=0.9075\) and \(\varepsilon_{-}=0.3396\), as illustrated in Figure 1 in the original domain. The only other zero of the system is approximately equal to \((4,8)\) and is far away from all depicted isolating regions. We also note that the regions are not convex and the boundaries of the regions are only piecewise smooth.

Figure 1. Contours of isolating regions for a cluster of zeros of \(\mathcal{F}\). The inner- and outer-most contours are the boundaries of \(R_{-}\) and \(R_{+}\), the smallest and largest isolating regions our method produces. The (red) point in the second quadrant depicts the real part of two conjugate nonreal zeros.

**Remark 3.2**.: One approach to compute \(M\) is based on sum-of-squares computations as in [6]. For the bounds \(M_{1}\) and \(M_{2}\), one way to get these bounds is to sum the absolute values of the coefficients appearing in the appropriate systems.

**Theorem 3.1**.: Suppose that \(\mathcal{F}\) is a square polynomial system where \(z^{*}\) approximates a cluster of zeros. If Algorithm 3 succeeds, then it produces a pair of regions \(R_{-}\) and \(R_{+}\) containing \(z^{*}\) and the cluster of zeros such that \(R_{-}\subseteq{R_{+}}^{\circ}\). Moreover, the number of zeros in the cluster is \(d^{\kappa}\).

### Constructing a singular system

In order to complete the steps outlined in Section 3.1, we need to be able to construct an appropriate singular system \(\mathcal{G}\). One way to construct such a system is outlined in [6, Section 2.1] via the singular value decomposition of the Jacobian \(D\mathcal{F}(z^{*})\). This construction also provides \(\kappa\) as a count of the number of small singular values of the Jacobian. For any \(d>0\), we may apply Algorithm 1 to \(\mathcal{G}\) to construct a \((\kappa,d,d)\)-pre-inflatable system. It is unlikely that the resulting system has the origin as a regular zero of breadth \(\kappa\) and order \(d\). Even though the polynomial system resulting from Algorithm 1 might not be amenable to inflation itself, the constructed transformations, when applied to \(\mathcal{F}\) as in Section 3.1, may succeed in isolating the cluster of \(\mathcal{F}\). We have experimental evidence that applying these transformations will be successful when the extra terms in \(p_{1},\ldots,p_{\kappa}\) have small coefficients.

**Remark 3.3**.: The value of \(d\) in the construction of the pre-inflatable system may either be given or guessed through the computations. In particular, we may apply Algorithm 2 with many different values of \(d\) until the degree-\(d\) homogeneous part of the resulting system has enough terms with coefficients larger than some tolerance so as not to vanish on the unit sphere.

## 4. Irregular systems

Even when the approach in Section 2 fails for singular systems, we present ways to isolate the cluster and estimate its size. Three instances where the approach in Section 2 may fail are when the origin is not a regular zero of breadth \(\kappa\) and order \(d\), when the initial forms of the \((\kappa,d,d)\)-pre-inflatable system vanish on the unit sphere, and when the initial system is not square.

### Uneven inflation

The structure of the polynomial systems in Theorem 2.1 is designed so that we know the structure of the system after inflation, see Section 5.2.
In particular, several of the steps in the construction of a pre-inflatable system are designed to control which terms appear in the initial form of the system after inflation. When the initial forms of a polynomial system do not vanish on the unit sphere but do not all have the appropriate degrees, we may apply an inflation operator that changes the degree of each variable individually. To illustrate this, consider the following motivating example:

**Example 4.1**.: Consider the following family of polynomial systems, where \(a\) is a parameter:

\[\mathcal{G}=\left\{\begin{matrix}x_{1}\\ x_{2}^{2}+ax_{3}^{2}+x_{3}^{4}\\ x_{3}^{3}\end{matrix}\right\}.\]

An initial attempt might be to inflate by replacing \(x_{1}\), \(x_{2}\), and \(x_{3}\) by \(x_{1}^{6}\), \(x_{2}^{3}\), and \(x_{3}^{2}\), respectively. Unfortunately, after this inflation step, the resulting system is

\[\left\{\begin{matrix}x_{1}^{6}\\ ax_{3}^{4}+x_{2}^{6}+x_{3}^{8}\\ x_{3}^{6}\end{matrix}\right\}.\]

We cannot apply our approaches unless \(a=0\), in which case the initial forms are all of degree \(6\) and do not vanish on the unit sphere. When \(a=0\), the inflated system has a zero of multiplicity \(6^{3}=216\) at the origin, and Rouche's theorem can be applied to isolate these zeros. Moreover, since the inflation map is \(36\)-to-one, this region isolates the \(6\) solutions of the original system. When \(a\neq 0\), it is impossible to choose an inflation map so that the initial forms are all of the same degree. In this case, the inflation approach fails and we must consider alternate methods.

For a general singular system \(\mathcal{G}\) where the origin is not a regular zero, suppose that it is possible to replace each variable by a power so that all the initial terms of the resulting system have the same degree. In this case, Rouche's theorem, see Lemma 5.6, can be applied to isolate the cluster. For this approach to succeed, it is usually important that the initial forms of the system \(\mathcal{G}\) have some structure and that problem-specific higher degree terms have a zero coefficient.

### Upper bounds

One may attempt a symbolic transformation that leads to a system where Rouche's theorem applies, see Lemma 5.6. Given a singular system \(\mathcal{G}\) of \(m\) functions in \(n\) unknowns with \(m\geq n\) and an isolated zero at the origin, there is an \(n\times m\) matrix \(T\) such that the initial forms of the polynomials in \(T\mathcal{G}\) do not vanish on the unit sphere. Even in the case \(m=n\), it is not always possible to find a suitable \(T\) that is invertible in a neighborhood of the singularity at the origin. Therefore, the multiplicity of the origin as a zero of \(T\mathcal{G}\) is, in general, only an upper bound on the multiplicity of \(\mathcal{G}\). One popular "rewriting" method is to derive \(T\) from a local Grobner basis computation. In particular, we choose \(n\) elements whose initial terms are pure powers from the Grobner basis. This process also applies to the overdetermined case because we are choosing only \(n\) elements from the Grobner basis regardless of the number of equations in \(\mathcal{F}\). We illustrate this method in the following example:

**Example 4.2**.: Consider the following singular polynomial system

\[\mathcal{G}=\begin{cases}x_{1}x_{2}-x_{3}^{3}\\ x_{2}x_{3}-x_{1}^{3}\\ x_{1}x_{3}-x_{2}^{3}\end{cases}.\]

The initial forms simultaneously vanish at points of the unit sphere in the coordinate directions.
From a local Grobner basis calculation, there are three elements in the basis whose initial terms are pure powers:

\[\left\{\begin{array}{c}x_{2}^{4}-x_{3}^{4}\\ x_{1}^{4}-x_{2}^{4}\\ x_{3}^{5}-x_{1}^{3}x_{2}^{3}\end{array}\right\}.\]

Therefore, we can find a system of polynomials in the ideal generated by \(\mathcal{G}\) that have the same degree for their initial forms. In particular, we have the elements

\[\mathcal{P}=\begin{cases}x_{2}^{5}-x_{2}x_{3}^{4}\\ x_{1}^{5}-x_{1}x_{2}^{4}\\ x_{3}^{5}-x_{1}^{3}x_{2}^{3}\end{cases}.\]

This system can be obtained by multiplying the equations of \(\mathcal{G}\) by the following matrix, derived from a local Grobner basis calculation:

\[T=\begin{pmatrix}x_{2}x_{3}&0&-x_{2}^{2}\\ 0&-x_{1}^{2}&x_{1}x_{2}\\ -x_{3}^{2}&x_{2}^{3}&x_{2}x_{3}\end{pmatrix}.\]

Since the initial forms do not vanish on the unit sphere, we can find a lower bound \(M\) for \(\|\mathcal{P}_{5}\|\) on the unit sphere. Then, by following the approach of Section 2, we find a region \(R_{+}\) that isolates the singularity at the origin. In this case, the singularity has multiplicity at most \(4\cdot 4\cdot 5=80\), while the true multiplicity is \(11\).

In the cluster case, i.e., when \(\mathcal{F}\) is given with \(z^{*}\) approximating a cluster of zeros, a suitable \(T\) can be found by executing the steps of a Grobner basis computation while dropping terms with small coefficients to construct \(\mathcal{G}\). As long as \(T\mathcal{F}-\mathcal{G}\) is sufficiently small, Rouche's theorem can be used. Since the multiplicity of \(z^{*}\) may increase when \(\mathcal{F}\) is replaced by \(\mathcal{G}\), this increase also applies to the size of the corresponding cluster of \(T\mathcal{F}\). Thus, this process may not provide the exact size of the cluster, but an upper bound on it.

## 5. Proofs

We provide proofs for several of the stated facts in the paper.

### Pre-inflatable form

We prove that the procedure described in Section 2.2 and Algorithm 1 produces a \((\kappa,k,\ell)\)-pre-inflatable system for any square polynomial system of breadth \(\kappa\).

**Lemma 5.1**.: Let \(\mathcal{G}\) be a square polynomial system with a singular zero at \(z^{*}\) of breadth \(\kappa\). The result of Algorithm 1 with parameters \(k\) and \(\ell\) on \(\mathcal{G}\) is a \((\kappa,k,\ell)\)-pre-inflatable system of polynomials with a zero at the origin whose multiplicity is the same as the multiplicity of \(z^{*}\) for \(\mathcal{G}\).

Proof.: We first show that the multiplicities of the origin and \(z^{*}\) are the same for the input and output systems. The first step of the algorithm is an invertible affine transformation on the domain, and such transformations do not change the multiplicity of a zero. The second and third steps replace the system with a new system that generates the same ideal, hence the multiplicity does not change. Finally, the last step uses transformations of the form \(x_{i}\mapsto x_{i}+q_{i}(x_{1},\dots,x_{\kappa})\), which preserve leading forms of all polynomials in the ideal, and hence preserve the Hilbert series and multiplicity.

Now, we prove that the final system is \((\kappa,k,\ell)\)-pre-inflatable. By the discussion above, the breadth of the system does not change under the steps of Algorithm 1; therefore, the final system has the correct breadth. In addition, the affine transformation in the first step rotates the domain so that the resulting Jacobian has the correct kernel.
The second and third steps do not change the kernel of the Jacobian, and the last step maintains the initial terms, so the Jacobian is also preserved. The first and second steps prepare the initial terms of the last \(n-\kappa\) polynomials via standard linear algebra, and these initial terms are not changed in the last two steps. Finally, the third and fourth steps can be broken down into a sequence of cancellation steps, each of which removes a term of low degree and replaces it with terms of higher degree. By induction, all of the terms to be eliminated have coefficient zero. Therefore, the resulting system is in \((\kappa,k,\ell)\)-pre-inflatable form.

This construction explicitly leads to the following corollary, which proves one of the conditions in Theorem 2.1.

**Corollary 5.2**.: Let \(\mathcal{G}\) be a square system in \(n\) variables with a zero at \(z^{*}\). Suppose that \(z^{*}\) is a zero of breadth \(\kappa\) and order \(d\). Algorithm 1 with parameters \(k=\ell=d\) applied to this system results in a \((\kappa,d,d)\)-pre-inflatable system such that the initial form of \(p_{i}\) is \(x_{i}\) for \(\kappa+1\leq i\leq n\).

### Regular zero form

The following proof is an algorithmic proof of the remaining two conditions in Theorem 2.1. It provides a construction of an analytic change of variables that transforms a system with a regular zero of breadth \(\kappa\) and order \(d\) into one of the desired form. Before beginning the proof, we introduce some notation and a fact about Hilbert series. For series \(A(t)=\sum_{i\geq 0}a(i)t^{i}\) and \(B(t)=\sum_{i\geq 0}b(i)t^{i}\), we let \(A(t)\geq B(t)\) if \(a(i)\geq b(i)\) for all \(i\). In addition, we consider the following lemma for the proof:

**Lemma 5.3**.: _[_8_, Lemma 1]_ _Consider \(\mathcal{F}=\{f_{1},\ldots,f_{n}\}\) and \(\mathcal{G}=\{g_{1},\ldots,g_{n}\}\) where \(f_{1},\ldots,f_{n},g_{1},\ldots,g_{n}\) are homogeneous and \(g_{1},\ldots,g_{n}\) are generic. If \(\deg g_{i}=\deg f_{i}\), then \(HS_{\mathcal{G}}(t)\leq HS_{\mathcal{F}}(t)\)._

Moreover, generic homogeneous forms of degree \(d\) form a regular sequence, and the Hilbert series of a regular sequence is \(\frac{(1-t^{d})^{n}}{(1-t)^{n}}\).

**Lemma 5.4**.: Let \(\mathcal{G}\) be a square system in \(n\) variables with a zero at \(z^{*}\). Suppose that \(z^{*}\) is a zero of breadth \(\kappa\) and order \(d\). Algorithm 1 with parameters \(k=\ell=d\) applied to this system results in a \((\kappa,d,d)\)-pre-inflatable system such that the initial degree of each \(p_{i}\) is equal to \(d\) for \(1\leq i\leq\kappa\).

Proof.: In order to prove this lemma, we begin with the transformation from Section 2.2 that transforms the original system \(\mathcal{G}\) into the system \(\mathcal{P}_{d,d}=\{p_{1},\ldots,p_{n}\}\), which is a \((\kappa,d,d)\)-pre-inflatable system. As mentioned in the proof of Lemma 5.1, the Hilbert series is unchanged under this transformation. From the definition of a pre-inflatable system, the only monomials of degree at most \(d\) which appear in \(p_{1},\ldots,p_{\kappa}\) involve only the variables \(x_{1},\ldots,x_{\kappa}\). On the other hand, since the initial term of \(p_{i}\) for \(i>\kappa\) is \(x_{i}\), it follows that no monomial involving any of \(x_{\kappa+1},\ldots,x_{n}\) can appear as a standard monomial. In addition, the coefficients of \(1,\ldots,t^{d-1}\) in the local Hilbert series count the standard monomials in the \(\kappa\) variables \(x_{1},\ldots,x_{\kappa}\).
This implies that \(p_{1},\ldots,p_{\kappa}\) cannot have any monomials of degree less than \(d\). We now prove that the initial degree of \(p_{1},\ldots,p_{\kappa}\) must be \(d\). Suppose that \(p\in\langle p_{1},\ldots,p_{n}\rangle\) is such that the initial term of \(p\) is of degree \(d\) and involves only \(x_{1},\ldots,x_{\kappa}\). Briefly, we write \(p=\sum q_{i}p_{i}\). Since, by construction, the monomials in \(p_{\kappa+1},\ldots,p_{n}\) involving only \(x_{1},\ldots,x_{\kappa}\) must have degree larger than \(d\), it follows that the initial term of \(p\) does not appear in any \(q_{i}p_{i}\) for \(i>\kappa\). On the other hand, since the initial degree of \(p_{1},\ldots,p_{\kappa}\) is at least \(d\), it must be that the initial term of \(p\) is an initial term of an element of \(\langle(p_{1})_{d},\ldots,(p_{\kappa})_{d}\rangle\), where \((p_{i})_{d}\) denotes the homogeneous part of \(p_{i}\) of degree \(d\). This observation implies that the standard monomials of \(\langle\mathcal{P}_{d,d}\rangle\) of degree \(d\) are the same as the standard monomials of degree \(d\) that only involve \(x_{1},\ldots,x_{\kappa}\) of \(\langle(p_{1})_{d},\ldots,(p_{\kappa})_{d}\rangle\).

Suppose that \((p_{i})_{d}=0\) for \(\ell\) values of \(i\), where \(1\leq i\leq\kappa\). Then, by Lemma 5.3, the coefficient of \(t^{d}\) in the Hilbert series of \(\langle(p_{1})_{d},\ldots,(p_{\kappa})_{d}\rangle\) is at least the coefficient of \(t^{d}\) in \(\frac{(1-t^{d})^{\kappa-\ell}}{(1-t)^{\kappa}}\). If \(\ell>0\), then this coefficient is (strictly) greater than the corresponding coefficient in \((1+t+\cdots+t^{d-1})^{\kappa}\), a contradiction. Hence, \(\ell=0\) and the initial degree of each of \(p_{1},\ldots,p_{\kappa}\) must be \(d\).

**Lemma 5.5**.: Let \(\mathcal{G}\) be a square system in \(n\) variables with a zero at \(z^{*}\). Suppose that \(z^{*}\) is a zero of breadth \(\kappa\) and order \(d\). Algorithm 1 with parameters \(k=\ell=d\) applied to this system results in a \((\kappa,d,d)\)-pre-inflatable system such that the initial forms of \(p_{1},\ldots,p_{\kappa}\) do not vanish on the unit sphere in \(x_{1},\ldots,x_{\kappa}\).

Proof.: Suppose that \((p_{1})_{d},\ldots,(p_{\kappa})_{d}\) do not form a regular sequence. Let \(r\) be the smallest degree where there exist \(1<j\leq\kappa\) and homogeneous polynomials \(m_{1},\ldots,m_{j}\) of degree \(r-d\) such that \(\sum_{i=1}^{j}m_{i}(p_{i})_{d}=0\) and \(m_{j}\not\in\langle(p_{1})_{d},\ldots,(p_{j-1})_{d}\rangle\). For all degrees \(k\) less than \(r\) and \(1\leq\ell\leq\kappa\), the multiplication map

\[(k[x_{1},\ldots,x_{n}]/\langle(p_{1})_{d},\ldots,(p_{\ell-1})_{d}\rangle)_{k-d}\xrightarrow{(p_{\ell})_{d}}(k[x_{1},\ldots,x_{n}]/\langle(p_{1})_{d},\ldots,(p_{\ell-1})_{d}\rangle)_{k}\]

is injective. Hence, the coefficient of \(t^{k}\) in the Hilbert series for \(\{(p_{1})_{d},\ldots,(p_{\kappa})_{d}\}\) agrees with the corresponding coefficient for a regular sequence. In degree \(r\), this map is not always injective, so the coefficient of \(t^{r}\) in the Hilbert series for \(\{(p_{1})_{d},\ldots,(p_{\kappa})_{d}\}\) is larger than the coefficient of \(t^{r}\) for a regular sequence. We now show that this also implies that the number of standard monomials of \(\langle\mathcal{P}_{d,d}\rangle\) in degree \(r\) contradicts the assumption on the Hilbert series.
Let \(p\in\langle p_{1},\ldots,p_{\kappa}\rangle\) such that the initial degree of \(p\) is \(r\) and the initial form of \(p\) is not in \(\langle(p_{1})_{d},\ldots,(p_{\kappa})_{d}\rangle\). Since \(p\in\langle p_{1},\ldots,p_{\kappa}\rangle\), \(p=\sum q_{i}p_{i}\) for some polynomials \(q_{i}\). Suppose that the \(q_{i}\)'s have been chosen so that the minimum initial degree of \(q_{i}p_{i}\) is maximized. Let \(m\) be this initial degree. Moreover, assume that the \(q_{i}\)'s have been chosen so that the largest index where the initial degree of \(q_{i}p_{i}\) is \(m\) is minimized. Let this index be \(\ell\). Let \((q_{i})_{m-d}\) denote the degree \(m-d\) homogeneous part of \(q_{i}\). Since the initial degree of \(q_{i}p_{i}\) is at least \(m\), either \((q_{i})_{m-d}=0\) or \((q_{i})_{m-d}\) is the initial form of \(q_{i}\). In addition, \(\sum_{i=1}^{\ell}(q_{i})_{m-d}(p_{i})_{d}=0\) since otherwise, this would be the initial form of \(p\) and would also be in \(\langle(p_{1})_{d},\ldots,(p_{\kappa})_{d}\rangle\). Moreover, the sum is not a sum of \(0\)'s since \((q_{\ell})_{m-d}\neq 0\). Therefore, \(m<r\) and so, by the assumption on \(r\), \((q_{\ell})_{m-d}\in\langle(p_{1})_{d},\ldots,(p_{\ell-1})_{d}\rangle\). Therefore, there exist homogeneous polynomials \(s_{1},\ldots,s_{\ell-1}\) which are either \(0\) or of degree \(m-2d\) such that \((q_{\ell})_{m-d}=\sum_{i=1}^{\ell-1}s_{i}(p_{i})_{d}\). Then, \[\sum_{i=1}^{\kappa}q_{i}p_{i} =\sum_{i=1}^{\ell-1}q_{i}p_{i}+q_{\ell}p_{\ell}+\sum_{i=\ell+1}^{ \kappa}q_{i}p_{i}\] \[=\sum_{i=1}^{\ell-1}q_{i}p_{i}+((q_{\ell})_{m-d}+(q_{\ell}-(q_{ \ell})_{m-d}))p_{\ell}+\sum_{i=\ell+1}^{\kappa}q_{i}p_{i}\] \[=\sum_{i=1}^{\ell-1}q_{i}p_{i}+\sum_{i=1}^{\ell-1}s_{i}(p_{i})_{d} p_{\ell}+(q_{\ell}-(q_{\ell})_{m-d})p_{\ell}+\sum_{i=\ell+1}^{\kappa}q_{i}p_{i}\] \[=\sum_{i=1}^{\ell-1}q_{i}p_{i}+\sum_{i=1}^{\ell-1}s_{i}(p_{i}-(p_{ i}-(p_{i})_{d}))p_{\ell}+(q_{\ell}-(q_{\ell})_{m-d})p_{\ell}+\sum_{i=\ell+1}^{ \kappa}q_{i}p_{i}\] \[=\sum_{i=1}^{\ell-1}(q_{i}+s_{i}p_{\ell})p_{i}+\left(\sum_{i=1}^{ \ell-1}s_{i}((p_{i})_{d}-p_{i})+(q_{\ell}-(q_{\ell})_{m-d})\right)p_{\ell}+\sum_ {i=\ell+1}^{\kappa}q_{i}p_{i}.\] The initial degree of \((p_{i})_{d}-p_{i}\) is greater than \(d\) and that of \(q_{\ell}-(q_{\ell})_{m-d}\) is greater than \(m-d\) as well. We see that this violates the assumptions on \(m\) and \(\ell\). In other words, either the minimum initial degree of a summand is larger or there are fewer terms that attain the degree \(m\). Hence, \((p_{1})_{d},\ldots,(p_{\kappa})_{d}\) form a regular sequence and only have finitely many common zeros in \(\kappa\)-dimensional affine space. Therefore, they cannot vanish on the unit sphere \(x_{1},\ldots,x_{\kappa}\), as, by homogeneity, this would imply that they vanish on a line. The proof of Lemma 5.5 implies that if the initial forms of \(p_{1},\ldots,p_{\kappa}\) are a regular sequence, then the initial forms of \(\langle P_{d,d}\rangle\) are the same as the forms in \(\langle(p_{1})_{d},\ldots,(p_{\kappa})_{d}\rangle\). Moreover, we can also conclude that if the Hilbert series for \(\langle(p_{1})_{d},\ldots,(p_{\kappa})_{d}\rangle\) is \(\frac{(1-t^{d})^{\kappa}}{(1-t)^{\kappa}}\), then \((p_{1})_{d},\ldots,(p_{\kappa})_{d}\) form a regular sequence. ### Application of Rouche's theorem Finally, we prove the consequence of Rouche's theorem that we use to certify our algorithms. 
**Lemma 5.6**.: Let \(\mathcal{P}\) be a square polynomial system and \(\mathcal{Q}\) be a square homogeneous polynomial system of degree \(d\). Let \(\mathbb{S}_{\varepsilon}\) denote the \(n\)-dimensional (Hermitian) sphere of radius \(\varepsilon\). Suppose that

1. there is a positive constant \(M\) such that \(\min\{\|Q(x)\|:x\in\mathbb{S}_{1}\}\geq M\), and
2. there are constants \(M_{1}\) and \(M_{2}\) and a decomposition \(\mathcal{P}=\mathcal{P}_{1}+\mathcal{P}_{2}+\mathcal{Q}\) such that for all \(\varepsilon\leq 1\),
   1. \(\max\{\|\mathcal{P}_{1}(x)\|:x\in\mathbb{S}_{\varepsilon}\}\leq M_{1}\varepsilon^{d+1}\), and
   2. \(\max\{\|\mathcal{P}_{2}(x)\|:\|x\|\leq 1\}\leq M_{2}\).

If \(\left(\frac{2M_{2}}{M}\right)^{1/d}<\frac{M}{2M_{1}}\), then for any \(\varepsilon\in\left[\left(\frac{2M_{2}}{M}\right)^{1/d},\frac{M}{2M_{1}}\right]\), \(\mathcal{P}\) has \(d^{n}\) zeros in the ball of radius \(\varepsilon\).

Proof.: The first condition implies that \(Q\) has no zeros on the unit sphere, so all of its \(d^{n}\) zeros are at the origin. For \(x\in\mathbb{S}_{\varepsilon}\) with \(\varepsilon\) in the given range,

\[\|\mathcal{P}(x)-\mathcal{Q}(x)\|\leq\|\mathcal{P}_{1}(x)\|+\|\mathcal{P}_{2}(x)\|\leq M_{1}\varepsilon^{d+1}+M_{2}\leq M\varepsilon^{d}\leq\|\mathcal{Q}(x)\|.\]

Then, by the multivariate version of Rouche's theorem [5, Theorem 2.12], both \(\mathcal{P}\) and \(\mathcal{Q}\) have the same number of zeros inside \(\mathbb{S}_{\varepsilon}\).

## Acknowledgments

Burr was supported by National Science Foundation grant DMS-1913119 and Simons Foundation collaboration grant # 964285. Leykin was supported by National Science Foundation grant DMS-2001267.
2306.07950
Electron Localization in Rydberg States
We discuss the possibility of localizing an electron in a highly excited Rydberg state. The second-order correlation of emitted photons is the tool for the determination of electron position. This second-order correlation of emitted radiation and, therefore, the correlation of operators describing the acceleration of the electron allows for a partial localization of the electron in its orbit. The correlation function is found by approximating the transition matrix elements by their values in the classical limit. It is shown that the second-order correlation, depending on two times, is a function of the time difference and is a periodic function of this argument with the period equal to the period of the corresponding classical motion. The function has sharp maxima corresponding to large electron acceleration in the vicinity of the ``perihelion.'' This allows the localization of the electron in its consecutive approach to the perihelion point.
Jan Mostowski, Joanna Pietraszewicz
2023-06-13T17:45:28Z
http://arxiv.org/abs/2306.07950v1
# Electron Localization in Rydberg States

###### Abstract

We discuss the possibility of localizing an electron in a highly excited Rydberg state. The second-order correlation of emitted photons is the tool for the determination of electron position. This second-order correlation of emitted radiation and, therefore, the correlation of operators describing the acceleration of the electron allows for a partial localization of the electron in its orbit. The correlation function is found by approximating the transition matrix elements by their values in the classical limit. It is shown that the second-order correlation, depending on two times, is a function of the time difference and is a periodic function of this argument with the period equal to the period of the corresponding classical motion. The function has sharp maxima corresponding to large electron acceleration in the vicinity of the "perihelion." This allows the localization of the electron in its consecutive approach to the perihelion point.

Keywords: Rydberg state, radiation, second-order correlation, localization

## 1 Introduction

The measurement process, since the early days of quantum physics, has been one of the central issues in many attempts to understand the relation between the classical and quantum description of physical systems [1] (for a more recent analysis, see, e.g., [2]). The most common theory of quantum measurement [3, 4, 5] assumes that the quantum system is coupled to a meter. The interaction between them entangles the two systems. The measurement, described as a projection onto the state of the meter, provides information on the state of the system. Continuous measurements of quantum systems treated as stochastic processes were first considered in [6] in the context of photon counting. The formalism based on path integrals was initiated in [7] and further developed in [4] (see also [8]).

The classical motion of the electron bound in a Coulomb field is periodic. The wavefunction describing the bound electron in a stationary state does not show any time-dependent features. Time dependence, and hence classical features of wavefunctions, can be obtained for non-stationary states, linear combinations of energy eigenstates with different energies. Such a construction is well known in the case of a harmonic oscillator, and the most classical states are the well-known coherent states [9] (see also, e.g., [10]). The corresponding time-dependent states in the case of Rydberg states were introduced in [11] (see also [12, 13, 14, 15]). Another point of view was presented in [16], where it is pointed out that when a measurement breaks the time-translational symmetry of a stationary state, a periodic motion of the system is initiated. This approach was further elaborated in [17, 18].

The classical limit of quantum mechanics is still a vivid subject of investigation (see, e.g., [19]). One of the recently discussed problems in this area relates to the successive measurements of particle position and detection of the trajectory. Most of the interest has been limited to free particles, and not much has been done in the case of bound states.

The quantum description of the hydrogen atom is well known: all energies and wavefunctions of stationary states are available in closed form. The classical limit is approached for large quantum numbers -- there the wavefunction should be related to classical trajectories. This relation has been discussed in many papers.
Both time-dependent states, analogs of harmonic oscillator coherent states, and stationary states in the limit of high excitation were shown to exhibit classical features. In this paper, we will present yet another aspect of the classical limit in the case of Rydberg states. Namely, we will use the radiation emitted from the highly excited state to determine the electron position as a function of time. Detection of radiation at a given time breaks the time-translational symmetry and allows observation of the time dependence of subsequent evolution. This approach provides a partial but straightforward way of estimating the elements of the time-dependent classical trajectory hidden in the stationary wavefunction.

Radiation from a quantum system, such as a hydrogen atom, is usually studied in the frequency domain. The spectrum consists of several lines. Measurement of the spectrum is not the only possibility -- the time dependence of radiation can be studied as well. The time dependence of the spontaneous emission from a highly excited vibrational state of a diatomic molecule was used to determine the time-dependent relative position of the constituents. This made it possible to demonstrate the time dependence of various states, such as coherent states and others, e.g., the Schrödinger cat state [20]. Let us note that in the case of Rydberg states with the principal quantum number \(n\approx 100\), the characteristic frequency of radiation is \(\nu\approx 10^{10}\) Hz, so the time dependence of the radiation for times smaller than \(1/\nu\) is within experimental reach. Radiation observed for such small times of the order of \(1/\nu\) exhibits different features as compared to long-time measurements. This and the relation to the position measurement will be discussed below.

## 2 A simple case -- harmonic oscillator

We will begin the discussion of electromagnetic radiation in the time domain and its relation to the measurement of the electron position with a simple example of a harmonic oscillator. The charged particle oscillates with the frequency \(\omega\) along the \(x\) axis; its motion is given by \(x_{cl}(t)=A\cos(\omega t)\). This particle is a source of electromagnetic radiation. We will find the \(x\) component of the electric field in the far zone along the \(y\) axis (to simplify the geometry). We have, in the dipole approximation,

\[E_{x}({\mathbf{R}},t)=-\frac{e}{4\pi\epsilon_{0}R}\;\omega^{2}x_{cl}(t_{ret}), \tag{1}\]

where \(t_{ret}=t-|{\mathbf{R}}|/{\rm c}\) is the retarded time, and \(c\) is the speed of light. We have suppressed the detailed \(R\) dependence of the field -- it is just like in classical electrodynamics, namely \(E_{x}\sim R^{-1}\). It follows from (1) that the electric field oscillates with the frequency \(\omega\). This classical treatment does not take into account radiation damping, thus it is valid only for a short time, shorter than the characteristic damping time.

We will now discuss the emission of radiation, taking into account the quantum nature of the oscillator. We will concentrate on the highly excited states of the oscillator and hence on the classical limit. The position of an oscillating particle is described by the position operator \(x\). It can be expressed in terms of the lowering and raising operators \(a\) and \(a^{\dagger}\), respectively, as follows

\[x=x_{0}\,\frac{(a+a^{\dagger})}{\sqrt{2}}, \tag{2}\]

where \(x_{0}=\sqrt{\frac{\hbar}{M\omega}}\), \(\hbar\) is the Planck constant, and \(M\) denotes the mass of the oscillating particle.
The component \(E_{x}\) of the electric field operator (the radiated part) in the dipole approximation is given by

\[E_{x}({\mathbf{R}},t)=-\frac{e}{4\pi\epsilon_{0}R}\;\omega^{2}x(t_{ret}), \tag{3}\]

just like in the classical case. This time, however, the electric field is an operator, and we will find the expectation values of this operator. We assume that at time \(t=0\), the oscillator is in the energy eigenstate \(|n\rangle\) with energy \(E_{n}=\hbar\omega n\). Thus the expectation value of the \(x\) operator, and hence of the \(E_{x}({\mathbf{R}},t)\) operator, is equal to zero. The first-order correlation function becomes

\[\left\langle E_{x}({\mathbf{R}},t_{2})E_{x}({\mathbf{R}},t_{1})\right\rangle=\frac{1}{2}\,\frac{e^{2}}{(4\pi\epsilon_{0}R)^{2}}\,\omega^{4}x_{0}^{2}\times\Big{[}n\,{\rm e}^{{\rm i}\omega(t_{2}-t_{1})}+(n+1)\;{\rm e}^{-{\rm i}\omega(t_{2}-t_{1})}\Big{]}. \tag{4}\]

In the case of the highly excited state, i.e., when \(n\gg 1\), we can approximate \(\sqrt{n(n+1)}\approx n\approx\sqrt{n(n-1)}\). Then we get

\[\left\langle E_{x}({\mathbf{R}},t_{2})E_{x}({\mathbf{R}},t_{1})\right\rangle=\frac{e^{2}}{\left(4\pi\epsilon_{0}R\right)^{2}}\,\omega^{4}x_{0}^{2}\,n\cos\big{(}\omega(t_{2}-t_{1})\big{)}, \tag{5}\]

just as in the classical case. The average intensity of radiation given by the first correlation function at \(t_{2}=t_{1}\) is a constant. The first correlation function for \(t_{2}>t_{1}\) gives the spectrum of radiation and, in this case, consists of one line only.

The second-order correlation function is more interesting. For \(n\gg 1\), we get, up to an overall prefactor,

\[\left\langle E_{x}^{2}({\mathbf{R}},t_{2})E_{x}^{2}({\mathbf{R}},t_{1})\right\rangle=n^{2}\Big{[}1+\cos\big{(}2\omega(t_{2}-t_{1})\big{)}\Big{]}. \tag{6}\]

The second correlation function oscillates with the frequency \(2\omega\). This tells us that the maxima of radiation occur every half period of the electron motion. Thus, the second correlation function can be used to determine the position of the oscillating particle in the vicinity of a turning point. The high intensity is due to the large acceleration of the oscillating charge, and this takes place when the electron is close to one of the turning points. Thus, if high intensity has been detected at \(t_{1}\), then the electron will reach another turning point half the period later, and the intensity will be high once more. Thus, the time dependence of the second correlation function provides information about the motion of the electron. The information is not complete, as the radiation does not distinguish between the two turning points. It is worth noting that the correlation function allows the detection of the particle close to the turning point in spite of the dipole approximation.
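As a quick numerical check of Eq. (4), the sketch below (in units with \(\hbar=M=\omega=1\), so \(x_{0}=1\); the truncation size and the state label \(n\) are illustrative) builds the position operator in a truncated Fock basis and compares the two-time correlation with the bracket in Eq. (4).

```python
import numpy as np

# Truncated Fock space; hbar = M = omega = 1, so x0 = 1 (illustrative choices).
N_cut, n, omega = 80, 20, 1.0
a = np.diag(np.sqrt(np.arange(1, N_cut)), k=1)      # lowering operator
x = (a + a.conj().T) / np.sqrt(2.0)                 # x = x0 (a + a^dag)/sqrt(2)

def x_heisenberg(t):
    # x(t) = U(t)^dag x U(t) with U(t) = exp(-i H t), H = omega * a^dag a.
    U = np.diag(np.exp(-1j * omega * np.arange(N_cut) * t))
    return U.conj().T @ x @ U

t1, t2 = 0.3, 1.7
corr = (x_heisenberg(t2) @ x_heisenberg(t1))[n, n]  # <n| x(t2) x(t1) |n>
expected = 0.5 * (n * np.exp(1j * omega * (t2 - t1))
                  + (n + 1) * np.exp(-1j * omega * (t2 - t1)))
print(np.isclose(corr, expected))                   # True (away from the truncation edge)
```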
## 3 Classical radiation from Kepler orbit

Before we discuss radiation from the Rydberg states, we will give a classical description of motion in the Coulomb field [21]. If the motion is in the \(xy\) plane, the coordinates \(x\) and \(y\) as functions of time are given by

\[x(t)=a\,\big{[}\cos\big{(}\xi(t)\big{)}-\epsilon\big{]},\qquad y(t)=a\,\sqrt{1-\epsilon^{2}}\;\sin\big{(}\xi(t)\big{)}, \tag{7}\]

where

\[\omega t+\varphi=\xi(t)-\epsilon\sin\big{(}\xi(t)\big{)}, \tag{8}\]

and \(\varphi\) is an arbitrary phase. The radial variable \(r=\sqrt{x^{2}+y^{2}}\) can also be expressed as a function of time,

\[r=a\big{[}1-\epsilon\,\cos\big{(}\xi(t)\big{)}\big{]}. \tag{9}\]

The parameters \(a\), \(\omega\), and \(\epsilon\) characterize the trajectory. They can be related to energy and angular momentum in the standard way [21]. We will also need more general trajectories that differ by an orientation in the plane of the motion described by the phase \(\chi\) and by the phase of the motion, \(\varphi\). Thus we define

\[X(t)=x(t)\cos(\chi)+y(t)\sin(\chi),\qquad Y(t)=-x(t)\sin(\chi)+y(t)\cos(\chi), \tag{10}\]

with \(\omega t+\varphi=\xi(t)-\epsilon\sin(\xi(t))\).

The classical description of the radiation of a charge moving along such an orbit is completely analogous to the harmonic oscillator case. We will use the dipole approximation since the size of the orbit is much smaller than the characteristic wavelengths of the emitted radiation. The electric field in the far zone is given by

\[\boldsymbol{E}(\boldsymbol{R},t)=\frac{1}{R}\,\boldsymbol{n}\times\big{[}\boldsymbol{n}\times\boldsymbol{a}(t_{ret})\big{]}, \tag{11}\]

where \(\boldsymbol{a}\) is the acceleration, and \(\boldsymbol{n}=\boldsymbol{R}/|\boldsymbol{R}|\). Radiation damping is neglected, as in the previous section. Also, the Fourier decomposition of the trajectory can be found (see [21]). Here we will give the Fourier decomposition of the \(x\) variable,

\[x(t)=\sum_{k}\exp(\mathrm{i}\,k(\omega t+\varphi))\,x_{k}, \tag{12}\]

where

\[x_{k}=\frac{a}{2k}\Big{[}J_{k-1}(k\epsilon)-J_{k+1}(k\epsilon)\Big{]},\quad k\neq 0. \tag{13}\]

A similar formula holds for \(y(t)\). This will be used in the next section.
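Equations (8) and (12)-(13) are easy to verify numerically. In the sketch below (orbit parameters are illustrative; the \(k=0\) Fourier term, which Eq. (13) excludes, is the time average of \(x(t)\) and equals \(-3a\epsilon/2\) for a Kepler orbit), Kepler's equation is solved by Newton iteration and the truncated Bessel-function series is compared with the parametric trajectory of Eq. (7).

```python
import numpy as np
from scipy.special import jv

a_orb, eps, omega = 1.0, 0.8, 1.0      # illustrative orbit parameters

def xi_of(t):
    """Solve Kepler's equation (8), omega*t = xi - eps*sin(xi), by Newton iteration."""
    M = omega * np.asarray(t, dtype=float)
    xi = M + eps * np.sin(M)           # standard starting guess
    for _ in range(50):
        xi -= (xi - eps * np.sin(xi) - M) / (1.0 - eps * np.cos(xi))
    return xi

def x_param(t):
    return a_orb * (np.cos(xi_of(t)) - eps)                     # Eq. (7)

def x_fourier(t, kmax=200):
    # Eq. (13); the k = 0 term (time average, -3*a*eps/2) is added separately.
    k = np.arange(1, kmax + 1)
    xk = a_orb / (2 * k) * (jv(k - 1, k * eps) - jv(k + 1, k * eps))
    phases = np.cos(np.outer(np.atleast_1d(t), k) * omega)
    return -1.5 * a_orb * eps + 2.0 * (phases @ xk)

t = np.linspace(0.0, 4 * np.pi, 7)
print(np.allclose(x_param(t), x_fourier(t), atol=1e-6))         # True
```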
## 4 Classical limit of matrix elements

From now on, we will use atomic units. Consider the quantum description of an atom in a highly excited energy eigenstate. We label the states by standard quantum numbers: \(n\) -- the principal quantum number, \(l\) -- the angular momentum quantum number, and \(m\) -- the magnetic quantum number. The energy \(E_{n}\) of this state depends on the principal quantum number \(n\) as \(E_{n}=-1/(2n^{2})\). We will be interested only in states with \(m=l\); thus, we will skip the magnetic quantum number to avoid confusion. This means that the wavefunctions considered in this paper are well concentrated in the \(xy\) plane, which is perpendicular to the angular momentum. This can be seen from the explicit form of the spherical harmonics function \(|Y_{l,l}(\theta,\varphi)|^{2}\sim\sin^{2l}(\theta)\), which has a sharp maximum at \(\theta=\frac{\pi}{2}\) for large \(l\). We will, therefore, not consider the wavefunction dependence along the \(z\) axis.

The expectation values of the radiated field depend on the matrix elements of the position operator between the quantum states of the atom, i.e., \(\langle\psi_{n,l,l}|\,x\,|\psi_{n^{\prime},l^{\prime},l^{\prime}}\rangle\), where \(x\) is the coordinate. In spherical coordinates, \(x=r\sin(\theta)\cos(\varphi)\), and a similar expression is valid for the \(y\) coordinate, \(y=r\sin(\theta)\sin(\varphi)\). The wavefunctions \(\psi_{n,l,l}(r,\theta,\varphi)=R_{n,l}(r)Y_{l,l}(\theta,\varphi)\) are the standard states of the hydrogen atom, with \(R_{n,l}(r)\) describing the radial part of the wavefunction, and \(Y_{l,l}(\theta,\varphi)\) denotes the spherical harmonics. Because of selection rules, these matrix elements are different from zero only if \(l^{\prime}=l\pm 1\). The radial part of the matrix element of \(r^{k}\) (for any \(k\)), i.e.,

\[\int_{0}^{\infty}\mathrm{d}r\;r^{2+k}R_{n,l}(r)R_{n^{\prime},l^{\prime}}(r), \tag{14}\]

can be found explicitly in terms of special functions [22]. In fact, the classical limit of this expression, valid for \(n\to\infty\), \(l\to\infty\) with \(l/n=\mathrm{const}\), has been found in [23]. In this limit, (14) approaches the Fourier transform of the classical trajectory \(r^{k}_{classical}\) for the frequency \(\omega=(E_{n}-E_{n^{\prime}})/\hbar\). The classical trajectory \(r(t)\) corresponds to the average energy \(E=\frac{1}{2}(E_{n}+E_{n^{\prime}})\) and the eccentricity \(\epsilon=\sqrt{1-(l/n)^{2}}\). Thus, for the matrix element of \(r\), we find for \(l^{\prime}=l\pm 1\) that

\[\langle n^{\prime},l^{\prime},l^{\prime}|r|n,l,l\rangle\approx a_{0}\,\frac{n^{2}}{2(n-n^{\prime})}\times\Big{[}J_{n-n^{\prime}+1}\big{(}(n-n^{\prime})\epsilon\big{)}-J_{n-n^{\prime}-1}\big{(}(n-n^{\prime})\epsilon\big{)}\Big{]}, \tag{15}\]

where \(a_{0}\) denotes the Bohr radius, and \(\epsilon\) corresponds to the eccentricity of the classical orbit with energy and angular momentum equal to the average of the energies of the initial and final state. It should be noted that (15) is analogous to (5) for the harmonic oscillator, where \(\sqrt{n(n+1)}\) is replaced by \(n\) for large quantum numbers \(n\). The transition elements for \(x\) can also be found,

\[\langle n^{\prime},l+1,l+1|x|n,l,l\rangle+\langle n^{\prime},l-1,l-1|x|n,l,l\rangle\approx x_{n-n^{\prime}}, \tag{16}\]

where \(x_{n-n^{\prime}}\) is given by (13). These formulas allow us to describe the radiation from the Rydberg states using classical approximations. The values of the matrix elements can be modeled classically by random trajectories. Consider then the trajectories

\[X(t)=x(t)\cos(\chi)+y(t)\sin(\chi),\qquad Y(t)=-x(t)\sin(\chi)+y(t)\cos(\chi), \tag{17}\]

with \(\omega t+\varphi=\xi-\epsilon\sin(\xi)\). The quantities \(\varphi\) and \(\chi\) are random phases, with uniform distributions between \(0\) and \(2\pi\). In this case, the expectation values of the \(x\) and \(y\) operators are equal to the mean values of the classical quantities \(X\) and \(Y\) with the same values of energy and angular momentum.
## 5 Radiation from a Rydberg state

As before, we use atomic units. The electric field \(\mathbf{E}(\mathbf{R},t)\) in the far field is given by the same formula as in the classical case, with the difference that the acceleration \(\mathbf{a}\) is an operator acting on the quantum state of the system consisting of an electron and the photon vacuum. In the quantum case, the electric field is also an operator. Thus, for the radiated part of the field, we get in the dipole approximation

\[\mathbf{E}(\mathbf{R},t)=\frac{1}{R}\ \mathbf{n}\times\big{[}\mathbf{n}\times\mathbf{a}(t_{ret})\big{]}, \tag{18}\]

where \(\mathbf{n}\) is the unit vector in the direction of the observation point, \(\mathbf{n}=\mathbf{R}/|\mathbf{R}|\). In what follows, we will find the expectation values of the electric field, as well as the first and second correlation functions. It should be noted that the radiation is weak, and therefore the measurement of light intensity in the classical sense is questionable. The expectation value of the electric field squared at a given point should be understood as the photon counting rate. We assume that at \(t=0\), the state describes the photon vacuum and the atom is in the state \(\psi_{n,l,l}\). This requires matrix elements of the operators \(x\) and \(y\) and their second derivatives with respect to time.

The first correlation function of the \(x\) component of the field radiated in the \(y\) direction is given by

\[\left\langle E_{x}(\mathbf{R},t_{2})E_{x}(\mathbf{R},t_{1})\right\rangle=\frac{\left\langle a_{x}(t_{2,ret})\,a_{x}(t_{1,ret})\right\rangle}{\left(4\pi\epsilon_{0}R\right)^{2}}. \tag{19}\]

The expectation value of the product of accelerations will be found in the classical limit. First, we will linearize the energy in the vicinity of the initial state energy with the principal quantum number \(n_{0}\). We get

\[E_{n}\approx-\frac{1}{2n_{0}^{2}}+\frac{n-n_{0}}{n_{0}^{3}}. \tag{20}\]

This allows for the approximation of the expectation values of the acceleration operator \(a(t)\) by the matrix elements of the position operator,

\[\langle n^{\prime},l-1,l-1|a_{x}(t)|n,l,l\rangle\approx-(n-n^{\prime})^{2}\omega_{0}^{2}\,\exp\left(-\mathrm{i}(n-n^{\prime})\omega_{0}t\right)x_{n-n^{\prime}}, \tag{21}\]

with \(\omega_{0}=1/n_{0}^{3}\). Thus, for the two-time correlation function of acceleration in the state \(|n,l,l\rangle\), the following can be found

\[\left\langle a_{x}(t_{2})a_{x}(t_{1})\right\rangle=\sum_{n^{\prime}l^{\prime}}(n-n^{\prime})^{4}\omega_{0}^{4}\,\langle n,l,l|x|n^{\prime}l^{\prime}l^{\prime}\rangle\langle n^{\prime}l^{\prime}l^{\prime}|x|n,l,l\rangle\exp\big{(}\mathrm{i}\omega_{0}(t_{2}-t_{1})(n-n^{\prime})\big{)}. \tag{22}\]

The same can be expressed by the correlation of the classical trajectories,

\[\left\langle a_{x}(t_{2})a_{x}(t_{1})\right\rangle\approx\int\frac{\mathrm{d}\varphi}{2\pi}\int\frac{\mathrm{d}\chi}{2\pi}\frac{\mathrm{d}^{2}X(t_{2})}{\mathrm{d}t_{2}^{2}}\frac{\mathrm{d}^{2}X(t_{1})}{\mathrm{d}t_{1}^{2}}. \tag{23}\]

This is a good approximation for large \(n\) and \(l\). The main point is that the matrix element of the angular part,

\[\int\mathrm{d}\theta\,\sin(\theta)\;\mathrm{d}\varphi\ Y_{l,l}(\theta,\varphi)\sin(\theta)\,\mathrm{e}^{\mathrm{i}\varphi}\ Y_{l-1,l-1}(\theta,\varphi), \tag{24}\]

and hence the matrix element of the position operator \(x\), depends only weakly on \(l\) for large \(l\).

The correlation function obtained above is shown in Fig. 1. From the above considerations, it follows that the average intensity of radiation is proportional to the correlation function at \(t_{1}=t_{2}\) and does not depend on time. The Fourier transform of the correlation function,

\[\int\mathrm{d}t\,\left\langle a_{x}(t)a_{x}(0)\right\rangle\,\exp(\mathrm{i}\,k\omega t), \tag{25}\]

determines the radiation spectrum. Thus the spectrum of radiation from a Rydberg state can be approximated by the spectrum of radiation from the corresponding classical orbit.

Figure 1: The first correlation function of accelerations (23) (normalized to the average square of acceleration) as a function of \(\omega(t_{2}-t_{1})\) for two periods (\(\epsilon=0.8\)).
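Equation (23) lends itself to a simple Monte Carlo estimate over the random phases \(\varphi\) and \(\chi\); the same machinery extends to the fourth-order average used in the next section. In the sketch below (orbit parameters are again illustrative), the acceleration follows from the Kepler orbit via \(\mathbf{a}=-\omega^{2}a^{3}\,\mathbf{r}/r^{3}\).

```python
import numpy as np

rng = np.random.default_rng(0)
a_orb, eps, omega = 1.0, 0.8, 1.0       # illustrative orbit parameters

def accel_x(t, phi, chi):
    # Position on the Kepler orbit (7), rotated by chi and shifted in phase by
    # phi, and the Newtonian acceleration with GM = omega^2 a^3.
    M = omega * t + phi
    xi = M + eps * np.sin(M)
    for _ in range(50):                 # Newton iteration for Kepler's equation (8)
        xi -= (xi - eps * np.sin(xi) - M) / (1.0 - eps * np.cos(xi))
    x = a_orb * (np.cos(xi) - eps)
    y = a_orb * np.sqrt(1 - eps**2) * np.sin(xi)
    X = x * np.cos(chi) + y * np.sin(chi)
    r = a_orb * (1.0 - eps * np.cos(xi))
    return -(omega**2 * a_orb**3 / r**3) * X

# Monte Carlo estimate of <a_x(t2) a_x(t1)> over uniform phases, Eq. (23).
phis = rng.uniform(0, 2 * np.pi, 20000)
chis = rng.uniform(0, 2 * np.pi, 20000)
dts = np.linspace(0, 2 * np.pi / omega, 5)
corr = [np.mean(accel_x(dt, phis, chis) * accel_x(0.0, phis, chis)) for dt in dts]
print(np.round(corr, 3))   # periodic in dt, sharply peaked near perihelion passages
```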
## 6 Second order correlation

In this section, we will discuss the second-order correlation function of the radiation originating from a Rydberg state. This is given by

\[G(t_{2},t_{1})=\Bigl{\langle}E_{x}(t_{1})E_{x}(t_{2})E_{x}(t_{2})E_{x}(t_{1})\Bigr{\rangle}. \tag{26}\]

The state is, as before, the photon vacuum and the Rydberg state of the atom. Expressing the electric field by the acceleration of an electron in the atom, we get

\[G(t_{2},t_{1})=\frac{1}{R^{4}}\exp\left(-2{\rm i}\,n\omega(t_{1}-t_{2})\right)\bigl{\langle}a_{x}(t_{1})a_{x}(t_{2})a_{x}(t_{2})a_{x}(t_{1})\bigr{\rangle}. \tag{27}\]

Just as before, we insert a complete set of states \(|n,l,l\rangle\) between the \(a\) operators and apply the approximation of \(l\) independence of the matrix elements in the case of large \(l\). This leads to the following representation of the correlation function

\[G(t_{2},t_{1})=\frac{1}{R^{4}}\int\frac{{\rm d}\chi}{2\pi}\int\frac{{\rm d}\varphi}{2\pi}\frac{{\rm d}^{2}X(t_{2})}{{\rm d}t_{2}^{2}}\frac{{\rm d}^{2}X(t_{2})}{{\rm d}t_{2}^{2}}\frac{{\rm d}^{2}X(t_{1})}{{\rm d}t_{1}^{2}}\frac{{\rm d}^{2}X(t_{1})}{{\rm d}t_{1}^{2}}. \tag{28}\]

Integration over the angle \(\chi\) can be done explicitly, whereas integration over the angle \(\varphi\) has to be done numerically. This is our final result. It gives the second correlation function of radiation emitted by the atom in a Rydberg state. The formula is approximate and valid for small time differences \(t_{2}-t_{1}\) because it does not take radiation damping into account. It is valid only in the case of Rydberg states with large \(n\) and large \(l\), with the maximal magnetic quantum number \(m=l\). An example of the second-order correlation function is shown in Fig. 2. One can notice a very strong correlation of radiation for small times -- much smaller than the period of motion -- and the periodic behavior of the correlation.

Figure 2: The second-order correlation function of accelerations (27) (normalized to the square of the average square of acceleration) as a function of \(\omega(t_{2}-t_{1})\) for one period (\(\epsilon=0.8\)). Panel (b) shows the same for smaller values of the time difference.

## 7 Conclusions

Electromagnetic radiation from an atom in the Rydberg state can be used to partially localize the electron on the orbit. According to the classical view of radiation, the electron moves along an elliptic orbit and emits radiation most efficiently when the acceleration is large, which happens when the electron is close to the nucleus. In this classical picture, the emitted radiation is time-dependent, and its period reflects the period of motion. The quantum wavefunction \(\psi(r,\theta,\varphi)\) describing the electron state, by contrast, does not indicate the time when the electron is close to the nucleus. The time-averaged intensity, as well as the spectrum of radiation, is constant in time (for a relatively short time; radiation damping is not taken into account). The second correlation function, \(G(t_{2},t_{1})\), depends on the time difference \(t_{2}-t_{1}\) and is a periodic function of time, with the frequency of the classical electron motion.

In the quantum language, the atom is in a highly excited Rydberg state with the principal quantum number \(n\). The state is stationary, therefore, the average intensity of emitted radiation is constant in time. The spectrum is stationary since radiation damping is neglected, and consists of several narrow lines corresponding to transitions to lower energy states. The second correlation function, however, breaks the time translation symmetry, and this reveals the time evolution of the radiation. Based on the measurement of radiation, we can reconstruct the motion of the electron.

The second correlation function was found in the classical approximation; however, its meaning is indeed purely quantum. The classical approximation means that transition matrix elements have been approximated by the corresponding classical expression. If exact expressions for the matrix elements had been used, the result would have been very similar. The calculations would have been numerically more complex.
We have to stress that electron localization is limited by the uncertainty principle. Thus, in the case of a state with orbital quantum number \(l\), angular localization is possible only up to \(2\pi/l\). While in the case of the large \(l\) considered here this is not a strong limitation, it does play a significant role in the case of \(l\) of the order of 1, even for states with large principal quantum numbers \(n\).

Our results show that the correlation function is strongly time-dependent. This correlation function clearly shows that if a strong and short pulse of radiation is detected, the next such pulse will come after one period of the corresponding classical motion, or in the quantum language, after time \(T=2\pi/(E_{n}-E_{n-1})\). This is due to the large acceleration of an electron in the vicinity of the nucleus. The first strong pulse localizes the electron at this point and breaks the time independence of the radiation. The second pulse comes after one period. Between the strong pulses, the radiation is much weaker because of the small acceleration. Thus the observation of the time dependence of radiation allows the localization of the Rydberg electron in the vicinity of the nucleus.

This method of localizing an electron on the orbit is non-standard. The recent approach to quantum particle localization is based on successive measurements of a single particle. Measurement means entangling the particle with another system -- a pointer -- and then measuring the pointer state. In the present approach, the electromagnetic field serves as the pointer. The electron position is not measured directly -- recall the dipole approximation -- rather, the electron acceleration is measured. Obviously, the second-order correlation gives a deeper insight into the dynamics than the average values of observables. Also, it provides some insight into the measurement process in quantum mechanics, in which the difficult process of position measurement is replaced by a standard measurement of radiation. We have to point out that the approach described in this paper does not discuss the probabilities of single measurements, but rather averages such as correlation functions. Nevertheless, it is a possible way of detecting the motion of an electron along a trajectory.

###### Acknowledgements.

JM dedicates this paper to Professor Iwo Bialynicki-Birula, who taught him quantum mechanics and for many years guided him through quantum physics.
2303.13123
Laplacian Segmentation Networks Improve Epistemic Uncertainty Quantification
Image segmentation relies heavily on neural networks which are known to be overconfident, especially when making predictions on out-of-distribution (OOD) images. This is a common scenario in the medical domain due to variations in equipment, acquisition sites, or image corruptions. This work addresses the challenge of OOD detection by proposing Laplacian Segmentation Networks (LSN): methods which jointly model epistemic (model) and aleatoric (data) uncertainty for OOD detection. In doing so, we propose the first Laplace approximation of the weight posterior that scales to large neural networks with skip connections that have high-dimensional outputs. We demonstrate on three datasets that the LSN-modeled parameter distributions, in combination with suitable uncertainty measures, gives superior OOD detection.
Kilian Zepf, Selma Wanna, Marco Miani, Juston Moore, Jes Frellsen, Søren Hauberg, Frederik Warburg, Aasa Feragen
2023-03-23T09:23:57Z
http://arxiv.org/abs/2303.13123v2
# Laplacian Segmentation Networks: Improved Epistemic Uncertainty from Spatial Aleatoric Uncertainty

###### Abstract

Out of distribution (OOD) medical images are frequently encountered, e.g. because of site- or scanner differences, or image corruption. OOD images come with a risk of incorrect image segmentation, potentially negatively affecting downstream diagnoses or treatment. To ensure robustness to such incorrect segmentations, we propose Laplacian Segmentation Networks (LSN) that jointly model epistemic (model) and aleatoric (data) uncertainty in image segmentation. We capture data uncertainty with a spatially correlated logit distribution. For model uncertainty, we propose the first Laplace approximation of the weight posterior that scales to large neural networks with skip connections that have high-dimensional outputs. Empirically, we demonstrate that modelling spatial pixel correlation allows the Laplacian Segmentation Network to successfully assign high epistemic uncertainty to out-of-distribution objects appearing within images.

## 1 Introduction

Image segmentation is a core component in the biomedical image analysis toolbox, used extensively both to quantify organs for use in scientific studies, as well as to inform clinicians by outlining relevant parts of the image. Its widespread use places high demands on its safe and interpretable operation. However, neural networks, which form the backbones of most modern segmentation models, are infamous for being overconfident in their predictions outside the training distribution (Hendrycks and Gimpel, 2016). As a result, downstream predictions can be highly unreliable even though they might come with high accuracy on in-distribution data. Figure 1 shows an example derived from a U-net model. While the network has never seen images corrupted by synthetic noise during training, it will confidently predict pixels replaced by noise as skin lesions. This renders every subsequent analysis based on the prediction unreliable. While the example is synthetic, similar effects occur in images corrupted, for example, by polluted cameras, or data loss.

**In this work**, we introduce Laplace approximations (LA) for epistemic uncertainty quantification in binary image segmentation models. Current Laplace approximations (Daxberger et al., 2021) scale quadratically with the output dimension of the neural network, which prevents their use in segmentation. We develop a fast Hessian approximation for deep architectures with skip connections, which are an integral part of modern segmentation networks such as the U-net (Ronneberger et al., 2015). This enables us to combine the aleatoric logit distribution of Stochastic Segmentation Networks (SSN) (Monteiro et al., 2020) with post-hoc Laplace approximations for epistemic uncertainty quantification. Specifically, we propose to find the Gaussian approximation on the mean network of the SSN. This can be viewed as combining an evidential deep learning (EDL) model (Sensoy et al., 2018; Ulmer et al., 2023) with a LA approximation of epistemic uncertainty rather than the point estimate predominantly used in EDL, and, to the best of our knowledge, this is the first work to propose this combination.

Figure 1: Corrupted images can lead to confident and wrong predictions by a U-net (fourth column). In automated downstream tasks, where predictions are not inspected by experts, this can remain undiscovered, leading to critical mistakes and wrong follow-up treatment of patients. Our model assigns high epistemic uncertainty to the corrupted images.
We demonstrate that the spatial correlation in the aleatoric component is essential for the quantification of epistemic uncertainty in the Laplace approximation. Joint modelling of epistemic and aleatoric segmentation uncertainty is not new (Kendall and Gal, 2017; Popescu et al., 2021; Joy et al., 2021; Fuchs et al., 2021). The binary cross entropy loss typically used to train segmentation models makes an assumption of independent pixels. Segmentation models using this loss tend to get stuck in local minima when the target is small compared to the background, unless the classes are reweighted in the loss (Lin et al., 2017). We show experimentally that this changes when pixels are spatially correlated: the model can converge without reweighting the smaller class. As a result, the loss landscape also looks different, leading to a change in the LA estimation of epistemic uncertainty. In our experiments, we observe an effect of this in the form of improved OOD behavior.

In a series of experiments on two medical binary segmentation tasks, we show that the proposed method outperforms baselines on a variety of OOD scenarios, including detecting distribution shifts across datasets and highlighting corrupted parts of an image by assigning high uncertainty to the affected pixels. By providing more reliable model uncertainty, the proposed Laplacian Segmentation Networks contribute to the applicability and safety of binary segmentation models in practice.

## 2 Background and Related Work

While several different sources and taxonomies of uncertainties have been proposed (Kiureghian and Ditlevsen, 2009; Gawlikowski et al., 2021), the Bayesian framework (Bishop, 2006; Kendall and Gal, 2017) distinguishes between two of them: aleatoric and epistemic. These appear directly in the predictive distribution

\[p(y|x,D)=\int\underbrace{p(y|x,\theta)}_{\text{aleatoric}}\underbrace{p(\theta|D)}_{\text{epistemic}}\mathrm{d}\theta, \tag{1}\]

where \((x,y)\) is an input-output pair, \(D\) is the training data and \(\theta\) the model parameters. The aleatoric (likelihood) density \(p(y|x,\theta)\) models noise or variation in the data from which the model learns. This is typically caused either by ambiguities in the image or in the definition of the object to be annotated, for example tumor boundaries which are hard to define due to gradual tissue infiltration. The posterior density \(p(\theta|D)\propto p(D|\theta)p(\theta)\) reflects the epistemic uncertainty of the estimated model, which quantifies the degree to which the model itself should be trusted. Here \(p(\theta)\) is the prior over the parameters and \(p(D|\theta)\) is the likelihood of the training data.

The intractable posterior distribution over the parameters prohibits the evaluation of the integral in Eq. (1). To overcome this, it is necessary to utilize suitable approximations for the predictive distribution itself or parts of the integral on the right-hand side of Eq. (1). In standard image segmentation, the posterior distribution \(p(\theta|D)\) is usually approximated with a Dirac distribution \(\delta(\theta-\hat{\theta})\), where \(\hat{\theta}\) is the maximum a posteriori (MAP) or maximum likelihood (ML) estimate, which simplifies the posterior predictive to \(p(y|x,D)=p(y|x,\hat{\theta})\).
The additional assumption of pixel-wise conditional independence within the segmentation mask gives the log-likelihood function \(\log p(y|x,\theta)=\sum_{s=1}^{S}\log p(y_{s}|x_{s},\theta)\), where \(s\) indexes pixels. If we assume a flat prior \(p(\theta)=1\), we recover the frequently used negative log-likelihood loss

\[L(\theta)=-\log p(D|\theta)=-\sum_{i=1}^{N}\sum_{s=1}^{S}\log p(y_{s,i}|x_{s,i},\theta), \tag{2}\]

where \(i\) indexes the training data. Note that this is equivalent to the cross-entropy loss for a Bernoulli likelihood.

While a joint modelling objective for aleatoric and epistemic uncertainty for regression and classification tasks in computer vision was suggested by Kendall and Gal (2017), most research has focused on either one or the other (Kohl et al., 2018; Monteiro et al., 2020). Methods that mainly focus on modelling aleatoric uncertainty in image segmentation have been proposed based on mixing deterministic segmentation architectures with generative components such as variational autoencoders and normalizing flows (Baumgartner et al., 2019; Kohl et al., 2019; Selvan et al., 2020). Probabilistic graphical models and combinations with neural networks have also been introduced for this purpose, but their application is restricted due to computational expense during inference time (Bartra et al., 2012; Kirillov et al., 2015, 2016; Arnab et al., 2018; Kamnitsas et al., 2017). Ensemble and multi-head models (Lakshminarayanan et al., 2017; Rupprecht et al., 2017; Lee et al., 2016) have been applied for both aleatoric and epistemic uncertainty quantification, trained on multiple annotations for the former or on the same annotations for the latter. In particular, this results in a frequentist approach to estimate \(p(\theta|D)\). A Bayesian way to approximate the posterior is Monte-Carlo Dropout, a variational method based on Bernoulli distributions, which can be easily implemented for neural networks (Gal and Ghahramani, 2015, 2016). Model uncertainty is retrieved by averaging multiple forward passes, while setting weights randomly to 0 with probability \(p\), both during inference time and training. Approaches that model both types of uncertainty in one model have been introduced based on the combination of Mean-Variance networks with a diagonal covariance matrix and Dropout (Kendall and Gal, 2017) and Gaussian-Process based convolutional layers (Popescu et al., 2021).
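Before moving on to the Laplace approximation, here is a minimal check of the equivalence noted after Eq. (2) (random logits and labels are purely illustrative): the pixel-wise Bernoulli negative log-likelihood coincides with PyTorch's binary cross entropy.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
logits = torch.randn(4, 1, 8, 8)                 # illustrative per-pixel logits
labels = torch.randint(0, 2, (4, 1, 8, 8)).float()

p = torch.sigmoid(logits)
# Negative log-likelihood of a pixel-wise Bernoulli model, Eq. (2):
nll = -(labels * torch.log(p) + (1 - labels) * torch.log(1 - p)).mean()
bce = F.binary_cross_entropy_with_logits(logits, labels)
print(torch.allclose(nll, bce))                  # True
```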
### Post-hoc Laplace Approximation

For Bayesian Neural Networks (Bishop, 2006), Laplace's method (MacKay, 1992) is used to approximate the intractable posterior distribution \(p(\theta|D)\) with a Gaussian centered at a local mode \(\theta_{\textsc{map}}\). In a first step, the mode of the posterior can be determined by iterative numerical optimization such as stochastic or conjugate gradient descent. Note that for the binary cross entropy loss case described in Eq. (2), maximizing the log posterior is equivalent to minimizing the loss function. A second order Taylor expansion around \(\theta_{\textsc{map}}\) gives

\[\log p(\theta|D)\simeq\log p(\theta_{\textsc{map}}|D)-\frac{1}{2}(\theta-\theta_{\textsc{map}})^{\top}\mathbf{H}(\theta-\theta_{\textsc{map}}), \tag{3}\]

where the first derivative vanishes because \(\theta_{\textsc{map}}\) is assumed to be a local mode. The Hessian \(\mathbf{H}\) in Eq. (3) is defined as

\[\mathbf{H}=-\nabla_{\theta}\nabla_{\theta}\log p(\theta|D)\big{|}_{\theta=\theta_{\textsc{map}}}. \tag{4}\]

Applying the exponential function to Eq. (3) gives an approximation \(q(\theta)\) which is proportional to the density function of a multivariate normal distribution. The normalizing factor can be found by evaluating the determinant of \(\mathbf{H}\), giving the final approximation to the posterior as

\[q(\theta)=\mathcal{N}(\theta|\theta_{\textsc{map}},\mathbf{H}^{-1}). \tag{5}\]

Evaluating the Hessian for such large networks is computationally infeasible because of the quadratic complexity in the network parameters and the usually large output dimensions in vision tasks. Approximations exist to overcome this (Botev, 2020) and frameworks for applying them are available (Detlefsen et al., 2021; Daxberger et al., 2021). We propose a fast Hessian approximation for models with skip connections (Sec. 3.2) that scales linearly with parameters and output resolution and makes a post-hoc Laplace approximation of the parameter posterior feasible. Further, we extend the mean-variance network in Kendall and Gal (2017) from heteroscedastic pixel-wise to spatially correlated aleatoric uncertainty using SSN. We quantify epistemic uncertainty by approximating the posterior on the mean network, and show that this yields reliable OOD detection.
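The post-hoc recipe of Sec. 2.1 in miniature (a one-parameter regression problem stands in for the segmentation network; everything here is illustrative): find \(\theta_{\textsc{map}}\) by gradient descent, evaluate the Hessian of the negative log posterior at the mode, and read off the Gaussian of Eq. (5).

```python
import torch

torch.manual_seed(0)
X = torch.linspace(-1, 1, 50)
y = 2.0 * X + 0.1 * torch.randn(50)              # illustrative 1-D regression data

theta = torch.zeros(1, requires_grad=True)
def neg_log_posterior(t):
    # Flat prior, Gaussian likelihood with unit noise.
    return 0.5 * ((y - t * X) ** 2).sum()

opt = torch.optim.SGD([theta], lr=0.05)
for _ in range(200):                             # find the MAP estimate
    opt.zero_grad()
    neg_log_posterior(theta).backward()
    opt.step()

# Laplace approximation, Eq. (5): q(theta) = N(theta_MAP, H^{-1}).
H = torch.autograd.functional.hessian(neg_log_posterior, theta.detach())
print(theta.item(), (1.0 / H).sqrt().item())     # MAP estimate and posterior std
```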
## 3 Laplacian Segmentation Network

To model the epistemic uncertainty in Eq. (1) with a Laplace approximation, we show that recent progress in scaling LA to images (Miani et al., 2022) can be extended to segmentation networks using skip connections.

### Laplace Approximation of Mean Network

We can reformulate the integral for the predictive distribution in Eq. (1) by integrating over logits \(\eta\) to obtain

\[p(y|x,D)=\iint p(y|\eta)p(\eta|x,\theta)p(\theta|D)\,\mathrm{d}\eta\,\mathrm{d}\theta. \tag{6}\]

This further factorization of Eq. (1) corresponds to the model formulation used in Evidential Deep Learning (Sensoy et al., 2018; Ulmer et al., 2023). Following Monteiro et al. (2020) and Kendall and Gal (2017), we model the conditional distribution over logits \(p(\eta|x,\theta)\) as a normal distribution parametrized by neural networks \(\mu\) and \(\Sigma\):

\[\eta|x\sim\mathcal{N}(\mu(x,\theta_{1}),\Sigma(x,\theta_{2})), \tag{7}\]

and assume pixel-wise independence for the predicted labels given the logits. Thus, we can model \(p(y|\eta)\) for each pixel \(i\) as a Bernoulli distribution parametrized by the softmax of the respective logit. Since the size of the covariance matrix \(\Sigma\) scales quadratically with the number of pixels in the image, we use the low-rank parameterisation of Monteiro et al. (2020),

\[\Sigma(x)=D(x)+P(x)^{T}P(x), \tag{8}\]

i.e. the variance network \(\Sigma(x)\) is implemented with two networks \(D(x)\) and \(P(x)\). The vectors \(\theta_{1}\in\Theta_{1}=\mathbb{R}^{T}\) and \(\theta_{2}\in\Theta_{2}=\mathbb{R}^{T}\) parameterize the mean and variance networks (c.f. Eq. 7) and share the first \(t\) entries, i.e. we define the shared weight vector \(\theta_{t}\) of the network by

\[\theta_{t}\coloneqq(\theta_{1_{1}},\dots,\theta_{1_{t}})=(\theta_{2_{1}},\dots,\theta_{2_{t}})\in\Theta_{t}=\mathbb{R}^{t}. \tag{9}\]

Then \(\theta\in\Theta=\mathbb{R}^{(t+2\cdot(T-t))}\) contains all model parameters,

\[\theta\coloneqq(\theta_{t},\theta_{1_{t+1}},\dots,\theta_{1_{T}},\theta_{2_{t+1}},\dots,\theta_{2_{T}}). \tag{10}\]

The post-hoc Laplace approximation first finds a mode \(\theta_{\textsc{map}}\) by minimizing the loss function

\[\mathcal{L}(\theta)=-\log\mathbb{E}_{p(\eta|x,\theta)}[p(y|\eta)]-\log p(\theta)\approx-\text{logsumexp}_{m=1}^{M}\Big{(}\sum_{s=1}^{S}\log p(y_{s}|\eta_{s}^{(m)})\Big{)}+\log(M), \tag{11}\]

where \(M\) logits \(\eta\) are sampled from the distribution in Eq. (7) and where the term \(-\log p(\theta)\) vanishes assuming a flat prior \(p(\theta)=1\). Since current algorithms for fast Hessian computations have no implementation for this loss function, we instead make use of the shared weights in the parameter vectors. For \(t\gg T-t\), the loss landscape is dominantly defined by the shared parameters. Current SSN implementations usually fulfil this criterion, since they estimate the mean and variance of the logit distribution based on the feature maps of a deep deterministic segmentation model, while using only one convolutional layer each for mean and variance estimation. We therefore discard the entries of the variance network from the parameter vector \(\theta_{\textsc{map}}\), i.e. we set

\[\theta_{\textsc{map}}^{*}\coloneqq\theta_{\textsc{map}}\big{|}_{(\theta_{t},\theta_{1_{t+1}},\dots,\theta_{1_{T}})}\in\Theta_{\textsc{mean}}=\mathbb{R}^{T}. \tag{12}\]

We can make use of the fact that the SSN loss function reduces to the binary cross entropy loss under zero variance, which allows us to fall back on the fast Hessian computation frameworks available. The posterior is then found by the Laplace approximation as described in Sec. 2.1, resulting in a Gaussian approximation in the parameter space \(\Theta_{\text{mean}}\),

\[q(\theta^{*})=\mathcal{N}(\theta^{*}|\theta_{\text{MAP}}^{*},\mathbf{H}^{*-1}), \tag{13}\]

with \(\mathbf{H}^{*}\) defined as

\[\mathbf{H}^{*}=-\nabla_{\theta}\nabla_{\theta}\log p(\theta^{*}|D)\big{|}_{\theta^{*}=\theta_{\text{MAP}}^{*}}. \tag{14}\]

In addition to the aleatoric logit distribution, we can now investigate the epistemic component in the form of the Laplace approximation for a given sample during inference time. Figure 2 gives a schematic overview of the proposed Laplacian Segmentation Network (LSN).
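The epistemic inference step summarized in Figure 2, sketched with stand-ins (a single \(1\times 1\) convolution replaces the mean head, and a fixed diagonal posterior standard deviation replaces \(\mathbf{H}^{*-1/2}\); all sizes are illustrative): sample mean networks from \(q(\theta^{*})\) and take the per-pixel variance of their sigmoid outputs.

```python
import torch

torch.manual_seed(0)
mean_net = torch.nn.Conv2d(8, 1, kernel_size=1)     # stand-in for the mean head
feats = torch.randn(1, 8, 16, 16)                   # stand-in for U-net features

theta_map = torch.nn.utils.parameters_to_vector(mean_net.parameters()).detach()
post_std = 0.05 * torch.ones_like(theta_map)        # stand-in for diag(H*)^{-1/2}

sigmoids = []
for _ in range(50):                                 # 50 samples from q(theta*)
    sample = theta_map + post_std * torch.randn_like(theta_map)
    torch.nn.utils.vector_to_parameters(sample, mean_net.parameters())
    with torch.no_grad():
        sigmoids.append(torch.sigmoid(mean_net(feats)))

epistemic_map = torch.stack(sigmoids).var(dim=0)    # per-pixel epistemic variance
print(epistemic_map.shape)                          # torch.Size([1, 1, 16, 16])
```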
### Fast Hessian Approximations for Segmentation Networks with Skip Connections

Computation of second order derivatives for segmentation networks is expensive due to the vast number of parameters and pixels in the output. Standard methods approximate the Hessian with the diagonal of the Generalized Gauss Newton (ggn) matrix (Foresee and Hagan, 1997; Botev, 2020). This approximation, besides enforcing positive definiteness, also allows for an efficient backpropagation-like algorithm. The required compute scales linearly in the number of parameters and quadratically in the number of pixels. The quadratic dependency is prohibitive already with images of size \(64\times 64\). We therefore make use of the diagonal backpropagation (db) proposed by Miani et al. (2022), which returns a trace-preserving approximation of the diagonal of the ggn. The complexity of this approximation scales linearly with the number of pixels, allowing the computation of the Hessian also for larger images. The idea is to add a diagonal operator \(\mathfrak{D}\) in-between each backpropagation step. For each layer \(l\),

\[[\nabla_{\theta}\nabla_{\theta}\log p(\theta|D)]_{l}\overset{\text{GGN}}{\approx}[J_{\theta}f_{\theta}(x)^{\top}H^{(L)}J_{\theta}f_{\theta}(x)]_{l}=J_{\theta}{f^{(l)}}^{\top}\left(\prod_{i=l+1}^{L}J_{x}{f^{(i)}}^{\top}H^{(L)}\prod_{i=L}^{l+1}J_{x}{f^{(i)}}\right)J_{\theta}{f^{(l)}}\overset{\text{DB}}{\approx}J_{\theta}{f^{(l)}}^{\top}\mathfrak{D}\left(J_{x}{f^{(l+1)}}^{\top}\mathfrak{D}\left(\ldots\right)J_{x}{f^{(l+1)}}\right)J_{\theta}{f^{(l)}}, \tag{15}\]

where \(H^{(L)}\) is the Hessian of the binary cross entropy loss with respect to the logits, which can be expressed in closed form as a diagonal plus outer product matrix. Moreover, we extend the StochMan library (Detlefsen et al., 2021) with support for skip-connection layers. For a given submodule \(f_{\theta}\), a skip-connection layer \(\text{sc}_{f}\) concatenates the function with the identity, such that \(\text{sc}_{f}(x)=(f_{\theta}(x),x)\). The Jacobian is then \(J_{x}\text{sc}_{f}(x):=(J_{x}f_{\theta}(x),\mathbb{I}_{x})\). We exploit the block structure and efficiently backpropagate the diagonal only. With a recursive call on the submodule \(f\), the backpropagation supports nested skip-connections, i.e. when some submodules of \(f\) are skip-connections as well. This unlocks the use of various curvature-based methods for segmentation architectures with skip connections in future research. For a technical description of the used Hessian approximation we refer to Appendix A.2.
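A toy illustration of the skip-connection case (a simplified stand-in, not the StochMan implementation): the Jacobian of \(\text{sc}_{f}\) stacks \(J_{x}f\) on the identity, and when the incoming curvature is already diagonal, its backpropagated diagonal splits over the two branches, which is exactly the block structure exploited above.

```python
import torch

torch.manual_seed(0)
n = 4
W = torch.randn(n, n)
f = lambda v: torch.tanh(W @ v)                   # toy submodule f_theta

x = torch.randn(n)
Jf = torch.autograd.functional.jacobian(f, x)     # J_x f, shape (n, n)

# Skip connection sc_f(x) = (f(x), x): its Jacobian stacks Jf on the identity.
J_sc = torch.cat([Jf, torch.eye(n)], dim=0)       # shape (2n, n)

D = torch.rand(2 * n)                             # diagonal curvature at the output
full = J_sc.T @ torch.diag(D) @ J_sc              # exact backpropagated curvature

# Diagonal backprop through the two branches: the f-branch contributes
# (Jf^T)^2 d_f, and the identity branch passes its diagonal through unchanged.
diag_db = (Jf.T ** 2) @ D[:n] + D[n:]
print(torch.allclose(diag_db, full.diagonal()))   # True: with a diagonal input
# curvature the diagonal is exact; DB re-diagonalizes before the next layer.
```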
## 4 Results

In the following, we investigate the derived LSN and test it in different practical scenarios. We start by illustrating the effect of introducing spatial correlation between pixels in scenarios with strong class imbalance. When segmenting small objects from background, reweighting classes is often necessary to optimize the network; this, however, has an undesired effect on the estimated epistemic uncertainties. We show how LSNs incorporating spatial correlation do not have this problem, and illustrate how this results in epistemic uncertainty with superior OOD behavior. Next, we compare how the two types of uncertainties behave when modelled together and how and whether they differ. Finally, we investigate out-of-distribution capabilities of the considered models on three different tasks. First, we show with a toy example to what extent the epistemic components of the models are able to detect corrupted white noise boxes that are added to the images. Second, we assess the ability to detect distribution shifts by comparing the ISIC dataset against two similar datasets containing images of skin without lesions.

### Baselines & Data

We compare our suggested model (LSN) to the following baselines in our experiments: (1) an ensemble of U-nets (Ensemble), where each member is trained on the same dataset; (2) a U-net with Monte-Carlo Dropout (U-net + Dropout); (3) a U-net with post-hoc Laplace approximation (U-net + LA); and (4) an SSN with Dropout on the mean network (SSN + Dropout). Additionally, we consider the case of a diagonal covariance matrix, ignoring spatial correlation, for LSN and SSN + Dropout. We indicate this with a _(diag)_ tag. Training and experimental scripts are implemented in PyTorch and will be made available upon publication. All models share the same U-net backbone with five encoding blocks for comparability. Each block contains two convolution layers and uses the hyperbolic tangent activation function. For the estimation of \(\mu\), \(D\) and \(P\), the last feature map of the U-net is passed through three separate \(1\times 1\) convolution layers, respectively. The ISIC19 dataset (Combalia et al., 2019; Codella et al., 2018; Tschandl et al., 2018) is scaled to a resolution of \(64\times 64\) and is split into 11028 images for training and 1379 images each for validation and testing. We refer to appendix A.1 for further details on the training procedure.

### Spatial Correlation affects Class Imbalance affects Epistemic Uncertainty

Class imbalance is a common challenge when segmenting small objects, often treated with methods of cost-sensitive learning (Elkan, 2001; Kukar et al., 1998), which derive loss functions that penalize wrongly classified pixels of different classes differently (Lin et al., 2017). For the binary cross entropy loss, this is typically done by scaling the class-wise loss contributions with a factor that represents the imbalance between target and background. To compensate for class imbalance in the ISIC dataset (about 1:8), all models were trained with binary cross entropy, reweighting the underrepresented class with different factors. Table 1 shows the validation set intersection-over-union (IoU) after the last epoch for different scaling factors of the loss function. We find that only the models including spatial variation reach a useful optimum when reweighting is not applied, illustrating that including spatial correlation in the modelling has a desired effect on training with imbalanced classes. We hypothesize that this is because the high correlation between pixels in the background effectively reduces their contribution to the loss.

But this spatial correlation does not only affect our ability to optimize models; it also affects our epistemic uncertainty estimates: such rescaling of parts of the loss has an immediate effect on epistemic uncertainty, since the Gaussian approximation found post-hoc depends on the curvature of the loss landscape. In Fig. 3 we illustrate this effect with a toy example, showing the landscape of a function (left) and of a rescaled version of the same function (right), along with their post-hoc fitted Laplace approximations. As shown in the illustration, the scaled function exhibits higher curvature, and hence a lower epistemic uncertainty, than the original. In other words, we find that including spatial correlation between pixels in the aleatoric logit distribution should affect the resulting epistemic uncertainty given by a post-hoc Laplace approximation. In the next sections, we shall see this empirically by comparing LSN to the version LSN (diag), where the aleatoric uncertainty is modelled without spatial correlation.
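The effect of Figure 3 in a few lines (a quadratic toy loss is assumed): scaling the loss by a factor \(c>1\), as class reweighting does for parts of the loss, scales the curvature at the mode by \(c\) and hence shrinks the Laplace variance by \(1/c\).

```python
L = lambda t: (t - 1.0) ** 2        # toy loss with mode theta_MAP = 1
c, h, t0 = 4.0, 1e-4, 1.0           # rescaling factor and finite-difference step

def curvature(f):
    # Second derivative at the mode by central finite differences.
    return (f(t0 + h) - 2 * f(t0) + f(t0 - h)) / h ** 2

print(1.0 / curvature(L))                       # ~0.5   : Laplace variance
print(1.0 / curvature(lambda t: c * L(t)))      # ~0.125 : shrunk by 1/c
```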
### How do aleatoric and epistemic uncertainty differ when modelled jointly?

The proposed model can quantify aleatoric and epistemic uncertainty via the logit distribution of the SSN and the post-hoc Laplace approximation. To evaluate how similar the modelled distributions are, we evaluate them in terms of segmentation performance. For each image in the ISIC test set, we draw 50 samples from the logit distribution, parameterized by \(\theta_{MAP}\). We then draw 50 mean networks from the parameter distribution given by the Laplace approximation. Retrieving samples from both components of the model is illustrated in Fig. 2.

\begin{table} \begin{tabular}{l l l l l} \hline \hline & \multicolumn{4}{c}{Scaling Factor} \\ \cline{2-5} Model & 1 & 2 & 4 & 8 \\ \hline Ensemble & \(<10^{-10}\) & \(<10^{-10}\) & \(0.74\) & \(0.72\) \\ \hline U-net + Dropout & \(<10^{-10}\) & \(<10^{-10}\) & \(0.64\) & \(0.61\) \\ SSN (diag) + Dropout & \(<10^{-10}\) & \(<10^{-10}\) & \(0.69\) & \(0.65\) \\ SSN + Dropout & \(0.73\) & \(0.72\) & \(0.68\) & \(0.60\) \\ \hline U-net + LA & \(<10^{-10}\) & \(<10^{-10}\) & \(0.67\) & \(0.70\) \\ LSN (diag) & \(<10^{-10}\) & \(<10^{-10}\) & \(0.67\) & \(0.65\) \\ LSN & \(0.71\) & \(0.65\) & \(0.58\) & \(0.59\) \\ \hline \hline \end{tabular} \end{table} Table 1: Modelling the spatial correlation in the aleatoric logit distribution makes scaling the binary cross entropy loss unnecessary. IoU on the ISIC validation set after training for all models and different scaling factors of the loss function. Other learning parameters are kept equal.

Figure 2: **Model overview.** Epistemic uncertainty maps are retrieved by sampling mean networks from the Laplace approximation \(q(\theta^{*})\) and calculating the variance on their outputs for \(x\). The aleatoric logit distribution is predicted on the parameter configuration \(\theta_{\text{MAP}}\), akin to Monteiro et al. (2020).

Figure 4 shows Precision-Recall curves for the proposed model, as well as the baseline models that use the SSN and therefore model both uncertainties. Note that the SSN (diag) + Dropout corresponds to the heteroscedastic Bayesian Neural Network with Dropout derived by Kendall and Gal (2017). First, we find that the aleatoric component, targeting the data variation, performs better in terms of segmentation performance than the epistemic component for our model. Further, the epistemic component of the LSN with diagonal covariance matrix performs worse (lower AUC), underpinning that the scaling of the loss has an effect on the post-hoc Laplace approximation. For the Dropout-based methods, on the other hand, we see virtually no difference between the epistemic and aleatoric components. This is in line with the findings of Kendall and Gal (2017) that Dropout as the epistemic component imitates the aleatoric logit distribution and is unable to capture the different types of uncertainty.

What does this tell us? We see that the aleatoric and epistemic components of the LSN clearly capture different aspects of the model's uncertainty. To better understand where this difference comes from, Figure 5 shows epistemic and aleatoric variance maps of sigmoids for a given ISIC test set sample. While the epistemic component assigns low variance to this in-distribution sample, the aleatoric component expresses the variation of plausible predictions for the image. We find this behavior attractive: in the Bayesian framework, the aleatoric component should play the role of the predictor, as it imitates the distribution of annotations found in the data. The epistemic component, conversely, should capture the model uncertainty, enabling the model to identify OOD samples. In the following experiments we will investigate to what extent the proposed model's epistemic component accomplishes this requirement.
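The evaluation above, sketched with random stand-in data (scikit-learn is one possible tool): pool the per-pixel mean of the 50 sampled sigmoid maps against the ground-truth masks and trace a precision-recall curve, whose AUC is the quantity compared in Figure 4.

```python
import numpy as np
from sklearn.metrics import precision_recall_curve, auc

rng = np.random.default_rng(0)
masks = rng.integers(0, 2, size=(10, 64, 64))      # stand-in ground truth
samples = rng.random((50, 10, 64, 64))             # 50 sampled sigmoid maps

scores = samples.mean(axis=0).ravel()              # mean prediction per pixel
precision, recall, _ = precision_recall_curve(masks.ravel(), scores)
print(auc(recall, precision))                      # area under the PR curve
```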
Figure 4: Precision-Recall plots for the ISIC test dataset show that aleatoric and epistemic variance in the LSN models (first row) are not the same, as opposed to the SSN + Dropout models. The better performance of the aleatoric component (higher AUC) indicates that the model learns to separate data variation from model uncertainty. This is not the case for models with Dropout as an epistemic component, confirming the findings by Kendall and Gal (2017).

Figure 5: Epistemic and aleatoric uncertainties differ for the LSN model. While the aleatoric component assigns high variance to the boundary, expressing how plausible segmentations could differ, the epistemic variance for this in-distribution sample is low.

Figure 3: Illustration of a loss landscape (grey) with a fitted post-hoc Laplace approximation (orange) around a local minimum \(\theta_{MAP}\). On the right, the loss landscape is scaled by a factor \(>1\), as done when correcting for class imbalance. Note that the fitted Gaussian approximation's variance depends on the curvature at \(\theta_{MAP}\).

### Detection of Within-Image Corruption

To assess how well the different models are able to assign high epistemic variance to corrupted parts of an image, we distort in-distribution images from the ISIC test set by either adding square white noise boxes or cropping out black boxes. This allows us to directly compare the epistemic components' variance predictions for ID and OOD pixels and to investigate whether they are able to assign high variance to the affected area within the image.

Figure 6 shows a sample from the ISIC dataset with its corrupted counterpart, along with the respective models' epistemic variance maps. The ensemble assigns high variance to the whole lesion since some members end up in suboptimal local minima and predict background only. Any meaningful epistemic uncertainty captured by the remaining members is clouded by this. While all models assign variance to the boundary of the lesion, the LSN assigns the variance more accurately to the white noise box.

To quantify whether the models assign higher variance to the OOD samples compared to their ID counterparts, we define the Pixel Ratio for a given image \(x\) as

\[\textit{Pixel Ratio}(x)=\frac{\sum_{s=1}^{S}\mathrm{Var}(\sigma(\eta_{s}(x_{\text{OOD}})))}{\sum_{s=1}^{S}\mathrm{Var}(\sigma(\eta_{s}(x)))}, \tag{16}\]

where \(S\) is the number of pixels and \(x_{\text{OOD}}\) the corrupted version of the image. While the Pixel Ratio detects increased variance across the entire image, it does not quantify whether the increase in prediction variance is localized at the corrupted box. To this end, we define the Box Ratio as

\[\textit{Box Ratio}(x_{\text{OOD}})=\frac{\sum_{k=1}^{K}\mathrm{Var}(\sigma(\eta_{k}(x_{\text{OOD}})))}{\sum_{s=1}^{S}\mathrm{Var}(\sigma(\eta_{s}(x_{\text{OOD}})))}, \tag{17}\]

where \(K\) is the index set for those pixels lying within the white noise box. The Box Ratio gives an indication of the extent to which the considered method assigns high variance to those pixels that have been distorted. In Table 2, we provide both measures for the case of an added white noise box or a black crop box, where the size and position of the box vary randomly for each image in the ISIC test set. Specifically, we sample box positions uniformly from \([0,40]\times[0,40]\) and side lengths uniformly from \([10,20]\).
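The two measures of Eqs. (16)-(17) written out directly (the variance maps and the box location below are random stand-ins):

```python
import numpy as np

def pixel_ratio(var_ood, var_id):
    # Eq. (16): total predicted variance on the corrupted image vs. its ID counterpart.
    return var_ood.sum() / var_id.sum()

def box_ratio(var_ood, box_mask):
    # Eq. (17): fraction of the OOD image's variance falling inside the corrupted box.
    return var_ood[box_mask].sum() / var_ood.sum()

rng = np.random.default_rng(0)
var_id = rng.random((64, 64))              # stand-in per-pixel variance maps
var_ood = var_id + 0.5 * rng.random((64, 64))
box_mask = np.zeros((64, 64), dtype=bool)
box_mask[10:25, 30:45] = True              # location of the injected noise box
print(pixel_ratio(var_ood, var_id), box_ratio(var_ood, box_mask))
```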
With the exception of the Ensemble, all models assign higher variance to the OOD samples, indicated by Pixel Ratio values larger than 1. However, the LSN is able to better locate the corrupted boxes on average. The total sum of variance from ID to OOD does not increase much for the LSN, indicated by low Pixel Ratios. At the same time the high Box Ratio indicates that the LSN moves the variance from the boundary of the lesion to the corrupted box. This is also visible in the qualitative examples in Figure 6, where we see, for instance, that while the U-net + Dropout gets a top score on the Pixel Ratio, its low Box Ratio uncovers the fact that the model did not localize the corrupted box at all. ### Distribution Shifts on the ISIC Dataset Detecting gradual distribution shifts is critical in medical imaging, since downstream tasks might be sensitive to small systematic input changes. For example, a slightly flatter camera angle directly influences the predicted area of a skin lesion, distorting decisions in automated systems. We evaluate the OOD performance of all models, trained on the ISIC 2018 dataset, on an in-distribution test set and three OOD datasets: Derm-Skin (DERM), Clin-Skin (CLINIC) (Pacheco et al., 2020) and the PAD-UFES-20 dataset (Pacheco et al., 2020). The Derm-Skin dataset contains 1,565 images of healthy skin, cropped out of the ISIC dataset. The Clin-Skin dataset contains 723 images showing healthy skin gathered from social networks. The PAD-UFES-20 dataset contains 1,570 photos of skin lesions collected from smartphone cameras. Figure 7 shows OOD detection capabilities for all models, while investigating the effects of different modeling on the epistemic component's predictive entropy. The distributions reported in Figure 7 relate to the average per-pixel predictive entropy per image. The models with the best separation between ID and OOD entropy are the Ensemble, SSN (diag) + Dropout, LSN and LSN (diag) methods. However, even among the best performing methods, there remains significant overlap in predictive entropy between the ID and OOD datasets. This lack of separation motivates future work in exploring alternative uncertainty quantification metrics. \begin{table} \begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{Pixel Ratio \(\uparrow\)} & \multicolumn{2}{c}{Box Ratio \(\uparrow\)} \\ \cline{2-5} Model & Black & Noise & Black & Noise \\ \hline Ensemble & 1.00 & 0.95 & 0.14 & 0.13 \\ \hline U-net + Dropout & **1.82** & **1.80** & 0.13 & 0.14 \\ SSN (diag) + Dropout & 1.09 & 1.20 & 0.15 & 0.17 \\ SSN + Dropout & 1.51 & 1.60 & 0.14 & 0.20 \\ \hline U-net + LA & 1.14 & 1.47 & 0.17 & 0.24 \\ LSN (diag) & 1.33 & 1.23 & 0.23 & 0.24 \\ LSN & 1.29 & 1.64 & **0.26** & **0.29** \\ \hline \hline \end{tabular} \end{table} Table 2: Average ratio of pixel sum of variance maps for ID and OOD samples (Pixel Ratio) and ratio between correctly assigned variance to the corrupted box and pixel sum of variance maps (Box Ratio) for all models on ISIC. Results are shown for OOD images generated by adding a white noise box (Noise) or a black crop box (Black) at a randomly assigned position and size. ## 5 Discussion and Conclusion In this paper, we have demonstrated how Laplace approximations can scale to image segmentation tasks, through a trace-preserving diagonal Hessian approximation. Importantly, this scales linearly with the number of image pixels, unlike past work which exhibited a quadratic complexity.
We have demonstrated across different datasets and quality measures that this successfully captures the epistemic uncertainty of estimated model parameters, in contrast to existing methods. While we have illustrated a promising potential for the LSN models, our current implementation still has some limitations. First, we currently tackle only binary segmentation problems, which are common in medical imaging. Moreover, our Laplace approximation currently estimates uncertainty in the mean function of a mean-variance network. However, a fully Bayesian approach would apply a Laplace approximation to the entire model, giving even more precise estimates of uncertainty. We make the interesting experimental discovery that accurately capturing aleatoric uncertainty is essential to recover useful epistemic uncertainty estimates. In particular, we find that an explicit model of spatial correlation is essential. One hypothesis is that the epistemic uncertainty is forced to explain whichever uncertainty is left unexplained by the aleatoric component. A coarse (aleatoric) likelihood model that ignores spatial correlation thus forces the epistemic component to capture spatial data correlations even if this is not epistemic. Such a hypothesis would suggest that richer correlation structures than those considered here (e.g. based on attention mechanisms (Vaswani et al., 2017)) could yield even finer-grained epistemic uncertainties. Figure 6: Predicted epistemic variance maps of two samples from the ISIC test set and counterparts with added white noise box and black crop for the different models. LSN assigns high variance to the white noise box as well as the black crop. Figure 7: Box plot of entropy for all models on the in-distribution ISIC test set and the OOD datasets DERM, CLINIC and PAD-UFES-20. All models assign higher entropy to the OOD datasets. ## Acknowledgements This work was supported by a research grant (42062) from VILLUM FONDEN. This project received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement 757360). The work was partly funded by the Novo Nordisk Foundation through the Center for Basic Machine Learning Research in Life Science (NNF200C0062606). The authors acknowledge the Pioneer Centre for AI, DNRF grant number P1. This research used resources provided by the Darwin testbed at Los Alamos National Laboratory (LANL) which is funded by the Computational Systems and Software Environments subprogram of LANL's Advanced Simulation and Computing program (NNSA/DOE). This work was supported by the Laboratory Directed Research and Development program of LANL under project number 20210043DR. LANL is operated by Triad National Security, LLC, for the National Nuclear Security Administration of the U.S. Department of Energy (Contract No. 89233218CNA000001). ## A Appendix ### Implementation and Training Details All architectures were trained using the Adam optimizer (Kingma and Ba, 2014) and its default PyTorch configuration. For models with Dropout the learning rate was 0.005; for all other models the learning rate was 0.0001. We used a batch size of 32 and trained all models for 60 epochs. These hyperparameters were found with a grid search on the validation set. Computations were performed on an internal GPU cluster. To retrieve epistemic uncertainties we draw \(50\) samples from the parameter distributions given by Dropout and the post-hoc Laplace approximation.
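As a rough illustration of the quoted training configuration, a minimal PyTorch sketch could look as follows; the model and the data loader are placeholder stand-ins of ours, and only the optimizer choice, learning rates, batch size and epoch count come from the text:

```python
import torch

# Placeholder segmentation model and data; only the hyperparameters are quoted.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, 3, padding=1), torch.nn.ReLU(),
    torch.nn.Conv2d(16, 1, 3, padding=1))
uses_dropout = False
lr = 0.005 if uses_dropout else 0.0001       # 0.005 with Dropout, else 0.0001
optimizer = torch.optim.Adam(model.parameters(), lr=lr)  # default Adam config
loss_fn = torch.nn.BCEWithLogitsLoss()

loader = [(torch.randn(32, 3, 64, 64),       # batch size 32
           torch.randint(0, 2, (32, 1, 64, 64)).float()) for _ in range(2)]

for epoch in range(60):                      # 60 epochs
    for x, y in loader:
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()
```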
### Fast Hessian Approximation The aim of this section is to clarify the following sequence of approximations for the Hessian \(H_{\theta}\) \[H_{\theta}\ \stackrel{(1)}{\approx}\ \operatorname{\textsc{ggn}}_{\theta}\ \stackrel{(2)}{\approx}\ \operatorname{\textsc{ggnd}}_{\theta}\ \stackrel{(3)}{\approx}\ \operatorname{\textsc{db}}_{\theta}\] where \(\operatorname{\textsc{ggn}}_{\theta}\) and \(\operatorname{\textsc{ggnd}}_{\theta}\) refer to the Generalized Gauss Newton matrix and its Diagonal (Botev, 2020) and \(\operatorname{\textsc{db}}_{\theta}\) is the matrix found by the diagonal backpropagation algorithm (Miani et al., 2022). Each of the above approximations trades exactness of the found solution for speed of its computation. Specifically, approximation \((1)\) yields a matrix \(\operatorname{\textsc{ggn}}_{\theta}\) that is always positive semi-definite. Its diagonal, given by \((2)\), is commonly used for the Laplace approximation in neural networks since its inverse is computationally feasible; this is often referred to as diagonal Laplace. In the following we develop the terminology needed to explain our contribution in extending the diagonal backpropagation algorithm \(\operatorname{\textsc{db}}_{\theta}\) to nested skip-connection layers. ### Notation Consider a neural network (nn) \(f_{\theta}:\mathcal{X}\rightarrow\mathcal{Y}\) with \(L\) layers. The parameter \(\theta=(\theta_{1},\ldots,\theta_{L})\in\Theta\) is the concatenation of the parameters for each layer \(i\in\{1,...,L\}\). The nn is a composition of \(L\) functions \(f^{(1)},f^{(2)},\ldots,f^{(L)}\), where \(f^{(i)}\) is parametrized by \(\theta_{i}\). \[f_{\theta}:=f^{(L)}_{\theta_{L}}\circ f^{(L-1)}_{\theta_{L-1}}\circ\ldots\circ f^{(2)}_{\theta_{2}}\circ f^{(1)}_{\theta_{1}}.\] Since we need explicit access to the intermediate values, we call the input \(x_{0}\in\mathcal{X}\) and iteratively define \(x_{i}:=f^{(i)}_{\theta_{i}}(x_{i-1})\) for \(i=1,\ldots,L\), such that the nn output is \(x_{L}\in\mathcal{Y}\). This notation can be visually presented as the chain \[x_{0}\xrightarrow{f^{(1)}_{\theta_{1}}}x_{1}\xrightarrow{f^{(2)}_{\theta_{2}}}\cdots\xrightarrow{f^{(L)}_{\theta_{L}}}x_{L}.\] We highlight that there is no restriction on what a layer \(f^{(i)}\) can be; in particular, it can be a composition of functions itself, translating to an object of the _sequential_ class in PyTorch. This recursive structure of the function, which is replicated by its implementation in PyTorch, will be key in extending the diagonal backpropagation method to nested skip-connections. We define the _diagonal_ operator \(\mathcal{D}\) for a quadratic matrix \(M\) of size \(m\in\mathbb{N}^{+}\) \[\mathcal{D}:\mathbb{R}^{m\times m}\rightarrow\mathbb{R}^{m\times m}\] by \[[\mathcal{D}(M)]_{ij}:=\left\{\begin{array}{ll}M_{ij}&\text{ if }i=j\\ 0&\text{ if }i\neq j\end{array}\right.\qquad\forall i,j=1,\ldots,m\] Note that this is a trace-preserving operator, a property that will be inherited by our final approximation. Moreover, note that applying this operator induces a systematic bias; specifically, it decreases the Von Neumann entropy of the matrix. This is due to a smoothing effect on the eigenspectrum. ### Jacobian of the Neural Network We are interested in the Jacobian \(J_{\theta}f_{\theta}(x_{0})\) of the nn with respect to the parameter \(\theta\). Each column of the Jacobian is the derivative of the output vector w.r.t. a single parameter. We can then group the parameters (i.e.
columns) layer by layer \[J_{\theta}f_{\theta}(x_{0})=\left(\begin{array}{c|c|c|c|c}J_{\theta_{1}}f_{\theta}(x_{0})&\ldots&J_{\theta_{i}}f_{\theta}(x_{0})&\ldots&J_{\theta_{L}}f_{\theta}(x_{0})\end{array}\right)\] \[=\left(\begin{array}{c|c|c|c|c}J_{\theta_{1}}\left(f_{\theta_{L}}^{(L)}\circ\cdots\circ f_{\theta_{1}}^{(1)}\right)(x_{0})&\ldots&J_{\theta_{i}}\left(f_{\theta_{L}}^{(L)}\circ\cdots\circ f_{\theta_{i}}^{(i)}\right)(x_{i-1})&\ldots&J_{\theta_{L}}f_{\theta_{L}}^{(L)}(x_{L-1})\end{array}\right),\] where the second equality comes from the fact that each layer only depends on its respective parameters, i.e. \[J_{\theta_{j}}f_{\theta_{i}}^{(i)}(x_{i-1})=0\quad\text{ if }i\neq j.\] Exploiting this block-structure, we can focus on a single layer Jacobian \(J_{\theta_{i}}f_{\theta}(x_{0})\), and concatenate them afterwards. With the chain rule we get \[J_{\theta_{i}}f_{\theta}(x_{0})=J_{\theta_{i}}\left(f_{\theta_{L}}^{(L)}\circ\cdots\circ f_{\theta_{i}}^{(i)}\right)(x_{i-1})=\left(\prod_{j=L}^{i+1}J_{x_{j-1}}f_{\theta_{j}}^{(j)}(x_{j-1})\right)J_{\theta_{i}}f_{\theta_{i}}^{(i)}(x_{i-1}). \tag{18}\] The intuition for the chain rule is that the Jacobian \(J_{\theta_{i}}f_{\theta}(x_{0})\) for layer \(i\) is the composition of the Jacobians w.r.t. the _input_ \(J_{x_{j-1}}f_{\theta_{j}}^{(j)}(x_{j-1})\) of subsequent layers \(j=L,L-1,\ldots,i+2,i+1\), times the Jacobian w.r.t. the _parameters_ \(J_{\theta_{i}}f_{\theta_{i}}^{(i)}(x_{i-1})\) of the specific layer \(i\). Thus, we can reuse computation for one layer to improve the computation of other layers, specifically the product of Jacobians w.r.t. the input. ### Generalized Gauss Newton method For the Laplace approximation of a given loss function \(\mathcal{L}:\mathcal{Y}\rightarrow\mathbb{R}\) we need to compute the Hessian of the composition of nn and loss with respect to the parameters \(\nabla_{\theta}^{2}\mathcal{L}(f_{\theta}(x_{0}))\in\mathbb{R}^{|\theta|\times|\theta|}\). In the following we denote the length of vector \(v\) by \(|v|\). According to the chain rule it holds that \[\underbrace{\nabla_{\theta}^{2}\mathcal{L}(f_{\theta}(x_{0}))}_{=:H_{\theta}}=\underbrace{J_{\theta}f_{\theta}(x_{0})^{\top}\cdot\nabla_{x_{L}}^{2}\mathcal{L}(x_{L})\cdot J_{\theta}f_{\theta}(x_{0})}_{=:\text{GGN}_{\theta}}+\sum_{o=1}^{|x_{L}|}[\nabla_{x_{L}}\mathcal{L}(x_{L})]_{o}\cdot\nabla_{\theta}^{2}[f_{\theta}(x_{0})]_{o}, \tag{19}\] where \([v]_{o}\) refers to the \(o\)-th component of vector \(v\). In this sense, the Generalized Gauss-Newton matrix \(\text{GGN}_{\theta}\) is commonly used as a positive semi-definite approximation of the full Hessian \(H_{\theta}\). The positive semi-definiteness follows from the positive semi-definiteness of the Hessian of the loss function with respect to the nn output \(H^{\mathcal{L}}:=\nabla_{x_{L}}^{2}\mathcal{L}(x_{L})\in\mathbb{R}^{|x_{L}|\times|x_{L}|}\), which holds for common losses like Mean Squared Error or Cross-Entropy.
Consider a layer-by-layer block structure of the Generalized Gauss-Newton \[\text{GGN}_{\theta}=\left(\begin{array}{cccc}J_{\theta_{1}}f_{\theta}(x_{0})^{\top}H^{\mathcal{L}}J_{\theta_{1}}f_{\theta}(x_{0})&J_{\theta_{1}}f_{\theta}(x_{0})^{\top}H^{\mathcal{L}}J_{\theta_{2}}f_{\theta}(x_{0})&\ldots&J_{\theta_{1}}f_{\theta}(x_{0})^{\top}H^{\mathcal{L}}J_{\theta_{L}}f_{\theta}(x_{0})\\ J_{\theta_{2}}f_{\theta}(x_{0})^{\top}H^{\mathcal{L}}J_{\theta_{1}}f_{\theta}(x_{0})&J_{\theta_{2}}f_{\theta}(x_{0})^{\top}H^{\mathcal{L}}J_{\theta_{2}}f_{\theta}(x_{0})&&\\ \vdots&&\ddots&\\ J_{\theta_{L}}f_{\theta}(x_{0})^{\top}H^{\mathcal{L}}J_{\theta_{1}}f_{\theta}(x_{0})&&&J_{\theta_{L}}f_{\theta}(x_{0})^{\top}H^{\mathcal{L}}J_{\theta_{L}}f_{\theta}(x_{0})\end{array}\right).\] Its _block-diagonal_ approximation \[\textsc{GGnb}_{\theta}=\left(\begin{array}{cccc}J_{\theta_{1}}f_{\theta}(x_{0})^{\top}H^{\mathcal{L}}J_{\theta_{1}}f_{\theta}(x_{0})&0&\dots&0\\ 0&J_{\theta_{2}}f_{\theta}(x_{0})^{\top}H^{\mathcal{L}}J_{\theta_{2}}f_{\theta}(x_{0})&&\\ \vdots&&\ddots&\\ 0&&&J_{\theta_{L}}f_{\theta}(x_{0})^{\top}H^{\mathcal{L}}J_{\theta_{L}}f_{\theta}(x_{0})\end{array}\right)\] is usually considered as a cheaper-to-compute approximation of the full \(\text{GGN}_{\theta}\). According to the chain rule in Eq. (18), we can explicitly write the diagonal block \(\textsc{GGnb}_{\theta}^{(i)}=J_{\theta_{i}}f_{\theta}(x_{0})^{\top}H^{\mathcal{L}}J_{\theta_{i}}f_{\theta}(x_{0})\) of the \(i\)-th layer as \[\textsc{GGnb}_{\theta}^{(i)}=J_{\theta_{i}}f_{\theta_{i}}^{(i)}(x_{i-1})^{\top}\left(\prod_{j=i+1}^{L}J_{x_{j-1}}f_{\theta_{j}}^{(j)}(x_{j-1})^{\top}\right)\cdot H^{\mathcal{L}}\cdot\left(\prod_{j=L}^{i+1}J_{x_{j-1}}f_{\theta_{j}}^{(j)}(x_{j-1})\right)J_{\theta_{i}}f_{\theta_{i}}^{(i)}(x_{i-1}). \tag{20}\] This can be rearranged as a sequence of \((J^{\top}MJ)\)-like operators iteratively applied to \(H^{\mathcal{L}}\), as \[\textsc{GGnb}_{\theta}^{(i)}=J_{\theta_{i}}f_{\theta_{i}}^{(i)}(x_{i-1})^{\top}\Bigg{(}J_{x_{i}}f_{\theta_{i+1}}^{(i+1)}(x_{i})^{\top}\bigg{(}J_{x_{i+1}}f_{\theta_{i+2}}^{(i+2)}(x_{i+1})^{\top}\Big{(}\quad\dots\quad\big{(}J_{x_{L-1}}f_{\theta_{L}}^{(L)}(x_{L-1})^{\top}\cdot H^{\mathcal{L}} \tag{21}\] \[\qquad\qquad\qquad\cdot J_{x_{L-1}}f_{\theta_{L}}^{(L)}(x_{L-1})\big{)}\quad\dots\quad\Big{)}J_{x_{i}}f_{\theta_{i+1}}^{(i+1)}(x_{i})\bigg{)}J_{x_{i+1}}f_{\theta_{i+2}}^{(i+2)}(x_{i+1})\Bigg{)}J_{\theta_{i}}f_{\theta_{i}}^{(i)}(x_{i-1}).\] From this expression, we can build an efficient backpropagation-like algorithm to compute \(\textsc{GGnb}_{\theta}\).
``` \(M=H^{\mathcal{L}}\) for \(j=L,L-1,\dots,1\) do \(\textsc{GGnb}_{\theta}^{(j)}=J_{\theta_{j}}f_{\theta_{j}}^{(j)}(x_{j-1})^{\top}\cdot M\cdot J_{\theta_{j}}f_{\theta_{j}}^{(j)}(x_{j-1})\) \(M=J_{x_{j-1}}f_{\theta_{j}}^{(j)}(x_{j-1})^{\top}\cdot M\cdot J_{x_{j-1}}f_{\theta_{j}}^{(j)}(x_{j-1})\) end for \(\textsc{GGnb}_{\theta}=(\textsc{GGnb}_{\theta}^{(1)},\dots,\textsc{GGnb}_{\theta}^{(L)})\) return \(\textsc{GGnb}_{\theta}\) ``` **Algorithm 1** Computation of \(\textsc{GGnb}_{\theta}\) (exact backpropagation) Note the two memory bottlenecks of this algorithm: after each step \(j\) the involved matrices have sizes \[\textsc{GGnb}_{\theta}^{(j)}\in\mathbb{R}^{|\theta_{j}|\times|\theta_{j}|}\qquad\qquad M\in\mathbb{R}^{|x_{j-1}|\times|x_{j-1}|}.\] The former one is commonly avoided by just computing the diagonal of each block \(\textsc{GGnb}_{\theta}^{(j)}\), and thus just computing the diagonal of the Generalized Gauss-Newton matrix, which we can refer to as \(\textsc{GGnd}_{\theta}:=\mathcal{D}(\textsc{GGn}_{\theta})=\mathcal{D}(\textsc{GGnb}_{\theta})\). Using this approach together with a Laplace approximation of the loss landscape is called _diagonal Laplace_ and scales linearly in the number of parameters \(|\theta|\). ``` \(M=H^{\mathcal{L}}\) for \(j=L,L-1,\dots,1\) do \(\textsc{GGnd}_{\theta}^{(j)}=\mathcal{D}\left(J_{\theta_{j}}f_{\theta_{j}}^{(j)}(x_{j-1})^{\top}\cdot M\cdot J_{\theta_{j}}f_{\theta_{j}}^{(j)}(x_{j-1})\right)\) \(M=J_{x_{j-1}}f_{\theta_{j}}^{(j)}(x_{j-1})^{\top}\cdot M\cdot J_{x_{j-1}}f_{\theta_{j}}^{(j)}(x_{j-1})\) end for \(\textsc{GGnd}_{\theta}=(\textsc{GGnd}_{\theta}^{(1)},\dots,\textsc{GGnd}_{\theta}^{(L)})\) return \(\textsc{GGnd}_{\theta}\) ``` **Algorithm 2** Computation of \(\textsc{GGnd}_{\theta}\) (exact backpropagation) The memory bottlenecks of this modified version of Algorithm 1 are \[\textsc{ggnd}_{\theta}^{(j)}\in\mathbb{R}^{|\theta_{j}|}\qquad\qquad M\in\mathbb{R}^{|x_{j-1}|\times|x_{j-1}|}.\] However, the quadratic dependence on the size of \(x_{j}\) is still potentially problematic. In an encoder-decoder architecture, as in our case, the maximum of this size is realized in the input \(x_{0}\) and output \(x_{L}\). For image-based networks, this size corresponds to the number of pixels. With _reasonable_ resources this backpropagation algorithm is feasible for images of size \(28\times 28\), but it becomes infeasible for larger images. In order to deal with high-resolution images we need to avoid the quadratic dependency on the number of pixels in the output. ### Fast diagonal backprop The diagonal backpropagation computes an approximation of the Generalized Gauss-Newton. Inspired by Eq. (21), it is defined, for each layer \(i=1,\ldots,L\), as \[\textsc{db}_{\theta}^{(i)}=J_{\theta_{i}}f_{\theta_{i}}^{(i)}(x_{i-1})^{\top}\mathcal{D}\Bigg{(}J_{x_{i}}f_{\theta_{i+1}}^{(i+1)}(x_{i})^{\top}\mathcal{D}\bigg{(}J_{x_{i+1}}f_{\theta_{i+2}}^{(i+2)}(x_{i+1})^{\top}\mathcal{D}\Big{(}\quad\ldots\quad\mathcal{D}\big{(}J_{x_{L-1}}f_{\theta_{L}}^{(L)}(x_{L-1})^{\top}\cdot\mathcal{D}(H^{\mathcal{L}})\cdot \tag{22}\] \[\qquad\qquad\cdot J_{x_{L-1}}f_{\theta_{L}}^{(L)}(x_{L-1})\big{)}\quad\ldots\quad\Big{)}J_{x_{i}}f_{\theta_{i+1}}^{(i+1)}(x_{i})\bigg{)}J_{x_{i+1}}f_{\theta_{i+2}}^{(i+2)}(x_{i+1})\Bigg{)}J_{\theta_{i}}f_{\theta_{i}}^{(i)}(x_{i-1}).\] As before, we can build an efficient backpropagation-like algorithm from this expression to compute \(\textsc{db}_{\theta}\).
``` \(M\) = \(H^{\mathcal{L}}\) for \(j=L,L-1,\ldots,1\) do \(\textsc{db}_{\theta}^{(j)}\) = \(\mathcal{D}\left(J_{\theta_{j}}f_{\theta_{j}}^{(j)}(x_{j-1})^{\top}\cdot M\cdot J_{\theta_{j}}f_{\theta_{j}}^{(j)}(x_{j-1})\right)\) \(M\) = \(\mathcal{D}\left(J_{x_{j-1}}f_{\theta_{j}}^{(j)}(x_{j-1})^{\top}\cdot M\cdot J_{x_{j-1}}f_{\theta_{j}}^{(j)}(x_{j-1})\right)\) end for \(\textsc{db}_{\theta}\) = \((\textsc{db}_{\theta}^{(1)},\ldots,\textsc{db}_{\theta}^{(L)})\) return \(\textsc{db}_{\theta}\) ``` **Algorithm 3** Computation of \(\textsc{db}_{\theta}\) (approximated backpropagation) The memory bottlenecks of this algorithm are \[\textsc{db}_{\theta}^{(j)}\in\mathbb{R}^{|\theta_{j}|}\qquad\qquad M\in\mathbb{R}^{|x_{j-1}|}\] thus allowing for linear scaling in both the number of parameters and the number of pixels in the output. A key aspect that enables this algorithm to be so efficient is the simultaneous diagonalization and computation of the Jacobian product. Instead of first computing the full matrix and then discarding all non-diagonal elements, we directly compute only the diagonal elements of the products. However, this means that this operation needs to be implemented separately for every type of layer. Moreover, all diagonal matrices are stored implicitly, i.e. as vectors of their diagonal entries. ### Skip-connections In PyTorch, objects of the class _module_ can be nested, allowing one to create classes in a tree-like structure. All modules but the root of such a tree are usually referred to as _submodules_. It is then possible to form sequential neural networks by composing different _submodules_, each representing a (potentially parametric) function as part of the architecture. For a given submodule \(g_{\theta}:\mathbb{R}^{I}\rightarrow\mathbb{R}^{O}\), a skip-connection layer \(\textsc{sc}(g)_{\theta}\) concatenates the submodule with the identity. It is defined as \[\textsc{sc}(g)_{\theta}:\mathbb{R}^{I}\longrightarrow\mathbb{R}^{O+I} \tag{23}\] \[x\longmapsto(g_{\theta}(x),x)\] Note that this is a submodule itself, and its parameter \(\theta\) is the same as the parameter of the used submodule \(g_{\theta}\). The Jacobian with respect to the parameter is the same as the Jacobian of the submodule \[J_{\theta}\text{sc}(g)_{\theta}(x)=J_{\theta}g_{\theta}(x)\in\mathbb{R}^{O\times|\theta|} \tag{24}\] while the Jacobian with respect to the input is \[J_{x}\text{sc}(g)_{\theta}(x)=\left(\frac{J_{x}g_{\theta}(x)}{\mathbb{I}}\right)\in\mathbb{R}^{(O+I)\times I}, \tag{25}\] where \(\mathbb{I}\in\mathbb{R}^{I\times I}\) is the identity matrix of the same size as the input. In order to perform the fast diagonal backpropagation through this kind of layer we need to compute the \(\mathcal{D}(J^{\top}\cdot M\cdot J)\) operator efficiently. The one with respect to the parameter is straightforward: since the Jacobian is identical to that of the submodule \(g\), we simply call the operator implemented on the submodule. The one with respect to the input, on the other hand, is slightly different. We can exploit the block structure of the matrix \(M\in\mathbb{R}^{(O+I)\times(O+I)}\) as \[M=\begin{pmatrix}M_{11}&M_{12}\\ M_{21}&M_{22}\end{pmatrix},\qquad\qquad\text{where }M_{11}\in\mathbb{R}^{O\times O},M_{22}\in\mathbb{R}^{I\times I}.
\tag{26}\] We can then explicitly write \[J_{x}\text{SC}(g)_{\theta}(x)^{\top}\cdot M\cdot J_{x}\text{SC}(g)_{\theta}(x)=\left(\begin{array}{c|c}J_{x}g_{\theta}(x)^{\top}&\mathbb{I}\end{array}\right)\begin{pmatrix}M_{11}&M_{12}\\ M_{21}&M_{22}\end{pmatrix}\left(\frac{J_{x}g_{\theta}(x)}{\mathbb{I}}\right)\] \[=J_{x}g_{\theta}(x)^{\top}M_{11}J_{x}g_{\theta}(x)+J_{x}g_{\theta}(x)^{\top}M_{12}+M_{21}J_{x}g_{\theta}(x)+M_{22}\] and thus, assuming \(M\) is already in diagonal form, which implies \(M_{12}=0\), \(M_{21}=0\), we have \[\mathcal{D}\left(J_{x}\text{SC}(g)_{\theta}(x)^{\top}\cdot M\cdot J_{x}\text{SC}(g)_{\theta}(x)\right)=\mathcal{D}\left(J_{x}g_{\theta}(x)^{\top}M_{11}J_{x}g_{\theta}(x)\right)+\mathcal{D}(M_{22}). \tag{27}\] This means that we can efficiently perform this backpropagation by calling the same operator on the submodule \(g\), which returns \(\mathcal{D}\left(J_{x}g_{\theta}(x)^{\top}M_{11}J_{x}g_{\theta}(x)\right)\), and then adding the diagonal \(\mathcal{D}(M_{22})\). #### A.7.1 Recursiveness We highlight that this efficient implementation builds on a recursive structure, which unlocks several possibilities. The idea of the library is that every module implements the two backpropagation operators: \(\mathcal{D}(J_{x}^{\top}\cdot M\cdot J_{x})\) with respect to the input and \(\mathcal{D}(J_{\theta}^{\top}\cdot M\cdot J_{\theta})\) with respect to the parameter. Most importantly, these operators have to share the same syntax across all modules. For basic modules such as linear, conv2d, relu, tanh, these operators are hardcoded with tensor operations. The skip-connection module, however, has these operators coded with a recursive call on the submodule, as in Eq. (27). In the Sequential module the two operators are implemented with a backpropagation-like structure, as in Algorithm 3, with a recursive call on all the submodules in the sequence (in reverse order in the for loop). In our work we use this to implement the skip-connections of the U-net (Ronneberger et al., 2015). The submodule of a skip-connection is a sequential, which contains convolutions and other layers, as well as another skip-connection with the same structure.
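To make Algorithm 3 and the skip-connection rule of Eq. (27) concrete, here is a minimal numpy sketch for a toy two-layer MLP; the network, shapes, and helper names are ours (a sketch under these assumptions, not the paper's library):

```python
import numpy as np

rng = np.random.default_rng(0)
d0, d1, d2 = 5, 4, 3
W1, W2 = rng.normal(size=(d1, d0)), rng.normal(size=(d2, d1))

# Forward pass through linear -> ReLU -> linear, caching activations.
x0 = rng.normal(size=d0)
z1 = W1 @ x0
x1 = np.maximum(z1, 0.0)

def diag_param(m, inp):
    # diagonal of J_W^T D(m) J_W for a linear layer y = W @ inp
    return np.outer(m, inp ** 2).ravel()

# Algorithm 3: backpropagate only the diagonal of M.
m = np.ones(d2)                    # D(H^L), e.g. for an MSE-type loss
db_W2 = diag_param(m, x1)          # parameter diagonal of the last layer
m = (W2 ** 2).T @ m                # D(J_x^T D(m) J_x) for a linear layer
m = m * (z1 > 0)                   # ReLU: the input Jacobian is a 0/1 mask
db_W1 = diag_param(m, x0)          # parameter diagonal of the first layer

# Skip-connection rule, Eq. (27): recurse into g, then add D(M22).
G = rng.normal(size=(d2, d0))      # submodule g(x) = G @ x
m_out = rng.uniform(0.1, 1.0, size=d2 + d0)   # diagonal M over (g(x), x)
m11, m22 = m_out[:d2], m_out[d2:]
m_in = (G ** 2).T @ m11 + m22      # D(J^T M11 J) + D(M22)

# Sanity check against the dense computation with J_sc = [G; I].
J_sc = np.vstack([G, np.eye(d0)])
assert np.allclose(m_in, np.diag(J_sc.T @ np.diag(m_out) @ J_sc))
```

The final assertion illustrates why the recursion is exact for a diagonal \(M\): the off-diagonal blocks \(M_{12},M_{21}\) vanish, so the dense product and the two-term diagonal rule agree.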
2310.04567
DPM-TSE: A Diffusion Probabilistic Model for Target Sound Extraction
Common target sound extraction (TSE) approaches have primarily relied on discriminative methods in order to separate the target sound while minimizing interference from unwanted sources, with varying success in separating the target from the background. This study introduces DPM-TSE, a first generative method based on diffusion probabilistic modeling (DPM) for target sound extraction, to achieve both cleaner target renderings as well as improved separability from unwanted sounds. The technique also tackles common background noise issues with DPM by introducing a correction method for noise schedules and sample steps. This approach is evaluated using both objective and subjective quality metrics on the FSD Kaggle 2018 dataset. The results show that DPM-TSE has a significant improvement in perceived quality in terms of target extraction and purity.
Jiarui Hai, Helin Wang, Dongchao Yang, Karan Thakkar, Najim Dehak, Mounya Elhilali
2023-10-06T20:13:57Z
http://arxiv.org/abs/2310.04567v2
# DPM-TSE: A Diffusion Probabilistic Model for Target Sound Extraction ###### Abstract Common target sound extraction (TSE) approaches have primarily relied on discriminative methods in order to separate the target sound while minimizing interference from unwanted sources, with varying success in separating the target from the background. This study introduces DPM-TSE, a first generative method based on diffusion probabilistic modeling (DPM) for target sound extraction, to achieve both cleaner target renderings as well as improved separability from unwanted sounds. The technique also tackles common background noise issues with DPM by introducing a correction method for noise schedules and sample steps. This approach is evaluated using both objective and subjective quality metrics on the FSD Kaggle 2018 dataset. The results show that DPM-TSE has a significant improvement in perceived quality in terms of target extraction and purity. Jiarui Hai\({}^{1,\dagger}\), Helin Wang\({}^{2,\dagger}\), Dongchao Yang\({}^{3}\), Karan Thakkar\({}^{1}\), Najim Dehak\({}^{2}\), Mounya Elhilali\({}^{1,2}\)\({}^{1}\)Laboratory for Computational Auditory Perception, Johns Hopkins University, Baltimore, USA \({}^{2}\)Center for Language and Speech Processing, Johns Hopkins University, Baltimore, USA \({}^{3}\)The Chinese University of Hong Kong, Hong Kong, China Target sound extraction, diffusion probabilistic model, generative model, noise schedule ## 1 Introduction There are countless sounds in the world that offer crucial information about our environment, including the melody of a violin during a concert and sirens in the streets. Our daily lives could be significantly enhanced if we were able to create listening devices that could filter out unwanted sounds and focus on the sounds we want to hear. In recent years, machine hearing has studied target sound extraction and removal applications, which aim to identify specific speakers [1], musical instruments [2], and sound events [3, 4, 5, 6]. Among them, the extraction of sound events is much more challenging than the others because of the wide range of sounds, such as animal noises, baby cries, and telephone calls. This work addresses the problem of target sound extraction (TSE). TSE aims to separate the sound of a specific sound event class from a mixed audio signal given a target sound class [5, 7, 8]. Researchers have explored the challenges of new classes and weakly-labelled data, with some proposing solutions such as combining one-hot-based and enrollment-based target sound extraction [4], weakly-supervised sound separation [6], and random sound mixing [5]. These methods are based on discriminative models, which minimize the difference between estimated audio and target audio. They can produce good separation for non-overlapping regions but always suffer severe performance drops when addressing overlapping regions. Indeed, overlap often occurs in real-world scenarios, making it one of the key issues that needs to be addressed in TSE. Wang _et al._[9] propose a TSE method utilizing timestamp information with a target-weighted loss function. However, this system requires an additional accurate detection network, and the discriminative model still struggles to separate overlaps. Unlike discriminative methods, generative modelling aims to match the distribution of the signals themselves; it can approximate complex data distributions and has the potential to produce more natural audio.
Diffusion probabilistic models (DPMs) have recently become increasingly popular due to their remarkable performance and reliable training. In particular, the intersection of DPMs and audio signal generation and synthesis tasks, such as neural vocoders [10], voice conversion [11], and singing voice synthesis [12], has seen significant progress. For speech enhancement and separation, CDiffuSE [13] is a DPM-based speech enhancement model designed to remove the environmental noise directly during the reverse stage of the DPM, which essentially performs a discriminative task. SGMSE [14] is a purely generative model which demonstrates measurable advancements for speech enhancement. Scheibler _et al._ propose a source separation method [15] based on score-matching of a stochastic differential equation with higher perceptual quality than discriminative methods. However, to the best of our knowledge, the application of DPMs in TSE has not been explored. In this paper, we first introduce a DPM-based generative method for TSE, called DPM-TSE. This method can better extract the target sound in the overlapping regions than discriminative methods; however, it might introduce additional noise, especially in non-target sound areas, compromising the purity of predictions. To overcome this problem, we apply a correction method for noise schedules and sampling steps in the DPM. We conduct experiments on the FSD Kaggle 2018 dataset [16], and objective measures show that the perceptual quality of DPM-TSE is much better than that of state-of-the-art discriminative models. Subjective evaluations consistently show a preference among human listeners for the audio extracted via DPM-TSE, underscoring its heightened efficacy in extracting target sounds and eliminating irrelevant sounds. ## 2 Methodology ### Diffusion Probabilistic Model Diffusion probabilistic models include a forward and a backward process. The forward process gradually adds Gaussian noise to the data, commonly based on a manually-defined variance schedule \(\beta_{1},\ldots,\beta_{T}\): \[q\left(x_{1:T}\mid x_{0}\right):=\prod_{t=1}^{T}q\left(x_{t}\mid x_{t-1}\right) \tag{1}\] \[q\left(x_{t}\mid x_{t-1}\right):=\mathcal{N}\left(x_{t};\sqrt{1-\beta_{t}}x_{t-1},\beta_{t}\mathbf{I}\right) \tag{2}\] The forward process allows sampling \(x_{t}\) at an arbitrary timestep \(t\) in closed form: \[q\left(x_{t}\mid x_{0}\right):=\mathcal{N}\left(x_{t};\sqrt{\bar{\alpha}_{t}}x_{0},\left(1-\bar{\alpha}_{t}\right)\mathbf{I}\right) \tag{3}\] Equivalently: \[x_{t}:=\sqrt{\bar{\alpha}_{t}}x_{0}+\sqrt{1-\bar{\alpha}_{t}}\epsilon,\quad\text{ where }\epsilon\sim\mathcal{N}(\mathbf{0},\mathbf{I}) \tag{4}\] where \(\alpha_{t}:=1-\beta_{t}\) and \(\bar{\alpha}_{t}:=\prod_{s=1}^{t}\alpha_{s}\). Diffusion models learn the reverse process to recover information step by step. In this way, a DPM can generate new data from random Gaussian noise.
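For concreteness, a small numpy sketch of the closed-form forward sampling in Eq. (4); the linear \(\beta\) schedule below is an assumption for illustration (the actual schedule values are given in Section 3.2):

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)       # assumed linear variance schedule
alphas_bar = np.cumprod(1.0 - betas)     # bar(alpha)_t

def q_sample(x0, t, rng):
    """Draw x_t ~ q(x_t | x_0) in closed form, Eq. (4); t is 0-indexed."""
    eps = rng.normal(size=x0.shape)
    x_t = np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps
    return x_t, eps

rng = np.random.default_rng(0)
x0 = rng.normal(size=(64, 1000))         # e.g. a 64-bin mel-spectrogram
x_t, eps = q_sample(x0, t=500, rng=rng)
```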
When \(\beta_{t}\) is small, the reverse step is also found to be Gaussian: \[p_{\theta}\left(x_{0:T}\right):=p\left(x_{T}\right)\prod_{t=1}^{T}p_{\theta}\left(x_{t-1}\mid x_{t}\right) \tag{5}\] \[p_{\theta}\left(x_{t-1}\mid x_{t}\right):=\mathcal{N}\left(x_{t-1};\tilde{\mu}_{t},\tilde{\beta}_{t}\mathbf{I}\right) \tag{6}\] where the variance \(\tilde{\beta}_{t}\) can be calculated from the forward process posteriors: \(\tilde{\beta}_{t}:=\frac{1-\bar{\alpha}_{t-1}}{1-\bar{\alpha}_{t}}\beta_{t}\). In most previous DPMs, neural networks are used to predict the noise \(\epsilon\), since: \[\tilde{\mu}_{t}:=\frac{1}{\sqrt{\alpha_{t}}}\left(x_{t}-\frac{\beta_{t}}{\sqrt{1-\bar{\alpha}_{t}}}\epsilon\right) \tag{7}\] ### Corrected Noise Schedule and Sampling Steps The original noise schedule commonly used in DPMs leads to a non-zero signal-to-noise ratio (SNR) at the last timestep \(T\), where the SNR can be calculated as: \[\mathrm{SNR}(t):=\frac{\bar{\alpha}_{t}}{1-\bar{\alpha}_{t}} \tag{8}\] In the field of image generation, this problem is assumed to limit the generated images to have plain medium brightness, making it difficult to generate completely dark or white image content [17]. When it comes to TSE, the extracted target sound often contains many silent regions. Therefore, a non-zero terminal SNR might prevent the model from generating completely silent frames, impairing the purity and overall performance of sound extraction. Following [17], we adjust existing noise schedules to enforce zero terminal SNR by keeping \(\sqrt{\bar{\alpha}_{1}}\) unchanged, changing \(\sqrt{\bar{\alpha}_{T}}\) to zero, and linearly rescaling \(\sqrt{\bar{\alpha}_{t}}\) for the intermediate steps \(t\in\{2,\ldots,T-1\}\). When the SNR is zero at the terminal step, it becomes meaningless to predict the noise \(\epsilon\), as the input and output become the same. Therefore, the neural network is switched to predict the velocity \(v\) instead: \[v_{t}:=\sqrt{\bar{\alpha}_{t}}\epsilon-\sqrt{1-\bar{\alpha}_{t}}x_{0} \tag{9}\] \[\epsilon=\sqrt{\bar{\alpha}_{t}}v+\sqrt{1-\bar{\alpha}_{t}}x_{t} \tag{10}\] According to (4) and (7), the backward process is then performed by the following functions: \[x_{0}:=\sqrt{\bar{\alpha}_{t}}x_{t}-\sqrt{1-\bar{\alpha}_{t}}v_{t} \tag{11}\] \[\tilde{\mu}_{t}:=\frac{\sqrt{\bar{\alpha}_{t-1}}\beta_{t}}{1-\bar{\alpha}_{t}}x_{0}+\frac{\sqrt{\alpha_{t}}\left(1-\bar{\alpha}_{t-1}\right)}{1-\bar{\alpha}_{t}}x_{t} \tag{12}\] At the terminal step, the neural network with \(v\) prediction now predicts the mean of the data distribution under the given conditions. Additionally, the diffusion sampler always starts from the last timestep during inference. Figure 1: The inference framework of DPM-TSE. ### DPM-TSE Framework As shown in Figure 1, DPM-TSE comprises two modules: a diffusion model for generating the log-mel spectrogram of the target sound conditioned on the mixture audio and the target sound token, and a neural vocoder for time-domain signal reconstruction. The neural network \(v_{\theta}(x_{t},m,c,t)\) with parameters \(\theta\) in the diffusion model is used to predict the velocity \(v_{t}\) given the noisy target sound \(x_{t}\), the audio mixture \(m\), the one-hot target sound token \(c\), and the corresponding diffusion step \(t\). The diffusion step \(t\) is encoded by a sinusoidal position embedding [18].
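Returning briefly to the schedule correction of Section 2.2: the rescaling to zero terminal SNR and the \(v\)-target of Eq. (9) can be sketched in a few lines. This follows the recipe the text attributes to reference [17]; the linear \(\beta\) schedule is an assumed example:

```python
import numpy as np

def enforce_zero_terminal_snr(betas):
    """Rescale sqrt(bar(alpha)_t) so that SNR(T) = 0 while keeping
    sqrt(bar(alpha)_1) unchanged, as described above (cf. [17])."""
    ab_sqrt = np.sqrt(np.cumprod(1.0 - betas))
    a1, aT = ab_sqrt[0], ab_sqrt[-1]
    ab_sqrt = (ab_sqrt - aT) * a1 / (a1 - aT)  # shift last to 0, keep first fixed
    ab = ab_sqrt ** 2
    alphas = ab / np.concatenate(([1.0], ab[:-1]))
    return 1.0 - alphas

betas = enforce_zero_terminal_snr(np.linspace(1e-4, 0.02, 1000))
ab = np.cumprod(1.0 - betas)
assert np.isclose(ab[-1], 0.0)                 # zero terminal SNR, Eq. (8)

def v_target(x0, eps, t):
    """Velocity target of Eq. (9) for training with v prediction."""
    return np.sqrt(ab[t]) * eps - np.sqrt(1.0 - ab[t]) * x0
```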
The architecture of the diffusion network is based on a U-Net [19] consisting of 4 downsampling blocks and 4 upsampling blocks, each of which includes 2 convolutional blocks for local feature extraction and 2 attention blocks for capturing global time-frequency dependencies. The HiFi-GAN vocoder [20] trained on AudioSet [21] is employed as the neural vocoder for universal audio waveform reconstruction. ## 3 Experimental Setups ### Dataset Following [8, 9], we formulate datasets composed of synthetic sound event mixtures using the Freesound Dataset Kaggle 2018 corpus (FSD) [16]. This corpus encompasses a wide variety of 41 sound event categories ranging from human-produced sounds to musical instruments and object noises. Audio clips in the FSD have durations varying from 0.3 to 30 seconds. We generate 10-second audio mixtures. Each mixture incorporates one target sound and 1-3 interfering sounds randomly selected from the FSD. These are then superimposed at arbitrary time points over a 10-second background noise, which we obtain from the DCASE 2019 Challenge's acoustic scene classification task [22]. The signal-to-noise ratio (SNR) for each foreground sound is randomly set within a range of -5 to 10 dB. To optimize computational efficiency, all audio clips are down-sampled to 16 kHz. The dataset is partitioned into training, validation, and testing sets, containing 47,356, 16,000, and 16,000 samples respectively. ### DPM-TSE Setups The default U-Net model in DPM-TSE has 4 downsampling and 4 upsampling blocks configured with 128, 256, 512, and 512 channels respectively, totaling 106.40M parameters. The larger model variant has channel configurations of 194, 384, 768, and 768, with 239.30M total parameters. A one-hot vector is used for each target event, with an embedding of 256 hidden units. The mel-spectrogram is used as the training target of the U-Net since it provides compact acoustic features and has been successfully used in many audio tasks [23]. In our experiments, we use 64-dimensional mel-spectrograms with a window size of 64 ms and a hop size of 10 ms, and we zero-pad mel-spectrograms if the number of frames is not a multiple of 4. We use randomly segmented mel-spectrogram clips containing part of the target sounds for training. The model is trained using the Adam optimizer with a learning rate of 0.0001, a weight decay of 0.0001, a batch size of 24, and 150 epochs. The default DPM-TSE model uses the corrected schedule and sampling steps and is trained with \(v\) prediction. The diffusion steps and inference steps for the default DPM-TSE are 1000 and 50, and the corresponding variance is set from 0.0001 to 0.02. For the DPM-TSE with 100 diffusion steps and 30 inference steps, the variance is set from 0.0001 to 0.06. ### Baselines We utilize two recent TSE models, namely WaveFormer and Tim-TSENet, with the same settings as their original implementations, as our baselines. WaveFormer [24] is a time-domain source separation model based on Conv-TasNet [25], which incorporates transformer blocks. Tim-TSENet [9] proposes an STFT-based TSE model with a masking strategy similar to that used in Conv-TasNet. For a fair comparison, we also tried a mel-spectrogram-based Tim-TSENet and an STFT-based DPM-TSE 2. The mel-spectrogram-based Tim-TSENet exhibited a performance degradation due to the difficulty of introducing a time-domain loss function using the inverse STFT as in the original Tim-TSENet. Meanwhile, the STFT-based DPM-TSE suffered from significant performance degradation and excessive computational complexity.
Footnote 2: Results of other attempts: [https://flu-lcap.github.io/DPM-TSE/](https://flu-lcap.github.io/DPM-TSE/). ### Evaluation Metrics The primary objective of this study is to enhance the auditory quality of the output generated through TSE. As such, we opt for several perceptual evaluations and a subjective assessment to gauge the performance of the target sound extraction models. We steer away from relying solely on objective measures such as the Signal-to-Distortion Ratio (SDR) [26], since existing objective metrics for source separation are imperfect proxies for human auditory perception, as highlighted in previous research [27, 28]. **Objective metrics:** We use two automatic evaluation functions: (1) **ViSQOL**[29] is an algorithm originally designed to predict the quality of speech signals, and has since been adapted to assess the quality of audio signals by approximating human perceptual responses on a five-point mean opinion score scale. (2) **CDPAM**[30] is a perceptual audio metric based on a deep neural network that correlates well with human subjective ratings across sound quality assessment tasks, measuring audio similarity by the distance between deep features. **Human evaluation:** For the subjective evaluation, 15 participants with recording or music production experience were recruited to evaluate the perceptual listening quality of the audio predicted by the different TSE models. We randomly selected one sample from each of the 41 sound categories in the test set. Each subject was asked to evaluate 20 randomly assigned audio pairs for each model, and each audio pair contains both a ground truth and a model prediction for the extracted sound. They were given two questions for each audio pair: (1) **Extraction: Does the generated audio contain everything from the reference audio?** Rating from 1 to 5, where 1 means that the content of the reference audio cannot be heard at all in the generated audio, and 5 means that the generated audio completely contains everything from the reference audio. (2) **Purity: Does the generated audio only have the sound from the reference audio?** Rating from 1 to 5, where 1 means that it is pretty obvious that the generated audio has a lot of sound that the reference audio doesn't have, and 5 means that the generated audio only has the sound corresponding to the reference audio and other sounds cannot be detected. ## 4 Results The results in Table 1 demonstrate that DPM-TSE achieves the best performance in both the subjective and objective experiments. The key observations include: (1) DPM-TSE has a promising performance in localizing and recovering the target sound. (2) DPM-TSE shows a significant advantage in producing cleaner target sound, while Tim-TSENet and WaveFormer fail to remove non-target sound very well, especially in regions where the target overlaps with other sounds. In Fig. 2, we further explore the performance of target sound extraction in different sound categories based on the objective metrics. All three models show good results on short-duration events (like finger snapping, tambourine, cowbell and hi-hat) but perform poorly on long-duration complex events (like bus, saxophone, chime and flute). CDPAM and ViSQOL have similar distributions across the majority of classes. In most categories, DPM-TSE demonstrates pronounced advantages. In addition, we conduct an ablation study on noise schedule methods, the number of training and inference steps, and model size.
As shown in Table 2, the proposed corrected noise schedule significantly improves the performance of the model. We find that the DPM-TSE using the original noise schedule produces additional noise, which is prominently noticeable in non-target sound regions. The DPM-TSE with the larger model shows a performance degradation, which may be due to the limited size of the dataset. Comparing 100 diffusion steps with 1000 diffusion steps, we find that the DPM-TSE model with fewer diffusion and inference steps still achieves relatively good performance and can be used in situations where faster inference is preferred. ## 5 Conclusion In this paper, we propose a DPM-based generative method for TSE, which is quite effective at extracting target sounds and removing irrelevant sounds. In future work, our focus will pivot towards (1) enhancing the sampling speed of DPM-TSE and (2) delving into innovative avenues including zero-shot TSE, text-guided TSE, and audio editing techniques. \begin{table} \begin{tabular}{l c c|c c} \hline **Model** & **Schedule** & **Steps** & **ViSQOL**\(\uparrow\) & **CDPAM**\(\downarrow\) \\ \hline Base & Default & 1000/50 & \(2.39\pm 0.06\) & \(0.34\pm 0.02\) \\ Base & Corrected & 100/30 & \(2.43\pm 0.05\) & \(0.25\pm 0.01\) \\ **Base** & **Corrected** & **1000/50** & \(\mathbf{2.53\pm 0.05}\) & \(\mathbf{0.22\pm 0.01}\) \\ Large & Corrected & 1000/50 & \(2.38\pm 0.05\) & \(0.24\pm 0.01\) \\ \hline \end{tabular} \end{table} Table 2: Results of the ablation study on DPM-TSE based on objective scores with their 95% confidence intervals. \begin{table} \begin{tabular}{l c c|c c|c c} \hline **Method** & **ViSQOL**\(\uparrow\) & **CDPAM**\(\downarrow\) & **ViSQOL-T**\(\uparrow\) & **CDPAM-T**\(\downarrow\) & **Extraction**\(\uparrow\) & **Purity**\(\uparrow\) \\ \hline WaveFormer [24] & \(1.96\pm 0.05\) & \(0.38\pm 0.02\) & \(1.78\pm 0.05\) & \(0.50\pm 0.02\) & \(3.38\pm 0.17\) & \(2.61\pm 0.19\) \\ TSENet [9] & \(2.32\pm 0.05\) & \(0.31\pm 0.02\) & \(2.04\pm 0.05\) & \(0.42\pm 0.02\) & \(3.80\pm 0.18\) & \(3.19\pm 0.21\) \\ **DPM-TSE** & \(\mathbf{2.53\pm 0.05}\) & \(\mathbf{0.22\pm 0.01}\) & \(\mathbf{2.18\pm 0.05}\) & \(\mathbf{0.38\pm 0.03}\) & \(\mathbf{4.19\pm 0.14}\) & \(\mathbf{3.74\pm 0.18}\) \\ \hline \end{tabular} \end{table} Table 1: Objective and subjective scores with their 95% confidence intervals. ViSQOL-T and CDPAM-T are calculated with the target sound regions, while other scores are calculated with the whole audio. Figure 2: Distribution of objective performance by sound category in ascending order of average sound event duration.
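As an aside, the mixture-construction recipe of Section 3.1 is easy to sketch; the helper name and the toy signals below are ours, and only the 10-second length, the 16 kHz rate, and the \(-5\) to \(10\) dB foreground SNR range come from the text:

```python
import numpy as np

def mix_at_snr(background, event, snr_db, offset):
    """Overlay `event` on `background` at `offset` with the given SNR (dB)."""
    mixture = background.copy()
    seg = mixture[offset:offset + len(event)]
    p_bg = np.mean(seg ** 2) + 1e-12        # background power under the event
    p_ev = np.mean(event ** 2) + 1e-12      # event power before scaling
    gain = np.sqrt(p_bg / p_ev * 10 ** (snr_db / 10))
    mixture[offset:offset + len(event)] += gain * event
    return mixture

rng = np.random.default_rng(0)
sr = 16000
background = 0.01 * rng.normal(size=10 * sr)          # 10 s scene noise
event = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)  # 1 s toy "target"
snr_db = rng.uniform(-5, 10)                          # SNR in [-5, 10] dB
offset = int(rng.integers(0, len(background) - len(event)))
mixture = mix_at_snr(background, event, snr_db, offset)
```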
2303.03330
On the number of parts in all partitions enumerated by the Rogers-Ramanujan identities
The celebrated Rogers-Ramanujan identities equate the number of integer partitions of $n$ ($n\in\mathbb N_0$) with parts congruent to $\pm 1 \pmod{5}$ (respectively $\pm 2 \pmod{5}$) and the number of partitions of $n$ with super-distinct parts (respectively super-distinct parts greater than $1$). In this paper, we establish companion identities to the Rogers-Ramanujan identities on the number of parts in all partitions of $n$ of the aforementioned types, in the spirit of earlier work by Andrews and Beck on a partition identity of Euler.
Cristina Ballantine, Amanda Folsom
2023-03-06T17:59:38Z
http://arxiv.org/abs/2303.03330v1
# On the number of parts in all partitions enumerated by the Rogers-Ramanujan identities ###### Abstract. The celebrated Rogers-Ramanujan identities equate the number of integer partitions of \(n\) (\(n\in\mathbb{N}_{0}\)) with parts congruent to \(\pm 1\pmod{5}\) (respectively \(\pm 2\pmod{5}\)) and the number of partitions of \(n\) with super-distinct parts (respectively super-distinct parts greater than \(1\)). In this paper, we establish companion identities to the Rogers-Ramanujan identities on the number of parts in all partitions of \(n\) of the aforementioned types, in the spirit of earlier work by Andrews and Beck on a partition identity of Euler. Key words and phrases: Rogers-Ramanujan identities, partitions, Beck-type identities, \(q\)-series 2010 Mathematics Subject Classification: 11P84, 05A17, 05A19, 33D15 ## 1. Introduction The Rogers-Ramanujan identities are a pair of identities which assert that the number of integer partitions of \(n\) (\(n\in\mathbb{N}_{0}\)) with parts congruent to \(\pm 1\pmod{5}\) (respectively \(\pm 2\pmod{5}\)) equals the number of partitions of \(n\) with super-distinct parts (respectively super-distinct parts greater than \(1\)). Super-distinct parts are also referred to as \(2\)-distinct parts, and must differ by at least \(2\). The identities are typically expressed in analytic form, as \[\sum_{n=0}^{\infty}\frac{q^{n^{2}}}{(q;q)_{n}}=\frac{1}{(q;q^{5})_{\infty}(q^{4};q^{5})_{\infty}},\] \[\sum_{n=0}^{\infty}\frac{q^{n^{2}+n}}{(q;q)_{n}}=\frac{1}{(q^{2};q^{5})_{\infty}(q^{3};q^{5})_{\infty}},\] noting that the series and products appearing are the relevant partition generating functions. Here and throughout, the \(q\)-Pochhammer symbol is defined for \(n\in\mathbb{N}_{0}\cup\{\infty\}\) by \[(a;q)_{n}:=\prod_{j=0}^{n-1}(1-aq^{j})=(1-a)(1-aq)(1-aq^{2})\cdots(1-aq^{n-1}).\] For the remainder of the article, we assume \(|q|<1\) so that all series converge absolutely. The Rogers-Ramanujan identities have an extensive and rich history. Rogers and Ramanujan independently discovered the identities in the late 19th century/early 20th century, and Rogers provided the first known proof [17]. Rogers and Ramanujan later published a joint proof [18], around the same time that Schur independently rediscovered and proved the identities [19]. Since then, the identities have played important roles in and have made connections to diverse areas, including combinatorics, \(q\)-hypergeometric series, Lie algebras, modular forms, statistical mechanics, and more (see, e.g., [1, 3, 10, 11, 12, 13, 14, 20, 21]). Like the Rogers-Ramanujan identities, many other identities in the subject of integer partitions equate the number of partitions of \(n\) with parts belonging to a certain set and the number of partitions of \(n\) satisfying a particular condition. Perhaps the oldest such result is Euler's identity, which equates the number of partitions of \(n\) with odd parts and the number of partitions of \(n\) with distinct parts. Centuries later in 2017, Beck made the following related conjecture concerning the number of parts in all partitions of the types appearing in Euler's identity, which we state as follows [16], [4, Conjecture].
**Conjecture 1** (Beck).: _The excess of the number of parts in all partitions of \(n\) with odd parts over the number of parts in all partitions of \(n\) with distinct parts equals the number of partitions of \(n\) with exactly one even part (possibly repeated)._ Andrews [4] quickly proved Beck's conjecture, and additionally showed that this excess also equals the number of partitions of \(n\) with exactly one part repeated (and all other parts distinct). Yang [23] and Ballantine-Bielak [7] also provided independent combinatorial proofs of Beck's conjecture. This work on Beck's conjecture on the number of parts in all partitions of the types appearing in Euler's identity has been followed by a number of generalizations and Beck-type companion identities to other well known identities, such as [5, 8, 15, 23]. In [9], Beck-type identities are generalized to all Euler pairs of order \(r\) as defined by Subbarao in [22]. In this paper, we state and prove Beck-type companion identities to the Rogers-Ramanujan identities on the excess of the number of parts in all partitions of \(n\) with parts congruent to \(\pm 1\pmod{5}\) (respectively \(\pm 2\pmod{5}\)) over the number of parts in all partitions of \(n\) with super-distinct parts (respectively super-distinct parts greater than \(1\)). These results are stated in Theorem 3.1 and Theorem 4.1 below, and we give proofs which are both analytic and combinatorial in nature in the sections that follow. ## 2. Preliminaries In this section, we give some background and preliminaries on partitions and \(q\)-series. ### Integer partitions Let \(n\in\mathbb{N}_{0}\). A _partition_ of \(n\), denoted \(\lambda=(\lambda_{1},\lambda_{2},\ldots,\lambda_{j})\), is a non-increasing sequence of positive integers \(\lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{j}\) called _parts_ that add up to \(n\). We refer to \(n\) as the _size_ of \(\lambda\). The length of \(\lambda\) is the number of parts of \(\lambda\), denoted by \(\ell(\lambda)\). We abuse notation and use \(\lambda\) to denote either the multiset of its parts or the non-increasing sequence of parts. We write \(a\in\lambda\) to mean the positive integer \(a\) is a part of \(\lambda\). We write \(|\lambda|\) for the size of \(\lambda\) and \(\lambda\vdash n\) to mean that \(\lambda\) is a partition of size \(n\). For a pair of partitions \((\lambda,\mu)\) we also write \((\lambda,\mu)\vdash n\) to mean \(|\lambda|+|\mu|=n\). We use the convention that \(\lambda_{k}=0\) for all \(k>\ell(\lambda)\). When convenient we will also use the exponential notation for parts in a partition: the exponent of a part is the multiplicity of the part in the partition. This notation will be used mostly for rectangular partitions. We write \((a^{b})\) for the partition consisting of \(b\) parts equal to \(a\). The _Ferrers diagram_ of a partition \(\lambda=(\lambda_{1},\lambda_{2},\ldots,\lambda_{j})\) is an array of left justified boxes such that the \(i\)th row from the top contains \(\lambda_{i}\) boxes. We abuse notation and use \(\lambda\) to mean a partition or its Ferrers diagram. For example, the Ferrers diagram of \(\lambda=(5,2,2,1)\) is shown in Figure 1. Given a partition \(\lambda\), its _conjugate_\(\lambda^{\prime}\) is the partition for which the rows in its Ferrers diagram are precisely the columns in the Ferrers diagram of \(\lambda\). For example, the conjugate of \(\lambda=(5,2,2,1)\) is \(\lambda^{\prime}=(4,3,1,1,1)\). 
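Both Conjecture 1 and the first Rogers-Ramanujan identity are easy to sanity-check by brute force for small \(n\). The sketch below defines a simple partition generator (a convenience helper of ours, reused by later snippets in this section) and verifies both statements:

```python
def partitions(n, max_part=None):
    """Yield all partitions of n as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def is_super_distinct(p):
    return all(p[i] - p[i + 1] >= 2 for i in range(len(p) - 1))

for n in range(1, 16):
    # Conjecture 1 (Beck): excess of parts, odd-part minus distinct-part.
    odd_parts = sum(len(p) for p in partitions(n) if all(k % 2 for k in p))
    distinct_parts = sum(len(p) for p in partitions(n) if len(set(p)) == len(p))
    one_even = sum(1 for p in partitions(n)
                   if len({k for k in p if k % 2 == 0}) == 1)
    assert odd_parts - distinct_parts == one_even
    # First Rogers-Ramanujan identity: parts == +-1 (mod 5) vs. super-distinct.
    mod5 = sum(1 for p in partitions(n) if all(k % 5 in (1, 4) for k in p))
    assert mod5 == sum(1 for p in partitions(n) if is_super_distinct(p))
```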
By the sum of the partitions \(\lambda=(\lambda_{1},\lambda_{2},\ldots,\lambda_{j})\) and \(\mu=(\mu_{1},\mu_{2},\ldots,\mu_{k})\) we mean the partition \(\lambda+\mu=(\lambda_{1}+\mu_{1},\lambda_{2}+\mu_{2},\ldots,\lambda_{\ell}+\mu_{\ell})\), where \(\ell=\max\{j,k\}\). As mentioned in Section 1, we say that the parts of a partition are _super-distinct_ if any two parts differ by at least \(2\). We refer to partitions with super-distinct parts as super-distinct partitions. Since our goal is to study the number of parts in partitions, we introduce the notion of marked partitions. A _marked partition_ is a partition with a single part marked. Note that \((5,2^{*},2,1)\) and \((5,2,2^{*},1)\) are different marked partitions. Then the number of parts in all partitions of \(n\) satisfying certain conditions is equal to the number of marked partitions of \(n\) satisfying the same conditions. If \(\mathcal{M}(n)\) is the set of marked partitions of size \(n\) whose parts satisfy certain conditions, we have a one-to-one correspondence between \(\mathcal{M}(n)\) and the set of pairs of partitions \((\lambda,(a^{b}))\vdash n\), where \(b\) is a positive integer and \(\lambda\) and \((a)\) are partitions whose parts satisfy the conditions of \(\mathcal{M}(n)\). To explain this, if \(\mu\in\mathcal{M}(n)\) has marked part \(\mu_{j}=a\) and \(\mu_{j}\) is the \(b\)th part equal to \(a\), we remove from \(\mu\) the first \(b\) parts equal to \(a\) to obtain a partition \(\lambda\). Then \(\mu\leftrightarrow(\lambda,(a^{b}))\). Thus, the number of parts in all partitions of \(n\) satisfying certain conditions is equal to the number of pairs of partitions \((\lambda,(a^{b}))\vdash n\), \(a,b>0\), such that \(a\) and the parts of \(\lambda\) satisfy the same conditions. For more details on partitions, we refer the reader to [2, 6]. ### Some results on \(q\)-series The \(q\)-binomial coefficients \(\left[\begin{array}{c}A+k\\ k\end{array}\right]_{q}\) may be defined as the generating function for the number of partitions of \(n\) with at most \(k\) parts, each part at most \(A\) [6, p. 67], from which it follows that \[\sum_{0\leq n_{1}\leq n_{2}\cdots\leq n_{k}\leq A}q^{n_{1}+n_{2}+\cdots+n_{k}}=\left[\begin{array}{c}A+k\\ k\end{array}\right]_{q}, \tag{1}\] \[\left[\begin{array}{c}A+k\\ k\end{array}\right]_{q}=\left[\begin{array}{c}A+k\\ A\end{array}\right]_{q}, \tag{2}\] and \[\left[\begin{array}{c}A+k\\ k\end{array}\right]_{q}=0,\text{ if }A<0,\text{ }k<0,\text{ or }A+k<0. \tag{3}\] Figure 1. A Ferrers diagram The \(q\)-binomial series [6, Theorem 9] gives the following generating function for the \(q\)-binomial coefficients (\(|t|<1\)) \[\sum_{k=0}^{\infty}\left[\begin{array}{c}A+k\\ k\end{array}\right]_{q}t^{k}=\frac{1}{(t;q)_{A+1}}. \tag{4}\] Another \(q\)-series identity we will make use of is \[\sum_{A\leq n_{1}\leq n_{2}\cdots\leq n_{k}}q^{n_{1}+n_{2}+\cdots+n_{k}}=\frac{q^{Ak}}{(q;q)_{k}}, \tag{5}\] which can be verified directly analytically, or combinatorially by viewing a partition \(\lambda\) with \(k\) parts greater than or equal to \(A>0\) as \(\lambda=\eta+(A^{k})\) where \(\eta\) is a partition with at most \(k\) parts. If \(A=0\), (5) is the generating function for partitions with at most \(k\) parts. ## 3. The number of parts in the first Rogers-Ramanujan identity Our first result, Theorem 3.1, gives the excess in the number of parts of partitions involved in the first Rogers-Ramanujan identity. We consider the empty partition a super-distinct partition.
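Before turning to the main theorem, the box-counting description of the \(q\)-binomial coefficient in (1) can also be checked numerically; the recursion used below is the classical \(q\)-Pascal rule \(\left[\begin{smallmatrix}m\\ k\end{smallmatrix}\right]_{q}=\left[\begin{smallmatrix}m-1\\ k-1\end{smallmatrix}\right]_{q}+q^{k}\left[\begin{smallmatrix}m-1\\ k\end{smallmatrix}\right]_{q}\), stated here as an aside rather than quoted from the text:

```python
def gauss(m, k):
    """Coefficient list of the q-binomial [m choose k]_q."""
    if k < 0 or k > m:
        return [0]
    if k == 0 or k == m:
        return [1]
    a, b = gauss(m - 1, k - 1), gauss(m - 1, k)
    out = [0] * max(len(a), len(b) + k)
    for i, c in enumerate(a):
        out[i] += c
    for i, c in enumerate(b):
        out[i + k] += c          # multiplication by q^k shifts coefficients
    return out

# By (1), the coefficient of q^n in [A+k choose k]_q counts partitions of n
# with at most k parts, each part at most A; e.g. A = k = 2:
assert gauss(4, 2) == [1, 1, 2, 1, 1]
```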
**Theorem 3.1**.: _The excess of the number of parts in all partitions of \(n\) with parts congruent to \(\pm 1\pmod{5}\) over the number of parts in all super-distinct partitions of \(n\) equals the number of pairs of partitions \((\lambda,(a^{b}))\) satisfying all of the following conditions: \(\lambda\) is a super-distinct partition of \(n-ab\), \(a\equiv\pm 1\pmod{5}\), \(b\geq 1\), and if \(a=1\), then at least one of \(b-1,b,b+1\) is a part of \(\lambda\)._ Before we prove the theorem we note that the original Beck identity (Conjecture 1) can be reformulated in terms of pairs of partitions as in Theorem 3.1 above as follows. The excess in the total number of parts in all partitions of \(n\) into odd parts over the total number of parts in all partitions of \(n\) into distinct parts equals the number of pairs of partitions \((\lambda,(a^{b}))\vdash n\), where \(\lambda\) is a partition into odd parts, \(a,b>0\) and \(a\) is even. This is also the number of pairs \((\lambda,(a^{b}))\vdash n\), where \(\lambda\) is a partition into distinct parts, \(a>0\), \(a\not\in\lambda\), and \(b\geq 2\). **Example 1**.: Let \(n=4\). The partitions with parts congruent to \(\pm 1\pmod{5}\) are (4) and \((1,1,1,1)\) and thus there are five parts in these partitions. The partitions into super-distinct parts are (4) and \((3,1)\) and there are three parts in these partitions. There are two pairs of partitions satisfying the conditions of the theorem: \(((2),(1^{2}))\) and \((\emptyset,(4))\). We provide two proofs of Theorem 3.1 below, the first of which is analytic, and the second of which is combinatorial. ### Analytic proof of Theorem 3.1 Let \(a(n,m)\) denote the number of partitions of \(n\) with parts congruent to \(\pm 1\pmod{5}\) and exactly \(m\) parts. Then, the generating function for \(a(n,m)\) is given by \[P_{1}(z;q):=\sum_{n\geq 0}\sum_{m\geq 0}a(n,m)z^{m}q^{n}=\frac{1}{(zq;q^{5})_{\infty}(zq^{4};q^{5})_{\infty}}.\] Similarly, if \(b(n,m)\) is the number of partitions of \(n\) with super-distinct parts and exactly \(m\) parts, the generating function for \(b(n,m)\) is \[R_{1}(z;q):=\sum_{n\geq 0}\sum_{m\geq 0}b(n,m)z^{m}q^{n}=\sum_{n=0}^{\infty}\frac{z^{n}q^{n^{2}}}{(q;q)_{n}}.\] Considering the difference of the derivatives of these functions with respect to \(z\) evaluated at \(1\), we obtain the generating function for the excess in the number of parts in all partitions of \(n\) with parts congruent to \(\pm 1\pmod{5}\) over the number of parts in all super-distinct partitions of \(n\). We have \[\frac{\partial}{\partial z}\Big{|}_{z=1}(P_{1}(z;q)-R_{1}(z;q))\] \[\qquad=\frac{1}{(q;q^{5})_{\infty}(q^{4};q^{5})_{\infty}}\left(\sum_{m=1}^{\infty}\frac{q^{5m-1}}{1-q^{5m-1}}+\frac{q^{5m-4}}{1-q^{5m-4}}\right)-\sum_{n=0}^{\infty}\frac{nq^{n^{2}}}{(q;q)_{n}}\] \[\qquad=\left(\sum_{n=0}^{\infty}\frac{q^{n^{2}}}{(q;q)_{n}}\right)\left(\sum_{m=1}^{\infty}\frac{q^{5m-1}}{1-q^{5m-1}}+\frac{q^{5m-4}}{1-q^{5m-4}}\right)-\sum_{n=0}^{\infty}\frac{nq^{n^{2}}}{(q;q)_{n}} \tag{6}\] \[\qquad=:T_{1}(q)-T_{2}(q).\] We next write down five different generating functions such that their sum is the generating function for the number of pairs of partitions \((\lambda,(a^{b}))\) with \(|\lambda|+ab=n\) satisfying all of the conditions given in Theorem 3.1. After doing so, we will prove that the resulting sum of generating functions is equal to \(T_{1}(q)-T_{2}(q)\).
We provide two proofs of Theorem 3.1 below, the first of which is analytic, and the second of which is combinatorial.

### Analytic proof of Theorem 3.1

Let \(a(n,m)\) denote the number of partitions of \(n\) with parts congruent to \(\pm 1\pmod{5}\) and exactly \(m\) parts. Then, the generating function for \(a(n,m)\) is given by

\[P_{1}(z;q):=\sum_{n\geq 0}\sum_{m\geq 0}a(n,m)z^{m}q^{n}=\frac{1}{(zq;q^{5})_{\infty}(zq^{4};q^{5})_{\infty}}.\]

Similarly, if \(b(n,m)\) is the number of partitions of \(n\) with super-distinct parts and exactly \(m\) parts, the generating function for \(b(n,m)\) is

\[R_{1}(z;q):=\sum_{n\geq 0}\sum_{m\geq 0}b(n,m)z^{m}q^{n}=\sum_{n=0}^{\infty}\frac{z^{n}q^{n^{2}}}{(q;q)_{n}}.\]

Considering the difference of the derivatives of these functions with respect to \(z\) evaluated at \(1\), we obtain the generating function for the excess of the number of parts in all partitions of \(n\) with parts congruent to \(\pm 1\pmod{5}\) over the number of parts in all super-distinct partitions of \(n\). We have

\[\frac{\partial}{\partial z}\Big{|}_{z=1}(P_{1}(z;q)-R_{1}(z;q))\]
\[\qquad=\frac{1}{(q;q^{5})_{\infty}(q^{4};q^{5})_{\infty}}\sum_{m=1}^{\infty}\left(\frac{q^{5m-1}}{1-q^{5m-1}}+\frac{q^{5m-4}}{1-q^{5m-4}}\right)-\sum_{n=0}^{\infty}\frac{nq^{n^{2}}}{(q;q)_{n}}\]
\[\qquad=\left(\sum_{n=0}^{\infty}\frac{q^{n^{2}}}{(q;q)_{n}}\right)\sum_{m=1}^{\infty}\left(\frac{q^{5m-1}}{1-q^{5m-1}}+\frac{q^{5m-4}}{1-q^{5m-4}}\right)-\sum_{n=0}^{\infty}\frac{nq^{n^{2}}}{(q;q)_{n}} \tag{6}\]
\[\qquad=:T_{1}(q)-T_{2}(q).\]

We next write down five different generating functions such that their sum is the generating function for the number of pairs of partitions \((\lambda,(a^{b}))\) with \(|\lambda|+ab=n\) satisfying all of the conditions given in Theorem 3.1. After doing so, we will prove that the resulting sum of generating functions is equal to \(T_{1}(q)-T_{2}(q)\).

**Case 1.** The generating function for the number of pairs of partitions \((\lambda,(1^{b}))\) such that \(\lambda\) is a super-distinct partition of \(n-b\), and \(b-1\in\lambda,b+1\not\in\lambda\), is

\[\sum_{b=1}^{\infty}q^{b}\sum_{m=1}^{\infty}\sum_{j=1}^{m}\sum_{\begin{subarray}{c}0\leq n_{1}\leq\cdots\leq n_{j-1}\leq n_{j}<n_{j+1}\leq\cdots\leq n_{m}\\ 2j-1+n_{j}=b-1\end{subarray}}q^{(1+n_{1})+(3+n_{2})+\cdots+(2m-1+n_{m})}, \tag{7}\]

which we explain as follows. Any super-distinct partition with \(m\geq 1\) parts is of the form \((2m-1+n_{m},2m-3+n_{m-1},\ldots,3+n_{2},1+n_{1})\), where \(0\leq n_{1}\leq n_{2}\leq\cdots\leq n_{m}\). We sum over all possible positions \(j\) of a specified part equal to \(b-1\), namely \(2j-1+n_{j}=b-1\) for \(1\leq j\leq m\). Since \(b+1\) cannot be a part of such a partition, the difference between the consecutive parts \(2j+1+n_{j+1}\) and \(2j-1+n_{j}=b-1\) must be at least \(3\); equivalently, \(n_{j}<n_{j+1}\). The size of the second partition \((1^{b})\) in a pair \((\lambda,(1^{b}))\) appears in the exponent of \(q^{b}\), and we sum over all possible \(b\geq 1\). We re-write the inner sum in (7) as

\[q^{m^{2}+b-2j}\left(\sum_{0\leq n_{1}\leq n_{2}\leq\cdots\leq n_{j-1}\leq b-2j}q^{n_{1}+n_{2}+\cdots+n_{j-1}}\right)\times\left(\sum_{b-2j+1\leq n_{j+1}\leq n_{j+2}\leq\cdots\leq n_{m}}q^{n_{j+1}+n_{j+2}+\cdots+n_{m}}\right),\]

where we have also used that \(1+3+5+\cdots+(2m-1)=m^{2}\). Using (1) and (5), this can be rewritten as

\[q^{m^{2}+b-2j}\left[\begin{array}{c}b-j-1\\ j-1\end{array}\right]_{q}\frac{q^{(b-2j+1)(m-j)}}{(q;q)_{m-j}}. \tag{8}\]

Using (8), the generating function in (7) becomes

\[\sum_{b=1}^{\infty}q^{b}\sum_{m=1}^{\infty}\sum_{j=1}^{m}q^{m^{2}+b-2j}\left[\begin{array}{c}b-j-1\\ j-1\end{array}\right]_{q}\frac{q^{(b-2j+1)(m-j)}}{(q;q)_{m-j}}\]
\[=\sum_{m=1}^{\infty}\sum_{j=1}^{m}\frac{q^{m^{2}-2j+(-2j+1)(m-j)}}{(q;q)_{m-j}}\sum_{b=1}^{\infty}\left[\begin{array}{c}b-j-1\\ b-2j\end{array}\right]_{q}q^{b(m-j+2)}\]
\[=\sum_{m=1}^{\infty}\sum_{j=1}^{m}\frac{q^{m^{2}+m+j}}{(q;q)_{m-j}}\sum_{b=0}^{\infty}\left[\begin{array}{c}b+j-1\\ b\end{array}\right]_{q}q^{b(m-j+2)}\]
\[=\sum_{m=1}^{\infty}\sum_{j=1}^{m}\frac{q^{m^{2}+m+j}}{(q;q)_{m-j}(q^{m-j+2};q)_{j}} \tag{9}\]
\[=\sum_{m=1}^{\infty}\frac{q^{m^{2}}}{(q;q)_{m+1}}\sum_{j=1}^{m}q^{m+j}(1-q^{m-j+1}),\]

where we have also used (2), (3) and (4).

**Case 2.** By an explanation similar to the one given in Case 1, the generating function for the number of pairs of partitions \((\lambda,(1^{b}))\) such that \(\lambda\) is a super-distinct partition of \(n-b\), and \(b+1\in\lambda,b-1\not\in\lambda\), is

\[\sum_{b=1}^{\infty}q^{b}\sum_{m=1}^{\infty}\sum_{j=1}^{m}\sum_{\begin{subarray}{c}0\leq n_{1}\leq\cdots\leq n_{j-1}<n_{j}\leq n_{j+1}\leq\cdots\leq n_{m}\\ 2j-1+n_{j}=b+1\end{subarray}}q^{(1+n_{1})+(3+n_{2})+\cdots+(2m-1+n_{m})}. \tag{10}\]

Arguing as in Case 1 and using (1)-(5), we obtain that this equals

\[\sum_{m=1}^{\infty}\frac{q^{m^{2}}}{(q;q)_{m+1}}\sum_{j=1}^{m}q^{m+j}(1-q^{m-j+1}). \tag{11}\]

**Case 3.** Similar to the previous cases, the generating function for the number of pairs of partitions \((\lambda,(1^{b}))\) such that \(\lambda\) is a super-distinct partition of \(n-b\), and \(b-1,b+1\in\lambda\), is

\[\sum_{b=1}^{\infty}q^{b}\sum_{m=1}^{\infty}\sum_{j=2}^{m}\sum_{\begin{subarray}{c}0\leq n_{1}\leq\cdots\leq n_{j-1}=n_{j}\leq n_{j+1}\leq\cdots\leq n_{m}\\ 2j-1+n_{j}=b+1\end{subarray}}q^{(1+n_{1})+(3+n_{2})+\cdots+(2m-1+n_{m})}. \tag{12}\]
Arguing as in Case 1 and using (1)-(5), we obtain that this equals

\[\sum_{m=1}^{\infty}\frac{q^{m^{2}}}{(q;q)_{m+1}}\sum_{j=2}^{m}q^{2j-2}(1-q^{m-j+1})(1-q^{m-j+2}). \tag{13}\]

**Case 4.** Similar to the previous cases, the generating function for the number of pairs of partitions \((\lambda,(1^{b}))\) such that \(\lambda\) is a super-distinct partition of \(n-b\), and \(b\in\lambda\), is

\[\sum_{b=1}^{\infty}q^{b}\sum_{m=1}^{\infty}\sum_{j=1}^{m}\sum_{\begin{subarray}{c}0\leq n_{1}\leq\cdots\leq n_{j-1}\leq n_{j}\leq n_{j+1}\leq\cdots\leq n_{m}\\ 2j-1+n_{j}=b\end{subarray}}q^{(1+n_{1})+(3+n_{2})+\cdots+(2m-1+n_{m})}. \tag{14}\]

Arguing as in Case 1 and using (1)-(5), we obtain that this equals

\[\sum_{m=1}^{\infty}\frac{q^{m^{2}}}{(q;q)_{m+1}}\sum_{j=1}^{m}q^{2j-1}(1-q^{m-j+1}). \tag{15}\]

**Case 5.** It is not difficult to see that the generating function for the number of pairs of partitions \((\lambda,(a^{b}))\) with \(a\equiv\pm 1\pmod{5}\) and \(a>1\) such that \(\lambda\) is a super-distinct partition of \(n-ab\) is

\[\left(\sum_{m=0}^{\infty}\frac{q^{m^{2}}}{(q;q)_{m}}\right)\left(\frac{q^{4}}{1-q^{4}}+\sum_{m=2}^{\infty}\left(\frac{q^{5m-1}}{1-q^{5m-1}}+\frac{q^{5m-4}}{1-q^{5m-4}}\right)\right), \tag{16}\]

using that \(R_{1}(1;q)\) is the generating function for super-distinct partitions.

Armed with the generating functions in Cases 1-5 above, we now complete the analytic proof of Theorem 3.1. The pairs of partitions described in Theorem 3.1 may be realized as a disjoint union of the pairs described in Cases 1-5 above. Thus, the generating function for the excess described in Theorem 3.1 may be realized as the sum of the generating functions given in (9), (11), (13), (15), and (16). We first add (9), (11), (13), and (15) to obtain:

\[2\sum_{m=1}^{\infty}\frac{q^{m^{2}}}{(q;q)_{m+1}}\sum_{j=1}^{m}q^{m+j}(1-q^{m-j+1})+\sum_{m=1}^{\infty}\frac{q^{m^{2}}}{(q;q)_{m+1}}\sum_{j=1}^{m-1}q^{2j}(1-q^{m-j})(1-q^{m-j+1})\]
\[\qquad+\sum_{m=1}^{\infty}\frac{q^{m^{2}}}{(q;q)_{m+1}}\sum_{j=1}^{m}q^{2j-1}(1-q^{m-j+1})\]
\[=\sum_{m=1}^{\infty}\frac{q^{m^{2}}}{(q;q)_{m+1}}\Bigg(\sum_{j=1}^{m-1}\left(2q^{m+j}(1-q^{m-j+1})+q^{2j}(1-q^{m-j})(1-q^{m-j+1})+q^{2j-1}(1-q^{m-j+1})\right)\]
\[\qquad\qquad\qquad\qquad+2q^{2m}(1-q)+q^{2m-1}(1-q)\Bigg)\]
\[=\sum_{m=1}^{\infty}\frac{q^{m^{2}}}{(q;q)_{m+1}}\Bigg(\sum_{j=1}^{m-1}\left(q^{2j}+q^{2j-1}-q^{m+j+1}-q^{2m+1}\right)+2q^{2m}(1-q)+q^{2m-1}(1-q)\Bigg)\]
\[=\sum_{m=1}^{\infty}\frac{q^{m^{2}}}{(q;q)_{m+1}}\Bigg(q\frac{1-q^{2m-2}}{1-q}-q^{m+2}\frac{1-q^{m-1}}{1-q}-(m-1)q^{2m+1}+2q^{2m}(1-q)+q^{2m-1}(1-q)\Bigg)\]
\[=\sum_{m=1}^{\infty}\frac{q^{m^{2}}}{(q;q)_{m+1}}\Bigg(q\frac{1-q^{m+1}}{1-q}-(m+1)q^{2m+1}\Bigg)\]
\[=\frac{q}{1-q}\sum_{m=1}^{\infty}\frac{q^{m^{2}}}{(q;q)_{m}}-\sum_{m=2}^{\infty}\frac{mq^{m^{2}}}{(q;q)_{m}} \tag{17}\]
\[=\frac{q}{1-q}\sum_{m=0}^{\infty}\frac{q^{m^{2}}}{(q;q)_{m}}-T_{2}(q).\]

Adding (16) to \(\frac{q}{1-q}\sum_{m=0}^{\infty}\frac{q^{m^{2}}}{(q;q)_{m}}\) from (17), we obtain \(T_{1}(q)\), which completes the proof.

**Remark 1**. One can also see that (9), (11), (13), and (15) are the respective generating functions for the pairs of partitions at the start of Cases 1-4 above by viewing partitions into \(m\) super-distinct parts as the sum of an odd staircase of length \(m\), \(\delta_{m}=(2m-1,2m-3,\ldots,3,1)\), and the conjugate of a partition with parts at most \(m\).
We explain this for (15), noting that (9), (11), and (13) can be interpreted similarly. To show combinatorially that (15) is the generating function for the number of pairs of partitions \((\lambda,(1^{b}))\vdash n\) such that \(\lambda\) has super-distinct parts, and \(b\in\lambda\), we observe that in

\[\sum_{m=1}^{\infty}\sum_{j=1}^{m}q^{2j-1}\cdot q^{m^{2}}\cdot\frac{1-q^{m-j+1}}{(q;q)_{m+1}},\]

for fixed \(m,j\), the term \(\frac{1-q^{m-j+1}}{(q;q)_{m+1}}\) generates partitions with parts at most \(m+1\) and no part equal to \(m-j+1\). By conjugation, it generates partitions \(\eta\) with at most \(m+1\) parts and \(\eta_{m-j+1}=\eta_{m-j+2}\). The term \(q^{m^{2}}\) generates the staircase \(\delta_{m}\), and the term \(q^{2j-1}\) generates the partition \((1^{2j-1})\). Thus, \(q^{2j-1}\cdot q^{m^{2}}\cdot\dfrac{1-q^{m-j+1}}{(q;q)_{m+1}}\) generates triples \((\delta_{m},\eta,(1^{2j-1}))\). Each such triple corresponds to the pair \((\lambda,(1^{b}))\), where \(b=2j-1+\eta_{m-j+2}\) and \(\lambda=\delta_{m}+(\eta\setminus\{\eta_{m-j+2}\})\) has \(m\) super-distinct parts and \(\lambda_{m-j+1}=2j-1+\eta_{m-j+1}=b\).

### Combinatorial proof of Theorem 3.1

We interpret \(T_{1}(q)\) defined in (6) as the generating function for \(|\mathcal{T}_{1}(n)|\), where \(\mathcal{T}_{1}(n)\) is the set of pairs of partitions \((\lambda,(a^{b}))\vdash n\) such that \(\lambda\) is a super-distinct partition, \(a\equiv\pm 1\pmod{5}\), and \(b\geq 1\). By the first Rogers-Ramanujan identity, \(|\mathcal{T}_{1}(n)|\) is also the number of pairs of partitions \((\lambda,(a^{b}))\vdash n\) such that \(\lambda\) is a partition into parts congruent to \(\pm 1\pmod{5}\), \(a\equiv\pm 1\pmod{5}\), and \(b\geq 1\). Then, as explained in Section 1, \(|\mathcal{T}_{1}(n)|\) equals the number of parts in all partitions of \(n\) into parts congruent to \(\pm 1\pmod{5}\). We interpret \(T_{2}(q)\) defined in (6) as the generating function for \(|\mathcal{T}_{2}(n)|\), where \(\mathcal{T}_{2}(n)\) is the set of marked partitions of \(n\) with super-distinct parts. Thus, as explained in Section 1, \(|\mathcal{T}_{2}(n)|\) equals the number of parts in all partitions of \(n\) into super-distinct parts. To prove Theorem 3.1 combinatorially, we create an injection \(\varphi:\mathcal{T}_{2}(n)\to\mathcal{T}_{1}(n)\) as follows. If \(\mu\in\mathcal{T}_{2}(n)\) has marked part \(\mu_{i}\), then

\[\varphi(\mu):=(\mu\setminus\{\mu_{i}\},(1^{\mu_{i}})).\]

In terms of Ferrers diagrams, \(\varphi\) removes the row of length \(\mu_{i}\) and transforms it into a rectangular partition \((a^{b})\) with \(a=1\) and \(b=\mu_{i}\). The image \(\varphi(\mathcal{T}_{2}(n))\) of the injection consists of the pairs \((\lambda,(1^{b}))\in\mathcal{T}_{1}(n)\) such that none of \(b-1,b,b+1\) is a part of \(\lambda\). The inverse of \(\varphi\) on \(\varphi(\mathcal{T}_{2}(n))\) takes \((\lambda,(1^{b}))\in\mathcal{T}_{1}(n)\) such that \(b-1,b,b+1\not\in\lambda\), and creates a marked partition \(\mu\) by inserting a part equal to \(b\) into \(\lambda\) and marking it. Then the excess of the number of parts in all partitions of \(n\) with parts congruent to \(\pm 1\pmod{5}\) over the number of parts in all partitions of \(n\) with super-distinct parts equals the size of \(\mathcal{T}_{1}(n)\setminus\varphi(\mathcal{T}_{2}(n))\), the set of pairs of partitions \((\lambda,(a^{b}))\) such that \(\lambda\) is a super-distinct partition of \(n-ab\), \(a\equiv\pm 1\pmod{5}\), \(b\geq 1\), and if \(a=1\), then at least one of \(b-1,b,b+1\) is a part of \(\lambda\).
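The injection \(\varphi\) and the description of its image are likewise easy to test mechanically; the sketch below (our own check, reusing super_distinct from the earlier snippet) verifies for small \(n\) that \(\varphi\) is injective and that every image pair avoids \(b-1,b,b+1\):

```python
def phi(mu, i):
    """Remove the marked part b = mu[i]; return (lambda, (a, b)) with a = 1."""
    return (mu[:i] + mu[i + 1:], (1, mu[i]))

def check_injection(n):
    # T2(n): super-distinct partitions with one part marked (mark = index i)
    t2 = [(mu, i) for mu in super_distinct(n) for i in range(len(mu))]
    images = [phi(mu, i) for (mu, i) in t2]
    assert len(set(images)) == len(images)           # phi is injective
    for lam, (a, b) in images:                       # image avoids b-1, b, b+1
        assert a == 1 and not ({b - 1, b, b + 1} & set(lam))

for n in range(1, 15):
    check_injection(n)
```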
**Corollary 3.2**. _Let \(n\geq 1\). The number of parts in all partitions of \(n\) into super-distinct parts is at most the number of parts equal to \(1\) in all partitions of \(n\) into parts congruent to \(\pm 1\pmod{5}\)._

Proof. From the argument in the introduction, the number of parts equal to \(1\) in all partitions of \(n\) into parts congruent to \(\pm 1\pmod{5}\) is equal to \(|\mathcal{T}_{1,1}(n)|\), where \(\mathcal{T}_{1,1}(n)\) is the set of marked partitions of \(n\) into parts congruent to \(\pm 1\pmod{5}\) with at least one part equal to \(1\) and in which the marked part is one of the parts equal to \(1\). Let \(\varphi\) be the injection defined in the proof of Theorem 3.1. If \(\mu\in\mathcal{T}_{2}(n)\) with \(\mu_{j}\) marked, then \(\varphi(\mu)=(\mu\setminus\{\mu_{j}\},(1^{\mu_{j}}))\). Since \(\mu\setminus\{\mu_{j}\}\) is a partition of \(n-\mu_{j}\) into super-distinct parts, by the first Rogers-Ramanujan identity, it corresponds to a unique partition \(\eta\) of \(n-\mu_{j}\) into parts congruent to \(\pm 1\pmod{5}\). Consider the marked partition \(\xi(\mu)=\eta\cup(1^{\mu_{j}})\) with the \(\mu_{j}\)th part equal to \(1\) marked. This gives an injection from \(\mathcal{T}_{2}(n)\) into \(\mathcal{T}_{1,1}(n)\).

**Remark 2**. The injection in the combinatorial proof of Theorem 3.1 above establishes that the number of marked super-distinct partitions of \(n\) equals the number of pairs of partitions \((\lambda,(1^{b}))\vdash n\) such that \(\lambda\) is a partition into super-distinct parts, \(b\geq 1\), and none of \(b-1,b,b+1\) is in \(\lambda\) (equivalently, the difference between the number of pairs of partitions \((\lambda,(1^{b}))\vdash n\) such that \(\lambda\) is a partition into super-distinct parts, \(b\geq 1\), and the number of such pairs with at least one of \(b-1,b,b+1\) in \(\lambda\)). This same identity also follows independently from the analytic proof of Theorem 3.1, which we explain as follows. We have that (17) is the generating function for the difference between the number of pairs of partitions \((\lambda,(1^{b}))\vdash n\) such that \(\lambda\) is a partition into super-distinct parts, \(b\geq 1\), and the number of marked super-distinct partitions of \(n\). On the other hand, (17) originated as the sum of (7), (10), (12), and (14), a sum which is the generating function for the number of pairs of partitions \((\lambda,(1^{b}))\vdash n\) such that \(\lambda\) is a partition into super-distinct parts, \(b\geq 1\), and at least one of \(b-1,b,b+1\) is in \(\lambda\). Equating these two interpretations of the \(q\)-series coefficients of (17) gives the (equivalent) identity due to the injection in the combinatorial proof of Theorem 3.1.

## 4. The number of parts in the second Rogers-Ramanujan identity

In this section we formulate and prove a result for the second Rogers-Ramanujan identity that is analogous to Theorem 3.1. Somewhat surprisingly, it is more difficult to establish this theorem.
As our proof of Theorem 4.1 will show, the excess of the number of parts in all partitions of \(n\) with parts congruent to \(\pm 2\pmod{5}\) over the number of parts in all partitions of \(n\) with super-distinct parts greater than \(1\) can be described combinatorially as the size of a subset \(\mathcal{S}(n)\) (see (18) below) of the set of pairs of partitions \((\lambda,(a^{b}))\vdash n\) such that \(\lambda\) has super-distinct parts greater than \(1\), \(a\equiv\pm 2\pmod{5}\), \(b\geq 1\). The conditions satisfied by the pairs of partitions in \(\mathcal{S}(n)\) can be stated explicitly. This rather long list of conditions is built around residue classes of \(a\) and the interplay between \(a,b\), and certain parts of \(\lambda\), and we do not present it here in its explicit form for brevity's sake.

**Theorem 4.1**. _The excess of the number of parts in all partitions of \(n\) with parts congruent to \(\pm 2\pmod{5}\) over the number of parts in all partitions of \(n\) with super-distinct parts greater than \(1\) equals the number of pairs of partitions \((\lambda,(a^{b}))\vdash n\) such that \(\lambda\) has super-distinct parts greater than \(1\), \(a\equiv\pm 2\pmod{5}\), \(b\geq 1\), and satisfying the conditions prescribed by \(\mathcal{S}(n)\)._

Proof. Let \(c(n,m)\) denote the number of partitions of \(n\) with parts congruent to \(\pm 2\pmod{5}\) and exactly \(m\) parts. Then, the generating function for \(c(n,m)\) is given by

\[P_{2}(z;q):=\sum_{n\geq 0}\sum_{m\geq 0}c(n,m)z^{m}q^{n}=\frac{1}{(zq^{2};q^{5})_{\infty}(zq^{3};q^{5})_{\infty}}.\]

Similarly, if \(d(n,m)\) is the number of partitions of \(n\) with super-distinct parts greater than \(1\) and exactly \(m\) parts, the generating function for \(d(n,m)\) is

\[R_{2}(z;q):=\sum_{n\geq 0}\sum_{m\geq 0}d(n,m)z^{m}q^{n}=\sum_{n=0}^{\infty}\frac{z^{n}q^{n^{2}+n}}{(q;q)_{n}}.\]

Considering the difference of the derivatives with respect to \(z\) evaluated at \(1\), we obtain the generating function for the excess in the number of parts. We have

\[\frac{\partial}{\partial z}\Big{|}_{z=1}(P_{2}(z;q)-R_{2}(z;q))\]
\[\qquad=\frac{1}{(q^{2};q^{5})_{\infty}(q^{3};q^{5})_{\infty}}\sum_{m=1}^{\infty}\left(\frac{q^{5m-2}}{1-q^{5m-2}}+\frac{q^{5m-3}}{1-q^{5m-3}}\right)-\sum_{n=0}^{\infty}\frac{nq^{n^{2}+n}}{(q;q)_{n}}\]
\[\qquad=\left(\sum_{n=0}^{\infty}\frac{q^{n^{2}+n}}{(q;q)_{n}}\right)\sum_{m=1}^{\infty}\left(\frac{q^{5m-2}}{1-q^{5m-2}}+\frac{q^{5m-3}}{1-q^{5m-3}}\right)-\sum_{n=0}^{\infty}\frac{nq^{n^{2}+n}}{(q;q)_{n}}\]
\[\qquad=:S_{1}(q)-S_{2}(q).\]

We interpret \(S_{1}(q)\) as the generating function for \(|\mathcal{S}_{1}(n)|\), where \(\mathcal{S}_{1}(n)\) is the set of pairs of partitions \((\lambda,(a^{b}))\vdash n\) such that \(\lambda\) has super-distinct parts greater than \(1\), \(a\equiv\pm 2\pmod{5}\), and \(b\geq 1\) if \(n\geq 1\), and \(|\mathcal{S}_{1}(0)|:=0\). Note that \(\lambda=\emptyset\) is allowed. As explained in Section 1, \(|\mathcal{S}_{1}(n)|\) is the number of parts in all partitions of \(n\) with parts congruent to \(\pm 2\pmod{5}\). We interpret \(S_{2}(q)\) as the generating function for \(|\mathcal{S}_{2}(n)|\), where \(\mathcal{S}_{2}(n)\) is the set of marked partitions of \(n\) with super-distinct parts greater than \(1\) if \(n\geq 1\), and \(|\mathcal{S}_{2}(0)|:=0\). Thus, \(|\mathcal{S}_{2}(n)|\) is the number of parts in all partitions of \(n\) with super-distinct parts greater than \(1\). For \(n\geq 1\), we create an injection \(\psi:\mathcal{S}_{2}(n)\to\mathcal{S}_{1}(n)\) as follows.
Start with \(\mu\in\mathcal{S}_{2}(n)\) and suppose the marked part of \(\mu\) is \(\mu_{i}=c\). Set

\[x:=\begin{cases}\mu_{i+1}&\text{ if }i<\ell(\mu)\\ 0&\text{ if }i=\ell(\mu),\end{cases}\]

and let \(y=c-x\). Thus, if the marked part is not the last part of \(\mu\), \(y\) is the difference between the marked part and the next part. Otherwise, \(y\) is equal to the marked part. Hence, \(y\geq 2\). Moreover, \(\mu\) does not contain any of \(c+1,c-1,x+1\) and \(x-1\) (if \(x\neq 0\)) as a part. We denote by \(\widetilde{\mu}\) the partition obtained from \(\mu\) by removing the marked part, i.e., \(\widetilde{\mu}:=\mu\setminus\{c\}\). Our definition of \(\psi\) depends on the parity of \(c\).

**Case 1:** \(c=2k\), \(k\geq 1\). Then, we define

\[\psi(\mu):=(\widetilde{\mu},(2^{k})).\]

In terms of Ferrers diagrams, \(\psi\) removes the row of length \(2k\) from \(\mu\) and transforms it into a rectangular partition \((a^{b})\) with \(a=2\) and \(b=k\). The image under \(\psi\) of the subset of marked partitions in \(\mathcal{S}_{2}(n)\) in this case is

\[\mathcal{I}_{1}(n):=\{(\lambda,(2^{k}))\in\mathcal{S}_{1}(n)\mid 2k-1,2k,2k+1\not\in\lambda\}.\]

To see that \(\psi\) is onto \(\mathcal{I}_{1}(n)\), given \((\lambda,(2^{k}))\in\mathcal{I}_{1}(n)\) we let \(\mu=\lambda\cup\{2k\}\) and mark the part \(2k\). Then, \(\mu\in\mathcal{S}_{2}(n)\) and \(\psi(\mu)=(\lambda,(2^{k}))\).

**Case 2:** \(c=2k+1\), \(k\geq 1\). To define \(\psi\) we need to consider different residue classes of \(c\) modulo \(5\).

(A) If \(c\equiv 2\) or \(3\pmod{5}\), define

\[\psi(\mu):=(\widetilde{\mu},(c)).\]

In terms of Ferrers diagrams, \(\psi\) removes the row of length \(c\) from \(\mu\) and transforms it into a rectangular partition \((a^{b})\) with \(a=c\) and \(b=1\). The image under \(\psi\) of the subset of marked partitions in \(\mathcal{S}_{2}(n)\) in this case is

\[\mathcal{I}_{2}(n):=\{(\lambda,(a))\in\mathcal{S}_{1}(n)\mid a\text{ odd and }a-1,a,a+1\not\in\lambda\}.\]

To see that \(\psi\) is onto \(\mathcal{I}_{2}(n)\), given \((\lambda,(a))\in\mathcal{I}_{2}(n)\) we let \(\mu=\lambda\cup\{a\}\) and mark the part \(a\). Then, \(\mu\in\mathcal{S}_{2}(n)\) and \(\psi(\mu)=(\lambda,(a))\).

(B) If \(c\not\equiv 2\) or \(3\pmod{5}\), then \(c=2k+1\geq 5\), i.e., \(k\geq 2\), and we consider several subcases according to the size of \(y\).

(i) If \(y=2\) or \(3\), then \(x\neq 0\) and we define

\[\psi(\mu):=(\widetilde{\mu}\setminus\{x\}\cup\{x+1\},(2^{k})).\]

In terms of Ferrers diagrams, \(\psi\) removes the row of length \(c\) from \(\mu\), adds one to the next part \(x\), and transforms \(c-1=2k\) into a rectangular partition \((a^{b})\) with \(a=2\) and \(b=k\). Note that, if \(y=2\), then \(x+1=2k\), and if \(y=3\), then \(x+1=2k-1\). The image under \(\psi\) of the subset of marked partitions in \(\mathcal{S}_{2}(n)\) in this case is

\[\mathcal{I}_{3}(n):=\left\{(\lambda,(2^{k}))\in\mathcal{S}_{1}(n)\,\Big{|}\begin{array}{l}k\geq 2,2k\in\lambda\text{ and }\\ 2k-2,2k+2\not\in\lambda\end{array}\right\}\bigcup\left\{(\lambda,(2^{k}))\in\mathcal{S}_{1}(n)\,\Big{|}\begin{array}{l}k\geq 2,2k-1\in\lambda\text{ and }\\ 2k-3,2k+1,2k+2\not\in\lambda\end{array}\right\}.\]

Note that in the first set above, we also have that \(2k-1\) and \(2k+1\) are not parts of \(\lambda\). However, this is clear since \(2k\in\lambda\) and \((\lambda,(2^{k}))\in\mathcal{S}_{1}(n)\).
Similarly, in the second set \(2k-2,2k\not\in\lambda\), but we do not mention this explicitly since it is implied by \(2k-1\in\lambda\) and \((\lambda,(2^{k}))\in\mathcal{S}_{1}(n)\). For the remainder of the proof, we will not write these exclusions explicitly. Clearly, the two sets whose union is \(\mathcal{I}_{3}(n)\) are disjoint. To see that \(\psi\) is onto \(\mathcal{I}_{3}(n)\), let \((\lambda,(2^{k}))\in\mathcal{I}_{3}(n)\). If \(2k\in\lambda\), replace the part \(2k\) by parts \(2k-1\) and \(2k+1\) and mark \(2k+1\). If \(2k\not\in\lambda\), then \(2k-1\in\lambda\) and we replace the part \(2k-1\) by parts \(2k-2\) and \(2k+1\) and mark \(2k+1\). We obtain a partition \(\mu\in\mathcal{S}_{2}(n)\) such that \(\psi(\mu)=(\lambda,(2^{k}))\).

(ii) If \(y\geq 4\) and \(c\equiv 0\pmod{5}\), since \(c\) is odd, we write \(c=10j+5=2(5j+2)+1\) with \(j\geq 0\). Notice that if \(j=0\), then \(x=0\). Define

\[\psi(\mu):=\begin{cases}(\widetilde{\mu}\setminus\{x\}\cup\{x+1\},((5j+2)^{2}))&\text{ if }x\neq 0\\ (\widetilde{\mu}\cup\{5j+3\},(5j+2))&\text{ if }x=0.\end{cases}\]

In terms of Ferrers diagrams, if the marked part is not the last part of \(\mu\), \(\psi\) removes the row of \(\mu\) corresponding to the marked part \(c\), adds one to the next part \(x\), and transforms \(c-1\) into a rectangular partition \((a^{b})\) with \(a=(c-1)/2\) and \(b=2\). If the marked part \(c\) is the last part of \(\mu\), \(\psi\) removes the part \(c\) from \(\mu\), and transforms it into a new part equal to \((c+1)/2\) in \(\widetilde{\mu}\) and a rectangular partition \((a^{b})\) with \(a=(c-1)/2\) and \(b=1\).

Before we describe the image of \(\psi\) in this case, we introduce some helpful notation. For a positive integer \(u\), we denote by \(z_{u,\lambda}\) the largest part of \(\lambda\) that is less than or equal to \(u\); it is implicit in this notation that such a part exists in \(\lambda\). Then, the image under \(\psi\) of the subset of marked partitions in \(\mathcal{S}_{2}(n)\) in this case is

\[\mathcal{I}_{4}(n):=\left\{(\lambda,(a^{2}))\in\mathcal{S}_{1}(n)\left|\begin{array}{l}a>2,a\equiv 2\pmod{5}\text{ and }\\ 2a-1,2a,2a+1,2a+2\not\in\lambda,\\ z_{2a-2,\lambda}\geq 3\text{ and }z_{2a-2,\lambda}-2\not\in\lambda\end{array}\right.\right\}\bigcup\left\{(\lambda,(a))\in\mathcal{S}_{1}(n)\left|\begin{array}{l}a\equiv 2\pmod{5},\ a+1\in\lambda\text{ and }\\ \text{if }z\leq 2a+2\text{ and }z\neq a+1,\text{ then }z\not\in\lambda\end{array}\right.\right\}.\]

Clearly, the two sets whose union is \(\mathcal{I}_{4}(n)\) are disjoint. To see that \(\psi\) is onto \(\mathcal{I}_{4}(n)\), let \((\lambda,(a^{k}))\in\mathcal{I}_{4}(n)\). Thus \(a\equiv 2\pmod{5}\) and \(k=1\) or \(2\). If \(k=2\), we subtract one from the largest part of \(\lambda\) that is less than or equal to \(2a-2\) and insert and mark a part equal to \(2a+1\) in \(\lambda\). If \(k=1\), we add \(a\) to the smallest part of \(\lambda\) and we mark the obtained part. We obtain a partition \(\mu\in\mathcal{S}_{2}(n)\) such that \(\psi(\mu)=(\lambda,(a^{k}))\).

(iii) If \(y\geq 4\) and \(c\equiv 4\pmod{5}\), since \(c\) is odd, we write \(c=10j+9=2(5j+3)+3\) with \(j\geq 0\).
Define

\[\psi(\mu):=\begin{cases}(\widetilde{\mu}\setminus\{x\}\cup\{x+3\},((5j+3)^{2}))&\text{ if }x\neq 0\\ (\widetilde{\mu}\cup\{3\},((5j+3)^{2}))&\text{ if }x=0.\end{cases}\]

In terms of Ferrers diagrams, \(\psi\) removes the row of \(\mu\) corresponding to the marked part \(c\), adds three to the next part \(x\) if \(x\neq 0\) and inserts a part equal to \(3\) into \(\widetilde{\mu}\) if \(x=0\); and transforms \(c-3\) into a rectangular partition \((a^{b})\) with \(a=(c-3)/2\) and \(b=2\). The image under \(\psi\) of the subset of marked partitions in \(\mathcal{S}_{2}(n)\) in this case is

\[\mathcal{I}_{5}(n):=\left\{(\lambda,(a^{2}))\in\mathcal{S}_{1}(n)\left|\begin{array}{l}a\equiv 3\pmod{5},\ 2a+3,2a+4\not\in\lambda,\\ z_{2a+2,\lambda}\geq 5\text{ and }z_{2a+2,\lambda}-t\not\in\lambda\text{ for }t=2,3,4\end{array}\right.\right\}\bigcup\left\{(\lambda,(a^{2}))\in\mathcal{S}_{1}(n)\left|\begin{array}{l}a\equiv 3\pmod{5},\ 3\in\lambda\text{ and }\\ \text{if }z\leq 2a+4\text{ and }z\neq 3,\text{ then }z\not\in\lambda\end{array}\right.\right\}.\]

Considering parts less than or equal to \(2a+2\), we see that the two sets whose union is \(\mathcal{I}_{5}(n)\) are disjoint. To see that \(\psi\) is onto \(\mathcal{I}_{5}(n)\), let \((\lambda,(a^{2}))\in\mathcal{I}_{5}(n)\). If \(3\) is the only part less than or equal to \(2a+4\), remove the part \(3\) from \(\lambda\). Otherwise, subtract \(3\) from the largest part of \(\lambda\) that is less than or equal to \(2a+2\). Finally, insert and mark a part equal to \(2a+3\) in \(\lambda\). We obtain a partition \(\mu\in\mathcal{S}_{2}(n)\) such that \(\psi(\mu)=(\lambda,(a^{2}))\).

(iv) If \(y\geq 4\) and \(c\equiv 1\pmod{5}\), since \(c\) is odd, we write \(c=10j+1\) with \(j\geq 1\). If \(c=20h+11=4(5h+2)+3\) for some \(h\geq 0\), define

\[\psi(\mu):=\begin{cases}(\widetilde{\mu}\setminus\{x\}\cup\{x+3\},((5h+2)^{4}))&\text{ if }h>0,x\neq 0\\ (\widetilde{\mu}\cup\{3\},((5h+2)^{4}))&\text{ if }h>0,x=0\\ (\widetilde{\mu}\setminus\{x\}\cup\{x+2\},(3^{3}))&\text{ if }h=0,x\neq 0\\ (\widetilde{\mu}\cup\{2\},(3^{3}))&\text{ if }h=0,x=0.\end{cases}\]

The image under \(\psi\) of the subset of marked partitions in \(\mathcal{S}_{2}(n)\) in this case is

\[\mathcal{I}_{6}(n):=\left\{(\lambda,(a^{4}))\in\mathcal{S}_{1}(n)\left|\begin{array}{l}a\equiv 2\pmod{5},a>2\text{ and }\\ 4a+3,4a+4\not\in\lambda,\\ z_{4a+2,\lambda}\geq 5\text{ and }z_{4a+2,\lambda}-t\not\in\lambda\text{ for }t=2,3,4\end{array}\right.\right\}\bigcup\]
\[\left\{(\lambda,(a^{4}))\in\mathcal{S}_{1}(n)\left|\begin{array}{l}a\equiv 2\pmod{5},a>2,\ 3\in\lambda\text{ and }\\ \text{if }z\leq 4a+4\text{ and }z\neq 3,\text{ then }z\not\in\lambda\end{array}\right.\right\}\bigcup\]
\[\left\{(\lambda,(3^{3}))\in\mathcal{S}_{1}(n)\left|\begin{array}{l}10,11,12\not\in\lambda,\\ z_{9,\lambda}\geq 4\text{ and }z_{9,\lambda}-t\not\in\lambda\text{ for }t=2,3\end{array}\right.\right\}\bigcup\left\{(\lambda,(3^{3}))\in\mathcal{S}_{1}(n)\left|\begin{array}{l}2\in\lambda\text{ and }\\ \text{if }z\leq 12\text{ and }z\neq 2,\text{ then }z\not\in\lambda\end{array}\right.\right\}.\]

By considering parts less than or equal to \(4a+2\) if \(a\equiv 2\pmod{5}\), and parts less than \(10\) otherwise, we see that the four sets whose union is \(\mathcal{I}_{6}(n)\) are disjoint. As in the previous cases, one can verify that \(\psi\) is onto \(\mathcal{I}_{6}(n)\). If \(c=20h+1\) for some \(h\geq 1\), write \(c=3m+r\) with \(0\leq r\leq 2\). Note that \(m\geq 7\). Moreover, if \(r=0\), then \(m\equiv 7\pmod{20}\); if \(r=1\), then \(m\equiv 0\pmod{20}\); and if \(r=2\), then \(m\equiv 13\pmod{20}\).
We define

\[\psi(\mu):=\begin{cases}(\widetilde{\mu}\setminus\{x\}\cup\{x+r\},(3^{m}))&\text{ if }x\neq 0\\ (\widetilde{\mu}\cup\{r\},(3^{m}))&\text{ if }x=0,r\neq 1\\ (\widetilde{\mu}\cup\{5(h-1)+8,5(h-1)+6,5(h-1)+4\},(5(h-1)+3))&\text{ if }x=0,r=1.\end{cases}\]

The image under \(\psi\) of the subset of marked partitions in \(\mathcal{S}_{2}(n)\) in this case is

\[\mathcal{I}_{7}(n):=\left\{(\lambda,(3^{m}))\in\mathcal{S}_{1}(n)\left|\begin{array}{l}m\equiv 7\pmod{20}\text{ and }\\ 3m-3,3m-2,3m-1,3m,3m+1\not\in\lambda,\\ \text{there is }z\in\lambda\text{ with }z\leq 3m-4\end{array}\right.\right\}\bigcup\]
\[\left\{(\lambda,(3^{m}))\in\mathcal{S}_{1}(n)\left|\begin{array}{l}m\equiv 0\pmod{20},m>0\text{ and }\\ 3m+1,3m+2\not\in\lambda,\\ z_{3m-2,\lambda}\geq 3\text{ and }z_{3m-2,\lambda}-2\not\in\lambda\end{array}\right.\right\}\bigcup\]
\[\left\{(\lambda,(3^{m}))\in\mathcal{S}_{1}(n)\left|\begin{array}{l}m\equiv 13\pmod{20}\text{ and }\\ 3m+1,3m+2,3m+3\not\in\lambda,\\ z_{3m,\lambda}\geq 4\text{ and }z_{3m,\lambda}-t\not\in\lambda\text{ for }t=2,3\end{array}\right.\right\}\bigcup\]
\[\left\{(\lambda,(3^{m}))\in\mathcal{S}_{1}(n)\left|\begin{array}{l}m\equiv 7\pmod{20}\text{ and }\\ \text{if }z\leq 3m+1,\text{ then }z\not\in\lambda\end{array}\right.\right\}\bigcup\left\{(\lambda,(3^{m}))\in\mathcal{S}_{1}(n)\left|\begin{array}{l}m\equiv 13\pmod{20},2\in\lambda\text{ and }\\ \text{if }z\leq 3m+3\text{ and }z\neq 2,\text{ then }z\not\in\lambda\end{array}\right.\right\}\bigcup\]
\[\left\{(\lambda,(a))\in\mathcal{S}_{1}(n)\left|\begin{array}{l}a\equiv 3\pmod{5},\ a+1,a+3,a+5\in\lambda\text{ and }\\ \text{if }z\leq 4a+10\text{ and }z\neq a+1,a+3,a+5,\text{ then }z\not\in\lambda\end{array}\right.\right\}.\]

Clearly, the six sets whose union is \(\mathcal{I}_{7}(n)\) are disjoint. Upon inspection, we see that the sets \(\mathcal{I}_{j}\), \(1\leq j\leq 7\), are mutually disjoint. Their union is the image of \(\mathcal{S}_{2}(n)\) under \(\psi\). Thus, the excess of the number of parts in all partitions of \(n\) with parts congruent to \(\pm 2\pmod{5}\) over the number of parts in all partitions of \(n\) with super-distinct parts greater than \(1\) equals \(|\mathcal{S}(n)|\), where

\[\mathcal{S}(n):=\mathcal{S}_{1}(n)\setminus\bigcup_{j=1}^{7}\mathcal{I}_{j}. \tag{18}\]

**Example 2**. Let \(n=4\). The only partition with parts congruent to \(\pm 2\pmod{5}\) is \((2,2)\) and it has two parts. The only partition into super-distinct parts greater than \(1\) is \((4)\) and it has one part. The only pair of partitions in \(\mathcal{S}(4)\) is \((\lambda,(a^{b}))=((2),(2))\). Clearly, \(((2),(2))\in\mathcal{S}_{1}(4)\). Since \(a=2\) and \(b=1\), the pair is not in \(\mathcal{I}_{j}(4)\) for \(j=2,3,5,6,7\). The pair is not in \(\mathcal{I}_{1}(4)\) because \(2b=2\) is a part of \(\lambda=(2)\). Moreover, the pair is not in \(\mathcal{I}_{4}(4)\) because \(a+1=3\) is not a part of \(\lambda=(2)\).

**Remark 3**. The construction of the injection \(\psi\) above suggests that other choices of injection exist. A simpler injection that allows for a nice description of the complement of its image in \(\mathcal{S}_{1}(n)\) is welcome.
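As with Theorem 3.1, the excess in Theorem 4.1 can be confirmed by brute force for small \(n\); the sketch below reuses the generators from the earlier snippets (again our own illustration):

```python
def excess2(n):
    """Excess in Theorem 4.1: parts in partitions with parts = +-2 mod 5,
    minus parts in super-distinct partitions with all parts > 1."""
    lhs = sum(len(p) for p in parts_in(n, (2, 3)))
    rhs = sum(len(p) for p in super_distinct(n) if all(x > 1 for x in p))
    return lhs - rhs

assert excess2(4) == 1          # matches Example 2: |S(4)| = 1
```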
2306.06496
Formalizing Box Inference for Capture Calculus
Capture calculus has recently been proposed as a solution to effect checking, achieved by tracking the captured references of terms in the types. Boxes, along with the box and unbox operations, are a crucial construct in capture calculus, which maintains the hygiene of types and improves the expressiveness of polymorphism over capturing types. Despite their usefulness in the formalism, boxes would soon become a heavy notational burden for users when the program grows. It thus necessitates the inference of boxes when integrating capture checking into a mainstream programming language. In this paper, we develop a formalisation of box inference for capture calculus. We begin by introducing a semi-algorithmic variant of the capture calculus, from which we derive an inference system where typed transformations are applied to complete missing box operations in programs. Then, we propose a type-level system that performs provably equivalent inference on the type level, without actually transforming the program. In the metatheory, we establish the relationships between these systems and capture calculus, thereby proving both the soundness and the completeness of box inference.
Yichen Xu, Martin Odersky
2023-06-10T17:42:06Z
http://arxiv.org/abs/2306.06496v1
# Formalizing Box Inference for Capture Calculus

###### Abstract.

Capture calculus has recently been proposed as a solution to effect checking, achieved by tracking the captured references of terms in the types. Boxes, along with the _box_ and _unbox_ operations, are a crucial construct in capture calculus, which maintains the hygiene of types and improves the expressiveness of polymorphism over capturing types. Despite their usefulness in the formalism, boxes would soon become a heavy notational burden for users when the program grows. It thus necessitates the inference of boxes when integrating capture checking into a mainstream programming language. In this paper, we develop a formalisation of box inference for capture calculus. We begin by introducing a semi-algorithmic variant of the capture calculus, from which we derive an inference system where typed transformations are applied to complete missing box operations in programs. Then, we propose a type-level system that performs provably equivalent inference on the type level, without actually transforming the program. In the metatheory, we establish the relationships between these systems and capture calculus, thereby proving both the soundness and the completeness of box inference.
## 1. Introduction

Consider a polymorphic pair class:

```
class Pair[+A, +B](x: A, y: B):
  def fst: A = x
  def snd: B = y
```

Problems arise when we attempt to construct a pair with impure values:

```
def x: {ct} Int -> String
def y: {fs} Logger
def p = Pair(x, y)
```

The problem is: what should the type of p capture? {ct, fs} is a sound option, but as the program grows, the capture sets will quickly accumulate and become increasingly verbose and imprecise. Moreover, it undermines the expressiveness of type polymorphism [Odersky et al. 2022], which is illustrated in the following example:

```
def mapFirst[A, B, C](p: Pair[A, B], f: A => C): Pair[C, B] =
  Pair(f(p.fst), p.snd)
```

Since pairs can be impure, we would have to annotate the argument p to capture {*}, as well as the return type. This results in imprecise types. To address these issues, capture tunneling proposes to prevent the propagation of captured capabilities at the instantiation of type variables, and to pop out these capabilities only when the value is used. Specifically, in this example the type of p will have an empty capture set, while the following function:

```
() => p.fst
  : {ct} () -> {ct} Int -> String
```

which _accesses_ the member, will capture {ct}. CC\({}_{<:\square}\) achieves capture tunneling by restricting type variable instances to shape types and utilizing boxes. The box hides the capture set and stops the propagation of the captures, and the unbox reveals the boxed capture set. Specifically, a capturing type {x1, ..., xn} S can be encapsulated in a boxed type \(\square\) {x1, ..., xn} S, which conceals the capture set and turns the capturing type into a shape/pure type. A boxed term introduces a box in the type, which can then be removed by the unbox expression when the term under the box is to be accessed. In our example, the expression for creating p will then look like:

```
def p = Pair[□ {ct} Int -> String, □ {fs} Logger](□ x, □ y)
```

Accessing the term under the box requires the use of an _unbox_ expression which unveils the hidden capture set, as in the following:

```
() => {ct} -> p.fst
  : {ct} () -> {ct} Int -> String
```

Here, {ct} -> p.fst is the syntax for unboxing. Moreover, knowing that type variable instances are pure, the mapFirst function can be written as it is in the previous example. It is concise and at the same time expressive enough to work polymorphically on pairs. Boxes are an essential formal device within the CC\({}_{<:\square}\) framework. Yet, being a theoretical tool that exists exclusively on the conceptual level, boxes do not exist at runtime and have no effects in program semantics.
When putting capture checking into practice, mandating the annotation of the program with appropriate boxes is unnatural, posing an unnecessarily heavy notational burden. Hence, the introduction of box inference becomes a pressing need. Scala, which has a practical implementation of capture checking, does not have boxes in the surface language. Instead, the capture checker infers the box operations wherever they are necessary to make the input program well-typed. Such an approach enables complete concealment of boxes as a technical detail of the underlying theory and liberates users from the knowledge of their existence. This aspect is crucial to integrating capture calculus into a mainstream language. Although the capture checker in Scala already implements box inference, we still lack a theoretical foundation for it. In fact, box inference is unexpectedly non-trivial to implement correctly. To demonstrate this point, let us inspect how box inference should work in the following example, where we want to execute a list of IO operations:

```
class List[+A]:
  ...
  def map[B](op: A => B): List[B] = ...

def ops: List[{io} () -> Unit] = ...
def runOp(op: {io} () -> Unit): Unit = op()
def runOps: {io} () -> Unit = () => ops.map[Unit](runOp)
```

Here, we have ops, which is a list of IO operations, a function runOp, which takes an IO operation and executes it, and the function runOps, which executes all the IO operations in the list. In CC\({}_{<:\square}\), A => B is an alias of {*} A -> B, which essentially means an impure function that could possibly perform all kinds of effects. This behavior is achieved by the subcapturing mechanism of CC\({}_{<:\square}\), which we explain in the background. We expect to infer a box in the type of ops, making it List[□ {io} () -> Unit]. Otherwise, the program will be syntactically incorrect, as the type variable should be instantiated to a shape type. The other place involving box inference, which is also where the non-triviality arises, is ops.map(runOp). In this application, map is expecting an argument of type (□ {io} () -> Unit) => Unit, whereas the type of runOp is ({io} () -> Unit) => Unit. To correctly infer the boxes, we surprisingly have to perform eta-expansion. Box inference should transform runOp into (op: □ {io} () -> Unit) => runOp({io} -> op). It is also noteworthy that although the body of runOps in the surface program does not capture io in any way, as both ops and runOp have an empty capture set, runOps in the elaborated program does capture the io capability, due to the insertion of the unbox operation during box inference. The necessity of eta-expansion and its effect on the capture sets both illustrate the non-triviality of box inference. To this end, in this paper we propose a formal foundation of box inference, which can lead to a deeper and more principled understanding of it, and facilitate the development of a reliable and sound box inference algorithm in the compiler. In fact, our formalization of box inference eventually fostered two pull requests in the Scala 3 compiler that make fixes and improvements to the box inference algorithm. Our contributions can be summarized as follows:

* We develop CC\({}_{<:\square\rightarrow}\), a syntax-directed variant of CC\({}_{<:\square}\), which straightforwardly gives rise to a semi-algorithm for typechecking.
* Based on the semi-algorithmic variant, we propose a box inference system BI\({}_{t}\) that completes the missing box operations in the program while typechecking it.
* Since box operations have zero runtime effect, transforming the program during box inference is wasteful. We take one step further and propose a type-level box inference system BI\({}_{T}\) that performs provably the same reasoning while operating only on types, without referring to terms.
* We develop a metatheory for these systems, proving the equivalence between them and capture calculus.
* To provide insights into the implementation of capture checking and box inference in other languages, we discuss the implementation of box inference in Scala 3, which has capture checking available as an experimental language feature.

## 2. Background

In the background, we briefly introduce the key ideas of capture calculus, which is the base framework of our work. Then, we discuss the problem of box inference in more detail.

### Introduction to Capture Calculus

In Section 1 we gave a brief overview of capture calculus. Now we delve deeper into the concepts of this system, providing additional explanations to help users gain a better understanding and touching on other important aspects.

**Capture sets and capturing types.** One fundamental concept of CC\({}_{<:\square}\) is to track captured variables in types. CC\({}_{<:\square}\) stratifies types into

* _Shape types_, which describe the shape of values, e.g. functions, type variables, and boxed types.
* _Capturing types_, which, in addition to the shape type, have a _capture set_ specifying the captured variables of the associated term.

A capture set is a finite set of variables, e.g. {file, console} in previous examples. Apart from the program variables in scope, it can include the special variable \(\star\), which is the root capability. The capture set in the type predicts the captured variables of a value. In the previous example, the function that accesses file and console will have both variables in the capture set of its type. Note that only references to capabilities in the environment are considered captures, similar to the idea of free variables. The following example from Odersky et al. (2022) illustrates the idea:

```
def test(fs: FileSystem): {fs} String -> Unit =
  (x: String) => Logger(fs).log(x)
```

test will be typed as {} (fs: FileSystem) -> {fs} String -> Unit. The capability fs is only captured by the returned closure.

**Subcapturing.** CC\({}_{<:\square}\) employs subcapturing to compare capture sets. Given capture sets \(C_{1}\) and \(C_{2}\), we say \(C_{1}\) is a subcapture of \(C_{2}\) (written \(C_{1}\) <: \(C_{2}\)) if every element \(x\) in \(C_{1}\) is accounted for by \(C_{2}\). \(x\) is considered to be accounted for by \(C_{2}\) either if \(x\) is an element of \(C_{2}\), or if the capture set \(C\) of \(x\) (given by its type in the environment) is a subcapture of \(C_{2}\) (\(C\) <: \(C_{2}\)). Let us inspect a concrete example. In the following environment:

```
file: {*} File
console: {*} Console
op: {file, console} () -> Unit
l: {console} Logger
```

the following subcapture relations hold:

```
{l} <: {console} <: {*}
{op} <: {file, console} <: {*}
{file} <: {file, console} <: {*}
{l} <: {file, console} <: {*}
```

Since every capability is ultimately derived from \(\star\), {\(\star\)} is a supercapture of all capture sets. As mentioned in Section 1, a function that captures \(\star\) (e.g. of type {\(\star\)} () -> Unit, or equivalently () => Unit) can perform any sort of effects.
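The accounting rule can be transcribed almost literally into code. The following sketch is our own illustration (the env table and function names are assumptions, not part of the calculus) and checks the relations listed above:

```python
# env maps each variable in scope to the capture set of its declared type
ROOT = "*"                       # the root capability
env = {
    "file":    {ROOT},
    "console": {ROOT},
    "op":      {"file", "console"},
    "l":       {"console"},
}

def subcapture(c1, c2):
    """C1 <: C2 iff every element of C1 is accounted for by C2."""
    return all(accounted(x, c2) for x in c1)

def accounted(x, c2):
    if x in c2:
        return True
    if x == ROOT:                # the root capability must appear literally
        return False
    return subcapture(env[x], c2)

assert subcapture({"l"}, {"console"})
assert subcapture({"op"}, {"file", "console"})
assert subcapture({"l"}, {"file", "console"})
assert subcapture({"file", "console"}, {ROOT})
assert not subcapture({ROOT}, {"file", "console"})
```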
**Escape checking.** Besides capture tunneling introduced in Section 1, the other crucial mechanism in CC\({}_{<:\square}\) is escape checking (Odersky et al., 2022). In the following code, we create a file handle, pass it to a function, and close it at the end.

```
def usingLogFile[T](op: (file: {*} File) -> T): T = {
  val file = new File("output.log")
  try op(file)
  finally file.close()
}

def good = () => usingLogFile(f => f.write(0))

def bad = () => {
  val later = usingLogFile(f => () => f.write(0))
  later()
}
```

Here, good should be allowed whereas bad should be rejected. bad returns the scoped capability out of its valid scope and tries to use the escaped capability. We inspect the typing of the usingLogFile application step-by-step to illustrate how it is rejected.

* The type of f => () => f.write(0) is (f: {*} File) -> {f} () -> Unit.
* When instantiating the type argument of usingLogFile, we can only instantiate it to □ {*} () -> Unit.
* The type of later is therefore □ {*} () -> Unit.
* Unboxing a term whose capture set under the box contains \(\star\) is disallowed.
* The escaped capability cannot be used, and therefore later() will be rejected.

### The Box Inference Problem

Now we take a closer look at the box inference problem. Given a program with possibly missing box constructs, box inference aims to find a way to complete the program with box operations, such that the completed one is well-typed in the base calculus. For instance, for the aforementioned Pair example, the user is allowed to write the program new Pair[{fs} String, {console} Int](x, y). The missing boxes should be inferred by the compiler, producing the well-typed expression new Pair[□ {fs} String, □ {console} Int](□ x, □ y). There are two kinds of boxes getting inferred: the boxes in types, and the boxes in terms. Boxes in types can be trivially inferred based on the syntactic class of a type. In other words, when we are instantiating a type variable with a capturing type, it should get boxed. In this example, we find out that two impure types are supplied in the type application, and infer the two boxes for them. By contrast, inferring the boxes in terms is non-trivial. Box inference over terms involves eta-expansion and induces complex effects on the capture sets. In this paper, we focus on _box inference over terms_, and present three systems that together solve this problem. Specifically, given a program \(t\), our systems attempt to complete the missing box constructs _in terms_ so that the completed program \(t^{\prime}\) becomes well-typed as \(T\).

## 3. Key Ideas of Box Inference

In this section, we discuss the box inference systems informally and present their key ideas.

### Semi-Algorithmic Typing

Our systems aim to formalize the box inference procedure. Therefore, we expect the systems to be semi-algorithmic (or syntax-directed), so that they straightforwardly give rise to procedures for box inference. As the original capture calculus is declarative, the first step of our work is to derive a syntax-directed variant of the calculus. The three major hindrances to syntax-directedness are the transitivity rule in subtyping, the subsumption rule in typing, and the let rule, where the choice of the type that avoids the local variable is ambiguous. The idea of the syntax-directed variant is to inline the first two rules in subtyping and typing respectively, and to derive an avoidance procedure for the let rule. The resulting system is equivalent to the declarative one, as proven in the metatheory. The semi-algorithmic calculus serves as the basis of the two box inference systems we are going to develop.

### Box Adaptation

Before explaining _how_ we do box inference, let's first talk about _when_ to do it.
In general, the inputs to box inference include (1) a term \(t\) of type \(T\) to be adapted, and (2) the expected type \(U\). We observe that this is exactly the same situation as in subtyping checks. In the semi-algorithmic variant of \(\mathrm{CC}_{<:\square}\), the subtype check is invoked only when typechecking applications. Specifically, given the term f(x), if f is a function that receives an argument of type U, and x is typed as T, we invoke the subtype check to answer the question _whether \(T\) is a subtype of \(U\)_, or _whether a value of type \(T\) can be used as \(U\)_. On the other hand, in box inference the question is _whether a value of type \(T\) can be adapted to \(U\) by completing missing box operations_. To this end, we employ **box adaptation** as a drop-in replacement for subtype checks, which inserts box operations in addition to performing regular subtyping. It answers exactly the above question, and returns the adapted term if the answer is yes. Box adaptation is where box inference actually happens in our system. The basic idea of box adaptation is to compare the types recursively as subtyping does, but to transform the term when it sees a mismatch in boxes. For example, when adapting the term x of type {fs} String against the expected type □ {fs} String, box adaptation discovers a mismatch in boxes, so it inserts a box and returns the adapted term □ x. The design of the box adaptation judgement will be explained in detail in Section 4.2.2. As the term is transformed during box adaptation, the typing judgment of the box inference system not only derives a type \(T\) for an input term \(t\), but also returns the adapted term \(t^{\prime}\), which is well-typed as \(T\) in \(\mathrm{CC}_{<:\square}\).

### Adapting Functions with Eta-Expansion

Box-adapting functions requires eta-expansion. In Section 1 we have already seen an example: given f of type ({io} () -> Unit) -> Unit and the expected type (□ {io} () -> Unit) -> Unit, we have to eta-expand the function f so that we can box-adapt its argument. After eta-expanding f into x => f(x), we find that the type of the argument x is □ {io} () -> Unit, whereas the expected type at the application of f is {io} () -> Unit. We therefore insert an unbox and return the adapted term x => f({io} -> x). Eta-expansion and box adaptation change not only the boxes in types, but also the capture sets. In the aforementioned example, the type of the adapted form becomes {io} (□ {io} () -> Unit) -> Unit. Note that box adaptation charges the outermost capture set with the additional captured reference io.

### Type-Level Box Inference

The transformation performed by box inference (including the box/unbox operations and the eta-expansion) has no runtime effects. The only point that matters is whether or not the value of one type can be adapted to another type by box inference. Therefore, actually computing the transformed term would be a waste. It is natural to ask: can we check the possibility of such adaptation without actually keeping track of the result term? The answer is yes.

The type-level box inference system does this by predicting the type-level effects of box adaptation, without actually transforming the terms. For example, when we infer a box for the variable x of type {fs} String, the type-level system predicts that the type of the adapted term becomes □ {fs} String, without computing the term □ x. To properly predict the type-level effects of box inference, the system has to track not only the change of boxes in types, but also the capture sets. As we have seen before, eta-expansion charges the capture sets with additional references. Consider the example of running wrapped operations:

```
type Wrapper[T] = [X] -> (f: {*} T -> X) -> X

def run(wrapped: Wrapper[{io} Unit -> Unit]) =
  def f(op: {io} Unit -> Unit): Unit = op(unit)
  wrapped(f)
```

When typing the application wrapped(f), we adapt f of type (op: {io} Unit -> Unit) -> Unit to the expected type {*} (op: □ {io} Unit -> Unit) -> Unit. The term-level system adapts the term into x => f({io} -> x), as discussed before, whose type is {io} (op: □ {io} Unit -> Unit) -> Unit. If we define Wrapper[T] as **type** Wrapper[T] = [X] -> (f: {} T -> X) -> X, which requires the function f to be pure, the program should be rejected, since after box adaptation the function f is impure (and it should be, since it runs a wrapped IO operation). The type-level box inference has to keep track of the fact that the capture set of the function type is charged with io, in addition to predicting the box in the result type. Furthermore, the function run captures io as well after box inference, due to the box adaptation that happened in its closure. The type-level system has to take care of these effects on captured variables when typing the closures.
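To preview the idea, the following toy model (our own simplification: a tiny monomorphic type language and a crude subset-based stand-in for subcapturing; all names are ours) answers whether a value of one type can be box-adapted to another and predicts the charged capture set, without ever constructing the adapted term:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Base:
    """An opaque shape type such as Unit."""
    name: str

@dataclass(frozen=True)
class Fun:
    """Shape of a monomorphic function type T1 -> T2."""
    arg: "Type"
    res: "Type"

@dataclass(frozen=True)
class Box:
    """A boxed capturing type."""
    inner: "Type"

@dataclass(frozen=True)
class Type:
    """A capturing type: a shape plus a capture set."""
    shape: object
    cs: frozenset = frozenset()

def subcapt(c1, c2):
    # crude stand-in for subcapturing: literal subset, or {*} on the right
    return c1 <= c2 or "*" in c2

def adapt(t, u):
    """Charge set added by box-adapting t against u, or None on failure."""
    # unbox: the capture set hidden under the box pops out and charges the term
    if isinstance(t.shape, Box) and not isinstance(u.shape, Box):
        sub = adapt(t.shape.inner, u)
        return None if sub is None else sub | t.shape.inner.cs
    # box: adaptation happens under the box, so its charge is then hidden
    if isinstance(u.shape, Box) and not isinstance(t.shape, Box):
        return None if adapt(t, u.shape.inner) is None else frozenset()
    if isinstance(t.shape, Box) and isinstance(u.shape, Box):
        return None if adapt(t.shape.inner, u.shape.inner) is None else frozenset()
    # functions: eta-expand; the argument is adapted contravariantly, the
    # result covariantly, and charges from the new closure body land on the
    # function's own capture set
    if isinstance(t.shape, Fun) and isinstance(u.shape, Fun):
        ca = adapt(u.shape.arg, t.shape.arg)
        cr = adapt(t.shape.res, u.shape.res)
        if ca is None or cr is None:
            return None
        charge = ca | cr
        return charge if subcapt(t.cs | charge, u.cs) else None
    if isinstance(t.shape, Base) and isinstance(u.shape, Base):
        ok = t.shape.name == u.shape.name and subcapt(t.cs, u.cs)
        return frozenset() if ok else None
    return None

# f : ({io} Unit -> Unit) -> Unit  vs  {*} (box {io} Unit -> Unit) -> Unit
unit = Type(Base("Unit"))
io_op = Type(Fun(unit, unit), frozenset({"io"}))
f_ty = Type(Fun(io_op, unit))
expected = Type(Fun(Type(Box(io_op)), unit), frozenset({"*"}))
assert adapt(f_ty, expected) == frozenset({"io"})
```

On the Wrapper example above, the sketch reports the charge {io}, matching the term-level adaptation x => f({io} -> x).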
The type-level box inference system does this by predicting the type-level effects of box adaptation, without actually transforming the terms. For example, when we infer a box for the variable x of type {fs} String, the type-level system predicts that the type of the adapted term becomes □ {fs} String, without computing the term □ x.

To properly predict the type-level effects of box inference, the system has to track not only the changes to the boxes in types, but also the capture sets. As we have seen before, eta-expansion charges the capture sets with additional references. Consider the following example of running wrapped operations.

```
type Wrapper[T] = [X] -> (f: {*} T -> X) -> X

def run(wrapped: Wrapper[{io} Unit -> Unit]) =
  def f(op: {io} Unit -> Unit): Unit = op(unit)
  wrapped(f)
```

When typing the application wrapped(f), we adapt f of type (op: {io} Unit -> Unit) -> Unit to the expected type {*} (op: □ {io} Unit -> Unit) -> Unit. The term-level system adapts the term into x => f({io} ⟜ x), as discussed before, whose type is {io} (op: □ {io} Unit -> Unit) -> Unit. If we instead define Wrapper[T] as **type** Wrapper[T] = [X] -> (f: {} T -> X) -> X, which requires the function f to be pure, the program should be rejected, since after box adaptation the function f is impure (and it should be, since it runs a wrapped IO operation). The type-level box inference has to keep track of the fact that the capture set of the function type is charged with io, in addition to predicting the box in the result type. Furthermore, the function run captures io as well after box inference, due to the box adaptation that happened in its closure. The type-level system has to account for these effects on captured variables when typing the closures.

## 4. Formal Presentation of Box Inference

### Semi-Algorithmic Capture Calculus \(\mathrm{CC}_{<:\Box\to}\)

We first present the semi-algorithmic variant \(\mathrm{CC}_{<:\Box\to}\), based on which our two box inference systems are developed. \(\mathrm{CC}_{<:\Box\to}\) is derived from \(\mathrm{CC}_{<:\Box}\) by making the rules syntax-directed. \(\mathrm{CC}_{<:\Box}\) extends System F\(_{<:}\) with capture-checking constructs, including capture sets, capturing types and boxes (Odersky et al., 2022). It is in monadic normal form and dependently typed, allowing capture sets in types to reference program variables.

**Syntax.** \(\mathrm{CC}_{<:\Box\to}\) shares its syntax with \(\mathrm{CC}_{<:\Box}\); the syntax is presented in Figure 1. The main differences from System F\(_{<:}\) are highlighted in grey boxes: capture sets \(C\), capturing types \(C\,S\), boxed types \(\Box\,T\), and the box (\(\Box\,x\)) and unbox (\(C ⟜ x\)) constructs on terms.
**Definition 4.1** (Captured variables). \(\mathrm{cv}(t)\) computes the set of captured variables of a term \(t\), and is defined as follows:

\[
\begin{aligned}
\mathrm{cv}(\lambda(x\colon T).t) &= \mathrm{cv}(t) \setminus \{x\}\\
\mathrm{cv}(\lambda[X <: S].t) &= \mathrm{cv}(t)\\
\mathrm{cv}(x) &= \{x\}\\
\mathrm{cv}(\text{let } x = v \text{ in } t) &= \mathrm{cv}(t) \qquad \text{if } x \notin \mathrm{cv}(t)\\
\mathrm{cv}(\text{let } x = t \text{ in } u) &= \mathrm{cv}(t) \cup (\mathrm{cv}(u) \setminus \{x\})\\
\mathrm{cv}(x\,y) &= \{x, y\}\\
\mathrm{cv}(x[S]) &= \{x\}\\
\mathrm{cv}(\Box\,x) &= \{\}\\
\mathrm{cv}(C ⟜ x) &= \{x\} \cup C
\end{aligned}
\]
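As a quick check of the definition, consider two small terms (our own examples):

\[
\mathrm{cv}(\lambda(x\colon T).\,x\,y) = \{x, y\} \setminus \{x\} = \{y\}
\qquad
\mathrm{cv}(\text{let } z = \Box\,x \text{ in } z\,y) = \{\} \cup (\{z, y\} \setminus \{z\}) = \{y\}
\]

Note how the box hides \(x\) in the second term: the boxed value must be unboxed, charging its capture set, before it can be used.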
**Syntax-directed typing and subtyping.** The type system of \(\mathrm{CC}_{<:\Box\to}\) is shown in Figure 2, where the grey boxes highlight the main differences from \(\mathrm{CC}_{<:\Box}\). The rules are made syntax-directed by inlining _subsumption_ into typing and _transitivity_ into subtyping. Also, we make use of the avoid function in (alg-let), which computes the least supertype of a type \(U\) that does not mention the local variable \(x\); we discuss avoidance in more detail below. The _subsumption_ rule of typing is inlined into the two application rules (alg-app) and (alg-tapp): they check whether the argument type conforms to the parameter type, or to the type variable bound. In subtyping, transitivity is inlined into the (alg-tvar) rule, and the (alg-refl) rule applies only to type variables. We prove the equivalence between \(\mathrm{CC}_{<:\Box\to}\) and \(\mathrm{CC}_{<:\Box}\) in the metatheory. We show the subcapturing rules as well for the completeness of the presentation, though they are unchanged from \(\mathrm{CC}_{<:\Box}\).

Figure 1: Syntax of \(\mathrm{CC}_{<:\Box}\) and the box inference systems (Odersky et al. 2022).

Figure 2: Typing and subtyping rules of \(\mathrm{CC}_{<:\Box\to}\), comprising the typing rules (alg-var), (alg-abs), (alg-tabs), (alg-app), (alg-tapp), (alg-box), (alg-unbox) and (alg-let), together with the subtyping and subcapturing rules, among them (alg-refl), (alg-tvar), (alg-capt), (alg-boxed), (alg-fun) and (alg-tfun).

**Avoidance.** In (alg-let) we invoke the avoid function to algorithmically construct the least supertype of the body type that does not mention the locally-bound variable. The let rule of the original \(\mathrm{CC}_{<:\Box}\) is shown below; it requires that the result type avoids \(x\), but the choice of \(U\) is ambiguous, which undermines syntax-directedness.

\[
\frac{\Gamma \vdash s : T \qquad \Gamma,\, x\colon T \vdash t : U \qquad x \notin \mathrm{fv}(U)}{\Gamma \vdash \text{let } x = s \text{ in } t : U}\;(\textsc{let})
\]

\(\mathrm{avoid}(x, \mathrm{cv}(T), U)\) computes the aforementioned smallest \(x\)-avoiding supertype \(U'\) of \(U\). The avoidance function is a traversal of the input type, where we approximate \(x\) by \(C_x\) (here \(C_x = \mathrm{cv}(T)\), the capture set of \(x\)'s binding type) at covariant occurrences, and by \(\{\}\) at contravariant occurrences.
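For instance (our own worked example, with a hypothetical capability io playing the role of \(C_x\)):

\[
\mathrm{avoid}\bigl(x,\ \{io\},\ \{x\}\ \forall(y\colon \{x\}\,\mathsf{String})\ \{x\}\,\mathsf{String}\bigr)
\;=\;
\{io\}\ \forall(y\colon \{\}\,\mathsf{String})\ \{io\}\,\mathsf{String}
\]

The outer capture set and the result type occur covariantly, so \(x\) is widened to \(\{io\}\); the parameter type occurs contravariantly, so \(x\) is dropped.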
**Type variable widening.** In (alg-app), (alg-tapp) and (alg-unbox), the \(\Gamma \vdash x \uparrow T\) judgement looks up the variable \(x\) in the environment, and widens its type to a concrete type if it is bound to a type variable. For instance, consider an application \(x\,y\) where \(x\) is typed as a type variable \(X\) whose bound is a function type. We expect the type of \(x\) to be a function, but in the absence of a separate subsumption rule we cannot widen the type of \(x\) to that function type via subtyping. To solve this problem, we employ the \(\Gamma \vdash x \uparrow T\) judgement, which widens the type variable to a concrete shape type; this reveals the underlying function type and enables the application to be typechecked.

### \(\mathrm{BI}_t\): Term-Level Box Inference

Based on the semi-algorithmic type system \(\mathrm{CC}_{<:\Box\to}\), we develop the term-level box inference system \(\mathrm{BI}_t\). Its typing and subtyping rules are defined in Figures 4 and 5 respectively. The term syntax is unchanged. The typing and subtyping rules are based on the semi-algorithmic system, but are equipped with the functionality to infer the missing boxes by transforming the input term.

#### 4.2.1. Typing

The typing judgement of \(\mathrm{BI}_t\) now has the form \(\Gamma \vdash t \leadsto t' : T\), stating that the input term \(t\) is transformed into the adapted term \(t'\), which is well-typed as \(T\) in \(\mathrm{CC}_{<:\Box\to}\).
Figure 4: Typing rules of System \(\mathrm{BI}_t\).

Figure 5: Box adaptation rules of System \(\mathrm{BI}_t\).

Box adaptation is a replacement for subtyping: it inserts box operations in addition to regular subtype checks, making the adapted term conform to the expected type. The details of box adaptation will be given in Section 4.2.2.

Another place where box inference is involved is the variable typing judgement \(\Gamma \vdash x \Uparrow t \to T\). In addition to widening type variables as \(\Gamma \vdash x \uparrow T\) does, variable typing with unboxing inserts an unbox operation if the widened type is boxed. We use this rule to look up variables in (bi-app) and (bi-tapp), because these two rules expect the variable \(x\) to be a function, and \(x\)'s type may be a boxed function after widening the type variables. In such cases, we perform box inference to unbox the variable, making the system more expressive.

In (bi-app), we return the normalized term. Term normalization \(\mathrm{norm}(t)\) applies simplifications to the term; it is defined in Section 4.2.2. Among these simplifications, the one that matters here is that let-bindings of the form let \(x = y\) in \(t\) are reduced to \([x := y]t\). This is necessary for the completeness of \(\mathrm{BI}_t\) (i.e. any term typeable in \(\mathrm{CC}_{<:\Box\to}\) should be typeable in \(\mathrm{BI}_t\)), which we now demonstrate. In (bi-app) we have to bind the adapted function and argument before applying them, due to the monadic normal form the calculus is in. We re-bind the variables \(x\) and \(y\) even if box adaptation keeps them untouched, which implies that \(x\) and \(y\) are well-typed in the original system without box inference involved. In this case, the transformed term before normalization is let \(x' = x\) in let \(y' = y\) in \(x'\,y'\).
Instead of being typed as \([z := y]T\) (which is derivable in \(\mathrm{CC}_{<:\Box\to}\)), it would be typed as \(\mathrm{avoid}(y', \{y\}, [z := y']T)\), which is larger than \([z := y]T\), since avoidance approximates \(y'\) by the empty set at contravariant positions. To ensure completeness, we have to simplify let \(x' = x\) in let \(y' = y\) in \(x'\,y'\) to \(x\,y\), so that the most precise type can be derived.

The (bi-var) rule looks up the variable without transforming it. The (bi-abs) and (bi-tabs) rules type their bodies with box inference, transforming the lambda into a new one with the adapted body; the capture set of the closure is computed from the adapted body, too. The (bi-tapp) rule unboxes the function variable when possible, and binds it to a variable in order to apply it; a regular subtype check is used to compare the argument type against the bound. The (bi-box), (bi-unbox) and (bi-let) rules type the term as in the semi-algorithmic system, returning the input term unchanged.

#### 4.2.2. Box Adaptation

The box adaptation judgement \(\Gamma \vdash x \rhd T \leadsto t_x\) (defined in Figure 5) states that the variable \(x\) can be box-adapted to a term \(t_x\) that conforms to the expected type \(T\). It first looks up the type of the variable \(x\) in the environment, and then invokes the _adaptation subtyping_ to perform box inference. The adaptation subtyping judgement \(\Gamma \vdash (x : T) \rhd U \leadsto t_x\) can be understood as: given a variable \(x\) of type \(T\), it can be box-adapted into a term \(t_x\) that conforms to the expected type \(U\).

Adaptation subtyping is developed on the basis of the semi-algorithmic subtyping defined in \(\mathrm{CC}_{<:\Box\to}\). Compared to semi-algorithmic subtyping, adaptation subtyping (1) transforms the input variable \(x\) and returns the adapted term \(t_x\), which can be typed as \(U\) given that \(x\) is bound to \(T\) in the typing context, and (2) has the (bi-box) and (bi-unbox) rules for inserting box and unbox operations when there is a mismatch of boxes between the actual type and the expected type.

**Inlining the (alg-capt) rule.** In System \(\mathrm{BI}_t\), the (alg-capt) rule has been inlined into each of the adaptation subtyping rules. This is because box adaptation can change the captured variables of the term, which can impose additional constraints on the capture sets of closures' types. For instance, in the (bi-unbox) rule, the set of captured variables changes from \(\mathrm{cv}(x) = \{x\}\) to \(\mathrm{cv}(\text{let } y = C ⟜ x \text{ in } t_y) = \{x\} \cup C \cup \mathrm{cv}(t_y)\). In the (ba-fun) rule, since \(x_f\) is adapted into \(t'_f\), which possibly captures more variables (e.g. due to an unbox operation inserted by (bi-unbox) when adapting the function body), we have to make sure in the premise that \(C'\) subcaptures the captured variables of the adapted term.

**Eta-expansion.** In the (ba-fun) and (ba-tfun) rules, we eta-expand the input variable so that we can box-adapt the argument and the result. For instance, in the (ba-fun) rule, the general idea can be illustrated by the following diagram.
_(Diagram: the input variable \(x_f\) is first eta-expanded into \(\lambda(x\colon T_2).\,x_f\,x\); the argument is then box-adapted before the call, and the result of the call is box-adapted afterwards, yielding a term of the shape \(\lambda(x\colon T_2).\,\text{let } y = t_x \text{ in let } z = x_f\,y \text{ in } t_z\).)_

Finally, note that we insert let-bindings everywhere during the adaptation, and that the function gets eta-expanded in the function adaptation rules. Therefore, without term normalization, the box adaptation system would transform the term even if a subtyping relation already holds between the actual and the expected type, which would undermine the completeness of box inference, as we will see in the metatheory.

### \(\mathrm{BI}_T\): Type-Level Box Inference

In the term-level box inference system, the transformations on the terms (i.e. inserting box operations, let-bindings and eta-expansions) do not change the semantics of the program. What matters is _whether_ it is possible to transform the program to be well-typed with box inference, not _what_ the result term of box inference is. To eliminate the unnecessary computational and memory burden of computing and storing the result terms, in this section we investigate the possibility of _predicting_ the effect of box inference solely at the type level. On the basis of the term-level box inference system, we develop the type-level system \(\mathrm{BI}_T\), which does the equivalent reasoning without computing the transformed terms. Figures 7 and 6 show the typing and adaptation rules of \(\mathrm{BI}_T\), respectively.

The key problem in modeling box inference at the type level is predicting the effects of the term transformations. First of all, the transformation inserts boxes and unboxes, thus adding or dropping boxes in types. More importantly, we have to predict how the term transformations change the capture sets. It turns out that, to do this, we need to keep track of the captured variable sets of the transformed terms.

#### 4.3.1. Type-Level Adaptation Subtyping

The type-level adaptation subtyping judgement has the form \(\Gamma \vdash T \rhd U \leadsto \mathcal{U} \mid C\).
In the metatheory, we prove that if \(\Gamma \vdash T \rhd U \leadsto \mathcal{U} \mid C\) is derivable in \(\mathrm{BI}_T\), then \(\Gamma \vdash (x : T) \rhd U \leadsto t_x\) is derivable in the term-level system \(\mathrm{BI}_t\), such that \(C\) is the set of captured variables of \(t_x\) (i.e. \(\mathrm{cv}(t_x)\)). In other words, the type-level system models the equivalent transformation, and predicts the captured variables of the adapted term without explicitly computing the term. It turns out that, to predict the captured variable set correctly, we also have to keep track of the _kind_ of the term, which is the \(\mathcal{U}\) in the judgement: we have \(f_{\mathcal{U}}(t_x) = \mathcal{U}\), where \(f_{\mathcal{U}}(t_x)\) returns \(t_x\)'s kind.

**Term kinds.** We categorize terms into three kinds:

* **Variables** X;
* **Values** V: lambdas \(\lambda(x\colon T).t\), type lambdas \(\lambda[X <: S].t\), and boxes \(\Box\,x\);
* **Terms** T: applications \(x\,y\), type applications \(x[S]\), let-bindings let \(x = t\) in \(u\), and unboxes \(C ⟜ x\).

We observe that keeping track of the kind of the adapted term is necessary for correctly predicting its captured variables. For example, in the (t-bi-box) rule, given the input variable \(x\), its actual type \(C\,S\) and the expected type \(\Box\,C'\,S'\), the corresponding (bi-box) rule in \(\mathrm{BI}_t\) first adapts the variable to \(C'\,S'\) by transforming it into \(t_x\), and then boxes it, resulting in the term let \(y = t_x\) in \(\Box\,y\). Based on the definition of \(\mathrm{cv}(\cdot)\), if \(t_x\) is either a variable or a value (i.e. \(f_{\mathcal{U}}(t_x) \in \{X, V\}\)), then \(\mathrm{cv}(\text{let } y = t_x \text{ in } \Box\,y) = \{\}\); otherwise, the captured variable set is \(\mathrm{cv}(t_x)\).
In the (t-bi-box) rule of System \(\mathrm{BI}_T\), we make the correct prediction of the captured variables based on the kind of the adapted term. This illustrates the necessity of keeping track of the _kind_ during type-level adaptation.

**Holes in adaptation subtyping.** The adaptation subtyping rules use a special construct, the hole, written \(\Diamond\). We need it because the captured variable sets predicted by the system may contain the input variable \(x\), yet \(x\) is unknown within the adaptation subtyping derivation. We therefore use \(\Diamond\) as a placeholder for the input variable, and fill the hole with the actual \(x\) when it becomes available, as in (t-adapt). \(C[x]\) fills the hole in \(C\) with \(x\), i.e. \(C[x] = C \setminus \{\Diamond\} \cup \{x\}\).

\[
\frac{\Gamma \vdash x : U \qquad \Gamma \vdash U \rhd T \leadsto \mathcal{U} \mid C}{\Gamma \vdash x \rhd T \leadsto \mathcal{U} \mid C[x]}\;(\textsc{t-adapt})
\]

Figure 7: Typing rules of System \(\mathrm{BI}_T\).

**Adaptation rules.** Now we inspect the adaptation subtyping rules one by one. Each rule is formed so as to predict the captured variable set and the term kind produced by the corresponding rule of the term-level system. The (t-ba-refl) and (t-ba-top) rules are straightforward: at the term level, these two rules return the input variable as it is, so we predict that the kind of the adapted term is X and that the captured variable set is just \(\{\Diamond\}\). The (t-ba-tvar) rule widens the type variable to its bound and recurses, which reflects what happens at the term level. The (t-ba-boxed) rule performs a case analysis on \(\mathcal{U}\) (which corresponds to the kind of \(t_y\) in the term-level (ba-boxed) rule). If \(\mathcal{U}\) is X, then at the term level \(\mathrm{norm}(\cdot)\) simplifies the result term to the input variable, so the result kind is X and the captured variable set is just \(\{\Diamond\}\). If \(\mathcal{U}\) is V or T, the let-bindings and box/unbox operators do not get simplified, so the result kind is T. Both \(C_1\) and \(\{\Diamond\}\) are then in the captured variable set, because of the unbox operation \(C_1 ⟜ x\) in the result term; \(C_1\) is not included when \(t_y\) is a value.

In the (t-ba-fun) rule, we adapt the argument and the result recursively in the premises. The result kind is X only if both \(\mathcal{U}_1\) and \(\mathcal{U}_2\) are X (which means the resulting term is simplified to the input variable \(x_f\) by \(\mathrm{norm}(\cdot)\)); otherwise it is always V, since the adapted term is a lambda. We predict the captured variable set from \(C_1\) and \(C_2\). The (t-ba-tfun) rule is similar: the resulting kind is X when \(\mathcal{U}\) is X, in which case the result is normalized to the variable, and the captured variable set can again be computed from \(C_2\). Note that a regular subtyping check \(\Gamma \vdash S_2 <: S_1\) is used to compare the bounds.
For the (t-bi-box) and (t-bi-unbox) rules, the resulting kind is always T, since (bi-box) and (bi-unbox) always return a let-expression. The captured variable set can be computed from \(C_0\). Notably, in the (t-bi-box) rule the result term captures the empty set when \(\mathcal{U}\) is X or V: in this case the related references, including the input variable, are hidden by the box operation.

**Boxes on the top.** Having inspected the adaptation subtyping rules of System \(\mathrm{BI}_T\), we observe that the hole \(\Diamond\) is always an element of the captured variable set, unless (t-bi-box) is at the root of the derivation tree, in which case a box is inferred on the top of the term. In other words, box adaptation only _hides_ references in this specific case; in every other case, box adaptation makes the term capture _more_ references. This observation facilitates the implementation of box inference.

#### 4.3.2. Typing

The typing judgement \(\Gamma \vdash t : T \mid C\) of System \(\mathrm{BI}_T\) returns the captured variable set of the adapted term, without actually computing that term. In the metatheory, it is proven that if \(\Gamma \vdash t : T \mid C\), then in the term-level system the typing judgement \(\Gamma \vdash t \leadsto t' : T\) is derivable for some term \(t'\) with \(\mathrm{cv}(t') = C\). In other words, the type-level typing rules predict _whether_ the input program \(t\) can become well-typed after box inference, and return its captured variable set if so. In System \(\mathrm{BI}_T\), the captured variable sets of the result terms are predicted by the judgement, but the actual terms resulting from box inference remain unknown.

The (t-bi-var), (t-bi-box), (t-bi-tapp) and (t-bi-unbox) rules are standard and expected, except that they return the captured variable set of the transformed term instead of the term itself. \(\mathrm{BI}_T\) uses the type-level box adaptation in (t-bi-app). In (t-bi-abs) and (t-bi-tabs) we make use of the predicted captured variable sets, instead of computing them with \(\mathrm{cv}(\cdot)\) as in the term-level system. In the (t-bi-let) rule, we perform a case analysis to compute the correct captured variable set, reflecting the two cases for let-expressions in the definition of \(\mathrm{cv}(\cdot)\). In the same vein as the typing rules, the variable typing rule now returns the captured set of the possibly-unboxed term, instead of returning that term directly.
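To make the type-level reasoning concrete, the following is a minimal, self-contained Scala sketch (our own simplified illustration; it models neither the full rule set nor the Scala 3 compiler, and all names are ours). It covers only base, capturing and boxed types, and predicts the kind and captured variables of the adapted term instead of constructing it, with the string "<>" standing for the hole \(\Diamond\).

```scala
enum Kind:
  case Var, Value, Term

final case class Result(kind: Kind, cv: Set[String])

enum Tpe:
  case Base(name: String)                 // a shape type (functions elided)
  case Capt(cs: Set[String], shape: Tpe)  // capturing type  C S
  case Boxed(tp: Tpe)                     // boxed type      □ T

import Kind.*, Tpe.*

val Hole = "<>" // stands for the hole ◊, i.e. the yet-unknown input variable

// Predicts whether `actual` adapts to `expected` and, if so, the kind and
// captured variables of the adapted term, without ever building that term.
def adapt(actual: Tpe, expected: Tpe): Option[Result] =
  (actual, expected) match
    // reflexivity: the input variable is returned unchanged (kind X).
    case (a, e) if a == e =>
      Some(Result(Var, Set(Hole)))
    // box insertion: adapt against the type under the box, then box the
    // result. A box on top of a variable or value hides its references.
    case (a, Boxed(e)) =>
      adapt(a, e).map(r =>
        if r.kind == Term then Result(Term, r.cv)
        else Result(Term, Set.empty))
    // unbox insertion: unboxing charges the capture set under the box,
    // and the input variable (the hole) stays captured.
    case (Boxed(a), e) =>
      adapt(a, e).map { r =>
        val charged = a match
          case Capt(cs, _) => r.cv ++ cs
          case _           => r.cv
        Result(Term, charged + Hole)
      }
    // inlined capt rule: check subcapturing, then adapt the shapes.
    case (Capt(c1, s1), Capt(c2, s2)) if c1.subsetOf(c2) =>
      adapt(s1, s2)
    case _ => None

@main def adaptDemo(): Unit =
  val boxedIO = Boxed(Capt(Set("io"), Base("() -> Unit")))
  println(adapt(boxedIO, Capt(Set("io"), Base("() -> Unit"))))
  // prints Some(Result(Term,Set(<>, io))): an unbox is inferred,
  // charging {io} and keeping the hole captured (set order may vary).
```

The box case reflects the boxes-on-the-top observation: a box inferred on top of a variable or value hides its references, while every other successful case keeps the hole, and hence the input variable, in the predicted set.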
## 5. Metatheory

In the metatheory, we develop proofs of the relations between System \(\mathrm{CC}_{<:\Box}\) and the family of algorithmic box inference systems. The relationships between these systems are illustrated in the following diagram:

\[
\mathsf{wtp}(\mathrm{CC}_{<:\Box}) \;=\; \mathsf{wtp}(\mathrm{CC}_{<:\Box\to}) \;\subset\; \mathsf{wtp}(\mathrm{BI}_t) \;=\; \mathsf{wtp}(\mathrm{BI}_T)
\]

Here \(\mathsf{wtp}(\cdot)\) denotes the set of well-typed programs in a system. As depicted in the diagram, an equivalence exists between \(\mathrm{CC}_{<:\Box}\) and \(\mathrm{CC}_{<:\Box\to}\), as well as between \(\mathrm{BI}_t\) and \(\mathrm{BI}_T\). However, System \(\mathrm{BI}_t\) is _more expressive_ than System \(\mathrm{CC}_{<:\Box\to}\): it accepts more programs by resolving the inconsistencies between boxes through box inference.

### Equivalence Between \(\mathrm{CC}_{<:\Box}\) and \(\mathrm{CC}_{<:\Box\to}\)

We first prove the equivalence between \(\mathrm{CC}_{<:\Box}\) and \(\mathrm{CC}_{<:\Box\to}\): every derivation in \(\mathrm{CC}_{<:\Box}\) corresponds to a derivation in \(\mathrm{CC}_{<:\Box\to}\), and vice versa. This is expressed in the following theorem.

**Theorem 5.1** (Equivalence between \(\mathrm{CC}_{<:\Box}\) and \(\mathrm{CC}_{<:\Box\to}\)). _For all \(\Gamma, t, T\): \(\Gamma \vdash t : T\) is derivable in \(\mathrm{CC}_{<:\Box}\) if and only if it is derivable in \(\mathrm{CC}_{<:\Box\to}\)._

This theorem follows directly from Theorems A.34 and A.37; the detailed proof is given in Section A.2. The proof relies on the admissibility of both the reflexivity rule and the transitivity rule in \(\mathrm{CC}_{<:\Box\to}\)'s subtyping system. The following two lemmas demonstrate the admissibility of these rules.

**Lemma 5.2** (Reflexivity of algorithmic subtyping). _For any \(T\), \(\Gamma \vdash T <: T\)._

The proof is easily established by induction on the type \(T\).

**Lemma 5.3** (Transitivity of algorithmic subtyping). _In any environment \(\Gamma\), if \(\Gamma \vdash T_1 <: T_2\) and \(\Gamma \vdash T_2 <: T_3\), then \(\Gamma \vdash T_1 <: T_3\)._

The proof of this lemma requires induction on the middle type (i.e. \(T_2\)).

In addition to these two subtyping lemmas, we need to establish a lemma about _avoidance_, which demonstrates that it is possible to construct the least supertype of a type that avoids a local variable. This lemma is crucial for the let-binding case of the completeness theorem.

**Lemma 5.4.** _If \(\Gamma \vdash s : T_1\) and \(\Gamma, x : T_1 \vdash t : T_2\), then \(T_2' = \mathrm{avoid}(x, \mathrm{cv}(T_1), T_2)\) is the least supertype of \(T_2\) that does not mention \(x\). Here, least supertype means that for every \(U\) such that \(\Gamma, x : T_1 \vdash T_2 <: U\) and \(x \notin \mathrm{fv}(U)\), we have \(\Gamma \vdash T_2' <: U\)._

We prove this lemma by induction on the subtyping derivation \(\Gamma, x : T_1 \vdash T_2 <: U\). Once these lemmas are established, both directions of the equivalence between the two systems can be proven by induction.

### Relation Between \(\mathrm{CC}_{<:\Box\to}\) and \(\mathrm{BI}_t\)

\(\mathrm{BI}_t\) is more expressive than \(\mathrm{CC}_{<:\Box\to}\). Specifically, every typing derivation in \(\mathrm{CC}_{<:\Box\to}\) can be mapped to a valid derivation in \(\mathrm{BI}_t\), whereas certain programs possess valid typing derivations in \(\mathrm{BI}_t\) but not in \(\mathrm{CC}_{<:\Box\to}\).
Such a program is ill-typed in \(\mathrm{CC}_{<:\Box\to}\) due to inconsistent boxes, but there may exist a typing derivation \(\Gamma \vdash t \leadsto t' : T\) in \(\mathrm{BI}_t\) that resolves the mismatches between the boxes through box inference and thus accepts the program \(t\). Also, it is crucial that \(t'\) be well-typed in \(\mathrm{CC}_{<:\Box\to}\), or else \(\mathrm{BI}_t\) would be unsound. The following theorem, based on these ideas, establishes the relation between the two systems formally.

**Theorem 5.5.** _For all \(\Gamma, t, T\), we have: (1) if \(\Gamma \vdash t : T\), then there exists \(t'\) such that \(\Gamma \vdash t \leadsto t' : T\); and (2) if \(\Gamma \vdash t \leadsto t' : T\) for some \(t'\), then \(\Gamma \vdash t' : T\)._

We prove this theorem using Theorems A.51 and A.50, with the detailed proof given in Section A.3. The completeness and the soundness of box adaptation are crucial to the proof; they are established by the following two lemmas.

**Lemma 5.6** (Completeness of box adaptation). _If \(\Gamma \vdash x \uparrow T\) and \(\Gamma \vdash T <: U\), then \(\Gamma \vdash (x : T) \rhd U \leadsto x\)._

**Lemma 5.7** (Soundness of box adaptation). _If \(\Gamma \vdash x \rhd T \leadsto t_x\), then \(\Gamma \vdash t_x : T\)._

Notably, the completeness lemma (Lemma 5.6) demonstrates that box adaptation returns the input _as it is_ whenever \(T\) is a subtype of \(U\) (which means a variable of type \(T\) can be used at \(U\) without box inference involved). This is important for the completeness of System \(\mathrm{BI}_t\). To see why, consider the application \(x\,y\), where \(\Gamma \vdash x : C\ \forall(z : U)T\), \(\Gamma \vdash y : U'\), and \(\Gamma \vdash U' <: U\). In \(\mathrm{BI}_t\), we look up \(x\) and box-adapt the argument \(y\) into \(t_y\). The resulting term is \(\mathrm{norm}(\text{let } x' = x \text{ in let } y' = t_y \text{ in } x'\,y')\). We have to ensure that \(t_y = y\) in this case, because otherwise the type of the transformed term would be \(\mathrm{avoid}(y', \mathrm{cv}(U), [z := y']T)\), which is wider than the type \([z := y]T\) derived in System \(\mathrm{CC}_{<:\Box\to}\).

Additionally, we introduce an auxiliary subtyping relation, derived from the subtyping rules of \(\mathrm{CC}_{<:\Box\to}\) by inlining the (alg-capt) rule. It eases the development of the proof of the relation between subtyping in \(\mathrm{CC}_{<:\Box\to}\) and adaptation subtyping, as the adaptation subtyping rules also have the capt rule inlined. Figure 8 shows the definition of this auxiliary relation. In the metatheory, we first establish the equivalence between the auxiliary subtyping and the subtyping of \(\mathrm{CC}_{<:\Box\to}\), and then develop the relation between the auxiliary subtyping rules and the box adaptation subtyping rules.

### Equivalence Between \(\mathrm{BI}_t\) and \(\mathrm{BI}_T\)

In the next part of the metatheory, we demonstrate the equivalence between System \(\mathrm{BI}_t\) and \(\mathrm{BI}_T\) through the following theorem.
**Theorem 5.8.** _For all \(\Gamma, t, T\), we have: (1) if \(\Gamma \vdash t \leadsto t' : T\) is derivable for some \(t'\), then \(\Gamma \vdash t : T \mid C\) is derivable, where \(C = \mathrm{cv}(t')\); and (2) if \(\Gamma \vdash t : T \mid C\) is derivable for some \(C\), then there exists \(t'\) such that \(\Gamma \vdash t \leadsto t' : T\) is valid in \(\mathrm{BI}_t\) and \(\mathrm{cv}(t') = C\)._

This theorem can be proven using Theorems A.61 and A.59. It indicates that box inference can be performed with identical expressiveness at the type level, without storing and computing the adapted terms. Both directions of the equivalence are proven by induction on the derivation tree; for the comprehensive proof, please refer to Section A.4.

### Termination of Box Inference

The typechecking of System F\(_{<:}\), which is the basis of System \(\mathrm{CC}_{<:\Box}\), is known to be undecidable: it has been demonstrated (Pierce, 1992) that certain subtype queries cause the subtype check to loop. As System \(\mathrm{CC}_{<:\Box}\) is based on System F\(_{<:}\), it encounters the same undecidability problem that emerges from subtyping between type functions. Despite this, the extensions introduced by \(\mathrm{CC}_{<:\Box}\) and \(\mathrm{BI}_t\) do not degrade the system's decidability; in other words, the extensions should not lead to non-termination in more situations than in System F\(_{<:}\). To demonstrate this idea formally (in Theorems 5.9 and 5.10), we prove that typechecking a program \(t\) in \(\mathrm{BI}_t\) terminates as long as typechecking the program \(t'\) does, where \(t'\) is derived from \(t\) by erasing all \(\mathrm{CC}_{<:\Box}\)-related constructs.

We now define the function \(\varepsilon(\cdot)\) to erase the capture sets and the boxes. It traverses the type and drops all \(\mathrm{CC}_{<:\Box}\)-specific constructs:

\[
\begin{aligned}
\varepsilon(C\,S) &= \varepsilon(S)\\
\varepsilon(\Box\,T) &= \varepsilon(T)\\
\varepsilon(\forall(z\colon T)U) &= \forall(z\colon \varepsilon(T))\,\varepsilon(U)\\
\varepsilon(\forall[X <: S]T) &= \forall[X <: \varepsilon(S)]\,\varepsilon(T)\\
\varepsilon(X) &= X\\
\varepsilon(\top) &= \top
\end{aligned}
\]

When applied to a context \(\Gamma\), \(\varepsilon(\Gamma)\) is straightforwardly defined as the context in which \(\varepsilon(\cdot)\) is applied to each of the bound types.
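For instance, erasing a capturing function type (our own worked example, with hypothetical capabilities io and fs):

\[
\varepsilon\bigl(\{io\}\ \forall(x\colon \Box\,\{fs\}\,\mathsf{String})\ \{io\}\,\mathsf{Unit}\bigr)
\;=\;
\forall(x\colon \mathsf{String})\ \mathsf{Unit}
\]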
The following theorems demonstrate the termination conditions of typechecking in \(\text{BI}_{t}\). Here, isSubtype and isSubtype\({}_{\text{BI}}\) are the subtype checking procedures of System \(\text{F}_{<:}\) and \(\text{BI}_{t}\), respectively. Similarly, typecheck and typecheck\({}_{\text{BI}}\) are the typechecking procedures. They establish that the box adaptation and typechecking procedures for any program \(t\) in \(\text{BI}_{t}\) will terminate, provided that the subtyping and typechecking procedures for the erased program \(\varepsilon(t)\) terminate in System \(\text{F}_{<:}\). The simple-formed assumption rules out types that have nested boxes (e.g., \(\square\,C\,\square\,C^{\prime}\,S\)), which do not make sense in practice. Section A.5 presents the detailed definitions and the proof.

**Theorem 5.9** (Conditional termination of box adaptation).: _Given a well-formed and simple-formed context \(\Gamma\) and simple-formed types \(T,U\) that are well-formed in the environment, isSubtype\({}_{BI}(\Gamma,T,U)\) terminates as long as isSubtype\((\varepsilon(\Gamma),\varepsilon(T),\varepsilon(U))\) does._

**Theorem 5.10** (Conditional termination of typechecking in \(\text{BI}_{t}\)).: _Given a well-formed and simple-formed context \(\Gamma\) and a term \(t\) that is well-formed in the environment and whose type annotations are simple-formed, if typecheck\((\varepsilon(\Gamma),\varepsilon(t))\) terminates, then typecheck\({}_{BI}(\Gamma,t)\) terminates too._

## 6. Implementing capture checking and box inference

This section discusses the implementation of capture checking and box inference, based on our experience with Scala 3, which has practical support for capture checking available as an experimental language feature. Although the syntax-directed box inference calculi directly give rise to a procedure for box inference, the challenge lies in its efficient implementation and its integration into an existing language implementation. To begin, we briefly introduce Scala's capture checking implementation, and based on its framework, we proceed to discuss the implementation of box inference.

### Scala's Implementation of Capture Checking

In Scala 3, capture checking is available as an experimental language feature users may choose to turn on. Since we expect capture checking to be an extension to the existing compiler that can be easily enabled or disabled, it is infeasible to deeply integrate it into the existing logic of type checking. Therefore, Scala 3 implements capture checking as a standalone phase after the typer. It turns out that it is possible to do vanilla type checking and capture checking separately and in sequence. During typing, capture set annotations in types are simply ignored, with types being derived and checked normally. Then, in the capture checking phase, we compute the captured references of closures, infer missing capture sets, and check the sets against the annotations. This phase is where box inference is involved. Capture set inference is also a crucial part of the implementation of capture checking, but its formal treatment is left as future work.

The capture checking phase computes the captured references of a closure (which corresponds to the \(\operatorname{cv}\left(\cdot\right)\) computation in the fun and tfun rules). Instead of implementing a \(\operatorname{cv}\left(\cdot\right)\) function that traverses the body of a closure each time we assign capture sets to functions, we track the captured references of a term while checking it. To do this, a capturing environment is created for each closure, and the environments of nested closures are chained. References are pushed into these environments as we check the bodies of these closures. When checking the capture set in the type of a closure, we retrieve the captured references from the corresponding environment. Let us inspect how capture checking behaves in the following example:

```
def foo(io: {*} IO) =
  def bar(): Unit =
    io.use()
    fs.use()
    ()
  bar
```

The function is capture-checked as {fs} (io: {*} IO) -> {io, fs} () -> Unit. When checking the body of bar, we have two capturing environments created for bar and foo respectively, and the environment for bar has a pointer to foo's, since they are nested. The references io and fs are pushed into bar's environment, after which they are propagated to the nesting environment for foo. However, io is not included in foo's environment because it is an argument of foo. This is in line with the definition of \(\operatorname{cv}\left(\cdot\right)\).
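This environment chaining can be pictured with the following sketch, an illustrative Python model under our own naming, not the compiler's actual data structures:

```python
class CaptureEnv:
    """One capturing environment per closure; nested closures chain
    to their enclosing closure's environment via `outer`."""
    def __init__(self, params, outer=None):
        self.params = set(params)    # variables bound by this closure
        self.captured = set()        # references captured so far
        self.outer = outer

    def record(self, ref):
        """Push a reference and propagate it outward until it reaches
        the closure that binds it (mirroring the cv(.) computation)."""
        env = self
        while env is not None and ref not in env.params:
            env.captured.add(ref)
            env = env.outer

# For the example above: foo binds io, bar binds nothing.
foo_env = CaptureEnv({"io"})
bar_env = CaptureEnv(set(), outer=foo_env)
bar_env.record("io")   # stops at foo, which binds io
bar_env.record("fs")   # propagates all the way out
assert bar_env.captured == {"io", "fs"} and foo_env.captured == {"fs"}
```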
Explicit capture annotations impose constraints on the capture sets, which are enforced by comparing the derived capture sets against the explicit annotations. This is where box inference is involved: we want to check whether the derived type of a term conforms to the expectation, with box-related constructs inferred and completed. The details of the box inference implementation are discussed in the following section.

Apart from box inference, we have to infer the capture sets omitted by the users. To this end, we create _capture set variables_ for the capture sets to be inferred, and record the subcapturing relationships as we check the program. The algorithm for capture set inference is an important part of the implementation of capture checking, and involves considerable complexity to fit in with the existing architecture of the compiler. The formalization of capture set inference is worth investigating too, and is left as possible future work.

### Implementing Box Inference

#### 6.2.1. Box Adaptation as a Separate Check

The central part of the box inference implementation is box adaptation. In the box inference calculi, box adaptation is proposed as a replacement for subtyping checks, which simultaneously adapts the mismatched boxes and performs regular subtyping reasoning. However, completely rewriting the subtype comparison logic and reframing it with box adaptation is arguably too heavy a change, and makes the implementation less pluggable than it should be. It is therefore favorable to implement box adaptation as an independent step, separate from subtype comparison. Given the derived type tp and the expected type pt, before we invoke the subtype comparison isSubtype(tp, pt) to check that the derived type conforms to the expectation, we first call box adaptation adapt(tp, pt) to transform the type into tp1. During this, we traverse the type recursively and heal the mismatches of boxes as much as possible. Finally, we invoke the subtype comparison to check whether the adapted type tp1 conforms to the expectation. For example, given the actual type {} (op: {io} Unit -> Unit) -> Unit and the expected type {} (op: box {io} Unit -> Unit) -> Unit, box adaptation transforms the actual type into {io} (op: box {io} Unit -> Unit) -> Unit (which simulates an eta-expansion and an unbox on the term level). Now we invoke the subtype comparison on the adapted type and the expectation, which then reports a type-mismatch error.

#### 6.2.2. Tracking Additional Captured References

Now we propose an approach to implement box adaptation by tracking _additional_ captured references during the adaptation, which fits in well with the architecture of capture checking. The basis of the proposed approach is the following observation.

_Observation._ Box adaptation only makes the adapted term capture _more_ references (i.e., references may be added to the capture set \(C\) in \(\Gamma\vdash T\rhd U\leadsto U^{\prime}\mid C\), but never removed), unless the derivation tree of box adaptation derives an empty additional capture set:

\[\Gamma\vdash C_{1}\,S_{1}\rhd C_{2}\,S_{2}\leadsto T\mid\{\}\]

## 7. Related Work

The calculus proposes to track the capability for throwing an exception, thereby ensuring that these capabilities will not escape their intended scope.
The report contains an informal discussion of algorithmic typing and box inference, which outlines the general idea of the box inference implementation in Scala 3. Subsequently, Odersky et al. (2022) elaborate the capture calculus and present the full metatheory. Boruch-Gruszecki et al. (2021) present a similar calculus that tracks captures in types. By contrast, it does not have boxes, and type variables could potentially be impure and require being tracked.

### Algorithmic Typing

Owing to their close connection to compiler construction, the design and decidability of (semi-)algorithmic type systems have been the subject of extensive research. Syntax-directed type systems have been proposed and studied for System F and its variant F\({}_{<:}\), which incorporates subtyping (Baldan et al., 1999; Hindley, 1969; Milner, 1978; Pierce, 1992, 2002). System F\({}_{<:}\) presents challenges in algorithmic typing due to the ambiguity caused by the transitivity and subsumption rules. To circumvent this, these rules are inlined into other rules to create a syntax-directed ruleset. As CC\({}_{<:\square}\) is based on System F\({}_{<:}\), our system CC\({}_{<:\square-}\) uses the same strategy to achieve syntax-directedness, but it also modifies the rules to facilitate capturing types and employs algorithmic avoidance in (alg-let). However, it should be noted that the algorithmic typing of System F\({}_{<:}\) is shown to be undecidable (Baldan et al., 1999; Pierce, 1992, 2002). Furthermore, (Emir et al., 2006; Kennedy and Pierce, 2006) investigate algorithmic type systems in the presence of inheritance and variance, which essentially formalizes typechecking algorithms in object-oriented scenarios. There are other research endeavors that develop algorithmic typing for a variety of type systems (Bruce et al., 1993; Hu, 2019; Nieto, 2017).

## 8. Conclusion

In this paper, we develop a family of semi-algorithmic box inference systems, which insert box operations to heal the mismatches of boxes in the input program. We start with the semi-algorithmic variant of the capture calculus, named System CC\({}_{<:\square-}\), which makes the typing and subtyping rules of the capture calculus syntax-directed. Based on it, we develop the box inference system BI\({}_{t}\), which transforms terms by inserting the missing box-related constructs so that the resulting term is well-typed in the capture calculus. Taking one step further, we propose BI\({}_{T}\) and show that it is possible to perform box inference at the type level, operating only on types and not computing the transformed terms, but with expressive power equivalent to the term-level system BI\({}_{t}\). In the metatheory, we establish the relationships between these systems, showing that CC\({}_{<:\square-}\) is equivalent to CC\({}_{<:\square}\), that BI\({}_{t}\) has strictly more expressive power than CC\({}_{<:\square-}\), and that BI\({}_{t}\) and BI\({}_{T}\) are equivalent.

###### Acknowledgements.

We thank Aleksander Boruch-Gruszecki, Jonathan Immanuel Brachthauser, Edward Lee and Ondrej Lhotak for their insightful suggestions and constructive feedback during the development of this work.
2308.10980
r-process Abundance Patterns in the Globular Cluster M92
Whereas light element abundance variations are a hallmark of globular clusters, there is little evidence for variation in neutron-capture elements. A significant exception is M15, which shows a star-to-star dispersion in neutron-capture abundances of at least one order of magnitude. The literature contains evidence both for and against a neutron-capture dispersion in M92. We conducted an analysis of archival Keck/HIRES spectra of 35 stars in M92, 29 of which are giants, which we use exclusively for our conclusions. M92 conforms to the light element abundance variations typical of massive clusters. Like other globular clusters, its neutron-capture abundances were generated by the r-process. We confirm a star-to-star dispersion in the r-process. Unlike M15, the dispersion is limited to "first-generation" (low Na, high Mg) stars, and the dispersion is smaller for Sr, Y, and Zr than for Ba and the lanthanides. This is the first detection of a relation between light element and neutron-capture abundances in a globular cluster. We propose that a source of the main r-process polluted the cluster shortly before or concurrently with the first generation of star formation. The heavier r-process abundances were inhomogeneously distributed while the first-generation stars were forming. The second-generation stars formed after several crossing times (~0.8 Myr); hence, the second generation shows no r-process dispersion. This scenario imposes a minimum temporal separation of 0.8 Myr between the first and second generations.
Evan N. Kirby, Alexander P. Ji, Mikhail Kovalev
2023-08-21T18:55:24Z
http://arxiv.org/abs/2308.10980v1
# \(r\)-process Abundance Patterns in the Globular Cluster M92

###### Abstract

Whereas light element abundance variations are a hallmark of globular clusters, there is little evidence for variation in neutron-capture elements. A significant exception is M15, which shows a star-to-star dispersion in neutron-capture abundances of at least one order of magnitude. The literature contains evidence both for and against a neutron-capture dispersion in M92. We conducted an analysis of archival Keck/HIRES spectra of 35 stars in M92, 29 of which are giants, which we use exclusively for our conclusions. M92 conforms to the light element abundance variations typical of massive clusters. Like other globular clusters, its neutron-capture abundances were generated by the \(r\)-process. We confirm a star-to-star dispersion in the \(r\)-process. Unlike M15, the dispersion is limited to "first-generation" (low Na, high Mg) stars, and the dispersion is smaller for Sr, Y, and Zr than for Ba and the lanthanides. This is the first detection of a relation between light element and neutron-capture abundances in a globular cluster. We propose that a source of the main \(r\)-process polluted the cluster shortly before or concurrently with the first generation of star formation. The heavier \(r\)-process abundances were inhomogeneously distributed while the first-generation stars were forming. The second-generation stars formed after several crossing times (\(\sim 0.8\) Myr); hence, the second generation shows no \(r\)-process dispersion. This scenario imposes a minimum temporal separation of 0.8 Myr between the first and second generations.

## 1 Introduction

Globular clusters (GCs) were once thought to be the exemplars of single stellar populations, in which all the stars had the same age and elemental composition. Cracks in this perception appeared in the 1970s. Kraft (1979) discussed the growing photometric and spectroscopic evidence for large star-to-star variations in the abundances of carbon and nitrogen within individual clusters. At the time, there was a debate on the origin of the light element variations. Were they a sign of internal mixing or primordial pollution?

Peterson's (1980) discovery of sodium variations in M13 marked a paradigm shift in the theory of the chemical evolution of GCs. Sodium variations were thought to be evidence of primordial pollution because sodium would not be manufactured by internal processes in the low-mass stars that are observable today. Peterson further saw evidence for an anti-correlation between sodium and oxygen abundances in M13. Subsequent spectroscopic observations of other GCs confirmed the anti-correlation between sodium and cyanogen (Cottrell & Da Costa, 1981) and between sodium and oxygen (Kraft et al., 1992).

Due to concerted observations of dozens of clusters (e.g., Carretta, 2019; Masseron et al., 2019), we know that chemical abundance inhomogeneities are ubiquitous in globular clusters. (See Milone & Marino 2022 for a review.) In fact, Carretta et al. (2010) suggested that a GC be defined as an object that exhibits the Na-O anti-correlation. Many clusters exhibit other element abundance variations, such as the Mg-Al anti-correlation (Kraft et al., 1997). Some clusters even show variations in the abundance of K (Cohen et al., 2011; Cohen & Kirby, 2012; Mucciarelli et al., 2012). With just a few exceptions, K is the heaviest element that typically shows any variation.
Iron and iron-peak elements have immeasurably low dispersion in most GCs (e.g., Willman & Strader, 2012). The major exceptions are \(\omega\) Centauri, M54, and Terzan 5. The first two are thought to be the nuclei of accreted or accreting dwarf galaxies (Bekki & Freeman, 2003), but Terzan 5 is more mysterious (e.g., Origlia et al., 2013). A few more GCs have small Fe abundance variations (Marino et al., 2009, 2015; Simmerer et al., 2013; Johnson et al., 2015, among others). These GCs can often be identified by a splitting of the red giant branch (RGB) in the color-magnitude diagram (Milone et al., 2017).

The light element abundance variations do not fit well into theories of GC formation. High-temperature hydrogen burning nearly explains the pattern of C, N, O, Na, Mg, and Al abundances (Denisenkov & Denisenkova, 1990; Langer & Hoffman, 1995; Cavallo et al., 1996). However, no theory manages to fit this nucleosynthetic process into a framework of star formation. It seems likely that the stars enhanced in Na and Al and depleted in O and Mg formed after the stars with "normal" abundance patterns. However, each candidate source for high-temperature hydrogen burning, such as asymptotic giant branch (AGB) stars or fast-rotating massive stars, fails to explain some observable property of GCs (Bastian & Lardo, 2018).

An even rarer phenomenon is variation in neutron-capture elements. The prototypical GC for this phenomenon is M15. Sneden et al. (1997) discovered a strong correlation between Ba and Eu abundances in M15. The presence of a correlation implies that both elements vary from star to star. Furthermore, they were probably generated in the same nucleosynthetic event or events. The [Eu/Ba] ratio indicated that the elements were created in the \(r\)-process rather than the \(s\)-process. This discovery added new complexity to the study of GCs because \(s\)-process variation might be explained by AGB stars, a potential site of high-temperature hydrogen burning that could explain the light element abundance variations. Instead, there must be an additional nucleosynthetic site to explain the \(r\)-process abundance variations. Sneden et al. (2000a), Otsuki et al. (2006), Sobeck et al. (2011), and Worley et al. (2013) confirmed the \(r\)-process variation in M15.

The neutron-capture abundances in M15 do not have any apparent connection to the light element abundances. In other words, stars with enhanced Na or depleted Mg do not preferentially show high or low abundances of the \(r\)-process elements. This observation further rules out a direct connection between the source of the light elements and the source of the \(r\)-process. Furthermore, it means that the abundance variations in the \(r\)-process and/or the light elements are not the result of temporal evolution. For example, consider a scenario in which Na became enhanced and Mg became depleted over time in M15. If the abundances of Ba and Eu also grew over time, they should show a correlation with Na and an anti-correlation with Mg. However, no such patterns are observed.

The difficulty in explaining the abundance trends in M15 is reflected in the dearth of models to describe them. Tsujimoto & Shigeyama (2014) proposed that a neutron star merger polluted M15 long after the cluster finished forming stars. The stars closest to the neutron star merger became polluted with the \(r\)-process, whereas those farther away received a lower "dose." This theory explains why the \(r\)-process variations do not correlate with the light element abundances.
The theory also presented an observational test. Stars that evolve to the present-day RGB would undergo the first dredge-up, which would deplete the surface abundances of externally polluted material. Kirby et al. (2020) showed that stars on the main sequence have the same average Ba abundance as stars on the RGB, which ruled out an external pollution scenario. Tarumi et al. (2021) proposed instead that a lanthanide-rich event polluted the cluster while it was still forming. In this scenario, the event happens external to the cluster, and the stars must form quickly so that the cluster gas does not mix and homogenize before the star formation finishes.

M15 remained unique in showing \(r\)-process abundance variations until Roederer & Sneden (2011) reported that M92 also showed correlated variations in the abundances of Y, Zr, La, and Eu, among other neutron-capture elements. Their results were based on medium-resolution spectra from the WIYN/Hydra multi-object spectrograph (Barden et al., 1994). As in M15, the abundance ratios were indicative of the \(r\)-process, and the variations did not correlate with the light element abundance variations. It is interesting that M92 was the next cluster to show \(r\)-process variations because it is similar to M15 in several respects. Both clusters have nearly the same metallicity ([Fe/H] = \(-\)2.4). M15 is just one magnitude more luminous than M92 (van den Bergh et al., 1991; Durrell & Harris, 1993; Harris, 1996; VandenBerg et al., 2016). M15 is core-collapsed, whereas M92 has a high central concentration but falls shy of being core-collapsed (Trager et al., 1995; McLaughlin & van der Marel, 2005).

Roederer (2011) showed evidence that M5 and NGC 3201 also exhibit \(r\)-process dispersions. He also found marginal evidence for \(r\)-process dispersion in M3 and M13. That study was based on the quantification of correlations between different \(r\)-process abundances because any correlation between two or more elements implies that there is a dispersion among them. Roederer's study was based on a compilation of mostly high-resolution spectroscopic abundances from the literature. In some GCs, including M3, M5, and M13, the spectra came from multiple, heterogeneous studies.

In a study of 12 stars in M92, Cohen (2011) could not confirm Roederer & Sneden's (2011) observation of \(r\)-process variations. Cohen used Keck/HIRES (Vogt et al., 1994) spectra. She showed that neutron-capture absorption line strengths of stars with similar effective temperatures and surface gravities were similar to each other.
As a result, she concluded that M92 \begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline Star & Gaia Source ID & RA (J2000) & Dec (J2000) & \(G_{0}\) & \(({\rm BP-RP})_{0}\) & \(({\rm BP-K})_{0}\) & \(v_{r}\) (km s\({}^{-1}\)) \\ \hline III-13 & 1360410343884596096 & \(17^{h}17^{m}21^{s}\!\!.71\) & \(+43^{\circ}12^{\prime}53\!\!.^{\prime\prime}4\) & 11.56 & 1.56 & 3.391 & \(-110.652\) \\ VII-18 & 1360384647101112832 & \(17^{h}16^{m}37\!\!.48\) & \(+43^{\circ}06^{\prime}15\!\!.^{\prime\prime}6\) & 11.68 & 1.50 & 3.251 & \(-118.552\) \\ X-49 & 1360404399652677760 & \(17^{h}17^{m}12^{s}\!\!.80\) & \(+43^{\circ}05^{\prime}41\!\!.^{\prime\prime}8\) & 11.77 & 1.47 & 3.221 & \(-132.417\) \\ III-65 & 1360405877121517568 & \(17^{h}17^{m}14^{s}\!\!.12\) & \(+43^{\circ}10^{\prime}46\!\!.^{\prime\prime}2\) & 12.01 & 1.41 & 3.091 & \(-118.964\) \\ VII-122 & 1360405262943998208 & \(17^{h}16^{m}57\!\!.37\) & \(+43^{\circ}07^{\prime}23\!\!.^{\prime\prime}7\) & 12.02 & 1.41 & 3.091 & \(-125.579\) \\ II-53 & 1360405808402036608 & \(17^{h}17^{m}13^{s}\!\!.06\) & \(+43^{\circ}09^{\prime}48\!\!.^{\prime\prime}3\) & 12.04 & 1.40 & 3.081 & \(-124.641\) \\ XII-8* & 1360218208524710272 & \(17^{h}17^{m}31^{s}\!\!.71\) & \(+43^{\circ}05^{\prime}41\!\!.^{\prime\prime}4\) & 12.36 & 1.31 & 2.91 & \(-119.156\) \\ V-45 & 1360408522818465536 & \(17^{h}16^{m}49\!\!.85\) & \(+43^{\circ}10^{\prime}41\!\!.^{\prime\prime}2\) & 12.42 & 1.31 & 2.91 & \(-121.735\) \\ XI-19 & 1360216765415718528 & \(17^{h}17^{m}18\!\!.74\) & \(+43^{\circ}04^{\prime}50\!\!.^{\prime\prime}9\) & 12.48 & 1.30 & 2.881 & \(-117.273\) \\ XI-80 & 1360404605811200256 & \(17^{h}17^{m}14^{s}\!\!.67\) & \(+43^{\circ}06^{\prime}24\!\!.^{\prime\prime}7\) & 12.58 & 1.28 & 2.861 & \(-125.783\) \\ II-70* & 1360405812699772416 & \(17^{h}17^{m}16\!\!.53\) & \(+43^{\circ}10^{\prime}449\!\!.^{\prime\prime}9\) & 12.70 & 1.25 & 2.771 & \(-111.765\) \\ IV-94* & 1360408694620020480 & \(17^{h}17^{m}05\!\!.87\) & \(+43^{\circ}10^{\prime}17\!\!.^{\prime\prime}2\) & 12.71 & 1.24 & 2.751 & \(-119.801\) \\ I-67 & 1360404811969647616 & \(17^{h}17^{m}21^{s}\!\!.23\) & \(+43^{\circ}08^{\prime}27\!\!.^{\prime\prime}0\) & 12.96 & 1.21 & 2.721 & \(-121.490\) \\ III-82 & 1360408728979777176 & \(17^{h}17^{m}08\!\!.06\) & \(+43^{\circ}10^{\prime}449\!\!.^{\prime\prime}9\) & 12.96 & 1.22 & 2.751 & \(-121.772\) \\ IV-10 & 1360409695350418176 & \(17^{h}16^{m}57\!\!.72\) & \(+43^{\circ}14^{\prime}11\!\!.^{\prime\prime}4\) & 13.06 & 1.21 & 2.731 & \(-118.071\) \\ XII-34* & 1360404747547336448 & \(17^{h}17^{m}21^{s}\!\!.57\) & \(+43^{\circ}07^{\prime}400\!\!.^{\prime\prime}8\) & 13.08 & 1.15 & 2.581 & \(-115.400\) \\ IV-79 & 1360408385382348672 & \(17^{h}17^{m}07\!\!.80\) & \(+43^{\circ}10^{\prime}25\!\!.^{\prime\prime}1\) & 13.11 & 1.20 & 2.701 & \(-121.686\) \\ IX-13 & 1360404163432211456 & \(17^{h}16^{m}56\!\!.12\) & \(+43^{\circ}04^{\prime}07\!\!.^{\prime\prime}3\) & 13.66 & 1.11 & 2.511 & \(-126.466\) \\ VIII-24* & 1360404953706355200 & \(17^{h}16^{m}50\!\!.34\) & \(+43^{\circ}05^{\prime}53\!\!.^{\prime\prime}1\) & 13.79 & 1.03 & 2.311 & \(-119.054\) \\ X-20 & 1360216662336507648 & \(17^{h}17^{m}13\!\!.33\) & \(+43^{\circ}04^{\prime}13\!\!.^{\prime\prime}4\) & 15.30 & 0.95 & 2.171 & \(-118.018\) \\ S2710 & 1360404330933201536 & \(17^{h}17^{m}15\!\!.71\) & \(+43^{\circ}05^{\prime}32\!\!.^{\prime\prime}4\) & 15.32 & 0.95 & 2.161 & \(-120.568\) \\ VI-90 & 1360408355320576128 & \(17^{h}16^{m}54\!\!.21\) & \(+43^{\circ}09^{\prime}21\!\!.^{\prime}1\) & 15.42 & 0.95 & 2.151 & 
\(-126.224\) \\ \hline \end{tabular} \end{table} Table 1: Coordinates, Photometry, and Radial Velocities

does not exhibit any star-to-star \(r\)-process dispersion. She attributed her discrepant result to the higher spectral resolution and signal-to-noise ratio (S/N) of the Keck/HIRES spectra compared to Roederer & Sneden's WIYN/Hydra spectra. Roederer & Thompson (2015) also found that spurious correlations between neutron-capture abundance ratios could be introduced by errors in atmospheric parameters, especially effective temperature. The unphysical correlations persist even when the neutron-capture abundances are normalized to iron. For example, errors in effective temperature would cause an apparent correlation between [Eu/Fe] and [La/Fe] among a group of stars, even if the abundances of Fe, La, and Eu were constant. Roederer & Thompson (2015) suggested that earlier reports of neutron-capture dispersions in all "classical" globular clusters except for M15 were potentially the result of this spurious correlation.

We revisit the question of \(r\)-process abundance variation in M92. We analyzed all 35 stars available in the Keck/HIRES archive. Our study uses higher-resolution spectra than those of Roederer & Sneden (2011), and it has a larger sample size than that of Cohen (2011). We describe the spectra in Section 2. We assign stellar parameters, like effective temperature, to the stars in Section 3. Section 4 describes the procedure by which we measure abundances, and Section 5 points out some trends in the abundances. We propose a scenario for the \(r\)-process abundance variation in Section 6, and we summarize our study in Section 7.

## 2 Observations

### Archival Spectra

Nearly all of the data from this project come from the Keck Observatory Archive (KOA). We queried the archive using its web interface on 2020 February 5. We searched for all publicly available HIRES spectra within 30 arcmin of the center of M92. We also queried the archive again on 2023 July 18 to confirm that no additional spectra were added since our original query. The KOA provides both raw and extracted HIRES spectra. We used the extracted (one-dimensional, wavelength-calibrated, sky-subtracted) spectra provided in FITS format on the archive.

We paired each spectrum with photometry from _Gaia_ Data Release 3 (Gaia Collaboration et al., 2022) and 2MASS (Skrutskie et al., 2006). The coordinates given in the metadata from the KOA are not always precise enough to give an unambiguous match to a star in the _Gaia_ catalog (\(G\), BP, and RP magnitudes) and 2MASS catalog (\(K_{s}\) magnitude). In some cases, we looked up the star in the SIMBAD astronomical database (Wenger et al., 2000) by the identifier assigned by the HIRES observer. We used the coordinates of the star listed in SIMBAD to match to _Gaia_ and 2MASS.

_Gaia_ measured a significantly different proper motion for one star in the HIRES archive (S1521) than for the other M92 stars. We excluded S1521 from further analysis because we assumed that it is not a cluster member, and we verified that the other stars in our sample have proper motions consistent with cluster membership. We also excluded S4375 because it is faint (\(G=18.80\)).

Table 1 lists the coordinates of the stars along with photometry. The last column of Table 1 gives the \((\mathrm{BP}-K_{s})_{0}\) color, where available. The magnitudes and colors in the table were corrected for extinction and reddening assuming \(E(B-V)=0.0191\) (Schlafly & Finkbeiner, 2011).
The _Gaia_ magnitudes were corrected following the color-extinction relations provided by the Gaia Collaboration et al. (2018). The \(K_{s}\) magnitudes were corrected assuming \(A_{K}/E(B-V)=0.310\)(Fitzpatrick, 1999). Table 2 gives the observing log organized by star. Most stars were observed on multiple dates. The name of the principal investigator (PI) and exposure times for each exposure are given for each date. The first row for each star includes the total exposure time and the S/N per pixel. The S/N is calculated from the pixels between 4650 A and 4800 A in the continuum-divided spectrum. We sigma clipped this spectral range by excluding pixels more than 3.0 standard deviations above the continuum or 0.5 standard deviations below the continuum. The asymmetry of the sigma clipping is intended to exclude strong absorption lines from the measurement of S/N. Finally, the S/N is calculated as the inverse of the median absolute deviation of the spectrum from its mean. Figure 1 shows the _Gaia_ color-magnitude diagram (CMD) for M92. The figure shows the stars with HIRES spectra in rainbow colors. The colors correspond to the effective temperatures (Section 3). The symbol shape corresponds to evolutionary state. We determined the evolutionary state by close inspection of the CMD. A sufficiently zoomed-in CMD shows a clear delineation of the AGB from the RGB. (This distinction is not readily apparent in the zoomed-out CMD shown in Figure 1.) Our sample contains 35 stars: 24 RGB stars, 5 AGB stars, and 6 sub-giants. While we report the abundances of the sub-giants (Section 4.3), they do not play a role in our discussion (Section 6) because their abundances have larger uncertainties. In the tables and figure legends in this paper, AGB stars are indicated with asterisks next to their names. The small points in Figure 1 show stars within 7 arcmin of the cluster center and with proper motions within 1 mas/yr of the median proper motion. The right axis of the figure gives absolute magnitudes assuming an apparent distance modulus \((m-M)_{V}=14.74\)(VandenBerg et al., 2016), which we corrected for extinction to \((m-M)_{0}=14.69\). \begin{table} \begin{tabular}{c c c c c c c} \hline \hline \multicolumn{1}{c}{ Star} & Date & PI & Slit Width (\({}^{\prime\prime}\)) & Individual Exp. (s) & Tot. Exp. Time (min.) & S/N (pix\({}^{-1}\)) \\ \hline III-13 & 2005 May 29 & M. Bolte & 0.861 & \(3\times 1200\) & 60 & 85 \\ VII-18 & 1997 May 12 & J. Cohen & 0.861 & 1800 & 57 & 95 \\ & 2002 Sep 27 & J. Cohen & 1.148 & \(2\times 400\) & & \\ & 2003 Jun 24 & J. Cohen & 1.148 & 400 & & \\ & 2006 Apr 18 & J. Cohen & 1.148 & 400 & & \\ X-49 & 2002 Sep 27 & J. Cohen & 1.148 & \(2\times 400\) & 13 & 82 \\ III-65 & 2002 Sep 28 & J. Cohen & 1.148 & \(3\times 500\) & 195 & 114 \\ & 2002 Sep 29 & J. Cohen & 1.148 & 500 & & \\ & 2005 May 30 & J. Cohen & 0.861 & \(3\times 1800\) & & \\ & 2006 Apr 17 & J. Cohen & 1.148 & \(400+3\times 900\) & & \\ & 2006 Apr 18 & J. Cohen & 1.148 & \(2\times 600\) & & \\ VII-122 & 2002 Sep 30 & J. Cohen & 1.148 & \(2\times 500\) & 20 & 97 \\ & 2003 Aug 21 & J. Cohen & 1.148 & \(200\) & & \\ II-53 & 2002 May 01 & J. Cohen & 1.148 & \(2\times 600\) & 120 & 127 \\ & 2002 May 03 & J. Cohen & 0.861 & \(3\times 400+600\) & & \\ & 2002 Sep 27 & J. Cohen & 1.148 & \(2\times 400\) & & \\ & 2006 Apr 17 & J. Cohen & 1.148 & \(400+5\times 600\) & & \\ XII-8* & 2005 May 29 & M. Bolte & 0.861 & \(2\times 1800\) & 140 & 181 \\ & 2005 May 30 & M. Bolte & 0.861 & 1800 & & \\ & 2006 Apr 18 & M. 
Bolte & 1.148 & 900 & & \\ & 2009 May 10 & M. Bolte & 1.148 & \(3\times 300\) & & \\ & 2011 Jun 06 & M. Bolte & 1.148 & \(1200\) & & \\ V-45 & 2005 May 30 & F. Chaffee & 0.861 & \(2\times 1800\) & 165 & 180 \\ & 2005 Jul 04 & F. Chaffee & 0.861 & \(3\times 1800\) & & \\ & 2009 May 10 & F. Chaffee & 1.148 & \(3\times 300\) & & \\ XI-19 & 2002 Sep 30 & J. Cohen & 1.148 & \(2\times 500\) & 127 & 151 \\ & 2005 May 30 & J. Cohen & 0.861 & \(3\times 1800\) & & \\ & 2011 Jun 06 & J. Cohen & 1.148 & \(1200\) & & \\ XI-80 & 2002 Sep 29 & J. Cohen & 1.148 & \(2\times 500\) & 42 & 118 \\ & 2011 Jun 06 & J. Cohen & 1.148 & \(1500\) & & \\ II-70* & 2002 May 01 & J. Cohen & 1.148 & \(300\) & 25 & 99 \\ & 2002 Sep 27 & J. Cohen & 1.148 & \(2\times 600\) & & \\ IV-94* & 2002 Sep 27 & J. Cohen & 1.148 & \(2\times 600\) & 37 & 98 \\ & 2003 Jun 26 & J. Cohen & 1.148 & \(400\) & & \\ & 2008 Sep 23 & J. Cohen & 1.148 & \(600\) & & \\ I-67 & 2002 Sep 28 & J. Cohen & 1.148 & \(2\times 700\) & 23 & 102 \\ \hline \end{tabular} \end{table} Table 2: Archival Observations \begin{table} \begin{tabular}{c c c c c c c} \hline \hline Star & Date & PI & Slit Width (\({}^{\prime\prime}\)) & Individual Exp. (s) & Tot. Exp. Time (min.) & S/N (pix\({}^{-1}\)) \\ \hline III-82 & 2002 Sep 28 & J. Cohen & 1.148 & \(2\times 700\) & 100 & 152 \\ & 2003 Jun 24 & J. Cohen & 1.148 & \(2\times 500\) & & \\ & 2006 Apr 18 & J. Cohen & 1.148 & \(4\times 900\) & & \\ IV-10 & 2011 Aug 07 & J. Cohen & 1.148 & \(500+1200\) & 28 & 71 \\ XII-34* & 2003 Jun 25 & J. Cohen & 1.148 & \(2\times 600\) & 53 & 141 \\ & 2011 Aug 06 & J. Cohen & 1.148 & \(800+1200\) & & \\ IV-79 & 2011 Aug 07 & J. Cohen & 1.148 & \(2\times 1200\) & 40 & 112 \\ IX-13 & 2002 Sep 29 & J. Cohen & 1.148 & \(4\times 500\) & 33 & 88 \\ VIII-24* & 2002 Sep 30 & J. Cohen & 1.148 & \(5\times 600\) & 50 & 106 \\ X-20 & 2008 Jun 15 & J. Cohen & 1.148 & \(1000\) & 17 & 47 \\ S2710 & 2008 Aug 20 & J. Cohen & 1.148 & \(3\times 1500\) & 75 & 71 \\ VI-90 & 2008 Aug 20 & J. Cohen & 1.148 & \(2\times 1500\) & 50 & 74 \\ S2265 & 2008 Aug 20 & J. Cohen & 1.148 & \(3\times 1800\) & 90 & 76 \\ VIII-45 & 2008 Jul 07 & J. Cohen & 1.148 & \(3\times 1800\) & 90 & 85 \\ VII-28 & 2008 Jun 10 & J. Cohen & 1.148 & \(2\times 1800\) & 90 & 79 \\ & 2008 Jun 11 & J. Cohen & 1.148 & \(1800\) & & \\ G17181\_0638 & 2003 Jun 25 & J. Cohen & 1.148 & \(2\times 1200\) & 120 & 50 \\ & 2003 Jun 26 & J. Cohen & 1.148 & \(4\times 1200\) & & \\ C17333\_0832 & 2003 Jun 26 & J. Cohen & 1.148 & \(2\times 1200\) & 300 & 69 \\ & 2003 Aug 23 & J. Cohen & 1.148 & \(6\times 1200\) & & \\ & 2009 Aug 27 & J. Cohen & 1.148 & \(1200+4\times 1800\) & & \\ S3108 & 2008 Sep 21 & J. Cohen & 1.148 & \(2\times 1800\) & 150 & 57 \\ & 2008 Sep 22 & J. Cohen & 1.148 & \(2\times 1800\) & & \\ & 2008 Sep 23 & J. Cohen & 1.148 & \(1800\) & & \\ S652 & 2008 Jun 15 & J. Cohen & 1.148 & \(4\times 1800\) & 150 & 52 \\ & 2008 Jul 06 & J. Cohen & 1.148 & \(1800\) & & \\ S19 & 2008 Jun 10 & J. Cohen & 1.148 & \(7\times 1800\) & 210 & 51 \\ D21 & 2007 Sep 06 & J. Cohen & 1.148 & \(2\times 1800\) & 210 & 52 \\ & 2008 Jun 10 & J. Cohen & 1.148 & \(5\times 1800\) & & \\ S3880 & 2008 Jun 11 & J. Cohen & 1.148 & \(7\times 1800\) & 210 & 48 \\ S4038 & 2008 Jun 14 & J. Cohen & 1.148 & \(4\times 1800\) & 240 & 56 \\ & 2008 Jun 14 & J. Cohen & 1.148 & \(4\times 1800\) & & \\ S61 & 2008 Jul 06 & J. Cohen & 1.148 & \(9\times 1800\) & 577 & 65 \\ & 2011 Jun 04 & J. Cohen & 1.148 & \(6\times 1800+2200\) & & \\ & 2011 Jun 06 & J. 
Cohen & 1.148 & \(3\times 1800\) & & \\ S162 & 2008 Jul 07 & J. Cohen & 1.148 & \(8\times 1800\) & 448 & 58 \\ & 2011 Jun 04 & J. Cohen & 1.148 & \(1652+2\times 1800\) & & \\ & 2011 Jun 06 & J. Cohen & 1.148 & \(4\times 1800\) & & \\ \hline \end{tabular} \end{table} Table 2: (continued)

### New Spectrum

One star, X-20, drew our attention for what appeared to be an unusually large potassium abundance (see Section 5.3). We asked members of the California Planet Survey (Howard et al., 2010) to observe X-20 in early 2021. A. Howard and H. Isaacson observed the star on 2021 March 26. They reduced the spectrum following the procedure described by Howard et al. (2010).

### Preparation of the Spectra

The KOA provides extracted spectra for each echelle order of each exposure. The spectra are in the heliocentric reference frame. Most stars had multiple exposures. In order to make the spectra suitable for measuring equivalent widths (EWs; Section 4.2), we combined all of the echelle orders and all of the exposures for each star into a single one-dimensional spectrum. Stacking all of the component spectra required that we re-bin them onto a common wavelength array. We used a logarithmically spaced wavelength array that encompassed the minimum and maximum wavelengths of all of the individual exposures. The bin size was chosen to be as fine as the finest wavelength bin in all of the individual orders. This ensured that we did not lose any information during the resampling.

#### 2.3.1 Continuum normalization

The KOA provides spectra in units of counts divided by the flat field. The spectra are not flux-calibrated. The flux of one echelle order is not necessarily continuous with the flux in adjacent orders. Therefore, we continuum-normalized each echelle order within each exposure.

Continuum normalization of stellar spectra is notoriously sensitive to S/N and the depths of absorption lines. One approach to normalization is to exclude absorption lines from the continuum determination by sigma clipping, but that approach is especially sensitive to S/N. Instead, we modeled the absorption lines with a synthetic spectrum. We synthesized a spectrum using the LTE code MOOG (Sneden, 1973; Sneden et al., 2012). We used ATLAS9 model atmospheres (Kurucz, 1993; Kirby, 2011) with the effective temperatures and surface gravities determined in Section 3. We assumed a metallicity of \(\mathrm{[M/H]}=-2.4\) and an \(\alpha\) enhancement of \(\mathrm{[\alpha/M]}=+0.3\). The microturbulent velocity was calculated from an empirical, linear relation with surface gravity (Kirby et al., 2009).

The line list was compiled from several sources. We used the August 2021 version of Linemake1 (Placco et al., 2021) for the spectral range 3100-4100 A. Linemake is a compilation of atomic and molecular lines from a variety of references, which are listed on the Linemake website. We used all of the atomic and molecular lines available in Linemake. We used the line lists of Kirby et al. (2008, 2015) and Escala et al. (2019) for the spectral range 4100-9100 A. These lists were based on large line databases. Some oscillator strengths were adjusted to match the spectra of the Sun and Arcturus.

Footnote 1: [https://github.com/vmplacco/linemake](https://github.com/vmplacco/linemake)

For each echelle order, we divided the observed spectrum by the synthetic spectrum. MOOG synthesizes spectra that are already continuum-normalized. Therefore, the quotient is the observed spectrum with the absorption lines divided out. We fit a spline to this quotient. We used a breakpoint spacing of 500 pixels, and we performed symmetric 1.5\(\sigma\) clipping. This spline fit is the continuum. We divided the original observed echelle order by this continuum.
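A minimal sketch of this normalization step, assuming NumPy/SciPy and a strictly increasing wavelength array, might look as follows; the breakpoint handling here is simplified relative to the actual pipeline.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def continuum_normalize(wave, flux, synth, spacing=500, nclip=1.5, niter=5):
    """Divide one echelle order by a synthetic spectrum, spline-fit the
    quotient with breakpoints every `spacing` pixels and symmetric
    sigma clipping, and return the normalized order."""
    quotient = flux / synth
    good = np.isfinite(quotient)
    for _ in range(niter):
        x, y = wave[good], quotient[good]
        knots = x[spacing:-1:spacing]            # interior breakpoints
        cont = LSQUnivariateSpline(x, y, knots)
        resid = quotient - cont(wave)
        sigma = np.std(resid[good])
        good &= np.abs(resid) < nclip * sigma    # symmetric 1.5-sigma clip
    return flux / cont(wave)
```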
#### 2.3.2 Spectral coaddition

We rebinned each continuum-normalized echelle order onto the common wavelength array using the x_specrebin function in J. X. Prochaska's XIDL software library2. We coadded all of the individual echelle orders with inverse variance weighting. We repeated this process for all of the exposures of each star.

Footnote 2: [https://github.com/profxj/xidl](https://github.com/profxj/xidl)

The KOA provides vacuum wavelengths. We converted the wavelength arrays to their air equivalents.

Figure 1: The _Gaia_ CMD for M92. Colors and magnitudes are corrected for reddening and extinction. The right axis gives the absolute \(G_{0}\) magnitude. The color scale gives the effective temperature (Section 3). The sub-giants are all hotter than 5420 K, and they are shown with the darkest shade of purple to give the RGB more color range. The symbol shape depicts the evolutionary state of the star, which is determined based on the location in the CMD.

The slit widths for the individual exposures were either 0.861" or 1.148" (see Table 2). Some stars were observed with a combination of those two slit widths. In principle, stacking spectra of different slit widths leads to a superposition of spectra at different spectral resolutions. In practice, the spectral resolution depends on both the slit width and the seeing. We did not attempt to match the resolution of the individual spectra before stacking. Therefore, the line spread functions are not simple Gaussians. We revisit this issue in our discussion of the measurement of EWs (Section 4.2).

#### 2.3.3 Radial velocity determination

We needed to measure the radial velocity \(v_{r}\) of each star in order to put the spectra in the rest frame. We chose star III-65 as a reference star because of its high S/N, large wavelength range, and cool temperature, which gives strong absorption lines. We measured \(v_{r}=-118.964\) km s\({}^{-1}\) for III-65 by examination of the observed wavelengths of several narrow absorption lines. Then, we cross-correlated a 200 A region of the spectrum of each star with the spectrum of III-65. The region was 4200-4400 A, 4400-4600 A, or 4650-4850 A, depending on the wavelength coverage of the spectrum. Table 1 includes the resulting values of \(v_{r}\), which are all consistent with cluster membership. We shifted all the spectra into their rest frames accordingly.
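In outline, the cross-correlation reduces to shifting the template over a velocity grid and locating the correlation peak. The following is an illustrative Python sketch under our own simplifications (both spectra continuum-normalized on the same wavelength grid); the velocity grid and window are assumptions, not the paper's settings.

```python
import numpy as np

C_KMS = 299792.458
V_REF = -118.964      # adopted radial velocity of the III-65 template

def radial_velocity(wave, flux, ref_flux, v_grid=np.arange(-60.0, 60.0, 0.1)):
    """Cross-correlate a continuum-normalized spectrum against the
    III-65 template; returns the star's heliocentric v_r."""
    cc = np.empty_like(v_grid)
    for i, dv in enumerate(v_grid):
        # Doppler-shift the template by dv and resample onto `wave`
        shifted = np.interp(wave, wave * (1.0 + dv / C_KMS), ref_flux)
        cc[i] = np.sum((1.0 - flux) * (1.0 - shifted))
    return V_REF + v_grid[np.argmax(cc)]
```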
## 3 Stellar Parameters

We measured abundances of elements and molecules through a combination of EWs (Section 4.2) and spectral synthesis (Section 4.4). Both approaches require an estimation of stellar parameters, such as effective temperature (\(T_{\rm eff}\)), surface gravity (\(\log g\)), microturbulent velocity (\(v_{t}\)), metallicity ([M/H]), and \(\alpha\) enhancement ([\(\alpha\)/Fe]). We use the notation [M/H] to refer to the abundance of all elements heavier than He, assuming that their abundances are scaled to the solar composition (Asplund et al., 2009). The \(\alpha\) elements (O, Ne, Mg, Si, S, Ar, Ca, and Ti) are additionally scaled by [\(\alpha\)/Fe].

### Effective temperature

Stellar parameters can be derived from photometry or spectroscopy. For example, \(T_{\rm eff}\) can be measured from the photometric color of a star or by balancing the spectroscopic excitation equilibrium of one or more atomic species. The spectroscopic approach can be inaccurate when assuming LTE, especially for red giants (e.g., Frebel et al., 2013), which comprise the majority of our sample. Photometric colors can be tied to (semi-)direct measurements of temperature, which are not subject to non-LTE effects.

The infrared flux method (IRFM) is a semi-direct method of determining stellar temperatures. It uses infrared flux and a model atmosphere to infer the star's angular diameter. The flux and diameter together give \(T_{\rm eff}\). González Hernández & Bonifacio (2009) provided empirical calibrations between 2MASS colors and temperatures derived from the IRFM for several hundred stars. Mucciarelli et al. (2021) extended these calibrations to _Gaia_ colors and combinations of _Gaia_ and 2MASS. The \((\mathrm{BP}-K_{s})_{0}\) color has the highest precision (\(\sigma_{T_{\rm eff}}=52\) K) because it has the longest dynamic range of the different filter combinations. Therefore, we adopted the \((\mathrm{BP}-K_{s})_{0}\) color temperature for stars brighter than about \(G_{0}=16\). For fainter stars, the uncertainty in the \(K_{s}\) magnitude becomes the dominant error term in the temperature. Therefore, for fainter stars, we used the \((\mathrm{BP}-\mathrm{RP})_{0}\) color temperature (\(\sigma_{T_{\rm eff}}=83\) K). We used Mucciarelli et al.'s color-temperature relation for dwarfs for stars marked "sub-giant" in Figure 1 and the relation for giants otherwise. The relations require an assumption of metallicity, for which we used [M/H] = \(-2.41\), which is approximately the average [Fe II/H] of M92 that we eventually derived. Footnotes in Table 1 indicate which color was used to derive \(T_{\rm eff}\) for each star.

We estimated the uncertainty on \(T_{\rm eff}\) by applying standard (uncorrelated) error propagation to the color-temperature polynomial. The relations also have an intrinsic scatter, which we added in quadrature to the random error.

### Surface gravity

Like temperature, surface gravity can also be determined photometrically or spectroscopically. The spectroscopic method typically attempts to equalize the abundance derived from neutral and ionized absorption lines of the same element. However, overionization affects neutral absorption lines in an LTE analysis (Thevenin & Idiart, 1999). For stars of known distance, it is generally preferable to use photometric estimates of surface gravity. The surface gravity can be estimated from the Stefan-Boltzmann law in combination with the definition of surface gravity:

\[g=\frac{4\pi GM\sigma_{\rm SB}T_{\rm eff}^{4}}{L} \tag{1}\]

We assumed a mass of 0.75 \(M_{\odot}\), which is appropriate for ancient, metal-poor stars that have evolved past the main sequence turn-off. We calculated the luminosity from the extinction-corrected _Gaia_ \(G_{0}\) magnitude, the distance modulus \((m-M)_{0}=14.69\) (see Section 2.1), and bolometric corrections from Andrae et al. (2018).

We estimated the uncertainty on \(\log g\) using standard propagation of error. We assumed that the uncertainty on the stellar mass was 0.1 \(M_{\odot}\). We further assumed that the errors on \(M\), \(T_{\rm eff}\), and \(G_{0}\) are uncorrelated.

Table 3 gives the model atmosphere parameters for each star: \(T_{\rm eff}\), \(\log g\), and \(v_{t}\) (see Section 4.3). The table does not show [M/H] or [\(\alpha\)/Fe] because these values are the same for every star.
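For concreteness, Equation (1) translates into a short calculation. The sketch below is illustrative Python in cgs units; the bolometric correction `bc_g` is assumed to be supplied externally (e.g., interpolated from Andrae et al. 2018), and the example value is a placeholder.

```python
import numpy as np

G_CGS = 6.674e-8          # gravitational constant (cgs)
SIGMA_SB = 5.6704e-5      # Stefan-Boltzmann constant (cgs)
M_SUN = 1.989e33          # g
L_SUN = 3.828e33          # erg / s
MBOL_SUN = 4.74           # solar absolute bolometric magnitude

def photometric_logg(teff, g0, bc_g, dist_mod=14.69, mass=0.75):
    """log g from the Stefan-Boltzmann law (Equation 1)."""
    mbol = g0 - dist_mod + bc_g                       # absolute bolometric mag
    lum = L_SUN * 10.0 ** (-0.4 * (mbol - MBOL_SUN))  # luminosity in erg/s
    g = 4.0 * np.pi * G_CGS * mass * M_SUN * SIGMA_SB * teff**4 / lum
    return np.log10(g)

# e.g., photometric_logg(4392.0, 12.01, bc_g=-0.2) returns ~0.79 for III-65
# (bc_g here is an illustrative placeholder, not the Andrae et al. value).
```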
## 4 Abundance Measurements

### Line List

We used the same line list as Ji et al. (2020). The list is based mostly on Linemake (Placco et al., 2021) with updates as described by Ji et al. Some lines have hyperfine structure, which we also take from Linemake.

### Equivalent Widths

We developed a graphical utility called hiresspec to analyze each spectrum. The software fits Gaussian profiles to each absorption line in the line list using MPFIT (Markwardt, 2012). The user examines each fit and has the option to change the fit by altering the wavelength bounds considered by MPFIT or the placement of the continuum.

We gave special treatment to the two Mg b lines in the line list (5173 A and 5184 A). Because they often display obvious damping wings, we fit them with Voigt rather than Gaussian profiles. The EW for each line was calculated analytically from the amplitude and Doppler width (and Lorentzian width for the Voigt profile) determined in the fit. MPFIT also returns the \(1\sigma\) uncertainty on EW from the diagonal elements of the covariance matrix. We computed upper limits on EW for undetected lines. The upper limit was the strength of the Gaussian that was stronger than the observed spectrum by \(3\sigma\).

Table 4 gives the atomic data (wavelength, excitation potential, and oscillator strength) for each line. It also gives each line's EW measured for each star. Only ten absorption lines and two stars are shown in the manuscript. The online version of the table includes all of the absorption lines and all of the stars.

#### 4.2.1 Potassium

We measured potassium from the resonance line K I \(\lambda\)7699. This region of the spectrum lies in the red tail of the telluric A band.
The most prominent telluric \begin{table} \begin{tabular}{l c c c} \hline \hline \multicolumn{1}{c}{ Star} & \(T_{\rm eff}\) (K) & \(\log g\) (cm s\({}^{-1}\)) & \(v_{t}\) (km s\({}^{-1}\)) \\ \hline III-13 & \(4187\pm 51\) & \(0.48\pm 0.10\) & \(2.95\pm 0.21\) \\ VII-18 & \(4275\pm 50\) & \(0.59\pm 0.10\) & \(2.96\pm 0.15\) \\ X-49 & \(4301\pm 50\) & \(0.64\pm 0.10\) & \(2.58\pm 0.11\) \\ III-65 & \(4392\pm 51\) & \(0.79\pm 0.10\) & \(2.56\pm 0.10\) \\ VII-122 & \(4388\pm 51\) & \(0.79\pm 0.10\) & \(2.68\pm 0.10\) \\ II-53 & \(4399\pm 51\) & \(0.81\pm 0.10\) & \(2.67\pm 0.10\) \\ XII-8* & \(4530\pm 51\) & \(1.01\pm 0.10\) & \(2.85\pm 0.17\) \\ V-45 & \(4528\pm 52\) & \(1.03\pm 0.10\) & \(2.42\pm 0.06\) \\ XI-19 & \(4546\pm 51\) & \(1.07\pm 0.10\) & \(2.54\pm 0.07\) \\ XI-80 & \(4562\pm 51\) & \(1.12\pm 0.10\) & \(2.52\pm 0.10\) \\ II-70* & \(4636\pm 52\) & \(1.20\pm 0.10\) & \(2.47\pm 0.09\) \\ IV-94* & \(4652\pm 52\) & \(1.22\pm 0.10\) & \(2.62\pm 0.10\) \\ I-67 & \(4675\pm 51\) & \(1.33\pm 0.10\) & \(2.37\pm 0.09\) \\ III-82 & \(4648\pm 52\) & \(1.31\pm 0.10\) & \(2.35\pm 0.08\) \\ IV-10 & \(4673\pm 51\) & \(1.37\pm 0.10\) & \(2.38\pm 0.08\) \\ XII-34* & \(4793\pm 52\) & \(1.43\pm 0.10\) & \(2.57\pm 0.09\) \\ IV-79 & \(4690\pm 52\) & \(1.39\pm 0.10\) & \(2.30\pm 0.08\) \\ IX-13 & \(4854\pm 53\) & \(1.70\pm 0.10\) & \(2.23\pm 0.09\) \\ VIII-24* & \(5043\pm 55\) & \(1.83\pm 0.10\) & \(2.34\pm 0.12\) \\ X-20 & \(5185\pm 63\) & \(2.50\pm 0.10\) & \(1.87\pm 0.09\) \\ S2710 & \(5191\pm 60\) & \(2.51\pm 0.10\) & \(1.88\pm 0.08\) \\ VI-90 & \(5206\pm 65\) & \(2.55\pm 0.10\) & \(1.90\pm 0.11\) \\ S2265 & \(5269\pm 73\) & \(2.73\pm 0.10\) & \(1.78\pm 0.09\) \\ VIII-45 & \(5239\pm 72\) & \(2.74\pm 0.10\) & \(1.85\pm 0.10\) \\ VII-28 & \(5276\pm 83\) & \(2.78\pm 0.10\) & \(1.78\pm 0.09\) \\ G17181\_0638 & \(5328\pm 84\) & \(2.96\pm 0.10\) & \(2.01\pm 0.22\) \\ C17333\_0832 & \(5373\pm 83\) & \(3.17\pm 0.10\) & \(1.67\pm 0.10\) \\ S3108 & \(5413\pm 93\) & \(3.20\pm 0.10\) & \(1.89\pm 0.11\) \\ S652 & \(5322\pm 83\) & \(3.17\pm 0.10\) & \(1.84\pm 0.18\) \\ S19 & \(5793\pm 66\) & \(3.66\pm 0.10\) & \(1.81\pm 0.14\) \\ D21 & \(5954\pm 74\) & \(3.73\pm 0.10\) & \(1.51\pm 0.12\) \\ S3880 & \(5944\pm 77\) & \(3.76\pm 0.10\) & \(1.63\pm 0.19\) \\ S4038 & \(6069\pm 85\) & \(3.80\pm 0.10\) & \(1.86\pm 0.33\) \\ S61 & \(6308\pm 92\) & \(4.00\pm 0.10\) & \(1.25\pm 0.39\) \\ S162 & \(6573\pm 98\) & \(4.08\pm 0.10\) & \(2.00\pm 0.37\) \\ \hline \end{tabular} \end{table} Table 3: Model Atmosphere Parameters Figure 2: Spectra around the resonance line K I \(\lambda\)7699. The stars shown have effective temperatures within 32 K of each other. The dotted (solid) spectra are uncorrected (corrected) for telluric O\({}_{2}\) absorption. feature that affects the potassium line is an electronic transition of O\({}_{2}\) at a vacuum wavelength of 7697.96 A. This line arises from the \({}^{P}Q\) branch in the \(0-0\) transition of the \(b\,^{1}\Sigma_{g}^{+}\to X\,^{3}\Sigma_{g}^{-}\) series of O\({}_{2}\) in rotational level \(K=31\)(Babcock & Herzberg, 1948). Fortuitously, the line is always accompanied by the \(P\) branch of the same transition at 7698.98 A, a wavelength free of stellar features. The strength ratio of the lines is a constant because they arise from the same electronic energy level, and the lines are weak (typically 30 mA) and therefore on the linear portion of the curve of growth. The strength ratio is the ratio of their oscillator strengths (\(gf\)). 
We computed the oscillator strengths from their Einstein coefficients and level degeneracies given by the HITRAN molecular database (Yu et al., 2014; Gordon et al., 2022): \(gf(7697.96)/gf(7698.99)=0.959\). We decontaminated the K i line by subtracting the bluer of the O\({}_{2}\) lines. First, we modeled the redder O\({}_{2}\) line, which does not overlap stellar features, as a Gaussian. The Gaussian width is generally narrower than the width of the K i line because the broadening effects in the Earth's atmosphere are less than in the star. We fit a Gaussian centered at the observed (geocentric) wavelength, but we allowed the central wavelength, strength, and FWHM to vary. Then, we created a model of the bluer O\({}_{2}\) line by shifting the Gaussian in wavelength and multiplying the strength by 0.959. We subtracted both Gaussian models from the observed spectrum. Figure 2 shows three examples of the potassium spectral region. The dotted spectra are shown before O\({}_{2}\) subtraction, and the solid spectra are subtracted. We discuss potassium abundances further in Section 5.3. ### Determination of Abundances We used ATLAS9(Kurucz, 1993) model atmospheres in conjunction with MOOG(Sneden, 1973) to compute an abundance for each EW. We computed the model atmosphere by interpolating between the grid points of Kirby's (2011) grid of ATLAS9 models. We used \(T_{\rm eff}\) and \(\log g\) as determined in Sections 3.1 and 3.2. We started with an initial assumption of \(v_{t}\) based on a linear relation with \(\log g\)(Kirby et al., 2009). We assumed [M/H] \(=-2.41\) and [\(\alpha\)/Fe] \(=+0.41\). These values are consistent with past abundance estimates for M92 (Sneden et al., 1991) as well as our estimates. Because M92 has been shown repeatedly to have no metallicity variation (Sneden et al., 1991; Shetrone, 1996; Cohen & McCarthy, 1997; Sneden et al., 2000; Shetrone et al., 2001; Cohen, 2011), we kept [M/H] fixed at \(-2.41\).3 Some of the \(\alpha\) elements, especially Mg, are known to have large variations, but we also kept [\(\alpha\)/Fe] fixed to keep the analysis as consistent as possible between stars. Footnote 3: King et al. (1998) found that sub-giants in M92 have higher Fe abundances than giants. However, our measurements of heavy-element abundances in sub-giant stars are consistent with those of giants in M92. We calculated the abundance of each absorption line with the 2021 update to MOOG4. This version includes Sobeck et al.'s (2011) update, which separates the source function from the Planck function to allow Rayleigh scattering to be a significant source of opacity, which is important at \(\lambda<4500\) A in metal-poor giants, such as the majority of our M92 sample. 
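The O\({}_{2}\) decontamination described in Section 4.2.1 reduces to a fit-shift-scale-subtract recipe: fit the stellar-feature-free line, shift and scale it by the fixed \(gf\) ratio, and remove both components. The following is a minimal sketch in Python; the fitting window and initial guesses are our own assumptions, not the values used in the analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

GF_RATIO = 0.959                  # gf(7697.96) / gf(7698.98), from HITRAN
DLAM = 7697.96 - 7698.98          # blueward shift between the two O2 lines

def gaussian(lam, amp, mu, fwhm):
    sig = fwhm / 2.3548
    return amp * np.exp(-0.5 * ((lam - mu) / sig) ** 2)

def subtract_o2(wave, flux, mu_guess):
    """Fit the stellar-feature-free O2 line near its geocentric
    wavelength `mu_guess`, then remove it and its scaled, shifted twin
    from the continuum-normalized spectrum."""
    sel = np.abs(wave - mu_guess) < 0.6
    popt, _ = curve_fit(gaussian, wave[sel], 1.0 - flux[sel],
                        p0=[0.03, mu_guess, 0.15])
    amp, mu, fwhm = popt
    depth = (gaussian(wave, amp, mu, fwhm)
             + gaussian(wave, GF_RATIO * amp, mu + DLAM, fwhm))
    return flux + depth           # fill in the telluric absorption
```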
\begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline & & & & \multicolumn{3}{c}{III-13} & \multicolumn{3}{c}{VII-18} \\ \cline{5-10} Species & Wavelength & Excitation Potential & \(\log gf\) & EW & abundance & weight & EW & abundance & weight \\ & (Å) & (eV) & & (mÅ) & & & (mÅ) & & \\ \hline O i & 6363.78 & 0.020 & \(-\)10.190 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(<1.5\) & \(<6.15\) & \(\cdots\) \\ Na i & 5682.63 & 2.100 & \(-\)0.710 & 24.2 & 4.25 & 75.25 & 18.9 & 4.19 & 38.05 \\ Na i & 5688.20 & 2.100 & \(-\)0.410 & 44.1 & 4.27 & 70.81 & 31.4 & 4.14 & 38.06 \\ Na i & 5889.95 & 0.000 & 0.110 & \(\cdots\) & \(\cdots\) & \(\cdots\) & 408.0 & 4.48 & 19.66 \\ Na i & 5895.92 & 0.000 & \(-\)0.190 & \(\cdots\) & \(\cdots\) & \(\cdots\) & 341.9 & 4.33 & 7.61 \\ Mg i & 3986.75 & 4.350 & \(-\)1.060 & 65.2 & 5.38 & 8.84 & \(\cdots\) & \(\cdots\) & \(\cdots\) \\ Mg i & 4167.27 & 4.350 & \(-\)0.740 & 120.5 & 6.05 & 7.07 & 74.8 & 5.39 & 20.69 \\ Mg i & 4571.10 & 0.000 & \(-\)5.620 & 166.7 & 5.29 & 1.50 & 134.4 & 5.24 & 1.53 \\ Mg i & 4702.99 & 4.350 & \(-\)0.440 & 150.6 & 5.77 & 6.85 & 91.3 & 5.06 & 20.19 \\ Mg i & 5172.68 & 2.710 & \(-\)0.390 & 347.0 & 5.36 & 6.56 & 280.0 & 5.06 & 10.40 \\ \hline \end{tabular} Note. – (This table is available in its entirety in a machine-readable form in the online journal. A portion is shown here for guidance regarding its form and content.) \end{table} Table 4: Line List with Equivalent Widths and Abundances

Table 4 includes the abundance for each line. The abundances are corrected for non-LTE effects (Section 4.5) where appropriate. We also computed abundances from upper limits on EWs. For atomic species with no detected lines and more than one upper limit, we took the abundance from the absorption line with the most stringent abundance limit. For upper limits, Table 4 includes only the upper limit used in computing the limit on abundance.

Some absorption lines display isotopic splitting, which we took from Linemake. For the transition metals, we used the solar system isotope distribution (Asplund et al., 2009). For neutron-capture elements, we used the solar \(r\)-process isotope distribution (Simmerer et al., 2004; Sneden et al., 2008).

We refined the initial guess at \(v_{t}\) by minimizing the trend of the abundance of Fe i lines with reduced width (\(\mathrm{RW}=\mathrm{EW}/\lambda\)). We considered only lines with \(\log\mathrm{RW}<-4.5\) because strong lines tend to have damping wings, which may not be well represented by our Gaussian fits. Furthermore, very strong lines (like very weak lines) are not very sensitive to \(v_{t}\) and are therefore not very useful at determining \(v_{t}\). We fit a line with least-squares regression to abundance vs. \(\log\mathrm{RW}\). We used MPFIT to minimize the slope of this line by varying \(v_{t}\). Table 3 gives the final values of \(v_{t}\) for each star.

We also propagated the uncertainty on the EW (described in Section 4.2) to the uncertainty on the abundance of each line. We ran MOOG twice: once with the EWs perturbed upward by the EW uncertainty and another time with the EWs perturbed downward. We took the propagated error on abundance (\(e_{i}\)) on absorption line \(i\) as the average of the absolute values of the changes in abundance from the upward and downward perturbations.

Table 5 gives the resulting value of [Fe/H] measured from Fe i lines with \(\log\mathrm{RW}<-4.5\). The table also gives the slope of the abundances measured from Fe i lines vs. \(\log\mathrm{RW}\). This slope is a diagnosis of microturbulence, as discussed above. The slopes are essentially zero because \(v_{t}\) was tuned to minimize these slopes. The third column of the table gives the slope of abundance vs. excitation potential (EP, in units of eV). This slope is a diagnosis of \(T_{\mathrm{eff}}\). Positive (negative) slopes indicate that the value of \(T_{\mathrm{eff}}\) is too low (high). Some deviation from zero is expected because of the deficiencies in the assumption of LTE. The last column of the table is the difference in the mean iron abundance between neutral and singly ionized Fe lines. This difference is a diagnosis of \(\log g\). Positive (negative) differences indicate that \(\log g\) is too low (high). As with \(T_{\mathrm{eff}}\), some deviation from zero is expected in an LTE analysis, especially due to the non-LTE overionization effect in cool giants (Thevenin & Idiart, 1999).
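The microturbulence tuning described above amounts to a one-dimensional minimization. A schematic sketch in Python follows; `abundances_at` is a hypothetical stand-in for the MOOG call that re-derives per-line Fe i abundances at a trial \(v_{t}\), and the bounds are our own assumption.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def tune_vt(abundances_at, log_rw, bounds=(0.5, 3.5)):
    """Choose v_t so that the least-squares slope of Fe I abundance
    vs. log RW (using only lines with log RW < -4.5) is near zero."""
    weak = log_rw < -4.5
    def abs_slope(vt):
        ab = abundances_at(vt)                     # one value per line
        return abs(np.polyfit(log_rw[weak], ab[weak], 1)[0])
    return minimize_scalar(abs_slope, bounds=bounds, method="bounded").x
```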
The slope of abundance vs. \(\log\mathrm{RW}\) is a diagnosis of microturbulence, as discussed above. The slopes are essentially zero because \(v_{t}\) was tuned to minimize these slopes. The third column of the table gives the slope of abundance vs. excitation potential (EP, in units of eV). This slope is a diagnosis of \(T_{\mathrm{eff}}\). Positive (negative) slopes indicate that the value of \(T_{\mathrm{eff}}\) is too low (high). Some deviation from zero is expected because of the deficiencies in the assumption of LTE. The last column of the table is the difference in the mean iron abundance between neutral and singly ionized Fe lines. This difference is a diagnosis of \(\log g\). Positive (negative) differences indicate that \(\log g\) is too low (high). As with \(T_{\mathrm{eff}}\), some deviation from zero is expected in an LTE analysis, especially due to the non-LTE overionization effect in cool giants (Thevenin & Idiart, 1999).

### Synthesis

Some elements are better suited to abundance measurements by spectral synthesis than by EWs. This is especially true for measurements from molecular features (like CH), intrinsically broad atomic features (like Li i \(\lambda\)6707), and features contaminated by blends. We computed abundances of Li and C by spectral synthesis.

We used linemake to create line lists for spectral synthesis. For Li, the wavelength range was 6704.7-6710.9 Å, and the Li was purely \({}^{7}\)Li (no \({}^{6}\)Li). For C, the wavelength range was 4273.9-4333.0 Å. The \({}^{12}\)C/\({}^{13}\)C ratio changes with evolution along the RGB. We used \(\log g\) as a proxy for evolution, and we used the following prescription (Keller et al., 2001; Kirby et al., 2015) for the isotope ratio:

\[{}^{12}\mathrm{C}/{}^{13}\mathrm{C}=\begin{cases}50&\mathrm{if}\ \log g>2.7\\ 63\log g-120&\mathrm{if}\ 2.0<\log g\leq 2.7\\ 6&\mathrm{if}\ \log g\leq 2.0\end{cases} \tag{2}\]

We used MPFIT to find the best-fitting abundance. We then calculated a continuum correction. For Li, the continuum correction was the median of the residual (the observed spectrum divided by the best-fit spectrum). For the CH feature, which spans a longer wavelength range, the continuum correction was a spline with a breakpoint spacing of 10 Å and sigma clipping at \(\pm 2\sigma\). We divided the observed spectrum by the continuum correction and then re-measured the abundance. We iterated the continuum correction until the abundance changed by less than 0.001 between iterations.
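The iteration loop is simple in outline. The sketch below passes the synthesis and the abundance optimizer in as callables, because both are stand-ins for the MOOG synthesis and MPFIT fit used in the text (neither exposes a Python API like this); the Li case with a scalar median correction is shown, whereas the CH feature would use a spline.

```python
import numpy as np

def measure_with_continuum(obs, synthesize, fit_abundance, tol=1e-3, max_iter=20):
    """Iterate the abundance fit and continuum correction until convergence.

    synthesize(abundance) -> model flux; fit_abundance(flux) -> best abundance.
    Both are hypothetical stand-ins for MOOG and the MPFIT-based optimizer.
    """
    flux = np.asarray(obs, dtype=float).copy()
    previous = np.inf
    best = None
    for _ in range(max_iter):
        best = fit_abundance(flux)
        # Continuum correction: median of observed / best-fit (Li case).
        cont = np.median(flux / synthesize(best))
        flux /= cont
        if abs(best - previous) < tol:  # converged to <0.001 dex
            break
        previous = best
    return best
```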
Figure 3 shows the syntheses for the CH feature for stars of various effective temperatures.

### Non-LTE Corrections

We considered non-LTE corrections for most lines in the line list. These corrections are so large for some elements, like Al and K, that the corrections are required for any meaningful interpretation of the abundances. The corrections for some elements, like Mg, are moderate. Finally, the corrections for some elements, like O (when measured from the forbidden lines), are negligible.

We measured Li i abundances from the marginally resolved doublet at 6708 Å. The line shape and a blend with a nearby Fe i line required that we synthesize the absorption line rather than fit a simple Gaussian (see Section 4.4). Lind et al. (2009) computed non-LTE corrections to this feature, and we applied them to the abundance computed from the synthesis.

We considered only the [O i] \(\lambda\lambda\)6300,6364 forbidden doublet. The non-LTE corrections are negligibly small for both lines (Amarsi et al., 2015; Bergemann et al., 2021). The main uncertainty arises from the oscillator strengths (e.g., Storey & Zeippen, 2000).

We measured Na i abundances from the Na D resonance doublet and another doublet at 5683 and 5688 Å. We applied Lind et al.'s (2011) corrections to each of these lines. For Mg i, we applied the corrections computed by Bergemann et al. (2017). We used the online interface provided by the MPIA database of NLTE corrections5. The interface interpolates NLTE corrections from a pre-computed grid based on stellar parameters. We chose the corrections computed with the 1-D MARCS spherical model atmospheres (Gustafsson et al., 2008). The correction grids for any other options for model atmospheres did not include the full range of stellar parameters spanned by our sample. The typical correction was +0.2 dex, in the sense that the non-LTE abundance is higher than the LTE abundance.

Footnote 5: https://nlte.mpia.de/

\begin{table}
\begin{tabular}{l c c c c}
\hline \hline
Star & [Fe I/H] & \(d\epsilon/d\log{\rm RW}\) & \(d\epsilon/d\)(EP) & \(\epsilon\)(Fe I) \(-\)\(\epsilon\)(Fe II) \\
\hline
III-13 & \(-\)2.60 & \(+\)0.00 & \(-\)0.02 & \(-\)0.11 \\
VII-18 & \(-\)2.65 & \(-\)0.00 & \(+\)0.01 & \(-\)0.22 \\
X-49 & \(-\)2.56 & \(+\)0.00 & \(+\)0.00 & \(-\)0.10 \\
III-65 & \(-\)2.55 & \(+\)0.00 & \(-\)0.03 & \(-\)0.10 \\
VII-122 & \(-\)2.60 & \(-\)0.00 & \(+\)0.00 & \(-\)0.12 \\
II-53 & \(-\)2.59 & \(+\)0.00 & \(-\)0.02 & \(-\)0.10 \\
XII-8* & \(-\)2.71 & \(-\)0.00 & \(-\)0.05 & \(-\)0.14 \\
V-45 & \(-\)2.61 & \(+\)0.01 & \(-\)0.05 & \(-\)0.07 \\
XI-19 & \(-\)2.49 & \(-\)0.00 & \(-\)0.04 & \(-\)0.04 \\
XI-80 & \(-\)2.52 & \(-\)0.00 & \(-\)0.03 & \(-\)0.05 \\
II-70* & \(-\)2.58 & \(-\)0.00 & \(-\)0.04 & \(-\)0.01 \\
IV-94* & \(-\)2.62 & \(+\)0.00 & \(-\)0.03 & \(-\)0.06 \\
I-67 & \(-\)2.65 & \(+\)0.00 & \(-\)0.04 & \(-\)0.05 \\
III-82 & \(-\)2.53 & \(+\)0.00 & \(-\)0.04 & \(-\)0.07 \\
IV-10 & \(-\)2.44 & \(+\)0.00 & \(-\)0.03 & \(-\)0.00 \\
XII-34* & \(-\)2.52 & \(-\)0.00 & \(-\)0.05 & \(-\)0.04 \\
IV-79 & \(-\)2.54 & \(+\)0.00 & \(-\)0.04 & \(-\)0.04 \\
IX-13 & \(-\)2.41 & \(+\)0.00 & \(-\)0.04 & \(+\)0.01 \\
VIII-24* & \(-\)2.47 & \(-\)0.00 & \(-\)0.05 & \(-\)0.03 \\
X-20 & \(-\)2.36 & \(-\)0.00 & \(-\)0.05 & \(+\)0.06 \\
S2710 & \(-\)2.36 & \(+\)0.00 & \(-\)0.05 & \(+\)0.05 \\
VI-90 & \(-\)2.43 & \(+\)0.00 & \(-\)0.05 & \(+\)0.01 \\
S2265 & \(-\)2.42 & \(+\)0.00 & \(-\)0.06 & \(+\)0.01 \\
VIII-45 & \(-\)2.49 & \(-\)0.00 & \(-\)0.04 & \(-\)0.03 \\
VII-28 & \(-\)2.38 & \(+\)0.00 & \(-\)0.05 & \(-\)0.01 \\
G17181\_0638 & \(-\)2.50 & \(+\)0.00 & \(-\)0.05 & \(+\)0.02 \\
C17333\_0832 & \(-\)2.41 & \(+\)0.00 & \(-\)0.04 & \(-\)0.03 \\
S3108 & \(-\)2.47 & \(-\)0.00 & \(-\)0.05 & \(-\)0.00 \\
S652 & \(-\)2.46 & \(+\)0.00 & \(-\)0.04 & \(-\)0.08 \\
S19 & \(-\)2.43 & \(+\)0.00 & \(-\)0.06 & \(-\)0.01 \\
D21 & \(-\)2.29 & \(-\)0.00 & \(-\)0.07 & \(+\)0.03 \\
S3880 & \(-\)2.46 & \(+\)0.00 & \(-\)0.02 & \(-\)0.13 \\
S4038 & \(-\)2.45 & \(+\)0.00 & \(-\)0.00 & \(-\)0.03 \\
S61 & \(-\)2.60 & \(-\)0.00 & \(-\)0.05 & \(-\)0.11 \\
S162 & \(-\)2.33 & \(-\)0.00 & \(-\)0.07 & \(+\)0.07 \\
\hline
\end{tabular}
\end{table}
Table 5: Abundance Trends

Nordlander & Lind (2017) computed 1-D NLTE corrections to Al i lines, including both lines used in our study. We read the corrections for Al i \(\lambda\)3962 from their Figure 13 based on each star's \(T_{\rm eff}\) and \(\log g\).
They did not provide a similar plot for Al i \(\lambda\)3944, but the corrections for both lines are similar. Therefore, we applied the corrections for the redder line to the bluer line. Corrections ranged from +0.5 dex for the base of the RGB to +1.1 dex for the coolest giants.

The MPIA database showed that corrections for Si i (Bergemann et al., 2013) are negligible. Furthermore, the corrections to Ca i are very small (Mashonkina et al., 2007). We applied no corrections to these elements.

We used the K i \(\lambda\)7699 corrections of Reggiani et al. (2019). We read the corrections appropriate for each star's \(T_{\rm eff}\) and \(\log g\) from their Figure 12. Although the non-LTE correction for this line is very large in warm, metal-rich giants, the corrections at low metallicity are typically \(-0.2\) dex.

Non-LTE corrections for the transition metals are complicated. For example, Bergemann (2011) gave Ti i corrections in the range +0.4 to +1.0 dex. Ti ii corrections are typically less than +0.1 dex, as would be expected for ionized lines in red giants. As a trial, we applied these corrections to the Ti lines in our study. For stars at the tip of the RGB, the difference between Ti i and Ti ii abundances decreased, and the slope of Ti i abundances with excitation potential approached zero. However, both of these diagnostics worsened after applying the non-LTE correction for the majority of the stars in our sample, with the difference between neutral and ionized abundances as large as +0.7 dex. Similarly, we applied Bergemann et al.'s (2012) corrections to Fe i and Fe ii lines. The corrections are typically +0.1 dex, much smaller than for Ti. The corrections did not improve the slope of Fe i abundances with excitation potential, and they exacerbated the differences between Fe i and Fe ii abundances. Therefore, we chose not to apply non-LTE corrections to transition metals. Importantly, our line list excludes commonly used Mn i resonance lines because they are particularly affected by non-LTE corrections (Bergemann et al., 2019). All of the Mn transitions used here have excitation potentials of at least 2 eV.

We also considered non-LTE corrections for neutron-capture elements. For example, Bergemann et al. (2012) computed corrections for Sr i and Sr ii, and Korotin et al. (2015) computed corrections for Ba ii. The typical corrections for ionized lines in our study are less than 0.1 dex. Therefore, we applied no corrections to ionized lines of neutron-capture elements. However, Sr i \(\lambda\)4607 requires corrections of about 0.4 dex, and Bergemann et al. (2012) did not compute corrections for \(\log g<2.2\). As a result, we omitted Sr i from our study. All of the Sr abundances reported here are based on Sr ii.

### Additional Luminosity-Based Corrections

The abundances of some elements display a trend with the stellar luminosity (or surface gravity) even after the application of non-LTE corrections. Figure 4 shows these trends. Some elements, especially carbon, are expected to show a trend with luminosity. We discuss the carbon trend with luminosity in Section 5.1. Trends with luminosity for heavier elements could result from atomic diffusion, including gravitational settling and radiative acceleration (or levitation). The effect could be strong for very metal-poor globular clusters, like M92 (Richard et al., 2002). However, it is expected to be strongest at the main sequence turn-off.

Figure 3: Syntheses of a CH feature (red) compared to the observed spectra (black). These spectra sample three different stellar luminosities in M92.
Figure 4: Trends of abundance with luminosity (bottom axis) or surface gravity (top axis). The red lines show fits to the trend, as described in Section 4.6. For the remainder of this work, the abundances are de-trended such that the correction at \(M_{G,0}=0\) (dotted gray line) is zero. Elements without red trend lines were not corrected. The dashed blue lines show the median abundances before the luminosity correction. The polynomial terms of the trend are shown in parentheses.

Giant stars have deep convective envelopes that re-mix the abundances to levels close to their initial surface values. Observations of metal-poor globular clusters show only small differences in the abundances of heavy metals, like Fe, between the main sequence turn-off and the giant branch (Cohen & Melendez, 2005; Korn et al., 2007; Lind et al., 2008). As a result, we do not expect to see any trends in abundance for elements heavier than oxygen for the giants in our sample.

Nonetheless, we do observe abundance trends with luminosity. We interpret these trends as artifacts of our measurements. For example, they could be uncorrected non-LTE effects. The main goal of our study is to quantify star-to-star abundance variations. Therefore, we take the conservative approach of removing the trend with luminosity. We do not fit the luminosity trend of elements that are known to vary in globular clusters: Li, C, O, Na, Mg, Al, and K. For other elements, we fit the abundance of each element as a function of luminosity. For most elements, the function is a quadratic. For elements measured over a limited range of luminosity (Ce, Nd, and Sm), the fit is linear. The luminosity correction is a subtraction of this line or quadratic, normalized such that the correction is zero at \(M_{G,0}=0\). We used LEO-Py (Feldmann, 2019) to perform a "censored" fit, which takes into account upper limits. Figure 4 shows the fitted luminosity trends. We applied the luminosity correction only to elements with red trend lines in the figure. Elements not shown in the figure were not corrected.
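The correction amounts to subtracting a polynomial anchored at \(M_{G,0}=0\). A minimal, uncensored version is sketched below; the text's LEO-Py fit additionally handles upper limits, which this sketch ignores.

```python
import numpy as np

def detrend(mag, abund, order=2):
    """Fit and remove a polynomial luminosity trend, normalized so the
    correction vanishes at M_G0 = 0 (censoring/upper limits ignored here)."""
    coeffs = np.polyfit(mag, abund, order)          # quadratic by default
    trend = np.polyval(coeffs, mag) - np.polyval(coeffs, 0.0)
    return abund - trend
```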
### Error Analysis

Error analysis in high-resolution spectroscopy can sometimes be simplistic or arbitrary. For species with multiple absorption lines, one common approach is to report the standard deviation divided by the square root of the number of lines. For species with just one or a few lines, sometimes arbitrary uncertainties--such as 0.1 or 0.2 dex--are reported. Some of this guesswork is unavoidable because the uncertainties on abundances inherit difficult-to-quantify uncertainties on predecessor variables, such as oscillator strengths, deficiencies in the model atmosphere, or non-LTE corrections.

McWilliam et al. (1995, 2013) improved on the standard error analysis by accounting for uncertainties in atmospheric parameters, including covariance from correlated parameters, like \(T_{\rm eff}\) and \(\log g\). Accounting for covariance lessens the severity of the spurious correlations between abundance ratios that could be introduced by errors in atmospheric parameters (Roederer & Thompson, 2015). Ji et al. (2020) extended this framework to the abundance measurements themselves (the mean abundance from all the lines of a single species) rather than merely their uncertainties. Ji et al. documented their approach in their Appendix B, which we followed in this work.

The framework requires that we know the correlations between the atmospheric parameters. The correlation between \(T_{\rm eff}\) and \(\log g\) is a direct consequence of the way in which surface gravity is computed. Specifically, \(g\propto T_{\rm eff}^{4}\) (Equation 1). We computed the Pearson correlation coefficients between the three unique pairs of \(T_{\rm eff}\), \(\log g\), and \(v_{t}\) from their measured values (Table 3). The value of [M/H] was identical for all stars in our sample, so we copied the correlation coefficients that involve [M/H] from Ji et al. (2020).

Following Appendix B of Ji et al. (2020), the correlation matrix (\(\rho\)) is multiplied into a vector (\(\delta\)) containing the uncertainties in the atmospheric parameters for the star. Table 3 gives the uncertainties in \(T_{\rm eff}\), \(\log g\), and \(v_{t}\). We used 0.05 dex as the uncertainty in [M/H]. This number is the weighted standard deviation of the Fe ii abundances. The matrix \(\rho\) and the vector \(\delta\) can be used together with the \(e_{i}\) (the abundance uncertainties propagated from the EW uncertainties) to compute a matrix called \(\widetilde{\Sigma}\):

\[\widetilde{\Sigma}={\rm diag}(e_{i}^{2}+s_{\rm X}^{2})+\delta\rho\delta^{T} \tag{3}\]

The variable \(s_{\rm X}\) is a systematic uncertainty applied to every line of a given species. We calculated the \(s_{\rm X}\) values according to the procedure described by Ji et al. As a modification to that procedure, we forced the minimum value of \(s_{\rm X}\) to be 0.1. This minimum value approximates the hard-to-estimate systematic errors we discussed at the beginning of Section 4.7. The difference in our procedure is that we apply the minimum systematic error to each line rather than the final abundance measurements, which have no forced minimum.

This framework permits negative weights. In effect, a negative weight on an absorption line pushes the average abundance _away_ from the abundance measured for that line. Negative weights are a result of the assumption that line strengths behave linearly with changes in abundance and in stellar parameters. This assumption is not necessarily true for strong lines. We resolve this conundrum by iteratively omitting lines with negative weights, then recomputing \(\widetilde{\Sigma}\) and the new weights. Table 4 includes the weights for each absorption line. The table omits lines that were rejected for having negative weights.
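Given \(\widetilde{\Sigma}\), the species abundance is an inverse-covariance weighted mean. The sketch below implements the weighting and the iterative rejection of negative weights; it takes \(\widetilde{\Sigma}\) as given and, unlike the procedure in the text, does not recompute \(s_{\rm X}\) after each rejection.

```python
import numpy as np

def weighted_abundance(abund, sigma_tilde):
    """Inverse-covariance weighted mean abundance with iterative rejection
    of negative-weight lines (cf. Ji et al. 2020, Appendix B).

    abund: per-line abundances; sigma_tilde: covariance matrix of Eq. 3.
    """
    keep = np.ones(len(abund), dtype=bool)
    while True:
        inv = np.linalg.inv(sigma_tilde[np.ix_(keep, keep)])
        ones = np.ones(keep.sum())
        w = inv @ ones / (ones @ inv @ ones)      # weights sum to 1
        if (w >= 0).all():
            break
        # Reject the most negative weight and recompute.
        keep[np.flatnonzero(keep)[np.argmin(w)]] = False
    mean = w @ abund[keep]
    err = np.sqrt(1.0 / (ones @ inv @ ones))
    return mean, err, keep
```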
Ji et al. also described how to calculate the error in abundance ratios. Most lines in our study vary in the same direction with changes in atmospheric parameters. As a result, the error on most abundance ratios is smaller than simply adding the individual elements' errors in quadrature. We also calculated the covariance between two different abundance ratios. The figures in Section 5 show the covariance as ellipses.

Table 6 gives the abundances and errors for each star. Fe i and Fe ii are given as [Fe/H]. The other elements are shown as ratios to iron ([X/Fe]). The neutral species are shown relative to Fe i, and the ionized species are shown relative to Fe ii. The columns labeled \(N\) give the number of lines used in the abundance determinations. Elements measured with spectral synthesis are reported with "syn" in this column.

## 5 Abundance patterns

Globular clusters have fascinating abundance patterns, some of which we discussed in Section 1. In this section, we examine some of the well-known abundance patterns, like the Na-O anti-correlation, as well as some lesser-known patterns, such as the potassium distribution, in M92. Most interestingly, we discuss the variation of neutron-capture abundances.

### Trends with Luminosity

Stellar evolution along the RGB for low-mass stars can alter the surface abundances of some elements. The main mechanisms for these alterations are the first dredge-up at the base of the RGB and thermohaline mixing at the luminosity function bump (Charbonnel & Zahn, 2007). The abundances of most elements heavier than O are not expected to show evolution on the RGB. Nonetheless, we observed trends with luminosity for many elements heavier than O, which we interpreted as systematic errors in our LTE analysis (Section 4.6). We applied corrections to some of those elements but not to Li, C, or O, which we discuss here.

Lithium is the element that most dramatically exhibits changes on the RGB. The top panel of Figure 5 shows Li abundance measurements in M92 compared with those in NGC 6397 (Lind et al., 2009), a slightly more metal-rich globular cluster. Unlike the exquisite observations of Lind et al., our observations were not tailored to measure Li abundances. Therefore, the only stars in our sample that show Li detections are near the main sequence turn-off, where the first dredge-up has not yet depleted Li to the fullest extent. Previous measurements of Li abundance in M92 include those by Deliyannis et al. (1995) and Boesgaard et al. (1998). They observed Li abundances up to four times higher than we observed. As they pointed out, there appears to be a dispersion of \(A\)(Li) on the sub-giant branch, even at fixed \(T_{\rm eff}\).

Whereas the first dredge-up dilutes Li by mixing the surface into layers of the star that were once hot enough to burn Li, thermohaline mixing actively destroys Li by mixing it into layers that are presently hot enough to burn Li. The same mechanism also mixes the surface material to layers that are hot enough to participate in the CNO cycle (Smith et al., 1996; Smith, 2002; Smith & Briley, 2005, 2006; Angelou et al., 2011). As a result, the surface abundances of C and O are depleted, and the surface abundance of N is enhanced. Figure 5 shows that the C abundances decline at an absolute magnitude of approximately zero. Our measurements of O on the lower RGB are not sufficiently sensitive to observe a decline in O abundances. There is a dispersion in C and O abundances even at fixed magnitude. GCs are well-known to show such dispersions.

### Light elements

M92 has been known to exhibit a dispersion in light elements since Cohen (1979) measured abundances of red giants with the echelle spectrograph at the Kitt Peak/Mayall telescope. She found a star-to-star scatter of 0.8 dex in Na. Norris & Pilachowski (1985) found that Na abundances were correlated with N abundances, consistent with today's concordant view of light-element abundance variations in GCs. Sneden et al. (1991) first quantified the Na-O anti-correlation in M92, which Kraft et al. (1993) and Sneden et al. (1994) placed in context with the Na-O anti-correlations in other globular clusters. At the time, there was debate over whether the anti-correlation was primordial or a consequence of mixing during evolution on the RGB (Kraft, 1994). Finally, Gratton et al. (2001) ruled out mixing as the source of the abundance variations by observing the Na-O anti-correlation on the sub-giant branch of two slightly more metal-rich GCs. Figure 6 shows measurements of the Na-O (first panel) and Mg-Al (third panel) anti-correlations in M92.
This figure and subsequent figures show only giant stars (RGB and AGB) to limit the effect of systematic errors. The error ellipses in the figure include covariance between the element ratios (Section 4.7). The measurements are color-coded by \(T_{\rm eff}\) in order to investigate whether abundances change with stellar parameters.

Figure 5: The evolution of Li, C, and O abundances in M92 with stellar luminosity. The Li abundances in M92 are compared to those in NGC 6397 (Lind et al., 2009).

Trends between abundance and \(T_{\rm eff}\) could reflect abundance changes with evolution for some elements, like O. However, trends with \(T_{\rm eff}\) for most elements would indicate a systematic error in the abundance analysis. We do not observe any clear pattern with \(T_{\rm eff}\). In fact, the luminosity correction to abundances (Section 4.6) virtually ensures that there would be no such trend.

The light element abundance correlations are notoriously difficult to observe in optical spectra. The atomic lines of O in the optical are very weak ([O I] \(\lambda\lambda\)6300,6364, dipole-forbidden and split by \(J\) degeneracy of the lower level) or highly subject to NLTE corrections (O i \(\lambda\lambda\lambda\)7772, 7774, 7775, triply split by \(J\) degeneracy of the upper level). Likewise, the optical Al lines are either weak (Al i \(\lambda\lambda\)6696, 6698, split by \(J\) degeneracy of the upper level) or very strong and therefore subject to NLTE corrections and hypersensitivity to microturbulent velocity (Al i \(\lambda\lambda\)3944, 3962, split by \(J\) degeneracy of the lower level). On the other hand, Na and Mg are comparatively easy to measure. The second panel of Figure 6 shows the Na-Mg anti-correlation. The relationship between Na and Mg is not as direct as between Mg and Al. High-temperature proton capture effectively converts Ne into Na and Mg into Al. The conversion of Mg into Al makes it sensible to show the Mg-Al anti-correlation, but the ease of measuring Na and Mg prompts us to show the Na-Mg anti-correlation instead.

The separation into two populations is particularly clear in the Na-Mg anti-correlation. Population 1G has halo-like abundance patterns: sub-solar [Na/Fe] and elevated [Mg/Fe]. On the other hand, population 2G shows the signatures of high-temperature proton burning: enhanced [Na/Fe] and depleted [Mg/Fe]. Milone et al. (2017) quantified the degree to which various GCs separated into multiple populations using the "chromosome map" based on _HST_ wide-band filters. M92 is a fairly typical cluster in the chromosome map. However, it may stand out in Na and Al abundances (see APOGEE2 discussion below).

We classified each star as belonging to 1G or 2G based on its Na or Mg abundance. For classification by Na abundance, we drew a dividing line at [Na/Fe] = +0.1, with 1G below the line and 2G above the line. There are no giants with Na abundances in the range \(-0.01<\) [Na/Fe] \(<+0.25\). Therefore, classification by Na abundance is unambiguous. We drew a separate dividing line at [Mg/Fe] = +0.47. However, the classification by [Mg/Fe] is not as clear as the classification by [Na/Fe] because one star (IV-79)--classified as 2G from its Na abundance--is on the 1G side of the [Mg/Fe] dividing line. Figures 6-7 and 10-12 show these dividing lines.
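These dividing lines translate directly into a classifier. The function below is illustrative (the names and the NaN convention are ours), and it inherits the caveat that one star, IV-79, is classified differently by the two criteria.

```python
import numpy as np

def classify_generation(na_fe=np.nan, mg_fe=np.nan):
    """1G/2G classification using the dividing lines [Na/Fe] = +0.1 and
    [Mg/Fe] = +0.47. Na takes precedence because the Na-based split is
    unambiguous (no giants with -0.01 < [Na/Fe] < +0.25)."""
    if not np.isnan(na_fe):
        return "2G" if na_fe > 0.1 else "1G"
    if not np.isnan(mg_fe):
        return "1G" if mg_fe > 0.47 else "2G"
    return "unclassified"
```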
Yong et al. (2005) discovered a Mg-Si anti-correlation in NGC 6752, which could be explained by proton capture onto \({}^{27}\)Al. That reaction has two branches: \({}^{27}\)Al\((p,\alpha)^{24}\)Mg and \({}^{27}\)Al\((p,\gamma)^{28}\)Si. The ratio of the second branch to the first branch increases with the temperature of the proton burning (Prantzos et al., 2017). The two branches naturally result in a Mg-Si anti-correlation and an Al-Si correlation. The right panel of Figure 6 shows the Mg-Si anti-correlation. While it is clear that Mg and Si both vary from star to star in M92, the anti-correlation is not nearly as obvious as for Na-Mg.

The infrared spectrum is more amenable to measuring O from molecular features and Al from better-behaved atomic lines. Masseron et al. (2019) and Meszaros et al. (2020) used APOGEE2 \(H\)-band spectra (Abolfathi et al., 2018) to measure O, Mg, and Al abundances in M92. (The Na lines in the \(H\)-band are too weak to observe in M92.) The APOGEE2 versions of the Mg-Al (and Al-O) anti-correlations are much more obvious than in our Figure 6. Along with M53, M92 is one of the two GCs with the clearest separation into two populations in the APOGEE2 abundance patterns, especially those involving Al. Masseron et al. (2019) observed a significant population of stars in M92 and M15 with [Mg/Fe] \(<0\). We do not observe such a population with our optical spectra. We do not know whether the discrepancy results from a different selection of stars or a difference in analysis techniques.

Andrews et al. (2001) found that foreground interstellar Na i absorption varies over small spatial scales in the direction of M92. There are two main absorption clouds, separated in velocity by 19 km s\({}^{-1}\). We confirm from a qualitative inspection of the HIRES spectra that there is a large scatter in EW of both interstellar clouds across the face of M92. The EWs of one cloud do not appear correlated with the EWs of the other cloud. Fortunately, the interstellar absorption is separated by at least 70 km s\({}^{-1}\) from the Na i lines in M92 stars. Consequently, the large interstellar variations do not affect the stellar abundances.

Figure 6: Light element abundance anti-correlations for giant stars in M92. The ellipses represent the 1\(\sigma\) uncertainties, including covariance. The dashed lines indicate the division between first- and second-generation stars (1G and 2G).

### Potassium

Cohen et al. (2011), Cohen & Kirby (2012), and Mucciarelli et al. (2012) found that NGC 2419, an outer halo GC, exhibits an unusual variation in K abundances. The abundances are unusual for showing a dispersion, whereas most other GCs were not known to show a dispersion (Carretta et al., 2013). Furthermore, the K abundances in NGC 2419 are strongly anti-correlated with Mg. The anti-correlation suggests a similar nucleosynthetic pathway to the Na-O and Mg-Al anti-correlations: high-temperature hydrogen burning (Ventura et al., 2012; Iliadis et al., 2016). In the last decade, at least seven more GCs were found to show potassium abundance variations, with some clusters showing a Mg-K anti-correlation (Mucciarelli et al., 2015, 2017; Carretta, 2021, 2022; Alvarez Garay et al., 2022).

Figure 7 shows the K abundances in M92 vs. Na and Mg abundances. There is no apparent correlation or anti-correlation between K and Na or Mg. All stars have [K/Fe] in the range +0.2 to +0.7. The scatter in [K/Fe] is slightly more than expected from the measurement uncertainty if they all had the same [K/Fe] value. The potassium abundance is only slightly lower for 1G than for 2G.
Although the significance of this result is not high, it is qualitatively consistent with the Mg-K anti-correlation observed in NGC 2419 and other clusters.

We initially measured a high K abundance in star X-20, but we had doubts about the telluric correction. Our colleagues from the California Planet Survey re-observed the star (Section 2.2) so that we could check this measurement. The EW of K i \(\lambda\)7699 was 32% smaller in the new spectrum than in the archival spectrum, probably due to difficulties in the telluric correction. X-20 still has the highest K abundance among our sample, but it is no longer a highly significant outlier.

### Iron

The abundances of iron and the iron-peak elements in M92 behave like most other GCs. Specifically, there is little star-to-star dispersion. However, Langer et al. (1998) reported that one star, XI-19, has stronger iron lines than XII-8 and V-45, which are very close in the CMD. They found that [Fe/H] in XI-19 was \(0.18\pm 0.01\) higher than the other two stars. Broadband (Legnardi et al., 2022; Lardo et al., 2022) and narrow-band (Lee, 2023) photometry also supports the presence of multiple metallicities in M92. All three of these stars are in our sample. In fact, we confirm that [Fe I/H] in XI-19 is \(0.19\pm 0.05\) higher than XII-8 and \(0.13\pm 0.04\) higher than V-45. The abundance of [Fe II/H] is higher in XI-19 than the other two stars by \(0.10\pm 0.06\). The abundances of other iron-peak elements scale with Fe. Our sample also contains XI-80, which is also extremely close in the CMD to the other three stars. The abundances of Ti and Fe in XI-80 are nearly identical to XI-19, which is to say that they are higher than XII-8 and V-45.

Subtle iron abundance variations have been detected in "normal" GCs (e.g., Marino et al., 2009), but the findings are sometimes controversial (e.g., Mucciarelli et al., 2015). The question is thorny because of subtleties, like NLTE effects and atmosphere modeling differences between the RGB and AGB, that can cause the appearance of iron abundance variations. Although we corroborate Langer et al.'s (1998) detection of an iron abundance variation in M92, our sample is not ideal to explore this question in detail because it spans the entire RGB. The ideal sample would be confined to a tight locus in the CMD. Thus, we save this question for a future sample (see Section 7).

### Neutron-capture elements

The neutron-capture abundances in M92 have been controversial since Roederer (2011) and Roederer & Sneden (2011) claimed a significant star-to-star variation, followed by Cohen's (2011) refutation of that claim. In this subsection, we re-examine the variation of the neutron-capture abundances. Our sample includes all of the stars analyzed by Cohen (2011) because she observed them with Keck/HIRES. Hence, those spectra are in our HIRES archival sample.

The simplest test for abundance variations is to compare spectra with a limited range of atmospheric parameters. Figure 8 shows spectra in three bins of \(T_{\rm eff}\). The stars represented in each panel were chosen to have \(T_{\rm eff}\) within 25 K of a central value. The actual ranges of \(T_{\rm eff}\) from top to bottom are 11 K, 34 K, and 42 K. The lines that vary the most from star to star are Eu ii \(\lambda\)4130, Eu ii \(\lambda\)4205, and Ba ii \(\lambda\)5854. Lines of lighter elements, like Ti and Fe, are not perfectly identical, but they are less variable than the neutron-capture lines.

Figure 7: Potassium vs. sodium (left) and magnesium (right) abundances in M92. The dashed lines indicate the division between first- and second-generation stars (1G and 2G).

Furthermore, the variations in the transition metals do not correlate with the variations in the neutron-capture elements. We conclude that there is a genuine dispersion in neutron-capture abundances in M92.

Two quantities can be correlated only if there is a dispersion in both quantities. Figure 9 shows the correlations between the different permutations of Y, Ba, La, and Eu. The abundances of each pair are correlated. We judge the significance of the correlation with the linear Pearson correlation coefficient (\(r_{\rm measured}\), given in the figure), where zero is uncorrelated and one is perfectly correlated. We also show the \(p\) value, which gives the probability that a correlation at least as significant as \(r_{\rm measured}\) would appear by chance. The Pearson correlation coefficient does not take into account the covariance for each data point. Therefore, we also calculated the Pearson correlation coefficient, \(r(0)\), for the null test where the true values for all the points are identical. The true value of \(r(0)\) in this case is exactly zero. We sampled \(10^{4}\) "observed" values by taking the covariance ellipses as probability distributions. We computed the mean and standard deviation of the resulting correlation coefficients, which are given by \(r(0)\) in the figure. The difference between \(r(0)\) and zero is the correlation expected from covariance alone. In all cases, \(r_{\rm measured}\) is at least 3.8 times \(r(0)\), indicating that the correlations are significant, even when accounting for covariance.
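The \(r(0)\) null test is straightforward to reproduce: each data point's 2x2 covariance ellipse is treated as a Gaussian probability distribution, and the trial count below follows the \(10^{4}\) used in the text. (The random seed and function name are ours.)

```python
import numpy as np

rng = np.random.default_rng(0)

def r_null(covs, n_trials=10_000):
    """Pearson r expected purely from per-point covariance: all points share
    the same true value (placed at the origin, without loss of generality).

    covs: list of 2x2 covariance matrices (the error ellipses).
    Returns the mean and standard deviation of r over the trials.
    """
    rs = np.empty(n_trials)
    for t in range(n_trials):
        pts = np.array([rng.multivariate_normal([0.0, 0.0], c) for c in covs])
        rs[t] = np.corrcoef(pts[:, 0], pts[:, 1])[0, 1]
    return rs.mean(), rs.std()
```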
Figure 8: Keck/HIRES archival spectra of giant stars in M92. The spectra in each row are grouped by \(T_{\rm eff}\). The stars have \(T_{\rm eff}\) within 25 K of 4400 K (_top_), 4550 K (_middle_), and 4670 K (_bottom_). AGB stars are indicated with asterisks in their names. Abundances of Fe, Ba, and Eu are given. The absorption lines Eu ii \(\lambda\)4130, Eu ii \(\lambda\)4205, and Ba ii \(\lambda\)5854 are particularly variable.

We also judged the degree of correlation by fitting a straight line to the data while accounting for both the variance and covariance in the uncertainties. We parameterized the line as \(y=x\tan\theta+b\). Such a parameterization allows for a flat prior in the slope \(\theta\) without giving undue weight to large absolute values of the slope (a common problem when the line is parameterized as \(y=mx+b\); Hogg et al., 2010). We used LEO-Py (Feldmann, 2019) to fit for \(\theta_{\rm LEO-Py}\), taking into account covariance. Figure 9 shows the resulting values. We do not include uncertainties on \(\theta_{\rm LEO-Py}\) because the formal uncertainties are less than 1%. A perfect correlation would result in \(\theta_{\rm LEO-Py}=45^{\circ}\), and no correlation would result in \(0^{\circ}\) or \(90^{\circ}\). All of the slopes are in the range \((45\pm 11)^{\circ}\). Our statistical tests show that the correlations between neutron-capture abundances are significant, even when accounting for covariance in the abundance uncertainties.

Figure 9: Correlations between the neutron-capture elements Y, Ba, La, and Eu. The top two rows show M92 by itself. The linear Pearson correlation coefficient, \(r_{\rm measured}\), is given in the upper left of each panel. Each panel also includes \(r(0)\), an estimate of the correlation coefficient if the scatter in the points was due entirely to their individual covariances, represented by the ellipses. Another test of correlation is \(\theta_{\rm LEO-Py}\), the slope of the best-fit line taking covariance into account. The bottom two rows show M92 with measurements in M15 from the literature (Sobeck et al., 2011; Worley et al., 2013). The dashed lines show the solar \(r\)-process pattern (Simmerer et al., 2004). The symbols (evolutionary state) and colors (\(T_{\rm eff}\)) have the same meaning as in Figure 1. M92 stars in the sample of Cohen (2011) are outlined in black in the top two rows.

We conclude that there is a significant dispersion in the neutron-capture abundances in M92. We further test whether the dispersion is a result of systematic errors by ruling out any residual trends with \(T_{\rm eff}\) or evolutionary state. The colors in Figure 9 correspond to \(T_{\rm eff}\), and the symbols distinguish RGB and AGB stars, as in Figure 1. There is no apparent trend with either \(T_{\rm eff}\) or evolutionary state.

We also confirm that the abundances are consistent with the \(r\)-process. There is copious evidence that the "main" \(r\)-process abundance pattern is universal, such that the ratio of any two neutron-capture elements will be constant in a star whose neutron-capture nucleosynthesis was dominated by the \(r\)-process (e.g., Sneden et al., 1996). It is typical to compare neutron-capture abundances to the solar system \(r\)-process abundance pattern. Figure 9 shows the solar-scaled pattern computed by Simmerer et al. (2004, also reported by Sneden et al., 2008). The abundances in M92 agree very well with the solar system \(r\)-process pattern except for the abundance of La, which is larger in M92. Sneden et al. (2008) reported that the typical metal-poor star has a higher [La/Eu] ratio than the solar-system \(r\)-process value, but the difference is only about 0.05 dex, whereas we observe an offset of \(\sim\) 0.3 dex. Our luminosity correction (Section 4.6) could explain the offset. The luminosity correction for La is among the larger corrections.

We compare the neutron-capture abundances in M92 with those in M15 in the bottom two rows of Figure 9. M15 shows a large dispersion in the \(r\)-process, as has been widely reported. The comparatively smaller dispersion in M92 is one reason why it has been difficult to conclude that it has a definitive dispersion. For some elements, like La and Eu, the abundances in M15 are consistent with the \(r\)-process. The abundances of other elements are not so clearly associated with a pure \(r\)-process. M15's Y abundances are quite scattered, as pointed out by Otsuki et al. (2006), who concluded that neutron-capture processes other than the main \(r\)-process were at work in M15. The Ba abundances are also more scattered than Eu and La, perhaps because Ba is often measured from strong lines, which are more sensitive than weak lines to errors in atmospheric parameters, like microturbulent velocity. Like M92, the La abundances in M15 are higher than the solar system \(r\)-process pattern reported by Simmerer et al. (2004).

It is agreed in the literature that there is no relation between light elements and neutron-capture elements in M15 (e.g., Sneden et al., 1997, 2000a; Worley et al., 2013). Figures 10, 11, and 12 explore whether such a relation exists in M92. The left panels show neutron-capture abundances (Sr, Y, Zr, Ba, La, Ce, Nd, Sm, Eu, and Dy) vs. Na, and the right panels show the same elements vs. Mg. Like in M15, the average neutron-capture abundance does not trend with the abundances of the light elements. However, the dispersion in neutron-capture elements is significantly larger in 1G (low Na, high Mg) stars than in 2G (high Na, low Mg) stars.

Table 7 and each panel of Figures 10-12 give the standard deviation (\(\sigma\)) and the reduced chi-squared (\(\chi^{2}_{r}\)) for each generation. Both measurements take into account measurement uncertainties. The standard deviation is found by maximizing a Gaussian likelihood function:

\[L_{i}=\frac{1}{\sqrt{2\pi\left(\delta_{[\mathrm{X/Fe}],i}^{2}+\sigma^{2}\right)}}\exp\left(-\frac{\left([\mathrm{X/Fe}]_{i}-\langle[\mathrm{X/Fe}]\rangle\right)^{2}}{2\left(\delta_{[\mathrm{X/Fe}],i}^{2}+\sigma^{2}\right)}\right) \tag{4}\]

where \(\delta_{[\mathrm{X/Fe}],i}\) is the error on [X/Fe] in star \(i\), taking into account covariance between the abundance of X and the abundance of Fe (Section 4.7). The total likelihood (\(\prod_{i}L_{i}\)) is maximized by varying the mean \(\langle[\mathrm{X/Fe}]\rangle\) and \(\sigma\) through a Markov chain Monte Carlo (MCMC). The errors reported on \(\sigma\) in Table 7 are the 16\({}^{\rm th}\) and 84\({}^{\rm th}\) percentiles of the MCMC.

Figure 10: First-peak neutron-capture abundances (Sr, Y, and Zr) vs. light element abundances (Na and Mg). The dashed lines indicate the division between first- and second-generation stars (1G and 2G). Stars included in Cohen's (2011) sample are outlined in black, but the abundance measurements are ours.
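The point estimate of \((\langle[\mathrm{X/Fe}]\rangle,\sigma)\) can be obtained by directly maximizing Equation 4; the text instead samples the posterior with an MCMC to get the 16th/84th-percentile errors. The demo numbers below are synthetic and illustrative only.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_like(params, x, dx):
    """Negative log-likelihood for a constant mean plus intrinsic scatter
    sigma, with per-star measurement errors dx (Eq. 4)."""
    mean, sigma = params
    var = dx**2 + sigma**2
    return 0.5 * np.sum((x - mean)**2 / var + np.log(2 * np.pi * var))

# Synthetic check: 20 stars with errors 0.05 dex and true scatter 0.15 dex.
rng = np.random.default_rng(1)
dx = np.full(20, 0.05)
x = rng.normal(0.4, np.sqrt(dx**2 + 0.15**2))
res = minimize(neg_log_like, x0=[x.mean(), 0.1], args=(x, dx),
               bounds=[(None, None), (1e-4, None)])
mean_hat, sigma_hat = res.x  # should recover roughly 0.4 and 0.15
```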
The reduced chi-squared is

\[\chi_{r}^{2}=\frac{1}{N-1}\sum_{i}\left(\frac{[\mathrm{X/Fe}]_{i}-\langle[\mathrm{X/Fe}]\rangle}{\delta_{[\mathrm{X/Fe}],i}}\right)^{2} \tag{5}\]

where \(\langle[\mathrm{X/Fe}]\rangle\) is the mean of [X/Fe] weighted by \(\delta_{[\mathrm{X/Fe}]}^{-2}\), and \(N\) is the number of stars. In the cases where \(\chi_{r}^{2}<1\), we report upper limits for \(\sigma\).

The separation by [Na/Fe] is more accurate because none of the stars have ambiguous identifications. On the other hand, more stars have Mg measurements than Na measurements because some spectra did not include the Na D doublet. In all cases, \(\sigma\) and \(\chi_{r}^{2}\) for all ten neutron-capture elements are larger for 1G than for 2G when the generations are divided by Mg abundance. Some elements (Sr, Zr, Ce, Nd, Sm) do not show a larger dispersion in 1G when the generations are divided by Na abundance, but in some of these cases, the dispersion is measured from just a few stars.

There is some evidence that the difference between 1G and 2G is more pronounced for the first-peak \(r\)-process elements (Sr, Y, and Zr) than for Ba and the lanthanides (Ba, La, Ce, Nd, Sm, Eu, and Dy). When 1G and 2G are divided by Mg abundance, \(\chi_{r}^{2}\) in 1G exceeds 3.8 for Ba and the lanthanides, but it is less than 3.8 for the first-peak \(r\)-process elements. We discuss the implications of this distinction in Section 6.1.

#### 5.5.1 Comparison to Roederer & Sneden (2011) and Cohen (2011)

Although we observed a neutron-capture dispersion, our measurement of the dispersion is not as large as reported by Roederer & Sneden (2011). Using WIYN/Hydra spectra, they found standard deviations of 0.12, 0.17, and 0.23 for [Y/Fe], [La/Fe], and [Eu/Fe], respectively. Those values are slightly larger than what we observed for 1G and significantly larger than for 2G. It is possible that the smaller dispersion that we report is a result of the higher spectral resolution and generally higher S/N of the HIRES spectra compared to the Hydra spectra. The Hydra spectra did not include useful absorption lines of Na or Mg.
Instead, Roederer & Sneden crossmatched the Hydra spectra with the Na abundances of Sneden et al. (2000b), derived from high-resolution spectroscopy. They did not observe the pattern that we observe, wherein the dispersions in [La/Fe] and [Eu/Fe] are larger for stars with higher [Na/Fe]. The reason for this discrepancy is unclear, but the smaller uncertainties in our measurements of \(r\)-process abundances would make our sample more sensitive to differences in abundance dispersions.

\begin{table}
\begin{tabular}{l c c c c c c c c}
\hline \hline
 & \multicolumn{4}{c}{Na} & \multicolumn{4}{c}{Mg} \\
\cline{2-9}
Element & \(\sigma\)(1G) & \(\sigma\)(2G) & \(\chi_{r}^{2}\)(1G) & \(\chi_{r}^{2}\)(2G) & \(\sigma\)(1G) & \(\sigma\)(2G) & \(\chi_{r}^{2}\)(1G) & \(\chi_{r}^{2}\)(2G) \\
\hline
Sr & \(<0.06\) & \(0.05^{+0.04}_{-0.03}\) & 0.95 & 1.32 & \(0.11^{+0.04}_{-0.03}\) & \(0.04\pm 0.03\) & 3.70 & 1.22 \\
Y & \(<0.04\) & \(<0.02\) & 0.41 & 0.36 & \(0.04\pm 0.03\) & \(<0.02\) & 1.29 & 0.52 \\
Zr & \(<0.05\) & \(<0.02\) & 0.32 & 0.39 & \(0.07^{+0.04}_{-0.03}\) & \(<0.03\) & 2.14 & 0.61 \\
Ba & \(0.09^{+0.06}_{-0.05}\) & \(<0.02\) & 1.39 & 0.49 & \(0.12^{+0.03}_{-0.03}\) & \(<0.03\) & 3.99 & 0.79 \\
La & \(0.18^{+0.10}_{-0.06}\) & \(0.04^{+0.04}_{-0.03}\) & 2.68 & 1.06 & \(0.16^{+0.05}_{-0.04}\) & \(0.08^{+0.04}_{-0.03}\) & 6.95 & 1.74 \\
Ce & \(<0.08\) & \(<0.03\) & 0.09 & 0.64 & \(0.13^{+0.06}_{-0.04}\) & \(<0.04\) & 3.84 & 0.67 \\
Nd & \(<0.10\) & \(<0.04\) & 0.44 & 0.75 & \(0.16^{+0.07}_{-0.04}\) & \(0.07\pm 0.04\) & 5.19 & 1.37 \\
Sm & \(<0.26\) & \(0.07^{+0.06}_{-0.04}\) & 0.84 & 1.38 & \(0.18^{+0.06}_{-0.06}\) & \(0.09^{+0.05}_{-0.04}\) & 5.34 & 1.81 \\
Eu & \(0.14^{+0.08}_{-0.06}\) & \(<0.03\) & 2.10 & 0.61 & \(0.15^{+0.06}_{-0.04}\) & \(<0.04\) & 6.11 & 0.97 \\
Dy & \(0.22^{+0.29}_{-0.12}\) & \(0.09\pm 0.05\) & 1.64 & 1.68 & \(0.22^{+0.11}_{-0.06}\) & \(0.11^{+0.05}_{-0.03}\) & 9.19 & 2.65 \\
\hline
\end{tabular}
\end{table}
Table 7: Abundance Dispersions in 1G and 2G

Figure 11: Same as Figure 10 but for barium and some of the lanthanides (Ba, La, Ce).

On the other hand, we do observe a dispersion, in contrast with the conclusion of Cohen (2011). Cohen concluded that the apparent dispersion observed by Roederer & Sneden was a result of lower data quality. However, Cohen (2011) did not analyze all of the available HIRES spectra. Figures 9-12 indicate the spectra that were part of her sample. By design, all 12 of her spectra are part of our sample because they are all in the KOA. Of the 12 stars, two were members of 1G, and 10 were members of 2G. A random sample of stars in M92 would indeed include more 2G stars than 1G stars because 2G is more populous (e.g., Masseron et al., 2019). We were able to detect a dispersion because we analyzed a sample that included more 1G stars. If we were to restrict our sample to the same stars analyzed by Cohen, we also would have concluded that there is no significant dispersion in neutron-capture abundances.

## 6 Discussion

The most interesting result of this study is the decrease in \(r\)-process abundance dispersion from the first to the second generation of stars in M92. This is the first discovery of a relation between the light elements and neutron-capture elements in a GC. As a result, this is also the first linkage between \(r\)-process evolution and the evolutionary history of a GC. We propose that an \(r\)-process event occurred shortly before or concurrently with the formation of 1G in M92.
The \(r\)-process event polluted the cluster gas inhomogeneously at first. The stars in 1G formed before the gas had time to mix evenly. The mixing timescale for the gas would be similar to the crossing time (\(t_{\rm cross}=R/v\)). If we assume that the typical length \(R\) is the scale length (\(2^{\prime}\) or 5 pc; Drukier et al., 2007), and the relevant velocity \(v\) is the velocity dispersion (6.3 km s\({}^{-1}\); Drukier et al., 2007), then \(t_{\rm cross}\) is 0.8 Myr.
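The quoted crossing time follows directly from these two numbers; a quick unit-conversion check (nothing assumed beyond the quoted \(R\) and \(v\)):

```python
from astropy import units as u

R = 5 * u.pc                 # scale length (Drukier et al. 2007)
v = 6.3 * u.km / u.s         # velocity dispersion (Drukier et al. 2007)
t_cross = (R / v).to(u.Myr)  # ~0.78 Myr, consistent with the 0.8 Myr quoted
print(t_cross)
```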
Regardless, 0.8 Myr is shorter than the \(\sim 30\) Myr timescale for AGB stars to produce the products of high-temperature hydrogen burning, like enhanced Na and depleted Mg (e.g., D'Ercole et al., 2008; Bastian & Lardo, 2018). By the time the gas was polluted with 2G material, the \(r\)-process material was already well mixed. This explains why there is little dispersion in the \(r\)-process in 2G.

The preceding estimate is based on the current structural parameters of M92, which might have been different in the past. In fact, the cluster might not have been in dynamical equilibrium during its formation, so a dynamical timescale might not be relevant. Simulations of cluster formation show that the gas is far from a spherical distribution. In fact, it is highly asymmetric and filamentary (e.g., Li et al., 2019; Grudic et al., 2021). In summary, the timescale based on today's crossing time is more of a convenience than a rigorous estimate. Nonetheless, we will now consider the implications of a \(\gtrsim 1\) Myr delay between the start of formation of 1G and the start of formation of 2G.

The low-mass stars we observe today in M92 required tens of millions of years (the Kelvin-Helmholtz timescale) to collapse from protostars into main sequence stars. Therefore, our scenario posits that the protostars in 1G had already begun to collapse, locking in the \(r\)-process material, before the gas became well-mixed. The protostars in 2G formed after the gas mixed and homogenized. However, the low-mass protostars in neither population reached the main sequence until tens of millions of years after this homogenization.

Figure 12: Same as Figure 11 but for other lanthanides (Nd, Sm, Eu, and Dy).

Our proposed scenario requires a very prompt source of the \(r\)-process if the source was a star that formed concurrently with 1G in M92. Although neutron star mergers (NSMs) and their associated kilonovae are the only confirmed sources of the \(r\)-process (e.g., Chornock et al., 2017), their delay times (30 Myr or longer6, Kalogera et al., 2001) exceed the requirement that the \(r\)-process be produced on the mixing timescale of \(t_{\rm cross}=0.8\) Myr. Bekki & Tsujimoto (2017) and Zevin et al. (2019) independently explored scenarios where a fast-merging NSM enriches a proto-GC. Both models result in 2G being preferentially enriched in the \(r\)-process relative to 1G, which is inconsistent with our observations.

Footnote 6: It is possible that some binary neutron stars could merge as soon as 1 Myr after the core collapse supernovae that created them (Beniamini & Piran, 2019; Safarzadeh et al., 2019). However, these NSM candidates are extremely rare. Furthermore, the relevant timescale is the time from stellar birth to NSM, including the hydrostatic burning lifetime of at least several Myr. We conclude that a NSM could not be born in a GC and also enrich that GC in less than 0.8 Myr.

Instead, a short delay time between 1G and 2G could imply that the \(r\)-process source is a massive star that formed concurrently with 1G. Some proposed mechanisms include magnetorotational supernovae (Nishimura et al., 2015) and collapsars (Siegel et al., 2019). On the other hand, the \(r\)-process source could have been a star born before M92 was formed. Our scenario requires that the star _exploded_ during or shortly before the formation of 1G. However, the precursor star could have been _born_ shortly after the Big Bang. In principle, the progenitor to a NSM could have been born about 100 Myr after the Big Bang and then exploded approximately 1 Gyr later, when M92 formed.

Tarumi et al. (2021) proposed an alternative scenario for M15, wherein an \(r\)-process event occurred near but external to the GC. The cluster's natal gas cloud was inhomogeneously polluted with \(r\)-process material, as it would be if the event happened inside the cluster. However, the external \(r\)-process material could continue to fall onto the cluster for the duration of star formation in 1G and 2G. The extended pollution time is necessary to explain why the \(r\)-process dispersion persists in both stellar generations in M15. However, that requirement is not necessary in M92, where only 1G shows the dispersion.

One problem with making the \(r\)-process with massive stars is that massive stars also synthesize Fe (Macias & Ramirez-Ruiz, 2019). Any inhomogeneity in the \(r\)-process should be matched by an inhomogeneity in Fe. However, the inhomogeneity in Fe will be washed out by the many other core collapse supernovae in M92 that also produced Fe.7 The \(r\)-process event would have been rare. In fact, there might have been just a single event.8 We observe a standard deviation in [Fe ii/H] of 0.05. The standard deviation of [Eu/Fe] in 1G is 0.15. If the \(r\)-process event inhomogeneously polluted the cluster with 0.1 \(M_{\odot}\) of Fe such that the dispersion in [Fe/H] was 0.15, only two more events that evenly polluted the cluster with 0.1 \(M_{\odot}\) of Fe each would be required to reduce the dispersion to 0.05, because the inhomogeneous component would then make up only one-third of the total Fe. Naturally, individual supernovae would not evenly pollute the cluster on timescales less than \(t_{\rm cross}\). However, there would be \(\sim 10^{4}\) core collapse supernovae that produced Fe but not \(r\)-process. The Fe inhomogeneities would be averaged out, whereas the inhomogeneity of the rare \(r\)-process event would persist for at least \(t_{\rm cross}\).

Footnote 7: M92 has a current stellar mass of \(2.7\times 10^{5}\) \(M_{\odot}\) (Baumgardt et al., 2020). Corrected for tidal stripping, its initial mass was about \(1.2\times 10^{6}\) \(M_{\odot}\) (Baumgardt et al., 2019). For a Kroupa (2001) initial mass function, 1% of stars explode as core collapse supernovae. Therefore, M92 experienced about \(10^{4}\) core collapse supernovae.

Footnote 8: One event per \(10^{6}\) stars is slightly lower than the frequency of long gamma ray bursts and one-tenth the frequency of \(r\)-process events required to explain the scatter of \(r\)-process abundances in metal-poor stars (Brauer et al., 2021).

M15 and M92 are the GCs that display the most obvious dispersions of \(r\)-process abundances. They are also the most metal-poor "classical" GCs. The association of low metallicity and \(r\)-process dispersion is probably not coincidental. Consider M5, which has 13 times the iron abundance of M92 (Harris, 1996). If M5 experienced the same type and frequency of \(r\)-process events as M92, then those events would contribute the same mass of \(r\)-process ejecta to M5's natal gas.
The \(r\)-process abundance of the gas that forms 1G in M5 would be \([(r_{\rm new}+r_{\rm natal})/{\rm Fe}_{\rm natal}]\). However, the natal abundance of Fe in M5 would be 13 times the value of M92. As a result, the \(r\)-process dispersion in M5's 1G would be 13 times smaller. Such a dispersion would be undetectable. Therefore, the abundances of metal-poor GCs would more obviously show the effect of rare events.

### First-Peak \(r\)-process Compared to Lanthanides

Sr, Y, and Zr do not follow the same abundance pattern as the heavier, "main" \(r\)-process in metal-poor stars. As a result, Travaglio et al. (2004) posited the existence of a lighter element primary process (LEPP). One candidate for the LEPP is the "limited" (or "weak") \(r\)-process that occurs in neutrino-driven winds from proto-neutron stars formed during core collapse supernovae (Frohlich et al., 2006; Pruet et al., 2006; Wanajo, 2006, 2013; Arcones & Thielemann, 2013). On the other hand, the main \(r\)-process requires higher neutron densities, which might be found in neutron star mergers (Lattimer & Schramm, 1974), jet-driven, magnetorotational supernovae (Nishimura et al., 2015), or supernovae from massive, rapidly rotating stars (collapsars, Siegel et al., 2019). The prevalence and existence of each of these sites is hotly debated. Nonetheless, it is clear that the first-peak \(r\)-process has multiple formation channels.

We found that the first-peak \(r\)-process elements exhibit a smaller distinction between 1G and 2G than Ba and the lanthanides (see Table 7 and Figures 10-12). One possibility is that the limited \(r\)-process site is common, whereas the main \(r\)-process site is rare. For example, consider that commonplace, low-mass core collapse supernovae could synthesize Sr, Y, and Zr. The large number (\(\sim 10^{4}\)) of these events during the early formation of M92 would cause the gas to converge on a single abundance of these lighter \(r\)-process elements. On the other hand, if Ba and the lanthanides were created by few events--perhaps a single event--then the gas would not converge on a single abundance of these heavier elements until it was evenly mixed by hydrodynamic processes.

There is copious evidence that the main \(r\)-process source is rare and prolific (e.g., Ji et al., 2016; Brauer et al., 2019). In other words, the events must happen infrequently but must produce a great deal of \(r\)-process elements. The large dispersion of the main \(r\)-process and the smaller dispersion of the limited \(r\)-process in M92 add further support to the rare and prolific nature of the main \(r\)-process. Neutron star mergers, magnetorotational supernovae, and collapsars are all rare and prolific producers of the main \(r\)-process.

The Milky Way also contains some evidence that the limited \(r\)-process happens more frequently and earlier in the Galaxy's history than the main \(r\)-process. For example, Holmbeck et al. (2020) found that limited-\(r\) stars (those with [Sr/Ba] \(>0.5\)) are more prevalent at lower metallicities. One explanation is that the main \(r\)-process sources--those that produce Ba--are less frequent. Stars that formed at low metallicities sampled the ejecta of fewer events. If the limited \(r\)-process is more common than the main \(r\)-process, then some metal-poor stars could be enriched in Sr but poor in Ba.

## 7 Summary

We measured detailed abundances of 35 stars in M92 from archival Keck/HIRES spectroscopy. We stacked all available spectra to achieve the maximum S/N possible.
Our analysis takes into account covariance between stellar parameters, like temperature and surface gravity. We made the following observations:

* M92 shows typical light-element abundance variations, like the Na-Mg anti-correlation. The cluster has a clear separation into first and second generations, which is especially apparent in the Na-Mg diagram.
* M92 does not display an obvious Mg-K anti-correlation, such as the one observed in NGC 2419.
* The neutron-capture abundance pattern in M92 is consistent with the \(r\)-process, with the exception that La appears overabundant relative to the solar-system \(r\)-process pattern. The same phenomenon has previously been observed in M15.
* We affirm that M92 has a dispersion in \(r\)-process abundances. The existence of a dispersion was previously reported by Roederer & Sneden (2011).
* The dispersion in the \(r\)-process is limited to the first generation of stars (low Na, high Mg). So far, M92 is unique in showing any relation between light element and neutron-capture abundances.
* The dispersion is smaller for Sr, Y, and Zr than for Ba and the lanthanides. We posited that the higher frequency of limited \(r\)-process events compared to main \(r\)-process events explains the difference in dispersions.

We proposed a scenario wherein a source of the main \(r\)-process polluted M92 as the first stars began to form. The first generation formed faster than a crossing time, which caused the \(r\)-process abundances to be inhomogeneous. The gas homogenized by the time the second generation formed, resulting in a negligible dispersion in \(r\)-process among the second generation. From rough estimates of the current crossing time in M92, we conclude that the formation starting times of the first and second generations in M92 were separated by at least 0.8 Myr. Although this timescale is shorter than other relevant timescales, like the lifetime of a massive AGB star, it could constrain GC formation theories where the two populations form simultaneously (e.g., early disk accretion, Bastian et al., 2013) or nearly simultaneously (e.g., very massive stars, Gieles et al., 2018).

Our archival spectroscopic sample was not designed to measure abundance variations. The stars were selected from the full magnitude range of the RGB and even the main sequence turn-off. The ideal sample of stars to detect abundance dispersions would span a narrow range of temperature and surface gravity (i.e., color and magnitude). In such a sample, absorption line strengths would correspond almost directly to abundance. We have already begun to acquire such samples in M15 and M92. These samples will provide more precise quantifications of dispersions among the \(r\)-process abundances and the relation between the \(r\)-process and lighter elements.

We are very grateful to Andrew Howard, Howard Isaacson, and the California Planet Search consortium for observing star X-20 with HIRES. We thank Eric Bell, Ivanna Escala, Oleg Gnedin, Keith Hawkins, J. Chris Howk, Rebecca Surman, Ralph Wijers, and especially Ian Roederer for insightful conversations. This research has made use of the Keck Observatory Archive (KOA), which is operated by the W. M. Keck Observatory and the NASA Exoplanet Science Institute (NExScI), under contract with the National Aeronautics and Space Administration.
This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation (NSF). This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France. E.N.K. acknowledges support from NSF CAREER grant AST-2233781. A.P.J. acknowledges support from the U.S. National Science Foundation (NSF) grant AST-2206264. This work was performed in part at the Aspen Center for Physics, which is supported by NSF grant PHY-1607611. _Facility:_ Keck:I (HIRES). _Software:_ MOOG (Sneden, 1973; Sneden et al., 2012), ATLAS9 (Kurucz, 1993), XIDL, MPFIT (Markwardt, 2012), LEO-Py (Feldmann, 2019), Linemake (Placco et al., 2021a, b).
2305.01871
Convolutional neural network-based single-shot speckle tracking for x-ray phase-contrast imaging
X-ray phase-contrast imaging offers enhanced sensitivity for weakly-attenuating materials, such as breast and brain tissue, but has yet to be widely implemented clinically due to high coherence requirements and expensive x-ray optics. Speckle-based phase contrast imaging has been proposed as an affordable and simple alternative; however, obtaining high-quality phase-contrast images requires accurate tracking of sample-induced speckle pattern modulations. This study introduced a convolutional neural network to accurately retrieve sub-pixel displacement fields from pairs of reference (i.e., without sample) and sample images for speckle tracking. Speckle patterns were generated utilizing an in-house wave-optical simulation tool. These images were then randomly deformed and attenuated to generate training and testing datasets. The performance of the model was evaluated and compared against conventional speckle tracking algorithms: zero-normalized cross-correlation and unified modulated pattern analysis. We demonstrate improved accuracy (1.7 times better than conventional speckle tracking), bias (2.6 times), and spatial resolution (2.3 times), as well as noise robustness, window size independence, and computational efficiency. In addition, the model was validated with a simulated geometric phantom. Thus, in this study, we propose a novel convolutional-neural-network-based speckle-tracking method with enhanced performance and robustness that offers improved alternative tracking while further expanding the potential applications of speckle-based phase contrast imaging.
Serena Qinyun Z. Shi, Nadav Shapira, Peter B. Noël, Sebastian Meyer
2023-05-03T03:09:06Z
http://arxiv.org/abs/2305.01871v1
# Convolutional neural network-based single-shot speckle tracking for x-ray phase-contrast imaging ###### Abstract X-ray phase-contrast imaging offers enhanced sensitivity for weakly-attenuating materials, such as breast and brain tissue, but has yet to be widely implemented clinically due to high coherence requirements and expensive x-ray optics. Speckle-based phase contrast imaging has been proposed as an affordable and simple alternative; however, obtaining high-quality phase-contrast images requires accurate tracking of sample-induced speckle pattern modulations. This study introduced a convolutional neural network to accurately retrieve sub-pixel displacement fields from pairs of reference (i.e., without sample) and sample images for speckle tracking. Speckle patterns were generated utilizing an in-house wave-optical simulation tool. These images were then randomly deformed and attenuated to generate training and testing datasets. The performance of the model was evaluated and compared against conventional speckle tracking algorithms: zero-normalized cross-correlation and unified modulated pattern analysis. We demonstrate improved accuracy (1.7 times better than conventional speckle tracking), bias (2.6 times), and spatial resolution (2.3 times), as well as noise robustness, window size independence, and computational efficiency. In addition, the model was validated with a simulated geometric phantom. Thus, in this study, we propose a novel convolutional-neural-network-based speckle-tracking method with enhanced performance and robustness that offers improved alternative tracking while further expanding the potential applications of speckle-based phase contrast imaging. _Index Terms_--machine learning, x-ray. ## I Introduction X-ray phase-contrast imaging (PCI) has proven to be a powerful technique for non-destructive material testing and biomedical imaging [1, 2]. While conventional x-ray imaging relies on absorption in high-density materials for signal generation, PCI measures changes in the wavefront (phase shift) when x-rays pass through an object. For typical x-ray energies and materials of low atomic numbers - such as human tissue - the generation of phase shift is several orders of magnitude larger than absorption. Therefore, for weakly-attenuating materials, PCI provides enhanced sensitivity (i.e., visualization of soft-tissue contrast) that is inaccessible in conventional x-ray imaging. The clinical potential of PCI has been demonstrated for a wide range of pathologies and anatomical sites, such as the musculoskeletal system [3], central nervous system [4], breast [5], and vasculature [6]. Although various solutions for sensing x-ray phase information have been developed in the last decades [7], their widespread clinical application is still limited to prototypes [8, 9, 10]. Typical PCI systems utilize 1D or 2D gratings (grating interferometry [11, 12]) and grids to produce a periodic reference interference pattern in the detector plane. However, the translation of these PCI systems from research laboratories to clinical centers faces major obstacles because of high coherence requirements, complex optical systems for translation of phase shifts into measurable intensity variations, and phase-wrapping effects from periodic reference patterns [13]. Speckle-based x-ray phase contrast imaging (XPCI) [13, 14, 15, 16] is a recently proposed method for phase-contrast and dark-field imaging that utilizes x-ray near-field speckles generated from a random diffuser. 
The principle of XPCI is schematically shown in Fig. 1. Coherent x-rays impinge on the diffuser, randomly scatter, and mutually interfere with the incident beam to create a random intensity pattern, named the reference image. Sample-induced phase shifts cause refraction that translates into a transverse displacement of the original speckle pattern to generate the sample image. The sample image can then be compared to the reference to calculate the corresponding phase contrast signal of the object [13]. Figure 1: Principle of speckle-based phase-contrast imaging. **(A)** A random intensity pattern (red solid line) is shown in comparison to the displaced intensity pattern in the sample image (gray dashed line). **(B)** and **(C)** show a reference and sample image, respectively, and demonstrate a sub-pixel displacement of the speckle marked by the red arrow. XPCI overcomes several of the limitations of typical PCI systems as it offers a simple setup with excellent dose efficiency, only has moderate coherence requirements, does not require precise system alignment, and negates the propagation distance restrictions imposed by fractional Talbot distances [15, 17]. This is crucial for preclinical PCI systems, potentially used for small animal imaging, due to less stringent requirements for small detector pixels and long propagation distances compared to laboratory setups. The key to obtaining high-quality phase contrast images from XPCI systems is accurate tracking of the sample-induced speckle pattern modulations. Out of all speckle tracking modes [13], single-shot speckle tracking (XST), i.e., using only one reference and sample image pair, is desirable for a preclinical translation since it allows a fast and dose efficient acquisition with a stationary diffuser. Several algorithms have been successfully developed for XST. Zero-normalized cross-correlation (ZNCC) [14] and unified modulated pattern analysis (UMPA) [16, 18, 19, 20] are direct tracking algorithms based on windowed image correlation. Although both algorithms produced impressive results, the trade-off between spatial resolution and angular sensitivity requires careful selection of the window size [19]. One possible solution to overcome this limitation is the optical flow method (OF) [21]. By implicitly tracking speckles, i.e., without the use of a correlation window, measuring displacement fields can be considered an optical flow problem through geometrical-flow conservation. A study by Rouge-Labriet _et al._[22] established that of the three speckle tracking techniques, the OF method provided the best qualitative image quality and the lowest naturalness image quality evaluator score with a reduced number of sample exposures for low dose PCI with both theoretical and biomedical sample models. However, this method depends on the assumption that the sample is transparent to x-rays and utilizes a high-pass filter which can result in image artifacts and affect the quantitative accuracy. Convolutional neural networks (CNNs) have been successfully implemented for various problems in computer vision [23], focusing on classification [24], segmentation [25], and registration [26]. More recently, CNNs have been extended to the general optical flow problem, defined as the pattern of apparent motion of objects between two frames, using deep learning architectures [27, 28]. FlowNet [29] and its variants [27] use the multiscale loss function for optimization and are U-shaped with contracting and expanding paths. FlowNet2 [28], a fusion network generated by stacking different FlowNet variants, achieved superior performance compared to traditional optical flow algorithms. This architecture has been successfully utilized for displacement estimation in ultrasound elastography [27] and various applications in civil engineering [30]. 
However, compared to these applications, the sample-induced displacement for x-ray speckle tracking is typically much smaller at subpixel levels. The StrainNet architecture, designed by Boukhtache _et al._[23], can retrieve dense displacement and strain fields from optical images of an object exposed to mechanical compression. StrainNet has successfully demonstrated comparable results in retrieval performance and computing time compared to traditional algorithms. With its ability to perform subpixel displacement retrievals, StrainNet could be a promising solution for XST. In this paper, we present the CNN-based Analysis for Displacement Estimation (CADE), an extension of the StrainNet CNN algorithm to track x-ray speckles in XPCI. Intrinsic performance characteristics for CADE were quantitatively investigated using standard criteria for digital image correlation and compared to established x-ray speckle tracking algorithms. In addition, the performance of CADE for speckle-based PCI was evaluated using numerical wave-optics simulations. ## II Methods ### _Wave-optics simulation_ Numerical wave-optics simulations were performed using a previously-developed in-house Python simulation framework [31]. The simulation process relied on an iterative use of the angular spectrum method to propagate the wave-field from the source through the speckle-based imaging setup. The disturbance of the wave-field due to the presence of an object (i.e., attenuation and phase-shift) was then calculated in projection approximation. All simulations were conducted with the following configuration. A monochromatic 30 keV point source with a 10 \(\upmu\)m focal spot was simulated. The diffuser was located 1 m away from the source and was modeled as 10 layers of sandpaper sheets, each consisting of a rough aluminum oxide (\(Al_{2}O_{3}\)) surface with a 200 \(\upmu\)m backing of diethyl pyrocarbonate (\(C_{6}H_{10}O_{5}\)) [31]. The detector was located 3 m away from the source and had an effective pixel size of 12 \(\upmu\)m, utilizing a point spread function of 1/2.355 pixels. As the diffuser is simulated with different surface structures, the resulting speckle sizes represented by the full width half maximum (FWHM) of the speckle pattern autocorrelation function ranged from 22 \(\upmu\)m to 110 \(\upmu\)m, or approximately 2 to 10 pixels at the detector level. This range offered a minimum speckle visibility of 20% and a good representation of expected speckle sizes. 
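To make the propagation step concrete, the following is a minimal NumPy sketch of the angular spectrum method that such a simulation iterates between source, diffuser layers, and detector plane; the function and parameter names are illustrative assumptions, not those of the in-house framework [31].

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex scalar wave-field by a distance z (in meters)
    using the angular spectrum method: multiply the field's Fourier
    spectrum by the free-space transfer function and transform back."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)           # spatial frequencies [1/m]
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)            # shape (ny, nx), matching the field
    k = 2.0 * np.pi / wavelength
    # Longitudinal wavenumber; evanescent components are suppressed (kz -> 0).
    kz_sq = k**2 - (2.0 * np.pi * FX)**2 - (2.0 * np.pi * FY)**2
    kz = np.sqrt(np.maximum(kz_sq, 0.0))
    transfer = np.exp(1j * kz * z)
    return np.fft.ifft2(np.fft.fft2(field) * transfer)
```

In a setup like the one described above, each diffuser layer would be applied in projection approximation (a complex transmission factor encoding its attenuation and phase shift) between successive calls of such a propagator.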
### Data augmentation Data augmentation for supervised training and network architecture details are shown in Fig. 2. As described in Section II-A, wave-optics simulations were utilized to obtain 364 independent 256 x 256 pixel reference speckle images. Random piece-wise smooth deformations were applied to each reference image using one of six deformation patch sizes (4, 8, 16, 32, 64, or 128 pixels) with displacements ranging from -1 to +1 pixel. The deformation patch size indicates the distance of linear interpolation of displacements, i.e., a patch size of 8 x 8 pixels corresponded to independent 8 x 8 patches of smooth displacements in only one direction (either all positive or all negative). The random deformations were applied to each reference image to generate the corresponding sample image. Sixty and 10 independent deformations were used for each reference image to generate the network training and testing datasets, respectively. The generated displacement values follow a normal distribution centered around 0 and ranging from -1 to +1 pixels. We randomly selected 0.5% of all image pairs to utilize each of the following deformation maps: (1) identity maps (i.e., all displacements equal zero); (2) constant displacement (i.e., the same displacement value across the entire map) in x direction and an identity map in y direction; (3) vice versa of (2); and, finally, (4) constant displacements in both x and y directions. This resulted in 98% of the data having random deformations, and the remaining 2% underwent identity or constant displacement maps. The constant displacements within \(\pm\)0.15 pixels were included to improve CADE's performance at extremely small displacements. Additional processing of image pairs included noise and attenuation. First, individual Poisson noise maps were generated and applied for reference and sample images. All sample images were then randomly attenuated to mimic 50 - 100% transmission in the same manner as the deformation patches. This resulted in a training and testing dataset of 21841 and 3640 image sets, respectively. Each data set comprises a reference image, sample image, and ground truth displacement field. Figure 2: Schematic view of data augmentation and network architecture. Percentages show the proportion of data that underwent the respective processing. Examples of reference and displacement images are shown on the left (gray arrows). The feature extraction level and displacement field prediction level were both performed four times. ReLU stands for rectified linear unit. Down-sampling and up-samplings were performed at a stride of 2. ### CNN architecture and training #### II-C1 Network architecture The StrainNet-f architecture [23] adapted for XPCI (Fig. 2) is an end-to-end full-resolution network consisting of two main components. The first component extracted feature maps with successive convolutional layers. The 10 convolutional layers include 7 x 7 filters for the first, 5 x 5 filters for the second and third, and 3 x 3 filters for the remaining seven. The latter portion predicts displacement fields via five convolutional layers with 3 x 3 filters and eight transposed convolutional layers. The architecture simplified FlowNetS with four down-samplings and four up-samplings. The same loss function and levels as in FlowNetS were used. #### II-C2 CNN training The hyperparameters of the network were initially set to the original StrainNet-f configuration [23] and further fine-tuned via grid search to maintain equal or improved model convergence and faster training times. The final values of each hyperparameter are reported in Table 1. Model convergence was evaluated with training and testing endpoint error (EPE), which is the Euclidean distance between the predicted and ground truth displacement vectors normalized over all pixels: \[L_{\rm EPE}=\frac{1}{N}\sum_{x,y}\left\|\mathbf{u}_{GT}(x,y)-\mathbf{u}(x,y)\right\|_{2}, \tag{1}\] where \(N\) denotes the total number of pixels in the image, \(\mathbf{u}_{GT}(x,y)\) is the ground truth displacement of each pixel, and \(\mathbf{u}(x,y)\) is the estimated displacement of pixel \((x,y)\) [23]. 
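For reference, Eq. (1) can be evaluated with a few lines of NumPy; the sketch below assumes displacement fields stored as (H, W, 2) arrays holding \((u_x,u_y)\) per pixel, which is an assumption about data layout rather than the authors' implementation.

```python
import numpy as np

def endpoint_error(u_pred, u_gt):
    """Mean endpoint error (Eq. 1): per-pixel Euclidean distance between
    predicted and ground-truth displacement vectors, averaged over pixels.
    Both inputs are float arrays of shape (H, W, 2)."""
    return float(np.mean(np.linalg.norm(u_pred - u_gt, axis=-1)))
```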
The training was performed with four cores of an NVIDIA Tesla T4 16GB GPU at a runtime of 58 hours. An EPE of 0.050 for training and 0.113 for testing was achieved after 350 epochs. \begin{table} \begin{tabular}{|l|l|l|l|l|l|} \hline \multicolumn{6}{|l|}{**Training Hyperparameters**} \\ \hline **Bias Decay** & 0 & **Epoch Size** & 0 & **Weight Decay** & 0.0004 \\ \hline **Solver Algorithm** & Adam & **Batch Size** & 16 & **Algorithm** & StrainNet\_f \\ \hline **Div Flow** & 2 & **Learning Rate** & 0.001 & **Multiscale Weights** & [0.005, 0.01, 0.02, 0.08, 0.32] \\ \hline **Epochs** & 350 & **Momentum** & 0.9 & **Data Loading Workers** & 8 \\ \hline **Starting Epoch** & 0 & **Beta** & 0.999 & **Milestones** & [40, 80, 120, 160, 200, 240] \\ \hline \end{tabular} \end{table} Table 1: **Network hyperparameters used for training. The highlighted parameters deviated from the initial configuration. Beta corresponds to the beta parameter for the Adam solver algorithm. Div Flow represents the value by which the flow will be divided every 40 epochs to decrease the runtime.** ### State-of-the-art speckle tracking algorithms To reconstruct the (differential) phase-contrast image, the displacement vector field \(\left(\mathbf{u}(x,y)\right)\) between the sample (\(\mathbf{I_{s}}\)) and reference (\(\mathbf{I_{r}}\)) image must be determined by locally tracking the speckle pattern modulation. Two conventional speckle tracking algorithms were examined: zero-normalized cross-correlation (ZNCC) [32] and unified modulated pattern analysis (UMPA) [18, 19, 33]. #### II-D1 ZNCC Small patches of size (\(2M+1\)) \(\times\) (\(2M+1\)) from \(\mathbf{I_{r}}\) were compared against a template region in \(\mathbf{I_{s}}\). The relative transverse displacement for the central pixel (\(x_{0},y_{0}\)) of the template was determined as the shift with the highest correlation coefficient: \[\mathbf{u}(x_{0},y_{0})=\operatorname*{argmax}_{u_{x},u_{y}}\left\{\frac{\sum_{i,j}\left[\mathbf{I^{\prime}_{s}}(x_{i},y_{j})\,\mathbf{I^{\prime}_{r}}(x_{i}+u_{x},y_{j}+u_{y})\right]}{\sqrt{\sum_{i,j}\mathbf{I^{\prime}_{s}}(x_{i},y_{j})^{2}\,\sum_{i,j}\mathbf{I^{\prime}_{r}}(x_{i}+u_{x},y_{j}+u_{y})^{2}}}\right\}, \tag{2}\] where \(\mathbf{I^{\prime}_{r}}\) and \(\mathbf{I^{\prime}_{s}}\) are the normalized reference and sample images obtained by subtracting the mean value of the patch. The summation was performed over all pixels in the corresponding patch. Sub-pixel precision was obtained by Gaussian fitting to the peak in the correlation map. #### II-D2 UMPA A physical model is used to describe the influence of the sample on the speckle pattern in terms of transmission T and transverse speckle displacements (\(u_{x},u_{y}\)). All signals are extracted with a windowed least-square minimization between the model and the measured sample image \(\mathbf{I_{s}}\): \[\mathbf{u}(x_{0},y_{0})=\operatorname*{argmin}_{u_{x},u_{y}}\sum_{i,j}w\left(x_{i},y_{j}\right)\left\{I_{s}\left(x_{i},y_{j}\right)-T\left(x_{i},y_{j}\right)I_{r}\left(x_{i}+u_{x},y_{j}+u_{y}\right)\right\}^{2}, \tag{3}\] where \(w\) is a windowing function of (\(2M+1\)) \(\times\) (\(2M+1\)) pixels centered on (\(x_{0},y_{0}\)). Sub-pixel precision was achieved through a paraboloid fit of the neighborhood of the minimum. 
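For illustration, a brute-force, integer-pixel version of the ZNCC search in Eq. (2) can be sketched as follows. Window bounds are assumed to stay inside the images, and the sub-pixel Gaussian peak fit described above is omitted for brevity; the function name and defaults are ours, not the paper's.

```python
import numpy as np

def zncc_displacement(ref, sam, x0, y0, M=15, search=3):
    """Integer-pixel ZNCC tracking (Eq. 2) for the (2M+1) x (2M+1) template
    centred on (x0, y0); returns the shift (u_x, u_y) maximizing the
    zero-normalized cross-correlation within +/- search pixels."""
    tpl = sam[y0 - M:y0 + M + 1, x0 - M:x0 + M + 1].astype(float)
    tpl -= tpl.mean()                       # zero-normalize the sample patch
    best_c, best_u = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            win = ref[y0 + dy - M:y0 + dy + M + 1,
                      x0 + dx - M:x0 + dx + M + 1].astype(float)
            win -= win.mean()               # zero-normalize the reference patch
            c = (tpl * win).sum() / np.sqrt((tpl**2).sum() * (win**2).sum())
            if c > best_c:
                best_c, best_u = c, (dx, dy)
    return best_u
```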
### Performance validation and comparison Ten independent reference images were generated with speckle parameters described in Section II-B and used for validation and comparison evaluations. Different types of displacement maps were applied for each evaluation. Poisson noise and transmission of 90% were applied to all image pairs unless otherwise stated. All algorithms were run on a MacBook Pro with an Apple M1 chip. #### II-E1 Speckle tracking accuracy Before adding noise and attenuation, each reference image was transformed with a constant displacement map ranging from 0 to 1 pixel at steps of 0.1 pixels. Bias and root mean squared error (RMSE) of the retrieved displacement fields were calculated as [34, 35]: \[\mathrm{Bias}=\frac{1}{N}\sum_{x,y}\left(\mathbf{u}(x,y)-\mathbf{u}_{GT}(x,y)\right), \tag{4}\] \[\mathrm{RMSE}=\sqrt{\frac{1}{N}\sum_{x,y}\left(\mathbf{u}(x,y)-\mathbf{u}_{GT}(x,y)\right)^{2}}. \tag{5}\] #### II-E2 Spatial resolution The spatial resolution of each algorithm was evaluated using a star displacement map, which consisted of a unidirectional sinusoidal displacement with linearly increasing frequency toward the left. A key characteristic of the pattern is the constant amplitude of 0.5 pixel across the horizontal symmetry axis in the center of the image. Limiting spatial resolution for each algorithm was defined by the frequency at a bias of 10% [23]. #### II-E3 Noise, window size dependency, and computational time Reference images were deformed by a gradient map ranging from 0 to +1 pixel displacement. The effect of noise was then evaluated by applying seven different noise levels ranging from a loss of 0-6% signal-to-noise ratio (SNR) in the same manner as in Section II-B. RMSE and spatial resolution were calculated for UMPA and ZNCC with window sizes between 10 and 50 pixels and compared to CADE to examine window size dependency. Finally, computational times were evaluated using 10 image pairs of 256 x 256, 512 x 512, and 768 x 768 pixels. #### II-E4 Method validation The wave-optics simulation was used to validate our speckle tracking method on imaging data obtained from an XPCI acquisition with the setup described in Section II-A. The simulated polymethyl methacrylate object consisted of a 1500 \(\upmu\)m wide rectangular base with a thickness profile modulated in the x direction by a sine wave. Hence, the thickness \(t(x,y)\) of the sample is represented by \[t(x,y)=b+A\cdot\sin(wx), \tag{6}\] where \(b=1000\)\(\upmu\)m is the base thickness of the sample and \(A=800\)\(\upmu\)m and \(w=1.33\times 10^{-3}\)\(\upmu\)m\({}^{-1}\) are the amplitude and frequency of sinusoidal thickness modulation, respectively. The gradient of the phase shift \(\Phi\) introduced by the sample is proportional to the refraction angle \(\alpha=(\alpha_{x},\alpha_{y})\), where \(x\) and \(y\) are the transverse coordinates orthogonal to the optical axis. This can in turn be geometrically related to the speckle displacement vector \(\mathbf{u}\left(x,y\right)=\left(u_{x}\left(x,y\right),u_{y}\left(x,y\right)\right)\) (see Fig. 1) in small-angle approximation:
\[\left(\frac{\partial\Phi}{\partial x},\frac{\partial\Phi}{\partial y}\right)=\frac{2\pi}{\lambda}\big(\alpha_{x},\alpha_{y}\big)=\frac{2\pi}{\lambda}\big(u_{x},u_{y}\big)\frac{p}{d}, \tag{7}\] where \(\lambda\) is the x-ray wavelength, \(p\) is the detector pixel pitch, and \(d\) is the sample-detector distance [36]. The phase shift can also be calculated from the sample properties as \[\mathbf{\Phi}(x,y)=-\frac{2\pi\delta}{\lambda}t(x,y) \tag{8}\] using the refractive index decrement \(\delta\) of the sample's material. Hence, the ground truth refraction angle \(\boldsymbol{\alpha}_{x}(x,y)\) for this sample is given by \[\boldsymbol{\alpha}_{x}(x,y)=\frac{\lambda}{2\pi}\frac{\partial\Phi(x,y)}{\partial x}=-\delta\frac{\partial t(x,y)}{\partial x}=-\delta Aw\cdot\cos(wx). \tag{9}\]
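A small helper translating tracked displacements into refraction angles and phase gradients via Eq. (7) might look as follows; the names and unit conventions (displacements in pixels, lengths in meters, angles in radians) are illustrative assumptions.

```python
import numpy as np

def displacement_to_phase_gradient(u_x, u_y, wavelength, pixel_pitch, dist):
    """Convert tracked speckle displacements (in pixels) into refraction
    angles and transverse phase gradients (Eq. 7), in small-angle
    approximation: alpha = u * p / d and grad(Phi) = (2*pi/lambda) * alpha."""
    alpha_x = u_x * pixel_pitch / dist      # refraction angle [rad]
    alpha_y = u_y * pixel_pitch / dist
    k = 2.0 * np.pi / wavelength            # wavenumber [rad/m]
    dphi_dx = k * alpha_x                   # phase gradient [rad/m]
    dphi_dy = k * alpha_y
    return alpha_x, alpha_y, dphi_dx, dphi_dy
```

Integrating such a gradient field (e.g., by Fourier integration) would then recover the phase map \(\Phi(x,y)\) itself.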
## III Results The relationship between computational time and image size was also evaluated. CADE resulted in shorter computational times, and its advantage increased substantially with larger image sizes. At an image size of around \(6\times 10^{5}\) pixels, CADE had an average runtime of 18 s, whereas the runtimes for ZNCC and UMPA were three (53 s) and ten times (181 s) longer, respectively. Displacement maps and reconstructed refraction angle for the sine wave sample are shown in Fig. 6. Qualitatively, CADE-generated displacement maps appeared less noisy and in better agreement with the ground truth than for conventional algorithms. In Fig. 6B, CADE achieved improved accuracy compared to ZNCC and UMPA, particularly at the peaks located at \(\pm 2\) urad. The displacement map RMSE was 4.10, 4.37, and \(4.30\times 10^{-2}\) pixels for CADE, UMPA, and ZNCC, respectively. Fig. 4: Evaluation of the speckle tracking spatial resolution with star pattern displacement maps. **(A)** High frequency area of the star pattern and corresponding displacement maps obtained from the speckle tracking algorithms. Scalebar represents 0.1 mm. **(B)** Spatial resolution of CADE and conventional algorithms. True (thin dotted line) and moving averaged (thick solid line) displacement bias of each method is shown versus the spatial wavelength of the star pattern. Spatial resolution is defined by 10% bias (black dotted line). Fig. 5: Comparison of displacement RMSE **(A)** and spatial resolution **(B)** of conventional algorithms for increasing correlation window size. The CADE results do not depend on window size. Mean (marker) and standard deviation (error bar) are obtained from evaluating ten image pairs for each window size. CADE achieved improved RMSE for all window sizes and improved spatial resolution for window sizes greater than 15 pixels. ## IV Discussion XPCI presents a cost-effective method with moderate coherence requirements for enhanced sensitivity compared to conventional x-ray systems. However, accurately tracking sample-induced speckle pattern modulations is crucial for obtaining high-quality phase contrast images. Although several methods have been proposed for this task, they often necessitate a trade-off between tracking accuracy and spatial resolution or rely on several assumptions. To overcome these limitations, we present CADE, a novel windowless CNN-based speckle tracking algorithm, and compare and validate its performance against conventional algorithms. The key findings from this study are: (1) successful application of CADE for speckle tracking, (2) improved tracking performance compared to conventional algorithms, and (3) greatly reduced computational time. Most importantly, CADE achieved superior performance particularly at high refraction angles when validated on a simulated object. Compared to CADE, current state-of-the-art algorithms like UMPA and ZNCC are window-based algorithms. Unlike these extrinsic approaches, or iterative pixel-wise algorithms, intrinsic speckle tracking algorithms rely on solving a partial differential equation formulated at the whole-image level rather than explicitly tracking individual speckles. Recent examples include the geometric-flow approach [21] and multimodal intrinsic speckle-tracking [37], which combines the geometric-flow formalism with a Fokker-Planck-type generalization. A recent publication by De Marco _et al._ [33] presents an enhanced implementation of UMPA, characterized by greatly improved computation efficiency, the capability of multithreading, and the reduction of estimation bias. As we implemented an older version of UMPA, the performance of this updated UMPA algorithm is unknown, but we believe that the general trend is similar to what we have presented in this paper. Finally, a machine learning method for speckle tracking with model validation on experimental data has been proposed lately [38]. While both algorithms utilized simulated random displacement maps for training and offered improved runtime and image quality, there were several distinct differences: (a) speckle patterns were generated with a coded binary mask versus our random sandpaper model; (b) they used a basic plane-wave model while we utilized a divergent-beam geometry for image propagation; (c) noise was calculated with either a random binary noise image or as Gaussian noise whereas we used Poisson noise; and, (d) their model was based on the SPINNet architecture while ours is adapted from the StrainNet-f architecture. We believe that our approach is a more realistic solution for speckle tracking as it utilizes more representative training and testing data from the sandpaper model and wave-optics simulation. On the other hand, further evaluations are needed to assess and compare the performances. Although CADE demonstrates improved speckle tracking performance and overcomes several of the issues of conventional algorithms, it still suffers from limitations. For example, CADE exhibits a slight decrease in accuracy for increasing displacements, which may be explained by the training data distribution (centered around zero and ranging from -1 to +1 pixel) due to the patch-based deformation method. However, as the intended application of this study focuses on the retrieval of subpixel displacements, CADE provides superior accuracy and spatial resolution compared to cross-correlation methods. The main limitation of our study is the lack of experimental image data for both training and validation purposes. Speckle patterns can be very well characterized by their statistical properties and thus simulation and numerical data can be reliably used to model a realistic diffuser setup. Although we were able to successfully validate CADE on a simulated geometric sample, training and testing with real image data with more complex samples would be ideal as this would produce the most realistic model but is unrealistic due to the large amount of data required by model training (i.e., currently requiring datasets of 25481 sets of images). 
In conclusion, this study successfully implemented and validated CADE, a windowless CNN-based speckle tracking method, which demonstrated superior performance, greatly decreased processing times, and robustness to noise. Furthermore, due to its ability to process much higher volumes of data with equal or better tracking accuracy, CADE brings the development and application of single-shot, low-dose XPCI to small-animal imaging one step further.
2306.07788
Quantum Entanglement in Top Quark Pair Production
Top quarks, the most massive particles in the standard model, attract considerable attention since they decay before hadronizing. This presents physicists with a unique opportunity to directly investigate their properties. In this letter, we expand upon the work of G. Iskander, J. Pan, M. Tyler, C. Weber and O. K. Baker to demonstrate that even with the most massive fundamental particle, we see the same manifestation of entanglement observed in both electroweak and electromagnetic interactions. We propose that the thermal component resulting from protons colliding into two top quarks emerges from entanglement within the two-proton wave function. The presence of entanglement implies the coexistence of both thermal and hard scattering components in the transverse momentum distribution. We use published ATLAS and CMS results to show that the data exhibits the expected behavior.
Mira Varma, O. K. Baker
2023-06-13T14:08:47Z
http://arxiv.org/abs/2306.07788v3
# Quantum Entanglement in Top Quark Pair Production ###### Abstract Top quarks, the most massive particles in the standard model, attract considerable attention since they decay before hadronizing. This presents physicists with a unique opportunity to directly investigate their properties. In this letter, we expand upon the work of G. Iskander, J. Pan, M. Tyler, C. Weber and O. K. Baker to demonstrate that even with the most massive fundamental particle, we see the same manifestation of entanglement observed in both electroweak and electromagnetic interactions. We propose that the thermal component resulting from protons colliding into two top quarks emerges from entanglement within the two-proton wave function. The presence of entanglement implies the coexistence of both thermal and hard scattering components in the transverse momentum distribution. We use published ATLAS and CMS results to show that the data exhibits the expected behavior. _Key words_: Quantum entanglement, Entanglement entropy, Top physics, Heavy quark production ## 1 Introduction Prior literature has established that the transverse momentum distribution of hadrons is best described by fitting the sum of an exponential and a power law. (See Refs. [1, 2] for more detail). The power law portion of the fit, which represents hard scattering, is well understood: it arises from the sizable momentum transfer between the quarks and gluons [1]. The thermal behavior of the transverse momentum distribution, on the other hand, remains a mystery in particle physics. There have been several competing ideas about why this behavior is present [3]-[10]. A common belief is that thermalization arises through re-scattering after nuclei collide [11]. This explanation is limited: it cannot explain the origin of thermalization in proton-proton (\(pp\)) collisions. A universal explanation for the behavior of the transverse momentum distribution is that it is due to entanglement between parts of the wave functions of the colliding particles. This idea has been studied for several interactions. G. Iskander et al. showed that for weak interactions, specifically neutrino scattering, there is entanglement between the probed and unprobed regions of the nucleon in the collision [12]. This theory has also been studied for electroweak processes, namely, deep inelastic scattering (DIS). K. Zhang et al. calculated the von Neumann entropy (interpreted as the entanglement entropy) of the DIS system, which they proposed was caused by entanglement between the probed and unprobed regions of the proton [13]. As discussed in Ref. [14], when two protons collide, the entire system undergoes a "quench" due to the sudden presence of a collision and a spectator region. The Hamiltonian is evolved, meaning \(H=H_{0}\to H_{0}+V(t)\), where \(V(t)\) is the effect of a pulse of the color field. The uncertainty principle suggests that the momentum transfer of the collision, \(Q\), and the proper time (time measured in the particle's rest frame), \(\tau\), are related by \(\tau\sim 1/Q\)[14]. If we approximate a \(pp\) collision as a short pulse of a (chromo) electric field, the effective temperature parameter (arising from thermalization) satisfies the following relation [14]: \[T_{th}\simeq(2\pi\tau)^{-1}\simeq\frac{Q}{2\pi}. \tag{1}\] In Eq. 1, \(T_{th}\) is a parameter of the thermal component of the transverse momentum distribution [14]. (See Refs. [15, 16, 1] for more detail). 
In this letter, we extend this idea to the most massive known particle in the standard model, the top quark. The top quark is particularly interesting due to its short lifetime (i.e. top quarks decay before hadronizing). We propose that when two protons collide, the thermal part of the transverse momentum distribution is caused by entanglement in the wave function of the proton-proton system. This entanglement is between a collision region, where the two protons overlap, which we call \(A\), and a region where the two protons do not overlap, which we call \(B\). In other words, we are proposing that when entanglement is present, the transverse momentum distribution has both thermal and hard scattering components. Naturally, therefore, where there is no entanglement, the thermal component is absent. ## 2 Background In the general quantum information formalism, a pure state of two quantum systems, \(A\) and \(B\), is denoted as, \[\rho_{AB}=|\psi_{AB}\rangle\langle\psi_{AB}|, \tag{2}\] where \(\rho_{AB}\) is the density matrix of the combined system. Physically, when a system is in a pure state, there is complete information about the wave function. On the other hand, if a system is in a mixed quantum state, there is incomplete information about the wave function at every point in time. A mixed state is a weighted sum of pure states. A general pure state of the combined system can be expanded as: \[|\Psi_{AB}\rangle=\sum_{i,j}c_{ij}\,|\psi_{i}^{A}\rangle\otimes|\psi_{j}^{B}\rangle. \tag{3}\] The corresponding density matrix is: \[\rho_{AB}=|\Psi_{AB}\rangle\langle\Psi_{AB}|=\sum_{i,j,k,l}c_{ij}c_{kl}^{*}\,|\psi_{i}^{A}\rangle\langle\psi_{k}^{A}|\otimes|\psi_{j}^{B}\rangle\langle\psi_{l}^{B}|. \tag{4}\] The reduced density matrix of either one of the subsystems is needed in order to calculate the entanglement entropy. We choose to use subsystem \(A\) because it is more interesting. The reduced density matrix of subsystem \(A\) can be written as: \[\rho_{A}=\mbox{Tr}_{B}(\rho_{AB}). \tag{5}\] Once we have \(\rho_{A}\), the entanglement entropy of subsystem \(A\) is given by: \[S_{A}=-\mbox{Tr}(\rho_{A}\ln\rho_{A}). \tag{6}\] If \(\mbox{Tr}(\rho_{A}^{2})=1\), we have a pure state, and there is no entanglement present in \(|\Psi_{AB}\rangle\). If \(\mbox{Tr}(\rho_{A}^{2})<1\), we have a mixed state, and \(|\Psi_{AB}\rangle\) is entangled [17]. When two protons collide, both protons are initially in a pure state (see Fig. 1). Once the protons have collided, two regions are present: an overlap (collision) region, \(A\), and a non-overlap (spectator) region, \(B\). In Fig. 1, \(A\cup B\) represents a pure state, since we are considering the proton-proton system as a whole. However, when considering \(A\) or \(B\) separately, they are each in a mixed state, and we expect entanglement to be present. Figure 1: Diagram depicting a proton-proton collision. (Top) Both protons before they collide. (Bottom) The two protons during the collision, where region \(A\) is the collision region (region of overlap) and region \(B\) is the spectator region (non-overlap region). Regions \(A\) and \(B\) are entangled.
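To make the formalism of Eqs. (2)-(6) concrete, the entanglement entropy of any pure bipartite state can be computed numerically via a partial trace. The sketch below is illustrative only: a two-qubit Bell state stands in for the proton-proton wave function, and the helper name is ours.

```python
import numpy as np

def entanglement_entropy(psi, dim_a, dim_b):
    """Von Neumann entropy S_A = -Tr(rho_A ln rho_A) (Eqs. 5-6) of subsystem A,
    for a normalized pure bipartite state vector psi of length dim_a * dim_b."""
    c = psi.reshape(dim_a, dim_b)          # coefficient matrix c_ij of Eq. (3)
    rho_a = c @ c.conj().T                 # partial trace over B: rho_A = Tr_B(rho_AB)
    evals = np.linalg.eigvalsh(rho_a)
    evals = evals[evals > 1e-12]           # drop numerical zeros (0 ln 0 -> 0)
    return float(-np.sum(evals * np.log(evals)))

# A maximally entangled Bell state: S_A = ln 2, and Tr(rho_A^2) = 1/2 < 1.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)
print(entanglement_entropy(bell, 2, 2))    # ~0.693
```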
## 3 Results and Analysis We begin our study by using the transverse momentum distribution of \(t\bar{t}\) pair production in the semi-leptonic decay channel, focusing on the hadronic decay products, which is described by: \[t\bar{t}\to W^{+}b\,W^{-}\bar{b}\to q\bar{q}b+\ell\bar{\nu}\bar{b}+\mbox{jets}. \tag{7}\] The process described in Eq. 7 is depicted in Fig. 2. Figure 2: Top anti-top quark decay in the semi-leptonic channel. The resulting W bosons can decay hadronically, resulting in a quark-antiquark pair, or leptonically, resulting in a lepton and a neutrino [19]. Throughout this letter, the center of mass (\(pp\) collision) energy is \(\sqrt{s}=13\) TeV. The following relations for the thermal and hard scattering components of the transverse momentum distribution (Eqs. 8 and 11) were originally proposed in Ref. [18] and have been used in other studies [1, 12, 20, 14, 21]. The thermal component of the transverse momentum distribution is given by the following, \[\frac{1}{p_{T}}\frac{d\sigma}{dp_{T}}=A_{\rm th}\exp\left(-\frac{m_{T}}{T_{\rm th}}\right), \tag{8}\] where \(p_{T}\) is the transverse momentum of the system, \(A_{\rm th}\) is a fitting parameter, \(m_{T}\) is the transverse mass of the system, and \(T_{\rm th}\) is the effective (thermal) temperature parameter. The transverse mass is calculated using the following relation, \[m_{T}^{2}=m^{2}+p_{T}^{2}, \tag{9}\] where \(m\) is the mass of the \(t\bar{t}\) system, i.e., the combined mass of the top and anti-top quarks, \(\approx 2\times(173\,{\rm GeV/c^{2}})\). The effective temperature parameter, which was extracted in Ref. [1], is given by, \[T_{\rm th}=0.098\left(\frac{s}{s_{0}}\right)^{0.06}\,{\rm GeV}, \tag{10}\] where \(\sqrt{s_{0}}\) is a normalization constant equal to \(\sqrt{s_{0}}=1\) GeV and \(\sqrt{s}\) is the proton-proton collision energy (which is \(\sqrt{s}=13\) TeV). The hard scattering component of the transverse momentum distribution is given by: \[\frac{1}{p_{T}}\frac{d\sigma}{dp_{T}}=\frac{A_{\rm hard}}{\left(1+\frac{m_{T}^{2}}{T_{hard}^{2}n}\right)^{n}}. \tag{11}\] In Eq. 11, \(A_{hard}\) is a fitting parameter, \(T_{hard}\) is the hard scale parameter, and \(n\) is a scaling factor obtained from the power law fit. The values \(m_{T}\), \(\sqrt{s_{0}}\) and \(\sqrt{s}\) remain unchanged from their previous definitions. The hard scale parameter, which was determined in Ref. [1], is defined as: \[T_{\rm hard}=0.409\left(\frac{s}{s_{0}}\right)^{0.06}\,{\rm GeV}. \tag{12}\] The CERN ROOT fitting program and the SciPy curve_fit function were used to fit Eqs. 8 and 11 to \(t\bar{t}\) decay data arising from proton-proton collisions at the Large Hadron Collider. The transverse momentum distribution of \(t\bar{t}\) production for the hadronic decay products in the semi-leptonic decay channel (ATLAS), with an integrated luminosity of 3.2 fb\({}^{-1}\), is depicted in Fig. 3. Figure 3: Transverse momentum distribution of top-antitop quark pair production from ATLAS data, with a center of mass energy of 13 TeV and a luminosity of 3.2 fb\({}^{-1}\). The reduced chi-squared fit value is \(\chi^{2}/{\rm ndf}\approx 24.7/15=1.6\). Data is taken from [23]. As we can see, both a thermal component (red) and a hard scattering component (green) are needed to properly fit the data, which suggests the presence of entanglement. The sum of the two fits, which has a reduced chi-squared value of \(\chi^{2}/{\rm ndf}\approx 1.6\), is the blue curve. The error bars are smaller than the size of the data points. Fig. 4 depicts an analogous fit using CMS data, which yielded a reduced chi-squared value of \(\chi^{2}/{\rm ndf}\approx 1.3\). Figure 4: Transverse momentum distribution of top-antitop quark pair production from CMS data, with a center of mass energy of 13 TeV and a luminosity of 35.8 fb\({}^{-1}\). The reduced chi-squared fit value is \(\chi^{2}/{\rm ndf}\approx 10.3/8=1.3\). Data is taken from [23]. This increase in statistical precision was expected, since the integrated luminosity of the CMS data was 35.8 fb\({}^{-1}\), an order of magnitude larger than that of the ATLAS data. As the integrated luminosity increases, the number of recorded events increases as well, which results in a more precise transverse momentum distribution. Again, in Fig. 4, we can see the necessity of having both a thermal and a hard scattering component in the fit. 
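For reference, the functional forms of Eqs. (8)-(12), and the ratio of Eq. (13) introduced below, can be transcribed for SciPy's curve_fit as in the sketch that follows. The data arrays (pt_data, y_data, y_err) are placeholders, and the choice of free parameters (the normalizations and \(n\), with the temperature parameters fixed by Eqs. (10) and (12)) is an illustrative assumption rather than a description of the exact fit configuration used in this letter.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.integrate import quad

M_TT = 2 * 173.0                               # t-tbar system mass [GeV/c^2]
SQRT_S, SQRT_S0 = 13000.0, 1.0                 # collision and normalization energies [GeV]
SCALE = (SQRT_S / SQRT_S0) ** 0.12             # (s/s0)^0.06 written in terms of sqrt(s)
T_TH, T_HARD = 0.098 * SCALE, 0.409 * SCALE    # Eqs. (10) and (12) [GeV]

def m_t(pt):
    """Transverse mass, Eq. (9)."""
    return np.sqrt(M_TT**2 + pt**2)

def thermal(pt, a_th):
    """Thermal (exponential) component, Eq. (8)."""
    return a_th * np.exp(-m_t(pt) / T_TH)

def hard(pt, a_hard, n):
    """Hard scattering (power law) component, Eq. (11)."""
    return a_hard / (1.0 + m_t(pt)**2 / (T_HARD**2 * n)) ** n

def model(pt, a_th, a_hard, n):
    return thermal(pt, a_th) + hard(pt, a_hard, n)

# pt_data, y_data, y_err would hold the binned (1/pT) dsigma/dpT spectrum:
# popt, _ = curve_fit(model, pt_data, y_data, sigma=y_err, p0=[1.0, 1.0, 3.0])
# a_th, a_hard, n = popt
# i_e = quad(lambda p: thermal(p, a_th), pt_data.min(), pt_data.max())[0]
# i_p = quad(lambda p: hard(p, a_hard, n), pt_data.min(), pt_data.max())[0]
# r = i_p / (i_e + i_p)                        # Eq. (13)
```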
Fig. 5 depicts the transverse momentum distribution of the additional leading jet. Figure 5: Transverse momentum distribution of top-antitop quark pair production from ATLAS data, with respect to the additional leading jet. Center of mass energy is 13 TeV and luminosity is 139 fb\({}^{-1}\). The reduced chi-squared fit value is \(\chi^{2}/{\rm ndf}\approx 17.0/13=1.3\). Data is taken from [24]. Since we cannot properly keep track of the jets, i.e., we lack complete information about their behavior, we cannot have set spectator and collision regions. Therefore, when studying the transverse momentum distribution of one of the additional leading jets, we would expect no entanglement due to the lack of information. As we can see in Fig. 5, only the hard scattering component is needed to fit the data, implying the absence of entanglement, as predicted. One can quantify the presence of a thermal component in the transverse momentum distribution by calculating the ratio between the area under the curve (integral) of the hard scattering component (Eq. 11) and the area under the curve (integral) of the sum of the fits (Eq. 8 + Eq. 11). This ratio, \(R\), is defined as, \[R=\frac{I_{p}}{I_{e}+I_{p}}, \tag{13}\] where \(I_{p}\) is the area under the hard scattering (power law) portion of the curve and \(I_{e}\) is the area under the thermal (exponential) part of the curve. If there is no thermal component to the fit, \(I_{e}=0\). When ATLAS data was used, \(R\) was calculated to be 0.19 \(\pm\) 0.03. Using CMS data yielded a slightly different \(R\), which was 0.16 \(\pm\) 0.03. When we examined the transverse momentum with respect to the additional leading jet of the system, \(R\) was found to equal one. This is exactly what we expected, as there was no entanglement present in this case. The calculated \(R\) values are consistent with the ratios computed for other \(pp\) collisions, as well as the \(R\) values for charged weak interactions [12], [14], [21]. For the process given by \(\bar{\nu}_{\mu}+{}^{12}\)C \(\rightarrow\mu^{+}+\pi^{-}+{}^{12}\)C, no entanglement was expected since the event was diffractive, which implies that the nucleus as a whole was probed. Therefore, there were no identifiable collision and spectator regions which could be entangled with one another. This parallels our result for the transverse momentum distribution of the additional leading jet, since in this case, distinguishable collision and spectator regions were also absent. Table 1 summarizes the results from previous literature as well as our new results. ## 4 Conclusion In this letter, we have extended upon the ideas in Refs. [1, 12, 20, 14, 21] to show that even in \(t\bar{t}\) collisions, the thermal component of the transverse momentum distribution can be attributed to entanglement between different parts of the wave functions of the colliding particles (in this case, protons). In Ref. [25], Duan discusses this idea further, introducing a term called "entropy of ignorance." 
In his work, Duan agrees that in proton-proton collisions, there is entanglement between the collision and spectator regions of the proton. Since an experiment can only measure the collision region, we lack information about the spectator region. This lack of information is called the "entropy of ignorance." Studies of entanglement in \(t\bar{t}\) collisions can also be used to investigate possibilities of physics beyond the standard model. In Ref. [26], a term called quantum discord is discussed, which is a fundamental quantity that measures the "quantumness of correlations" [27]. If the quantum discord is asymmetric, this can hint at the presence of CP violation. It would be interesting to apply these ideas to new experimental measurements of \(t\bar{t}\) pair production or to other types of particle collisions. ## Acknowledgements The authors gratefully acknowledge funding support from the Department of Energy Office of Science Award DE-FG02-92ER40704. \begin{table} \begin{tabular}{|c|c|c|} \hline \(R\) & Process & Reference \\ \hline 0.16 \(\pm\) 0.05 & \(pp\rightarrow\) charged hadrons & [14], [21] \\ 0.15 \(\pm\) 0.05 & \(pp\rightarrow\) H \(\rightarrow\gamma\gamma\) & [14], [21] \\ 0.23 \(\pm\) 0.05 & \(pp\rightarrow\) H \(\to 4l(e,\mu)\) & [14], [21] \\ 1.00 \(\pm\) 0.02 & \(pp(\gamma\gamma)\rightarrow(\mu\mu)\)X’X” & [14], [21] \\ 0.13 \(\pm\) 0.03 & \(\bar{\nu_{\mu}}+N\rightarrow\mu^{+}+\pi^{0}+X\) & [12] \\ 1.00 \(\pm\) 0.05 & \(\bar{\nu_{\mu}}+{}^{12}\)C \(\rightarrow\mu^{+}+\pi^{-}+{}^{12}\)C & [12] \\ 0.19 \(\pm\) 0.03 & \(pp\to t\bar{t}\to WbWb\) (ATLAS) & current work \\ 0.16 \(\pm\) 0.03 & \(pp\to t\bar{t}\to WbWb\) (CMS) & current work \\ 1.00 \(\pm\) 0.05 & \(pp\to t\bar{t}\to WbWb\rightarrow\) jets (ATLAS) & current work \\ \hline \end{tabular} \end{table} Table 1: \(R\) values from prior studies and our current work.
2310.02116
Coarse-to-Fine Concept Bottleneck Models
Deep learning algorithms have recently gained significant attention due to their impressive performance. However, their high complexity and un-interpretable mode of operation hinder their confident deployment in real-world safety-critical tasks. This work targets ante hoc interpretability, and specifically Concept Bottleneck Models (CBMs). Our goal is to design a framework that admits a highly interpretable decision making process with respect to human understandable concepts, on two levels of granularity. To this end, we propose a novel two-level concept discovery formulation leveraging: (i) recent advances in vision-language models, and (ii) an innovative formulation for coarse-to-fine concept selection via data-driven and sparsity-inducing Bayesian arguments. Within this framework, concept information does not solely rely on the similarity between the whole image and general unstructured concepts; instead, we introduce the notion of concept hierarchy to uncover and exploit more granular concept information residing in patch-specific regions of the image scene. As we experimentally show, the proposed construction not only outperforms recent CBM approaches, but also yields a principled framework towards interpretability.
Konstantinos P. Panousis, Dino Ienco, Diego Marcos
2023-10-03T14:57:31Z
http://arxiv.org/abs/2310.02116v2
# Hierarchical Concept Discovery Models: ###### Abstract Deep Learning algorithms have recently gained significant attention due to their impressive performance. However, their high complexity and un-interpretable mode of operation hinder their confident deployment in real-world safety-critical tasks. This work targets _ante hoc_ interpretability, and specifically Concept Bottleneck Models (CBMs). Our goal is to design a framework that admits a highly interpretable decision making process with respect to human understandable concepts, on _multiple levels of granularity_. To this end, we propose a novel hierarchical concept discovery formulation leveraging: (i) recent advances in image-text models, and (ii) an innovative formulation for _multi-level concept selection_ via data-driven and sparsity inducing Bayesian arguments. Within this framework, concept information does not solely rely on the similarity between the _whole_ image and general _unstructured_ concepts; instead, we introduce the notion of _concept hierarchy_ to uncover and exploit more granular concept information residing in _patch-specific_ regions of the image scene. As we experimentally show, the proposed construction not only outperforms recent CBM approaches, but also yields a principled framework towards interpretability. ## 1 Introduction The recent advent of multimodal models has greatly popularized the deployment of Deep Learning approaches to a variety of tasks and applications. However, in most cases, deep architectures are treated in an alarming _black-box_ manner: given an input, they produce a particular prediction, with their mode of operation and complexity preventing any potential investigation of their decision-making process. This property not only raises serious questions concerning their deployment in safety-critical applications, but at the same time it could actively preclude their adoption in settings that could otherwise benefit societal advances, e.g., medical applications. This conspicuous _shortcoming_ of modern architectures has fortunately gained a lot of attention from the research community in recent years, expediting the design of novel frameworks towards Deep Neural Network (DNN) _interpretability_. Within this frame of reference, there exist two core approaches: _ante-_ and _post-_ hoc. The latter aims to provide _explanations_ to conventional pretrained models, e.g., Network Dissection (Bau et al., 2017), while the former aims to devise _inherently_ interpretable models. In the context of ante-hoc methods, Concept Bottleneck Models (CBMs) constitute one of the best-known approaches; these comprise: (i) an intermediate _Concept Bottleneck Layer_ (CBL), a layer whose neurons are tied to human understandable _concepts_, e.g., textual descriptions, followed by (ii) a linear decision layer. Thus, the final decision constitutes a linear combination of the CBL's _concepts_, leading to a more interpretable decision mechanism. However, typical CBM approaches are accompanied by four significant drawbacks: (i) they commonly require hand-annotated concepts for training and inference, (ii) they usually exhibit lower performance compared to their non-interpretable counterparts, (iii) their interpretability is substantially impaired due to the sheer amount of considered concepts, and (iv) they are not suited for tasks that require greater granularity. 
The first drawback has been recently addressed by incorporating image-text models in the CBM pipeline; instead of relying on a fixed concept set, any text can be projected in the image-text embedding space and compared with the image. At the same time, mechanisms to restore performance have also been proposed, e.g., residual fitting (Yuksekgonul et al., 2022). The remaining two limitations, however, still pose a significant research challenge. Indeed, CBMs usually rely on a large amount of concepts, usually proportional to the number of classes for the given task; with more complex datasets, thousands of concepts may be considered. Evidently, this renders the investigation of the decision making tasks an _arduous_ and _unintuitive_ process. In this context, some works aim to reduce the amount of considered concepts by imposing sparsity constraints upon concept activation. Commonly, post-hoc class-wise sparsity methods are considered (Wong et al., 2021; Oikarinen et al., 2023); however, these tend to restrict the number of concepts on a _per-class_ basis, enforcing _ad hoc_ application-specific sparsity/performance thresholds, greatly limiting the flexibility of concept activation for each example. Recently, a data-driven per-example discovery mechanism has been proposed in Panousis et al. (2023); this leverages binary indicators, founded upon Variational Bayesian arguments, that explicitly denote the relevance of each concept on a per-example basis. This allows for a greater flexibility, since each example can activate a number of concepts that have been deemed essential to achieve the downstream task. Even though these approaches aim to address the problem of concept over-abundance, they do not consider ways to emphasize finer concept information that may be present in a given image; they still exclusively target similarity between concepts and the _whole image_. In this setting, localized, low-level concepts (e.g., object shape or texture) are predicted from a representation of the whole image, potentially leading to the undesirable use of top-down relations. For instance, the model detects some high-level concept (e.g., elephant), resulting in associated lower-level concept activations (e.g., tusks, wrinkled skin) that may not even actually be visible. This can further lead to significant concept omission, i.e., information potentially crucial for tasks that require greater granularity, e.g., fine-grained part discovery, or even cases where the input is susceptible to multiple interpretations. Drawing inspiration from this inadequacy of CBM formulations, we introduce a novel multi-level paradigm that allows for discovering and capturing both _high_ and _low_ level concept information. We achieve this objective by: (i) leveraging recent CBM advances, namely Concept Discovery Models (CDMs), (ii) devising an end-to-end trainable hierarchical construction; in this setting, we exploit both the whole image, as well as information residing in individual isolated regions of the image, i.e., specific patches, to achieve the downstream task. These levels of hierarchy are linked together by intuitive and principled arguments, allowing for information and context sharing between them, paving the way towards more interpretable models. We dub our approach _Concept Pyramid Models_ (CPMs); in principle, our framework allows for arbitrarily deep hierarchies using different representations, e.g., super-pixels. 
Here, we focus on the two-level setting, as a proof of concept for the potency of the proposed framework. Our contributions can be summarized as follows: * We introduce a novel interpretable hierarchical model that allows for multi-level concept discovery, exploiting finer details residing in patch-specific regions of an image. * We propose a novel way of assessing the interpretation capacity of our model based on the Jaccard index between ground truth concepts and learned data-driven binary indicators. * We perform a thorough quantitative and qualitative analysis. We experimentally show that CPMs outperform other SOTA approaches classification-wise, while substantially improving interpretation capacity. ## 2 Related Work CBMs decompose the final task of prediction into multiple concept detection tasks via the composition of two functions, either: (i) by detecting concept presence probabilities given an image (Lampert et al., 2009; Koh et al., 2020), or (ii) by making sure the model's internal representation is aligned with the concepts (Chen et al., 2020); this allows for a richer evaluation of the model's reasoning. Early works on concept-based models (Mahajan et al., 2011) were severely limited by requiring an extensive hand-annotated dataset comprising all the used concepts. The appearance of image-text models, chiefly CLIP (Radford et al., 2021), has mitigated this issue, allowing to easily make use of thousands of concepts, followed by a linear operator on the concept presence probabilities to solve the downstream task (Oikarinen et al., 2023; Yang et al., 2023). However, this generally means that all concepts may simultaneously contribute to a given prediction. With the number of concepts ranging from the 100s to the 1000s, this can severely undermine the sought-after interpretability. This has led to methods that seek also a sparse concept representation, either by design (Marcos et al., 2020) or data-driven (Panousis et al., 2023), which is the approach we follow in this work. ## 3 Concept Pyramid Models Let us denote by \(\mathcal{D}=\{\mathbf{X}_{n},\hat{\mathbf{y}}_{n}\}_{n=1}^{N}\), a dataset comprising \(N\) images, where \(\mathbf{X}_{n}\in\mathbb{R}^{I_{H}\times I_{W}\times c}\) denotes each image and \(\hat{\mathbf{y}}_{n}\in\{0,1\}^{C}\) its class label. Within the context of Concept Bottleneck Models (CBMs), a _concept set_ \(\mathbb{A}=\{a_{1},\dots,a_{H}\}\), comprising \(H\) concepts, e.g., textual descriptions, is also considered; the main objective is to re-formulate the prediction process, constructing a _bottleneck_ that relies upon the considered concepts, in an attempt to design _inherently interpretable models_. 
Thus, assuming a concept set \(\mathbb{A}\), with \(|\mathbb{A}|=H\), the most commonly considered similarity measure \(\mathbf{S}\) is the cosine similarity: \[\mathbf{S}\propto E_{I}(\mathbf{X})E_{T}(A)^{T}\in\mathbb{R}^{N\times H} \tag{1}\] This _similarity-based representation_ has recently been exploited to design models with interpretable decision processes, such as CBM variants (Yuksekgonul et al., 2022; Oikarinen et al., 2023) and Network Dissection approaches (Oikarinen and Weng, 2023). Evidently, the similarity \(\mathbf{S}\) yields a unique representation for each image and can directly be used towards downstream tasks. Let us consider a \(C\)-class classification setting; by introducing a linear layer \(\mathbf{W}_{c}\in\mathbb{R}^{H\times C}\), we can perform classification via the similarity representation \(\mathbf{S}\). The output of such a network yields: \[\mathbf{Y}=\mathbf{S}\mathbf{W}_{c}^{T}\in\mathbb{R}^{N\times C} \tag{2}\] In this setting, the image and text encoders are usually kept frozen, and training only pertains to the weight matrix \(\mathbf{W}_{c}\). This approach has been shown to yield impressive results despite its simplicity, even on low-resolution datasets such as CIFAR-10 (Panousis et al., 2023).

However, this simple formulation comes with a key deficit: it is by design limited to the granularity of the concepts that it can potentially discover in any particular image. Indeed, for any given image, image-text models are commonly trained to match _high-level concepts_ present therein; this leads to a _loss of granularity_, that is, important details in the image are either omitted or considered irrelevant. Yet, in complex tasks such as fine-grained classification, or in cases where the decision is ambiguous, this can potentially hinder both the downstream task and interpretability. In these settings, it is likely that any low-level information present is not captured, obstructing any potential low-level investigation of how the network reasoned on the high-level concept. Moreover, this approach considers the _entire concept set_ to describe an input; this not only greatly limits the flexibility of the considered framework, but also renders the interpretation analyses questionable due to the sheer amount of concepts considered (Ramaswamy et al., 2023).

In this work, we consider a novel hierarchical concept discovery formulation, introducing the notion of _hierarchy_ of concepts, represented by two distinct yet dependent modeling _levels_: _High (H)_ and _Low (L)_. To this end, we introduce: (i) the high-level concepts \(\mathbb{A}_{H},\ |\mathbb{A}_{H}|=H\); each concept therein is characterized by a number of attributes, thus forming (ii) the low-level pool of concepts (attributes) \(\mathbb{A}_{L},\ |\mathbb{A}_{L}|=L\). The former are used to discover an image's concept representation in the context of the _whole_ image, while the latter are used to uncover finer information residing in patch-specific regions. Each considered level aims to achieve the given downstream task, while information sharing takes place between them as we describe in the following.
### High Level Concept Discovery

For the high level, we consider: (i) the whole image, and (ii) the set of \(H\) concepts \(\mathbb{A}_{H}\). Using the definitions of concept-based classification, i.e., Eqs. (1) and (2), we can perform classification using a single linear layer with weights \(\mathbf{W}_{Hc}\in\mathbb{R}^{H\times C}\): \[\mathbf{S}_{H}\propto E_{I}(\mathbf{X})E_{T}(A_{H})^{T}\in\mathbb{R}^{N\times H} \tag{3}\] \[\mathbf{Y}_{H}=\mathbf{S}_{H}\mathbf{W}_{Hc}^{T}\in\mathbb{R}^{N\times C} \tag{4}\] In this formulation, however, all the considered concepts potentially contribute to the final decision, without taking into account the relevance of each concept towards the downstream task or any information redundancy; simultaneously, the interpretation capacity is also limited due to the large number of concepts used. To bypass this drawback, we consider a novel, data-driven mechanism for concept discovery based on auxiliary _binary_ latent variables.

**Concept Discovery.** To discover the _essential subset_ of high-level concepts to represent each example, we introduce appropriate auxiliary binary latent variables \(\mathbf{Z}_{H}\in\{0,1\}^{N\times H}\); these operate in an "on"-"off" fashion, indicating, for each example, if a given concept needs to be considered to achieve the downstream task, i.e., \([\mathbf{Z}_{H}]_{n,h}=1\) if concept \(h\) is _active_ for example \(n\), and \(0\) otherwise. The output of the network is now given by the inner product between the classification matrix \(\mathbf{W}_{Hc}\) and the _effective concepts_ as dictated by the binary indicators \(\mathbf{Z}_{H}\): \[\mathbf{Y}_{H}=(\mathbf{Z}_{H}\cdot\mathbf{S}_{H})\mathbf{W}_{Hc}^{T}\in\mathbb{R}^{N\times C} \tag{5}\] A naive definition of these indicators would require computing and storing one indicator per example. To avoid the computational complexity and generalization limitations of such a formulation, we consider an _amortized_ approach similar to (Panousis et al., 2023). To this end, we introduce a data-driven random sampling procedure for \(\mathbf{Z}_{H}\), and postulate that the latent variables are drawn from appropriate Bernoulli distributions; specifically, their probabilities are proportional to a separate linear computation between the _embedding of the image_ and an _auxiliary linear layer_ with weights \(\mathbf{W}_{Hs}\in\mathbb{R}^{H\times K}\), where \(K\) is the dimensionality of the embedding, yielding: \[q([\mathbf{Z}_{H}]_{n})=\mathrm{Bernoulli}\left([\mathbf{Z}_{H}]_{n}\middle|\mathrm{sigmoid}\left(E_{I}(\mathbf{X}_{n})\mathbf{W}_{Hs}^{T}\right)\right)\in\{0,1\}^{H},\quad\forall n \tag{6}\] This formulation exploits an _additional source of information_ emerging solely from the image embedding; this allows for an _explicit_ mechanism for inferring concept relevance in the context of the considered task, instead of exclusively relying on the _implicit_ CLIP similarity measure. However, considering only the high-level concept information can be insufficient, since it potentially ignores the effect of any fine-grained details present in an image. To this end, we introduce a novel low-level concept discovery mechanism that is then directly tied to the described high-level formulation.
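To make the high-level pathway of Eqs. (3)-(6) concrete, the following is a minimal PyTorch sketch, not the authors' released implementation: the module name `HighLevelDiscovery`, the Gumbel-Softmax relaxation used for the Bernoulli draws, and hyperparameters such as `tau` are our illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HighLevelDiscovery(nn.Module):
    """Sketch of the high-level branch: CLIP similarities gated by
    (relaxed) Bernoulli indicators Z_H, cf. Eqs. (3)-(6)."""
    def __init__(self, emb_dim: int, n_concepts: int, n_classes: int, tau: float = 0.5):
        super().__init__()
        self.W_hs = nn.Linear(emb_dim, n_concepts, bias=False)   # amortized indicator logits, Eq. (6)
        self.W_hc = nn.Linear(n_concepts, n_classes, bias=False) # classification layer, Eq. (4)
        self.tau = tau  # relaxation temperature (illustrative value)

    def forward(self, img_emb: torch.Tensor, txt_emb: torch.Tensor):
        # img_emb: (N, K) and txt_emb: (H, K), both assumed L2-normalized
        s_h = img_emb @ txt_emb.t()                       # cosine similarities, Eq. (3)
        logits = self.W_hs(img_emb)                       # Bernoulli logits
        if self.training:
            # differentiable surrogate for the Bernoulli draw (Gumbel-Softmax)
            two_class = torch.stack([logits, torch.zeros_like(logits)], dim=-1)
            z_h = F.gumbel_softmax(two_class, tau=self.tau, hard=True)[..., 0]
        else:
            z_h = (torch.sigmoid(logits) > 0.5).float()   # hard decision at test time
        y_h = self.W_hc(z_h * s_h)                        # effective concepts, Eq. (5)
        return y_h, z_h
```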
### Low Level Concept Discovery

For formulating a finer concept discovery mechanism, we introduce the notion of _concept hierarchy_. Specifically, we assume that each of the \(H\) high-level concepts is characterized by a number of low-level attributes; these are pooled together to form the set of \(L\) low-level concepts \(\mathbb{A}_{L}\). In general, high-level concepts may or may not share any low-level attributes.

Within this framework, re-using the whole image may hinder concept discovery, since fine-grained details may be ignored in the context of the whole image. Moreover, prominent objects may dominate the discovery task, especially in complex scenes, while other significant attributes present in different regions of the image can be completely ignored. Thus, to facilitate the discovery of low-level information, avoiding conflicting information in the context of the whole image, we split it into a set of \(P\) _non-overlapping_ patches: \(\mathbb{P}=\{\mathbf{P}_{1},\mathbf{P}_{2},\dots,\mathbf{P}_{P}\}\), where \(\mathbf{P}_{p}\in\mathbb{R}^{P_{H}\times P_{W}\times c}\) and \(P_{H},P_{W}\) denote the height and width of each patch respectively. In this context, each patch is now treated as a standalone image. To this end, we first compute the similarities with respect to the pool of low-level concepts. For each image \(n\) split into \(P\) patches, the patches-concepts similarity computation reads: \[[\mathbf{S}_{L}]_{n}\propto E_{I}([\mathbf{P}]_{n})E_{T}(A_{L})^{T}\in\mathbb{R}^{P\times L},\quad\forall n \tag{7}\] We define a single classification layer with weights \(\mathbf{W}_{Lc}\in\mathbb{R}^{L\times C}\), while for obtaining a single representation vector for each image, we introduce an _aggregation_ operation to combine the information from all the patches. This can be performed before or after the linear layer. Here, we consider the latter, using a maximum-based rationale. Thus, for each image \(n\), the output \([\mathbf{Y}_{L}]_{n}\in\mathbb{R}^{C}\) reads: \[[\mathbf{Y}_{L}]_{n}=\max_{p}\left[[\mathbf{S}_{L}]_{n}\mathbf{W}_{Lc}^{T}\right]_{p}\in\mathbb{R}^{C},\quad\forall n \tag{8}\] This formulation still exhibits the same issue as the simple concept-based approach: all low-level concepts are potentially considered, hindering the interpretation process. To this end, we define the corresponding concept discovery mechanism for the low level to address information redundancy, and then introduce an information linkage between the different levels.

**Concept Discovery.** For each patch \(p\) of image \(n\), we consider latent variables \([\mathbf{Z}_{L}]_{n,p}\in\{0,1\}^{L}\), operating in an "on"-"off" fashion as before. Specifically, we introduce an amortization matrix \(\mathbf{W}_{Ls}\in\mathbb{R}^{L\times K}\), \(K\) being the dimensionality of the embeddings. In this setting, \([\mathbf{Z}_{L}]_{n,p}\) are drawn from Bernoulli distributions driven by the patch embeddings, s.t.: \[q([\mathbf{Z}_{L}]_{n,p})=\mathrm{Bernoulli}\left([\mathbf{Z}_{L}]_{n,p}\big{|}\mathrm{sigmoid}\left(E_{I}([\mathbf{P}]_{n,p})\mathbf{W}_{Ls}^{T}\right)\right)\in\{0,1\}^{L},\quad\forall n,p \tag{9}\] The output is now given by the inner product between the _effective low-level concepts_ as dictated by \(\mathbf{Z}_{L}\) and the weight matrix \(\mathbf{W}_{Lc}\), yielding: \[[\mathbf{Y}_{L}]_{n}=\max_{p}\left[\left([\mathbf{Z}_{L}]_{n}\cdot[\mathbf{S}_{L}]_{n}\right)\mathbf{W}_{Lc}^{T}\right]_{p}\in\mathbb{R}^{C},\;\forall n \tag{10}\] The formulation of the low-level, patch-focused variant is now concluded. This can be used as a standalone network to uncover information residing in patch-specific regions of an image and investigate the network's decision making process. However, we can further augment this functionality by linking the two described levels, allowing the flow of information between them.
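As a companion to the high-level sketch, the patch-level computation of Eqs. (7)-(10) can be written as a small function; again this is a hedged sketch, with the function name, shapes, and the hard 0/1 indicators being our assumptions rather than the paper's exact code.

```python
import torch

def low_level_logits(patch_emb: torch.Tensor,   # (N, P, K) L2-normalized patch embeddings
                     attr_emb: torch.Tensor,    # (L, K)    L2-normalized attribute embeddings
                     z_l: torch.Tensor,         # (N, P, L) binary (or relaxed) indicators, Eq. (9)
                     W_lc: torch.Tensor):       # (C, L)    shared classification weights
    """Sketch of Eqs. (7)-(10): patch-attribute similarities gated by
    per-patch indicators, then max-aggregated over patches."""
    s_l = patch_emb @ attr_emb.t()        # (N, P, L), Eq. (7)
    per_patch = (z_l * s_l) @ W_lc.t()    # (N, P, C), gated logits per patch
    return per_patch.max(dim=1).values    # (N, C), max over patches, Eq. (10)
```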
### Linking the two levels

For tying the two different levels together, we exploit: (i) the latent variables \(\mathbf{Z}_{H},\mathbf{Z}_{L}\), and (ii) the relationship between the high- and low-level concepts. Since for each high-level concept we have access to which concepts from the low-level pool of attributes characterize it, we can use this information for context exchange between the two levels. Specifically, for each high-level concept \(h\), we consider a _fixed_ \(L\)-sized binary vector \(\mathbf{b}_{h}\in\{0,1\}^{L}\) that encodes its relationship with the attributes; these are concatenated to form the matrix \(\mathbf{B}\in\{0,1\}^{L\times H}\). Each entry \(l,h\) therein denotes if the low-level attribute \(l\) characterizes the high-level concept \(h\); if so, \([\mathbf{B}]_{l,h}=1\), otherwise \([\mathbf{B}]_{l,h}=0\). It is important to highlight that we do not require any ground truth information for constructing \(\mathbf{B}\); its construction is solely based on the concept sets. However, if ground-truth indicators denoting the relation between high- and low-level concepts are available, we can easily exploit them as prior information.

Constructing \(\mathbf{B}\) is a very intuitive process. For example, consider the high-level concept _cat_ and a pool of attributes [_fur, paws, bricks, eggs, tail_]. In this setting, \(\mathbf{b}_{\text{cat}}=[1,1,0,0,1]\), since we expect a _cat_ to be characterized by _fur, paws_ and _tail_, and not by _bricks_ and _eggs_. Thus, if _cat_ is selected as a high-level concept, _eggs_ will not be considered on the low level. In general, in this way, we can _mask_ the low-level concepts and zero out the ones that are irrelevant. During training, we learn which high-level concepts are active, and subsequently discover the relevance of low-level attributes; this leads to an information exchange between the high and the low levels of the network towards achieving the downstream task.

To formalize this linkage, we first consider which high-level concepts are active via \(\mathbf{Z}_{H}\) and \(\mathbf{B}\) to uncover which low-level attributes should be considered in the final decision; this is computed via a mean operation, averaging over the high-level dimension \(H\). Then, we use the indicators \(\mathbf{Z}_{L}\) to further mask the remaining low-level attributes. This yields: \[\mathbf{Z}=\left(\frac{1}{H}\mathbf{Z}_{H}\mathbf{B}^{T}\right)\cdot\mathbf{Z}_{L} \tag{11}\] Thus, by replacing the indicators \(\mathbf{Z}_{L}\) in Eq. (10) with \(\mathbf{Z}\), the two levels are linked together and can be trained in an end-to-end fashion. A graphical illustration of the proposed Concept Pyramid Models (CPM) is depicted in Fig. 1. The introduced framework can easily accommodate more than two levels of hierarchy, while allowing for the usage of different input representations, e.g., super-pixels.
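The construction of \(\mathbf{B}\) and the masking of Eq. (11) are straightforward to express in code. Below is a minimal sketch under our own naming; `build_B`, `link_indicators`, and the toy `concept_to_attrs` mapping are illustrative, not from the paper.

```python
import torch

def build_B(high_concepts, low_attrs, concept_to_attrs):
    """Binary matrix B of shape (L, H) from a concept -> attributes mapping,
    e.g. concept_to_attrs = {"cat": ["fur", "paws", "tail"]} (toy example)."""
    B = torch.zeros(len(low_attrs), len(high_concepts))
    attr_idx = {a: i for i, a in enumerate(low_attrs)}
    for h, concept in enumerate(high_concepts):
        for attr in concept_to_attrs.get(concept, []):
            B[attr_idx[attr], h] = 1.0
    return B

def link_indicators(z_h, z_l, B):
    """Eq. (11): Z = (Z_H B^T / H) * Z_L.
    z_h: (N, H) high-level indicators; z_l: (N, P, L) low-level indicators."""
    H = z_h.shape[1]
    mask = (z_h @ B.t()) / H          # (N, L): which attributes are admissible
    return mask.unsqueeze(1) * z_l    # (N, P, L): masked low-level indicators
```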
### Training & Inference

**Training.** Considering a dataset \(\mathcal{D}=\{(\mathbf{X}_{n},\hat{\mathbf{y}}_{n})\}_{n=1}^{N}\), we employ the standard cross-entropy loss, denoted by \(\mathrm{CE}\big(\hat{\mathbf{y}}_{n},f(\mathbf{X}_{n},\mathbf{A})\big)\), where \(f(\mathbf{X}_{n},\mathbf{A})=\mathrm{Softmax}([\mathbf{Y}]_{n})\) are the class probabilities. For the simple concept-based model, i.e., without any discovery mechanism, the logits \([\mathbf{Y}]_{n}\) correspond to either \([\mathbf{Y}_{H}]_{n}\) (Eq. (4)) or \([\mathbf{Y}_{L}]_{n}\) (Eq. (8)), depending on the considered level. In this context, the only trainable parameters are the classification matrices for each level, i.e., \(\mathbf{W}_{Hc}\) or \(\mathbf{W}_{Lc}\). For the full model, the presence of the indicator variables, i.e., \(\mathbf{Z}_{H}\) and/or \(\mathbf{Z}_{L}\), necessitates a different treatment of the objective. To this end, we turn to the Variational Bayesian (VB) framework, and specifically to Stochastic Gradient Variational Bayes (SGVB) (Kingma and Welling, 2014). We impose appropriate prior distributions on the latent indicators \(\mathbf{Z}_{H}\) and \(\mathbf{Z}_{L}\), such that: \[\mathbf{Z}_{H}\sim\mathrm{Bernoulli}(\alpha_{H}),\qquad\mathbf{Z}_{L}\sim\mathrm{Bernoulli}(\alpha_{L}) \tag{12}\] where \(\alpha_{H}\) and \(\alpha_{L}\) are non-negative constants. In the following, we consider the case where the levels are linked together. Obtaining the objective for a single level is trivial; one only needs to remove the other level's terms. Since the network comprises two outputs, the loss function consists of two distinct CE terms: (i) one for the high level, and (ii) one for the low level. The final objective function takes the form of an Evidence Lower Bound (ELBO) (Hoffman et al., 2013): \[\begin{split}\mathcal{L}_{\mathrm{ELBO}}=\sum_{n=1}^{N}\Big[&\varepsilon\,\mathrm{CE}\big(\hat{\mathbf{y}}_{n},f(\mathbf{X}_{n},\mathbf{A}_{H},[\mathbf{Z}_{H}]_{n})\big)+(1-\varepsilon)\,\mathrm{CE}\big(\hat{\mathbf{y}}_{n},f(\mathbf{X}_{n},\mathbf{A}_{L},[\mathbf{Z}]_{n})\big)\\ &-\beta\Big(\mathrm{D}_{KL}\big(q([\mathbf{Z}_{H}]_{n})\,\big\|\,p([\mathbf{Z}_{H}]_{n})\big)+\sum_{p}\mathrm{D}_{KL}\big(q([\mathbf{Z}_{L}]_{n,p})\,\big\|\,p([\mathbf{Z}_{L}]_{n,p})\big)\Big)\Big]\end{split} \tag{13}\] where we augmented the CE notation to reflect the dependence on the binary indicators, and \(\varepsilon\) is a balancing term. \(\beta\) is a scaling factor (Higgins et al., 2017) to avert the KL term from dominating the downstream task. The KL term encourages the posterior to be close to the prior; setting \(\alpha_{H},\alpha_{L}\) to a very small value "pushes" the posterior towards sparser solutions. Through training, we aim to learn which of these components effectively contribute to the downstream task. For computing Eq. (13), we turn to Monte Carlo (MC) sampling, using a single reparameterized sample for each latent variable. Since the Bernoulli is not amenable to the reparameterization trick (Kingma and Welling, 2014), we turn to its continuous relaxation using the Gumbel-Softmax trick (Maddison et al., 2017; Jang et al., 2017); we present the exact sampling procedure in the appendix.

Figure 1: A schematic of the envisioned Concept Pyramid Models. We consider a set of high level concepts, each described by a number of attributes; this forms the _pool_ of low-level concepts. Our objective is to discover concepts that describe the whole image, while exploiting information residing in patch-specific regions. To this end, we match low-level concepts to each patch and aggregate the information to obtain a single representation to achieve a downstream task. The levels are tied together via the concept indicators \(\mathbf{Z}_{H},\mathbf{Z}_{L}\) and the relationship between the concepts.
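Returning to the objective of Eq. (13): in practice one minimizes two weighted cross-entropy terms plus the scaled Bernoulli KL regularizers. A minimal sketch of such a loss follows; the closed-form Bernoulli KL and all default values (`eps`, `beta`, the priors) are our illustrative choices rather than the paper's reported settings.

```python
import torch
import torch.nn.functional as F

def bernoulli_kl(q_probs, prior_alpha):
    """Closed-form KL( Bernoulli(q) || Bernoulli(alpha) ), elementwise."""
    q = q_probs.clamp(1e-6, 1.0 - 1e-6)
    a = torch.as_tensor(prior_alpha)
    return q * torch.log(q / a) + (1 - q) * torch.log((1 - q) / (1 - a))

def training_loss(logits_h, logits_l, y, q_h, q_l,
                  alpha_h=1e-3, alpha_l=1e-3, eps=0.5, beta=1e-3):
    """Sketch of a loss in the spirit of Eq. (13). q_h: (N, H) posterior
    probabilities for Z_H; q_l: (N, P, L) for Z_L; y: (N,) class labels."""
    ce = eps * F.cross_entropy(logits_h, y) + (1 - eps) * F.cross_entropy(logits_l, y)
    kl = bernoulli_kl(q_h, alpha_h).sum(-1).mean() + \
         bernoulli_kl(q_l, alpha_l).sum(-1).sum(-1).mean()
    return ce + beta * kl  # minimized with a standard optimizer
```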
**Inference.** After training, we can directly draw samples from the learned posteriors and perform inference. Specifically, let us assume an input image \(\mathbf{X}\); this is first passed through the high-level discovery mechanism (Eq. (6)), from which we draw samples of the high-level concept indicators \(\mathbf{Z}_{H}\) and compute the high-level output based on Eq. (5). We then turn to the low level: first, the image is split into patches. We then draw samples for the patch-specific indicators \([\mathbf{Z}_{L}]\) according to Eq. (9). We combine the low- and high-level information through Eq. (11) and compute the output for the low level. Finally, apart from assessing the classification capacity, we can investigate the latent indicators on each level to gain insights into the network's decision making process.

## 4 Experimental Evaluation

**Experimental Setup.** We consider three different benchmark datasets for evaluating the proposed hierarchical framework, namely, CUB, SUN, and ImageNet-1k. These constitute highly diverse datasets varying in both number of examples and applicability: ImageNet is a \(1000\)-class object recognition benchmark, SUN comprises \(717\) classes with a limited number of examples for each, while CUB is used for fine-grained bird species identification spanning \(200\) classes. For the Vision-Language models, we turn to CLIP (Radford et al., 2021) and select a common backbone, i.e., ViT-B/16. To avoid having to calculate the embeddings of both images/patches and text at each iteration, we pre-compute them with the chosen backbone. Then, during training, we directly load them and compute the necessary quantities. For the high-level concepts, we consider the class names for each dataset. For the low-level concepts, we consider: (i) for SUN and CUB, the ground-truth attributes comprising \(102\) and \(312\) descriptions respectively, and (ii) for ImageNet, we randomly select \(20\) concepts for each class from the concept set described in Yang et al. (2023a). These distinct sets enable us to assess the efficacy of the proposed framework in highly diverse configurations. For constructing \(\mathbf{B}\), we consider: (i) for SUN and CUB, a per-class summary stemming from the ground truth relationship between classes and attributes, and (ii) for ImageNet, a binary representation of the \(20\) active entries for each concept. We consider both classification accuracy, as well as the capacity of the proposed framework towards interpretability. Further details can be found in the Appendix.

**Accuracy.** We begin our experimental analysis by assessing both the classification capacity of the proposed framework and its _concept sparsification_ ability. To this end, we consider: (i) a baseline non-interpretable backbone, (ii) the recently proposed SOTA Label-Free CBMs (Oikarinen et al., 2023), (iii) classification using only the CLIP embeddings, either of the whole image (CLIP Embeddings\(^{\text{H}}\)) or of the image's patches (CLIP Embeddings\(^{\text{L}}\)), (iv) classification based on the similarity between images and the _whole_ concept set (CDM\(^{\text{H}}\) without discovery), and (v) the approach of Panousis et al. (2023) that considers a data-driven concept discovery mechanism only on the whole image (CDM\(^{\text{H}}\) with discovery). We also consider the proposed patch-specific variant of CDMs defined in Sec. 3.2, denoted by CDM\(^{\text{L}}\). The baseline results and the Label-Free CBMs are taken directly from (Oikarinen et al., 2023). We denote our novel hierarchical framework as CPM. The obtained comparative results are depicted in Table 1.
Therein, we observe that the proposed framework exhibits highly improved performance compared to Label-Free CBMs, and on-par or even improved classification performance compared to the concept discovery-based CDMs. At this point, it is important to highlight the effect of the hierarchical construction and the linkage of the levels on the overall behavior of the network. In all the considered settings, we observe: (i) a drastic improvement of the classification accuracy of the low-level module, and (ii) a significant change in the patterns of concept discovery on both levels. We posit that the information exchange that takes place between the levels conveys a _context_ of the relevant attributes that should be considered. This is reflected both in the capacity to improve the low-level classification rate compared to solely using the CLIP embeddings or CDM\(^{\text{L}}\), but also in the drastic change of the concept retention rate of the low level. At the same time, the patch-specific information discovered on the low level alters the discovery patterns of the high level, since potentially more concepts should be activated in order to successfully achieve the downstream task. This behavior is particularly pronounced in the ImageNet case: our approach not only exhibits significant gains compared to the alternative concept-based CDM\(^{\text{H}}\) on the high level, but the low-level accuracy of our approach also _outperforms_ it by a large margin. These first investigations hint at the capacity of the proposed framework to exploit patch-specific information for improving on the considered downstream task.

**Attribute Matching.** Even though classification performance constitutes an important indicator of the overall capacity of a given architecture, it is not an appropriate metric for quantifying its behavior within the context of interpretability. To this end, and contrary to recent approaches that solely rely on classification performance and qualitative analyses, we introduce a metric to measure the effectiveness of a concept-based approach. Thus, we turn to the _Jaccard Similarity_ and compute the similarity between the binary indicators \(\mathbf{z}\) that denote the _discovered_ concepts and the binary ground truth indicators that can be found in both CUB and SUN; we denote the latter by \(\mathbf{z}^{\text{gt}}\). Let us denote by: (i) \(M_{1,1}\) the number of entries equal to \(1\) in both binary vectors, (ii) \(M_{0,1}\) the number of entries equal to \(0\) in \(\mathbf{z}\) but equal to \(1\) in \(\mathbf{z}^{\text{gt}}\), and (iii) \(M_{1,0}\) the number of entries equal to \(1\) in \(\mathbf{z}\) but equal to \(0\) in \(\mathbf{z}^{\text{gt}}\); we consider the _asymmetric case_, focusing on the importance of correctly detecting the presence of a concept. Then, we can compute the Jaccard similarity as: \[\mathrm{Jaccard}(\mathbf{z},\mathbf{z}^{\text{gt}})=\frac{M_{1,1}}{M_{1,1}+M_{1,0}+M_{0,1}} \tag{14}\] The considered metric can be exploited as an objective score for evaluating the quality of the obtained concept-based explanations across multiple frameworks, given that they consider the same concept set and the ground truth indicators exist. For a baseline comparison, we train a CDM with either: (i) the whole image (CDM), or (ii) the image patches (CDM\(^{\text{L}}\)), using the _whole set_ of low-level attributes as the concept set for both SUN and CUB.
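Eq. (14) is simple to implement; the following NumPy sketch (our own helper, with an arbitrary convention for the degenerate all-zero case) makes the asymmetric counting explicit:

```python
import numpy as np

def attribute_jaccard(z, z_gt):
    """Jaccard similarity of Eq. (14) between predicted indicators z and
    ground-truth indicators z_gt (0/1 arrays of the same shape)."""
    z, z_gt = np.asarray(z, dtype=bool), np.asarray(z_gt, dtype=bool)
    m11 = np.sum(z & z_gt)     # concepts correctly detected as present
    m10 = np.sum(z & ~z_gt)    # predicted present but absent in ground truth
    m01 = np.sum(~z & z_gt)    # present in ground truth but missed
    denom = m11 + m10 + m01    # note: true negatives (M_00) are excluded
    return m11 / denom if denom > 0 else 1.0
```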
We consider the same set for the low level of CPMs; due to its hierarchical nature, however, CPM can exploit _concept hierarchy_ as described in Sec. 3.3 to narrow down the concepts considered on the low level. For both SUN and CUB, we have ground truth attributes on a per-example basis (_example-wise_), but also the attributes present per class (_class-wise_). We assess the matching between these ground-truth indicators and the inferred indicators both in terms of binary accuracy and in terms of the considered Jaccard index. In Table 2, the attribute matching results are depicted. Therein we observe that our CPMs outperform both CDM and CDM\(^{\text{L}}\) in all the different configurations and in both the considered metrics, with up to \(10\%\) improvement. These results suggest that by exploiting concept and representation hierarchy, we can uncover low-level information and more relevant concepts. However, it is also important to note how the binary accuracy metric can be quite misleading. Indeed, the ground truth indicators, particularly in CUB, are quite sparse; thus, if a model predicts that most concepts are not relevant, we yield a very high binary accuracy.

Table 1: Classification accuracy and average percentage of activated concepts (sparsity); for discovery-based models, entries read accuracy \| sparsity. By bold black/blue, we denote the best-performing high/low level _sparsity_-inducing concept-based model.

| Model | Concepts | Discovery | CUB | SUN | ImageNet |
| --- | --- | --- | --- | --- | --- |
| Baseline (Images) | ✗ | ✗ | 76.70 | 42.90 | 76.13 |
| Label-Free CBMs | ✓ | ✗ | 74.59 | − | 71.98 |
| CLIP Embeddings\(^{\text{H}}\) | ✗ | ✗ | 81.90 | 65.80 | 79.40 |
| CLIP Embeddings\(^{\text{L}}\) | ✗ | ✗ | 47.80 | 46.00 | 62.85 |
| CDM\(^{\text{H}}\) | ✓ | ✗ | **80.30** | **66.25** | 75.22 |
| CDM\(^{\text{L}}\) | ✓ | ✗ | 39.05 | 37.00 | 49.20 |
| CDM\(^{\text{H}}\) | ✓ | ✓ | 78.90 \| 19.00 | 64.55 \| 13.00 | 76.55 \| 14.00 |
| CDM\(^{\text{L}}\) | ✓ | ✓ | 59.62 \| 58.00 | 42.30 \| 67.00 | 58.20 \| 25.60 |
| CPM (Ours), High | ✓ | ✓ | 77.80 \| 42.30 | 54.00 \| 47.58 | **77.40** \| 27.20 |
| CPM (Ours), Low | ✓ | ✓ | **72.00** \| 42.40 | 57.10 \| 23.33 | **78.45** \| 15.00 |

Table 2: Attribute matching accuracy. We compare our approach to the recently proposed CDM model trained on the considered low-level concept sets. We then predict the matching, in terms of Jaccard similarity, between the inferred per-example concept indicators and: (i) the per-example and (ii) the class-wise ground truth attributes found in both SUN and CUB.

| Model | Attribute Set (Train) | Attribute Set (Eval) | SUN | CUB |
| --- | --- | --- | --- | --- |
| CDM (Panousis et al., 2023) | whole set | class-wise | 51.43 \| 26.00 | 39.00 \| 17.20 |
| CDM (Panousis et al., 2023) | whole set | example-wise | 48.45 \| 15.70 | 36.15 \| 19.50 |
| CDM\(^{\text{L}}\) | whole set | class-wise | 30.35 \| 26.70 | 25.81 \| 19.60 |
| CDM\(^{\text{L}}\) | whole set | example-wise | 20.70 \| 15.00 | 17.65 \| 10.40 |
| CPM (Ours) | hierarchy | class-wise | **53.10 \| 28.20** | **79.85 \| 27.20** |
| CPM (Ours) | hierarchy | example-wise | **49.92 \| 16.80** | **81.00 \| 16.10** |
Fortunately, the proposed metric successfully addresses this false sense of confidence, offering a more appropriate measure of concept matching.

**Qualitative Analysis.** For our qualitative analysis, we focus on the ImageNet-1k validation set; this decision was motivated by the fact that it is the only dataset where attribute matching could not be assessed, due to the absence of ground-truth information. Thus, in Fig. 2, we select a random class (_Sussex Spaniel_) and depict: (i) the \(20\) originally considered concepts, and (ii) the results of the concept discovery. In this setting, we consider a concept to be relevant to the class if it is present in more than \(40\%\) of the examples of the class; these concepts are obtained by averaging over the class examples' indicators. We observe that our CPM is able to retain highly relevant concepts from the original set, while discovering equally relevant concepts from other classes such as _australian terrier_, _soft-coated wheaten terrier_ and _collie_. Finally, in Fig. 3, for a random image from the ImageNet-1k validation set, we illustrate: (i) the original set of concepts describing its class (_Black Swan_), and (ii) some of the low-level attributes discovered by our CPM. We observe that the original concept set pertaining to the class cannot adequately represent the considered example. Indeed, most concepts therein would make the interpretation task difficult even for a human annotator. In stark contrast, the proposed framework allows for a more interpretable set of concepts, capturing finer information residing in the patches; this can in turn facilitate a more thorough examination of the network's decision making process.

## 5 Limitations & Conclusions

A potential limitation of the proposed framework is the dependence on the pretrained image/text encoders. The final performance and interpretation capacity are tied to the suitability of the backbone with respect to the task at hand. If the embeddings cannot adequately capture the relation (in terms of similarity) between images/patches and concepts, there is currently no mechanism to mitigate this issue. However, if this issue arises, the introduced construction can easily accommodate any suitable modifications by simply altering the embedding networks. Concerning the complexity of the proposed CPM framework, by precomputing all the required embeddings for a considered task, the resulting complexity is significantly lower than training a backbone such as ResNet-18. We performed our experiments on a single commodity GPU without data parallelization. In more complex settings, a more distributed mode of computation should be considered to improve performance.

Figure 3: A random example from the _Black Swan_ class of the ImageNet-1k validation set. On the upper part, the original concept set corresponding to the class is depicted, while on the lower, some of the concepts discovered via our novel patch-specific formulation.

Figure 2: Original and additional discovered concepts for the _Sussex Spaniel_ ImageNet class.
By green, we denote the concepts retained from the original low-level set pertaining to the class; by maroon, concepts removed via the binary indicators \(\mathbf{Z}\); and by purple, the newly discovered concepts.

In this work, we proposed an innovative framework in the context of ante-hoc interpretability based on a novel hierarchical construction. We introduced the notion of _concept hierarchy_, in which high-level concepts are characterized by a number of lower-level attributes. In this context, we leveraged recent advances in CBMs and Bayesian arguments to construct an end-to-end multi-level network that can exploit these distinct concept representations, considering both the whole image as well as its individual patches; this facilitated the discovery and exploitation of finer information residing in patch-specific regions of the image. We validated our paradigm in terms of classification performance, while also introducing a new metric for evaluating the network's capacity towards interpretability. As we experimentally showed, we yielded networks that retain or even improve classification accuracy, while allowing for a more granular investigation of their decision process.
2302.02307
Finite-Time Analysis of Crises in a Chaotically Forced Ocean Model
We consider a coupling of the Stommel box model and the Lorenz model, with the goal of investigating the so-called "crises" that are known to occur given sufficient forcing. In this context, a crisis is characterized as the destruction of a chaotic attractor under a critical forcing strength. We document the variety of chaotic attractors and crises possible in our model, focusing on the parameter region where the Lorenz model is always chaotic and where bistability exists in the Stommel box model. The chaotic saddle collisions that occur in a boundary crisis are visualized, with the chaotic saddle computed using the Saddle-Straddle Algorithm. We identify a novel sub-type of boundary crisis, namely a vanishing basin crisis. For forcing strength beyond the crisis, we demonstrate the possibility of a merging between the persisting chaotic attractor and either a chaotic transient or a ghost attractor depending on the type of boundary crisis. An investigation of the finite-time Lyapunov exponents around crisis levels of forcing reveals a convergence between two near-neutral exponents, particularly at points of a trajectory most sensitive to divergence. This points to loss of hyperbolicity associated with crisis occurrence. Finally, we generalize our findings by coupling the Stommel box model to other strange attractors and thereby show that the behaviours are quite generic and robust.
Andrew R. Axelsen, Courtney R. Quinn, Andrew P. Bassom
2023-02-05T05:19:08Z
http://arxiv.org/abs/2302.02307v2
# Finite-Time Analysis of Crises in a Chaotically Forced Ocean Model

###### Abstract

We consider a coupling of the Stommel Box and the Lorenz models, with the goal of investigating the so-called "crises" that are known to occur given sufficient forcing. In this context, a crisis is characterised as the bifurcation of a system that is produced by the application of forcing to a bistable system, thereby reducing it to a monostable system. We document the variety of chaotic attractors and crises possible in our model and demonstrate the possibility of a merging between the stable chaotic attractor that persists after a crisis and either a chaotic transient or a ghost attractor. An investigation of the finite-time Lyapunov exponents around crisis levels of forcing reveals a strong alignment between the first Stommel and neutral Lorenz exponents, particularly at points of a trajectory most sensitive to divergence around these levels. We discuss possible predictors that may identify those chaotic attractors liable to collapse as a consequence of a crisis, and show the chaotic saddle collisions that occur in a boundary crisis. Finally, we comment on the generality of our findings by coupling the Stommel Box model with other strange attractors, and thereby show that many of the behaviours are quite generic and robust.

MSC codes: 37G35, 37N10, 65L07

## 1 Introduction

The Stommel Box Model [24] and the Lorenz model [17] (hereafter referred to as SBM and L63) are two fundamental conceptual climate models that represent surface fluxes in the North Atlantic and atmospheric convection respectively. The SBM has proved to be a useful idealisation of the mathematics that underpins the principal mechanisms at play within the North Atlantic Ocean, while the L63 model has become recognised as a system which is ubiquitous in the world of chaotic dynamics. More recently, the L63 model has been used to add chaotic forcing to the SBM dynamics [2], and this idea forms the cornerstone of the present study. Before we consider the interplay of the two models, we recall some of their individual key properties.

The SBM was suggested as an approach to assessing the surface flux variance in the North Atlantic Ocean. One of these fluxes, the thermal water flux, transports warm, saline water from the equatorial to polar regions. The other is the freshwater flux, in which fresh water is transported from the polar towards equatorial regions. The main aim of the model is to assess what happens when the two opposing surface fluxes meet, through a two-box approach as explained by Dijkstra and Ghil [10]. A variety of refinements to the standard SBM have been proposed (e.g. [16], [24]), but for the purposes of our work we adopt the form used in [2] and [10]. Consequently, we suppose that the quantities \(T\) and \(S\), which denote the temperature and salinity gradients from the equatorial regions to the polar regions respectively, are coupled according to \[\begin{array}{rcl}\dot{T}(t)&=&\xi-T(1+|T-S|),\\ \dot{S}(t)&=&\eta-S(\zeta+|T-S|).\end{array} \tag{1.1}\] Moreover, there are three key constants that arise: \(\xi\) denotes the strength of the thermal water flux, \(\eta\) is the strength of the freshwater flux and \(\zeta\) is the ratio between the thermal and freshwater-restoring timescales [10]. We also note that when \(T>S\), we say that the model is in a thermal-driven (TH) state, while \(T<S\) indicates a saline-driven (SA) state.
This model possesses two main modes of stability, in the form of monostability (one stable equilibrium value) or bistability (two stable equilibria and a saddle which determine the basins of attraction, with the two equilibria in opposite states). Furthermore, it is possible that some very special cases of stability may occur; for instance, for particular combinations of parameters there can be two stable equilibria, with one of these on the \(T=S\) line having a basin of attraction that consists of a single point, namely itself.

The L63 system was put forward in [17] as a simple model of atmospheric convection that relates various physical properties of a two-dimensional layer that is warmed from below and cooled from above. With certain choices of parameters, the model can create the chaotic signal that is arguably the most famous example of its type in the field. The model is simply given by: \[\begin{array}{rcl}\dot{x}(t)&=&\mu(y-x),\\ \dot{y}(t)&=&x(\rho-z)-y,\\ \dot{z}(t)&=&xy-\beta z,\end{array} \tag{1.2}\] where \(x\) represents the rate of convection, and \(y\) and \(z\) denote the horizontal and vertical variations in the temperature respectively. The parameters \(\mu\), \(\rho\), and \(\beta\) correspond to the ratio of viscosity to thermal conductivity, the temperature difference between the top and bottom of the section, and the width to height ratio of the section. For the purposes of this study, we fix \((\mu,\rho,\beta)=(10,28,\frac{8}{3})\), a combination for which a well-known chaotic signal exists [17].

A recent seminal paper by Ashwin and Newman [2], which focussed on measures for pullback attractors and tipping point probabilities, combined these two models into what we shall subsequently refer to as the Lorenz-Forced Stommel Box Model (LFSBM). In this, the SBM (1.1) is forced by the L63 system, which is used solely as a device for introducing chaotic variability. This is done by adding a forcing term to the SBM equations which is proportional to the convection component of the Lorenz variables. Then, following [2], we are left with the combined system \[\begin{array}{rcl}\dot{x}(t)&=&\mu(y-x),\\ \dot{y}(t)&=&x(\rho-z)-y,\\ \dot{z}(t)&=&xy-\beta z,\\ \dot{T}(t)&=&\xi+ax-T(1+|T-S|),\\ \dot{S}(t)&=&\eta+ax-S(\zeta+|T-S|).\end{array} \tag{1.3}\] This system is an example of a technique often used to analyse non-autonomous systems. In this approach, a forced (i.e. non-autonomous) system is converted into a fully autonomous version by expressing the forcing as a stand-alone ODE system, with which the original model is then augmented by the appropriate number of equations [6]. With small forcing, bistable solutions of the SBM are preserved in the LFSBM extension. As the forcing is increased it reaches a critical threshold at which the system undergoes a so-called "crisis"; then the bistability in the system is reduced to monostability.

Our focus in this work is to probe the properties of such crisis events, and the rest of the paper is organised as follows. We commence in Section 2 with a brief review of some of the key concepts that we shall need later. Then, in Section 3, we look at the crises that arise in the LFSBM and monitor the evolution of chaotic attractors as the forcing increases further. Section 4 is devoted to an investigation of the behaviour of solutions both prior and subsequent to the occurrence of a crisis, and we tackle this using a finite-time analysis. Section 5 considers how our findings might have generality beyond our specific model.
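For readers who wish to reproduce the trajectories discussed below, a minimal sketch of integrating (1.3) follows; the solver settings, tolerances, and the particular initial condition are illustrative choices on our part rather than prescriptions from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

MU, RHO, BETA = 10.0, 28.0, 8.0 / 3.0   # standard chaotic Lorenz parameters

def lfsbm(t, u, a, xi, eta, zeta):
    """Right-hand side of the coupled system (1.3); u = (x, y, z, T, S)."""
    x, y, z, T, S = u
    return [MU * (y - x),
            x * (RHO - z) - y,
            x * y - BETA * z,
            xi + a * x - T * (1.0 + abs(T - S)),
            eta + a * x - S * (zeta + abs(T - S))]

# Example: one of the parameter sets used later, (xi, eta, zeta) = (1, -1, -2)
sol = solve_ivp(lfsbm, (0.0, 100.0), [-1, -1, -1, -1, 1],
                args=(0.0541, 1.0, -1.0, -2.0),
                dense_output=True, rtol=1e-9, atol=1e-9)
T_vals, S_vals = sol.sol(np.linspace(0.0, 100.0, 40_000))[3:]  # Stommel phase plane
```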
The paper closes with a few final remarks, while more technical aspects of the numerical methods are relegated to an Appendix.

## 2 Background

Consider the autonomous continuous dynamical system \[\dot{\mathbf{x}}(t)=\mathbf{f}(\mathbf{x},\alpha), \tag{1}\] with initial condition \[\mathbf{x}(0)=\mathbf{x}_{0}. \tag{2}\] Here, \(\mathbf{f}\) denotes a function vector and \(\mathbf{x}\in\mathbf{R}^{n}\) the solution vector, while \(\alpha\) is a set of parameters that are held constant with respect to time \(t\). In the introduction, we made passing reference to the notion of chaotic attractors and how the presence of Lorenz chaos might affect the solutions. Now we formalise the notion of a chaotic attractor, at least in the context we shall need later. We begin with the idea of a forward limit set:

**Definition 2.1** (Forward Limit Set). _Given a continuous dynamical system (1) with an initial condition (2), we define the forward limit set of \(\mathbf{x}_{0}\) to be the trajectory of the solution from \(\mathbf{x}_{0}\) after time \(T\in\mathbf{R}^{+}\). That is, \[\omega_{T}(\mathbf{x}_{0}):=\{\mathbf{x}:\forall\epsilon>0,\exists\,t>T\,\text{s.t.}\,||\mathbf{x}_{0}(t)-\mathbf{x}||_{2}<\epsilon\}, \tag{3}\] where \(||\cdot||_{2}\) is the usual 2-norm._

This forward limit set, that is, the trajectory of a solution after some time \(T\), forms the foundation on which we shall build the definitions of the basins of attraction and our chaotic attractors. From its definition (3), we can infer the following:

* \(\mathbf{x}\) is an equilibrium value if and only if \(\omega_{T}(\mathbf{x})=\{\mathbf{x}\}\ \forall\ T\), and
* if \(\mathbf{x}\in\omega_{T}(\mathbf{x}_{0})\), then \(\mathbf{x}\in\omega_{K}(\mathbf{x}_{0})\ \forall\ K<T\).

We can use the forward limit set to introduce the formal ideas of attraction and the associated basins:

**Definition 2.2** (Attraction). _Given a continuous dynamical system (1) with two initial conditions \(\mathbf{x}(0)=\mathbf{x}_{0}\) and \(\mathbf{x}(0)=\mathbf{x}_{1}\), if for some \(T\) we have \(\omega_{T}(\mathbf{x}_{1})\subseteq\omega_{T}(\mathbf{x}_{0})\), then we say that \(\omega_{T}(\mathbf{x}_{1})\) is attracted to \(\omega_{T}(\mathbf{x}_{0})\)._

**Definition 2.3** (Basin of Attraction). _Given a continuous dynamical system (1) with initial condition (2) and a forward limit set \(\omega_{T}(\mathbf{x}_{0})\), we define the basin of attraction for \(\omega_{T}(\mathbf{x}_{0})\) to be the set of initial conditions whose forward limit sets are attracted to \(\omega_{T}(\mathbf{x}_{0})\). That is, \[B(\omega_{T}(\mathbf{x}_{0})):=\{\mathbf{x}:\omega_{T}(\mathbf{x})\subseteq\omega_{T}(\mathbf{x}_{0})\} \tag{4}\] for some real and positive \(T\)._

The basin of attraction for a forward limit set is a central concept for this study. Next, we introduce:

**Definition 2.4** (Attractor). _Given a continuous dynamical system (1) with initial condition (2), we define its forward limit set \(\omega_{T}(\mathbf{x}_{0})\) to be an attractor if there exists some \(y\in\mathbf{R}^{n}\) such that \(y\notin\omega_{T}(\mathbf{x}_{0})\) and \(y\in B(\omega_{T}(\mathbf{x}_{0}))\)._

This means that attractors are forward limit sets that are either asymptotically stable (e.g. stable equilibria, "stable" limit cycles) or are saddle points (since points starting on the stable manifold converge towards the saddle point).
Given the definition of an attractor and its associated basin, we are now in a position to define a chaotic set. In general, a function \(f\) on a metric space \(X\) is said to be chaotic (following [8]) if it satisfies the following conditions:

* \(f\) is topologically transitive,
* \(f\) has dense periodic orbits in \(X\), and
* \(f\) has sensitive dependence on initial conditions.

For this study, we consider chaos through the idea of Lyapunov exponents.

**Definition 2.5** (Lyapunov Exponent). _Given a continuous dynamical system (1) with initial condition (2), we define the \(i^{th}\) Lyapunov exponent of a trajectory (for \(i\in\{1,2,...,n\}\)) to be \[\lambda_{i}:=\lim_{t\rightarrow\infty}\frac{1}{t}\log\frac{||\epsilon_{i}(t)||_{2}}{||\epsilon_{i}(0)||_{2}}, \tag{5}\] where \(\epsilon_{i}(t)=\epsilon\dot{x}_{i}(t)\) for some small \(\epsilon>0\), \(\epsilon_{i}(0)\) is the initial perturbation of the trajectory, and \(\dot{x}_{i}(t)\) is the change in the solution trajectory along the \(i^{th}\) axis with respect to time. We define the Lyapunov spectrum to be the set of Lyapunov exponents for the trajectory, \(\{\lambda_{1},\lambda_{2},...,\lambda_{n}\}\), with \(\lambda_{1}\geq\lambda_{2}\geq...\geq\lambda_{n}\)._

Lyapunov exponents give us a powerful tool with which we can characterise chaos for continuous systems. Dingwell [11], in his review of Lyapunov exponents, suggests that chaos arises when the Lyapunov exponents satisfy the following conditions:

* there is at least one positive Lyapunov exponent, and
* the sum of the Lyapunov exponents is negative, ensuring the system is dissipative.

We impose an extra qualifier for continuous systems by requiring that the system is of dimension at least three, since chaos cannot occur in continuous, two-dimensional problems [7]. Hence, we are now left with the following definition of a chaotic set:

**Definition 2.6** (Chaotic Set). _Given a continuous dynamical system (1) with initial condition (2), we define the forward limit set \(\omega_{T}(\mathbf{x}_{0})\) to be chaotic if the Lyapunov exponents \(\lambda_{1},\lambda_{2},...,\lambda_{n}\) (with \(\lambda_{1}\geq\lambda_{2}\geq...\geq\lambda_{n}\)) along the solution trajectory are such that:_

* \(n>2\)_,_
* \(\lambda_{1}>0\)_, and_
* \(\sum_{i=1}^{n}\lambda_{i}<0\)_._

Finally, given we have tied down the ideas of a chaotic set and an attractor, we can now define our chaotic attractor:

**Definition 2.7** (Chaotic Attractor). _Given a continuous dynamical system (1) with initial condition (2), we define the forward limit set \(\omega_{T}(\mathbf{x}_{0})\) to be a chaotic attractor if it is both a chaotic set and an attractor._

While basins of attraction and chaotic attractors are important to this study, they are not the only facets we are interested in; we are also concerned with the analysis of any crises that might occur in the system.
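A standard way to estimate the full Lyapunov spectrum of Definition 2.5 numerically is a Benettin-style QR method: propagate a frame of tangent vectors along the trajectory and re-orthonormalise it regularly. The sketch below is our own, not the paper's Appendix code; it reuses `lfsbm` from the earlier sketch and uses simple Euler stepping purely for brevity.

```python
import numpy as np

def jacobian(u, a, xi, eta, zeta, mu=10.0, rho=28.0, beta=8.0 / 3.0):
    """Jacobian of system (1.3); d|T - S|/dT = sign(T - S) almost everywhere."""
    x, y, z, T, S = u
    s = np.sign(T - S)
    return np.array([
        [-mu,      mu,    0.0,  0.0,                         0.0],
        [rho - z, -1.0,   -x,   0.0,                         0.0],
        [y,        x,   -beta,  0.0,                         0.0],
        [a,       0.0,   0.0,  -(1.0 + abs(T - S)) - T * s,  T * s],
        [a,       0.0,   0.0,  -S * s, -(zeta + abs(T - S)) + S * s]])

def lyapunov_spectrum(u0, a, xi, eta, zeta, dt=1 / 400, n_steps=40_000):
    """Benettin/QR estimate of the five exponents along one trajectory."""
    u, Q = np.array(u0, dtype=float), np.eye(5)
    log_sums = np.zeros(5)
    for _ in range(n_steps):
        u = u + dt * np.array(lfsbm(0.0, u, a, xi, eta, zeta))  # Euler step
        Q, R = np.linalg.qr((np.eye(5) + dt * jacobian(u, a, xi, eta, zeta)) @ Q)
        log_sums += np.log(np.abs(np.diag(R)))                  # growth per step
    return np.sort(log_sums / (n_steps * dt))[::-1]             # descending order
```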
The way we shall accomplish this is via a finite-time analysis, of which the main component is the Finite-Time Lyapunov Exponent (FTLE):

**Definition 2.8** (Finite-Time Lyapunov Exponent). _Given a continuous dynamical system (1) with initial condition (2), we define the \(i^{th}\) Finite-Time Lyapunov exponent of a trajectory (for \(i\in\{1,2,...,n\}\)) over a time interval \(t\in[t_{0},t_{0}+T]\) to be \[\Lambda_{i}(t)_{t_{0}}^{t_{0}+T}:=\frac{1}{T}\log\frac{||\epsilon_{i}(t)_{t_{0}}^{t_{0}+T}||_{2}}{||\epsilon_{i}(0)_{t_{0}}^{t_{0}+T}||_{2}}, \tag{6}\] where \(\epsilon_{i}(t)_{t_{0}}^{t_{0}+T}=\epsilon\dot{x}_{i}(t)_{t_{0}}^{t_{0}+T}\) for some small \(\epsilon>0\), \(\epsilon_{i}(0)\) is the initial perturbation of the trajectory, and \(\dot{x}_{i}(t)_{t_{0}}^{t_{0}+T}\) is the change in the solution trajectory along the \(i^{th}\) axis with respect to time over the interval._

The main difference between FTLEs and Lyapunov exponents is that, since the former are only calculated over a finite-time interval, we can create a "moving window" by gradually shifting the boundaries of our interval at each step, thereby making the FTLEs a function of time and allowing us to assess how they evolve along a given trajectory.

## 3 Behaviour of the chaotically forced model

We now consider our LFSBM model given by the system (1.3). Under no forcing (i.e. when \(a=0\)), the LFSBM behaves exactly like the SBM (1.1) and the L63 model (1.2) in their respective phase planes. When we introduce forcing into the LFSBM, then while the Lorenz solutions remain invariant, the SBM solutions change (unless we choose the initial condition of the L63 model to be the origin). When the L63 solution is chaotic (_i.e._ the L63 initial conditions are not set to an equilibrium value), the equilibria in the Stommel phase plane are turned into chaotic attractors as a consequence. These chaotic attractors can be one of two types:

* Single-Regime: We deem single-regime chaotic attractors to be structures that possess only one regime in the Stommel phase plane. These single-regime attractors are rather diverse in character and their stability properties can vary. In the event of a boundary crisis (a concept we introduce below), a single-regime chaotic attractor that loses stability may begin to exhibit excursive behaviours, where solution trajectories may temporarily deviate from the main attractor (Figure 1).
* Dual-Regime: Chaotic attractors that have two regimes in the Stommel phase plane are called dual-regime attractors. Such structures often assume a distinct shape consisting of two mini-loops augmented by one larger loop that connects them (Figure 1).

Figure 1: _[Top Left, Bottom Left] Examples of single-regime chaotic attractors under Stommel parameters \((a,\xi,\eta,\zeta)=(0.0159,3,1,0.3)\) and \((0.0276,0,0,-1)\). [Top Right] An example of a dual-regime chaotic attractor with \((a,\xi,\eta,\zeta)=(0.0226,0,1,-2.1)\). [Bottom Right] An example of a single-regime chaotic attractor with \((a,\xi,\eta,\zeta)=(0.0541,1,-1,-2)\) that exhibits excursive behaviour, showing signs of losing stability under further forcing._

Once the forcing reaches some critical value \(a_{c}\), the LFSBM undergoes a crisis, when bistability in the system collapses into monostability as a result of one of the chaotic attractors losing stability. Mehra and Ramaswamy [18] define three different types of crises for systems with multiple chaotic attractors (such as the LFSBM):
* Boundary Crises: A chaotic attractor is destroyed (typically as a result of a collision with a saddle) and is reduced to a chaotic transient that flows into another attractor.
* Interior Crises: The size of a chaotic attractor suddenly increases or decreases due to a collision with the stable manifold of an unstable periodic orbit inside its basin of attraction.
* Attractor-Merging Crises: Two or more chaotic attractors simultaneously collide with the stable manifold of an unstable periodic orbit along a shared boundary in the basin of attraction, resulting in the chaotic attractors fusing.

In their work, Mehra and Ramaswamy use the Maximal Lyapunov Exponent (MLE) as a predictor of which type of crisis is likely to occur. In our chosen model (1.3), the combination of the Lorenz equations and a typical stable equilibrium in the Stommel model with negative Lyapunov exponents implies that the MLE of the system remains constant (\(\approx 0.9057\)). This means that interior and attractor-merging crises cannot occur in the LFSBM, since they require significant changes in the MLE [18], but the possibility of boundary crises cannot be ruled out (we provide an example of a boundary crisis in Figure 2). However, there is also a fourth family of crisis, defined as follows:

**Definition** (Vanishing Basin Crisis). _Let \(B_{a}(\omega_{T}(\mathbf{x}_{0}))\) be the basin of attraction for a chaotic attractor \(\omega_{T}(\mathbf{x}_{0})\) at some prescribed forcing strength \(a\). If, for some \(a=a_{0}\), we have_

* \(B_{a_{0}+\varepsilon}(\omega_{T}(\mathbf{x}_{0}))\subset B_{a_{0}}(\omega_{T}(\mathbf{x}_{0}))\) _(and_ \(B_{a_{0}+\varepsilon}(\omega_{T}(\mathbf{x}_{0}))\neq\emptyset\)_) when_ \(0<\varepsilon<A\) _(where_ \(A=a_{c}-a_{0}>0\)_), and_
* \(\lim_{\varepsilon\to A^{-}}|B_{a_{0}+\varepsilon}(\omega_{T}(\mathbf{x}_{0}))|=0\)_,_

_then we say that \(\omega_{T}(\mathbf{x}_{0})\) undergoes a vanishing basin crisis, and loses its basin of attraction completely at some \(a_{c}\)._

We remark that for the vanishing basin crisis here, since our attention is confined to the Stommel phase plane, we only consider the values of \(T\) and \(S\) within the basin of attraction. We do this by fixing \((x,y,z)=(x_{0},y_{0},z_{0})\) at time \(t=0\). The main distinction between this type of crisis and the other three families is that with the vanishing basin the crisis does not occur instantaneously. Rather, the crisis develops with increasing forcing strength, starting from some \(a_{0}\) before the attractor completely loses its stability at some \(a_{c}\) (\(>a_{0}\)). As the forcing increases from \(a_{0}\) to \(a_{c}\), it can be shown that the size of the basin of attraction shrinks concomitantly. At the completion of the crisis the chaotic attractor is not destroyed. Instead, the attractor simply loses its basin of attraction and becomes a ghost attractor (a designation coined by Belykh et al. [5]). We provide an example of a vanishing basin crisis in Figure 3.

As forcing increases past \(a_{c}\), the general regime pattern that existed shortly before the crisis remains stable.
It continues to develop until a second critical forcing level \(a_{r}\) is attained whereupon, though the system itself remains monostable, the chaotic attractor undergoes significant structural change.

* If the system suffered a boundary crisis, then the resulting chaotic transient persists in that region of phase space before merging with the remaining chaotic attractor at forcing value \(a_{r}\) (Figure 4).
* On the other hand, if the system was subject to a vanishing basin crisis, then the attractor that loses its associated basin simply vanishes from view, with solution trajectories attempting (and failing) to locate it before entering the other attractor, until the forcing reaches the value \(a_{r}\). When \(a=a_{r}\), the attractor with the disappearing basin re-appears (found by solution trajectories) and merges with the existing chaotic attractor, hence becoming a ghost attractor (Figure 4).

### Chaotic Saddle Collisions

As noted previously, a boundary crisis typically occurs when a chaotic attractor collides with a chaotic saddle. In order to illustrate such an event, we examine the evolution of the chaotic attractors and saddle for the system (1.3). We chose a set of Stommel parameters (albeit nonphysical) which lead to a long chaotic transient after the crisis, and so we selected \(\xi=0\), \(\eta=1\) and \(\zeta=-2.1\). With these particular parameter values the critical forcing \(a_{c}\in(0.0226,0.0227)\), and we explore the effect of increasing the forcing strength beyond \(a_{c}\).

Figure 2: Basins of attraction for Stommel parameters \((\xi,\eta,\zeta)=(1,-1,-2)\) with Lorenz initial conditions \((x,y,z)=(-1,-1,-1)\); [Top Left] \(a=0\), [Top Right] \(a=0.0541\), and [Bottom Left] \(a=0.0542\). Images in the top row show the thermally-driven (TH, black) and saline-driven (SA, orange) attractors. These remain stable until the boundary crisis happens in \(a\in(0.0541,0.0542)\), at which point the SA attractor is destroyed. [Bottom Right] shows an example trajectory with initial conditions \((T,S)=(-1,1)\), showing the chaotic transient that remains and the visible point of divergence between the two trajectories.

In order to visualise the saddle, we adopt the so-called _Saddle-Straddle Algorithm_ [3, 19, 26]. The algorithm was originally developed in [3] as a way to detect segments which belong to the saddle. We implemented the algorithm in the manner described in [26]. In brief, the algorithm works by first selecting pairs of points which straddle the basin boundary by a predetermined length \(\delta\) (we used \(\delta=10^{-5}\)). The points are then iterated forward under the dynamics for some chosen window and refined again to ensure that they again straddle a small segment. We then assume the midpoint of the resulting segment to be part of the chaotic saddle.
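A schematic reading of one Saddle-Straddle iteration is sketched below; this is our own simplified rendering (with a crude sign-of-\(T-S\) basin test and illustrative tolerances), not the careful implementation of [26], and it assumes `lfsbm` from the earlier sketch is in scope.

```python
import numpy as np
from scipy.integrate import solve_ivp

def endpoint(u0, a, p, t_end):
    """Integrate system (1.3) for time t_end; p = (xi, eta, zeta)."""
    return solve_ivp(lfsbm, (0.0, t_end), u0, args=(a, *p),
                     rtol=1e-8, atol=1e-8).y[:, -1]

def basin(u0, a, p):
    """Crude basin label from a long run: +1 for TH (T > S), -1 for SA."""
    uf = endpoint(u0, a, p, t_end=200.0)
    return 1 if uf[3] > uf[4] else -1

def straddle_step(uA, uB, a, p, delta=1e-5, window=1.0):
    """One iteration: bisect the pair until it brackets the basin boundary
    within delta, record the midpoint, then flow both points forward."""
    assert basin(uA, a, p) != basin(uB, a, p), "pair must straddle the boundary"
    while np.linalg.norm(uA - uB) > delta:
        mid = 0.5 * (uA + uB)
        if basin(mid, a, p) == basin(uA, a, p):
            uA = mid
        else:
            uB = mid
    saddle_point = 0.5 * (uA + uB)          # approximate point on the saddle
    return endpoint(uA, a, p, window), endpoint(uB, a, p, window), saddle_point
```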
Images in the top row show the thermally-driven (TH, black) and saline-driven (SA, red) attractors. As forcing increases, the basin of attraction for the SA attractor starts to shrink. One such example is shown [Bottom Right] with its initial condition of \((T,S)=(0,1)\), losing its attraction to the SA attractor between \(a=0.0276\) and \(a=0.0277\).

Just after the crisis (\(a=0.0227\)), the trajectory which shadows the previous TH attractor diverges from that attractor near the apparent collision point. This creates a long chaotic transient which is not part of the surviving SA (saline-driven) attractor. The behaviour depicted in Figure 5 is very interesting when viewed from a finite-time standpoint. While the asymptotic behaviour after the crisis is convergence to the SA attractor, the system spends a long period of time elsewhere in phase space tracing the previously existing TH attractor. If this process were analysed in isolation, this might seem to be a potentially reversible transition between the TH and SA states, whereas in actuality the crisis has already occurred and the system is destined to eventually converge to the SA attractor. Similar behaviour has previously been seen in quasi-periodically forced delay models [21, 22] and in systems with delayed Hopf bifurcations (see e.g. [13] and references therein). In both of the aforementioned cases, the system undergoes a bifurcation in which the stability of one attractor is lost, but if initial conditions are sufficiently close to that attractor, then the trajectory can remain near it for an extended period of time before converging to the true attractor. Such behaviour is important to understand when considering reversible and irreversible transitions or regime shifts.

## 4 Finite-time Analysis

We begin with the aim of seeing how the FTLEs of the system change over the course of a given solution trajectory for a given set of initial conditions and parameters. In order to do this, we require the use of various numerical methods (see Appendix A for details). From this, we derive five different FTLEs at any given point along the trajectory, three of which are associated with the Lorenz attractor forcing, and the other two with the SBM response. For our finite-time analysis, we arbitrarily set the step size for our FTLEs (and the ODE solver by extension) to be \(\Delta t=\frac{1}{400}\), and chose to calculate across the range \(t\in[0,100]\), giving us 40,000 time steps in total. Any time step could be used, but the detailed quantitative properties of the induced chaotic signal are sensitive to the time step, as is the level of forcing required to induce a crisis for a given set of parameters. We also remark at this point that the length of the FTLE window we select for the study will give different results in terms of FTLE behaviour; while smaller FTLE windows tend to give more volatile results, longer windows typically lead to much smoother outcomes.

Figure 4: [Left] An example of a chaotic transient merge using Stommel parameters \((\xi,\eta,\zeta)=(1,-1,-2)\) with initial condition \((x,y,z,T,S)=(-1,-1,-1,-1,1)\). [Right] An example of a ghost attractor merge using Stommel parameters \((\xi,\eta,\zeta)=(0,0,-1)\) with initial condition \((x,y,z,T,S)=(1,1,1,0,1)\). The blue trajectory gives a chaotic attractor that loses stability after a crisis, the red trajectory depicts the remaining chaotic attractor following a crisis, and the green trajectory indicates the chaotic attractor following the ghost attractor (or chaotic) transient merge.
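For concreteness, the sketch below sets up a trajectory on this default grid in Python (the study's own computations were done in MATLAB with the fixed-step RK4 scheme of Appendix A; here SciPy's adaptive integrator stands in for brevity). The \(ax\) coupling of the Lorenz signal into the Stommel equations is written to match the forced variants given explicitly in section 5, and the forcing strength, parameter set and initial condition are illustrative choices taken from the figures above.

```python
import numpy as np
from scipy.integrate import solve_ivp

MU, RHO, BETA = 10.0, 28.0, 8.0 / 3.0    # standard L63 parameters used in the text

def lfsbm(t, u, a, xi, eta, zeta):
    """Right-hand side of the Lorenz-forced Stommel box model (a sketch)."""
    x, y, z, T, S = u
    q = abs(T - S)                        # the |T - S| flow-strength term
    return [MU * (y - x),                 # Lorenz forcing ...
            x * (RHO - z) - y,
            x * y - BETA * z,
            xi + a * x - T * (1.0 + q),   # ... driving the Stommel box
            eta + a * x - S * (zeta + q)]

# Default grid: dt = 1/400 on t in [0, 100], i.e. 40,000 steps in total.
dt, t_end = 1.0 / 400.0, 100.0
t_eval = np.arange(0.0, t_end + dt, dt)
sol = solve_ivp(lfsbm, (0.0, t_end), [-1.0, -1.0, -1.0, -1.0, 1.0],
                args=(0.02, 1.0, -1.0, -2.0), t_eval=t_eval,
                rtol=1e-9, atol=1e-12)
T, S = sol.y[3], sol.y[4]                 # Stommel components of the trajectory
```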
While we will discuss other FTLE windows at appropriate stages, we set our default FTLE window to 400 time steps (_i.e._ of length 1). The five FTLEs we derive for the LFSBM are referred to in their typical order from highest to lowest: the unstable Lorenz FTLE, the neutral Lorenz FTLE, the first Stommel FTLE, the second Stommel FTLE and the stable Lorenz FTLE. We first consider the Stommel FTLEs in the unforced case (\(a=0\)). For a bistable system with two stable equilibria in differing regions (one in the TH region, the other in the SA region of the phase plane), the behaviour of these FTLEs will mostly depend on the nature of the equilibrium in the relevant region of the phase plane, as given by Hartman linearisation of equation (1).

Figure 5: _The evolution of the chaotic saddle at an increasing level of forcing strength: [upper left] \(a=0.001\), [upper right] \(a=0.0113\) and [lower left] \(a=0.0226\). In these three frames the dark blue depicts the chaotic saddle, black the TH (thermally-driven) attractor, and gold the SA (saline-driven) attractor. [Lower right] The trajectory for the same initial conditions is plotted for \(a=0.0226\) (black) and \(a=0.0227\) (turquoise)._

If the equilibrium in the region is a stable node, then the Stommel FTLEs will converge towards the two eigenvalues of the stable node and then remain without major change (and do not vary with different FTLE window lengths). We refer to this natural behaviour of the Stommel FTLEs as distinct separation. If the equilibrium in the region is a stable focus, however, then the Stommel FTLEs will oscillate between two different values. This oscillation will be centred around the value of the real part of the eigenvalues, with an amplitude that is dependent on the length of the FTLE window. A longer FTLE window will see the amplitude trend towards zero, while a shorter FTLE window will see the amplitude approach the imaginary part of the eigenvalues (Figure 6). We refer to this natural behaviour of the Stommel FTLEs as rapid oscillation. We showcase the two natural behaviours in Figure 7.

Figure 6: The evolution of the Stommel FTLEs over the trajectory corresponding to parameter choices \((\xi,\eta,\zeta)=(1,-1,-2)\) and initial condition \((x,y,z,T,S)=(-1,-1,-1,-1,1)\) and with no forcing. [Top-Left] A window size of 0.01 (4 steps), [Top-Right] A window size of 0.1 (40 steps), [Bottom-Left] A window size of 1 (400 steps), [Bottom-Right] A window size of 10 (4,000 steps).
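These unforced limits can be anticipated without computing any FTLEs at all, directly from the linearisation of the Stommel equations at the relevant equilibrium. The Python sketch below does this numerically; the piecewise derivative of \(|T-S|\) and the equilibrium guess (chosen to land on the \(S>T\) equilibrium for \((\xi,\eta,\zeta)=(1,-1,-2)\), the parameter set of Figure 6) are our own illustrative choices.

```python
import numpy as np
from scipy.optimize import fsolve

def stommel_rhs(v, xi, eta, zeta):       # unforced (a = 0) Stommel equations
    T, S = v
    q = abs(T - S)
    return [xi - T * (1.0 + q), eta - S * (zeta + q)]

def stommel_jac(v, xi, eta, zeta):       # Jacobian, using d|T-S|/dT = sgn(T-S)
    T, S = v
    q, s = abs(T - S), np.sign(T - S)
    return np.array([[-(1.0 + q) - T * s, T * s],
                     [-S * s, -(zeta + q) + S * s]])

xi, eta, zeta = 1.0, -1.0, -2.0
eq = fsolve(stommel_rhs, [0.5, 1.8], args=(xi, eta, zeta))
lam = np.linalg.eigvals(stommel_jac(eq, xi, eta, zeta))
print(eq, lam)  # complex pair -> 'rapid oscillation' centred on Re(lam);
                # real pair -> 'distinct separation' at the two eigenvalues
```

For this parameter set the routine returns a complex-conjugate pair, i.e. a stable focus, which is consistent with the rapid oscillation behaviour seen in Figure 6.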
As forcing is introduced to the model, the Stommel FTLEs start to behave differently depending on the strength of the forcing. With the aforementioned window choice, the Stommel FTLEs start to exhibit spikes in their values at certain points along the trajectory (these are smoothed out in the larger window choices, see Figure 8), while the general values and bounds start to vary over time (Figure 9). If the trajectory of a solution crosses regions (_e.g._ from TH to SA, temporarily or otherwise), the FTLEs will exhibit the behaviour associated with the other region. This general behaviour persists as the forcing strengthens until it nears the crisis point. Around this stage, we begin to notice some more significant differences in the behaviour of the Stommel FTLEs (though this will vary from case to case). One such notable change is signs of FTLE alignment, particularly between the first Stommel and neutral Lorenz FTLEs. An example of this can be seen in Figure 8 (Top Left).

To assess the alignment between the neutral Lorenz and first Stommel FTLEs, we used a simple absolute distance metric, which is defined as:

\[d(x_{1},x_{2}):=|x_{1}-x_{2}| \tag{1}\]

for some scalars \(x_{1}\) and \(x_{2}\) (representing the two FTLEs at a point in time). We measure the gap between the two FTLEs at a given instant in time, taking lower distances to be indicative of a stronger alignment. Using this metric, we found that just before and just after a crisis, this alignment is at its strongest when the trajectory is at a critical juncture such that only a slight increase in the forcing strength will be enough to cause convergence to switch to the other attractor (Figure 10). This strong alignment is more evident in cases where the attractor that loses stability under forcing resides in the SA region, but is still present in the cases where the trajectory initially enters the TH attractor (Figure 11). However, the general alignment strength between the neutral Lorenz and first Stommel FTLEs across the entirety of a trajectory before and after a crisis is dependent on the system and initial conditions. In some cases there is a clear strengthening of the alignment, while others demonstrate a general weakening of the alignment, particularly in those cases when the remaining stable attractor is of the dual-regime type. (The example in Figure 11 is one where the stable attractor is a dual-regime attractor.)

Figure 7: _[Left] An example of rapid oscillation behaviour in the Stommel FTLEs under no forcing. [Right] An example of distinct separation behaviour in the Stommel FTLEs under no forcing._

In general, however, more useful measures for alignment strength already exist in the use of Lyapunov vectors, which not only help characterise the dynamics of a system [12], but whose strong alignment can also be used to predict chaotic transitions (e.g. [4], [23]). Future research in the area would endeavour to identify some measure that could better characterise this alignment between the two FTLEs. Another notable change that happens to the FTLEs as the forcing strength increases is that we notice instances in which the neutral Lorenz and first Stommel FTLEs interchange modes (see Figure 12 for an example of how the neutral Lorenz value is affected by the forcing strength). However, it is not evident whether this behaviour is a property inherent within these forced systems or is actually a side-effect of the algorithm used to calculate the FTLEs (a backwards QR method with Gram-Schmidt orthogonalisation). The algorithm works nicely when the FTLEs are reasonably well spaced, but tends to lose accuracy as the FTLEs approach each other. (This is a plausible explanation for the mode-swaps that we occasionally see for these trajectories.) This can be further affected by the particular algorithm used in calculating a new set of FTLEs at every time step.

Figure 8: The FTLEs along the trajectory given Stommel parameters \((\xi,\eta,\zeta)=(3,1,0.3)\), initial condition \((x,y,z,T,S)=(1,1,1,2.8,2.8)\) and a forcing of strength \(a=0.0160\). [Left] A window length of 1 (400 steps), and [Right] a window length of 10 (4,000 steps). [Top] A comparison of the neutral Lorenz and first Stommel FTLEs, while [Bottom] compares the two Stommel FTLEs. The loss of useful information with increasing window length is noticeable.
Further research in the area would test alternative algorithms for calculating FTLEs; this would shed light on whether the algorithm itself directly contributes to the phenomenon of mode swapping. One key issue is whether there is a reliable predictor of which chaotic attractor will lose stability as the forcing strength increases. We have been unable to find a definitive answer, but some possible predictors can be ruled out. In the course of our work, we looked at both the sum of Lyapunov exponents and the Kaplan-Yorke dimension [14] as possible predictors of a crisis. For the first of these, we conjectured that of the two chaotic attractors in a bistable solution, the attractor with the greater sum of Lyapunov exponents (under no forcing) would lose stability under imposed forcing.

Figure 9: Some distinct separation [Left, \((T,S)=(1.4,1.4)\)] and rapid oscillation [Right, \((T,S)=(2.8,2.8)\)] behaviours under forcing strengths \(a=0.001\) [Top] and \(a=0.01\) [Bottom] for Stommel parameters \((\xi,\eta,\zeta)=(3,1,0.3)\) and Lorenz initial conditions \((x,y,z)=(1,1,1)\). The differences between the two levels of forcing arise owing to subtle variations along the trajectory.

Figure 10: The neutral Lorenz and first Stommel FTLEs for parameters \((\xi,\eta,\zeta)=(3,1,0.3)\) and initial conditions \((x,y,z,T,S)=(1,1,1,2.8,2.8)\) when [Top] \(a=0.01\), [Centre] \(a=0.0159\), and [Bottom] \(a=0.0160\). The results in the left-hand column show the values across the trajectory while the right-hand column demonstrates the distance between the two FTLEs across the trajectory. We see that as we approach the crisis there is a more visible alignment between the two FTLEs at certain points along the trajectory.

However, if we study equation (3) with \((\mu,\rho,\beta,\xi,\eta,\zeta)=(10,28,\frac{8}{3},3,1.2,0.3)\), we find (under no forcing) that the sum of the Lyapunov exponents is \(-16.46843\) for the TH attractor and \(-15.362963\) for the SA attractor. As we would expect, it is the TH attractor that has the lower sum of Lyapunov exponents, but under forcing, the TH attractor is the one that collapses (Figure 13). Hence, unfortunately, we cannot rely on using the sum of Lyapunov exponents to predict a collapse. If we turn to the Kaplan-Yorke dimension, we speculated that of the two chaotic attractors in a bistable solution, the one with the higher Kaplan-Yorke dimension (in the absence of forcing) would lose stability when forced.

Figure 11: Results for parameters \((\xi,\eta,\zeta)=(0,1,-2.1)\) and initial condition \((x,y,z,T,S)=(1,1,1,0,-2)\) when \(a=0.0227\) (this forcing is sufficiently strong that the system undergoes a boundary crisis prior to \(t=100\)): [Top Left] The neutral Lorenz and first Stommel FTLEs along the trajectory and [Top Right] the value difference between the two FTLEs along the trajectory. The period of alignment for this and any other TH to SA attractor case is not particularly obvious, but if we zoom in around the point of transition from TH to SA [Bottom Left] and observe the value difference in the interval \(t\in[40,45]\), we see a number of points in this interval where the two FTLEs are visibly close for long enough, especially around the point of transition.
We calculated the Kaplan-Yorke dimension using:

\[D_{L}:=j+\frac{\lambda_{1}+\lambda_{2}+...+\lambda_{j}}{|\lambda_{j+1}|}, \tag{2}\]

for a given Lyapunov spectrum \(\{\lambda_{1},\lambda_{2},...,\lambda_{n}\}\) ordered so that \(\lambda_{1}\geq\lambda_{2}\geq...\geq\lambda_{n}\), and where \(j\) is the largest number such that \(\lambda_{1}+\lambda_{2}+...+\lambda_{j}>0\). However, if we select \((\mu,\rho,\beta,\xi,\eta,\zeta)=(10,28,\frac{8}{3},3.01,6,2)\) with any non-equilibrium initial condition, then we obtain a case where it is the attractor with the lower Kaplan-Yorke dimension that actually loses stability under forcing. This gives us two attractors, the SA attractor at \((T,S)=(2.351467,2.631519)\), and the TH attractor at \((T,S)=(2.998013,2.994014)\). These attractors have the following Lyapunov exponents:

* LEs (TH attractor): 0.90565, -0.000462, -1.505546, -1.506779, -14.573199,
* LEs (SA attractor): 0.90565, -0.000462, -0.258749, -3.582122, -14.573199,

which yields Kaplan-Yorke dimensions of 2.601236 (TH) and 3.180463 (SA) respectively. When forcing increases, we find that it is the TH attractor which loses stability (Figure 14). This contradicts our supposition that the attractor with the lower Kaplan-Yorke dimension remains stable under forcing. Thus neither the sum of Lyapunov exponents nor the Kaplan-Yorke dimension can be relied upon as an accurate forecaster of which unforced attractor will eventually lose stability. It would be of interest to identify some other characteristic which might be of more help in this regard.

Figure 12: _Given Stommel parameters \((\xi,\eta,\zeta)=(3,1,0.3)\) and initial condition \((x,y,z,T,S)=(1,1,1,2.8,2.8)\), the neutral Lorenz FTLE along the trajectory for: [Top Left] \(a=0\), [Top Right] \(a=0.0001\), [Bottom Left] \(a=0.0159\), and [Bottom Right] \(a=0.0160\). We see as forcing is introduced that the neutral Lorenz FTLE is clearly influenced by the Stommel FTLEs and takes on a similar mode to what those FTLEs would do under the given level of forcing. This gives us the “mode-swapping” behaviour._
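The computation behind equation (2) is easily scripted; the short Python sketch below (not the study's MATLAB code) reproduces both quoted dimensions from the listed spectra.

```python
def kaplan_yorke(spectrum):
    """Kaplan-Yorke dimension D_L of equation (2); assumes the full sum is negative."""
    lams = sorted(spectrum, reverse=True)
    partial, j = 0.0, 0
    while partial + lams[j] > 0.0:    # j is the largest index keeping the sum positive
        partial += lams[j]
        j += 1
    return j + partial / abs(lams[j])

th = [0.90565, -0.000462, -1.505546, -1.506779, -14.573199]
sa = [0.90565, -0.000462, -0.258749, -3.582122, -14.573199]
print(kaplan_yorke(th), kaplan_yorke(sa))   # ~2.601236 and ~3.180463
```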
## 5 More general considerations

We next consider the wider applicability of some of our findings. We do this by looking at the properties of crises and finite-time analysis as they relate to the SBM forced now, not by the L63 model, but by other "strange" attractors. Several candidate models were considered, but after extensive testing, we restrict our comments to the Rössler, the Four-Wing and the Halvorsen attractor. In particular, for the Rössler-forced model, we took

\[\begin{array}{rcl}\dot{x}(t)&=&-(y+z),\\ \dot{y}(t)&=&x+by,\\ \dot{z}(t)&=&c+z(x-d),\\ \dot{T}(t)&=&\xi+ax-T(1+|T-S|),\\ \dot{S}(t)&=&\eta+ax-S(\zeta+|T-S|),\end{array} \tag{1}\]

in which the parameters \((b,c,d)=(0.2,0.2,5.7)\). This combination is known to generate a chaotic signal [15].

Figure 13: A counterexample to the idea of using Lyapunov exponent sums to predict the identity of a crisis. [Left] The basin of attraction when \(a=0.02\). [Right] The basin of attraction when \(a=0.03\). Eventually, the TH attractor (black) undergoes a boundary crisis and its transient will flush into the SA attractor (orange), despite having the lower sum of exponents.

For the Four-Wing-forced model, we supposed

\[\begin{array}{rcl}\dot{x}(t)&=&bx+yz,\\ \dot{y}(t)&=&cx+dy-xz,\\ \dot{z}(t)&=&-(z+xy),\\ \dot{T}(t)&=&\xi+ax-T(1+|T-S|),\\ \dot{S}(t)&=&\eta+ax-S(\zeta+|T-S|),\end{array} \tag{2}\]

with \((b,c,d)=(0.2,0.02,-0.4)\), see [27]. Lastly, for the Halvorsen-forced model:

\[\begin{array}{rcl}\dot{x}(t)&=&-(bx+4y+4z+y^{2}),\\ \dot{y}(t)&=&-(by+4z+4x+z^{2}),\\ \dot{z}(t)&=&-(bz+4x+4y+x^{2}),\\ \dot{T}(t)&=&\xi+ax-T(1+|T-S|),\\ \dot{S}(t)&=&\eta+ax-S(\zeta+|T-S|),\end{array} \tag{3}\]

where \(b=1.27\) [25]. We subjected these systems to the same procedures as applied to the LFSBM. We found that vanishing basin crises can be replicated in the Halvorsen-forced model (3) with sets of Stommel parameters similar to those used for the LFSBM. Other strange attractors tested did not seem to undergo such a crisis. We note that when \((\xi,\eta,\zeta)=(3,1,0.3)\) the Rössler-forced model (1) has an SA attractor whose basin decreases with forcing, but which undergoes a regular boundary crisis when \(a\approx 0.0229\) (Figure 15), unlike the Lorenz-forced and Halvorsen-forced models. The underlying reason why the Lorenz and Halvorsen models can give solutions that exhibit vanishing basin crises while the Rössler and Four-Wing models do not is not yet known. This is a possible topic for future research.

Figure 14: The counterexample for the Kaplan-Yorke dimension. [Left] The basin of attraction under no forcing. [Right] The basin of attraction under a forcing strength of \(a=0.002\). The TH attractor (orange) is clearly undergoing a vanishing basin crisis as forcing increases, losing its stability at \(a\approx 0.0024\).

Our findings using other strange attractors also suggest a possible link between the number of regimes in a chaotic attractor and whether or not we can obtain persisting chaotic attractors that merge with chaotic transients under significant levels of forcing. We found that the behaviour of chaotic attractors that combine with chaotic transients (from a boundary crisis) in the Lorenz-forced model, a two-regime attractor, could be reproduced in the Four-Wing-forced model, which is a four-regime attractor (Figure 16). However, we could find no analogous results for either the Rössler or Halvorsen examples, both of which are single-regime attractors. Future research in the matter might either validate or disprove such a conjecture with regard to resulting chaotic transients (and ghost attractors in the case of a vanishing basin crisis).

Figure 15: Basins of attraction for Stommel parameters \((\xi,\eta,\zeta)=(3,1,0.3)\) at crisis levels of forcing for the following: [Top-Left] The Lorenz-forced model with initial condition \((x,y,z)=(1,1,1)\) and \(a=0.0159\), [Top-Right] The Halvorsen-forced model with initial condition \((x,y,z)=(1,1.1,1.2)\) and \(a=0.0285\), [Bottom-Left] The Rössler-forced model with initial condition \((x,y,z)=(1,1,1)\) and \(a=0.0228\), [Bottom-Right] The Four-Wing-forced model with initial condition \((x,y,z)=(1,1,1)\) and \(a=0.0938\). This shows the contrast between the vanishing basin crises for the Lorenz- and Halvorsen-forced models and the boundary crises for the Rössler- and Four-Wing-forced models, visible in the significantly smaller (in some cases almost invisible) basins of attraction for the attractors in question.

In the Lorenz-forced model we found that the neutral Lorenz and first Stommel exponents aligned strongly at forcing levels around crises, near the point where a trajectory is at its most sensitive (to ending up in either chaotic attractor). Finite-time analysis of the other forced models shows that this alignment of the first Stommel FTLE and the equivalent neutral FTLE is also present in the other forced models (Figure 17), according to the absolute distance metric defined in equation (1).
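All of these variants share the same Stommel half and differ only in the three forcing equations, so they can be generated from a single template. The Python sketch below illustrates this shared structure; the \(ax\) coupling and the Rössler and Halvorsen right-hand sides follow equations (1) and (3) above, while the helper names and the example forcing strength are our own.

```python
def forced_stommel(forcing, a, xi, eta, zeta):
    """Build the 5-D right-hand side for a Stommel box driven by `forcing`."""
    def rhs(t, u):
        x, y, z, T, S = u
        q = abs(T - S)
        return [*forcing(x, y, z),                 # any 3-D strange attractor
                xi + a * x - T * (1.0 + q),        # shared Stommel equations
                eta + a * x - S * (zeta + q)]
    return rhs

def rossler(x, y, z, b=0.2, c=0.2, d=5.7):         # equation (1)
    return (-(y + z), x + b * y, c + z * (x - d))

def halvorsen(x, y, z, b=1.27):                    # equation (3)
    return (-(b * x + 4 * y + 4 * z + y * y),
            -(b * y + 4 * z + 4 * x + z * z),
            -(b * z + 4 * x + 4 * y + x * x))

rhs = forced_stommel(rossler, a=0.0229, xi=3.0, eta=1.0, zeta=0.3)
```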
Instances of "mode-swapping" after a given crisis between the Stommel FTLEs and the strange attractor FTLEs also appear when the Stommel model is forced by other strange attractors.

## 6 Conclusion

The Lorenz-Forced Stommel Box Model (LFSBM), as defined by equation (1.3), is a recent hybrid model introduced by Ashwin and Newman in their study on measures for pullback attractors and tipping point probabilities [2]. We have explored solutions to the autonomous forced model in parameter regions that are bistable under no forcing, and have paid particular attention to forcing strengths around levels that induce a crisis which reduces the system to monostability. We have found two different types of crises in the model: a vanishing basin crisis, in which the basin of attraction for a chaotic attractor is depleted over time (the attractor ultimately becoming a ghost attractor), and a boundary crisis, in which a chaotic attractor is destroyed following a collision with a chaotic saddle or other feature of the system. We also find that further increases to the forcing strength can mean that the surviving chaotic attractor merges with either a chaotic transient or a ghost attractor, depending on the type of crisis at hand. Performing finite-time analysis on the LFSBM reveals that with no forcing, the Stommel FTLEs will either oscillate between two specific values or converge to particular values, depending on the nature of the attractor the trajectory enters (in both cases, this is dependent on the Stommel eigenvalues of the attractor). With forcing present, these Stommel FTLEs start to behave a little more like Lorenz FTLEs, potentially mode-swapping with them on occasion, and near crisis levels we found that the neutral Lorenz and first Stommel FTLEs start to align around the point where a solution trajectory is at its most sensitive to transitioning to one or the other of the chaotic attractors.

Figure 16: [Left] The Lorenz-forced model with forcing levels \(a=0.15\) and \(a=0.2\). [Right] The Four-Wing-forced model with \(a=0.65\) and \(a=0.7\). In both cases \((\xi,\eta,\zeta)=(1,-1,-2)\) and the initial conditions chosen are \((x,y,z,T,S)=(-1,-1,-1,-1,1)\). These results demonstrate how the chaotic attractor develops and eventually merges with the resulting chaotic transient after a crisis.

Figure 17: Results of simulations with parameters \((\xi,\eta,\zeta)=(3,1,0.3)\) and initial conditions \((x,y,z,T,S)=(1,1,1,2.8,2.8)\). Here we have conducted finite-time analysis using the Rössler-forced model [Top, \(a=0.0229\)], the Four-Wing-forced model [Centre, \(a=0.0939\)], and the Halvorsen-forced model [Bottom, \(a=0.0286\)], and analyse around the time period where the solution trajectory permanently transitions from the SA region to the TH region [Right]. We note that the alignment between the first Stommel and neutral attractor FTLEs [Left] during this transition is at its strongest either at the TH/SA transition (Halvorsen) or at points along the transition to the TH attractor (Rössler and Four-Wing, to a lesser extent).

Our experiments in which we forced the Stommel model with other strange attractors enabled us to draw some conclusions as to the possible generality of the earlier findings. We could replicate the strong alignment between the first Stommel and the neutral strange attractor FTLEs in the most sensitive areas, while the observations regarding attractors merging with transients and vanishing basin crises do depend on the identity of the strange attractor in question.
We conjecture that whether an attractor merges with the chaotic transient of a destroyed attractor (or a ghost attractor) depends on the regime count of the strange attractor used (where two or more regimes in the strange attractor appear to be a requirement), while a possible explanation for the vanishing basin crises that are seen in the Lorenz-forced and Halvorsen-forced models is currently not apparent. Possible avenues for future research in the area include the use of Lyapunov vectors to better understand FTLE alignment just prior to, or subsequent to, a crisis. It would be helpful to ascertain whether mode-swapping is an artefact of the use of the backwards QR algorithm with Gram-Schmidt orthogonalisation to calculate FTLEs. A further open question concerns the link between the regime count of a strange attractor and whether a stable chaotic attractor merges with a chaotic transient or ghost attractor.

## Appendix A Numerical Methods

In order to analyse crises in the LFSBM (equation (3)), we needed to resort to numerical methods. In this appendix, we outline the techniques we employed to generate the results described in the body of the paper. All calculations were performed using MATLAB R2022a, and we used standard ODE solvers to compute the solution trajectories, to calculate Lyapunov exponents and their finite-time equivalents along a trajectory, and to estimate the basin of attraction for a given (chaotic) attractor.

### The ODE solver

The basic building block we used to obtain our results is an appropriate ODE solver. In this study, we employed the standard fourth-order Runge-Kutta scheme, which can be described by the following for a given system of differential equations:

\[\begin{array}{rcl}K_{j}^{(1)}&=&F(t_{j},Y_{j}),\\ K_{j}^{(2)}&=&F(t_{j}+\frac{1}{2}\Delta t,Y_{j}+\frac{1}{2}\Delta tK_{j}^{(1)}),\\ K_{j}^{(3)}&=&F(t_{j}+\frac{1}{2}\Delta t,Y_{j}+\frac{1}{2}\Delta tK_{j}^{(2)}),\\ K_{j}^{(4)}&=&F(t_{j}+\Delta t,Y_{j}+\Delta tK_{j}^{(3)}),\\ Y_{j+1}&=&Y_{j}+\frac{1}{6}\Delta t[K_{j}^{(1)}+2K_{j}^{(2)}+2K_{j}^{(3)}+K_{j}^{(4)}],\\ t_{j+1}&=&t_{j}+\Delta t,\end{array} \tag{15}\]

where \(\Delta t\) is the size of the time step, \(F\) represents the system of differential equations, \(t_{j}\) is the time at the \(j^{th}\) step, \(Y_{j}\) is the solution trajectory at the \(j^{th}\) step, and \(K_{j}^{(i)},i\in\{1,2,3,4\}\) are the intermediate evaluations for the method. In this study, we chose to hold \(\Delta t\) constant, giving us uniform spacing throughout the lifespan of the trajectory. This is important for the calculation of FTLEs, as it allows us to use constant-size windows and, by extension, a constant number of data points in the FTLE calculation. In addition to the FTLE calculation, the ODE solver underpins everything else we do in this study.

### Lyapunov Exponents

The Lyapunov exponents for a given trajectory were calculated using the backwards QR method coupled to Gram-Schmidt orthogonalisation. We began with the specified system of differential equations, an initial condition, the time interval over which we wished to calculate our exponents (\(t\in[T_{0},T_{m}]\)), the time-step \(\Delta t\), the tolerance value \(\varepsilon\), and a parameter \(r\). This latter quantity was used as a refresh rate, telling us how many steps should be taken before we reset the \(Q\) and \(R\) matrices through the Gram-Schmidt process (we took \(r=1\) for this study).
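Before walking through the loop in prose, the whole procedure can be previewed compactly. The Python sketch below (the study's own implementation is in MATLAB) combines the RK4 stepper of A.1 with the backwards-QR accumulation; `F` and `jac` are assumed to return the system right-hand side (as a NumPy array) and its Jacobian respectively, and we take the refresh rate \(r=1\) as in the study.

```python
import numpy as np
from scipy.linalg import expm

def rk4_step(F, t, Y, dt):
    """One fixed step of the RK4 scheme (15); F returns a NumPy array."""
    K1 = F(t, Y)
    K2 = F(t + 0.5 * dt, Y + 0.5 * dt * K1)
    K3 = F(t + 0.5 * dt, Y + 0.5 * dt * K2)
    K4 = F(t + dt, Y + dt * K3)
    return t + dt, Y + (dt / 6.0) * (K1 + 2.0 * K2 + 2.0 * K3 + K4)

def lyapunov_spectrum(F, jac, y0, t0, tm, dt):
    """Backwards-QR accumulation of the Lyapunov spectrum, refreshing every step."""
    N = int(round((tm - t0) / dt))          # N = (T_m - T_0) / dt steps
    t, y = t0, np.asarray(y0, dtype=float)
    Q = np.eye(y.size)
    spec = np.zeros(y.size)
    for _ in range(N):
        A = expm(jac(y))                    # A = e^J at the current point
        Q, R = np.linalg.qr(A @ Q)          # refresh Q and R (r = 1); NumPy's
        spec += np.log(np.abs(np.diag(R)))  # Householder QR stands in for the
        t, y = rk4_step(F, t, y, dt)        # Gram-Schmidt orthogonalisation
    return spec / N                         # divide by N, as in eq. (13)
```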
We then computed the solution trajectory over the given time interval using \(N=(T_{m}-T_{0})/\Delta t\) steps and starting at time \(T_{0}\). At this point in space, we computed the Jacobian \(J\), evaluated the exponential matrix \(A=e^{J}\) and set the matrix \(Q\) to be the identity matrix before entering the calculation loop. The main calculation loop was done through the use of a backwards QR algorithm, see, for example, [9]. We initiated the calculation loop by putting \(Q=AQ\), and whenever the \(Q\) and \(R\) matrices needed to be reset we performed Gram-Schmidt orthogonalisation using \(Q\), returning \(Q\) and \(R\). In our algorithm, we refreshed \(Q\) and \(R\) after every step. We then took the diagonal elements of \(R\) and added the logarithm of each value to our Lyapunov spectrum. Next, we prepared for the next iteration of the loop by progressing to the following step on the trajectory, taking the Jacobian \(J\) at that point, and then letting \(A=e^{J}\). Once we had completed our calculation loop, we divided each element by the total number of time steps \(N\), and returned the Lyapunov spectrum. In short, we defined \(Q_{i}\) and \(R_{i}\) iteratively by the QR decomposition of \(A_{i}Q_{i-1}\) such that

\[Q_{i}R_{i}=A_{i}Q_{i-1}, \tag{12}\]

and stored each \(R_{i}\), calculating each Lyapunov exponent via

\[\lambda_{j}=\frac{1}{N}\sum_{i=1}^{N}\ln R_{i,jj}, \tag{13}\]

where \(j\) represents the \(j^{th}\) row of the spectrum vector. For this study, we used the Gram-Schmidt Orthogonalisation code available online from web.mit.edu [20].

### Basins of Attraction

After computing the Lyapunov exponent over the time interval \([T_{0},T_{m}]\) (noting, in passing, that this becomes an FTLE as a result of being a finite interval, in contrast to the standard exponent that corresponds to the limit as \(t\rightarrow\infty\)), we can use it for two applications. The first, and major, application is in the calculation of the basin diagrams: a graphical output of the basin of attraction for given chaotic attractors. This method, as piloted by Armiyoon and Wu [1], takes advantage of an invariance property that Lyapunov exponents have on a given chaotic attractor to efficiently compute the numerical basin of attraction for some given chaotic attractors. While Armiyoon and Wu [1] used Monte Carlo techniques to mitigate the run time for finding basins of attraction, we instead optimise by limiting our range to the most relevant sections of the Stommel phase plane (either \(T,S\in[-3,3]\) or \(T,S\in[0,6]\)) and calculating over the finite interval, taking advantage of the fact that, with our ODE solver (A.1), solution trajectories will inevitably converge in finite time; this guarantees similar Lyapunov exponents and allows us to confine the range of our calculation to a very small interval. After accounting for some basic initial conditions, we started by specifying the region in which we calculate the basin of attraction. The grid was discretised through the use of a single variable for the step parameter. A smaller parameter gives a higher resolution, but the price to be paid is a commensurate increase in the computational time. Each initial condition was assigned a colour, which was decided by taking the resulting Lyapunov spectrum and comparing it to the current list of spectra. If the Lyapunov spectrum was unique, then we considered it to be a new chaotic attractor, recorded the trajectory and proceeded.
Otherwise, we grouped it in with the appropriate existing spectrum and assigned it the corresponding colour. We expected a maximum of three attractors in any given basin, so if we found more than three, or there were some that did not converge, we used a default colour (yellow) to indicate that the calculation had failed. Owing to the invariance property of Lyapunov exponents, we could minimise the time spent calculating the spectrum for each initial condition by restricting the computation to the latter portion of the trajectory without significant difference in the basin of attraction. Once all the initial conditions had been considered, we then output the basin of attraction and, for each attractor, we recalculated the Lyapunov spectrum for accuracy and added the chaotic attractor to the basin diagram. This algorithm is not without its faults, however. One of the main issues arose when two attractors in different parts of the phase plane possessed identical Lyapunov exponents; an example of this occurs in equation (1.3) when \(\xi,\eta,a=0\) and \(\zeta<0\). From a behavioural analysis, we know that equilibria will be present at \((T,S)=(0,\pm\zeta)\), so we could use this knowledge to hard-code one of the equilibria in this instance, and call on our knowledge of the stable manifold of the saddle at \((T,S)=(0,0)\) to use a different colour for the \(S>0\) region.

### FTLEs

The other major application of the Lyapunov exponent is to calculate the FTLEs along a given solution trajectory. In order to do this, we needed to take a windowed approach. The algorithm we used takes the same parameters as were used to compute a regular Lyapunov exponent, alongside an extra variable \(I\) to denote the length of each time interval (in the default MATLAB time unit of seconds). Given the interval over which we wished to calculate the trajectory \(T_{m}\), the length of a given time step \(\Delta t\), and the interval for an individual FTLE calculation \(I\), we then began the computation by calculating the FTLEs in the interval \([T_{0},T_{0}+I]\). We then incremented the window by \(\Delta t\) and found the FTLEs over the interval \([T_{0}+\Delta t,T_{0}+\Delta t+I]\). We continued to increment by \(\Delta t\) and calculated the FTLEs in these small intervals until we reached the final interval of calculation: \([T_{m}-I,T_{m}]\). This technique is little more than a brute-force approach in which we simply calculate the FTLE in an interval, recognise it as the FTLE at that instant, increment, calculate, and continue until the right-hand side of the interval of calculation reaches the end of the solution trajectory. At the conclusion of the process, we returned a matrix of FTLEs in the natural sorted order as given by the backwards QR method.

## Acknowledgments

We thank the Sydney Dynamics Group and attendees of the 2022 meeting in Auckland for the insightful discussions.
2309.00662
Model Selection with Baryonic Acoustic Oscillations in the Lyman-alpha Forest
The recent release of the final, complete survey of Lyman-alpha baryonic acoustic oscillation measurements provides the most significant and accurate data base for studying cosmic geometry at an effective redshift z_eff=2.334, which is inaccessible to other sources. In this Letter, we use these data to select among four distinct cosmologies: Planck LCDM, the R_h=ct universe, the Milne universe and Einstein-de Sitter. Given the breadth and depth of the Lyman-alpha study, this BAO measurement alone provides a strong model comparison, complementary to previous studies that combined Lyman-$\alpha$ data with measurements at lower redshifts. Though both approaches are useful, the latter tends to dilute the disparity between model predictions and the observations. We therefore examine how the models compare to each other strictly based on the BAO scale measured in the Lyman-alpha forest and background quasars. We find that Milne and Einstein-de Sitter are strongly ruled out by these data. There is also strong evidence disfavoring the standard model. The Lyman-alpha measurements are completely consistent with the cosmic geometry predicted by R_h=ct. As such, evidence continues to grow that the zero active mass condition from general relativity ought to be an essential ingredient in LCDM.
Fulvio Melia
2023-09-01T13:35:56Z
http://arxiv.org/abs/2309.00662v1
# Model Selection with Baryonic Acoustic Oscillations in the Lyman-\(\alpha\) Forest

###### Abstract

The recent release of the final, complete survey of Lyman-\(\alpha\) baryonic acoustic oscillation measurements provides the most significant and accurate data base for studying cosmic geometry at an effective redshift \(z_{\rm eff}=2.334\), which is inaccessible to other sources. In this Letter, we use these data to select among four distinct cosmologies: Planck \(\Lambda\)CDM, the \(R_{\rm h}=ct\) universe, the Milne universe and Einstein-de Sitter. Given the breadth and depth of the Lyman-\(\alpha\) study, this BAO measurement alone provides a strong model comparison, complementary to previous studies that combined Lyman-\(\alpha\) data with measurements at lower redshifts. Though both approaches are useful, the latter tends to dilute the disparity between model predictions and the observations. We therefore examine how the models compare to each other strictly based on the BAO scale measured in the Lyman-\(\alpha\) forest and background quasars. We find that Milne and Einstein-de Sitter are strongly ruled out by these data. There is also strong evidence disfavoring the standard model. The Lyman-\(\alpha\) measurements are completely consistent with the cosmic geometry predicted by \(R_{\rm h}=ct\). As such, evidence continues to grow that the zero active mass condition from general relativity ought to be an essential ingredient in \(\Lambda\)CDM.

pacs: 98.80.Cq; 98.80.-k; 98.80.Jk. Keywords: Inflation, Cosmology, Relativistic Astrophysics

## 1 Introduction

The angular power spectrum in the cosmic microwave background (CMB) is characterized by an angular scale set at the surface of last scattering, when the radiation began dissociating from baryonic matter. Believed to represent a sonic horizon, \(r_{d}\), this comoving distance is apparently associated with sound waves created at or near the Planck regime, which then propagated across the baryon-photon fluid until the CMB was produced [1, 2, 3]. In a Friedmann-Lemaitre-Robertson-Walker (FLRW) cosmology, in which all proper (or physical) distances grow in proportion to a universal scale factor, \(a(t)\), while comoving distances remain constant, the scale \(r_{d}\) is expected to have remained unchanged, reappearing subsequently as a characteristic length in the matter correlation function. In this context, the sonic scale is more commonly inferred from baryonic acoustic oscillations (BAO) seen in the large-scale structure, the first measurement of which was carried out using the auto-correlation of galaxy positions [4] at \(z\sim 0.35\), and the galaxy power spectrum [5] at \(z\sim 0.1\). More generally, the BAO scale at \(z\lesssim 2\) has been studied using a variety of discrete tracers, such as galaxy clusters [6], and quasars [7], in addition to the aforementioned galaxies [8, 9, 10, 11, 12, 13, 14, 15]. Beyond \(z\sim 2\), however, a measurement of the BAO scale must be handled differently because the number density of observable discrete tracers is too low for high precision clustering studies. The method of choice for measuring \(r_{d}\) in the high-redshift Universe instead rests on the observation of opacity fluctuations in the Lyman-\(\alpha\) forest irradiated by background quasars. The first such studies, focusing on the Lyman-\(\alpha\) auto-correlation function, were carried out by refs. [16], [17], [18], [19], [20] and [21]. Complementary results based on the Lyman-\(\alpha\) and quasar cross-correlation function have also been reported by refs.
[22], [23] and [24]. The BAO feature is a powerful diagnostic in cosmology because it yields both angular-diameter distances and the expansion rate normalized to the sound horizon \(r_{d}\). And while the luminosity distance to type Ia supernovae has already provided strong evidence for the existence of dark energy [25, 26, 27], these transient events are difficult to observe at redshifts \(z\gtrsim 1.8\). The BAO measurements in the Lyman-\(\alpha\) forest, at an effective \(z\sim 2.334\), may therefore be used in several distinct tests of the geometry of the Universe at intermediate redshifts not accessible to local surveys focusing on supernovae (\(z\lesssim 1.8\)) and instruments designed to study the CMB at \(z\gg 1\). A common approach is to combine all of the BAO data, those from the galaxy surveys at low redshifts and those from the Lyman-\(\alpha\) forest farther away, under the assumption that \(r_{d}\) is independent of \(z\). This can be done either in the context of \(\Lambda\)CDM, where one includes the predicted value of \(r_{d}\) in this model to extract the redshift-dependent angular-diameter distance and expansion rate, or in model selection by examining which cosmology is preferred by the BAO measurements. For the latter, one considers \(r_{d}\) to be 'unanchored,' since its value may not be the same from one model to the next. In this case, one either optimizes \(r_{d}\) along with the other model parameters individually for each cosmology being tested, or avoids it altogether by considering ratios of the angular-diameter and Hubble distances, both of which are proportional to the sonic horizon (see Eqs. 3 and 4 below). In previous work [28, 29, 30], we have followed the latter approach using older, less complete BAO catalogs than those available now to compare the standard model with one of its principal competitors known as the \(R_{\rm h}=ct\) universe [31, 32]. Interestingly, the aggregated BAO data have tended to favour the latter model rather than \(\Lambda\)CDM. But the BAO measurements using the lower redshift galaxy surveys are less discerning than their higher redshift counterparts, so while the outcome of these studies has been suggestive, it has not necessarily been compelling. In view of the significantly more complete eBOSS catalog available now (see section labeled 'Data' below), however, the BAO data at \(z=2.334\) by themselves should constitute an important constraint on the geometry of the Universe sampled solely by the quasars in this catalog (distributed at \(z\gtrsim 1.77\)) and their Lyman-\(\alpha\) forests. Since it is well known that a comparison of the model predictions and the BAO measurements tends to become more discordant with increasing redshift, it is desirable to use the Lyman-\(\alpha\) data by themselves for model selection purposes, in parallel to the already completed studies based on the BAO observations at various redshifts. In this Letter, we therefore complement our previous model selection analysis by restricting the comparison to just the Lyman-\(\alpha\) measurements. As noted, we do this for two principal reasons. First, the model contrast at \(z\gtrsim 2\) is significantly greater than that at \(z\sim 0\).
Second, we now have the final BAO constraints from the Lyman-\(\alpha\) auto-correlation and Lyman-\(\alpha\)-quasar cross-correlation functions produced from the sixteenth (DR16) and final release [33] of the fourth generation Sloan Digital Sky Survey (SDSS-IV), containing all of the clustering and Lyman-\(\alpha\) data from the completed 'extended' Baryonic Oscillation Spectroscopic Survey (eBOSS) [34]. This catalog of quasars and Lyman-\(\alpha\) profiles is so large (see 'Data' below) that the BAO scale measured from it at an effective redshift \(\sim 2.334\) constitutes a crucial probe of the cosmology on its own merit. As we shall see, these data alone provide an important comparison of cosmological models and their predictions without having to pre-assume \(r_{d}\) and \(H_{0}\). The latter is especially desirable in view of the growing disparity between the measurements of \(H_{0}\) at low and high redshifts, creating a \(\sim 4\sigma\) uncertainty in its value [35]. We shall find that the depth and extent of the final Lyman-\(\alpha\) BAO analysis [36], along with the option of altogether avoiding the use of \(r_{d}\) and \(H_{0}\), provides us with a very clean and compelling test of the various FLRW cosmologies.

## 2 Data

The analysis in this Letter utilizes the baryonic acoustic oscillations (BAO) measured in the Lyman-\(\alpha\) absorption and background quasars, with an effective (ensemble) redshift \(z=2.334\). The data are taken from the complete extended Baryonic Oscillation Spectroscopic Survey (eBOSS) [36], which includes the Lyman-\(\alpha\) absorption profiles of 210,005 background quasars distributed at \(z_{q}>2.10\). The BAO scale has been measured in both the auto-correlation of the Lyman-\(\alpha\) absorbers and their cross-correlation with 341,468 quasars at \(z_{q}>1.77\). This data release represents several advances over previously published Lyman-\(\alpha\) BAO measurements, including improved statistics from a larger quasar catalog and deeper observations, and a more accurate modeling of the systematics.

## 3 Model Comparisons

The Lyman-\(\alpha\) BAO survey measures the BAO scale in the Lyman-\(\alpha\) forest of absorption of light from distant quasars, both via the forest-forest correlation function [19], and in the forest-quasar cross-correlation [22]. The scale is measured along the line-of-sight, \[\Delta z=\frac{r_{d}}{\left(1+z\right)d_{H}(z)}\;, \tag{1}\] and in the transverse direction, \[\Delta\theta=\frac{r_{d}}{d_{M}(z)}\;, \tag{2}\] where, as previously noted, \(r_{d}\) is the length corresponding to the peak of the matter two-point function in comoving coordinates. In addition, the quantity \[d_{H}(z)\equiv\frac{c}{H(z)} \tag{3}\] is the Hubble scale at \(z\), while \(d_{M}(z)\) is given in terms of the angular-diameter distance via the relation \[d_{M}(z)\equiv(1+z)\,d_{A}(z)\;. \tag{4}\] We mention here that surveys often also define the distance \[d_{V}(z)\equiv z^{1/3}d_{M}^{2/3}(z)\,d_{H}^{1/3}(z)\;, \tag{5}\] an angle-weighted average of \(d_{M}(z)\) and \(d_{H}(z)\), but this quantity was not derived independently of \(d_{M}\) and \(d_{H}\) for these data [36]. Thus, though it tends to be the best determined scale in measurements of BAO from galaxy surveys, we will not find it useful for our analysis in this particular instance, and we shall instead focus our attention on \(d_{M}\) and \(d_{H}\) themselves.
Given that these BAO observables are proportional to \(r_{d}H_{0}\) (see below for the model-dependent functional form of \(H[z]\)), the ratios of the various BAO scales are completely independent of the sonic horizon and Hubble constant. Indeed, as long as we restrict our comparisons to these ratios, three of the models we compare here have no free parameters at all, while the standard model, \(\Lambda\)CDM (see Eq. 11 below), has just one: the scaled matter density \(\Omega_{\rm m}\). But to avoid unduly 'punishing' this cosmology by treating \(\Omega_{\rm m}\) as an unknown variable when assessing its likelihood, we shall simply adopt its _Planck_ optimized value, \(\Omega_{\rm m}=0.315\pm 0.007\), with a spatial curvature constant \(k=0\) [3]. We shall compare fits to the Lyman-\(\alpha\) data using four different cosmological models, each with its own prediction of the angular-diameter and Hubble distances. The quantity \(\Omega_{i}\) is the energy density of species \(i\), scaled to the critical density \[\rho_{c}\equiv\frac{3c^{2}H_{0}^{2}}{8\pi G}\;, \tag{6}\] in terms of the Hubble constant, \(H_{0}=67.4\pm 0.5\) km s\({}^{-1}\) Mpc\({}^{-1}\). In principle, we could completely avoid any reliance on the _Planck_ measurements by selecting the value of \(\Omega_{\rm m}\) that optimizes \(\Lambda\)CDM's fit to the Lyman-\(\alpha\) BAO data, but the improvement is too small to justify the introduction of an additional free parameter. The four models we compare are:

1. The \(R_{\rm h}=ct\) universe, a Friedmann-Lemaitre-Robertson-Walker cosmology with zero active mass, \(\rho+3p=0\), in terms of the total energy density (\(\rho\)) and pressure (\(p\)) in the cosmic fluid [31, 32]. In this case, \[d_{M}^{(1)}(z)=\frac{c}{H_{0}}\ln(1+z)\;,\] (7) and \[d_{H}^{(1)}(z)=\frac{c}{H_{0}(1+z)}\;.\] (8)
2. Flat _Planck_-\(\Lambda\)CDM, with \(\Omega_{\Lambda}=1-\Omega_{\rm m}\) and a dark-energy equation of state parameter \(w_{\rm de}=-1\). For this model, \[d_{M}^{(2)}(z)=\frac{c}{H_{0}}\int_{0}^{z}\frac{du}{E(u)}\;,\] (9) and \[d_{H}^{(2)}(z)=\frac{c}{H_{0}E(z)}\;,\] (10) where \[E(z)\equiv\left[\Omega_{\rm m}(1+z)^{3}+\Omega_{\Lambda}\right]^{1/2}\;.\] (11) At these redshifts, the contribution to \(E(z)\) from radiation is negligible, so we have ignored it for the calculation of \(d_{H}(z)\) and \(d_{M}(z)\).
3. The Milne universe. This well studied solution is also a Friedmann-Lemaitre-Robertson-Walker cosmology, but its energy density, pressure and cosmological constant are all zero. Its expansion instead derives from a non-zero spatial curvature, with \(k=-1\). Like the \(R_{\rm h}=ct\) universe, the Milne scale factor, \(a(t)\), is also linear in time [37, 38], but we include it here primarily because the observable signatures in these two models are very different. In the Milne universe, \[d_{M}^{(3)}(z)=\frac{c}{H_{0}}\sinh\left[\ln(1+z)\right]\;,\] (12) and \[d_{H}^{(3)}(z)=\frac{c}{H_{0}(1+z)}\;.\] (13)
4. Einstein-de Sitter (i.e., Eqs. 9 and 10 with \(\Omega_{\rm m}=1\) and \(\Omega_{\Lambda}=0\)): \[d_{M}^{(4)}(z)=\frac{2c}{H_{0}}\left(1-\frac{1}{\sqrt{1+z}}\right)\;,\] (14) and \[d_{H}^{(4)}(z)=\frac{c}{H_{0}(1+z)^{3/2}}\;.\] (15) This cosmology is already heavily disfavoured by many other observations, but we include it here because of its relevance as a former 'standard' model.
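As a numerical cross-check of the model comparison in Table 1 below, the short Python snippet that follows evaluates \(d_{M}/d_{H}\) for each model directly from the formulas above; assuming two-sided Gaussian p-values, it reproduces the quoted table entries to rounding.

```python
import math
from scipy.integrate import quad

Z, R_OBS, SIGMA = 2.334, 4.17, 0.18      # measured d_M/d_H and its error (Table 1)
OM = 0.315                               # Planck matter density for flat LCDM

def ratio_lcdm(z, om):
    E = lambda u: math.sqrt(om * (1 + u) ** 3 + (1 - om))
    return E(z) * quad(lambda u: 1.0 / E(u), 0.0, z)[0]     # eqs. (9)-(11)

ratios = {
    "R_h = ct":           (1 + Z) * math.log(1 + Z),             # eqs. (7)-(8)
    "Planck-LCDM":        ratio_lcdm(Z, OM),
    "Milne":              (1 + Z) * math.sinh(math.log(1 + Z)),  # eqs. (12)-(13)
    "Einstein-de Sitter": 2 * (1 + Z) ** 1.5 * (1 - 1 / math.sqrt(1 + Z)),
}
for name, r in ratios.items():
    p = math.erfc(abs(R_OBS - r) / (SIGMA * math.sqrt(2)))   # two-sided p-value
    print(f"{name:20s} d_M/d_H = {r:5.3f}   p = {p:.2g}")
```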
The combined BAO measurements from the auto- and cross-correlation in the final, complete eBOSS release [36] yield the following constraints: \[d_{H}(z=2.334)/r_{d} = 8.99\pm 0.19\] \[d_{M}(z=2.334)/r_{d} = 37.5\pm 1.1\;. \tag{16}\] Though a 'fiducial' cosmology is employed in the calculation of these distances, the ratios shown in Equation (16) are model independent, as studied in detail by ref. [39] in the context of galaxy correlations, and confirmed for the Lyman-\(\alpha\) observations by ref. [36].

\begin{table} \begin{tabular}{l l l} \hline \hline Data/Model & \(d_{M}/d_{H}\) & (P-value) \\ \hline Data & \(4.17\pm 0.18\) \\ 1. \(R_{\rm h}=ct\) & 4.02 & (0.39) \\ 2. _Planck_-\(\Lambda\)CDM & 4.56 & (0.03) \\ 3. Milne & 5.058 & (\(<0.00001\)) \\ 4. Einstein-de Sitter & 5.507 & (\(<0.00001\)) \\ \hline \hline \end{tabular} \end{table} Table 1: Model comparison using the Lyman-\(\alpha\) BAO data

The model predictions are compared with these data in Table 1, prioritized in terms of the p-values estimated from the various fits. Given a model's prediction for \(R^{\rm th}\equiv d_{M}/d_{H}\) and the standard deviation (in this case, \(\sigma_{R}=0.18\)) of the measurement, this p-value represents the probability of observing a difference \(|R^{\rm obs}-R^{\rm th}|\) greater than \(|4.17-R^{\rm th}|\) under the assumption that the null hypothesis is true and that the distribution of \(R^{\rm obs}\) is normal. The error quoted for the measured value of \(d_{M}/d_{H}\) includes the correlation between \(d_{M}/r_{d}\) and \(d_{H}/r_{d}\), characterized by the correlation coefficient \(C(d_{H},d_{M})=-0.45\) [36]. Small p-values provide evidence against the assumed model, with the evidence getting stronger as the p-value approaches zero. Typical guidelines suggest the following hierarchy: (\(p>0.10\)) weak or no evidence; (\(0.05<p\leq 0.10\)) moderate evidence; (\(0.01<p\leq 0.05\)) strong evidence; (\(p\leq 0.01\)) very strong evidence. On the basis of these comparisons, it is clear that the Lyman-\(\alpha\) BAO measurements very strongly rule out the Milne and Einstein-de Sitter cosmologies, affirming an already established conclusion drawn from many other comparative tests [32]. There is also strong evidence disfavoring the standard model, an outcome confirmed by several previous studies, including those reported in refs. [36] and [40]. But the principal result of this work is that the Lyman-\(\alpha\) BAO measurements are completely consistent with the geometry of the cosmos predicted by the \(R_{\rm h}=ct\) universe. Pursuing this head-to-head comparison further, we note that if \(\Omega_{\rm m}\) were to differ from its _Planck_ value by \(3\sigma\), i.e., if \(\Omega_{\rm m}=0.294\), then the p-value for \(\Lambda\)CDM would improve somewhat to \(0.065\), so the evidence against the standard model in that case would be 'moderate' instead of 'strong.' Nevertheless, to make the standard model as likely as \(R_{\rm h}=ct\), i.e., to improve its p-value to \(0.39\), \(\Omega_{\rm m}\) would have to be smaller than \(\sim 0.236\), a value different by many standard deviations from that inferred by _Planck_. It is also straightforward to perform a Bayesian analysis [41, 42] of the head-to-head comparison between \(\Lambda\)CDM and \(R_{\rm h}=ct\) based on the Lyman-\(\alpha\) data at \(z=2.334\), specifically, the measurement of \(d_{M}/d_{H}=4.17\pm 0.18\) shown in Table 1.
Under the assumption that this ratio follows a normal distribution, and adopting a point model for both the null and alternative hypotheses (remember that we are fixing the value of \(\Omega_{\rm m}\) at the _Planck_ measurement to avoid unduly 'punishing' the standard model for having an additional free parameter), one infers a Bayes factor of \(7.39\). The marginal likelihoods at the measured value of \(d_{M}/d_{H}\), from which the Bayes factor is estimated, are shown in Figure 1.

Figure 1: Estimation of the Bayes factor from the marginal prediction of the \(R_{\rm h}=ct\) and \(\Lambda\)CDM models versus the measurement of the ratio \(d_{M}/d_{H}\) from the Lyman-\(\alpha\) forest at an effective redshift \(z=2.334\).

A Bayes factor between 3 and 10 (as we have here) indicates 'moderate' or 'substantial' evidence in favor of the alternative hypothesis [43, 44, 45]. As such, the Bayesian analysis complements and confirms our earlier conclusion, derived from the p-values, that the Lyman-\(\alpha\) BAO data at \(z=2.334\) favor the \(R_{\rm h}=ct\) cosmology over the current standard model.

## 4 Conclusion

The strong rejection of the Milne and Einstein-de Sitter cosmologies by the final Lyman-\(\alpha\) BAO data release is hardly surprising in view of their similarly strong rejection by other observations. Our main conclusion from this study instead refocuses our attention on the fact that the data tend to favour the \(R_{\rm h}=ct\) cosmology over the current standard model. Indeed, the BAO measurements offer a new perspective on this comparison, following an earlier examination by ref. [40], who attempted to identify the reason behind the BAO anomaly in the context of \(\Lambda\)CDM. Their analysis showed that the BAO data at \(z>0.43\) are in tension with the standard model, whether or not the _Planck_ optimized parameters (e.g., for \(\Omega_{\rm m}\)) are assumed. They concluded that this tension arises not from the \(\Lambda\)CDM parameters, but instead from the dark energy evolution at \(0.57<z<2.334\). If one further sets \(r_{d}\) equal to the acoustic scale measured in the CMB, a cosmological constant for dark energy is firmly rejected. The \(R_{\rm h}=ct\) cosmology is essentially \(\Lambda\)CDM, though with the crucial additional constraint of zero active mass from general relativity [32]. This constraint features an equation-of-state, \(\rho+3p=0\), in terms of the total energy density \(\rho\) and pressure \(p\) in the cosmic fluid. Dark energy is therefore dynamic in this model, evolving along with the other constituents, presumably an extension to the standard model of particle physics. The current standard model suffers from several major conflicts and inconsistencies that continue to defy serious attempts at mitigation [46]. Together with the continued success of \(R_{\rm h}=ct\) in accounting for the data better than \(\Lambda\)CDM [47], as affirmed by the work reported in this Letter, the prospects for further development of this alternative FLRW cosmology look very promising. An especially exciting future observation to anticipate over the coming decade is the real-time measurement of redshift drift [48, 49], which should provide an unambiguous yes/no answer to the question of whether or not the cosmic fluid is in fact driven by a zero active mass equation-of-state.
2310.12305
Building Random, Fair, and Verifiable Games on Blockchain. Raffle smart contract designs on Sui Network
Randomness plays a pivotal role in modern online gaming, but disputes have arisen over the accuracy of stated winning chances, resulting in legal issues and financial setbacks for gaming companies. Fortunately, blockchain-based games offer a solution to the transparency and fairness issue regarding randomness. Furthermore, emerging blockchain technology like Sui Network enhances the efficiency of smart contracts by eliminating traditional web3 barriers, such as inefficiencies and expensive transaction fees. This unlocks the potential for extensive decentralized gaming applications. This paper aims to provide insights into designing a fair, verifiable, and efficient smart contract game on blockchain by the example of building raffles on the Sui Network. We explore efficient methods for implementing randomness on smart contracts, including DRAND committee-based decentralized random beacons and single private-key-based verifiable random functions (VRF). Then, progress from basic to comprehensive smart contract design. We addressed limitations in developing blockchain games in general, such as data input and storage space constraints. We propose corresponding solutions, encompassing the utilization of Object Tables, Delegate Object Creation, and Zero-Knowledge Proofs (ZKP) to optimize storage and input efficiency. After testing our designs, we found that the transaction fees for DRAND beacons and private-key-based VRFs are similar. Moreover, Object Tables incur higher overall transaction fees, while the ZKP setup fee is cheap but becomes very expensive during the verification process. Moreover, we identified suitable designs for different application scenarios by comparing the pros and cons of different smart contract implementations. Our findings provide valuable guidance for future researchers and developers in building random, fair, and verifiable games with smart contracts.
Eason Chen, Justa Liang, Ray Huang, Pierce Hung, Damien Chen, Ashley Hsu, Konstantinos Chalkias, Stefanos Pleros
2023-10-18T20:12:44Z
http://arxiv.org/abs/2310.12305v3
# Building Random, Fair, and Verifiable Games on Blockchain.

###### Abstract.

Randomness plays a pivotal role in modern online gaming. However, there have been controversies in which the claimed winning probabilities in games did not align with reality, leading gaming companies to face legal challenges and even financial setbacks. Fortunately, blockchain-based games offer a solution to the transparency and fairness issues regarding randomness. Furthermore, emerging blockchain technology like the Sui Network enhances the efficiency of smart contracts by eliminating traditional web3 barriers, such as inefficiencies and expensive transaction fees. This unlocks the potential for extensive decentralized gaming applications. This paper aims to provide insights into designing a fair, verifiable, and efficient smart contract game on blockchain through the example of building raffles on the Sui Network. We explore efficient methods for implementing randomness in smart contracts, including DRAND committee-based decentralized random beacons and single private-key-based verifiable random functions (VRF). Subsequently, we progress from basic to comprehensive smart contract designs. By taking advantage of smart-contract-native cryptographic primitives, we address limitations in developing blockchain games in general, such as data input and storage space constraints. We propose corresponding solutions, encompassing the utilization of Object Tables, Delegate Object Creation, and Zero-Knowledge Proofs (ZKP) to optimize storage and input efficiency. After testing our designs, we found that the transaction fees for DRAND beacons and private-key-based house-owned VRFs are similar. Moreover, Object Tables incur higher overall transaction fees, while the ZKP setup fee is very cheap but becomes very expensive during the verification process. Finally, we identified suitable designs for different application scenarios by comparing the pros and cons of the different smart contract implementations. Our findings provide valuable guidance for future researchers and developers in building random, fair, and verifiable games with smart contracts, with a focus on gas optimization and security, among others.
Distributed Ledger Technology, Blockchain, Smart Contract, Zero Knowledge Proof, Verifiable Random Function, DApps
In light of this, our manuscript seeks to serve as a beacon for forthcoming blockchain game creators, using the development of a random, equitable, and auditable raffle game on Sui as a case study. Our primary research inquiry is as follows:

* How can one construct a fair, verifiable, and cost-efficient raffle game employing smart contracts within the Sui Network?

### Paper organization

In this paper, we first review the background of web3 technologies and explain why Sui is more suitable for game development than traditional blockchains. Next, we describe several methods for implementing blockchain randomness, discussing their advantages and limitations. Then, by progressing from simple to comprehensive designs, we elucidate the process of implementing a random raffle with fair, verifiable, and efficient smart contract designs on the Sui Network. We discuss the advantages and limitations of each design and explore strategies to address these limitations. Finally, we summarize the content above and provide an overall discussion and comparison of the different designs.

## 2. Background

### Blockchain and Smart Contract

Blockchain is a distributed ledger technology (DLT) that securely records transactions across a network, ensuring transparency, immutability, and data trust (Krishnan et al., 2017). Smart contracts are code-based logic running on the blockchain, capable of executing specific actions when predefined conditions are met, thereby automating a multitude of processes. These contracts function within blockchain networks, and presently, numerous decentralized applications (DApps) have been developed across various blockchains utilizing smart contracts. Every call to a blockchain function, such as writing data or executing smart contracts, incurs costs known as transaction fees (Bordes et al., 2017). This practice is indispensable due to the decentralized nature of the blockchain, where computations and data storage occur across multiple nodes. Additionally, to ensure consistency, transactions must be executed sequentially and packaged into blocks, which can limit the computational capacity of the platform. Transaction fees serve as a means to prevent resource misuse and maintain cost equilibrium. However, high transaction fees represent a significant obstacle to the widespread adoption of blockchain applications, as users may be disincentivized from using DApps when transaction fees surpass their potential earnings.
Consequently, DApps centred on earning, such as finance, Non-Fungible Tokens (NFTs), and Gaming Finance, currently dominate the landscape, whereas those emphasizing playability attract fewer users (Krishnan et al., 2017). Thankfully, several modern groundbreaking DLT designs, such as the Sui Network, have emerged to reduce the computational and storage burdens on the network while enhancing computational efficiency, thus alleviating transaction fee issues.

### Sui Network

With its mainnet launched in May 2023, Sui is a decentralized, permissionless smart contract platform that prioritizes low-latency asset management (Mao et al., 2020). Originating from Meta's Diem (formerly known as Libra), Sui utilizes the Move programming language (Bordes et al., 2017) to define and manage assets owned by addresses, with custom rules for asset creation, transfer, and mutation. Unlike Ethereum's account-based design, in which assets exist as numerical variables within addresses or smart contracts, Sui's asset management is object-based, similar to the Unspent Transaction Output (UTXO) structure found in Bitcoin (Bordes et al., 2017), allowing objects representing digital assets to be fragmented, combined, and transferred to different addresses. Moreover, what sets Sui apart from traditional blockchain networks is its utilization of a Directed Acyclic Graph (DAG) model to record transactions. As shown in Figure 1, each transaction block on Sui includes several transactions whose inputs are different objects on the Sui Network. These transactions then mutate or create new objects. Using the DAG and object-based design, Sui enables transactions involving unrelated objects to be executed without a specific sequence, maximizing Sui's computational efficiency and scalability. As a result, despite Sui experiencing a rapid increase in transaction volume from thousands to tens of millions in a short period, transaction fees have remained nearly constant (Bordes et al., 2017).

In addition, Sui has made optimizations in data storage. Sui allows the deletion of data on the network to free up space and obtain a Storage Rebate Fee (Mao et al., 2020). For example, in Figure 1, the first transaction requires writing 100 Kilobytes (KB) of participants into an array in the _RaffleObject_. It incurs a computation fee of 0.001 Sui and a storage cost of 0.779 Sui. However, when a subsequent transaction clears the array of participants and thus frees 100 KB of data at that _RaffleObject_, the transaction sender receives a storage rebate of 0.77 Sui. This ultimately reduces the overall long-term cost of storing a substantial amount of data on Sui, leaving only the computational and necessary storage fees, as extra storage costs can be rebated by clearing data storage after operations are completed. Notably, even though on-network data is cleared, verification and reproduction remain possible since inputs are stored within the Transaction Block and retained in a database on the Sui Nodes without consuming network hot-memory resources.

## 3. Randomness Practice in Blockchain

In general, the implementation of verifiable random functions (VRF) on blockchain can be broken down into the following two steps:

1. **Initiate**: Define validation criteria and operational logic through a smart contract for an as-yet unknown and unpredictable random number.
2. **Verify and Execute**: Once the random value is known, validate it against the specified criteria. If it meets the conditions, execute the subsequent logic using this random number.
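To make this two-step pattern concrete before the scheme-specific subsections below, here is a minimal off-chain sketch in Python (not the paper's Move code). It uses Ed25519's deterministic signatures from the third-party `cryptography` package as a stand-in beacon; the class and variable names are our own illustrative assumptions, and production systems would use the BLS-based beacons discussed next:

```python
# Minimal off-chain sketch of the "Initiate, then Verify-and-Execute"
# pattern, using Ed25519's deterministic signatures as a stand-in beacon.
# Requires the third-party `cryptography` package; all names here are
# illustrative, not the paper's Move API.
import hashlib

from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

class RafflePot:
    """Mimics on-chain state: commit to a seed first, derive the random
    outcome only after the beacon is verified against that seed."""

    def __init__(self, host_public_key: Ed25519PublicKey, participants: list[str]):
        self.host_public_key = host_public_key
        self.participants = participants
        self.seed = b""

    def initiate(self, seed: bytes) -> None:
        # Step 1: commit to a seed before the beacon exists.
        self.seed = seed

    def verify_and_execute(self, beacon: bytes) -> str:
        # Step 2: check the beacon is a valid signature over the committed
        # seed (raises InvalidSignature otherwise), then hash it to pick a winner.
        self.host_public_key.verify(beacon, self.seed)
        randomness = hashlib.sha256(beacon).digest()
        return self.participants[int.from_bytes(randomness, "big") % len(self.participants)]

host_key = Ed25519PrivateKey.generate()
pot = RafflePot(host_key.public_key(), ["alice", "bob", "carol"])
pot.initiate(b"round-42|timestamp-1700000000")          # Initiate
print(pot.verify_and_execute(host_key.sign(pot.seed)))  # Verify and Execute
```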
The following sub-sections explain various practical methods of implementing randomness.

### Block-Hash Randomness

In the early days of Proof-of-Work (PoW) blockchains, the block hash was one method for obtaining random values. In PoW, miners compete in computational work to be the first to create a new valid block whose hash matches certain criteria (Han et al., 2017). Since the block hash is randomly generated, a smart contract can achieve randomness by specifying the hash of a future block as the seed of its randomness (Bang et al., 2016). However, there are three **limitations** to using the block hash as a random value. Firstly, miners may intentionally compute hashes that meet favourable randomness conditions if the reward for doing so exceeds the block generation reward (Bang et al., 2016). Secondly, the concentration of mining power in a few groups in today's PoW blockchains makes them susceptible to manipulation. Lastly, this method only applies to PoW blockchains; as blockchain technology shifts towards more efficient methods of block production like Proof of Stake (Bradner et al., 2017), in which a group of stakeholder miners takes turns producing blocks, block-hash randomness is no longer applicable, since miners have greater freedom to manipulate block hashes when producing blocks.

### Oracle-based Randomness

Committee-based Oracles are another common source of randomness (Bang et al., 2016). Provided by third-party services, Oracles offer verifiable randomness for smart contracts, guaranteeing fairness in applications such as gaming and lotteries. When using an Oracle-based random beacon, a user first initiates a transaction and sends it to the Oracle provider(s). The provider then sends a second transaction to input a committee-derived random value into the contract. However, an Oracle-based random beacon has two **limitations**. Firstly, its usage requires costly fees (e.g., 3 USD per request on Ethereum). Moreover, Oracles lack the decentralization of a Layer 1 blockchain, typically due to smaller committees with different incentives and Byzantine fault tolerance guarantees. If the Oracle provider experiences downtime or is compromised by hackers, it may result in service interruptions and potentially impact the integrity of the service. There is also a potential issue related to long-range attacks and the fact that Oracles can perform blind, untraceable attacks (i.e., producing a future beacon before anyone else knows it, and hence winning a lottery without leaving footprints) (Bang et al., 2016). A more cost-effective and decentralized approach is therefore needed to ensure the availability of randomness for a blockchain DApp.

### DRAND

Some modern blockchains support functionality to verify random beacons from the League of Entropy's DRAND, a distributed randomness beacon daemon (Han et al., 2017). The League of Entropy is a collaborative project that provides a verifiable, decentralized source of randomness accessible to anyone who needs public randomness. League members maintain the DRAND network by hosting nodes running the DRAND protocol. Note that although the DRAND network is also considered an Oracle service, we distinguish between paid Oracles and a transparent community service like DRAND, because using the latter does not require paying the Oracle provider.
Additionally, DRAND produces publicly verifiable beacons at a fixed cadence rather than per request, so the same beacon can be reused by everyone for free. The current DRAND mainnet network generates a random value, called a "round", every 30 seconds. The generation process of a DRAND random value follows a specific set of steps (Krishnan et al., 2017):

Figure 1. Example of how a Basic Raffle is executed on the Sui Network. The first transaction block executed transactions to initiate the basic raffle via a DRAND round, while the second block settled the raffle with DRAND beacons and sent prizes to winners.

1. **DRAND Network Setup**: To establish the network, each node is equipped with a share of a collective private key via _Distributed Key Generation_ (Dras and Kessler, 2017), and the nodes agree on a threshold. The threshold, typically set at 50% of the total node count, represents the minimum number of node signatures needed to reconstruct a complete signature.
2. **Partial Beacon Creation and Broadcast**: For each new round, a node generates a _partial signature_ over the current round number and the signature from the previous round, then broadcasts it to the DRAND network.
3. **Final Signature Creation**: For each incoming _partial signature_, a DRAND node first verifies it and then stores it in a temporary cache if it is valid. Once a minimum threshold of valid _partial signatures_ is reached, the node combines them according to the BLS12-381 protocol (Dras and Kessler, 2017). This combination involves mapping the partial signatures to points on an elliptic curve, producing the final signature. By design, BLS signatures are deterministic; hence the threshold-combined BLS signature bytes can be considered a VRF output.
4. **Validation and Storage**: The node validates the new signature against the public key and the previous signature, then saves the signature with the round number to its database.

The random value of a DRAND round can be obtained by hashing the signature of that round with SHA-256. When utilizing DRAND for generating randomness within a smart contract, the process typically involves the following steps:

1. **Set Up**: When the contract is deployed, hardcode the threshold-combined public key of the DRAND network into it as a constant.
2. **Initiate**: Calculate the current DRAND round number and designate a future round with a number greater than the current one as the target random value. Then, write the logic in the smart contract that should be executed based on this random value.
3. **Verify and Execute**: Wait until the future round is announced by DRAND, then input DRAND's signature to the contract. The contract first verifies the legitimacy of the DRAND signature through the round number, the public key, and the signature of the previous round. Following successful verification, it proceeds to hash the signature to derive the random beacon, subsequently executing the predefined logic with it.

The **advantages** of DRAND are that it is free, well recognized, decentralized, and has been widely adopted by many projects (Kessler, 2017). Additionally, thanks to the open nature of the DRAND protocol, anyone can retrieve signatures from a DRAND node and use them to invoke smart contracts for verification and execution once the designated DRAND round time has passed. However, there is a fatal **limitation** when using DRAND randomness: users need to wait at least 30 seconds to obtain results. Such a lengthy wait may harm the user experience in games.
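As a small illustration of how a revealed round signature becomes a raffle outcome (mirroring the Verify-and-Execute step above, but in Python rather than the on-chain Move code): hash the signature with SHA-256 and reduce the digest modulo the participant count. The on-chain BLS12-381 verification of the signature itself is omitted here, and the hex string is a placeholder, not a real DRAND beacon:

```python
# Derive a raffle winner from a DRAND round signature.
# The signature verification against the DRAND public key (BLS12-381)
# is assumed to have already happened and is omitted here.
import hashlib

def drand_winner(round_signature_hex: str, participants: list[str]) -> str:
    signature = bytes.fromhex(round_signature_hex)
    randomness = hashlib.sha256(signature).digest()  # the round's random value
    index = int.from_bytes(randomness, "big") % len(participants)
    return participants[index]

# Placeholder 96-byte "signature" (BLS signatures on G2 are 96 bytes).
print(drand_winner("ab" * 96, ["alice", "bob", "carol"]))
```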
### Single-key VRF beacons

To enhance the immediacy of random outcomes, a more centralized but timely, unpredictable, and verifiable random technique has emerged, using a single-entity VRF variation of the known committee-based schemes (Dras and Kessler, 2017), which involves the following procedure:

1. **Set up**: Write the host's public key as a constant into the smart contract.
2. **Initiate**: The user inputs a seed into the smart contract. The smart contract then saves it together with a unique and uncontrollable variable, such as a timestamp or block hash (this is to avoid replay attacks).
3. **Sign the seed**: The host uses a private key to sign the seed and the unique variable at the backend and obtains the VRF output (i.e., a BLS signature).
4. **Verify and Execute**: The host calls the smart contract with the VRF output as a parameter. The smart contract verifies the validity of the signature using the host's public key, the seed, and the unique variable. After successful verification, the smart contract hashes the signature to obtain a random value and executes the random logic.

The **advantage** of single-key VRF is its remarkable immediacy. After users initiate a transaction, the host can promptly sign and invoke the contract to complete the randomness. In products currently employing single-key VRF, users can receive the random result within five seconds of sending the initiating transaction, demonstrating low latency. However, this method has two notable **limitations**. First, if a hacker compromises the VRF's private key, the randomness is jeopardized, potentially leading to the complete depletion of the contract's assets. Moreover, only the host can _Verify and Execute_, as no one else holds the private key needed to create the signature/beacon. As a result, if the host encounters downtime, users may be unable to finish the execution and withdraw their assets from the contract. To circumvent the latter, solutions using time-locks have been proposed, introducing penalties for VRF hosts who delay publishing their VRF outputs (Dras and Kessler, 2017; Kessler, 2017).

## 4. System Development

In this section, we begin by describing the implementation of the most basic raffle system, explaining its limitations, and introducing solutions. We then gradually iterate our design towards an advanced and production-ready raffle system. Source code for the different designs is available at [https://github.com/Bucket-Protocol/raffle-paper](https://github.com/Bucket-Protocol/raffle-paper).

### Basic Raffle System with DRAND

The goal of the _Basic Raffle_ is to enable the host to randomly select a winner from a group of participants and send the prize to that winner. The design of the _Basic Raffle_ is intuitive and straightforward. As shown in Figure 1, to initiate the random raffle, the host calls the Move smart contract's _create_coin_raffle_, which takes three key parameters: _Clock_, _Participants_, and _Prize_. _Clock_ is an object on the Sui platform that allows smart contracts to obtain the current time. _Participants_ is an array containing all participants' addresses. _Prize_ is an object holding the reward for the raffle winner. The _create_coin_raffle_ function then performs the following:

1. **Calculate the DRAND round**: The function calculates the current DRAND round according to the current time.
2. **Initiate**: The function creates a _Raffle Object_. The Raffle Object includes the array of participants, the _current DRAND round_ + \(N\) as the target DRAND round, and the prize.
N can be set to any number greater than 2, depending on when the creator wants to learn the results.

3. **Lock the prize**: If the prize is a fungible coin, the function converts the prize object to a balance and saves it in a field of the Raffle. If the prize is not fungible, the function saves the prize as a sub-object.

Then, once the signature of the _current DRAND round + \(N\)_ is revealed, anyone can settle the raffle by calling _settle_coin_raffle_, which performs the following steps:

1. **Verify**: The function checks that the input DRAND signature is valid.
2. **Execute**: The function first computes a random value based on the DRAND signature and subsequently uses this random value to pick a winner from the participants' array. It then proceeds to transfer the prize to the selected winner.
3. **Settle**: The function emits events about the raffle result, then clears the participants' array to release space.

The creation and settlement process is illustrated in the first and second _Transaction Blocks_ in Figure 1.

#### 4.1.1. **Advantages**

The _Basic Raffle_ has three advantages. Firstly, it fully utilizes DRAND's capabilities, ensuring that the Raffle's outcome remains unknown initially while allowing anyone to settle and verify the results. Additionally, the rewards are securely locked within the contract once the Raffle is created, preventing organizers from running multiple Raffles and picking the favorable result, thereby guaranteeing fairness in the process. Lastly, the settlement process involves data clearance and offers a _Storage Rebate_, creating a strong incentive for anyone to settle the raffle.

#### 4.1.2. **Limitations**

However, there are several limitations when considering the Basic Raffle for production use. First, Sui limits the size of transaction blocks, permitting an array input of approximately 400 addresses. Consequently, the creation process will encounter errors if the host intends to include more than 400 participants in the Raffle. Furthermore, there is a size restriction of 250 KB for Sui Objects. Even if the host manages to input a sufficient number of addresses, the Raffle object cannot accommodate more than 7,500 addresses. Therefore, it becomes necessary to leverage other features of Sui to accommodate more addresses.

### Raffle with Object Table

The _Object Table_ feature within Sui enables a parent object to own other objects, functioning in a manner akin to a _Mapping_ in _Solidity_ or a _Dictionary_ in _Python_. As depicted in Figure 2, an _Object Table_ empowers a _Raffle Object_ to possess multiple sub-objects, such as several _Prize Objects_ and _Participant Address Objects_. Under these circumstances, we can establish a mechanism that automatically splits participant and prize addresses into separate sub-objects according to size. When the size surpasses 7,500, a fresh sub-object can be created and placed under the _Object Table_, with its corresponding key added to the _Object Table_ key list. In the case of a single-tier _Object Table_, this strategic setup allows a Raffle to accommodate an impressive \(7{,}500^{2}=56{,}250{,}000\) addresses, a capacity more than sufficient for most practical scenarios. Additionally, it is possible to implement multiple levels of _Object Tables_ if even larger quantities are needed.

#### 4.2.1. **Advantages**

Through the use of _Object Tables_, we have addressed the issue of the number of addresses that an Object can accommodate.
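As a quick check of the capacity arithmetic above (our own back-of-the-envelope, assuming 7,500 entries per object throughout), an Object Table nested to \(k\) layers holds

\[ N(k) = 7{,}500^{\,k+1}, \qquad N(1) = 7{,}500^{2} = 56{,}250{,}000, \]

so a two-layer table would already reach \(N(2) = 7{,}500^{3} \approx 4.2\times 10^{11}\) addresses.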
This type of Raffle is suitable for situations where users actively join by paying for themselves, such as a lottery. An example scenario is as follows:

1. **Initiate**: The host initiates the raffle by locking the prizes within the contract, setting the entry fee, and establishing a deadline using DRAND's round number.
2. **Participants Join**: Participants submit their entry fees to the contract. The smart contract verifies that the deadline has not passed and the fees are valid. Then, it adds the user's address to the _Address Object_ within the _Object Table_ fields of the _Raffle Object_. If the _Address Object_ is full, a new _Address Object_ is created under the _Object Table_.
3. **Settle**: When the designated time arrives, anyone can settle using DRAND signatures; the prizes are sent to the winners' wallets while the entry fees are transferred to the host's wallet.

#### 4.2.2. **Limitations**

In the above raffle, the block size is not an issue since users enter one address at a time. However, if we wish to input many addresses at once, we still have to deal with the block size limitation of only being able to enter 400 addresses in a single attempt.

Figure 2. Comparison between two raffle designs: one without an object table (a) and the other with a 1-layer object table (b). The design without an object table (a) on the left can hold up to \(7{,}500\) addresses, whereas the design with a 1-layer object table (b) on the right can accommodate \(7{,}500^{2}\) addresses.

This limitation becomes particularly cumbersome when, for example, a user who wants to create a raffle with 2,000 addresses must go through the signing process five times. This problem can be resolved using the _Delegate Object Creation_ or _Zero-Knowledge Proof_ approach.

### Raffle with Delegate Object Creation

_Delegate Object Creation_ aims to streamline the process for users with the help of a _Delegate Host_, enabling them to efficiently input numerous addresses into a _Raffle Object_ on the Sui Network with just one wallet operation. The steps of Delegate Object Creation are depicted in Figure 3 and described in the following:

1. When a user wants to create a Raffle, they first send all addresses to the backend of the Delegate Host.
2. The Delegate Host then batches these addresses and writes them into a _Delegated Object_ using its backend's private key while paying the storage fee. The fee is reflected as a usage fee in a field of the _Delegated Object_.
3. The user can then initiate the raffle with the address table in the _Delegated Object_ in a single transaction. Moreover, when using the _Delegated Object_, the user pays the storage fee and transaction fees to the Delegate Host.

When users settle a raffle, they can rebate the storage fee by clearing their data, thereby reclaiming most of the usage fee. Additionally, if users leave the Delegated Object unused for a certain period, the Delegate Host can proactively remove data from the Delegated Object to rebate the storage fee.

#### 4.3.1. **Advantages**

The advantage of Delegate Object Creation is that users only need to open their wallet and sign once to create a Raffle with a large number of participant addresses. By doing so, all participants' addresses are stored in the transaction history of the Sui Network, allowing every participant to verify their inclusion in the Raffle.

#### 4.3.2. **Limitations**

The limitation of _Delegate Object Creation_ is the waiting time. At the backend, it typically takes 1 second per transaction to upload 400 addresses.
If there are 20,000 addresses, users will experience approximately a 50-second waiting time. Moreover, if users only need to select a few winners from many addresses, this method of writing and clearing addresses is inefficient. Consequently, we have developed another approach using Zero-Knowledge Proofs to quickly create a raffle with many addresses.

### Raffle with Zero-Knowledge Proof

Zero-knowledge proof (ZKP) is a cryptographic and mathematical technique (Kal

#### 4.4.2. **Limitations**

The ZK-raffle has two limitations. The first is its complicated prize-distribution process. Each prize distribution involves multiple rounds of hashing calculations with Merkle Proofs, which can be costly when there are many winners. Second, storage of the Merkle Tree tends to be centralized, with a database containing all Merkle Tree and proof data. This database is essential for users to confirm and claim their rewards or to verify their raffle participation. If the database encounters data custody issues resulting in the loss of Merkle Proofs, all rewards may become locked within the smart contract.

### ZK-raffle with single-key VRF

In addition to DRAND randomness, employing a ZK-raffle with the host's VRF outputs is a sensible choice because the two share common limitations, both requiring a centralized host to provide a Merkle Proof or a single-key VRF output to settle the raffle. Moreover, since the Merkle root is not cleared after the raffle is settled, the same raffle can be used multiple times. They can therefore be integrated to create a fair web3 raffle system, allowing users to draw random prizes fairly. As illustrated in Figure 5, implementing a ZK-raffle with signature randomness includes the following steps:

1. The host initiates the ZK-raffle smart contract, providing essential parameters such as the Merkle root, prize count, and public key, while setting the entrance fee at 10 Sui.
2. A user enters the raffle by paying the 10 Sui entrance fee. The raffle smart contract then generates a raffle ticket that includes the fee and forwards it to the host.
3. The host settles the raffle ticket by verifying the VRF output (i.e., the BLS signature) and computing a random result from the signature. After validating the Merkle Proof to confirm that the random result indeed corresponds to the Prize NFT provided by the host, the host receives the 10 Sui entrance fee while the user receives the Prize NFT.

#### 4.5.1. **Advantages**

A ZK-raffle with the host's VRF offers three significant advantages. Firstly, it is highly lightweight, avoiding the data storage issues commonly associated with other versions of raffles. Secondly, it ensures that the host cannot manipulate the probabilities, as anyone can calculate the Merkle Proof to determine their chances of winning the overall prize, making it ideal for prize raffles such as gaming loot boxes. Thirdly, it strikes a good balance between centralization and decentralization. While both Merkle Proofs and signatures require centralized resources, the host must provide these resources to claim the entrance fees from users. Consequently, the host is strongly incentivized to settle the raffle.

#### 4.5.2. **Limitations**

On the other hand, there exist two limitations. Firstly, since the randomness is generated from a single private key, the owner of that key (typically the host) can produce beacons at their discretion, granting them the ability to manipulate the raffle result in their favor.
In the hands of a malicious host, or a host compromised by hackers, this private key can be used to exploit this vulnerability, resulting in the drawing of numerous grand prizes and undermining the distribution of prizes. Secondly, if the host refuses to settle the raffle, the users' entrance fees remain locked within the contract. Nevertheless, this limitation can be mitigated by leveraging the transparent nature of blockchain. We can ensure that the host acts fairly by monitoring the number of unsettled raffle tickets and even imposing penalties accordingly. For instance, if the host deliberately avoids settling a particular raffle ticket, it may indicate dishonest behavior, and users can claim their winnings once a predetermined time period has passed.

Figure 4. Example of a Raffle with Zero-Knowledge Proof. The left (a) is the Raffle Object. The right (b) is the Merkle Tree computed from the raffle participants. At bottom left, (c) shows example parameters used to prove and claim a reward with Merkle Proofs.

Figure 5. Example flows of how the Zero-Knowledge Raffle with the host's VRF (signature) works.

## 5. Transaction Fee Comparison

To compare the transaction fees of the different Raffle designs, we developed minimum viable smart contracts and used the testing functionality of the Sui Move CLI to obtain the transaction fee of each design. The source code is available at [https://github.com/Bucket-Protocol/raffle-paper](https://github.com/Bucket-Protocol/raffle-paper). The transaction fees of a raffle involving 200 participants and resulting in 10 winners are displayed in Table 1. The findings show that transaction fees are consistently low regardless of the method used. The results demonstrate that the transaction fees for _Signature Randomness_ and _DRAND Randomness_ are roughly equivalent. Additionally, more advanced raffle designs incur higher transaction fees. The setup fee for the raffle with _Object Table_ is more expensive due to the overhead of creating the _Object Tables_. Regarding the _ZK-Raffle_, the base settlement cost is just 8, but each winner requires a computation fee of 59 to validate the _Merkle Proof_, making it more costly overall.

## 6. Discussion

Through the case study of building a random, fair, and verifiable raffle game on the Sui Network, we explored various design approaches, their inherent limitations, and potential solutions when developing games with randomness using smart contracts. Each design option comes with its unique set of advantages and drawbacks, and the choice should be made depending on the particular circumstances at hand.

### Randomness on Blockchain

Two methods for achieving randomness in smart contracts are considered best practices: DRAND randomness and host-key signature randomness. As shown in Table 1, both methods have comparable transaction fees. DRAND randomness aggregates input from nodes across the DRAND network to generate random values, making it more decentralized and secure. However, the current official DRAND network (prior to the mainnet update to a 3-second cadence) requires a 30-second wait each round, which can impact the user experience. On the other hand, single-key VRF randomness generates signatures using the raffle host's private key and is faster. However, it is more centralized, requiring the host to secure the private key and provide a signature each time.

### Smart Contract Design Limitations, Solutions, and Recommendations

When building a raffle in smart contracts, it is critical to consider network limitations, especially concerning the size of arrays and objects.
We propose two solutions to address size constraints: "Raffle with Object Tables" and "Zero-Knowledge Raffle."

"Raffle with Object Tables" involves transforming the content of arrays into a multi-layered table data storage structure, maximizing the data storage capacity. The advantage of this approach is that all data remains on the blockchain. However, it has higher computational costs, requires multiple setup transactions, and incurs higher storage fees. The need for multiple setup transactions can be mitigated using the "Delegate Object Creation" method, where a delegate host assists in creating Object Tables. Storage fees can be reduced by reclaiming storage space through Storage Rebates, clearing data at the end of the Raffle. The raffle with Object Tables is well suited to situations that require absolute transparency and decentralization, such as selecting lucky winners among players to receive exclusive event prizes. Additionally, a host can utilize the Raffle with Object Tables to create games like lotteries that require active participation from players.

On the other hand, the Zero-Knowledge Raffle transforms array fields into a Merkle Tree, enabling an effectively unlimited capacity of addresses by uploading only the root of the Merkle Tree to the smart contract. However, it comes with higher transaction fees when verifying the Merkle Tree during prize distribution. The Zero-Knowledge Raffle is well suited to reducing transaction costs in scenarios involving many candidate entities, such as Loot Box and Gacha games. Moreover, by combining private-key randomness with the Zero-Knowledge Raffle, it is possible to enhance the efficiency of the raffle while ensuring decentralization and incentives for the host to provide a Merkle Proof to settle the raffle.

\begin{table}
\begin{tabular}{l c c c c}
\hline \hline
**Randomness Type** & **Host's VRF Randomness** & \multicolumn{3}{c}{**DRAND Randomness**} \\
**Raffle Type** & **Basic Raffle** & **Basic Raffle** & **Raffle with Object Tables** & **ZK-Raffle** \\
\hline
**Initiate Raffle Fee with 200 participants** & 66 & 46 & 454 & 5 \\
**Settle Raffle Fee with 10 winners** & 67 & 86 & 234 & 599 \\
**Settle Fee increase per winner added** & 6 & 6 & 9 & 59 \\
**Total Transaction Fee** & 133 & 132 & 688 & 604 \\
**Total Transaction Fee in Sui** & 0.00009975 & 0.00009900 & 0.00051600 & 0.00042225 \\
**Total Transaction Fee in USD** & 0.0000408975 & 0.0000405900 & 0.0002115600 & 0.0001731225 \\
\hline \hline
\end{tabular}
\end{table}
Table 1. Comparison of transaction fees for various randomness approaches and raffle system designs. These fees are assessed based on the computational costs and do not include storage fees. The fee in Sui is calculated according to the recommended gas price on Sui with the formula \(\mathrm{Fee}\times 750\times 10^{-9}\). The fee in USD is calculated according to the Sui price of \(0.41\) USD on 2023/10/12.

## 7. Conclusion

This paper explained how to build a fair, verifiable, and efficient random raffle on the Sui Network. We investigated a range of smart contract design methodologies for constructing chance-based games, identified their inherent constraints, and proposed remedies. Furthermore, we furnished insights drawing on the advantages and limitations of the different design choices. Through these insights, we provide valuable guidance to researchers and developers engaged in building randomness-based games with smart contracts.
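To see why the ZK-Raffle's settlement fee in Table 1 grows by a fixed amount (59) per winner, note that each claim is one Merkle-proof check: one hash per level of the tree. The minimal Python sketch below is our own illustration, assuming SHA-256 and sorted-pair hashing; the actual on-chain Move implementation may use a different hash function or pairing convention:

```python
# Minimal Merkle-proof verifier: each proof step is one hash, which is
# why settlement cost scales with tree depth and the number of winners.
import hashlib

def hash_pair(a: bytes, b: bytes) -> bytes:
    # Sort the pair so the verifier does not need left/right flags.
    lo, hi = sorted((a, b))
    return hashlib.sha256(lo + hi).digest()

def verify_merkle_proof(leaf: bytes, proof: list[bytes], root: bytes) -> bool:
    node = hashlib.sha256(leaf).digest()
    for sibling in proof:
        node = hash_pair(node, sibling)
    return node == root

# Usage with a tiny 4-leaf tree built from participant addresses.
leaves = [hashlib.sha256(a.encode()).digest() for a in ["a1", "a2", "a3", "a4"]]
l01, l23 = hash_pair(leaves[0], leaves[1]), hash_pair(leaves[2], leaves[3])
root = hash_pair(l01, l23)
print(verify_merkle_proof(b"a1", [leaves[1], l23], root))  # True
```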
2302.05468
Carbonaceous dust grains seen in the first billion years of cosmic time
Large dust reservoirs (up to $\sim 10^8 \, \mathrm{M_\odot}$) have been detected in galaxies out to redshift $z \sim 8$, when the age of the universe was only about 600 Myr. Generating significant amounts of dust within such a short timescale has proven challenging for theories of dust formation and has prompted the revision of the modelling of potential sites of dust production such as the atmospheres of asymptotic giant branch (AGB) stars in low-metallicity environments, supernovae (SNe) ejecta, and the accelerated growth of grains in the interstellar medium (ISM). However, degeneracies between different evolutionary pathways remain when the total dust mass of galaxies is the only available observable. Here we report observations of the $2175 \, \mathring{\rm A}$ dust attenuation feature, well known in the Milky Way (MW) and galaxies at $z \lesssim 3$, in the near-infrared spectra of galaxies up to $z \sim 7$, corresponding to the first billion years of cosmic time. The relatively short timescale implied for the formation of carbonaceous grains giving rise to this feature suggests a rapid production process, likely in Wolf-Rayet (WR) stars or SN ejecta.
Joris Witstok, Irene Shivaei, Renske Smit, Roberto Maiolino, Stefano Carniani, Emma Curtis-Lake, Pierre Ferruit, Santiago Arribas, Andrew J. Bunker, Alex J. Cameron, Stephane Charlot, Jacopo Chevallard, Mirko Curti, Anna de Graaff, Francesco D'Eugenio, Giovanna Giardino, Tobias J. Looser, Tim Rawle, Bruno Rodríguez del Pino, Chris Willott, Stacey Alberts, William M. Baker, Kristan Boyett, Eiichi Egami, Daniel J. Eisenstein, Ryan Endsley, Kevin N. Hainline, Zhiyuan Ji, Benjamin D. Johnson, Nimisha Kumari, Jianwei Lyu, Erica Nelson, Michele Perna, Marcia Rieke, Brant E. Robertson, Lester Sandles, Aayush Saxena, Jan Scholtz, Fengwu Sun, Sandro Tacchella, Christina C. Williams, Christopher N. A. Willmer
2023-02-10T19:00:45Z
http://arxiv.org/abs/2302.05468v3
# Carbonaceous dust grains within galaxies seen in the first billion years of cosmic time

###### Abstract

Interstellar dust captures a significant fraction of elements heavier than helium in the solid state and is an indispensable component both in theory and observations of galaxy evolution[1, 2]. Dust emission is generally the primary coolant of the interstellar medium (ISM) and facilitates the gravitational collapse and fragmentation of gas clouds from which stars form, while altering the emission spectrum of galaxies from ultraviolet (UV) to far-infrared wavelengths through the reprocessing of starlight[3, 4, 5, 6, 7]. However, the astrophysical origin of various types of dust grains remains an open question, especially in the early Universe. Here we report direct evidence for the presence of carbonaceous grains from the detection of the broad UV absorption feature around 2175 Å in deep near-infrared spectra of galaxies up to the first billion years of cosmic time, at a redshift (\(z\)) of \(\sim 7\). This dust attenuation feature has previously only been observed spectroscopically in older, more evolved galaxies at redshifts of \(z<3\). The carbonaceous grains giving rise to this feature are often thought to be produced on timescales of hundreds of millions of years by asymptotic giant branch (AGB) stars[8, 9, 10, 11, 12, 13]. Our results suggest a more rapid production scenario, likely in supernova (SN) ejecta.

Large dust reservoirs (up to \(\sim 10^{8}\) M\({}_{\odot}\)) have been detected in galaxies out to redshift \(z\sim 8\), when the age of the universe was only about 600 Myr[14, 15, 16]. Producing significant amounts of dust within such a short timescale has proven challenging for theories of dust formation[17, 18] and has prompted the development of new scenarios, including (1) new models of AGB atmospheres in low-metallicity environments, (2) revision of SN dust production and the role of the associated shocks, and (3) the accelerated growth and reprocessing of grains in the ISM[19, 20, 21]. These models are currently largely unconstrained, since having only the total dust mass, and its ratio to stellar mass, as viable observables in early galaxies has rendered the different scenarios entirely degenerate. Determining the properties and nature of different types of dust grains in the first generation of galaxies is therefore of paramount importance to discriminate between their possible origins. New observations with _JWST_ offer the possibility to test these scenarios via the potential detection of dust signatures in the spectra of distant galaxies. As part of the _JWST_ Advanced Deep Extragalactic Survey (JADES), we obtained deep Near-Infrared Spectrograph (NIRSpec) multi-object spectroscopy taken in the PRISM spectral configuration (spectral range 0.6 \(\mu\)m to 5.3 \(\mu\)m, resolving power \(R\sim 100\)). Using the NIRSpec micro-shutter array (MSA[22]), we observed 253 sources across three visits between 21 and 25 October 2022 (_JWST_ programme 1210; PI: Lützgendorf), with exposure times per object ranging from 9.3 to 28 hours. The extracted one-dimensional spectra reached a continuum sensitivity (3\(\sigma\)) of \(\sim\) 7-40 \(\times\) 10\({}^{-22}\) erg s\({}^{-1}\) cm\({}^{-2}\) Å\({}^{-1}\) (\(\sim\) 27.1-29.0 AB magnitude) at \(\sim\)2 \(\mu\)m. Targets were selected with a specific focus on high-redshift galaxies identified in imaging with the _Hubble Space Telescope_ (_HST_) and _JWST_/Near-Infrared Camera (NIRCam[23, 24]).
Through visual inspection of all spectra, we find strong evidence of the absorption feature around a rest-frame wavelength \(\lambda_{\rm emit}=2175\) Å in the spectrum of a galaxy at \(z=6.71\) (JADES-GS+53.15139-27.81917; hereafter JADES-GS-z6-0), revealed via a significant (8\(\sigma\)) deviation from a smooth power-law continuum, as shown in Fig. 1. This feature, known as the UV attenuation "bump", was first discovered by Stecher (1965) along sightlines in the Milky Way (MW) and is attributed to carbonaceous dust grains[1, 25]. We fitted a Drude profile[26] around 2175 Å to the excess attenuation[13], defined as the observed spectrum normalised to a bump-free attenuated spectrum that is predicted by a power-law function fitted outside of the bump region (see the Methods section for details). We find a bump strength (amplitude) of \(0.50^{+0.08}_{-0.07}\) mag and a central wavelength \(\lambda_{\rm max}=2241^{+28}_{-29}\) Å, the latter at the high end of the range of \(\lambda_{\rm max}\) that has been observed along different sightlines in the MW[26]. Beyond the local Universe, the feature has previously only been observed in the spectra of massive, metal-enriched galaxies at \(z\lesssim 3\), suggesting it originates in dust grains exclusively present in evolved galaxies[10, 12, 13, 27]. This is the first direct, spectroscopic detection of the UV bump in galaxies at \(z>3\). The properties of JADES-GS-z6-0, obtained by fitting spectral synthesis models to its spectral energy distribution (SED), are summarised in Table 1. In agreement with the trend between metallicity and bump strength observed at lower redshift, measurements of the gas-phase and stellar metallicity suggest this galaxy has undergone substantial metal enrichment (\(Z\sim 0.2\)-\(0.3\,Z_{\odot}\)). To systematically investigate the prevalence of the UV bump and obtain clues on its origin at such early times, we further selected JADES galaxies on the basis of a confident spectroscopic redshift above \(z>4\) with a median signal-to-noise ratio (SNR) of at least 3 per spectral pixel in the region corresponding to rest-frame wavelengths of 1268 Å \(<\lambda_{\rm emit}<\) 2580 Å. This results in a sample of 50 objects between redshift 4.02 and 11.48. By comparing the continuum slopes on both sides of the central wavelength at 2175 Å (see Methods section), we selected ten galaxies (located at \(4.03<z<5.89\)) from this parent sample in addition to JADES-GS-z6-0 whose spectral shape points towards the presence of a UV bump. We constructed a weighted average ("stack"; see Methods) of all 50 objects in our parent sample as well as of our eleven selected objects with evidence for a UV bump signature, as shown in Fig. 2. In both stacks, we find evidence for line emission from the [C III] \(\lambda\,1907\) Å, C III] \(\lambda\,1909\) Å doublet, nebular emission lines commonly seen in metal-poor galaxies[28]. There is no indication of a UV bump in the parent sample; the stacked spectrum of the eleven objects with tentative UV bumps in their individual spectra, however, shows a clear depression (5\(\sigma\)) centred on \(\sim\)2175 Å. We again fitted a Drude profile to the excess attenuation, now defined as the subset stack normalised to the stacked spectrum of all objects. In the stacked spectrum of these eleven objects we find a bump amplitude of \(0.08^{+0.01}_{-0.02}\) mag and a central wavelength \(\lambda_{\rm max}=2185^{+21}_{-20}\) Å.
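For reference, the Drude profile used in such excess-attenuation fits is commonly written (e.g., in the Fitzpatrick & Massa parameterisation; the exact normalisation adopted in this work may differ) as

\[ D(\lambda;\lambda_{\rm max},\gamma)=\frac{A\,\gamma^{2}\lambda^{2}}{\left(\lambda^{2}-\lambda_{\rm max}^{2}\right)^{2}+\gamma^{2}\lambda^{2}}, \]

where \(A\) is the bump amplitude (here in magnitudes of excess attenuation), \(\lambda_{\rm max}\) the central wavelength, and \(\gamma\) the width of the feature.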
While the UV bump has long been known to exist, its variable presence and strength have been an open topic of debate in galaxy evolution studies[30, 32, 33]. The feature is commonly attributed to graphite or polycyclic aromatic hydrocarbons (PAHs), molecules thought to be susceptible to destruction by hard ionising radiation, and it is present in the MW and Large Magellanic Cloud (LMC) extinction curves, but very weak or absent in the Small Magellanic Cloud (SMC) curves[31]. In the attenuation curve of individual galaxies, radiative-transfer effects determined by the dust-star geometry can weaken the bump in the observed _integrated_ spectrum[32, 33]. However, by stacking the photometry of large samples of galaxies, the bump has been detected to varying degrees at redshifts \(z\lesssim 3\), with tentative hints at \(z\lesssim 6\)[12, 27, 34]. Spectroscopically, the bump has only been seen in relatively massive and dusty individual galaxies at \(z\sim 2\)[10, 13]. In Fig. 3, the bump amplitude is shown as a function of cosmic time, including its strength in the extinction curves in MW, LMC, and SMC sightlines[31, 35]. Our inferred bump amplitude, particularly in the individual spectrum of JADES-GS-z6-0, is comparatively high and defies the trend with stellar mass seen at lower redshift. This might be due to a different, likely simpler, geometry of dust with respect to stars in the current sample compared to the lower-redshift counterparts - intriguingly, there is tentative evidence for a colour gradient in JADES-GS-z6-0 (see Methods). Moreover, a direct detection of the bump at \(z\sim 4\)-7 is striking given that at these redshifts, the age of the Universe is only around a billion years (\(\sim 800\) Myr at \(z=6.71\)). A substantial production of carbon and the subsequent formation of carbonaceous grains responsible for the absorption feature through the standard AGB channel would require, particularly in the low-metallicity regime that characterises such early galaxies (i.e. \(Z\sim 0.1\,Z_{\odot}\)[36]), low-mass (\(M\lesssim 2.5\) M\({}_{\odot}\)) and hence long-lived stars to reach the AGB at the end of their lives, after more than 300 Myr[20, 37]. If this is the dominant channel via which carbonaceous grains are formed, the presence of the UV bump implies the onset of star formation in these galaxies occurred within the first half billion years of cosmic time, corresponding to redshift \(z\gtrsim 10\). Indeed, star formation has been shown to occur at this early epoch with the confirmation of four \(z>10\) galaxies in the full JADES sample[23, 24]. However, while the sample of galaxies exhibiting a UV bump does reveal hints of modestly evolved stellar populations, we do not find evidence for substantial star formation activity that occurred on timescales beyond 300 Myr. The absence of clear signatures from such relatively old stellar populations (see Methods for details) suggests that other, faster channels for the production of carbonaceous dust are required in these early systems. One explanation is that these grains are formed on significantly shorter timescales via more massive and rapidly evolving stars, possibly by SNe or Wolf-Rayet (WR) stars, which would overhaul some, and place strong constraints on other, theoretical models of dust production and stellar evolution. Yet, while WRs have been observed to produce carbonaceous dust[38], the subsequent SN type-Ib/c explosion is expected to destroy essentially all dust produced in the preceding WR phase. 
Moreover, for a standard initial mass function (IMF), WR stars (and in particular WC-type stars characterised by dominant carbon lines) are very rare, so they are expected to be minor contributors to the dust budget[39]. Dust production in SN ejecta has been regarded as a potential rapid channel for significant dust production in the early universe, its net efficiency depending on the grain destruction rate in the subsequent reverse shock[40]. However, substantial _carbonaceous_ production in SN ejecta is expected only in some classes of models and for a certain subclass of scenarios (e.g. non-rotating progenitors), while other models favour the formation of silicates or other types of dust[41, 42, 43, 44]. In summary, our detection of carbonaceous dust at \(z\sim 4\)-7 provides crucial constraints on the dust production models and scenarios in the early Universe.

Figure 1: **Spectrum taken by _JWST_/NIRSpec of JADES-GS-z6-0 at redshift z = 6.71.** **a**, Overview of the spectrum (grey solid line) with a power-law fit to the UV continuum (blue solid line). Several spectral features used for spectroscopic redshift confirmation are indicated, including the Lyman-\(\alpha\) break, the [O II] \(\lambda\,3727,\,3730\) Å doublet, and the H\(\beta\), H\(\gamma\) and [O III] \(\lambda\,4960,\,5008\) Å lines. **b**, Zoom-in of the UV bump region around \(\lambda_{\rm emit}=2175\) Å, where a running median (solid black line), representing the attenuated stellar continuum, reveals a deep localised absorption profile. A Drude profile fit within the vertical dashed lines (purple solid line) with respect to the smooth power law (blue solid line) yields an amplitude of 0.50\({}^{+0.08}_{-0.07}\) mag and a central wavelength \(\lambda_{\rm max}=2241^{+28}_{-29}\) Å. **c**, The residuals (\(\Delta F_{\lambda}\)) show that the power-law (PL) fit alone has a significant negative flux excess between \(\sim\)2000 Å and 2400 Å (7.6\(\sigma\)), while the power-law and Drude profile combined (PL+Drude; purple line) provide a significantly better fit (\(\chi^{2}=75\) versus \(\chi^{2}=1.8\) for PL and PL+Drude, respectively). All shading represents 1\(\sigma\) uncertainty.

\begin{table} \begin{tabular}{l l} \hline Right Ascension (deg) & 53.15139 \\ Declination (deg) & –27.81917 \\ \(t_{\rm exp}\) (h) & 27.9 \\ \(z_{\rm spec}\) & 6.70644\({}^{+0.00036}_{-0.00029}\) \\ \(m_{{}_{\rm F115W}}\) (mag) & 28.32 \(\pm\) 0.57 \\ \(M_{{}_{\rm UV}}\) (mag) & –18.59 \(\pm\) 0.57 \\ \(\beta_{{}_{\rm UV}}\) & –1.75\({}^{+0.16}_{-0.17}\) \\ \(Y_{{}_{34}}\) & –3.8\({}^{+1.4}_{-1.1}\) \\ \(Z_{\rm neb}\) (\(Z_{\odot}\)) & 0.21\({}^{+0.20}_{-0.10}\) \\ \(U-V\) (mag) & 0.30\({}^{+0.08}_{-0.07}\) \\ \(M_{{}_{*}}\) (10\({}^{8}\) M\({}_{\odot}\)) & 1.1\({}^{+0.4}_{-0.3}\) \\ \(Z_{{}_{*}}\) (\(Z_{\odot}\)) & 0.3\({}^{+0.1}_{-0.1}\) \\ SFR\({}_{30}\) (M\({}_{\odot}\) yr\({}^{-1}\)) & 2.0\({}^{+1.2}_{-0.7}\) \\ \(t_{{}_{*}}\) (Myr) & 32\({}^{+23}_{-15}\) \\ \hline \end{tabular} Error bars represent a 1\(\sigma\) uncertainty. Rows: (1) Right Ascension in J2000, (2) Declination in J2000, (3) Exposure time (\(t_{\rm exp}\)) in the NIRSpec PRISM spectra in hours, (4) Spectroscopic redshift (\(z_{\rm spec}\)), (5) Apparent AB magnitude in the NIRCam F115W filter (\(m_{{}_{\rm F115W}}\)), (6) Absolute AB magnitude in the UV (\(M_{{}_{\rm UV}}\)), (7) UV spectral slope (\(\beta_{{}_{\rm UV}}\)), (8) Spectral slope change around \(\lambda_{\rm emit}=2175\) Å (\(Y_{{}_{34}}\)), (9) Gas-phase metallicity (\(Z_{\rm neb}\)) in units of Solar metallicity, (10) Rest-frame \(U-V\) colour in magnitudes, (11) Stellar mass (\(M_{{}_{*}}\)) in 10\({}^{8}\) Solar masses, (12) Stellar metallicity (\(Z_{{}_{*}}\)) in units of Solar metallicity, (13) Star formation rate in Solar masses per year averaged on a timescale of 30 Myr (SFR\({}_{30}\)), (14) Mass-weighted stellar age (\(t_{{}_{*}}\)) in Myr. \end{table} Table 1: **Properties of JADES-GS-z6-0.**

Figure 2: **Normalised and stacked spectra around the UV bump of z > 4 JADES galaxies observed by _JWST_/NIRSpec.** Spectra of all galaxies (small black dots) are shifted to the rest frame and normalised to the predicted continuum level at a rest-frame wavelength of \(\lambda_{\rm emit}=2175\) Å in the absence of a UV bump (see Methods section). The solid blue line (shading represents 1\(\sigma\) uncertainty) shows a stacked spectrum obtained by combining all 50 objects in wavelength bins of \(\Delta\lambda_{\rm emit}=30\) Å, clearly revealing emission from the C III] \(\lambda\) 1907 Å, [C III] \(\lambda\) 1909 Å doublet. The stacked spectrum of eleven galaxies selected to have a bump signature (solid black line, shading as 1\(\sigma\) uncertainty) shows the presence of the UV bump around 2175 Å at a significance of 4.6\(\sigma\). The excess attenuation \(A_{\lambda,\,{\rm bump}}\) (curve at the bottom, corresponding to the axis on the right) is fitted with a Drude profile (shown in purple with shading as 1\(\sigma\) uncertainty), where we find an amplitude of \(0.08^{+0.01}_{-0.02}\) mag and a central wavelength \(\lambda_{\rm max}=2185^{+21}_{-20}\) Å.

Figure 3: **Redshift evolution of UV bump strength.** The amplitude of the excess attenuation, \(A_{\lambda,\,\rm max}\), is shown for JADES-GS-z6-0 individually as well as for the stack of eleven \(z\sim 4\)-7 JADES galaxies. 
Points are coloured according to their (average) stellar mass; error bars along the y-axis represent 1\(\sigma\) uncertainty. At \(z\sim 2\), measurements from gamma-ray burst absorbers (Heintz et al.) and from stacked spectra in various bins of stellar mass (Shivaei et al.) or shape of the UV continuum as a whole and in the bump region (Noll et al.) are shown (see Methods for details)[10, 13, 45]. Error bars of the stacked spectra along the x-axis represent the full redshift range, their central values slightly shifted for visualisation purposes. The bump amplitudes in the average MW, LMC, and SMC dust extinction curves[31, 35], converted to an attenuation for a visual extinction range of 0.1 mag \(<A_{V}<0.5\) mag, are indicated with light shadings. The age of the Universe is indicated at the top. A vertical dashed line indicates the minimum timescale required for carbon production by AGB stars (i.e. 300 Myr) if the galaxy formed at \(z_{\rm form}=10\).
2307.06967
A Hierarchy of Normalizing Flows for Modelling the Galaxy-Halo Relationship
Using a large sample of galaxies taken from the Cosmology and Astrophysics with MachinE Learning Simulations (CAMELS) project, a suite of hydrodynamic simulations varying both cosmological and astrophysical parameters, we train a normalizing flow (NF) to map the probability of various galaxy and halo properties conditioned on astrophysical and cosmological parameters. By leveraging the learnt conditional relationships we can explore a wide range of interesting questions, whilst enabling simple marginalisation over nuisance parameters. We demonstrate how the model can be used as a generative model for arbitrary values of our conditional parameters; we generate halo masses and matched galaxy properties, and produce realisations of the halo mass function as well as a number of galaxy scaling relations and distribution functions. The model represents a unique and flexible approach to modelling the galaxy-halo relationship.
Christopher C. Lovell, Sultan Hassan, Daniel Anglés-Alcázar, Greg Bryan, Giulio Fabbian, Shy Genel, ChangHoon Hahn, Kartheik Iyer, James Kwon, Natalí de Santi, Francisco Villaescusa-Navarro
2023-07-13T10:05:13Z
http://arxiv.org/abs/2307.06967v1
# A Hierarchy of Normalizing Flows for Modelling the Galaxy-Halo Relationship ###### Abstract Using a large sample of galaxies taken from the Cosmology and Astrophysics with MachinE Learning Simulations (CAMELS) project, a suite of hydrodynamic simulations varying both cosmological and astrophysical parameters, we train a normalizing flow (NF) to map the probability of various galaxy and halo properties conditioned on astrophysical and cosmological parameters. By leveraging the learnt conditional relationships we can explore a wide range of interesting questions, whilst enabling simple marginalisation over nuisance parameters. We demonstrate how the model can be used as a generative model for arbitrary values of our conditional parameters; we generate halo masses and matched galaxy properties, and produce realisations of the halo mass function as well as a number of galaxy scaling relations and distribution functions. The model represents a unique and flexible approach to modelling the galaxy-halo relationship. ## 1 Introduction Galaxies form within dark matter haloes, and their evolution is closely tied to the evolutionary history of their host halo - an understanding of the galaxy-halo relationship is key to a cosmological interpretation of galaxy populations (Wechsler and Tinker, 2018). Many computational modelling methods take explicit advantage of the galaxy-halo connection, populating haloes in less computationally expensive Dark-Matter only \(N\)-body simulations with galaxies in order to achieve larger volumes, or explore a larger range of parameters (Benson, 2010; Somerville and Dave, 2015). In the past decade a growing number of supervised machine learning (ML) methods for modelling the galaxy-halo relationship have emerged, using properties of the halo as features from which to predict the host galaxy properties (_e.g._ Kamdar et al., 2016; Agarwal et al., 2018; Jo and Kim, 2019; Lovell et al., 2022; de Santi et al., 2022; Jespersen et al., 2022; Icaza-Lizaola et al., 2023; Chittenden and Tojeiro, 2023). Almost all of these methods are deterministic; a given set of halo properties leads to a single predicted galaxy property.1 However, galaxy evolution is not _entirely_ determined by the host halo; other factors contribute to the properties of a galaxy at a given time that are not encoded in the halo properties and assembly history, _e.g._ the stochastic nature of stellar and AGN feedback. Deterministic methods are therefore susceptible to underpredicting the scatter in galaxy properties for a fixed set of input halo properties; there is insufficient information to model the true scatter. Finally, many studies have demonstrated the intrinsic stochasticity in results from numerical galaxy formation simulations, due to both explicit randomness (Genel et al., 2019) and the computational architecture (Borrow et al., 2022). Footnote 1: Rodrigues et al. (2023) demonstrate a non-deterministic approach, however this relies on binning combined with a classification procedure. What we require is a non-deterministic method for populating haloes with galaxies, that can model the multi-dimensional joint distribution of galaxy properties, accounting for the scatter introduced by all latent variables. _Generative models_, particularly those for density estimation, are an approach with promise in this domain (Kingma and Welling, 2013; Goodfellow et al., 2014; Jimenez Rezende et al., 2014). 
Normalizing flows (NF; Dinh et al., 2015; Jimenez Rezende and Mohamed, 2015) are one such technique, offering exact density estimation (equivalent to the multi-dimensional likelihood) and efficient sampling. Hassan et al. (2022) demonstrate the use of NFs on the CAMELS simulation suite (Villaescusa-Navarro et al., 2021; 2023) by training a model on maps of atomic hydrogen density. They build a generative model that can produce HI maps for arbitrary cosmological and astrophysical parameters. Friedman and Hassan (2022) present an update to this model, fully utilising the spatial information from the map using the Glow NF model (Kingma and Dhariwal, 2018) to produce better constraints on cosmological parameters. In this paper we build a generative model for discrete halo (\(M_{h}\)) and galaxy properties (\(M_{*},M_{\rm gas},M_{\bullet},\,{\rm SFR}\)), using a hierarchy of NFs trained on haloes and galaxies taken from the CAMELS simulation suite. ## 2 Methods A normalizing flow (NF) models some data **x** as a bijective transformation of some base distribution, typically a Gaussian noise variable **u**, \[\textbf{x}=f_{\theta}(\textbf{u})\tag{1}\] \[\textbf{u}\sim\pi(\textbf{u})\tag{2}\] where \(f_{\theta}\) is invertible and differentiable, with parameters \(\theta\). This allows the target density \(p_{\phi}(\textbf{x})\) to be written as \[p_{\phi}(\textbf{x})=\pi(f_{\theta}^{-1}(\textbf{x}))\left|\det\left(\frac{\partial f_{\theta}^{-1}}{\partial\textbf{x}}\right)\right|\ . \tag{3}\] For maximum flexibility \(f_{\theta}\) and \(f_{\theta}^{-1}\) are modelled using invertible neural networks (NN). \(f_{\theta}\) can be represented by multiple stacked layers, in order to produce highly complex mappings from the noise to the target density. In order to build a conditional model we require a dataset with pairs of variables, \(\mathcal{D}=\{(\textbf{x},\textbf{z})\}\). Here, the **z** parameters are responsible for the generation of **x**, and we wish to model \(p_{\phi}(\textbf{x}\,|\,\textbf{z})\). To include this conditional dependence in our model we incorporate these parameters in our transformation, \(\textbf{x}=f_{\theta}(\textbf{u},\textbf{z})\) (Winkler et al., 2019). We implement a version of a Neural spline flow (Durkan et al., 2019; Dolatabadi et al., 2020). The Cosmology and Astrophysics with MachinE Learning Simulations (CAMELS; Villaescusa-Navarro et al., 2021) are a large ensemble of \(N\)-body and hydrodynamic simulations exploring the effect of cosmological and astrophysical parameter choices on galaxy evolution and structure formation. In this study we focus on the Simba simulation suite only (Davé et al., 2019). For full details please refer to Villaescusa-Navarro et al. (2021, 2023); Ni et al. (2023). Each simulation is defined by the initial random phases, as well as 4 astrophysical parameters (\(A_{\rm SN1}\), \(A_{\rm SN2}\), \(A_{\rm AGN1}\), \(A_{\rm AGN2}\)) and 2 cosmological parameters (\(\Omega_{\rm m}\), \(\sigma_{8}\)). The following cosmological parameters are kept fixed in all simulations: \(\Omega_{\rm b}=0.049\), \(h=0.6711\), \(n_{s}=0.9624\), \(M_{\nu}=0.0\) eV, \(w=-1\), \(\Omega_{K}=0\). The fiducial astrophysical parameters are defined at \(A=1.0\) and varied around this value to control the relative strength of the various feedback implementations in each simulation. 
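To make the conditional construction above concrete, the following is a minimal PyTorch sketch of a single conditional layer implementing the change-of-variables logic of equations (1)-(3). It uses a simple affine transformation in place of the stacked rational-spline layers used in practice, and all names are illustrative.

```python
import torch
import torch.nn as nn

class ConditionalAffineFlow(nn.Module):
    """One conditional layer: x = f_theta(u, z), where the per-dimension
    log-scale and shift of an affine map are predicted from z."""

    def __init__(self, dim_x, dim_z, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_z, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * dim_x),
        )
        self.base = torch.distributions.Normal(0.0, 1.0)  # pi(u)

    def forward(self, u, z):
        log_s, t = self.net(z).chunk(2, dim=-1)
        return u * torch.exp(log_s) + t  # sample by pushing noise through f

    def log_prob(self, x, z):
        log_s, t = self.net(z).chunk(2, dim=-1)
        u = (x - t) * torch.exp(-log_s)  # invert: u = f^{-1}(x, z)
        # Equation (3): log p(x | z) = log pi(u) + log |det(d f^{-1} / d x)|
        return self.base.log_prob(u).sum(-1) - log_s.sum(-1)
```

Training then amounts to minimising the negative log-likelihood, \(-\log p_{\phi}(\textbf{x}\,|\,\textbf{z})\), averaged over the pairs in \(\mathcal{D}\).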
There are a number of different simulation sets within the CAMELS suite; the Latin Hypercube (LH) set contains 1000 simulations where the 6 parameters are varied using a Latin hypercube; the cosmic variance (CV) set contains 27 simulations that only differ in the value of the random seed in the initial conditions. We train three complementary flows, each conditional on the cosmological and astrophysical parameters (an illustration of the different flows is shown in Figure 1). The _abundance flow_ models the absolute abundance of subhaloes with mass \(>10^{10}\mathrm{M}_{\odot}\), \(p_{\phi}(\textbf{n}\,|\,\textbf{z})\). To mimic the effect of cosmic variance, we augment the LH-set abundances 50 times with Gaussian noise whose amplitude equals the scatter in the abundance in the CV set, and train on this augmented data set. The _halo flow_ models the density distribution of halo masses, \(p_{\phi}(\textbf{y}\,|\,\textbf{z})\). By coupling the _abundance_ and _halo flows_, we can generate the volume normalised halo mass function for arbitrary parameters; an example is shown in the top left corner of Figure 2. Finally, the _galaxy flow_ models the distribution of galaxy properties within dark matter haloes by further conditioning on the subhalo mass, \(p_{\phi}(\textbf{x}\,|\,\textbf{z},\textbf{y})\). We predict the stellar mass, gas mass, black hole mass and star formation rate. We reserve a random subset of entire LH set simulations for testing (15%), and use the rest for training and validation; this ensures there is no overlap between the train and test sets of galaxies with the same astrophysical and cosmological parameters. We use the \(z=0\) snapshot from each simulation, and reserve a study of the redshift dependence for future work. Each flow contains 16 layers, each consisting of a linear rational spline bijection (with 256 segments) coupled to an autoregressive NN layer consisting of two hidden layers with 256 and 128 nodes, respectively. We train using the Adam optimizer (Kingma and Ba, 2014), with a multi-step learning rate starting at \(5\times 10^{-3}\), with \(\gamma=0.1\), using mini batches of size 2048 that are randomly shuffled after each epoch. At the end of each epoch we evaluate on the validation set, and save the model if the validation error has improved, to avoid overfitting. ## 3 Results In this section we demonstrate an example use case for the model by predicting the galaxy and halo properties for a set of parameters not used in the training procedure. We take these parameters from an LH set simulation from the test set, and first predict the halo mass function given the input parameters \(\mathbf{z}\). We then use the _abundance flow_ to predict the cumulative number of subhaloes with mass \(M_{\rm halo}>10^{10}\,M_{\odot}\), \(\mathbf{n}\), and the _halo flow_ to predict the distribution of their masses, \(\mathbf{y}\). Combined we can produce the halo mass function (HMF), shown in the top left panel of Figure 2 for 50 realisations, and compared to the true HMF from the corresponding LH set simulation. The model successfully reproduces the distribution function within the scatter of the realisations. We can also change one of the conditional parameters and explore the impact on the HMF. This is shown in the top row of Figure 2; there is a strong positive correlation between \(\Omega_{\rm m}\) and the normalisation of the HMF. 
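Coupling the _abundance_ and _halo flows_ as described above can be sketched as follows, assuming each trained flow exposes a `sample(n, context)` method in the style of the layer sketched earlier (a hypothetical interface); calling this function repeatedly yields HMF realisations like those shown in Figure 2.

```python
import torch

def sample_hmf(abundance_flow, halo_flow, z_cond, volume, bin_edges):
    # z_cond: conditioning parameters with shape (1, dim_z);
    # bin_edges: 1-D float tensor of log10(M_halo) bin edges.
    # 1) Draw the total subhalo count n ~ p(n | z) and round to an integer.
    n = int(abundance_flow.sample(1, context=z_cond).round().item())
    # 2) Draw n halo masses y ~ p(y | z).
    log_m = halo_flow.sample(n, context=z_cond.expand(n, -1)).flatten()
    # 3) Bin, then normalise by volume and bin width to form the HMF.
    hist = torch.histogram(log_m, bins=bin_edges).hist
    return hist / (volume * (bin_edges[1:] - bin_edges[:-1]))
```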
We can also predict the properties of the galaxy within each host subhalo by providing the subhalo mass as well as the other conditional parameters to the _galaxy flow_. Whilst galaxy properties may be dependent on additional parameters as well as mass, the flow is able to model the full distribution of those properties at a given mass, marginalising over these unknown additional dependencies. The first panel in the second row of Figure 2 shows the galaxy stellar mass function (GSMF) produced when applied to haloes generated from the _abundance_ and _halo flows_. The GSMF is reproduced within the scatter of the 50 realisations. We can, again, fix parameters and explore the impact on the GSMF; we show this for \(\Omega_{\rm m}\), \(\rm A_{SN1}\) & \(\rm A_{SN2}\) in the second row of Figure 2. The third, fourth, fifth and sixth rows in Figure 2 also show predictions for the star forming sequence, the stellar mass-gas mass relation, the stellar mass-black hole mass relation, and the stellar-halo mass relation, and the impact of changing conditional parameters (\(\Omega_{\text{m}}\), \(A_{\text{SN1}}\), \(A_{\text{SN2}}\)) on each of these relations in turn. We emphasise that galaxy properties are predicted jointly, enabling us to predict these relations self-consistently.

Figure 1: High level diagram of the model. The distribution of the conditional cosmological and astrophysical parameters is shown at the top. The _abundance_, _halo_ and _galaxy flows_ are shown below. The arrows highlight the direction of conditional dependence, as well as the mapping from each simple base distribution to the complex target density distribution.

Figure 2: An example of the model predictions when used as a generative model for haloes and galaxies, for fixed and varying parameters. The top row shows the halo mass function (HMF), the second row the galaxy stellar mass function, the third row the star forming sequence, the fourth row the stellar mass–gas mass relation, the fifth row the stellar mass–black hole mass relation, and finally the stellar–halo mass relation. The first row shows predictions using haloes generated from the _abundance_ and _halo flows_, as well as haloes taken directly from the LH set simulation.

## 4 Conclusions We present a novel approach to modelling the galaxy-halo relationship, using the density estimation capabilities of normalising flows to model the coupled halo and galaxy distribution conditioned on astrophysical and cosmological parameters. The model is able to self-consistently predict a number of halo and galaxy relations, and shows interesting correlations with different cosmological and astrophysical parameters, whilst marginalising over other nuisance parameters. There are a number of applications for such a model, from rapid generation of galaxy properties in dark matter only \(N\)-body simulations, to direct and indirect inference of astrophysical and cosmological parameters from individual galaxy properties or predicted scaling relations through simulation based inference (SBI) (Cranmer et al., 2019), an increasingly popular and flexible approach to inference (Papamakarios et al., 2017; Alsing et al., 2019; Hahn et al., 2019; Zhang et al., 2021; Dax et al., 2021; Hahn & Melchior, 2022; Huppenkothen & Bachetti, 2022; Wang et al., 2023). ## 5 Acknowledgements CCL acknowledges support from a Dennis Sciama fellowship funded by the University of Portsmouth for the Institute of Cosmology and Gravitation. 
DAA acknowledges support by NSF grants AST-2009687 and AST-2108944, CXO grant TM2-23006X, Simons Foundation Award CCA-1018464, and Cottrell Scholar Award CS-CSA-2023-028 by the Research Corporation for Science Advancement. GF acknowledges the support of the European Research Council under the Marie Sklodowska Curie actions through the Individual Global Fellowship No. 892401 PiCOGAMBAS and of the Simons Foundation. NSMS acknowledges financial support from FAPESP, grants 2019/13108-0 and 2022/03589-4.
2303.14894
A Linear Weight Transfer Rule for Local Search
The Divide and Distribute Fixed Weights algorithm (ddfw) is a dynamic local search SAT-solving algorithm that transfers weight from satisfied to falsified clauses in local minima. ddfw is remarkably effective on several hard combinatorial instances. Yet, despite its success, it has received little study since its debut in 2005. In this paper, we propose three modifications to the base algorithm: a linear weight transfer method that moves a dynamic amount of weight between clauses in local minima, an adjustment to how satisfied clauses are chosen in local minima to give weight, and a weighted-random method of selecting variables to flip. We implemented our modifications to ddfw on top of the solver yalsat. Our experiments show that our modifications boost the performance compared to the original ddfw algorithm on multiple benchmarks, including those from the past three years of SAT competitions. Moreover, our improved solver exclusively solves hard combinatorial instances that refute a conjecture on the lower bound of two Van der Waerden numbers set forth by Ahmed et al. (2014), and it performs well on a hard graph-coloring instance that has been open for over three decades.
Md Solimul Chowdhury, Cayden R. Codel, Marijn J. H. Heule
2023-03-27T03:06:34Z
http://arxiv.org/abs/2303.14894v1
# A Linear Weight Transfer Rule for Local Search ###### Abstract The _Divide and Distribute Fixed Weights_ algorithm (ddfw) is a dynamic local search SAT-solving algorithm that transfers weight from satisfied to falsified clauses in local minima. ddfw is remarkably effective on several hard combinatorial instances. Yet, despite its success, it has received little study since its debut in 2005. In this paper, we propose three modifications to the base algorithm: a linear weight transfer method that moves a dynamic amount of weight between clauses in local minima, an adjustment to how satisfied clauses are chosen in local minima to give weight, and a weighted-random method of selecting variables to flip. We implemented our modifications to ddfw on top of the solver yalsat. Our experiments show that our modifications boost the performance compared to the original ddfw algorithm on multiple benchmarks, including those from the past three years of SAT competitions. Moreover, our improved solver exclusively solves hard combinatorial instances that refute a conjecture on the lower bound of two Van der Waerden numbers set forth by Ahmed et al. (2014), and it performs well on a hard graph-coloring instance that has been open for over three decades. ## 1 Introduction Satisfiability (SAT) solvers are powerful tools, able to efficiently solve problems from a broad range of applications such as verification [11], encryption [25], and planning [9, 17]. The most successful solving paradigm is conflict-driven clause learning (CDCL) [19, 24]. However, stochastic local search (SLS) outperforms CDCL on many classes of satisfiable formulas [6, 18, 22, 23, 27], and it can be used to guide CDCL search [7]. SLS algorithms solve SAT instances by incrementally changing a truth assignment until a solution is found or until timeout. At each step, the algorithm flips the truth value of a single boolean variable, often according to some heuristic. A common heuristic is flipping variables that reduce the number of falsified clauses in the formula, but this is not the only one. The algorithm reaches a _local minimum_ when no variable can be flipped to improve its heuristic. At that point, the algorithm either adjusts its truth assignment or internal state to _escape_ the local minimum, or it starts over. Refer to chapter 6 from the Handbook of Satisfiability [4] for a more detailed discussion of SLS algorithms. _Dynamic local search_ (DLS) algorithms are SLS algorithms that assign a weight to each clause. They then flip variables to reduce the amount of weight held by the falsified clauses. DLS algorithms escape local minima by adjusting clause weights until they can once again flip variables to reduce the amount of falsified weight. Several DLS algorithms have been studied. For example, the Pure Additive Weighting Scheme algorithm (paws) [26] and the Scaling and Probabilistic Smoothing algorithm (saps) [14] both increase the weight of falsified clauses in local minima. A drawback of this method of escaping local minima is that the clause weights must periodically be re-scaled to prevent overflow. The Divide and Distribute Fixed Weights algorithm (ddfw) [15] introduces an alternative way of escaping local minima: increase the weight of falsified clauses by taking weight from satisfied clauses. In local minima, ddfw moves a fixed, constant amount of weight to each falsified clause from a satisfied clause it shares at least one literal with. 
The transfer method keeps the total amount of clause weight constant, eliminating the need for a re-scaling phase. Another consequence of this transfer method is that as more local minima are encountered, difficult-to-satisfy clauses gather more weight. Thus, ddfw dynamically identifies and prioritizes satisfying hard clauses. Recent work shows that ddfw is an effective algorithm. For example, ddfw (as implemented in ubcsat[28]1) is remarkably effective on matrix multiplication and graph-coloring problems [12, 13]. Yet despite its success, ddfw has received little research attention. In this paper, we revisit the ddfw algorithm to study why it works well and to improve its performance. Footnote 1: To the best of our knowledge, there is no official implementation or binary of the original ddfw[15] available. Our contributions are as follows. We propose three modifications to the ddfw algorithm. We first introduce a linear weight transfer rule to allow for a more dynamic transfer of weight in local minima. We then adjust a performance-critical parameter that randomizes which satisfied clause gives up weight in local minima. Our adjustment is supported by an empirical analysis. Finally, we propose a new randomized method for selecting which variable to flip. We implement each of our modifications on top of the state-of-the-art SLS solver yalsat to create a new implementation of ddfw that supports parallelization and restarts. We then evaluate our solver against a set of challenging benchmarks collected from combinatorial problem instances and the past three years of SAT competitions. Our results show that our modifications boost the performance of ddfw: Our best-performing version of ddfw solves 118 SAT Competition instances, a vast improvement over a baseline of 83 solves from the original algorithm. Our solver also exhibits a 16% improvement over the baseline on a set of combinatorial instances. Moreover, in parallel mode, our solver solves instances that refute a conjecture on the lower bound of two van der Waerden numbers [2], and it matches performance with the winning SLS solver from the 2021 SAT competition on a graph-coloring instance that has been open for the past three decades. ## 2 Preliminaries SAT solvers operate on propositional logic formulas in _conjunctive normal form_ (CNF). A CNF formula \(F=\bigwedge_{i}C_{i}\) is a conjunction of clauses, and each clause \(C_{i}=\bigvee_{j}\ell_{j}\) is a disjunction of boolean literals. We write \(v\) and \(\overline{v}\) as the positive and negative literals for the boolean variable \(v\), respectively. A truth assignment \(\alpha\) maps boolean variables to either true or false. A literal \(v\) (resp. \(\overline{v}\)) is satisfied by \(\alpha\) if \(\alpha(v)\) is true (\(\alpha(v)\) is false, respectively). A clause \(C\) is satisfied by \(\alpha\) if \(\alpha\) satisfies at least one of its literals. A formula \(F\) is satisfied by \(\alpha\) exactly when all of its clauses are satisfied by \(\alpha\). Two clauses \(C\) and \(D\) are _neighbors_ if there is a literal \(\ell\) with \(\ell\in C\) and \(\ell\in D\). Let Neighbors(\(C\)) be the set of neighbors of \(C\) in \(F\), excluding itself. Many SLS algorithms assign a weight to each clause. Let \(W:\mathcal{C}\rightarrow\mathbb{R}_{\geq 0}\) be the mapping that assigns weights to the clauses in \(\mathcal{C}\). One can think of \(W(C)\) as the cost to leave \(C\) falsified. We call the total amount of weight held by the falsified clauses the _falsified weight_. 
A variable that, when flipped, reduces the falsified weight is called a _weight-reducing variable_ (wrv). A variable that doesn't affect the falsified weight when flipped is a _sideways variable_ (sv). ## 3 The ddfw Algorithm Algorithm 1 shows the pseudocode for the ddfw algorithm. ddfw attempts to find a satisfying assignment for a given CNF formula \(F\) over MAXTRIES trials. The weight of each clause is set to \(w_{0}\) at the start of the algorithm. Each trial starts with a random assignment. By following a greedy heuristic method, ddfw selects and then flips weight-reducing variables until none are left. At this point, it either flips a sideways variable, if one exists and if a weighted coin flip succeeds, or it enters the weight transfer phase, where each falsified clause receives a fixed amount of weight from a maximum-weight satisfied neighbor. Occasionally, ddfw transfers weight from a random satisfied clause instead, allowing weight to move more fluidly between neighborhoods. The amount of weight transferred depends on whether the selected clause has more than \(w_{0}\) weight. There are five parameters in the original ddfw algorithm: the initial weight \(w_{0}\) given to each clause, the two weighted-coin thresholds spt and cspt for sideways flips and transfers from random satisfied clauses, and the amount of weight to transfer in local minima \(\mathtt{c}_{>}\) and \(\mathtt{c}_{=}\). In the original ddfw paper, these five values are fixed constants, with \(w_{0}=8\), \(\mathtt{spt}=0.15\), \(\mathtt{cspt}=0.01\), \(\mathtt{c}_{>}=2\), and \(\mathtt{c}_{=}=1\). ddfw is unique in how it transfers weight in local minima. Similar SLS algorithms increase the weight of falsified clauses (or decrease the weight of satisfied clauses) globally; weight is added and removed based solely on whether the clause is satisfied. ddfw instead moves weight among clause neighborhoods, with falsified clauses receiving weight from satisfied neighbors. One reason why this weight transfer method may be effective is that satisfying a falsified clause \(C\) by flipping literal \(\overline{\ell}\) to \(\ell\) (\(\in C\)) increases the number of true literals in satisfied clauses that neighbor \(C\) on \(\ell\). Thus, \(C\) borrows weight from satisfied clauses that tend to remain satisfied when \(C\) itself becomes satisfied. As a result, ddfw satisfies falsified clauses while keeping satisfied neighbors satisfied. The existence of _two_ weight transfer parameters \(\mathtt{c}_{>}\) and \(\mathtt{c}_{=}\) deserves discussion. Let _heavy clauses_ be those clauses \(C\) with \(W(C)>w_{0}\). Lines 16-19 in Algorithm 1 allow for a different amount of weight to be taken from heavy clauses than from clauses with the initial weight. Because lines 14-15 ensure that the selected clause \(C_{s}\) will have at least \(w_{0}\) weight, \(\mathtt{c}_{=}\) is used when \(W(C_{s})=w_{0}\) and \(\mathtt{c}_{>}\) is used when \(W(C_{s})>w_{0}\) (hence the notation). The original algorithm sets \(\mathtt{c}_{>}=2\) and \(\mathtt{c}_{=}=1\), which has the effect of taking more weight from heavy clauses. 
```
Input: CNF formula F, w0, spt, cspt, c>, c=
Output: Satisfiability of F
 1  W(C) <- w0 for all C in F
 2  for t = 1 to MAXTRIES do
 3      alpha <- random truth assignment on the variables in F
 4      for f = 1 to MAXFLIPS do
 5          if alpha satisfies F then return "SAT"
 6          else
 7              if there is a wrv then
 8                  Flip a wrv that most reduces the falsified weight
 9              else if there is a sv and rand <= spt then
10                  Flip a sideways variable
11              else
12                  foreach falsified clause C do
13                      Cs <- maximum-weighted satisfied clause in Neighbors(C)
14                      if W(Cs) < w0 or rand <= cspt then
15                          Cs <- random satisfied clause with W >= w0
16                      if W(Cs) > w0 then
17                          Transfer c> weight from Cs to C
18                      else
19                          Transfer c= weight from Cs to C
20  return "No SAT"
```
**Algorithm 1** The ddfw algorithm ## 4 Solvers, Benchmarks, and Hardware The authors of the original ddfw algorithm never released their source code or any binaries. The closest thing we have to a reference implementation is the one in the SLS SAT-solving framework ubcsat[27, 28]. We call this implementation ubc-ddfw, and we use it as a baseline in our experiments. Unfortunately, ubc-ddfw cannot be extended to implement our proposed modifications due to its particular architecture. Instead, we implemented ddfw on top of yalsat[5], which is currently one of the strongest local search SAT solvers. For example, it is the only local search solver in Mallob-mono [21], the clear winner of the cloud track in the SAT Competitions of 2020, 2021, and 2022. yalsat uses probsat[3] as its underlying algorithm, which flips variables in falsified clauses drawn from an exponential probability distribution. One benefit of implementing ddfw on top of yalsat is that yalsat supports parallelization, which can be helpful when solving challenging formulas. In our experiments, we compare our implementation of ddfw to ubc-ddfw to verify that the two implementations behave similarly. Our implementation of ddfw on top of yalsat was not straightforward. First, we switched the underlying SLS algorithm from probsat to ddfw. Then we added additional data structures and optimizations to make our implementation efficient. For example, one potential performance bottleneck for ddfw is calculating the set of weight-reducing variables for each flip. Every flip and adjustment of clause weight can change the set, so the set must be re-computed often. A naive implementation that loops through all literals in all falsified clauses is too slow, since any literal may appear in several falsified clauses, leading to redundant computation. Instead, we maintain a list of variables uvars that appear in any falsified clause. After each flip, this list is updated. To compute the set of weight-reducing variables, we iterate over the variables in uvars, hitting each literal once. In this way, we reduce redundant computation. Adding our proposed modifications to our implementation was simpler. We represent clause weights with floating-point numbers, and the linear weight transfer rule replaced the original one. We also made the variable selection and weight transfer methods modular, so our modifications slot in easily2. Footnote 2: Source code of our system is available at [https://github.com/solimul/yal-lin](https://github.com/solimul/yal-lin)
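To make the core operations concrete, below is a compact Python sketch of the falsified-weight computation behind lines 7-8 and of the weight-transfer phase in lines 12-19 of Algorithm 1. It assumes a clause is a list of non-zero integer literals, `occurs[lit]` maps a literal to the indices of clauses containing it, `sat_count[c]` is the number of true literals in clause `c`, and `assign[v]` is the truth value of variable `v`; these names and the non-incremental data structures are illustrative only, in contrast to the incremental uvars bookkeeping described above.

```python
import random

def delta_w(v, assign, occurs, sat_count, weight):
    # Reduction in falsified weight if v is flipped; v is a wrv iff this is > 0.
    true_lit = v if assign[v] else -v          # the literal of v currently true
    gained = sum(weight[c] for c in occurs[-true_lit] if sat_count[c] == 0)
    lost = sum(weight[c] for c in occurs[true_lit] if sat_count[c] == 1)
    return gained - lost

def transfer_phase(formula, occurs, sat_count, weight,
                   w0=8.0, cspt=0.01, c_gt=2.0, c_eq=1.0):
    # Lines 12-19: every falsified clause takes weight from a maximum-weight
    # satisfied neighbour, or (when that neighbour is lighter than w0, or with
    # probability cspt) from a random satisfied clause with weight >= w0.
    eligible = [c for c in range(len(formula))
                if sat_count[c] > 0 and weight[c] >= w0]  # not refreshed below
    for c in [c for c in range(len(formula)) if sat_count[c] == 0]:
        nbrs = {d for lit in formula[c] for d in occurs[lit] if d != c}
        sat_nbrs = [d for d in nbrs if sat_count[d] > 0]
        cs = max(sat_nbrs, key=lambda d: weight[d], default=None)
        if cs is None or weight[cs] < w0 or random.random() <= cspt:
            cs = random.choice(eligible)  # the full solver handles corner cases
        amount = c_gt if weight[cs] > w0 else c_eq
        weight[cs] -= amount
        weight[c] += amount
```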
We evaluated our implementations of ddfw against two benchmarks. The **Combinatorial (COMB)** set consists of 65 hard instances from the following eight benchmark families collected by Heule:3 (i) 26x26 (4 grid positioning instances), (ii) asias (2 almost square packing problems), (iii) MM (20 matrix multiplication instances), (iv) mphf (12 cryptographic hash instances), (v) ptn (2 Pythagorean triple instances), (vi) Steiner (3 Steiner triples cover instances [20]), (vii) wap (9 graph-coloring instances [16]), and (viii) vdw (13 van der Waerden number instances). These benchmarks are challenging for modern SAT solvers, including SLS solvers. The wap benchmark contains three instances that have been open for three decades, and vdw contains two instances that, if solved, refute conjectures on lower bounds for two van der Waerden numbers [2]. Footnote 3: [https://github.com/marijnheule/benchmarks](https://github.com/marijnheule/benchmarks) The **SAT Competition (SATComp)** set consists of all 1,174 non-duplicate main-track benchmark instances from the 2019 SAT Race and the 2020 and 2021 SAT Competitions. The competition suites contain medium-hard to very challenging benchmarks, most of which are contributed by the competitors. Unless otherwise specified, we used a timeout of 18,000 and 5,000 seconds for the COMB and SATComp instances, respectively, in our experiments. We used the StarExec cluster [1], where each node has an Intel E5 CPU with a 2.40 GHz clock speed and a 10240 KB cache. For experiments in this cluster, we used at most 64 GB of RAM. To perform experiments on the 3 open wap and 2 vdw instances, we used a different cluster with the following specifications: two AMD EPYC 7742 CPUs with a 3.40 GHz clock speed, each with 64 cores, 256 MB of L3 cache, and 512 GB of RAM. ## 5 Modifications to the ddfw Algorithm We propose three modifications to ddfw. The first is a linear rule for transferring a dynamic amount of weight in local minima. The second is an adjustment of the cspt parameter. The third is the introduction of a weighted-random method for selecting which variable to flip. ### The Linear Weight Transfer Rule The reference implementation of ddfw, ubc-ddfw, represents its clause weights as integers and transfers fixed integer weights in local minima. While this design decision allows ubc-ddfw to have a fast implementation, it unnecessarily restricts the amount of weight transferred in local minima to be integer-valued. In addition, the choice to transfer a fixed, constant amount of weight prevents ddfw from adapting to situations where more weight must be transferred to escape a local minimum, thus requiring multiple weight transfer rounds. To address these concerns, we propose a dynamic linear weight transfer rule to operate on floating-point-valued clause weights. Let \(C_{s}\) be the selected satisfied clause from which to take weight in a local minimum, as in line 13 in Algorithm 1. Our new rule transfers \[\mathtt{a}\cdot W(C_{s})+\mathtt{c}\] weight, where \(0\leq\mathtt{a}\leq 1\) is a multiplicative parameter and \(\mathtt{c}\geq 0\) is an additive parameter. It is not clear that the addition of a multiplicative parameter is helpful, nor what a good pair of \((\mathtt{a},\mathtt{c})\) values would be. So, we performed a parameter search with our solver for \(\mathtt{a}\in[0,0.2]\) in steps of 0.05 and \(\mathtt{c}\in[0,2]\) in steps of 0.25 for both of our instance sets with a 900 second timeout per run. (A parameter search using all 1,174 instances in the SATComp set was not feasible. 
We instead did the search on the 168 instances from the SATComp set that were solved by some setting in earlier experimentation. In Section 6, all instances are used.) The PAR-2 scores4 for the SATComp and COMB benchmark sets for each pair of \((\mathtt{a},\mathtt{c})\) values are shown in Figure 1. Footnote 4: The PAR-2 score is defined as the average solving time, while taking 2 \(\times\) timeout as the time for unsolved instances. A lower score is better. The plots in Figure 1 show that values of \(\mathtt{a}\) and \(\mathtt{c}\) close to 0 degrade performance, likely due to the need for many weight-transfer rounds to escape local minima. The beneficial effect of higher values of \(\mathtt{a}\) and \(\mathtt{c}\) is more pronounced in the parameter search on the SATComp instances (the left plot). Since the best-performing settings have nonzero \(\mathtt{a}\) and \(\mathtt{c}\) values, we infer that both parameters are needed for improved performance.

Figure 1: Parameter searches for \(a\in[0,0.2]\) in steps of \(0.05\) and \(c\in[0,2]\) in steps of \(0.25\) on the SATComp (left plot) and COMB (right plot) instances. A lower PAR-2 score is better. There is not a datum for \((a,c)=(0,0)\) since no weight would be transferred.

### How Much Weight Should be Given Away Initially? On lines 16-19 of Algorithm 1, ddfw takes \(c_{>}\) weight away from the selected clause \(C_{s}\) if \(C_{s}\) is heavy and \(c_{=}\) weight otherwise. The linear rule introduced above can similarly be extended to four parameters: \(a_{>}\), \(a_{=}\), \(c_{>}\), and \(c_{=}\). In the original ddfw paper, \(c_{>}\) (\(=2\)) is greater than \(c_{=}\) (\(=1\)), meaning that heavy clauses give away more weight than clauses with the initial weight in local minima. The intuition behind this is simple: clauses with more weight should give away more weight. For the extended linear rule, one could adopt a similar strategy by setting \(a_{>}\) greater than \(a_{=}\) and \(c_{>}\) greater than \(c_{=}\). However, one effect of our proposed linear rule is that once clauses give or receive weight, they almost never again have exactly \(w_{0}\) weight. As a result, the parameters \(a_{=}\) and \(c_{=}\) control how much weight a clause gives away _initially_. Since the maximum-weight neighbors of falsified clauses tend to be heavy as the search proceeds, the effect of \(a_{=}\) and \(c_{=}\) diminishes over time, but they remain important at the start of the search and for determining how much weight the algorithm has available to assign to harder-to-satisfy clauses. The findings in a workshop paper [8] by two co-authors of this paper indicate that ddfw achieves a better performance when clauses initially give more weight. These findings suggest setting \(c_{=}\) greater than \(c_{>}\) and \(a_{=}\) greater than \(a_{>}\). In Section 6, we evaluate ddfw on the extended linear rule and investigate whether clauses should initially give away more or less weight. ### The cspt Parameter On lines 14-15 of Algorithm 1, ddfw sometimes discards the maximum-weight satisfied neighboring clause \(C_{s}\) and instead selects a random satisfied clause. The cspt parameter controls how often the weighted coin flip on line 14 succeeds. Though these two lines may appear to be minor, a small-scale experiment revealed that the cspt parameter is performance-critical. We ran our implementation of the original ddfw algorithm on the COMB set with an 18,000 second timeout. 
When we set cspt to 0, meaning that falsified clauses received weight solely from satisfied neighbors, it solved a single instance; when we set cspt to 0.01 (the value in the original ddfw algorithm), it solved 21 instances. Among the eight families in COMB, the wap family was the most sensitive to the change of cspt value from 0 (solved 0) to 0.01 (solved 6 out of 9). We isolated these nine instances and ran a parameter search on them for cspt \(\in[0.01,1]\) in steps of 0.01, for a total of 900 runs. We used an 18,000 second timeout per run. The PAR-2 scores are reported in Figure 2. In Figure 2, we observe that cspt values near 0 and above 0.2 cause an increase in the PAR-2 score. These results indicate that ddfw is sensitive to the cspt value and that the cspt value should be set higher than its original value of 0.01, but not too high, which could potentially degrade the performance of the solver. We use these observations to readjust the cspt parameter in our empirical evaluation presented in Section 6.

Figure 2: The impact of cspt values on the performance of ddfw on the wap instances.

### A Weighted-random Variable Selection Method On line 8 of Algorithm 1, ddfw flips a weight-reducing variable that most reduces the amount of falsified weight. Such a greedy approach may prevent ddfw from exploring other, potentially better areas of the search space. Inspired by probsat, which makes greedy moves only some of the time, we introduce a new randomized method that flips a weight-reducing variable according to the following probability distribution: \[\mathbb{P}(\text{Flipping wrv }v)=\frac{\Delta W(v)}{\sum_{u\in\texttt{wrv}}\Delta W(u)},\] where \(\Delta W(v)\) is the reduction in falsified weight if \(v\) is flipped.
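A minimal sketch of this selection rule, reusing the illustrative `delta_w` helper from the earlier sketch:

```python
import random

def pick_wrv_weighted(uvars, assign, occurs, sat_count, weight):
    # Candidates: variables in falsified clauses whose flip reduces the
    # falsified weight; each is picked with probability proportional to delta_w.
    cand = [(v, delta_w(v, assign, occurs, sat_count, weight)) for v in uvars]
    cand = [(v, d) for v, d in cand if d > 0]
    if not cand:
        return None  # local minimum: fall through to the weight-transfer phase
    variables, deltas = zip(*cand)
    return random.choices(variables, weights=deltas, k=1)[0]
```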
## 6 Empirical Evaluation In this section, we present our empirical findings. Since we evaluated several different solvers, we refer to the solvers by the following names: the ubcsat version of ddfw is ubc-ddfw, the version of yalsat that implements probsat is yal-prob, and our implementation of ddfw on top of yalsat is yal-lin. In all of our experiments, we use the default random seed5 present in each solver, and we set the initial clause weight \(w_{0}=8\), as in the original ddfw paper. Footnote 5: Results for additional experiments with a different seed are available in Appendix 1.A. In our experiments with yal-lin, we varied the configuration of the solver according to our proposed modifications. We use the identifying string W-cC-P to refer to a configuration for yal-lin, where \(\texttt{W}\in\{\texttt{fw},\texttt{lw}\}\) is the weight transfer method (fw stands for "fixed weight," lw for "linear weight"), \(\texttt{C}\in\{0.01,0.1\}\) is the cspt value, and \(\texttt{P}\in\{\texttt{grdy},\texttt{wrnd}\}\) is the variable selection method (grdy stands for the original "greedy" method, and wrnd stands for our proposed "weighted random" method). For example, the string fw-c.01-grdy describes the original ddfw algorithm, with \(\texttt{c}_{>}=2\) and \(\texttt{c}_{=}=1\). ### Evaluation Without Restarts We evaluate how yal-lin performs without restarts, meaning that ddfw runs until timeout without starting from a fresh random assignment. To disable restarts, we set MAXTRIES to 1 and MAXFLIPS to an unlimited number of flips. For the COMB and SATComp benchmark sets, we set a timeout of 18,000 and 5,000 seconds, respectively. We first checked that our solver yal-lin (with configuration fw-c.01-grdy) behaves similarly to the baseline implementation, ubc-ddfw. The solvers performed almost identically on the two benchmark sets: ubc-ddfw solved 22 of the COMB instances and 80 of the SATComp instances; yal-lin solved 21 and 83, respectively. We attribute the slight difference in solve counts to random noise. These results indicate that we implemented yal-lin correctly. We next evaluate how yal-lin performs under changes in the cspt value and variable selection method. We run yal-lin with the fixed weight transfer method on both benchmarks with all four combinations of \(\texttt{C}\in\{0.01,0.1\}\) and \(\texttt{P}\in\{\texttt{grdy},\texttt{wrnd}\}\). The solve counts and PAR-2 scores are shown in Table 1. Isolating just the change in variable selection method (scanning across rows in Table 1), we see that the weighted-random method outperforms the greedy method for each benchmark and cspt value. There is improvement both in the solve count (ranging from an additional 1 to 5 solves) and in the PAR-2 score. While the improvements may be random noise, the results indicate that injecting some randomness into how variables are flipped may lead to better performance. Isolating the change in cspt value (scanning down columns in Table 1), we see that the higher cspt value of 0.1 outperforms the cspt value of 0.01. Improvements range from 1 additional solve to 16 additional solves. We note that the improvements when increasing the cspt value are more pronounced than when changing the variable selection method, which gives further evidence that the cspt value is performance-critical. In Section 7, we present a possible explanation for why the cspt parameter is so important. **The linear weight transfer rule.** As we noted in Section 5.2, the linear weight transfer rule can be extended to include four parameters: two multiplicative and two additive. We tested yal-lin on three particular settings of these four parameters (the rule itself is sketched in code after the list below), which we call lw-itl (linear weight initial transfer low), lw-ith (linear weight initial transfer high), and lw-ite (linear weight initial transfer equal).

* lw-itl takes a low initial transfer from clauses in local minima by setting \(\texttt{a}_{=}<\texttt{a}_{>}\) and \(\texttt{c}_{=}<\texttt{c}_{>}\).
* lw-ith takes a high initial transfer from clauses in local minima by setting \(\texttt{a}_{=}>\texttt{a}_{>}\) and \(\texttt{c}_{=}>\texttt{c}_{>}\).
* lw-ite does not distinguish clauses by weight, and sets the two pairs of parameters equal.
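As referenced above, the extended rule itself reduces to a small function (a sketch with illustrative names; the concrete parameter values for the three settings are derived next):

```python
def transfer_amount(w_cs, w0, a_gt, a_eq, c_gt, c_eq):
    # Weight taken from the selected satisfied clause C_s in a local minimum.
    # The ">" parameters apply to heavy clauses (W(C_s) > w0); the "=" parameters
    # apply otherwise and hence govern the initial transfer, since lines 14-15
    # of Algorithm 1 guarantee W(C_s) >= w0.
    if w_cs > w0:
        return a_gt * w_cs + c_gt
    return a_eq * w_cs + c_eq
```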
In the left plot of Figure 1, \(\mathtt{a}\) values for the top 10% of the settings (by PAR-2 scores) are in the range [0.05, 0.1]. Hence, we use 0.05 and 0.1 as the values for \(\texttt{a}_{>}\) and \(\texttt{a}_{=}\) in lw-itl and lw-ith. We keep the values for \(\texttt{c}_{>}\) and \(\texttt{c}_{=}\) at 2 and 1, following the original ddfw algorithm. For lw-ite, we take the average of the two pairs of values, with \(\texttt{a}_{>}=\texttt{a}_{=}=0.075\) and \(\texttt{c}_{>}=\texttt{c}_{=}=1.75\). Table 2 shows the parameter values for the three configurations that we tested. We compare our three new configurations against the original one across the two variable selection methods. We set cspt = 0.1, as our prior experiment showed it to be better than 0.01. Table 3 summarizes the results. Scanning down the columns of Table 3, we see that all three linear weight configurations perform at least as well as the fixed weight version, regardless of variable selection method. The improvements on the COMB benchmark are modest, with at most 4 additional solved instances. The improvements on the SATComp benchmark are more substantial, with a maximum of 17 additional solved instances. Overall, the best-performing linear weight configuration was lw-ith, which transfers more weight from clauses with the initial weight. These results support prior findings that more weight should be freed up to the falsified clauses in local minima. The best-performing variable selection method continues to be the weighted random method wrnd.

\begin{table} \begin{tabular}{c|c c|c c|c c|c c} \hline & \multicolumn{4}{c|}{COMB} & \multicolumn{4}{c}{SATComp} \\ cspt & \multicolumn{2}{c|}{grdy} & \multicolumn{2}{c|}{wrnd} & \multicolumn{2}{c|}{grdy} & \multicolumn{2}{c}{wrnd} \\ value & \#solved & PAR-2 & \#solved & PAR-2 & \#solved & PAR-2 & \#solved & PAR-2 \\ \hline 0.01 & 21 & 25393 & 24 & 23871 & 83 & 9339 & 87 & 9312 \\ 0.1 & 24 & 23137 & **25** & **22538** & 98 & 9223 & **103** & **9188** \\ \hline \end{tabular} \end{table} Table 1: Solve counts and PAR-2 scores for different configurations of yal-lin. The configurations vary the cspt value and the variable selection method, with the weight transfer method being fw. The best configuration for each benchmark is bolded.

\begin{table} \begin{tabular}{c|c|c|c|c} \hline linearwt versions & \(\texttt{a}_{>}\) & \(\texttt{a}_{=}\) & \(\texttt{c}_{>}\) & \(\texttt{c}_{=}\) \\ \hline lw-itl & 0.1 & 0.05 & 2 & 1 \\ lw-ite & 0.075 & 0.075 & 1.75 & 1.75 \\ lw-ith & 0.05 & 0.1 & 1 & 2 \\ \hline \end{tabular} \end{table} Table 2: Parameter values for three versions of linearwt.

**Analysis of solve count over runtime.** In addition to solve counts and PAR-2 scores for the three linear weight configurations, we report solve counts as a function of solving time. The data for ten experimental settings of yal-lin on the two benchmarks are shown in Figure 3.

\begin{table} \begin{tabular}{l|r r|r r|r r|r r} \hline Weight & \multicolumn{4}{c|}{COMB} & \multicolumn{4}{c}{SATComp} \\ Transfer & \multicolumn{2}{c|}{grdy} & \multicolumn{2}{c|}{wrnd} & \multicolumn{2}{c|}{grdy} & \multicolumn{2}{c}{wrnd} \\ Method & \#solved & PAR-2 & \#solved & PAR-2 & \#solved & PAR-2 & \#solved & PAR-2 \\ \hline fixedwt & 24 & 23871 & 25 & 22538 & 98 & 9223 & 103 & 9188 \\ \hline lw-itl & 26 & 22256 & 27 & 21769 & 98 & 9237 & 104 & 9189 \\ lw-ite & **28** & **21233** & 27 & 22228 & 111 & 9129 & 113 & 9114 \\ lw-ith & 26 & 22142 & **28** & 21338 & 115 & 9082 & **118** & **9055** \\ \hline \end{tabular} \end{table} Table 3: Solve counts and PAR-2 scores for different configurations of yal-lin. The configurations vary the linear weight transfer method while keeping the cspt value fixed at 0.1. The best configuration for each benchmark is bolded.

Figure 3: Performance profiles of yal-lin (fw-c.01-grdy) and nine modifications for COMB (left) and SATComp (right).

Note that the original ddfw setting is represented by the setting fw-c.01-grdy, and is our baseline. For the COMB benchmark (Figure 3, left plot), all nine other settings (our modifications) outperform the baseline in terms of solving speed and number of solved instances. The best settings are lw-ith-c.1-wrnd and lw-ite-c.1-grdy, which perform on par with each other and solve 28 instances by timeout. For the SATComp benchmark (Figure 3, right plot), the dominance of the setting lw-ith-c.1-wrnd is more pronounced. For about the first 1,000 seconds, this setting performs similarly to lw-ith-c.1-grdy. 
After that, however, it begins to perform the best of all the settings, and it ends up solving the most instances by timeout, at 118. The baseline setting fw-c.01-grdy ends up solving 83 instances at timeout, which is 35 fewer than lw-ith-c.1-wrnd. These two plots clearly show that our modifications substantially improve the original ddfw algorithm.

### Evaluation With Restarts

Many SLS algorithms restart their search with a random assignment after a fixed number of flips. By default, yalsat also performs restarts. However, at each restart, yalsat dynamically sets a new restart interval as \(r=100{,}000x\) for some integer \(x\geq 1\), which is initialized to 1 and updated after each restart as follows: if \(x\) is a power of 2, then \(x\) is set to 1, and otherwise to \(2x\). The way yalsat initializes its assignment at restart also differs from many SLS algorithms. On some restarts, yalsat uses the best cached assignment. For all others, it restarts with a fresh random assignment. In this way, it attempts to balance exploitation and exploration. We also evaluated yal-lin with yalsat-style restarts. On a restart, the adjusted clause weights are kept. The hope is that the adjusted weights help the solver descend the search landscape faster. We compare yal-prob against ten experimental settings of yal-lin with restarts enabled. The best solver in this evaluation is yal-lin with the setting lw-ith-c.1-grdy on the COMB benchmark and the setting lw-ith-c.1-wrnd on the SATComp benchmark, which solve 11 and 49 more instances than yal-prob, respectively. Figure 4 shows solve counts against solving time, and it confirms that all the yal-lin settings solve instances substantially faster than yal-prob.

Figure 4: Solve time comparisons between base yal-prob and 10 yal-lin settings for COMB and SATComp, where restarts are enabled.

### Solving Hard Instances

**Closing wap-07a-40.** The wap family from the COMB benchmark contains three open instances: wap-07a-40, wap-03a-40 and wap-4a-40. We attempted to solve these three instances using the parallel version of yal-lin with the ten yal-lin settings (without restarts) used in Section 6.1 on a cluster node with 128 cores and a timeout of 18,000 seconds. All of our settings except fw-c.01-grdy (the baseline) solve the wap-07a-40 instance. The best setting for this experiment was lw-itl-c.1-wrnd, which solves wap-07a-40 in just 1,168.64 seconds. However, we note that lstech_map (LMpl) [30], the winner of the SAT track of the SAT Competition 2021, also solves wap-07a-40, in 2,103.12 seconds, almost twice the time required by our best configuration lw-itl-c.1-wrnd for solving this instance. Thus, for solving this open instance, our best setting compares well with the state-of-the-art solver for satisfiable instances. With restarts, the setting lw-itl-c.1-wrnd, the best setting for this experiment, was not able to solve any of these three instances.

**New lower bounds for van der Waerden/Green numbers.** The _van der Waerden theorem_ [29] is a theorem about the existence of monochromatic arithmetic progressions among a set of numbers. It states the following: there exists a smallest number \(n=W(k;t_{1},\ldots,t_{i},\ldots,t_{k})\) such that any coloring of the integers \(\{1,2,\ldots,n\}\) with \(k\) colors contains an arithmetic progression of length \(t_{i}\) of color \(i\) for some \(i\). In recent work, Ben Green showed that these numbers grow much faster than conjectured and that their growth can be observed in experiments [10].
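As a concrete illustration of the combinatorics behind these numbers, the following toy Python checker verifies that a 2-coloring of \(\{1,\ldots,n\}\) witnesses a lower bound on \(W(2;3,t)\). It is a sketch for intuition only, not the CNF encoding used by the solvers.

```python
def has_mono_ap(coloring, color, length):
    """True if the coloring of {1,...,n} (dict i -> color) contains a
    monochromatic arithmetic progression of `length` in `color`."""
    n = len(coloring)
    for start in range(1, n + 1):
        step = 1
        while start + (length - 1) * step <= n:
            if all(coloring[start + k * step] == color for k in range(length)):
                return True
            step += 1
    return False

def certifies_lower_bound(coloring, t):
    """A 2-coloring of {1,...,n} avoiding 3-APs in color 1 and t-APs in
    color 2 certifies W(2;3,t) > n, which is what a satisfying assignment
    of a Green-t-n-SAT instance encodes."""
    return (not has_mono_ap(coloring, 1, 3)
            and not has_mono_ap(coloring, 2, t))

# Tiny example: W(2;3,3) = 9, so some 2-coloring of {1,...,8} must avoid
# 3-APs in both colors; this blocks-of-two coloring does.
coloring = dict(enumerate([1, 1, 2, 2, 1, 1, 2, 2], start=1))
print(certifies_lower_bound(coloring, t=3))  # True
```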
We therefore call the CNF formulas that determine these numbers Green instances. Ahmed et al. studied 20 van der Waerden numbers \(W(2;3,t)\) for two colors, with the first color having an arithmetic progression of length 3 and the second of length \(19\leq t\leq 39\), and conjectured that their values for \(t\leq 30\) were optimal, including \(W(2;3,29)=868\) and \(W(2;3,30)=903\) [2]. By using yal-lin, we were able to refute these two conjectures by solving the formulas Green-29-868-SAT and Green-30-903-SAT in the COMB set. Solving these instances yields two new bounds: \(W(2;3,29)\geq 869\) and \(W(2;3,30)\geq 904\). To solve these two instances, we ran our various yal-lin configurations (without restarts) using yalsat's parallel mode, along with a number of other local search algorithms from ubcsat, on the same cluster we used to solve wap-07a-40. Among these solvers, only our solver could solve the two instances. lw-itl-c.1-wrnd solved both Green-29-868-SAT and Green-30-903-SAT, in 942.60 and 6534.56 seconds, respectively. The settings lw-ith-c.1-wrnd and lw-ite-c.1-wrnd also solved Green-29-868-SAT in 1374.74 and 1260.16 seconds, respectively, but neither could solve Green-30-903-SAT within a timeout of 18,000 seconds. The CDCL solver LMpl, which solves wap-07a-40, could not solve any instances from the Green family within a timeout of 18,000 seconds. With restarts, lw-itl-c.1-wrnd, the best setting for this experiment, only solves Green-29-868-SAT (in 2782.81 seconds) within the 18,000-second timeout.

## 7 Discussion and Future Work

In this paper, we proposed three modifications to the DLS SAT-solving algorithm ddfw. We then implemented ddfw on top of the SLS solver yalsat to create the solver yal-lin, and we tested this solver on a pair of challenging benchmark sets. Our experimental results showed that our modifications led to substantial improvement over the baseline ddfw algorithm. The results show that future users of yal-lin should, by default, use the configuration lw-ith-c.1-wrnd. While each modification led to improved performance, the improvements due to each modification were not equal. The performance boost due to switching to the weighted-random variable selection method was the weakest, as it resulted in the fewest additional solves. However, our results indicate that making occasional non-optimal flips may help ddfw explore its search space better. The performance boost due to adjusting the cspt value was more substantial, supporting our initial findings in Section 5.3. One metric that could explain the importance of a higher cspt value is a clause's _degree of satisfaction_ (DS), which is the fraction of its literals that are satisfied by the current assignment. We noticed in experiments on the COMB benchmark with cspt = 0.01 that clauses neighboring a falsified clause had an average DS value of 0.33, while clauses without a neighboring falsified clause had an average DS value of 0.54. If this trend holds for general yal-lin runs, then it may be advantageous to take weight from the latter clauses more often, since flipping any literal in a falsified clause will not falsify any of the latter clauses. A higher cspt value accomplishes this. However, we did not investigate the relationship between DS and cspt further, and we leave this to future work. Performance also improved with the switch to a linear weight transfer method. The best method, lw-ith, supports the findings from the workshop paper that ddfw should transfer more weight from clauses still at the initial weight.
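For reference, the DS metric discussed above is straightforward to compute. The sketch below assumes DIMACS-style integer literals and is illustrative rather than code from yal-lin.

```python
def degree_of_satisfaction(clause, assignment):
    """Fraction of a clause's literals satisfied by the current assignment.
    Clauses use DIMACS-style nonzero integer literals; `assignment` maps
    each variable index to a Boolean."""
    satisfied = sum(1 for lit in clause if assignment[abs(lit)] == (lit > 0))
    return satisfied / len(clause)

# Clause (x1 or not-x2 or x3) with x1 = x2 = x3 = False: only not-x2 holds.
print(degree_of_satisfaction([1, -2, 3], {1: False, 2: False, 3: False}))
# -> 0.333...
```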
Future work can examine whether the heavy-clause distinction is valuable; a weight transfer rule that doesn't explicitly check if a clause is heavy would simplify the ddfw algorithm. When restarts are enabled, all ten settings in yal-lin perform better for COMB than when restarts are disabled. This better performance with restarts comes from solving several MM instances, none of which these settings solve without restarts. However, for SATComp, yal-lin performs better when restarts are disabled. Since SATComp comprises a substantially larger number of heterogeneous benchmarks than COMB, these results suggest that the new system performs better overall when restarts are disabled. Future work on weight transfer methods can take several other directions. Different transfer functions can be tested, such as those that are a function of the falsified clause's weight or those based on rational or exponential functions. Alternate definitions for neighboring clauses are also possible. For example, in formulas with large neighborhoods, it may be advantageous to consider clauses to be neighbors if they share \(k>1\) literals, rather than just 1. Throughout this paper, we kept the spt parameter set to 0.15. Yet, when clause weights are floating point numbers, it is rare for our solver to make sideways moves. This is evident in Figure 5, which compares the number of sideways moves per 10,000 flips between our baseline setting (fw-c.01-grdy) and our best setting (lw-ith-c.1-wrnd) on a randomly chosen SATComp instance, sted2_0x0_n219-342, up to 5 million flips. With fw-c.01-grdy, yal-lin makes some sideways moves, albeit rarely. However, with floating-point weight transfer in lw-ith-c.1-wrnd, the solver makes almost no sideways moves as the search progresses. We further investigated the effect of sideways moves on solver performance. We tested the setting lw-ith-c.1-wrnd against a version that did not perform sideways moves on the SATComp benchmark. The version with sideways moves solved 118 instances, while the version without them solved 113. This suggests that sideways moves may add a slight-but-beneficial amount of random noise to the algorithm. Future work can more fully investigate the effect of sideways moves on ddfw. One goal is to eliminate the spt parameter entirely in order to simplify the algorithm. Alternatively, the algorithm could be modified to occasionally flip variables that _increase_ the falsified weight to help ddfw explore the search space. Overall, we find that the ddfw algorithm continues to show promise and deserves more research interest. Our solver closed several hard instances that eluded other state-of-the-art solvers, and the space of potential algorithmic improvements remains rich.

Figure 5: Comparison of sideways move counts per 10,000 flips over the search progression for our baseline (fw-c.01-grdy) and best setting (lw-ith-c.1-wrnd) of yal-lin on the instance sted2_0x0_n219-342.

## Appendix A Experimental Results with a Different Seed

For all our previous experiments, we used the solvers' default seed (seed 0). We have repeated the experiments reported in Sections 6.1 and 6.2 for COMB and SATComp with a seed of 123. Here, we present the results with this changed seed value. Figures 6 and 7 compare the performance of various configurations of yal-lin against their baselines for this changed seed, without restarts and with restarts, respectively.
With the changed seed, when yal-lin does not perform restarts (Figure 6), all of our configurations perform better than the baseline fw-c.01-grdy, except fw-c.01-wrnd. Similar to the results with seed 0, with the seed of 123 the configurations implementing linearwt dominate the configurations with fixedwt. When restarts are enabled (Figure 7), with seed 123, the overall performance of our configurations is similar to what it is with seed 0 (Figure 4).
2308.07224
Multimessenger Potential of the Radio Neutrino Observatory in Greenland
The Radio Neutrino Observatory in Greenland (RNO-G) is the only ultrahigh energy (UHE, ${\gtrsim}30$~PeV) neutrino monitor of the Northern sky and will soon be the world's most sensitive high-uptime detector of UHE neutrinos. Because of this, RNO-G represents an important piece of the multimessenger landscape over the next decade. In this talk, we will highlight RNO-G's multimessenger capabilities and its potential to provide key information in the search for the most extreme astrophysical accelerators. In particular, we will highlight opportunities enabled by RNO-G's unique field-of-view, its potential to constrain the sources of UHE cosmic rays, and its complementarity with IceCube at lower energies.
Marco Stein Muzio
2023-08-14T15:49:26Z
http://arxiv.org/abs/2308.07224v1
# Multimessenger Potential of the Radio Neutrino Observatory in Greenland

###### Abstract: The Radio Neutrino Observatory in Greenland (RNO-G) is the only ultrahigh energy (UHE, \(\gtrsim\)30 PeV) neutrino monitor of the Northern sky and will soon be the world's most sensitive high-uptime detector of UHE neutrinos. Because of this, RNO-G represents an important piece of the multimessenger landscape over the next decade. In this talk, we will highlight RNO-G's multimessenger capabilities and its potential to provide key information in the search for the most extreme astrophysical accelerators. In particular, we will highlight opportunities enabled by RNO-G's unique field-of-view, its potential to constrain the sources of UHE cosmic rays, and its complementarity with IceCube at lower energies.

## 1 Introduction

The Radio Neutrino Observatory in Greenland (RNO-G, [1]) is an in-ice radio experiment located at Summit Station, Greenland. RNO-G is designed to detect ultrahigh energy (UHE, \(\gtrsim\)30 PeV) neutrinos. UHE neutrinos are predicted to be produced by UHE cosmic rays (UHECRs), whose origins are still unknown. In particular, photopion production interactions of UHECRs with the cosmic microwave background (CMB), often referred to as the Greisen-Zatsepin-Kuzmin (GZK) effect [2, 3], imprint a horizon of \(\sim\)100 Mpc on UHECRs above \(\sim\)10\({}^{19.7}\) eV. This makes it impossible to study UHECRs beyond the GZK horizon directly. However, UHE neutrinos (produced in the decay of such photopions) propagate through the universe unimpeded, suffering only redshift losses. This makes UHE neutrinos both a smoking gun of UHECR production and a window into the extreme astrophysical universe on cosmological scales. If a UHE neutrino has an interaction as it traverses the Earth, it will initiate a particle shower. Especially if this shower develops in a dense medium, like glacial ice, a charge asymmetry will develop at the shower front, leading to the emission of radio waves. Radio waves have an exceptionally long attenuation length in ice, on the order of \(\sim\)1 km, allowing for radio receivers embedded in ice to efficiently monitor a large volume. RNO-G takes advantage of this detection principle. Currently, RNO-G has 7 stations deployed and taking data, and upon completion in 2027 will consist of 35 independent stations. Stations of RNO-G are separated by \(\sim\)1.25 km with the entire array encompassing 40 km\({}^{2}\). The array and station layout are illustrated in Fig. 1. RNO-G stations employ a hybrid design, taking advantage of both surface and deep antennas. Deep antennas are distributed across three strings embedded into the ice: one "power" string and two "helper" strings.

Figure 1: Left: Layout for a single RNO-G station. Right: Map of RNO-G's 35 station array.

The power string consists of 9 antennas (7 sensitive to vertically polarized signals (Vpols) and 2 to horizontally polarized signals (Hpols)): 3 spread between 40 m and 80 m in depth and 6 closely spaced at \(\sim\)100 m depth. The two helper strings each consist of 3 closely spaced antennas (2 Vpols and 1 Hpol) at \(\sim\)100 m depth. The deep antennas provide improved sensitivity, with the power string providing a phased trigger and the helper strings providing additional reconstruction power. Each station also has 9 surface antennas with an independent trigger, which provide improved event reconstruction and background rejection.
This hybrid design will allow RNO-G to lead the next generation of UHE neutrino observatories, balancing precision pointing for multimessenger follow-up with an unprecedented diffuse flux sensitivity.

## 2 Role in multimessenger landscape over the coming decade

RNO-G's location in the Northern hemisphere makes it unique for both in-ice radio neutrino observatories and, in particular, for ultrahigh energy neutrino observatories. While other neutrino observatories exist in the Northern hemisphere, including ANTARES [4], Baikal-GVD [5], KM3Net [6], and P-ONE [7], none of these are particularly sensitive to neutrinos above 30 PeV. Similarly, despite being located at the South Pole, IceCube has a view of the Northern sky but only at lower energies where the Earth is transparent to neutrinos. The Northern sky is a particularly interesting region to explore, both because at ultrahigh energies it is the least explored region of the neutrino sky and because it contains a number of interesting possible neutrino sources (as shown in Fig. 2). First and foremost, the only known extragalactic sources of neutrinos, TXS-0506+056 and NGC 1068, are located in the Northern sky [8, 9, 10]. NGC 1068 is a particularly interesting source since it is the first known point source of neutrinos. Further, to what degree the hard spectrum of TXS-0506+056 continues to higher energies, making it an emitter of UHE neutrinos, is an open question. RNO-G will be the only observatory currently planned which will be able to address these questions.

Figure 2: Expected field-of-view of RNO-G along with a number of notable point sources. Red lines indicate RNO-G's instantaneous sky coverage while bands indicate the daily sky coverage in a narrow (dark) and wide (light) altitude range. An example high-quality event reconstruction is also shown.

While no other sources of neutrinos are currently known, the Northern sky contains a number of other extreme astrophysical sources which are good candidates to be neutrino sources. These include bright starburst galaxies, like M82, and nearby blazars, like Mrk421 and Mrk501. Additionally, there is some evidence for significant UHE cosmic ray (UHECR) production in the Northern sky. The Telescope Array Collaboration has reported two excesses (hotspots) of UHECRs from the Northern sky [11]. Additionally, models of the UHECR dipole observed by the Pierre Auger Observatory [12] also indicate that the Virgo cluster (of which M87 is a member) may be a significant source of UHECRs [13]. Thanks to its multimessenger and multiwavelength complementarity, RNO-G represents an important piece of the multimessenger landscape over the coming decade. As will be discussed further in Section 3, RNO-G will have the capability to follow up multimessenger alerts from other observatories around the world. Due to the large number of excellent observing locations in the Northern hemisphere, these include ground-based observatories like HAWC [14], CTA [15], and LHAASO [16] in gamma-rays and ZTF [17], which has detected a number of potential tidal disruption events in optical wavelengths. RNO-G additionally shares a field-of-view with the Telescope Array, which primarily observes UHECRs, and IceCube at lower energies, as previously mentioned. In particular, RNO-G's completion over the next few years places it at a unique time for multimessenger science, as highlighted in Fig. 3.
Firstly, the LIGO-Virgo-KAGRA (LVK) O4 run is currently underway and will continue until 2025, followed shortly thereafter by their O5 run from 2027 to 2030 [18]. At the same time, all-sky gamma-ray telescopes, such as _Fermi_ and Swift, are still operating but may be decommissioned in the early 2030s, leaving a gap in GeV gamma-ray sky monitoring [19]. During this golden era for multimessenger science, RNO-G will be the only facility able to provide UHE neutrino follow-up to alerts.

Figure 3: Expected timeline of multimessenger facilities over the next 15 years.

## 3 Transient follow-up

In order to fully participate in the global multimessenger network, RNO-G will respond to real-time multimessenger alerts and, eventually, will issue its own alerts. RNO-G will monitor multimessenger alert networks, such as AMON [20], and respond to high-signalness alerts in one of two modes. For most alerts, RNO-G will respond in _normal mode_. In normal mode, each RNO-G station triggers at a rate of \(\sim\)1 Hz, limited by the wireless LTE throughput. In this mode, "listeners" subscribed to multimessenger alert streams will monitor for alerts with a high probability of being a UHE neutrino source, based on the alert's source type, distance, and other alert-specific parameters (e.g. the probability of an event being a binary neutron star merger \(p_{\mathrm{BNS}}\) in LVK alerts). Once a high-neutrino-probability alert is identified, an automated analysis is performed to search for neutrino events within an alert-specific time window \(\Delta t\) around the alert time \(t_{0}\). For events within this time window with a high neutrino probability, an initial reconstruction of the event direction and neutrino energy will be made. The timing, spatial, and energy information will be combined with the spatial and temporal information from the original alert to determine the signalness of the candidate neutrino and the probability of chance coincidence (i.e. false alarm rate). For candidates with sufficiently high signalness and low false alarm rate, a follow-up to the alert will be issued. For exceptional alerts, RNO-G will respond in _burst mode_. Burst mode allows RNO-G to respond to high-quality alerts in its current field-of-view with increased sensitivity. This is done by optimizing the trigger threshold in each of its beams for the alert to achieve the maximum possible trigger rate (up to \(\sim\)100 Hz) for signals correlated with the alert direction (see Fig. 4, right). This temporarily boosts the instantaneous aperture (i.e. transient source sensitivity) up to twofold, with the largest increase at low energies (see Fig. 4, left). This is ideal since the neutrino flux is expected to be largest in the low energy end of RNO-G's sensitive range for most models of a transient neutrino flux [1]. After the burst period has concluded, the same automated analysis described above will commence.

Figure 4: Left: Increase in instantaneous aperture in burst mode relative to baseline. Right: The mapping between radio reception direction and neutrino source direction. RNO-G's default beams are shown along with an example alert direction.

## 4 Science with diffuse neutrinos

Today, with only 7 stations currently operating, RNO-G is already the largest in-ice neutrino detector in the world by effective volume. Over the next decade it will become the world's most sensitive detector of UHE neutrinos. This places RNO-G in a prime position to make a strong impact on the study of both astrophysical neutrinos and UHECRs.
RNO-G will either discover UHE neutrinos or place strong constraints on models of their sources. Figure 5 summarizes the evolution of RNO-G sensitivity to a diffuse neutrino flux over the next 15 years. Under the most optimistic models for the UHE neutrino flux, RNO-G could realistically discover a neutrino by 2025 [21, 25, 26, 27]. If such a detection is made, RNO-G will immediately have discovered that there are trans-GZK CR sources at high redshifts (beyond \(z\)\(\sim\)1) and that there is a significant flux of protons arriving at Earth above 30 EeV [21]. Conversely, the lack of a neutrino detection by 2025 will place strong constraints on UHECR sources beyond the GZK horizon. By 2028, RNO-G's full detector array will have been deployed and its accumulated exposure will have significantly increased. If a neutrino is detected in its livetime through 2028, it would be a strong indication that high redshift UHECR sources are more luminous than those at low redshifts (i.e. that the UHECR source evolution is strongly positive). In particular, this would be strong evidence against negatively evolving sources, like BL Lacs, as being the sources of UHECRs [23]. However, if no neutrinos are detected by 2028, then RNO-G will place strong constraints on UHECR source models which assume a strong positive evolution or produce a significant proton fraction above 30 EeV [21]. RNO-G's exposure by 2032 will be sufficient to probe the connection between the IceCube astrophysical neutrino flux and UHEs. Detection of a neutrino at this level of exposure would indicate a hardening of the astrophysical neutrino spectrum at UHEs. This would be a strong indication that sources of astrophysical neutrinos above and below 30 PeV are largely divorced. Such a situation would imply that UHE neutrino detectors are required to fully understand the sources of neutrinos, since lower energy detectors will not have access to such a UHE source population. Conversely, a lack of neutrino detections would allow for the possibility that the sources of neutrinos seen by IceCube are also the most luminous sources of neutrinos at UHEs.

Figure 5: Left: Evolution of RNO-G's diffuse flux sensitivity over the next 15 years, along with several predictions for the neutrino flux from UHECR sources. Right: The expected number of observed neutrinos in the RNO-G livetime under different diffuse neutrino flux models [21, 22, 23, 24].

Finally, by 2038 RNO-G will have enough data to start probing specific astrophysical UHECR source models. Neutrino detections, or lack thereof, with this level of exposure will give the first concrete evidence for or against AGN [24], BL Lacs [23], and newborn pulsars [22] as the sources of UHECRs. Detection of neutrinos below \(\sim\)1 EeV would provide strong evidence that UHECRs have a significant number of interactions inside their source environment, a key uncertainty in UHECR source modeling. On the other hand, a lack of neutrinos would place strong constraints on the UHECR source luminosity-density, a recovery in the UHECR spectrum beyond \(10^{20.3}\) eV, and the cutoff energy of the astrophysical neutrino spectrum. In addition to these specific constraints, RNO-G will have achieved world-record sensitivity to the diffuse neutrino flux from 10 PeV to 100 EeV.

## 5 Probes of particle physics

Observation of a UHE neutrino by RNO-G will represent the most energetic neutrino ever observed, far beyond the energies accessible by terrestrial facilities.
As can be seen in Fig. 5, in the most optimistic scenarios RNO-G could observe hundreds of neutrinos by 2038. This would enable RNO-G to become a leading probe of particle physics at the highest energies and beyond the Standard Model (BSM). Observations of neutrinos will enable RNO-G to measure the neutrino-nucleon cross section at \(\sqrt{s}\gtrsim 10\) TeV. This will allow RNO-G to probe BSM scenarios including extra dimensions, leptoquarks, and microscopic black hole production [28, 29, 30]. Neutrino observations with RNO-G will also probe secret neutrino interactions, neutrino-dark matter (DM) interactions, and Lorentz invariance violation [31, 32, 33, 34, 35, 36]. Even in the most pessimistic scenario of no neutrino observations by 2038, RNO-G will set strong limits on annihilating DM models [37].

## 6 Conclusion

The Radio Neutrino Observatory in Greenland (RNO-G) is poised to become the most sensitive observatory of ultrahigh energy (UHE) neutrinos in the world. With 7 of its 35 total stations currently deployed, it is already the largest in-ice neutrino detector ever constructed by effective volume. By combining the improved event reconstruction provided by surface antennas with the increased diffuse flux sensitivity provided by deep antennas, RNO-G will become a critical component of the multimessenger landscape over the coming decade. RNO-G is the only UHE neutrino observatory in the world with a view of the Northern sky, home to a number of promising sources including NGC 1068. With its precise pointing and capability to temporarily boost its instantaneous aperture in the direction of transient events, RNO-G will provide critical information to the global multimessenger alert network. It will fulfill this role during a golden era of multimessenger science, when the LIGO-Virgo-KAGRA gravitational wave network carries out its O4 and O5 runs and all-sky telescopes (such as _Fermi_ and Swift) continue to monitor the gamma-ray sky. RNO-G will provide critical probes of the astrophysical sources of UHECRs and their properties beyond the GZK horizon. The large effective area of RNO-G will allow for an unprecedented exposure to neutrinos in the 10 PeV to 100 EeV energy range. Detection or non-detection of neutrinos in this range will provide critical information on the evolution of UHECR sources, the proton fraction above 30 EeV, and the significance of in-source UHECR interactions. Moreover, RNO-G will probe models of BL Lacs, AGN, and newborn pulsars as UHECR sources. Finally, the connection between the astrophysical neutrino flux and UHE neutrinos will be illuminated. If Nature realizes an optimistic UHE neutrino flux, RNO-G will observe hundreds of neutrinos over the next 15 years. This opens the possibility for RNO-G to push the boundaries of particle physics, including the measurement of the neutrino-nucleon cross section at unprecedented energies, and physics beyond the Standard Model. Even in the most pessimistic scenario, RNO-G will probe unexplored parts of the annihilating dark matter parameter space.
2303.05027
Full-horseshoes for the Galerkin truncations of 2D Navier-Stokes equation with degenerate stochastic forcing
In this paper, we mainly study the turbulence of Galerkin truncations of the 2D Navier-Stokes equation under degenerate, large-scale stochastic forcing. We use a kind of chaotic structure named full-horseshoes to describe it. It is proved that if the stochastic forcing satisfies a kind of hypoelliptic condition, then the system has full-horseshoes.
Wen Huang, Jianhua Zhang
2023-03-09T04:23:49Z
http://arxiv.org/abs/2303.05027v3
Full-horseshoes for Galerkin truncations of the 2D Navier-Stokes equation with degenerate stochastic forcing ###### Abstract. In this paper, we mainly study the turbulence of Galerkin truncations of the 2D Navier-Stokes equation under degenerate, large-scale stochastic forcing. We use a kind of chaotic structure named full-horseshoes to describe it. It is proved that if the stochastic forcing satisfies a kind of hypoelliptic condition, then the system has full-horseshoes. Key words and phrases: Stochastic flow; stationary measure; entropy; full-horseshoes; Lyapunov exponents 2020 Mathematics Subject Classification: 37H05, 37A50, 60H10

## 1. Introduction

Turbulent dynamical systems are ubiquitous complex systems in hydrodynamics, such as [26, 29, 30, 32], and are characterized by a high-dimensional phase space and a large number of unstable directions in phase space. One of the main goals in the development of the theory of chaotic dynamical systems has been to understand turbulence. In this paper, we focus on Galerkin truncations of the 2D stochastic Navier-Stokes equation on the torus (abbreviated as GSNS). This model was initiated by E and Mattingly in [11], and they proved the unique ergodicity of the stationary measure of GSNS under a kind of degenerate stochastic forcing. Later, Hairer and Mattingly showed the unique ergodicity of the stationary measure of the full 2D stochastic Navier-Stokes equation under more general degenerate stochastic forcing in [17]. However, there are few mathematically rigorous results describing the chaotic phenomena of GSNS or the 2D stochastic Navier-Stokes equations, or even of other turbulent dynamical systems. Recently, Bedrossian, Blumenthal and Punshon-Smith made a breakthrough and were the first to prove that a class of turbulent dynamical systems satisfying Hörmander's parabolic bracket spanning assumption has positive Lyapunov exponents with respect to its unique stationary measure in [4]. In particular, GSNS has positive Lyapunov exponents with respect to its unique stationary measure [6]. It is well-known that a positive Lyapunov exponent implies sensitive dependence on initial conditions. It is then natural to ask whether there is a kind of chaotic structure in GSNS, such as Smale horseshoes [31] or others. The purpose of this paper is to characterize the turbulence of GSNS with full-horseshoes. The full-horseshoe was first introduced by the first author and Lu in [19] to investigate the complex behaviors of infinite-dimensional random dynamical systems. It imitates the process of a coin toss and is a weaker chaotic structure than the Smale horseshoe.
To the authors' knowledge, this is the first result about a chaotic structure for turbulent dynamical systems of this kind.

In coordinates, the Galerkin truncation (1.1\({}_{N}\)) takes the form of a stochastic differential equation on \(\mathbb{R}^{4N(N+1)}\) with quadratic drift \(B(\boldsymbol{u},\boldsymbol{u})-\epsilon A\boldsymbol{u}\) and additive noise on the forced modes (see (3.3) below), where \(\boldsymbol{u}=(u_{(\boldsymbol{k},1)},u_{(\boldsymbol{k},2)})_{\boldsymbol{k}\in \mathbb{Z}^{2}_{+,N}}\in\mathbb{R}^{4N(N+1)}\), and \[B_{(\boldsymbol{k},1)}(\boldsymbol{u},\boldsymbol{u}) :=\frac{1}{2}\sum_{\begin{subarray}{c}\boldsymbol{i}+\boldsymbol{ j}=\boldsymbol{k}\\ \boldsymbol{i},\boldsymbol{j}\in\mathbb{Z}^{2}_{+,N}\end{subarray}}c_{ \boldsymbol{i},\boldsymbol{j}}\big{(}u_{(\boldsymbol{i},1)}u_{(\boldsymbol{ j},1)}-u_{(\boldsymbol{i},2)}u_{(\boldsymbol{j},2)}\big{)}-\frac{1}{2}\sum_{ \begin{subarray}{c}\boldsymbol{i}-\boldsymbol{j}=\boldsymbol{k}\\ \boldsymbol{i},\boldsymbol{j}\in\mathbb{Z}^{2}_{+,N}\end{subarray}}c_{ \boldsymbol{i},\boldsymbol{j}}\big{(}u_{(\boldsymbol{i},1)}u_{(\boldsymbol{ j},1)}+u_{(\boldsymbol{i},2)}u_{(\boldsymbol{j},2)}\big{)},\] \[B_{(\boldsymbol{k},2)}(\boldsymbol{u},\boldsymbol{u}) :=\frac{1}{2}\sum_{\begin{subarray}{c}\boldsymbol{i}+\boldsymbol{ j}=\boldsymbol{k}\\ \boldsymbol{i},\boldsymbol{j}\in\mathbb{Z}^{2}_{+,N}\end{subarray}}c_{ \boldsymbol{i},\boldsymbol{j}}\big{(}u_{(\boldsymbol{i},2)}u_{(\boldsymbol{ j},1)}+u_{(\boldsymbol{i},1)}u_{(\boldsymbol{j},2)}\big{)}-\frac{1}{2}\sum_{ \begin{subarray}{c}\boldsymbol{i}-\boldsymbol{j}=\boldsymbol{k}\\ \boldsymbol{i},\boldsymbol{j}\in\mathbb{Z}^{2}_{+,N}\end{subarray}}c_{ \boldsymbol{i},\boldsymbol{j}}\big{(}u_{(\boldsymbol{i},2)}u_{(\boldsymbol{ j},1)}-u_{(\boldsymbol{i},1)}u_{(\boldsymbol{j},2)}\big{)}.\] It is clear that (1.1\({}_{N}\)) defines a stochastic flow of \(C^{\infty}\) diffeomorphisms (for example, see [4, 5]) as \[\Phi:[0,+\infty)\times\Omega\times\mathbb{R}^{d}\to\mathbb{R}^{d},\quad(t, \omega,x)\mapsto\Phi^{t}_{\omega}(x), \tag{1.5}\] where \(d:=4N(N+1)\), with the following properties:

1. for \(\mathbb{P}\)-a.s. \(\omega\in\Omega\), the mapping \(t\mapsto\Phi^{t}_{\omega}\) is continuous from \([0,+\infty)\) to \(\mathrm{Diff}^{\infty}(\mathbb{R}^{d})\) endowed with the compact-open topology;
2. for \(\mathbb{P}\)-a.s. \(\omega\in\Omega\), one has that \(\Phi^{t}_{\theta^{s}\omega}\circ\Phi^{s}_{\omega}=\Phi^{t+s}_{\omega}\) for any \(t,s\geqslant 0\), and \(\Phi^{0}_{\omega}=\mathrm{Id}_{\mathbb{R}^{d}}\), where \(\theta^{t}:\Omega\to\Omega\) is the Wiener shift defined as \(\theta^{t}(\omega)=\omega(\cdot+t)-\omega(\cdot)\);
3. for any \(0\leqslant t_{0}<t_{1}<\cdots<t_{n}\), the mappings \(\omega\mapsto\Phi^{t_{n}-t_{n-1}}_{\theta^{t_{n-1}}\omega},\ldots,\omega \mapsto\Phi^{t_{1}-t_{0}}_{\theta^{t_{0}}\omega}\) are independent random variables from \(\Omega\to\mathrm{Diff}^{\infty}(\mathbb{R}^{d})\).

To ensure that the stochastic flow of (1.1\({}_{N}\)) has a unique stationary measure for all \(\epsilon\in(0,1)\), we review a hypoelliptic condition (see [4] for details) for the driven model \[\mathcal{K}_{N}:=\mathcal{K}\cap\mathbb{Z}^{2}_{+,N},\] where \(N\) is a positive integer and \(\mathbb{Z}^{2}_{+,N}\) is defined as (1.4).
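The hypoellipticity condition formalized in Definition 1.1 below is a saturation condition that can be checked mechanically. The following Python sketch previews that check; since the coefficients \(c_{\boldsymbol{i},\boldsymbol{j}}\) are defined earlier in the paper, the predicate deciding \(c_{\boldsymbol{i},\boldsymbol{j}}\neq 0\) is supplied by the caller, and the non-parallelism test used in the example is only an illustrative stand-in.

```python
from itertools import product

def is_hypoelliptic(K, N, c_nonzero):
    """Brute-force check of the saturation condition in Definition 1.1.

    K is the set of forced modes; c_nonzero(i, j) decides whether
    c_{i,j} != 0 (a caller-supplied assumption here).  Returns True
    iff the sets Z^n eventually cover all of Z^2_{0,N}.
    """
    target = {(p, q) for p in range(-N, N + 1) for q in range(-N, N + 1)
              if (p, q) != (0, 0)}          # Z^2_{0,N}: 0 < |k|_inf <= N
    Z0 = {tuple(k) for k in K} | {(-p, -q) for (p, q) in K}
    Z = set(Z0)
    while True:
        new = {(i[0] + j[0], i[1] + j[1])
               for i, j in product(Z, Z0) if c_nonzero(i, j)} & target
        if new <= Z:
            return Z == target
        Z |= new

# Illustrative stand-in: treat c_{i,j} as nonzero when i and j are not
# parallel (assumed here for demonstration, not taken from the paper).
not_parallel = lambda i, j: i[0] * j[1] - i[1] * j[0] != 0
print(is_hypoelliptic({(1, 0), (1, 1)}, N=2, c_nonzero=not_parallel))
```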
**Definition 1.1**.: \(\mathcal{K}_{N}\) is hypoelliptic if \(\mathbb{Z}^{2}_{0,N}=\bigcup_{n=0}^{+\infty}\mathcal{Z}^{n}\), where \[\mathbb{Z}^{2}_{0,N}:=\{\boldsymbol{k}\in\mathbb{Z}^{2}:0<| \boldsymbol{k}|_{\infty}\leqslant N\},\] \[\mathcal{Z}^{0}=\mathcal{K}_{N}\cup(-\mathcal{K}_{N}),\] \[\mathcal{Z}^{n}=\{\boldsymbol{k}\in\mathbb{Z}^{2}_{0,N}:\exists \boldsymbol{i}\in\mathcal{Z}^{n-1}\text{ and }\boldsymbol{j}\in\mathcal{Z}^{0}\text{ such that }c_{\boldsymbol{i},\boldsymbol{j}}\neq 0 \text{ and }\boldsymbol{k}=\boldsymbol{i}+\boldsymbol{j}\}.\] Now, we state the main result of this paper as follows.

**Theorem 1.2**.: _For a sufficiently large positive integer \(N\), if the driven model \(\mathcal{K}_{N}\) is hypoelliptic, then there exists \(\epsilon_{0}>0\) such that for all \(\epsilon\in(0,\epsilon_{0})\), the stochastic flow \(\Phi\) of (1.1\({}_{N}\)) has full-horseshoes. Namely, there exists a pair of disjoint non-empty compact subsets \(\{U_{1},U_{2}\}\) of \(\mathbb{R}^{d}\) such that for \(\mathbb{P}\)-a.s. \(\omega\in\Omega\), there is a subset \(J(\omega)\) of \(\mathbb{Z}_{+}:=\mathbb{N}\cup\{0\}\) with the following properties:_

1. \(\lim_{m\to+\infty}\frac{|J(\omega)\cap\{0,1,\ldots,m-1\}|}{m}>0\)_;_
2. _for any_ \(s\in\{1,2\}^{J(\omega)}\)_, there exists an_ \(x_{s}\in\mathbb{R}^{d}\) _such that_ \(\Phi^{j}_{\omega}(x_{s})\in U_{s(j)}\) _for any_ \(j\in J(\omega)\)_._

**Remark 1.3**.: In [11], it was proved that \(\mathcal{K}_{N}=\{(0,1),(1,1)\}\) and \(\mathcal{K}_{N}=\{(1,0),(1,1)\}\) are both hypoelliptic for all \(N\in\mathbb{N}\).

**Remark 1.4**.: In fact, we prove that for \(\mathbb{P}\)-a.s. \(\omega\in\Omega\), the density of \(J(\omega)\) has a uniformly positive lower bound.

**Remark 1.5**.: Full-horseshoes exist for GSNS in any discrete-time form. Namely, for any \(\tau\in(0,+\infty)\), there exists a pair of disjoint non-empty compact subsets \(\{U_{1},U_{2}\}\) of \(\mathbb{R}^{d}\) such that for \(\mathbb{P}\)-a.s. \(\omega\in\Omega\), there is a subset \(J(\omega)\) of \(\mathbb{Z}_{+}\) with the following properties: (a) \(\lim_{m\to+\infty}\frac{|J(\omega)\cap\{0,1,\ldots,m-1\}|}{m}>0\); (b) for any \(s\in\{1,2\}^{J(\omega)}\), there exists an \(x_{s}\in\mathbb{R}^{d}\) such that \(\Phi_{\omega}^{j\tau}(x_{s})\in U_{s(j)}\) for any \(j\in J(\omega)\).

Unfortunately, we cannot accurately describe the location of the full-horseshoes at present. We conjecture that GSNS has full-horseshoes on any two disjoint closed subsets which have non-empty interior. Although the ideas of our proof come from the existing literature, we overcome the difficulties caused by the different settings. The organization of this paper is as follows: In Section 2, we review basic knowledge of ergodic theory and entropy theory. In Section 3, we show that the stochastic flow of GSNS has positive entropy with respect to its unique stationary measure, namely Proposition 3.10. In Section 4, we prove the existence of measurable weak-horseshoes for GSNS, namely Theorem 4.4. In Section 5, we extend the measurable weak-horseshoe to the full-horseshoe. Then Theorem 1.2 is proved.

**Acknowledgments.** The second author would like to thank Alex Blumenthal and Sam Punshon-Smith for useful discussions. The authors were supported by NSFC of China (12090012, 12031019, 11731003).

## 2. Ergodic theory and entropy theory

In this section, we review some basic concepts and classical results about measure-preserving dynamical systems, disintegration of measure, relative entropy, and the relative Pinsker \(\sigma\)-algebra.
The reader can see [12, 13, 14, 33] for details.

### Measure-preserving dynamical systems and disintegration of measure

In this paper, we always work on a _Polish probability space_ \((X,\mathscr{X},\mu)\), which means that \(X\) is a Polish space, \(\mathscr{X}\) is the Borel \(\sigma\)-algebra of \(X\), and \(\mu\) is a probability measure on \((X,\mathscr{X})\). A _measure-preserving dynamical system_ \((X,\mathscr{X},\mu,T)\) consists of a measure-preserving map \(T\) on the probability space \((X,\mathscr{X},\mu)\). Given two measure-preserving dynamical systems \((X,\mathscr{X},\mu,T)\) and \((Y,\mathscr{Y},\nu,S)\), we say that \((Y,\mathscr{Y},\nu,S)\) is _a factor_ of \((X,\mathscr{X},\mu,T)\) if there exists a measure-preserving map \(\pi:(X,\mathscr{X},\mu)\to(Y,\mathscr{Y},\nu)\) such that \(\pi\circ T=S\circ\pi\), and \(\pi\) is called a _factor map_.

**Definition 2.1**.: Let \((X,\mathscr{X},\mu,T)\) be a measure-preserving dynamical system. It is called an _ergodic_ measure-preserving dynamical system if \(\mu(A)=1\) or \(\mu(X\setminus A)=1\) whenever \(A\) is a \(T\)-invariant measurable subset of \(X\); it is called an _invertible_ measure-preserving dynamical system if \(T^{-1}:X\to X\) exists and is measurable.

Let \(\pi:(X,\mathscr{X},\mu)\to(Y,\mathscr{Y},\nu)\) be a measure-preserving map between two Polish probability spaces. Then, there is a family of conditional probability measures \(\{\mu_{y}\}_{y\in Y}\) on \((X,\mathscr{X})\) which are characterized by

* \(\mu_{y}(\pi^{-1}(y))=1\) for \(\nu\)-a.s. \(y\in Y\);
* for each \(f\in L^{1}(X,\mathscr{X},\mu)\), one has that \(f\in L^{1}(X,\mathscr{X},\mu_{y})\) for \(\nu\)-a.s. \(y\in Y\), the map \(y\mapsto\int_{X}f\,\mathrm{d}\mu_{y}\) belongs to \(L^{1}(Y,\mathscr{Y},\nu)\) and \(\mu=\int_{Y}\mu_{y}\mathrm{d}\nu(y)\) in the sense that \[\int_{Y}\left(\int_{X}f\,d\mu_{y}\right)\,\mathrm{d}\nu(y)=\int_{X}f\,\mathrm{ d}\mu.\]

Then \(\mu=\int_{Y}\mu_{y}\mathrm{d}\nu(y)\) is called the _disintegration_ of \(\mu\) relative to \(Y\). Furthermore, if \(\pi:(X,\mathscr{X},\mu,T)\to(Y,\mathscr{Y},\nu,S)\) is a factor map between two invertible measure-preserving dynamical systems on Polish probability spaces, then \(T_{*}\mu_{y}=\mu_{Sy}\) for \(\nu\)-a.s. \(y\in Y\), where \(T_{*}\mu_{y}\) is defined by \[T_{*}\mu_{y}(A):=\mu_{y}(T^{-1}A),\] for any \(A\in\mathscr{X}\).

**Lemma 2.2** ([12, Proposition 6.13]).: Let \(\pi:(X,\mathscr{X},\mu,T)\to(Y,\mathscr{Y},\nu,S)\) be a factor map between two measure-preserving systems on Polish probability spaces. Then \[(X\times X,\mathscr{X}\otimes\mathscr{X},\mu\times_{Y}\mu,T\times T),\] where the measure \(\mu\times_{Y}\mu:=\int_{Y}(\mu_{y}\times\mu_{y})\,d\nu(y)\), is a measure-preserving dynamical system. It is called the _relatively independent joining_ of \((X,\mathscr{X},\mu,T)\) with itself relative to \(Y\).

### Relative entropy and relative Pinsker \(\sigma\)-algebra

In this subsection, we always assume that \(\pi:(X,\mathscr{X},\mu,T)\to(Z,\mathscr{Z},\eta,R)\) is a factor map between two invertible measure-preserving dynamical systems on Polish probability spaces, and we review the definitions of its relative entropy and relative Pinsker factor. For any two finite Borel measurable partitions \(\alpha\) and \(\beta\) of \(X\), denote \(\alpha\vee\beta\) as the family of intersections of a set from \(\alpha\) with a set from \(\beta\), which is a finite Borel measurable partition of \(X\). The definition for finitely many partitions is similar.
Put \[H_{\mu}(\alpha)=\sum_{A\in\alpha}-\mu(A)\log\mu(A),\quad H_{\mu}(\alpha|\beta) =H_{\mu}(\alpha\vee\beta)-H_{\mu}(\beta),\] \[h_{\mu_{z}}(T,\alpha)=\lim_{n\to+\infty}H_{\mu_{z}}(\alpha|\bigvee_{i=1}^{n}T^ {-i}\alpha),\quad h_{\mu}(T,\alpha|Z)=\int_{Z}h_{\mu_{z}}(T,\alpha)d\eta(z),\] where \(\mu=\int_{Z}\mu_{z}d\eta(z)\) is the disintegration of \(\mu\) relative to \(Z\). Then the entropy of \((X,\mathscr{X},\mu,T)\) relative to \(Z\) is defined as \[h_{\mu}(T|Z)=\sup_{\alpha}h_{\mu}(T,\alpha|Z), \tag{2.1}\] where \(\alpha\) is taken over all finite Borel measurable partitions of \(X\). The _relative Pinsker \(\sigma\)-algebra_ \(\mathcal{P}_{\mu}(\pi)\) of the factor map \(\pi:(X,\mathscr{X},\mu,T)\to(Z,\mathscr{Z},\eta,R)\) is defined as the smallest \(\sigma\)-algebra containing \[\{A\in\mathscr{X}:h_{\mu}(T,\{A,A^{c}\}|Z)=0\}.\] Note that \(\mathcal{P}_{\mu}(\pi)\) is a \(T\)-invariant sub-\(\sigma\)-algebra of \(\mathscr{X}\) (for example, see [33, Section 4.10] or [15]). Hence, it determines a measure-preserving dynamical system \((Y,\mathscr{Y},\nu,S)\) on a Polish probability space and two factor maps \[\pi_{1}:(X,\mathscr{X},\mu,T)\to(Y,\mathscr{Y},\nu,S),\quad\pi_{2}:(Y,\mathscr{ Y},\nu,S)\to(Z,\mathscr{Z},\eta,R),\] such that \(\pi_{2}\circ\pi_{1}=\pi\) and \(\pi_{1}^{-1}(\mathscr{Y})=\mathcal{P}_{\mu}(\pi)\pmod{\mu}\). The factor map \(\pi_{1}:(X,\mathscr{X},\mu,T)\to(Y,\mathscr{Y},\nu,S)\) is called the _relative Pinsker factor map_ of \(\pi\). We end this section with a result about conditional measure-theoretic entropy and the relative Pinsker factor. The reader can refer to the proofs of [19, Lemma 4.1] and [20, Lemma 3.3].

**Lemma 2.3**.: Denote \(\pi_{1}:(X,\mathscr{X},\mu,T)\to(Y,\mathscr{Y},\nu,S)\) as the relative Pinsker factor map of \(\pi:(X,\mathscr{X},\mu,T)\to(Z,\mathscr{Z},\eta,R)\). Then, for any \(l\in\mathbb{N}\) and any finite Borel measurable partition \(\alpha\) of \(X\), one has that

1. \(h_{\mu}(T^{l},\alpha|Z)=h_{\mu}(T^{l},\alpha|Y)=\int_{Y}h_{\mu_{y}}(T^{l}, \alpha)d\nu(y)\);
2. \(\lim_{m\to+\infty}h_{\mu}(T^{m},\alpha|Z)=H_{\mu}(\alpha|Y)\), where \(H_{\mu}(\alpha|Y):=\int_{Y}H_{\mu_{y}}(\alpha)d\nu(y)\).

## 3. Positive entropy of GSNS

In this section, we show that the stochastic flow of \((1.1_{N})\) has positive entropy with respect to its unique stationary measure by borrowing the result in [6] and Pesin's entropy formula on non-compact Riemannian manifolds in [7].

### Invariant measure and entropy for RDS

In this subsection, we mainly review some basic definitions in discrete random dynamical systems (abbreviated as RDS). A canonical model is generated by the time-\(1\) map of solutions of classical SDEs (see [25, Chapter V] for details).

**Definition 3.1**.: Let \((\Omega,\mathscr{F},\mathbb{P},\theta)\) be a measure-preserving dynamical system on a Polish probability space. An RDS \(F\) on a Polish space \(M\) over \((\Omega,\mathscr{F},\mathbb{P},\theta)\) means that \[F:\mathbb{Z}_{+}\times\Omega\times M\to M,\quad(n,\omega,x)\mapsto F_{\omega}^ {n}x\] is Borel measurable satisfying that for \(\mathbb{P}\)-a.s. \(\omega\in\Omega\), \(F_{\omega}^{0}=\mathrm{Id}_{M}\) and \(F_{\omega}^{n+m}=F_{\theta^{m}\omega}^{n}\circ F_{\omega}^{m}\) whenever \(n,m\in\mathbb{Z}_{+}\). Furthermore, if for any \(n\in\mathbb{Z}_{+}\) and \(\mathbb{P}\)-a.s. \(\omega\in\Omega\), \(F_{\omega}^{n}\) is continuous, then \(F\) is called a _continuous RDS_.
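As a toy illustration of the cocycle property in Definition 3.1, the following self-contained sketch builds a one-dimensional RDS from an i.i.d. noise sequence and checks the composition rule numerically; it is purely illustrative and unrelated to the GSNS flow itself.

```python
import numpy as np

def make_rds(noise):
    """A toy RDS on R driven by an i.i.d. noise sequence: each step applies
    x -> x/2 + xi_k, and the shift theta moves along the noise sequence."""
    def F(n, shift, x):                 # plays the role of F^n_{theta^shift omega}(x)
        for k in range(n):
            x = 0.5 * x + noise[shift + k]
        return x
    return F

rng = np.random.default_rng(0)
F = make_rds(rng.normal(size=100))
n, m, x = 3, 4, 1.0
# Cocycle property of Definition 3.1: F^{n+m}_omega = F^n_{theta^m omega} o F^m_omega
assert np.isclose(F(n + m, 0, x), F(n, m, F(m, 0, x)))
print("cocycle property verified")
```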
The RDS \(F\) is always viewed as a skew product map given by \[T:\Omega\times M\to\Omega\times M\quad\text{with}\quad T(\omega,x)=(\theta \omega,F_{\omega}^{1}x).\] A Borel probability measure \(\mu\) on \(\Omega\times M\) is called an _invariant measure_ of the RDS \(F\) if a) \(\mu\) is \(T\)-invariant; b) \((\pi_{\Omega})_{*}\mu=\mathbb{P}\), where \(\pi_{\Omega}\) is the projection from \(\Omega\times M\) to \(\Omega\). Additionally, if \(\mu\) is \(T\)-ergodic, then \(\mu\) is called an _invariant ergodic measure_ of \(F\).

**Definition 3.2**.: For any invariant measure \(\mu\) of the RDS \(F\), the entropy of \((F,\mu)\) is defined as \[h_{\mu}(F):=\sup_{\alpha}\lim_{n\to+\infty}\frac{1}{n}\int_{\Omega}H_{\mu_{ \omega}}\left(\bigvee_{i=0}^{n-1}(F_{\omega}^{i})^{-1}\alpha\right)\mathrm{d }\mathbb{P}(\omega),\] where \(\alpha\) is taken over the set of all finite Borel measurable partitions of \(M\), and \(\mu=\int_{\Omega}\delta_{\omega}\times\mu_{\omega}d\mathbb{P}(\omega)\) is the disintegration of \(\mu\) relative to \(\Omega\).

**Remark 3.3**.: The definition of the entropy of \((F,\mu)\) coincides with the definition of relative entropy, i.e. \(h_{\mu}(F)=h_{\mu}(T|\Omega)\) (for example, see [8, Theorem 2.3.4]).

Let \(\Phi\) be the stochastic flow on \(\mathbb{R}^{d}\) over \((\Omega,\mathscr{F},\mathbb{P})\) defined as (1.5). Clearly, for any \(\tau\in(0,+\infty)\), \((\Omega,\mathscr{F},\mathbb{P},\theta^{\tau})\) is an invertible ergodic measure-preserving dynamical system (for example, see [2, Appendix A.3]), where \(\theta^{\tau}\) is the Wiener shift. By the definition of a stochastic flow, the time-\(\tau\) map of \(\Phi\) defines a discrete RDS \(\Phi^{(\tau)}\) on \(\mathbb{R}^{d}\) over \((\Omega,\mathscr{F},\mathbb{P},\theta^{\tau})\). The stochastic flow \(\Phi\) naturally induces a family of Markov processes on \(\mathbb{R}^{d}\) whose transition probabilities \(\mathcal{P}_{t}(x,\cdot)\), where \(x\in\mathbb{R}^{d}\) and \(t\geqslant 0\), are defined by \[\mathcal{P}_{t}(x,A)=\mathbb{P}\big{(}\{\omega\in\Omega:\Phi^{t}_{\omega}(x) \in A\}\big{)}\text{ for any Borel subset $A$ of $\mathbb{R}^{d}$.}\] A Borel probability measure \(\varrho\) on \(\mathbb{R}^{d}\) is called a _stationary measure_ of \(\Phi\) if for any \(t\geqslant 0\) and Borel subset \(A\) of \(\mathbb{R}^{d}\), one has that \[\varrho(A)=\int_{\mathbb{R}^{d}}\mathcal{P}_{t}(x,A)d\varrho(x).\] Furthermore, \(\varrho\) is called an _ergodic_ stationary measure if every Borel subset \(A\) of \(\mathbb{R}^{d}\) satisfying \(\mathcal{P}_{t}(x,A)=1_{A}(x)\) for \(\varrho\)-a.s. \(x\in\mathbb{R}^{d}\) and all \(t>0\) has \(\varrho\)-measure \(0\) or \(1\) (see other equivalent definitions in [10, Theorem 3.2.4]). Next, we give a lemma which illustrates the relationship between invariant measures of the RDS and stationary measures of the stochastic flow.

**Lemma 3.4**.: Suppose that \(\Phi\) is the stochastic flow on \(\mathbb{R}^{d}\) over \((\Omega,\mathscr{F},\mathbb{P})\) defined as (1.5). Let \(\varrho\) be an ergodic stationary measure of \(\Phi\). Then there exists a unique Borel probability measure \(\mu\) on \(\Omega\times\mathbb{R}^{d}\) such that \(\mu\) is an invariant ergodic measure of the RDS \(\Phi^{(\tau)}\) for any \(\tau\in(0,+\infty)\) with marginal \(\varrho\).
Specifically,

* for any \(\tau\in(0,+\infty)\), \(\mu\) is invariant and ergodic with respect to the skew product map (3.1) \[T_{\tau}:\Omega\times\mathbb{R}^{d}\to\Omega\times\mathbb{R}^{d}\quad\text{ with}\quad T_{\tau}(\omega,x)=(\theta^{\tau}\omega,\Phi^{\tau}_{\omega}x);\]
* \((\pi_{\Omega})_{*}\mu=\mathbb{P}\) and \((\pi_{\mathbb{R}^{d}})_{*}\mu=\varrho\), where \(\pi_{\Omega}\) and \(\pi_{\mathbb{R}^{d}}\) are the projections from \(\Omega\times\mathbb{R}^{d}\) to \(\Omega\) and \(\mathbb{R}^{d}\), respectively.

Proof.: This result follows from [22, Theorem 2.1 in Section I] and [2, Theorem 1.7.1].

Finally, we give the definition of the entropy of a stochastic flow with respect to its stationary measure. The reader can refer to [25, Section 3 in Chapter V] for more details. For the sake of convenience, we only consider the behavior of the time-1 map of the stochastic flow in this paper.

**Definition 3.5**.: Let \(\Phi\) be the stochastic flow on \(\mathbb{R}^{d}\) over \((\Omega,\mathscr{F},\mathbb{P})\) defined as (1.5) and \(\varrho\) be its stationary measure. The entropy \(h_{\varrho}(\Phi)\) of \((\Phi,\varrho)\) is defined as the entropy of the RDS \(\Phi^{(1)}\) with respect to the invariant measure \(\mu\) defined in Lemma 3.4.

### Positive Lyapunov exponents for GSNS

In smooth dynamical systems, the Lyapunov exponent is a significant index measuring the speed of divergence and convergence of nearby trajectories, via the construction of invariant manifolds. The celebrated multiplicative ergodic theorem asserts the existence of Lyapunov exponents. We state a version convenient for our setting. The reader can see the details in [2, Theorem 3.4.1].

**Proposition 3.6**.: Suppose that \(F\) is an RDS on \(\mathbb{R}^{d}\) over an ergodic measure-preserving dynamical system \((\Omega,\mathscr{F},\mathbb{P},\theta)\) on a Polish probability space and \(\mu\) is an invariant ergodic measure of \(F\). If \(F^{1}_{\omega}\) is a \(C^{1}\) diffeomorphism on \(\mathbb{R}^{d}\) for \(\mathbb{P}\)-a.s. \(\omega\in\Omega\) and the following integral condition holds, \[\int_{\Omega\times\mathbb{R}^{d}}\left(\log^{+}|\mathrm{d}_{x}F^{1}_{\omega}|+ \log^{+}|\mathrm{d}_{x}(F^{1}_{\omega})^{-1}|\right)\mathrm{d}\mu(\omega,x)<+\infty, \tag{3.2}\] where \(\log^{+}|a|:=\max\{\log|a|,0\}\), then there is a \(\mu\)-full measure subset \(\Lambda\subset\Omega\times\mathbb{R}^{d}\) and \(d\) constants \[+\infty>\lambda_{1}\geqslant\lambda_{2}\geqslant\cdots\geqslant\lambda_{d}>-\infty\] such that for each \((\omega,x)\in\Lambda\), there exists a decreasing measurable filtration \[T_{x}\mathbb{R}^{d}=E_{1}(\omega,x)\supset E_{2}(\omega,x)\supset\cdots \supset E_{d}(\omega,x)\supsetneq E_{d+1}(\omega,x):=\{\mathbf{0}\},\] with the following properties:

1. \((d_{x}F^{n}_{\omega})E_{i}(\omega,x)=E_{i}(\theta^{n}\omega,F^{n}_{\omega}x)\) for \(i=1,2,\ldots,d\), \(n\in\mathbb{Z}_{+}\) and \((\omega,x)\in\Lambda\);
2. for any \(i=1,\ldots,d\) and \((\omega,x)\in\Lambda\), if \(E_{i}(\omega,x)\setminus E_{i+1}(\omega,x)\neq\varnothing\), then \(v\in E_{i}(\omega,x)\setminus E_{i+1}(\omega,x)\) if and only if \[\lim_{n\to+\infty}\frac{\log|(d_{x}F^{n}_{\omega})v|}{n}=\lambda_{i}.\]

Usually, \(\lambda_{1},\lambda_{2},\ldots,\lambda_{d}\) are called the Lyapunov exponents of \((F,\mu)\). In particular, \(\lambda_{1}\) is called the top Lyapunov exponent of \((F,\mu)\). It is extremely hard to determine whether the top Lyapunov exponent of a specific dynamical system is positive (see the discussion in [3]). Recently, Bedrossian et
al. put forward a new method for obtaining quantitative lower bounds on the top Lyapunov exponent for Euler-like systems, including GSNS, in [4]. Then, Bedrossian and Punshon-Smith reformulated the condition into a form more amenable to GSNS in [6], and they proved the following result.

**Proposition 3.7** ([6, Theorem 1.1 and Remark 1.5]).: For any positive integer \(N\geqslant 392\), if \(\mathcal{K}_{N}\) is hypoelliptic, then there exists \(\epsilon_{0}>0\) such that for all \(\epsilon\in(0,\epsilon_{0})\) the stochastic flow \(\Phi\) of (1.1\({}_{N}\)) has a unique stationary measure \(\varrho\) and the top Lyapunov exponent \(\lambda_{1}\) of \((\Phi,\varrho)\) is positive.

**Remark 3.8**.: Note that \(\varrho\) is also an ergodic stationary measure of \(\Phi\) (for example, see [10, Theorem 3.2.6]). The Lyapunov exponents of \((\Phi,\varrho)\) are defined as the Lyapunov exponents of the RDS \(\Phi^{(1)}\) with respect to the invariant ergodic measure \(\mu\) defined in Lemma 3.4.

### Positive entropy for GSNS

It is well-known that Pesin's entropy formula builds the connection between measure-theoretic entropy and positive Lyapunov exponents for SRB measures on compact Riemannian manifolds; see [24, 23, 25, 28] for a variety of settings. Note that the stationary measure \(\varrho\) of the stochastic flow \(\Phi\) in Proposition 3.7 has full support (for example, see [16, 18]). Hence, we need to consider Pesin's entropy formula for RDS on \(\mathbb{R}^{d}\).

**Lemma 3.9**.: Let \(\Phi\) be the stochastic flow on \(\mathbb{R}^{d}\) over \((\Omega,\mathscr{F},\mathbb{P})\) defined as (1.5) and \(\varrho\) be a smooth ergodic stationary measure of \(\Phi\), i.e. \(\varrho\) being absolutely continuous with respect to the volume measure of \(\mathbb{R}^{d}\). Denote \(\mu\) as the probability measure on \(\Omega\times\mathbb{R}^{d}\) defined in Lemma 3.4. If the following three assumptions hold, (**Assumption 1**) \[\log^{+}\sup_{v\in O(\mathbf{0},1)}|\mathrm{d}_{x+v}\Phi_{\omega}^{n}|\in L^{1}( \Omega\times\mathbb{R}^{d},\mu)\quad\text{for any $n\in\mathbb{N}$},\] (**Assumption 2**) \[\log^{+}\left(\sup_{v\in O(\mathbf{0},1)}|\mathrm{d}_{x+v}^{2}\Phi_{\omega}^{ 1}|\right)\in L^{1}(\Omega\times\mathbb{R}^{d},\mu),\] (**Assumption 3**) \[\log^{+}|\mathrm{d}_{\Phi_{\omega}^{1}(x)}\big{(}\Phi_{\omega}^{1}\big{)}^{-1 }|,\quad\log^{+}\left(\sup_{v\in O(\mathbf{0},1)}|\mathrm{d}_{\Phi_{\omega}^{ 1}(x+v)}^{2}\big{(}\Phi_{\omega}^{1}\big{)}^{-1}|\right)\in L^{1}(\Omega\times \mathbb{R}^{d},\mu),\] where \(O(\mathbf{0},1):=\{v\in\mathbb{R}^{d}:|v|<1\}\), then \[h_{\varrho}(\Phi)=\sum_{i=1}^{d}\lambda_{i}^{+}>0,\] where \(\lambda_{i}^{+}=\max\{\lambda_{i},0\}\).

Proof.: This lemma is just a rewrite of [7, Theorem 3.8] in our setting.

**Proposition 3.10**.: Under the setting of Proposition 3.7, let \(\varrho\) be the unique stationary measure of the stochastic flow \(\Phi\). Then \[h_{\varrho}(\Phi)=\sum_{i=1}^{d}\lambda_{i}^{+}\geqslant\lambda_{1}>0,\] where \(\lambda_{1},\cdots,\lambda_{d}\) are the Lyapunov exponents of \((\Phi,\varrho)\).

Proof.: Firstly, we summarize the properties of the unique stationary measure \(\varrho\) of \(\Phi\) from [5, Theorem 1.2] as follows.

**Lemma 3.11**.: Under the setting of Proposition 3.7, let \(\varrho\) be the unique stationary measure of the stochastic flow \(\Phi\) and \(m_{\mathbb{R}^{d}}\) be the volume measure on \(\mathbb{R}^{d}\).
Then \(\varrho\ll m_{\mathbb{R}^{d}}\) and \(\rho:=\frac{d\varrho}{dm_{\mathbb{R}^{d}}}\) is a smooth function satisfying that there exist some constants \(C,\eta>0\) such that for every \(x\in\mathbb{R}^{d}\), \(\rho(x)\leqslant Ce^{-\eta|x|^{2}}\).

According to Lemma 3.9 and Proposition 3.7, we only need to verify **Assumptions 1-3**. To simplify estimates, write (1.1\({}_{N}\)) as \[\mathrm{d}x_{t}=\Big{(}B(x_{t},x_{t})-\epsilon Ax_{t}\Big{)}\mathrm{d}t+\sum_ {i=1}^{2|\mathcal{K}_{N}|}e_{i}\mathrm{d}W_{t}^{i}, \tag{3.3}\] where \(B:\mathbb{R}^{d}\times\mathbb{R}^{d}\to\mathbb{R}^{d}\) is bilinear, and \(A\) is a \(d\times d\) matrix.

Verification for **Assumption 1:** For any \(i\in\{1,2,\ldots,d\}\), \(t\in(0,+\infty)\) and \(x\in\mathbb{R}^{d}\), the equality \[\partial_{i}\Phi_{\omega}^{t}(x)=\partial_{i}x+\int_{0}^{t}B\big{(}\Phi_{ \omega}^{\tau}(x),\partial_{i}\Phi_{\omega}^{\tau}(x)\big{)}+B\big{(}\partial _{i}\Phi_{\omega}^{\tau}(x),\Phi_{\omega}^{\tau}(x)\big{)}\mathrm{d}\tau-\int_ {0}^{t}\epsilon A\big{(}\partial_{i}\Phi_{\omega}^{\tau}(x)\big{)}\mathrm{d}\tau,\] holds for \(\mathbb{P}\)-a.s. \(\omega\in\Omega\), which implies that \[|\partial_{i}\Phi^{t}_{\omega}(x)|\leqslant 1+\int_{0}^{t}\Big{(}a+b|\Phi^{\tau}_{ \omega}(x)|\Big{)}|\partial_{i}\Phi^{\tau}_{\omega}(x)|\mathrm{d}\tau,\] where \(a>1\) and \(b>1\) are constants which only depend on \(A\) and \(B\), respectively. By using Gronwall's inequality (for example, see [27, Lemma 1.1]), we have \[|\partial_{i}\Phi^{t}_{\omega}(x)|\leqslant\exp\left(\int_{0}^{t}a+b|\Phi^{ \tau}_{\omega}(x)|\mathrm{d}\tau\right). \tag{3.4}\] Let \(\{v_{k}\}_{k\in\mathbb{N}}\) be a countable dense subset of \(O(\mathbf{0},1):=\{v\in\mathbb{R}^{d}:|v|<1\}\). Fixing \(t\in(0,+\infty)\), define a Borel measurable partition of \(\Omega\times\mathbb{R}^{d}\) as \[O_{1}:=\big{\{}(\omega,x)\in\Omega\times\mathbb{R}^{d}:|\Phi^{t}_{\omega}(x+v_{1} )|=\sup_{v\in O(\mathbf{0},1)}|\Phi^{t}_{\omega}(x+v)|\big{\}},\] \[O_{k}:=\big{\{}(\omega,x)\in\Omega\times\mathbb{R}^{d}:|\Phi^{t}_{\omega}(x+v _{k})|=\sup_{v\in O(\mathbf{0},1)}|\Phi^{t}_{\omega}(x+v)|\big{\}}\setminus \bigcup_{l=1}^{k-1}O_{l}\text{ for }k\geqslant 2.\] Then, one has that \[\int_{\Omega\times\mathbb{R}^{d}}\sup_{v\in O(\mathbf{0},1)}|\Phi ^{t}_{\omega}(x+v)|\mathrm{d}\mu(\omega,x) =\sum_{k\in\mathbb{N}}\int_{O_{k}}|\Phi^{t}_{\omega}(x+v_{k})| \mathrm{d}\mu(\omega,x)\] \[=\sum_{k\in\mathbb{N}}\int_{T_{t}O_{k}}|x+v_{k}|\mathrm{d}\mu( \omega,x)\] \[\leqslant\sum_{k\in\mathbb{N}}\int_{T_{t}O_{k}}(|x|+1)\mathrm{d} \mu(\omega,x)\] \[\stackrel{{\text{Lemma 3.4}}}{{\leqslant}}\int_{ \Omega\times\mathbb{R}^{d}}|x|\mathrm{d}\mu(\omega,x)+1<+\infty,\] where \(T_{t}\) is the invertible measurable transformation on \(\Omega\times\mathbb{R}^{d}\) defined as (3.1).
Therefore, for any \(n\in\mathbb{N}\),

\[\int_{\Omega\times\mathbb{R}^{d}}\log^{+}\sup_{v\in O(\mathbf{0},1)}|\partial_{i}\Phi^{n}_{\omega}(x+v)|\mathrm{d}\mu(\omega,x) \stackrel{{\eqref{3.4}}}{{\leqslant}}an+b\int_{\Omega\times\mathbb{R}^{d}}\int_{0}^{n}\sup_{v\in O(\mathbf{0},1)}|\Phi^{\tau}_{\omega}(x+v)|\mathrm{d}\tau\mathrm{d}\mu(\omega,x)\]
\[=an+b\int_{0}^{n}\int_{\Omega\times\mathbb{R}^{d}}\sup_{v\in O(\mathbf{0},1)}|\Phi^{\tau}_{\omega}(x+v)|\mathrm{d}\mu(\omega,x)\mathrm{d}\tau\]
\[\leqslant an+b\int_{0}^{n}\int_{\Omega\times\mathbb{R}^{d}}(|x|+1)\mathrm{d}\mu(\omega,x)\mathrm{d}\tau \tag{3.5}\]
\[\stackrel{{\text{Lemma 3.4}}}{{=}}an+nb+nb\int_{\mathbb{R}^{d}}|x|\mathrm{d}\varrho(x)<+\infty,\]

where the inequality in the first line uses (3.4). It follows that \(\log^{+}\big{(}\sup_{v\in O(\mathbf{0},1)}|\mathrm{d}_{x+v}\Phi^{n}_{\omega}|\big{)}\in L^{1}(\Omega\times\mathbb{R}^{d},\mu)\).

Verification for **Assumption 2:** For any \(i,j\in\{1,2,\ldots,d\}\), \(t\in(0,+\infty)\) and \(x\in\mathbb{R}^{d}\), the following equality holds for \(\mathbb{P}\)-a.s. \(\omega\in\Omega\):

\[\partial_{ij}\Phi^{t}_{\omega}(x)=\int_{0}^{t}\big{(}B\big{(}\Phi^{\tau}_{\omega}(x),\partial_{ij}\Phi^{\tau}_{\omega}(x)\big{)}+B\big{(}\partial_{ij}\Phi^{\tau}_{\omega}(x),\Phi^{\tau}_{\omega}(x)\big{)}\big{)}\,\mathrm{d}\tau+\int_{0}^{t}\big{(}B\big{(}\partial_{i}\Phi_{\omega}^{\tau}(x),\partial_{j}\Phi_{\omega}^{\tau}(x)\big{)}+B\big{(}\partial_{j}\Phi_{\omega}^{\tau}(x),\partial_{i}\Phi_{\omega}^{\tau}(x)\big{)}-\epsilon A\big{(}\partial_{ij}\Phi_{\omega}^{\tau}(x)\big{)}\big{)}\,\mathrm{d}\tau,\]

which implies that

\[|\partial_{ij}\Phi_{\omega}^{1}(x)| \leqslant\int_{0}^{1}(a+b|\Phi_{\omega}^{\tau}(x)|)\cdot|\partial_{ij}\Phi_{\omega}^{\tau}(x)|\mathrm{d}\tau+b\int_{0}^{1}|\partial_{j}\Phi_{\omega}^{\tau}(x)|\cdot|\partial_{i}\Phi_{\omega}^{\tau}(x)|\mathrm{d}\tau\]
\[\stackrel{{\eqref{3.4}}}{{\leqslant}}\int_{0}^{1}(a+b|\Phi_{\omega}^{\tau}(x)|)\cdot|\partial_{ij}\Phi_{\omega}^{\tau}(x)|\mathrm{d}\tau+b\exp\left(2\int_{0}^{1}a+b|\Phi_{\omega}^{\tau}(x)|\mathrm{d}\tau\right).\]

Using Gronwall's inequality again, we have

\[|\partial_{ij}\Phi_{\omega}^{1}(x)|\leqslant b\exp\left(3\int_{0}^{1}a+b|\Phi_{\omega}^{\tau}(x)|\mathrm{d}\tau\right). \tag{3.6}\]

Combining (3.5) and (3.6), one has that

\[\int_{\Omega\times\mathbb{R}^{d}}\log^{+}\sup_{v\in O(\mathbf{0},1)}|\partial_{ij}\Phi_{\omega}^{1}(x+v)|\mathrm{d}\mu(\omega,x) \leqslant(\log b+3a)+3b\int_{\Omega\times\mathbb{R}^{d}}\int_{0}^{1}\sup_{v\in O(\mathbf{0},1)}|\Phi_{\omega}^{\tau}(x+v)|\mathrm{d}\tau\,\mathrm{d}\mu(\omega,x)\]
\[=(\log b+3a)+3b\int_{0}^{1}\int_{\Omega\times\mathbb{R}^{d}}\sup_{v\in O(\mathbf{0},1)}|\Phi_{\omega}^{\tau}(x+v)|\mathrm{d}\mu(\omega,x)\,\mathrm{d}\tau\]
\[<+\infty.\]

Therefore, \(\log^{+}\big{(}\sup_{v\in O(\mathbf{0},1)}|\mathrm{d}_{x+v}^{2}\Phi_{\omega}^{1}|\big{)}\in L^{1}(\Omega\times\mathbb{R}^{d},\mu)\).

Verification for **Assumption 3:** For \(t\geqslant\tau\geqslant 0\) and \(\omega\in\Omega\), denote \(\Phi_{\omega}^{\tau,t}:=\Phi_{\omega}^{\tau}\circ(\Phi_{\omega}^{t})^{-1}\). Using the backward flow of Equation (3.3), for any \(t\geqslant 0\) and \(x\in\mathbb{R}^{d}\), one has that

\[\Phi_{\omega}^{0,t}(x)=x-\int_{0}^{t}\big{(}B\big{(}\Phi_{\omega}^{\tau,t}(x),\Phi_{\omega}^{\tau,t}(x)\big{)}-\epsilon A\big{(}\Phi_{\omega}^{\tau,t}(x)\big{)}\big{)}\,\mathrm{d}\tau-\sum_{i=1}^{2|\mathcal{K}_{N}|}\int_{0}^{t}e_{i}\mathrm{d}W_{\tau}^{i}\]

for \(\mathbb{P}\)-a.s. \(\omega\in\Omega\).
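To indicate how the argument repeats, we note (a sketch, with the same constants \(a,b\) as in (3.4)) that differentiating the above backward equation with respect to \(x\) and applying Gronwall's inequality exactly as before gives

\[|\partial_{i}\Phi_{\omega}^{0,t}(x)|\leqslant\exp\left(\int_{0}^{t}a+b|\Phi_{\omega}^{\tau,t}(x)|\mathrm{d}\tau\right),\]

and the second-order derivatives of \(\big{(}\Phi_{\omega}^{1}\big{)}^{-1}=\Phi_{\omega}^{0,1}\) obey a bound of the form (3.6); the required integrability then follows as in (3.5).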
Repeating the above arguments from the verifications of **Assumption 1** and **Assumption 2**, one can verify that

\[\log^{+}|\mathrm{d}_{\Phi_{\omega}^{1}(x)}\big{(}\Phi_{\omega}^{1}\big{)}^{-1}|,\quad\log^{+}\left(\sup_{v\in O(\mathbf{0},1)}|\mathrm{d}_{\Phi_{\omega}^{1}(x+v)}^{2}\big{(}\Phi_{\omega}^{1}\big{)}^{-1}|\right)\in L^{1}(\Omega\times\mathbb{R}^{d},\mu).\]

This finishes the proof of Proposition 3.10.

## 4. Revisit of weak-horseshoes

In this section, we review the proof of [19, Theorem 2.1] to obtain measurable weak-horseshoes for GSNS, namely Theorem 4.4. The measurability originates from the disintegration of the measure and from the return times.

### Preparing lemmas

In this subsection, we mainly present some preparatory lemmas. The following lemma is summarized from [19, Lemma 3.4, Lemma 3.5, Lemma 4.3].

**Lemma 4.1**.: Assume that \(\pi:(X,\mathscr{X},\mu,T)\to(Z,\mathscr{Z},\eta,R)\) is a factor map between two invertible ergodic measure-preserving dynamical systems on Polish probability spaces. Let \(\pi_{1}:(X,\mathscr{X},\mu,T)\to(Y,\mathscr{Y},\nu,S)\) be the relative Pinsker factor map of \(\pi\) and \(\mu=\int_{Y}\mu_{y}d\nu(y)\) be the disintegration of \(\mu\) relative to \(Y\). If \(h_{\mu}(T|Z)>0\), then

1. \(\mu_{y}\) is non-atomic (that is, \(\mu_{y}(\{x\})=0\) for each \(x\in X\)) for \(\nu\)-a.s. \(y\in Y\);

2. \((X\times X,\mathscr{X}\otimes\mathscr{X},\mu\times_{Y}\mu,T\times T)\) is an ergodic measure-preserving dynamical system on the Polish probability space, where \((X\times X,\mathscr{X}\otimes\mathscr{X},\mu\times_{Y}\mu,T\times T)\) is the product of \((X,\mathscr{X},\mu,T)\) with itself relative to \(\pi\) (see Lemma 2.2);

3. if \(U_{1},U_{2}\in\mathscr{X}\) with \(\mu\times_{Y}\mu(U_{1}\times U_{2})>0\), then there exists a Borel measurable subset \(\boldsymbol{A}\) of \(Y\) with \(\nu(\boldsymbol{A})>0\), a positive integer \(\boldsymbol{r}>2\) and a Borel measurable partition \(\alpha=\{B_{1},\ldots,B_{\boldsymbol{r}}\}\) of \(X\) such that \(\pi_{1}^{-1}(\boldsymbol{A})\cap B_{i}\subset U_{i},i=1,2\) and \(\mu_{y}(B_{j})=1/\boldsymbol{r},j=1,\ldots,\boldsymbol{r}\) for \(\nu\)-a.s. \(y\in Y\).

The following combinatorial lemma is well known; it is the Karpovsky-Milman-Alon generalization of the Sauer-Perles-Shelah lemma. For example, the reader can find it in [1, Corollary 1].

**Lemma 4.2**.: Given a positive integer \(\boldsymbol{r}\geqslant 2\) and \(\boldsymbol{\lambda}\in(1,+\infty)\), there exists a positive constant \(\boldsymbol{c}\) such that for all \(n\in\mathbb{N}\), if \(\mathcal{R}\subset\{1,\cdots,\boldsymbol{r}\}^{\{1,\ldots,n\}}\) satisfies \(|\mathcal{R}|\geqslant\big{(}(\boldsymbol{r}-1)\boldsymbol{\lambda}\big{)}^{n}\), then there is a subset \(J(n,\mathcal{R})\) of \(\{1,\ldots,n\}\) with \(|J(n,\mathcal{R})|\geqslant\boldsymbol{c}n\) such that for any \(s\in\{1,\ldots,\boldsymbol{r}\}^{J(n,\mathcal{R})}\), there exists \(\check{s}\in\mathcal{R}\) with \(s(j)=\check{s}(j)\) for each \(j\in J(n,\mathcal{R})\).

Now we introduce a variant of the measurable selection theorem. It is a quick consequence of [12, Lemma 5.25] combined with the measure-preserving property.

**Lemma 4.3**.: Let \(\pi:(X,\mathscr{X},\mu)\to(Y,\mathscr{Y},\nu)\) be a factor map between two Polish probability spaces. Then there exists a Borel measurable map \(\widehat{\pi}:Y\to X\) such that \(\pi\circ\widehat{\pi}(y)=y\) for \(\nu\)-a.s. \(y\in Y\).
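Returning to Lemma 4.2 for a moment, here is a sanity check in the special case \(\boldsymbol{r}=2\) (a side remark; for the general case see [1, Corollary 1]). Suppose that no \(J\subset\{1,\ldots,n\}\) with \(|J|\geqslant\boldsymbol{c}n\) satisfies \(\{s|_{J}:s\in\mathcal{R}\}=\{1,2\}^{J}\). Then the Sauer-Perles-Shelah lemma yields

\[|\mathcal{R}|\leqslant\sum_{k<\boldsymbol{c}n}\binom{n}{k}\leqslant\Big{(}\frac{e}{\boldsymbol{c}}\Big{)}^{\boldsymbol{c}n},\]

and since \((e/\boldsymbol{c})^{\boldsymbol{c}}\to 1\) as \(\boldsymbol{c}\to 0^{+}\), choosing \(\boldsymbol{c}\) so small that \((e/\boldsymbol{c})^{\boldsymbol{c}}<\boldsymbol{\lambda}\) contradicts \(|\mathcal{R}|\geqslant\boldsymbol{\lambda}^{n}\).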
### Weak-horseshoes

Before stating Theorem 4.4, we record the one-to-one correspondence between elements of \(\{0,1\}^{\mathbb{Z}_{+}}\) and subsets of \(\mathbb{Z}_{+}\) given by

\[u\in\{0,1\}^{\mathbb{Z}_{+}}\mapsto\widehat{u}=\{n\in\mathbb{Z}_{+}:u(n)=1\}. \tag{4.1}\]

Now we can state the measurable weak-horseshoe property of GSNS.

**Theorem 4.4**.: _Under the setting in Proposition 3.10, there is a pair of non-empty disjoint compact subsets \(\{U_{1},U_{2}\}\) of \(\mathbb{R}^{d}\), a positive constant \(\boldsymbol{b}\), a sequence \(\{\mathbf{N}_{n}\}_{n\in\mathbb{N}}\) of strictly increasing Borel measurable maps \(\mathbf{N}_{n}:\Omega\to\mathbb{Z}_{+}\)1, and a sequence \(\{\gamma_{n}\}_{n\in\mathbb{N}}\) of Borel measurable maps \(\gamma_{n}:\Omega\to\{0,1\}^{\mathbb{Z}_{+}}\) such that for any \(n\in\mathbb{N}\) and \(\mathbb{P}\)-a.s. \(\omega\in\Omega\), one has that_

Footnote 1: Here, "strictly increasing" means that \(\mathbf{N}_{n+1}(\omega)>\mathbf{N}_{n}(\omega)\) for any \(n\in\mathbb{N}\) and \(\mathbb{P}\)-a.s. \(\omega\in\Omega\).

_(a)_ \(\widehat{\gamma}_{n}(\omega)\subset\{0,1,\ldots,\mathbf{N}_{n}(\omega)-1\}\) _and_ \(|\widehat{\gamma}_{n}(\omega)|\geqslant\boldsymbol{b}\mathbf{N}_{n}(\omega)\)_;_

_(b) for any_ \(s\in\{1,2\}^{\widehat{\gamma}_{n}(\omega)}\)_, there exists an_ \(x_{s}\in\mathbb{R}^{d}\) _with_ \(\Phi_{\omega}^{j}(x_{s})\in U_{s(j)}\) _for any_ \(j\in\widehat{\gamma}_{n}(\omega)\)_._

Proof.: Recall that \(\Phi\) is the stochastic flow of GSNS on \(\mathbb{R}^{d}\) over \((\Omega,\mathscr{F},\mathbb{P})\) defined as (1.5), \(\varrho\) is the unique stationary measure of \(\Phi\), and \(\theta:\Omega\to\Omega\) with \(\theta(\omega)=\omega(\cdot+1)-\omega(\cdot)\) is the measure-preserving transformation on \((\Omega,\mathscr{F},\mathbb{P})\). Then \((\Omega,\mathscr{F},\mathbb{P},\theta)\) is an ergodic measure-preserving dynamical system. Denote \(X:=\Omega\times\mathbb{R}^{d}\), let \(\mathscr{X}\) be the Borel \(\sigma\)-algebra of \(X\), \(\mu\) the Borel probability measure on \(X\) defined in Lemma 3.4, and \(T:X\to X\) the skew-product map induced by the time-\(1\) map of \(\Phi\). Then \(\pi_{\Omega}:\big{(}X,\mathscr{X},\mu,T\big{)}\to(\Omega,\mathscr{F},\mathbb{P},\theta)\) with \((\omega,x)\mapsto\omega\) is a factor map between two invertible ergodic measure-preserving dynamical systems.

Denote by \(\mathcal{P}_{\mu}(\pi_{\Omega})\) the relative Pinsker \(\sigma\)-algebra of \(\pi_{\Omega}\). Then, there exists an invertible measure-preserving dynamical system \((Y,\mathscr{Y},\nu,S)\) on the Polish probability space and two factor maps

\[\pi_{1}:(X,\mathscr{X},\mu,T)\to(Y,\mathscr{Y},\nu,S),\quad\pi_{2}:(Y,\mathscr{Y},\nu,S)\to(\Omega,\mathscr{F},\mathbb{P},\theta),\]

between invertible measure-preserving dynamical systems on Polish probability spaces such that \(\pi_{2}\circ\pi_{1}=\pi_{\Omega}\) and \(\pi_{1}^{-1}(\mathscr{Y})=\mathcal{P}_{\mu}(\pi_{\Omega})\pmod{\mu}\). Let \(\mu=\int_{Y}\mu_{y}d\nu(y)\) be the disintegration relative to \(Y\). According to Proposition 3.10 and Remark 3.3, \(h_{\mu}(T|\Omega)=h_{\varrho}(\Phi)>0.\) By (1) and (2) in Lemma 4.1, we know that the measure-preserving dynamical system

\[(X\times X,\mathscr{X}\times\mathscr{X},\mu\times_{Y}\mu,T\times T)\]

is ergodic and \(\mu_{y}\) is non-atomic for \(\nu\)-a.s. \(y\in Y\).

Next, we give a sufficient condition to ensure the existence of measurable weak-horseshoes.
**Lemma 4.5**.: For any pair of disjoint non-empty compact subsets \(\{U_{1},U_{2}\}\) of \(\mathbb{R}^{d}\), if

\[\mu\times_{Y}\mu\big{(}(\Omega\times U_{1})\times(\Omega\times U_{2})\big{)}>0,\]

then there exists a positive constant \(\mathbf{b}\), a sequence \(\{\mathbf{N}_{n}\}_{n\in\mathbb{N}}\) of strictly increasing Borel measurable maps \(\mathbf{N}_{n}:\Omega\to\mathbb{N}\) and a sequence \(\{\gamma_{n}\}_{n\in\mathbb{N}}\) of Borel measurable maps \(\gamma_{n}:\Omega\to\{0,1\}^{\mathbb{Z}_{+}}\) such that for any \(n\in\mathbb{N}\) and \(\mathbb{P}\)-a.s. \(\omega\in\Omega\) one has that

(a) \(\widehat{\gamma}_{n}(\omega)\subset\{0,1,\ldots,\mathbf{N}_{n}(\omega)-1\}\) and \(|\widehat{\gamma}_{n}(\omega)|\geqslant\mathbf{b}\mathbf{N}_{n}(\omega)\);

(b) for any \(s\in\{1,2\}^{\widehat{\gamma}_{n}(\omega)}\), there exists an \(x_{s}\in\mathbb{R}^{d}\) with \(\Phi_{\omega}^{j}(x_{s})\in U_{s(j)}\) for any \(j\in\widehat{\gamma}_{n}(\omega)\).

Now we are ready to prove Theorem 4.4 assuming Lemma 4.5. It is clear that we only need to explain the existence of \(\{U_{1},U_{2}\}\) in Lemma 4.5. The proof of Lemma 4.5 will be carried out later.

Let \(\pi_{\mathbb{R}^{d}}:\Omega\times\mathbb{R}^{d}\to\mathbb{R}^{d}\) be the projection and

\[\Delta_{\pi_{\mathbb{R}^{d}}}:=\left\{\big{(}(\omega_{1},x_{1}),(\omega_{2},x_{2})\big{)}\in X\times X:x_{1}=x_{2}\right\}.\]

Note that \(\pi_{1}^{-1}(y)\subset\pi_{\Omega}^{-1}(\pi_{2}(y))\) and \(\mu_{y}(\pi_{1}^{-1}(y))=1\). It follows that

\[\mu_{y}\times\mu_{y}(\Delta_{\pi_{\mathbb{R}^{d}}}) =\mu_{y}\times\mu_{y}\left(\Delta_{\pi_{\mathbb{R}^{d}}}\cap\big{(}\pi_{\Omega}^{-1}(\pi_{2}(y))\times\pi_{\Omega}^{-1}(\pi_{2}(y))\big{)}\right)\]
\[\leqslant\mu_{y}\times\mu_{y}(\Delta_{X})\]
\[\overset{{\text{Lemma 4.1 (1)}}}{=}0\]

for \(\nu\)-a.s. \(y\in Y\). Hence, \(\mu\times_{Y}\mu(X\times X\setminus\Delta_{\pi_{\mathbb{R}^{d}}})=1\). Denote

\[\mathfrak{O}:=\{\bar{O}(x,1/n):x\in\mathbb{Q}^{d}\text{ and }n\in\mathbb{N}\},\text{ and }\mathfrak{U}:=\{(U_{1},U_{2})\in\mathfrak{O}\times\mathfrak{O}:\rho(U_{1},U_{2})>0\},\]

where \(\bar{O}(x,1/n):=\{y\in\mathbb{R}^{d}:\|x-y\|\leqslant 1/n\}\), and \(\rho(U_{1},U_{2})=\inf_{x\in U_{1},y\in U_{2}}\|x-y\|.\) It is clear that

\[X\times X\setminus\Delta_{\pi_{\mathbb{R}^{d}}}=\bigcup_{(U_{1},U_{2})\in\mathfrak{U}}(\Omega\times U_{1})\times(\Omega\times U_{2}).\]

Therefore, there exists \((U_{1},U_{2})\in\mathfrak{U}\) such that \(\mu\times_{Y}\mu\big{(}(\Omega\times U_{1}\big{)}\times\big{(}\Omega\times U_{2}\big{)}\big{)}>0.\) This finishes the proof of Theorem 4.4.

Finally, we divide into four steps to prove Lemma 4.5.

Proof of Lemma 4.5.: **Step 1:** In this step, we mainly introduce the "function of estimation entropy" and the "recurrence time sequence". The notation will be fixed throughout this proof, and we use boldface to stress some important quantities.

Recall that \(\{U_{1},U_{2}\}\) is a pair of disjoint non-empty compact subsets of \(\mathbb{R}^{d}\) with

\[\mu\times_{Y}\mu\Big{(}(\Omega\times U_{1})\times(\Omega\times U_{2})\Big{)}>0.\]

By (3) in Lemma 4.1, there exists a Borel measurable subset \(\boldsymbol{A}\) of \(Y\) with \(\nu(\boldsymbol{A})>0\), a positive integer \(\boldsymbol{r}>2\), and a Borel measurable partition \(\alpha=\{B_{1},B_{2},\ldots,B_{\boldsymbol{r}}\}\) of \(X\) such that

\[\pi_{1}^{-1}(\boldsymbol{A})\cap B_{i}\subset\Omega\times U_{i}\text{ for }i=1,2, \tag{4.2}\]

and \(\mu_{y}(B_{j})=1/\boldsymbol{r}\), \(j=1,\ldots,\boldsymbol{r}\) for \(\nu\)-a.s. \(y\in Y\).
**Function of estimation entropy:** According to (ii) in Lemma 2.3, we have

\[\lim_{l\to+\infty}h_{\mu}(T^{l},\alpha|\Omega)=H_{\mu}(\alpha|Y)=\sum_{j=1}^{\boldsymbol{r}}\int_{Y}-\mu_{y}(B_{j})\log\mu_{y}(B_{j})d\nu(y)=\log\boldsymbol{r}.\]

Thus, there is an \(\boldsymbol{l}\in\mathbb{N}\) such that

\[3\boldsymbol{c}_{0}:=h_{\mu}(T^{\boldsymbol{l}},\alpha|\Omega)-\nu(Y\setminus\boldsymbol{A})\log\boldsymbol{r}-\nu(\boldsymbol{A})\log(\boldsymbol{r}-1)>0.\]

**Claim 4.6**.: Define a Borel measurable function on \(Y\) as follows,

\[\delta(y):=\Big{(}h_{\mu_{y}}(T^{\boldsymbol{l}},\alpha)-\log(\boldsymbol{r}-1)\Big{)}1_{\boldsymbol{A}}(y),\]

where \(h_{\mu_{y}}(T^{\boldsymbol{l}},\alpha):=\lim_{n\to+\infty}H_{\mu_{y}}\Big{(}\alpha|\bigvee_{i=1}^{n}T^{-i\boldsymbol{l}}\alpha\Big{)}\). Then there exists a Borel measurable subset \(\boldsymbol{D}\) of \(Y\) such that \(\nu(\boldsymbol{D})>0\) and, for each \(y\in\boldsymbol{D}\), one has that:

1. \(\lim_{m\to+\infty}\frac{1}{m}\sum_{i=0}^{m-1}1_{\boldsymbol{A}}(S^{\boldsymbol{l}i}y)\) exists, and it is greater than \(0\). Denote the limit by \(1_{\boldsymbol{A}}^{*}(y)\); then \(1_{\boldsymbol{A}}^{*}(y)>0\);

2. \(\lim_{m\to+\infty}\frac{1}{m}\sum_{i=0}^{m-1}\delta(S^{\boldsymbol{l}i}y)\) exists, and it is greater than or equal to \(3\boldsymbol{c}_{0}\). Denote the limit by \(\delta^{*}(y)\); then \(\delta^{*}(y)\geqslant 3\boldsymbol{c}_{0}\);

3. for any \(i\in\mathbb{Z}\), \((T^{i})_{*}\mu_{y}=\mu_{S^{i}y}\);

4. for any \(i\in\mathbb{Z}\), one has that

\[1_{\boldsymbol{A}}(S^{\boldsymbol{l}i}y)\geqslant\frac{\delta(S^{\boldsymbol{l}i}y)}{\log\boldsymbol{r}-\log(\boldsymbol{r}-1)}.\]

Proof of Claim 4.6.: Note that \(\nu(\boldsymbol{A})>0\), \(h_{\mu_{y}}(T^{\boldsymbol{l}},\alpha)\leqslant\log\boldsymbol{r}\) for \(\nu\)-a.s. \(y\in Y\), and

\[\int_{Y}\delta(y)d\nu(y) =\int_{\boldsymbol{A}}h_{\mu_{y}}(T^{\boldsymbol{l}},\alpha)d\nu(y)-\nu(\boldsymbol{A})\log(\boldsymbol{r}-1)\]
\[=\int_{Y}h_{\mu_{y}}(T^{\boldsymbol{l}},\alpha)d\nu(y)-\int_{Y\setminus\boldsymbol{A}}h_{\mu_{y}}(T^{\boldsymbol{l}},\alpha)d\nu(y)-\nu(\boldsymbol{A})\log(\boldsymbol{r}-1)\]
\[\overset{{\text{(i) in Lemma 2.3}}}{=}h_{\mu}(T^{\boldsymbol{l}},\alpha|\Omega)-\int_{Y\setminus\boldsymbol{A}}h_{\mu_{y}}(T^{\boldsymbol{l}},\alpha)d\nu(y)-\nu(\boldsymbol{A})\log(\boldsymbol{r}-1)\]
\[\geqslant h_{\mu}(T^{\boldsymbol{l}},\alpha|\Omega)-\nu(Y\setminus\boldsymbol{A})\log\boldsymbol{r}-\nu(\boldsymbol{A})\log(\boldsymbol{r}-1)=3\boldsymbol{c}_{0}.\]
Applying Birkhoff's ergodic theorem to the measure-preserving system \((Y,\mathscr{Y},\nu,S^{\boldsymbol{l}})\) and the functions \(1_{\boldsymbol{A}}\) and \(\delta\), the limits in (1) and (2) exist for \(\nu\)-a.s. \(y\in Y\), and

\[\int_{Y}\delta^{*}(y)d\nu(y)=\int_{Y}\delta(y)d\nu(y)\geqslant 3\boldsymbol{c}_{0},\]

so that \(\nu\big{(}\{y\in Y:\delta^{*}(y)\geqslant 3\boldsymbol{c}_{0}\}\big{)}>0\). Since \(h_{\mu_{y}}(T^{\boldsymbol{l}},\alpha)\leqslant\log\boldsymbol{r}\) for \(\nu\)-a.s. \(y\in Y\), one has the pointwise bound

\[\delta(y)\leqslant\big{(}\log\boldsymbol{r}-\log(\boldsymbol{r}-1)\big{)}1_{\boldsymbol{A}}(y)\quad\text{for }\nu\text{-a.s. }y\in Y,\]

which gives (4) and, after averaging along the \(S^{\boldsymbol{l}}\)-orbit, shows that \(1_{\boldsymbol{A}}^{*}(y)\geqslant\delta^{*}(y)/\big{(}\log\boldsymbol{r}-\log(\boldsymbol{r}-1)\big{)}>0\) wherever \(\delta^{*}(y)\geqslant 3\boldsymbol{c}_{0}\). Finally, (3) holds for \(\nu\)-a.s. \(y\in Y\) by the \(S\)-invariance of the disintegration \(\mu=\int_{Y}\mu_{y}d\nu(y)\). Therefore the set \(\boldsymbol{D}\) of those \(y\in Y\) for which (1)-(4) hold and \(\delta^{*}(y)\geqslant 3\boldsymbol{c}_{0}\) satisfies \(\nu(\boldsymbol{D})>0\). This proves Claim 4.6.

**Recurrence time sequence:** Put

\[\boldsymbol{c}_{1}:=\frac{\boldsymbol{c}_{0}}{\boldsymbol{l}\big{(}\log\boldsymbol{r}-\log(\boldsymbol{r}-1)\big{)}}. \tag{4.3}\]

For each \(y\in\boldsymbol{D}\), the visits of the orbit \(\{S^{\boldsymbol{l}i}y\}_{i\in\mathbb{Z}_{+}}\) to \(\boldsymbol{A}\) have positive density by (1), and we record them by the maps \(\boldsymbol{a}_{n}:\boldsymbol{D}\to\mathbb{Z}_{+}\), \(n\in\mathbb{N}\), where \(\boldsymbol{a}_{n}(y):=\boldsymbol{l}i_{n}(y)\) and \(i_{n}(y)\) denotes the \(n\)-th element of \(\{i\in\mathbb{Z}_{+}:S^{\boldsymbol{l}i}y\in\boldsymbol{A}\}\). For any \(n,k\in\mathbb{N}\), one has that

\[\{y\in\boldsymbol{D}:\boldsymbol{a}_{n+1}(y)=\boldsymbol{l}k\}\]
\[=\boldsymbol{D}\cap S^{-\boldsymbol{l}k}\boldsymbol{A}\cap\left(\bigcup_{m=n}^{k-1}\bigcap_{i=m}^{k-1}S^{-\boldsymbol{l}i}(Y\setminus\boldsymbol{A})\cap\{y\in\boldsymbol{D}:\boldsymbol{a}_{n}(y)=\boldsymbol{l}(m-1)\}\right)\in\mathscr{Y}.\]

Thus, \(\{\boldsymbol{a}_{n}\}_{n\in\mathbb{N}}\) is a family of Borel measurable maps. Note that \(\boldsymbol{E}:=\bigcup_{i=0}^{+\infty}S^{-i}\boldsymbol{D}\) is a \(\nu\)-full measure subset of \(Y\), since \((Y,\mathscr{Y},\nu,S)\) is an ergodic measure-preserving dynamical system and \(\nu(\boldsymbol{D})>0\). Let

* \(\tau:\boldsymbol{E}\to\mathbb{Z}_{+}:=\mathbb{N}\cup\{0\}\) be the first entry time to \(\boldsymbol{D}\) under the transformation \(S\), i.e. \[\tau(z):=\inf\{n\in\mathbb{Z}_{+}:S^{n}z\in\boldsymbol{D}\};\]
* \(\{\boldsymbol{b}_{n}\}_{n\in\mathbb{N}}\) be a sequence of Borel measurable maps \[\boldsymbol{b}_{n}:\boldsymbol{E}\to\mathbb{Z}_{+}\text{ with }\boldsymbol{b}_{n}(z)=\tau(z)+\boldsymbol{a}_{n}(S^{\tau(z)}z).\]

It is clear that

1. \(\{\boldsymbol{b}_{n}\}_{n\in\mathbb{N}}\) is a sequence of Borel measurable maps \(\boldsymbol{b}_{n}:\boldsymbol{E}\to\mathbb{Z}_{+}\);
2. denoting \(\mathfrak{b}(z)=\{\boldsymbol{b}_{1}(z),\boldsymbol{b}_{2}(z),\dots\}\), then for each \(z\in\boldsymbol{E}\), (4.4) \[\lim_{m\to+\infty}\frac{|\mathfrak{b}(z)\cap\{0,1,\dots,m-1\}|}{m}\geqslant 3\boldsymbol{c}_{1};\]
3. for any \(z\in\boldsymbol{E}\), one has that \(S^{j}z\in\boldsymbol{A}\) for any \(j\in\mathfrak{b}(z)\).

**Step 2**: Recall that \(\{B_{1},\cdots,B_{\boldsymbol{r}}\}\) is the finite Borel measurable partition of \(X\) defined in **Step 1**. In this step, we mainly use the positive entropy of the system to estimate the "hitting freedom" of the elements \(\{B_{1},\cdots,B_{\boldsymbol{r}}\}\) along \(\mathfrak{b}(z)\) for each \(z\in\boldsymbol{E}\). Precisely, it is formulated as the following claim.

**Claim 4.7**.: There exists a Borel measurable map \(\widetilde{\mathbf{N}}_{0}:\boldsymbol{E}\to\mathbb{N}\) such that for any \(z\in\boldsymbol{E}\) and \(k\geqslant\widetilde{\mathbf{N}}_{0}(z)\), one has that \(|\mathcal{R}_{k}(z)|\geqslant\left((\boldsymbol{r}-1)2^{\boldsymbol{c}_{0}}\right)^{k}\), where

\[\mathcal{R}_{k}(z):=\left\{s\in\{1,\cdots,\boldsymbol{r}\}^{\{1,\dots,k\}}:\mu_{z}\left(\pi_{1}^{-1}z\cap\left(\bigcap_{j=1}^{k}T^{-\boldsymbol{b}_{j}(z)}B_{s(j)}\right)\right)>0\right\}. \tag{4.5}\]

Proof of Claim 4.7.: For any \(k\in\mathbb{N}\), define a map \(\delta_{k}:\boldsymbol{D}\to\mathbb{R}\) with

\[\delta_{k}(y):=\frac{1}{\frac{\boldsymbol{a}_{k}(y)}{\boldsymbol{l}}+1}\sum_{j=0}^{\boldsymbol{a}_{k}(y)/\boldsymbol{l}}\delta(S^{\boldsymbol{l}j}y).\]

It is clear that \(\delta_{k}\) is Borel measurable, since \(\boldsymbol{a}_{k}:\boldsymbol{D}\to\mathbb{Z}_{+}\) is Borel measurable. Since \(\lim_{k\to+\infty}\delta_{k}(y)=\delta^{*}(y)\geqslant 3\boldsymbol{c}_{0}\) for any \(y\in\boldsymbol{D}\), one has that

\[\boldsymbol{D}=\bigcup_{i=1}^{\infty}\bigcap_{k=i}^{\infty}\boldsymbol{D}_{k}, \tag{4.6}\]

where \(\boldsymbol{D}_{k}:=\{y\in\boldsymbol{D}:\delta_{k}(y)\geqslant\boldsymbol{c}_{0}\}\).
Define a map \(\widetilde{\mathbf{N}}_{0}:\boldsymbol{D}\to\mathbb{N}\) such that \(\widetilde{\mathbf{N}}_{0}(y)\) is the smallest positive integer \(n\) satisfying \(y\in\bigcap_{k=n}^{\infty}\boldsymbol{D}_{k}\). For any \(n\in\mathbb{N}\), it is clear that

\[\{y\in\boldsymbol{D}:\widetilde{\mathbf{N}}_{0}(y)=n\}=\left(\bigcap_{k=n}^{\infty}\boldsymbol{D}_{k}\right)\bigcap\left(\boldsymbol{D}\setminus\bigcap_{k=n-1}^{\infty}\boldsymbol{D}_{k}\right),\]

where \(\boldsymbol{D}_{0}:=\boldsymbol{D}\). It follows that \(\widetilde{\mathbf{N}}_{0}:\boldsymbol{D}\to\mathbb{N}\) is Borel measurable.

For each \(k\in\mathbb{N}\) and \(z\in\boldsymbol{E}\), one has that

\[\log|\mathcal{R}_{k}(z)| \geqslant H_{\mu_{z}}\left(\bigvee_{j=1}^{k}T^{-\boldsymbol{b}_{j}(z)}\alpha\right)\]
\[=H_{(T^{-\tau(z)})_{*}\mu_{S^{\tau(z)}z}}\left(\bigvee_{j=1}^{k}T^{-\boldsymbol{a}_{j}(S^{\tau(z)}z)-\tau(z)}\alpha\right)\]
\[=H_{\mu_{S^{\tau(z)}z}}\left(\bigvee_{j=1}^{k}T^{-\boldsymbol{a}_{j}(S^{\tau(z)}z)}\alpha\right)\]
\[=H_{\mu_{S^{\tau(z)}z}}\left(T^{-\boldsymbol{a}_{1}(S^{\tau(z)}z)}\alpha|\bigvee_{j=2}^{k}T^{-\boldsymbol{a}_{j}(S^{\tau(z)}z)}\alpha\right)+H_{\mu_{S^{\tau(z)}z}}\left(\bigvee_{j=2}^{k}T^{-\boldsymbol{a}_{j}(S^{\tau(z)}z)}\alpha\right)\]
\[=H_{\mu_{S^{\boldsymbol{b}_{1}(z)}z}}\left(\alpha|\bigvee_{j=2}^{k}T^{-(\boldsymbol{a}_{j}(S^{\tau(z)}z)-\boldsymbol{a}_{1}(S^{\tau(z)}z))}\alpha\right)+H_{\mu_{S^{\tau(z)}z}}\left(\bigvee_{j=2}^{k}T^{-\boldsymbol{a}_{j}(S^{\tau(z)}z)}\alpha\right)\]
\[\geqslant h_{\mu_{S^{\boldsymbol{b}_{1}(z)}z}}(T^{\boldsymbol{l}},\alpha)+H_{\mu_{S^{\tau(z)}z}}\left(\bigvee_{j=2}^{k}T^{-\boldsymbol{a}_{j}(S^{\tau(z)}z)}\alpha\right)\]
\[\cdots\]
\[\geqslant\sum_{j=1}^{k}h_{\mu_{S^{\boldsymbol{b}_{j}(z)}z}}(T^{\boldsymbol{l}},\alpha).\]

Therefore, for any \(z\in\boldsymbol{E}\) and \(k\geqslant\widetilde{\mathbf{N}}_{0}(S^{\tau(z)}z)\), one has that

\[\log|\mathcal{R}_{k}(z)| \geqslant\sum_{j=1}^{k}h_{\mu_{S^{\boldsymbol{b}_{j}(z)}z}}(T^{\boldsymbol{l}},\alpha)\]
\[=k\log(\boldsymbol{r}-1)+\sum_{j=0}^{\boldsymbol{a}_{k}(S^{\tau(z)}z)/\boldsymbol{l}}\big{(}h_{\mu_{S^{\boldsymbol{l}j+\tau(z)}z}}(T^{\boldsymbol{l}},\alpha)-\log(\boldsymbol{r}-1)\big{)}1_{\boldsymbol{A}}(S^{\boldsymbol{l}j+\tau(z)}z)\]
\[=k\log(\boldsymbol{r}-1)+\sum_{j=0}^{\boldsymbol{a}_{k}(S^{\tau(z)}z)/\boldsymbol{l}}\delta(S^{\boldsymbol{l}j+\tau(z)}z)\]
\[\geqslant k\log(\boldsymbol{r}-1)+\Big{(}\frac{\boldsymbol{a}_{k}(S^{\tau(z)}z)}{\boldsymbol{l}}+1\Big{)}\boldsymbol{c}_{0}\]
\[\geqslant k\log(\boldsymbol{r}-1)+k\boldsymbol{c}_{0}.\]

All in all, Claim 4.7 holds by letting \(\widetilde{\mathbf{N}}_{0}:\boldsymbol{E}\to\mathbb{N}\) be the map \(z\mapsto\widetilde{\mathbf{N}}_{0}(S^{\tau(z)}z)\), where the map on the right-hand side is the one defined on \(\boldsymbol{D}\) above.

**Step 3**: In this step, we mainly use the combinatorial lemma (Lemma 4.2) to obtain a sequence of measurable hitting times such that the elements of \(\{B_{1},\cdots,B_{\boldsymbol{r}}\}\) can be hit freely along these hitting times.
That is:

**Claim 4.8**.: There exists a positive constant \(\boldsymbol{b}\), a sequence \(\{\widetilde{\mathbf{N}}_{n}\}_{n\in\mathbb{N}}\) of strictly increasing Borel measurable maps \(\widetilde{\mathbf{N}}_{n}:\boldsymbol{E}\to\mathbb{Z}_{+}\), and a sequence \(\{\widetilde{\gamma}_{n}\}_{n\in\mathbb{N}}\) of Borel measurable maps \(\widetilde{\gamma}_{n}:\boldsymbol{E}\to\{0,1\}^{\mathbb{Z}_{+}}\) such that for each \(z\in\boldsymbol{E}\) and \(n\in\mathbb{N}\),

(a) \(\widehat{\widetilde{\gamma}}_{n}(z)\subset\{0,1,\ldots,\widetilde{\mathbf{N}}_{n}(z)-1\}\) and \(|\widehat{\widetilde{\gamma}}_{n}(z)|\geqslant\boldsymbol{b}\widetilde{\mathbf{N}}_{n}(z)\);

(b) for any \(s\in\{1,\ldots,\boldsymbol{r}\}^{\widehat{\widetilde{\gamma}}_{n}(z)}\), one has that

\[\mu_{z}\left(\pi_{1}^{-1}z\cap\bigcap_{j\in\widehat{\widetilde{\gamma}}_{n}(z)}T^{-j}B_{s(j)}\right)>0.\]

Proof of Claim 4.8.: Recall that \(\boldsymbol{r}\) is a positive integer and \(\boldsymbol{c}_{0}\) is a positive constant, both defined in **Step 1**, and that for any \(z\in\boldsymbol{E}\), \(\mathfrak{b}(z)\) is the subset of \(\mathbb{Z}_{+}\) defined in **Step 1**. Let \(\boldsymbol{c}\) be the positive constant obtained by applying Lemma 4.2 to \(\boldsymbol{r}\) and \(\boldsymbol{\lambda}:=2^{\boldsymbol{c}_{0}}\).

Combining (4.4) with an argument similar to that in the proof of Claim 4.7, there exists a Borel measurable map \(\widetilde{\mathbf{N}}_{1}:\boldsymbol{E}\to\mathbb{Z}_{+}\) such that for any \(z\in\boldsymbol{E}\), one has that \(\widetilde{\mathbf{N}}_{1}(z)\geqslant 1/\boldsymbol{c}\), and for each \(m\geqslant\widetilde{\mathbf{N}}_{1}(z)\),

\[|\mathfrak{b}(z)\cap\{0,1,\ldots,m-1\}|\geqslant\boldsymbol{c}_{1}m>\widetilde{\mathbf{N}}_{0}(z), \tag{4.7}\]

where \(\widetilde{\mathbf{N}}_{0}:\boldsymbol{E}\to\mathbb{Z}_{+}\) is the Borel measurable map defined in Claim 4.7 and \(\boldsymbol{c}_{1}\) is the positive constant defined in (4.3). For each \(n\in\mathbb{N}\), define a map \(\widetilde{\mathbf{N}}_{n}:\boldsymbol{E}\to\mathbb{N}\) with \(\widetilde{\mathbf{N}}_{n}(z):=\widetilde{\mathbf{N}}_{1}(z)+n-1\). Denoting

\[\check{\mathbf{N}}_{n}(z):=|\mathfrak{b}(z)\cap\{0,1,\ldots,\widetilde{\mathbf{N}}_{n}(z)-1\}|, \tag{4.8}\]

then \(\check{\mathbf{N}}_{n}\) is Borel measurable, since \(\{\boldsymbol{b}_{n}\}_{n\in\mathbb{N}}\) is a sequence of Borel measurable maps \(\boldsymbol{b}_{n}:\boldsymbol{E}\to\mathbb{Z}_{+}\).

Given \(n\in\mathbb{N}\) and \(\mathcal{R}\subset\{1,\cdots,\boldsymbol{r}\}^{\{1,2,\ldots,n\}}\) with \(|\mathcal{R}|\geqslant\big{(}(\boldsymbol{r}-1)\boldsymbol{\lambda}\big{)}^{n}\), applying Lemma 4.2 to \(\boldsymbol{r}\), \(\boldsymbol{\lambda}\) and \(\boldsymbol{c}\), we can choose a set

\[J(n,\mathcal{R})\subset\{1,\ldots,n\} \tag{4.9}\]

with \(|J(n,\mathcal{R})|\geqslant\boldsymbol{c}n\) such that for any \(s\in\{1,\ldots,\boldsymbol{r}\}^{J(n,\mathcal{R})}\), there exists \(\check{s}\in\mathcal{R}\) with \(s(j)=\check{s}(j)\) for each \(j\in J(n,\mathcal{R})\).
By Claim 4.7 and (4.7), one has that for \(z\in\boldsymbol{E}\),

\[\mathcal{R}_{\check{\mathbf{N}}_{n}(z)}(z)\subset\{1,\cdots,\boldsymbol{r}\}^{\{1,\cdots,\check{\mathbf{N}}_{n}(z)\}}\quad\text{and}\quad|\mathcal{R}_{\check{\mathbf{N}}_{n}(z)}(z)|\geqslant\big{(}(\boldsymbol{r}-1)\boldsymbol{\lambda}\big{)}^{\check{\mathbf{N}}_{n}(z)}.\]

Therefore, there exists a subset2

Footnote 2: For any \(z_{1}\neq z_{2}\in\boldsymbol{E}\), if \(\check{\mathbf{N}}_{n}(z_{1})=\check{\mathbf{N}}_{n}(z_{2})\) and \(\mathcal{R}_{\check{\mathbf{N}}_{n}(z_{1})}(z_{1})=\mathcal{R}_{\check{\mathbf{N}}_{n}(z_{2})}(z_{2})\), we require that \(J\big{(}\check{\mathbf{N}}_{n}(z_{1}),\mathcal{R}_{\check{\mathbf{N}}_{n}(z_{1})}(z_{1})\big{)}=J\big{(}\check{\mathbf{N}}_{n}(z_{2}),\mathcal{R}_{\check{\mathbf{N}}_{n}(z_{2})}(z_{2})\big{)}\).

\[J\big{(}\check{\mathbf{N}}_{n}(z),\mathcal{R}_{\check{\mathbf{N}}_{n}(z)}(z)\big{)}\subset\{1,\cdots,\check{\mathbf{N}}_{n}(z)\} \tag{4.10}\]

with \(|J\big{(}\check{\mathbf{N}}_{n}(z),\mathcal{R}_{\check{\mathbf{N}}_{n}(z)}(z)\big{)}|\geqslant\boldsymbol{c}\check{\mathbf{N}}_{n}(z)\geqslant 1\) such that for any \(s\in\{1,\ldots,\boldsymbol{r}\}^{J\big{(}\check{\mathbf{N}}_{n}(z),\mathcal{R}_{\check{\mathbf{N}}_{n}(z)}(z)\big{)}}\),

\[\mu_{z}\left(\pi_{1}^{-1}z\cap\bigcap_{j\in J\big{(}\check{\mathbf{N}}_{n}(z),\mathcal{R}_{\check{\mathbf{N}}_{n}(z)}(z)\big{)}}T^{-\boldsymbol{b}_{j}(z)}B_{s(j)}\right)>0.\]

According to the correspondence (4.1), we can define a map

\[\underline{\gamma}_{n}:\boldsymbol{E}\to\{0,1\}^{\mathbb{Z}_{+}}\quad\text{with}\quad\widehat{\underline{\gamma}}_{n}(z):=J\big{(}\check{\mathbf{N}}_{n}(z),\mathcal{R}_{\check{\mathbf{N}}_{n}(z)}(z)\big{)}.\]

Fix \(n\in\mathbb{N}\). Now, we are going to prove that \(\underline{\gamma}_{n}:\boldsymbol{E}\to\{0,1\}^{\mathbb{Z}_{+}}\) is Borel measurable. Note that the image of \(\underline{\gamma}_{n}\) contains at most countably many points in \(\{0,1\}^{\mathbb{Z}_{+}}\) by (4.10). Hence, we only need to prove that for any finite subset \(J\) of \(\mathbb{Z}_{+}\),

\[\{z\in\boldsymbol{E}:\widehat{\underline{\gamma}}_{n}(z)=J\} \tag{4.11}\]

is a measurable subset of \(\boldsymbol{E}\). It is sufficient to prove that, for any \(\tilde{n}\in\mathbb{N}\) and \(\mathcal{R}\subset\{1,\ldots,\boldsymbol{r}\}^{\{1,\ldots,\tilde{n}\}}\),

\[\{z\in\boldsymbol{E}:\check{\mathbf{N}}_{n}(z)=\tilde{n}\text{ and }\mathcal{R}_{\check{\mathbf{N}}_{n}(z)}(z)=\mathcal{R}\}\]
\[=\{z\in\boldsymbol{E}:\check{\mathbf{N}}_{n}(z)=\tilde{n}\}\bigcap\{z\in\boldsymbol{E}:\mathcal{R}_{\tilde{n}}(z)=\mathcal{R}\}\]
\[=\{z\in\boldsymbol{E}:\check{\mathbf{N}}_{n}(z)=\tilde{n}\}\bigcap\left(\bigcap_{s\in\mathcal{R}}\left\{z\in\boldsymbol{E}:\mu_{z}\Big{(}\bigcap_{j=1}^{\tilde{n}}T^{-\boldsymbol{b}_{j}(z)}B_{s(j)}\Big{)}>0\right\}\right)\]
\[\bigcap\left(\bigcap_{s\in\{1,\ldots,\boldsymbol{r}\}^{\{1,\ldots,\tilde{n}\}}\setminus\mathcal{R}}\left\{z\in\boldsymbol{E}:\mu_{z}\Big{(}\bigcap_{j=1}^{\tilde{n}}T^{-\boldsymbol{b}_{j}(z)}B_{s(j)}\Big{)}=0\right\}\right)\]

is a Borel measurable subset of \(Y\).
Since for any \(s\in\mathcal{R}\),

\[\left\{z\in\boldsymbol{E}:\mu_{z}\Big{(}\bigcap_{j=1}^{\tilde{n}}T^{-\boldsymbol{b}_{j}(z)}B_{s(j)}\Big{)}>0\right\}=\bigcup_{0\leqslant b_{1}<\cdots<b_{\tilde{n}}}\left(\{z\in\boldsymbol{E}:\boldsymbol{b}_{j}(z)=b_{j}\text{ for }j=1,\ldots,\tilde{n}\}\cap\left\{z\in\boldsymbol{E}:\mu_{z}\Big{(}\bigcap_{j=1}^{\tilde{n}}T^{-b_{j}}B_{s(j)}\Big{)}>0\right\}\right)\]

and for any \(s\in\{1,\ldots,\boldsymbol{r}\}^{\{1,\ldots,\tilde{n}\}}\setminus\mathcal{R}\),

\[\left\{z\in\boldsymbol{E}:\mu_{z}\Big{(}\bigcap_{j=1}^{\tilde{n}}T^{-\boldsymbol{b}_{j}(z)}B_{s(j)}\Big{)}=0\right\}=\bigcup_{0\leqslant b_{1}<\cdots<b_{\tilde{n}}}\left(\{z\in\boldsymbol{E}:\boldsymbol{b}_{j}(z)=b_{j}\text{ for }j=1,\ldots,\tilde{n}\}\cap\left\{z\in\boldsymbol{E}:\mu_{z}\Big{(}\bigcap_{j=1}^{\tilde{n}}T^{-b_{j}}B_{s(j)}\Big{)}=0\right\}\right)\]

both are Borel measurable subsets of \(Y\), \(\underline{\gamma}_{n}\) is Borel measurable.

For each \(n\in\mathbb{N}\), define \(\widetilde{\gamma}_{n}:\boldsymbol{E}\to\{0,1\}^{\mathbb{Z}_{+}}\) with

\[\widehat{\widetilde{\gamma}}_{n}(z)=\{\boldsymbol{b}_{j}(z):j\in\widehat{\underline{\gamma}}_{n}(z)\}.\]

Given \(\tilde{l}\in\mathbb{N}\) and a finite subset \(\tilde{J}=\{\tilde{j}_{1}<\tilde{j}_{2}<\ldots<\tilde{j}_{\tilde{l}}\}\) of \(\mathbb{Z}_{+}\), one has that

\[\{z\in\boldsymbol{E}:\widehat{\widetilde{\gamma}}_{n}(z)=\tilde{J}\}=\bigcup_{0\leqslant j_{1}<\cdots<j_{\tilde{l}}}\left(\left\{z\in\boldsymbol{E}:\widehat{\underline{\gamma}}_{n}(z)=\{j_{1},j_{2},\ldots,j_{\tilde{l}}\}\right\}\cap\{z\in\boldsymbol{E}:\boldsymbol{b}_{j_{i}}(z)=\tilde{j}_{i}:i=1,\ldots,\tilde{l}\}\right)\]

is a Borel measurable subset of \(Y\). It follows that \(\widetilde{\gamma}_{n}\) is Borel measurable. Note that for any \(z\in\boldsymbol{E}\),

\[\widehat{\widetilde{\gamma}}_{n}(z)=\{\boldsymbol{b}_{j}(z):j\in\widehat{\underline{\gamma}}_{n}(z)\}=\{\boldsymbol{b}_{j}(z):j\in J\big{(}\check{\mathbf{N}}_{n}(z),\mathcal{R}_{\check{\mathbf{N}}_{n}(z)}(z)\big{)}\}\subset\mathfrak{b}(z)\cap\{0,1,\ldots,\widetilde{\mathbf{N}}_{n}(z)-1\},\]

and hence, by (4.7), (4.8) and (4.10),

\[|\widehat{\widetilde{\gamma}}_{n}(z)|=|J\big{(}\check{\mathbf{N}}_{n}(z),\mathcal{R}_{\check{\mathbf{N}}_{n}(z)}(z)\big{)}|\geqslant\boldsymbol{c}\check{\mathbf{N}}_{n}(z)\geqslant\boldsymbol{c}\boldsymbol{c}_{1}\widetilde{\mathbf{N}}_{n}(z).\]

Therefore, properties (a) and (b) of Claim 4.8 hold with \(\boldsymbol{b}:=\boldsymbol{c}\boldsymbol{c}_{1}\), where (b) follows from the display below (4.10) after reindexing \(j\mapsto\boldsymbol{b}_{j}(z)\). This proves Claim 4.8.

**Step 4**: Let \(\widehat{\pi}_{2}:\Omega\to Y\) be the Borel measurable map obtained by applying Lemma 4.3 to \(\pi_{2}\), and define, for \(\mathbb{P}\)-a.s. \(\omega\in\Omega\),

\[\mathbf{N}_{n}(\omega):=\widetilde{\mathbf{N}}_{n}\big{(}\widehat{\pi}_{2}(\omega)\big{)}\quad\text{and}\quad\gamma_{n}(\omega):=\widetilde{\gamma}_{n}\big{(}\widehat{\pi}_{2}(\omega)\big{)}.\]

Then \(\{\mathbf{N}_{n}\}_{n\in\mathbb{N}}\) is a sequence of strictly increasing Borel measurable maps, and property (a) of Lemma 4.5 follows from (a) of Claim 4.8. For (b), fix \(n\in\mathbb{N}\), \(\mathbb{P}\)-a.s. \(\omega\in\Omega\) and \(s\in\{1,2\}^{\widehat{\gamma}_{n}(\omega)}\). By (b) of Claim 4.8,

\[\mu_{\widehat{\pi}_{2}(\omega)}\left(\pi_{1}^{-1}\big{(}\widehat{\pi}_{2}(\omega)\big{)}\cap\bigcap_{j\in\widehat{\gamma}_{n}(\omega)}T^{-j}B_{s(j)}\right)>0,\]

so we can choose

\[(\omega_{s},x_{s})\in\pi_{1}^{-1}\Big{(}\widehat{\pi}_{2}(\omega)\Big{)}\cap\bigcap_{j\in\widehat{\gamma}_{n}(\omega)}T^{-j}B_{s(j)}. \tag{4.13}\]

Noting that \(\pi_{2}\circ\widehat{\pi}_{2}(\omega)=\omega\) and \(\pi_{\Omega}(\omega_{s},x_{s})=\pi_{2}\circ\pi_{1}(\omega_{s},x_{s})=\pi_{2}\big{(}\widehat{\pi}_{2}(\omega)\big{)}\), then \(\omega=\omega_{s}\). It follows that \(\pi_{1}(\omega,x_{s})=\widehat{\pi}_{2}(\omega)\in\boldsymbol{E}\) and \(S^{j}(\pi_{1}(\omega,x_{s}))\in\boldsymbol{A}\) for any \(j\in\widehat{\gamma}_{n}(\omega)=\widehat{\widetilde{\gamma}}_{n}(\widehat{\pi}_{2}(\omega))\). Therefore,

\[(\theta^{j}\omega,\Phi^{j}_{\omega}(x_{s}))=T^{j}(\omega,x_{s})\in B_{s(j)}\cap\pi_{1}^{-1}\boldsymbol{A}\subset\Omega\times U_{s(j)},\]

which implies that \(\Phi^{j}_{\omega}(x_{s})\in U_{s(j)}\) for any \(j\in\widehat{\gamma}_{n}(\omega)\). This finishes the proof of Lemma 4.5.

## 5. Proof of Theorem 1.2

Under the setting of Theorem 4.4, the collection of the hitting freedoms at the fibers induces a product system, which we view as a trivial RDS. By the Krylov-Bogolyubov theorem for RDS, we prove that GSNS has full-horseshoes.

### Krylov-Bogolyubov theorem in RDSs

In this subsection, we mainly review the narrow topology on probability measures and the Krylov-Bogolyubov theorem on an invariant random compact set for a continuous RDS.
Throughout this subsection we assume, for convenience, that \(M\) is a compact metric space, \(\mathscr{B}_{M}\) is the Borel \(\sigma\)-algebra of \(M\), \((\Omega,\mathscr{F},\mathbb{P},\theta)\) is a measure-preserving dynamical system on a Polish probability space, and \(\mathscr{F}_{\mathbb{P}}\) is the completion of \(\mathscr{F}\) with respect to \(\mathbb{P}\). Denote by \(C_{\Omega}(M)\) the collection of functions \(h:\Omega\times M\to\mathbb{R}\) satisfying

1. for all \(x\in M\), \(\omega\mapsto h(\omega,x)\) is measurable from \((\Omega,\mathscr{F}_{\mathbb{P}})\) to \((\mathbb{R},\mathscr{B}_{\mathbb{R}})\);
2. for all \(\omega\in\Omega\), \(x\mapsto h(\omega,x)\) is continuous;
3. \(\int_{\Omega}\sup_{x\in M}|h(\omega,x)|d\mathbb{P}(\omega)<+\infty\).

It is pointed out that if the mapping \(h:\Omega\times M\to\mathbb{R}\) satisfies (1) and (2), then \(h\) is measurable from \((\Omega\times M,\mathscr{F}_{\mathbb{P}}\otimes\mathscr{B}_{M})\) to \((\mathbb{R},\mathscr{B}_{\mathbb{R}})\) (for example, see [9, Lemma 1.1]).

Recall that \(\mathcal{P}_{\mathbb{P}}(\Omega\times M)\) is the space of probability measures on \((\Omega\times M,\mathscr{F}_{\mathbb{P}}\otimes\mathscr{B}_{M})\) with the marginal \(\mathbb{P}\). The _narrow topology_ of \(\mathcal{P}_{\mathbb{P}}(\Omega\times M)\) is defined by the topology base given by the collection of all sets of the form

\[U_{h_{1},\ldots,h_{n}}(\check{\nu},\delta)=\{\nu\in\mathcal{P}_{\mathbb{P}}(\Omega\times M):|\int_{\Omega\times M}h_{k}\mathrm{d}\check{\nu}-\int_{\Omega\times M}h_{k}\mathrm{d}\nu|<\delta,k=1,\ldots,n\},\]

where \(n\in\mathbb{N}\), \(h_{1},\ldots,h_{n}\in C_{\Omega}(M)\), \(\check{\nu}\in\mathcal{P}_{\mathbb{P}}(\Omega\times M)\) and \(\delta>0\).

A _random compact set_ \(K\) of \(M\) on the measurable space \((\Omega,\mathscr{F}_{\mathbb{P}})\) is a set-valued map from \(\Omega\) to \(2^{M}\), the collection of all subsets of \(M\), with \(\omega\mapsto K(\omega)\) satisfying that

1. \(K(\omega)\) is a non-empty compact subset for any \(\omega\in\Omega\);
2. for any \(x\in M\), \(\omega\mapsto d_{M}(x,K(\omega))\) is a measurable map from \((\Omega,\mathscr{F}_{\mathbb{P}})\) to \((\mathbb{R},\mathscr{B}_{\mathbb{R}})\), where \(d_{M}\) is a compatible metric on \(M\).

Next, we review two lemmas about the Portmanteau theorem in RDSs and the equivalent characterization of a random compact set, respectively.

**Lemma 5.1** ([9, Theorem 3.17]).: Let \(\{\widetilde{\nu}_{n}\}_{n\in\mathbb{N}}\) be a sequence of \(\mathcal{P}_{\mathbb{P}}(\Omega\times M)\). Then \(\{\widetilde{\nu}_{n}\}_{n\in\mathbb{N}}\) converges to a measure \(\widetilde{\nu}\in\mathcal{P}_{\mathbb{P}}(\Omega\times M)\) in the narrow topology if and only if

\[\limsup_{n\to+\infty}\widetilde{\nu}_{n}(K)\leqslant\widetilde{\nu}(K)\]

for any random compact set \(K\) of \(M\) on \((\Omega,\mathscr{F}_{\mathbb{P}})\).

**Lemma 5.2** ([2, Proposition 1.6.2 and Proposition 1.6.3]).: Let \(K:\Omega\to 2^{M}\) be a set-valued map taking values in the subspace of non-empty compact subsets of \(M\). Then the following statements are equivalent:

1. \(K\in\mathscr{F}_{\mathbb{P}}\otimes\mathscr{B}_{M}\);
2. \(K\) is a random compact set of \(M\) on \((\Omega,\mathscr{F}_{\mathbb{P}})\).

Let \(F\) be a continuous RDS on \(M\) over \((\Omega,\mathscr{F}_{\mathbb{P}},\mathbb{P},\theta)\).
A random compact set \(K\) of \(M\) on \((\Omega,\mathscr{F}_{\mathbb{P}})\) is said to be \(F\)-forward invariant if \(F_{\omega}^{n}\big{(}K(\omega)\big{)}\subset K(\theta^{n}\omega)\) for any \(n\in\mathbb{N}\) and \(\omega\in\Omega\). According to [9, Corollary 6.13], one has the Krylov-Bogolyubov theorem:

**Lemma 5.3**.: Let \(F\) be a continuous RDS on \(M\) over \((\Omega,\mathscr{F}_{\mathbb{P}},\mathbb{P},\theta)\) and \(K\) be an \(F\)-forward invariant random compact set of \(M\) on \((\Omega,\mathscr{F}_{\mathbb{P}})\). Assume that \(\{\gamma_{n}\}_{n\in\mathbb{N}}\) is a sequence of measurable maps \(\gamma_{n}:(\Omega,\mathscr{F}_{\mathbb{P}})\to(M,\mathscr{B}_{M})\) satisfying \(\gamma_{n}(\omega)\in K(\omega)\) for \(\mathbb{P}\)-a.s. \(\omega\in\Omega\), and \(\{\mathbf{N}_{n}\}_{n\in\mathbb{N}}\) is a sequence of strictly increasing measurable maps \(\mathbf{N}_{n}:(\Omega,\mathscr{F}_{\mathbb{P}})\to(\mathbb{Z}_{+},\mathscr{B}_{\mathbb{Z}_{+}})\). For the sequence of probability measures \(\{\widetilde{\nu}_{n}\}_{n\in\mathbb{N}}\) on \((\Omega\times M,\mathscr{F}_{\mathbb{P}}\otimes\mathscr{B}_{M})\) defined as

\[\widetilde{\nu}_{n}=\int_{\Omega}\frac{1}{\mathbf{N}_{n}(\omega)}\sum_{i=0}^{\mathbf{N}_{n}(\omega)-1}\delta_{\big{(}\theta^{i}\omega,F_{\omega}^{i}\gamma_{n}(\omega)\big{)}}\mathrm{d}\mathbb{P}(\omega),\]

there exists a strictly increasing sequence \(\{n_{k}\}_{k\in\mathbb{N}}\) of \(\mathbb{N}\) such that \(\widetilde{\nu}:=\lim_{k\to+\infty}\widetilde{\nu}_{n_{k}}\) is an invariant measure of the RDS \(F\), and \(\widetilde{\nu}(K)=1\).

### Proof of Theorem 1.2

In this subsection, we give the final proof of Theorem 1.2. The positive constant \(\epsilon_{0}\) in Theorem 1.2 is determined in Proposition 3.7. Recall the one-to-one correspondence between elements of \(\{0,1\}^{\mathbb{Z}_{+}}\) and subsets of \(\mathbb{Z}_{+}\):

\[u\in\{0,1\}^{\mathbb{Z}_{+}}\mapsto\widehat{u}=\{n\in\mathbb{Z}_{+}:u(n)=1\}.\]

Proof of Theorem 1.2.: Under the setting of Proposition 3.7, we are going to prove that the stochastic flow \(\Phi\) on \(\mathbb{R}^{d}\) over \((\Omega,\mathscr{F},\mathbb{P})\) has full-horseshoes. By Theorem 4.4, there exists a pair of non-empty disjoint compact subsets \(\{U_{1},U_{2}\}\) of \(\mathbb{R}^{d}\), a constant \(\boldsymbol{b}>0\), a \(\mathbb{P}\)-full measure subset \(\Omega^{1}\subset\Omega\), a sequence \(\{\mathbf{N}_{n}\}_{n\in\mathbb{N}}\) of strictly increasing Borel measurable maps \(\mathbf{N}_{n}:\Omega\to\mathbb{Z}_{+}\), and a sequence \(\{\gamma_{n}\}_{n\in\mathbb{N}}\) of Borel measurable maps \(\gamma_{n}:\Omega\to\{0,1\}^{\mathbb{Z}_{+}}\) such that for any \(n\in\mathbb{N}\) and any \(\omega\in\Omega^{1}\),

1. \(\widehat{\gamma}_{n}(\omega)\subset\{0,1,\ldots,\mathbf{N}_{n}(\omega)-1\}\) and \(|\widehat{\gamma}_{n}(\omega)|\geqslant\boldsymbol{b}\mathbf{N}_{n}(\omega)\);
2. for any \(s\in\{1,2\}^{\widehat{\gamma}_{n}(\omega)}\), there exists an \(x_{s}\in\mathbb{R}^{d}\) with \(\Phi_{\omega}^{j}(x_{s})\in U_{s(j)}\) for any \(j\in\widehat{\gamma}_{n}(\omega)\).

Define a continuous RDS \(F\) on \(\{0,1\}^{\mathbb{Z}_{+}}\) over \((\Omega,\mathscr{F}_{\mathbb{P}},\mathbb{P},\theta)\) by setting \(F_{\omega}=\sigma\) as the left-shift on \(\{0,1\}^{\mathbb{Z}_{+}}\), i.e.
\[F:\mathbb{Z}_{+}\times\Omega\times\{0,1\}^{\mathbb{Z}_{+}}\to\{0,1\}^{\mathbb{Z}_{+}},\quad(n,\omega,u)\mapsto\sigma^{n}(u).\]

For any \(\omega\in\Omega\), denote

\[K(\omega):=\Big{\{}u\in\{0,1\}^{\mathbb{Z}_{+}}:\text{ for any }s\in\{1,2\}^{\widehat{u}},\text{ there exists }x_{s}\in\mathbb{R}^{d}\text{ such that }\Phi_{\omega}^{j}(x_{s})\in U_{s(j)}\text{ for each }j\in\widehat{u}\Big{\}}.\]

Here, we point out that we regard \(u=\boldsymbol{0}=(0,0,\cdots)\in\{0,1\}^{\mathbb{Z}_{+}}\) as an element of \(K(\omega)\) for any \(\omega\in\Omega\). Denote \(K:=\bigcup_{\omega\in\Omega}\{\omega\}\times K(\omega)\). Next, we prove that a slight adjustment of \(K\) is an \(F\)-forward invariant random compact set; namely:

**Lemma 5.4**.: There exists an \(F\)-forward invariant random compact set \(\widetilde{K}\) of \(\{0,1\}^{\mathbb{Z}_{+}}\) on \((\Omega,\mathscr{F}_{\mathbb{P}})\) such that \(\widetilde{K}(\omega)=K(\omega)\) for \(\mathbb{P}\)-a.s. \(\omega\in\Omega\).

Proof.: Fix \((\omega,u)\in K\). If \(u=\mathbf{0}\), then it is clear that \(\sigma(u)=\mathbf{0}\in K(\theta\omega)\). If \(u\neq\mathbf{0}\), note that \(\{j+1:j\in\widehat{\sigma(u)}\}\subset\widehat{u}\). For any \(s\in\{1,2\}^{\widehat{\sigma(u)}}\), there exists \(\widetilde{s}\in\{1,2\}^{\widehat{u}}\) such that \(\widetilde{s}(j+1)=s(j)\) for each \(j\in\widehat{\sigma(u)}\). Since \(u\in K(\omega)\), we can find \(x_{\widetilde{s}}\in\mathbb{R}^{d}\) such that

\[\Phi_{\omega}^{j+1}(x_{\widetilde{s}})\in U_{\widetilde{s}(j+1)}\quad\text{for each }j\in\widehat{\sigma(u)}.\]

Then,

\[\Phi_{\theta\omega}^{j}\big{(}\Phi_{\omega}^{1}x_{\widetilde{s}}\big{)}=\Phi_{\omega}^{j+1}(x_{\widetilde{s}})\in U_{\widetilde{s}(j+1)}=U_{s(j)}\quad\text{for each }j\in\widehat{\sigma(u)}.\]

Therefore, \((\theta\omega,\sigma(u))\in K\), which implies that \(K\) is \(F\)-forward invariant. The remainder of the proof of this lemma is divided into two steps.

**Step 1**: Let \(\Omega^{2}\) be a \(\mathbb{P}\)-full measure subset of \(\Omega^{1}\) such that for any \(l,m\in\mathbb{Z}_{+}\), the mapping \(\omega\mapsto\Phi_{\theta^{l}\omega}^{m}\) from \(\Omega^{2}\) to \(\text{Diff}^{\infty}(\mathbb{R}^{d})\) is a Borel measurable map. By Lusin's theorem (for example, see [21, (17.12) Theorem]), there exists a sequence of compact subsets \(\{\Omega_{n}\}_{n\in\mathbb{N}}\) of \(\Omega\) with \(\mathbb{P}(\Omega_{n})\geqslant 1-1/n\) such that \(\Omega_{n}\subset\Omega^{2}\) and

\[\Omega_{n}\times\mathbb{R}^{d}\to\mathbb{R}^{d},\quad(\omega,x)\mapsto\Phi_{\theta^{l}\omega}^{m}(x) \tag{5.1}\]

is continuous for any \(l,m\in\mathbb{N}\). In this step, we show that for each \(n\in\mathbb{N}\),

\[K_{n}=\bigcup_{\omega\in\Omega_{n}}\{\omega\}\times K(\omega)=K\cap(\Omega_{n}\times\{0,1\}^{\mathbb{Z}_{+}})\]

is a closed subset of \(\Omega\times\{0,1\}^{\mathbb{Z}_{+}}\). Moreover, for any \(\omega\in\Omega_{n}\), \(K(\omega)\) is a compact subset of \(\{0,1\}^{\mathbb{Z}_{+}}\).

Given a sequence \(\{(\omega_{i},u_{i})\}_{i\in\mathbb{N}}\) of \(K_{n}\) satisfying

\[(\omega,u):=\lim_{i\to+\infty}(\omega_{i},u_{i})\in\Omega\times\{0,1\}^{\mathbb{Z}_{+}},\]

we are going to show that \((\omega,u)\in K_{n}\). Since \(\omega=\lim_{i\to+\infty}\omega_{i}\in\Omega_{n}\), we only need to prove that \(u\in K(\omega)\). If \(\widehat{u}=\varnothing\), then \(u=\mathbf{0}\) and \((\omega,\mathbf{0})\in K_{n}\).
Otherwise, if \(\widehat{u}\neq\varnothing\), denote \(n_{u}=\min\{n\in\mathbb{Z}_{+}:n\in\widehat{u}\}\). Fixing \(\check{s}\in\{1,2\}^{\widehat{u}}\), there exists a strictly increasing sequence \(\{i_{k}\}_{k\in\mathbb{N}}\) of \(\mathbb{N}\) such that for any \(k\in\mathbb{N}\),

\[\widehat{u}_{i_{k}}\cap\{0,1,\ldots,n_{u}+k\}=\widehat{u}\cap\{0,1,\ldots,n_{u}+k\}.\]

Now for each \(k\in\mathbb{N}\), we can choose an \(s^{k}\in\{1,2\}^{\widehat{u}_{i_{k}}}\) such that

\[s^{k}(j)=\check{s}(j)\text{ for }j\in\widehat{u}\cap\{0,1,\ldots,n_{u}+k\}. \tag{5.2}\]

Since \((\omega_{i_{k}},u_{i_{k}})\in K_{n}\), there exists \(x_{s^{k}}\in\mathbb{R}^{d}\) such that \(\Phi_{\omega_{i_{k}}}^{j}(x_{s^{k}})\in U_{s^{k}(j)}\) for each \(j\in\widehat{u}_{i_{k}}\). By compactness of \(U_{\check{s}(n_{u})}\), without loss of generality, we assume that

\[\lim_{k\to+\infty}\Phi_{\omega_{i_{k}}}^{n_{u}}(x_{s^{k}})=x_{\check{s}}^{n_{u}}. \tag{5.3}\]

Since \(\Phi_{\omega}^{n_{u}}\) is a diffeomorphism on \(\mathbb{R}^{d}\), there is an \(x_{\check{s}}\in\mathbb{R}^{d}\) such that \(\Phi_{\omega}^{n_{u}}(x_{\check{s}})=x_{\check{s}}^{n_{u}}\). Therefore, for each \(j\in\widehat{u}\),

\[\Phi_{\omega}^{j}(x_{\check{s}}) =\Phi_{\theta^{n_{u}}\omega}^{j-n_{u}}(x_{\check{s}}^{n_{u}})\]
\[=\lim_{k\to+\infty}\Phi_{\theta^{n_{u}}\omega_{i_{k}}}^{j-n_{u}}\big{(}\Phi_{\omega_{i_{k}}}^{n_{u}}(x_{s^{k}})\big{)}\]
\[=\lim_{k\to+\infty}\Phi_{\omega_{i_{k}}}^{j}\big{(}x_{s^{k}}\big{)}\in U_{\check{s}(j)},\]

which implies that \(u\in K(\omega)\), i.e. \((\omega,u)\in K_{n}\). Hence, \(K_{n}\) is a closed subset of \(\Omega\times\{0,1\}^{\mathbb{Z}_{+}}\); in particular, \(K(\omega)\) is a compact subset of \(\{0,1\}^{\mathbb{Z}_{+}}\) for any \(\omega\in\Omega_{n}\).

**Step 2**: In this step, we prove that a slight adjustment of \(K\) is a measurable subset of \(\Omega\times\{0,1\}^{\mathbb{Z}_{+}}\). This provides the existence of \(\widetilde{K}\).

Denote \(\widetilde{\Omega}:=\bigcup_{n\in\mathbb{N}}\Omega_{n}\); then \(\Omega^{3}:=\bigcap_{j\in\mathbb{Z}_{+}}\theta^{-j}\widetilde{\Omega}\) is a \(\theta\)-forward invariant \(\mathbb{P}\)-full measure subset of \(\Omega^{2}\). Let

\[\widetilde{K}=\big{(}K\cap(\Omega^{3}\times\{0,1\}^{\mathbb{Z}_{+}})\big{)}\cup\big{(}(\Omega\setminus\Omega^{3})\times\{\mathbf{0}\}\big{)}.\]

Note that

* for any \(n\in\mathbb{N}\) and \(\omega\in\Omega\), \(F^{n}_{\omega}(\widetilde{K}(\omega))=\sigma^{n}(\widetilde{K}(\omega))\subset\widetilde{K}(\theta^{n}\omega)\). Hence \(\widetilde{K}\) is \(F\)-forward invariant;
* \(K(\omega)\) is a compact subset of \(\{0,1\}^{\mathbb{Z}_{+}}\) for any \(\omega\in\Omega^{3}\);
* since \(K_{n}\) is a closed subset of \(\Omega\times\{0,1\}^{\mathbb{Z}_{+}}\) for any \(n\in\mathbb{N}\), one has that

\[K\cap(\Omega^{3}\times\{0,1\}^{\mathbb{Z}_{+}}) =K\cap(\widetilde{\Omega}\times\{0,1\}^{\mathbb{Z}_{+}})\cap(\Omega^{3}\times\{0,1\}^{\mathbb{Z}_{+}})\]
\[=K\cap\big{(}\bigcup_{n\in\mathbb{N}}(\Omega_{n}\times\{0,1\}^{\mathbb{Z}_{+}})\big{)}\cap(\Omega^{3}\times\{0,1\}^{\mathbb{Z}_{+}})\]
\[=\bigcup_{n\in\mathbb{N}}(K\cap(\Omega_{n}\times\{0,1\}^{\mathbb{Z}_{+}}))\cap(\Omega^{3}\times\{0,1\}^{\mathbb{Z}_{+}})\]
\[=\bigcup_{n\in\mathbb{N}}K_{n}\cap(\Omega^{3}\times\{0,1\}^{\mathbb{Z}_{+}})\in\mathscr{F}_{\mathbb{P}}\otimes\mathscr{B}_{\{0,1\}^{\mathbb{Z}_{+}}}.\]

This finishes the proof of Lemma 5.4 by using Lemma 5.2. We now proceed with the proof of Theorem 1.2.
Define a sequence of probability measures \(\{\widetilde{\nu}_{n}\}_{n\in\mathbb{N}}\) on \((\Omega\times\{0,1\}^{\mathbb{Z}_{+}},\mathscr{F}_{\mathbb{P}}\otimes\mathscr{B}_{\{0,1\}^{\mathbb{Z}_{+}}})\) as follows,

\[\widetilde{\nu}_{n}=\int_{\Omega}\frac{1}{\mathbf{N}_{n}(\omega)}\sum_{i=0}^{\mathbf{N}_{n}(\omega)-1}\delta_{\big{(}\theta^{i}\omega,\sigma^{i}\gamma_{n}(\omega)\big{)}}d\mathbb{P}(\omega), \tag{5.4}\]

where \(\{\gamma_{n}\}_{n\in\mathbb{N}}\) is the sequence of Borel measurable maps \(\gamma_{n}:\Omega\to\{0,1\}^{\mathbb{Z}_{+}}\) and \(\{\mathbf{N}_{n}\}_{n\in\mathbb{N}}\) is the sequence of Borel measurable maps \(\mathbf{N}_{n}:\Omega\to\mathbb{N}\) given at the beginning of the proof of Theorem 1.2. It is clear that for any \(n\in\mathbb{N}\) and \(\mathbb{P}\)-a.s. \(\omega\in\Omega\), one has that \(\gamma_{n}(\omega)\in\widetilde{K}(\omega)\). By Lemma 5.3, there exists a strictly increasing sequence \(\{n_{k}\}_{k\in\mathbb{N}}\) of \(\mathbb{N}\) such that \(\widetilde{\nu}:=\lim_{k\to+\infty}\widetilde{\nu}_{n_{k}}\) is an invariant measure of the RDS \(F\) and \(\widetilde{\nu}(\widetilde{K})=1\). By Lemma 5.1, one has that

\[\lim_{k\to+\infty}\widetilde{\nu}_{n_{k}}([1])=\widetilde{\nu}([1]), \tag{5.5}\]

where we used that \([1]:=\Omega\times\{u\in\{0,1\}^{\mathbb{Z}_{+}}:u(0)=1\}\) and

\[[0]:=\big{(}\Omega\times\{0,1\}^{\mathbb{Z}_{+}}\big{)}\setminus[1]=\Omega\times\{u\in\{0,1\}^{\mathbb{Z}_{+}}:u(0)=0\}\]

both are random compact sets of \(\{0,1\}^{\mathbb{Z}_{+}}\) on \((\Omega,\mathscr{F}_{\mathbb{P}})\). Therefore,

\[\widetilde{\nu}([1]) =\lim_{k\to+\infty}\widetilde{\nu}_{n_{k}}([1])\]
\[=\lim_{k\to+\infty}\int_{\Omega}\frac{1}{\mathbf{N}_{n_{k}}(\omega)}\sum_{j=0}^{\mathbf{N}_{n_{k}}(\omega)-1}\delta_{\big{(}\theta^{j}\omega,\sigma^{j}\gamma_{n_{k}}(\omega)\big{)}}([1])d\mathbb{P}(\omega)\]
\[=\lim_{k\to+\infty}\int_{\Omega}\frac{|\{j\in\{0,1,\ldots,\mathbf{N}_{n_{k}}(\omega)-1\}:\big{(}\sigma^{j}\gamma_{n_{k}}(\omega)\big{)}(0)=1\}|}{\mathbf{N}_{n_{k}}(\omega)}d\mathbb{P}(\omega)\]
\[=\lim_{k\to+\infty}\int_{\Omega}\frac{|\{j\in\{0,1,\ldots,\mathbf{N}_{n_{k}}(\omega)-1\}:\big{(}\gamma_{n_{k}}(\omega)\big{)}(j)=1\}|}{\mathbf{N}_{n_{k}}(\omega)}d\mathbb{P}(\omega)\]
\[=\lim_{k\to+\infty}\int_{\Omega}\frac{|\widehat{\gamma}_{n_{k}}(\omega)|}{\mathbf{N}_{n_{k}}(\omega)}d\mathbb{P}(\omega)\geqslant\boldsymbol{b}.\]

By the ergodic decomposition (for example, see [12, Theorem 6.2]), \(\pi_{*}\widetilde{\nu}=\mathbb{P}\) and the fact that \((\Omega,\mathscr{F}_{\mathbb{P}},\mathbb{P},\theta)\) is ergodic, we know that there exists an invariant ergodic Borel probability measure \(\nu\) of the RDS \(F\) such that \(\nu([1])\geqslant\boldsymbol{b}\) and \(\nu(\widetilde{K})=1\). Let

\[G_{\nu}:=\left\{(\omega,u)\in\Omega\times\{0,1\}^{\mathbb{Z}_{+}}:\lim_{N\to+\infty}\frac{1}{N}\sum_{j=0}^{N-1}\delta_{(\theta^{j}\omega,\sigma^{j}u)}([1])=\nu([1])\right\}.\]

By Birkhoff's ergodic theorem, one has that \(\nu(G_{\nu})=1\). Then there exists a \(\mathbb{P}\)-full measure subset \(\Omega^{4}\) of \(\Omega^{3}\) such that for any \(\omega\in\Omega^{4}\),

\[\pi_{\Omega}^{-1}(\omega)\cap G_{\nu}\cap K=\pi_{\Omega}^{-1}(\omega)\cap G_{\nu}\cap\widetilde{K}\neq\varnothing,\]

where \(\pi_{\Omega}\) is the projection from \(\Omega\times\{0,1\}^{\mathbb{Z}_{+}}\) to \(\Omega\). For any \(\omega\in\Omega^{4}\), there exists \(u_{\omega}\in\{0,1\}^{\mathbb{Z}_{+}}\) such that \((\omega,u_{\omega})\in G_{\nu}\cap K\).
Letting \(J(\omega):=\widehat{u}_{\omega}=\{n\in\mathbb{Z}_{+}:u_{\omega}(n)=1\}\), one has that

\[\nu([1]) =\lim_{N\to+\infty}\frac{1}{N}\sum_{j=0}^{N-1}\delta_{(\theta^{j}\omega,\sigma^{j}u_{\omega})}([1])\]
\[=\lim_{N\to+\infty}\frac{1}{N}|\{j\in\{0,1,\ldots,N-1\}:(\sigma^{j}u_{\omega})(0)=1\}|\]
\[=\lim_{N\to+\infty}\frac{1}{N}|\{j\in\{0,1,\ldots,N-1\}:u_{\omega}(j)=1\}|\]
\[=\lim_{N\to+\infty}\frac{1}{N}|J(\omega)\cap\{0,1,\ldots,N-1\}|\geqslant\boldsymbol{b}.\]

By the definition of \(K\), for any \(s\in\{1,2\}^{J(\omega)}\), there exists an \(x_{s}\in\mathbb{R}^{d}\) with \(\Phi_{\omega}^{j}(x_{s})\in U_{s(j)}\) for any \(j\in J(\omega)\). All in all, this completes the proof of Theorem 1.2.

From the above proofs, it can be seen that the following holds.

**Proposition 5.5**.: For any stochastic flow \(\Phi\) of \(C^{2}\) diffeomorphisms on \(\mathbb{R}^{d}\) over the Wiener space \((\Omega,\mathscr{F},\mathbb{P})\) defined as in (1.5), if it satisfies the following hypotheses:

* \(\Phi\) admits a unique smooth stationary measure and has positive top Lyapunov exponent with respect to this stationary measure;
* **Assumption 1**-**Assumption 3** in Lemma 3.9 hold,

then the stochastic flow \(\Phi\) has full-horseshoes.
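Schematically, the argument for Theorem 1.2 developed in Sections 3-5 can be summarized by the chain of implications

\[\lambda_{1}>0\ \Longrightarrow\ h_{\varrho}(\Phi)\geqslant\lambda_{1}>0\ \Longrightarrow\ \text{measurable weak-horseshoes}\ \Longrightarrow\ \text{full-horseshoes},\]

where the first implication is Proposition 3.10 (via Pesin's entropy formula, Lemma 3.9), the second is Theorem 4.4, and the third is carried out in this section via the Krylov-Bogolyubov theorem (Lemma 5.3).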
2305.08855
Debunking Cantor: New Set-Theoretical and Logical Considerations
For more than a century, Cantor's theory of transfinite numbers has played a pivotal role in set theory, with ramifications that extend to many areas of mathematics. This article extends earlier findings with a fresh look at the critical facts of Cantor's theory: i) Cantor's widely renowned Diagonalization Argument (CDA) is fully refuted by a set of counter-examples that expose the fallacy of this proof. ii) The logical inconsistencies of CDA are revisited, exposing the shortcomings of CDA's implementation of the method of proof by contradiction. iii) The denumerability of the power set of the set of the natural numbers, P(N), is substantiated by a proof that takes full account of all the infinite subsets of N. Such a result confirms the denumerability of the set of the real numbers, R, and with it the countable nature of the continuum. iv) Given that the denumerable character of (probably) all infinite sets makes their comparison in terms of one-to-one correspondences a rather pointless exercise, a new concept of relative cardinality is introduced which facilitates a quantitative evaluation of their different magnitudes.
Juan A Perez
2023-03-14T15:29:33Z
http://arxiv.org/abs/2305.08855v1
# Debunking Cantor: New Set-Theoretical and Logical Considerations

###### Abstract.

For more than a century, Cantor's theory of transfinite numbers has played a pivotal role in set theory, with ramifications that extend to many areas of mathematics. This article extends earlier findings with a fresh look at the critical facts of Cantor's theory: - Cantor's widely renowned Diagonalization Argument (CDA) is fully refuted by a set of counter-examples that expose the fallacy of this proof. - The logical inconsistencies of CDA are revisited, exposing the shortcomings of CDA's implementation of the _reductio_ method of proof. - The denumerability of the power set of the set of the natural numbers, \(\mathcal{P}(\mathbb{N})\), is substantiated by a proof that takes full account of all the infinite subsets of \(\mathbb{N}\). Such a result confirms the denumerability of the set of the real numbers, \(\mathbb{R}\), and with it the countable nature of the continuum. - Given that the denumerable character of (probably) all infinite sets makes their comparison in terms of one-to-one correspondences a rather pointless exercise, a new concept of relative cardinality is introduced which facilitates a quantitative evaluation of their different magnitudes.

2000 Mathematics Subject Classification: Primary 03B10, 03E30, 03E50; Secondary 03F07, 03F40.

## 1. Introduction

A previous report [15] presented a detailed and critical evaluation of the various proofs that underpin Cantor's theory of transfinite numbers [7, 12, 16]. Cantor's famous Diagonalization Argument (CDA) was particularly singled out for analysis, alongside other proofs supporting the uncountable nature of the set of real numbers, \(\mathbb{R}\), and the power set of the set of natural numbers, \(\mathcal{P}(\mathbb{N})\). Those proofs underpin much of modern set theory, with far-reaching implications for most branches of mathematics. Consequently, their refutation (if correct) can be considered sufficiently important to merit further investigation. This article does precisely that, with a fresh look at the shortcomings of CDA, for which a number of counter-examples are described. The logical inadequacies of CDA are re-examined, reinforcing the previous analysis [15]. Furthermore, in order to confirm the denumerability of the power set of \(\mathbb{N}\), \(\mathcal{P}(\mathbb{N})\) (for which as many as three different proofs were already reported [15]), a new proof is described which takes into clear account all the infinite subsets of \(\mathbb{N}\). Since the filing of the original report, two other independent articles have reached the same conclusions, based on rationales that have much in common with our preceding findings [4, 10]. Our hope is that the new results presented here will further cement the inescapable conclusion that Cantorian mathematics needs to be expunged from the fabric of mathematical theory. The many implications for set theory and mathematical logic were extensively analysed before [15], so the interested reader is referred to the original material.

## 2. Counter-examples of Cantor's Diagonalization Argument

Cantor's Diagonalization Argument (CDA) [3, 5, 7, 16] sits at the heart of his whole construction of transfinite number theory. Over the years, the simplicity of this argument has made it a favourite of set theorists and logicians alike [17], so it has been adapted to a great number of proofs. Hence, a refutation of CDA cannot be taken lightly.
In order to analyse it in some detail in this and the following section, CDA is reproduced here, adapted for the set of infinite binary strings [3, 5, 7]:

**Theorem.** The set of infinite binary strings is uncountable.

_Proof_. Suppose that the set \(B\) of infinite binary strings is countable. Then we can list all the strings \(s_{n}\) in \(B\) as
\[s_{1}\,,\,s_{2}\,,\,s_{3}\,,\,\cdots\,,s_{n}\,,\,\cdots\]
with each string in \(B\) appearing as \(s_{n}\) for exactly one \(n\in\mathbb{N}\), \(n\geq 1\). We shall represent each string \(s_{n}\) as
\[s_{n}=a_{n,1}\,a_{n,2}\,a_{n,3}\,\cdots\,a_{n,n}\,\cdots\quad n\in\mathbb{N},\,n\geq 1\]
where each digit \(a_{n,j}\) takes the value "0" or "1". We can then picture the set of strings \(s_{n}\) written out in an array:
\[s_{1}=a_{1,1}\,a_{1,2}\,a_{1,3}\,\cdots\,a_{1,n}\,\cdots\]
\[s_{2}=a_{2,1}\,a_{2,2}\,a_{2,3}\,\cdots\,a_{2,n}\,\cdots\]
\[s_{3}=a_{3,1}\,a_{3,2}\,a_{3,3}\,\cdots\,a_{3,n}\,\cdots\]
\[\vdots\]
\[s_{n}=a_{n,1}\,a_{n,2}\,a_{n,3}\,\cdots\,a_{n,n}\,\cdots\]
\[\vdots\]
Now define an "antidiagonal" string \(s_{AD}=d_{1}\,d_{2}\,d_{3}\,\cdots\,d_{n}\,\cdots\) by
\[d_{n}=\left\{\begin{array}{l}1,\,\mbox{if }a_{n,n}=0,\\ 0,\,\mbox{if }a_{n,n}=1.\end{array}\right.\]
Then \(s_{AD}\) belongs to \(B\). However, \(s_{AD}\) has been constructed to disagree with each \(s_{n}\) at the \(n\)th place, so it cannot equal \(s_{n}\) for any \(n\). Thus \(s_{AD}\) does not appear in the list, contradicting the assumption that the list contains all infinite binary strings. Therefore, \(B\) is uncountable. Q.E.D.

The main criticism originally raised against CDA was that the diagonal string \(s_{D}\) can never "cover" the whole of the array [15]. This is best illustrated with a finite example: consider the set \(B_{4}\) of all binary strings of length 4, i.e. \(s_{n}=a_{n,1}\,a_{n,2}\,a_{n,3}\,a_{n,4}\).
It is simple to observe that the whole array consists of \(2^{4}=16\) strings \(s_{n}\) (\(s_{1}\) to \(s_{16}\)), all of length 4:

(2.1) _[Array garbled in extraction. It displayed the sixteen length-4 strings with the diagonal digits \(a_{n,n}\) marked; a computational rendering of the example is given below.]_

_[The counter-example arrays (2.2) and (2.3), together with the discussion connecting them to (2.1), were likewise lost to extraction garbling.]_
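As an editorial aid (our addition, not part of the original text), the finite illustration can be reproduced computationally. The Python sketch below lists all \(2^{4}=16\) binary strings of length 4 in lexicographic order (one admissible choice for the garbled array (2.1); the ordering is our assumption), builds the antidiagonal from the first four rows, and locates the resulting string in the listing.

```python
from itertools import product

# All 2**4 = 16 binary strings of length 4, in lexicographic order (one
# hypothetical ordering for the garbled array (2.1)).
B4 = [''.join(bits) for bits in product('01', repeat=4)]

# Antidiagonal construction: flip the j-th digit of the j-th string.
# Only the first four of the sixteen rows are touched by the diagonal.
s_AD = ''.join('1' if B4[j][j] == '0' else '0' for j in range(4))

print(s_AD)                # '1100' for this ordering
print(s_AD in B4)          # True: s_AD differs from s_1..s_4 ...
print(B4.index(s_AD) + 1)  # ... yet reappears as s_13 in the listing
```

For this ordering the antidiagonal comes out as \(s_{AD}=1100\), which differs from \(s_{1}\) to \(s_{4}\) but reappears as \(s_{13}\): the diagonal touches only four of the sixteen rows, which is the disparity the criticism of CDA turns on.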
The counter-examples constructed in (2.2) and (2.3) are not the only ones that can be conceived; an alternative construction alternates 0s and 1s in \(s_{AD}\):

(2.4) _[array garbled in extraction]_

In (2.4), the antidiagonal string is \(s_{AD}=1\,0\,1\,0\,1\,0\,1\,\cdots\), the same infinite string that the strings in array \(B\) tend towards. And an alternative to (2.4) could be constructed by changing all the 1s for 0s and all the 0s for 1s:

(2.5) _[array garbled in extraction]_

In (2.5), the antidiagonal string will be \(s_{AD}=0\,1\,0\,1\,0\,1\,0\,\cdots\), once more the same infinite string that the strings in array \(B\) tend towards. It should be equally obvious that, in all constructions (2.2) to (2.5), it is possible to place a completely randomised set of 1s and 0s below the diagonal line, leaving the antidiagonal string \(s_{AD}\) unaffected, thus highlighting that the number of possible counter-examples is, in fact, infinite.
For example, illustrating this point, an alternative to (2.2) might be:

(2.6) _[array garbled in extraction]_

_[Editorial note: the remainder of Section 2, including arrays (2.7)-(2.9) (the last of which began a listing of real numbers \(r_{0}=0.\cdots\), apparently adapting the counter-examples to expansions of reals), and the opening of Section 3 were lost to extraction garbling. The transition below is reconstructed from the surviving text.]_

## 3. The Logical Inconsistencies of CDA

A proof by contradiction (_reductio ad absurdum_) establishes the truth of a statement \(P\) by assuming its negation \(\neg P\) and deriving from it a chain of intermediate statements until a contradiction is reached (i.e. a statement which is always false) [8, 14].
Since a contradiction is a statement that can never be true (it is commonplace to describe it as a composite statement, \(R\wedge\neg R\), hence reinforcing its falsehood), its negation leads to the negation of \(\neg P\), and this, in turn, to the truth of \(P\) [8]. The associated chain of inference can be written as:
\[\neg P\Rightarrow Q_{1}\Rightarrow Q_{2}\Rightarrow\cdots\Rightarrow Q_{n}\Rightarrow(R\wedge\neg R) \tag{3.1}\]
so the rule of hypothetical syllogism [14] implies
\[\neg P\Rightarrow(R\wedge\neg R) \tag{3.2}\]
and, by _modus tollens_ and double negation [14],
\[\neg(R\wedge\neg R)\Rightarrow\neg(\neg P)\Rightarrow P \tag{3.3}\]
completing the proof. A variation on this theme reported in [15] has, as the final statement in the chain of inference, the initial statement \(P\), that is,
\[\neg P\Rightarrow Q_{1}\Rightarrow Q_{2}\Rightarrow\cdots\Rightarrow Q_{n}\Rightarrow P \tag{3.4}\]
so the rule of hypothetical syllogism combined with conjunction introduction [14] now implies that
\[\neg P\Rightarrow(P\wedge\neg P) \tag{3.5}\]
and, once more by _modus tollens_ and double negation [14],
\[\neg(P\wedge\neg P)\Rightarrow\neg(\neg P)\Rightarrow P \tag{3.6}\]
and the proof is again complete. The chain of inference (3.4) is relevant to our analysis, given that this is the form of proof associated with CDA [15]. One fundamental aspect of proofs by contradiction is the fact that, in order to derive the truth of \(P\), the truth of all the intermediate statements \(Q_{n}\) in (3.1), or (3.4), has to be independently asserted. Quoting from [8]: "Such a proof (_reductio ad absurdum_) consists of a deduction of a contradiction from the negation of the statement whose proof is required. That this is a legitimate procedure (..) can be seen as follows. If we have an argument which is known to be an instance of a valid argument form, and its conclusion is known to be false, then at least one of the premises must be false. If all the premises are known to be true except one (the assumed one), then the legitimate deduction is that this assumed one is the one which is false." Such a prerequisite is fundamental to the success of these proofs. The chains of inference (3.1) or (3.4) do not offer any additional complication, but the same cannot be said of chains of inference where the connectors are biconditional instead of single conditional:
\[\neg P\Leftrightarrow Q_{1}\Leftrightarrow Q_{2}\Leftrightarrow\cdots\Leftrightarrow Q_{n}\Rightarrow(R\wedge\neg R) \tag{3.7}\]
\[\neg P\Leftrightarrow Q_{1}\Leftrightarrow Q_{2}\Leftrightarrow\cdots\Leftrightarrow Q_{n}\Rightarrow P \tag{3.8}\]
In (3.7) and (3.8), the truth of the intermediate statements \(Q_{1},\ldots,Q_{n}\) is directly associated to the truth of \(\neg P\), since they are all equivalent statements [8]. Therefore, if \(\neg P\) is false, so are \(Q_{1},\ldots,Q_{n}\). In other words, the falsehood of \((R\wedge\neg R)\), or \((P\wedge\neg P)\), implies the falsehood of them all, \(Q_{1},\ldots,Q_{n}\) as well as \(\neg P\). Consequently, the proofs fail to have a single true statement underpinning the sought conclusion, i.e. the truth of \(P\).
It is hard to see this scenario as anything but a corruption of the method of proof by contradiction. This much was concluded in [15]. However, there is a "half-way house" situation where having biconditional statements connecting \(\neg P\) to some, but not all, of the statements \(Q_{1},\ldots,Q_{n}\) does not compromise the validity of the proof:
\[\neg P\Leftrightarrow Q_{1}\Leftrightarrow Q_{2}\Leftrightarrow\cdots\Leftrightarrow Q_{i-1}\Leftrightarrow Q_{i}\Rightarrow Q_{i+1}\Rightarrow\cdots\Rightarrow Q_{n}\Rightarrow(R\wedge\neg R) \tag{3.9}\]
\[\neg P\Leftrightarrow Q_{1}\Leftrightarrow Q_{2}\Leftrightarrow\cdots\Leftrightarrow Q_{i-1}\Leftrightarrow Q_{i}\Rightarrow Q_{i+1}\Rightarrow\cdots\Rightarrow Q_{n}\Rightarrow P \tag{3.10}\]
In (3.9) and (3.10), the truth of the statements \(Q_{i+1},\ldots,Q_{n}\) is not associated to the truth of \(\neg P\) (unlike \(Q_{1},\ldots,Q_{i}\)), and that will be sufficient to validate the proof, provided the statements \(Q_{i+1},\ldots,Q_{n}\) are shown to be true. It is this observation that we failed to notice in our original report [15]2.

Footnote 2: Fortunately, all the proofs of nondenumerability that were analysed in [15] fall into the category of (3.7) or (3.8), so the conclusions reported there remain sound.

Knowing already that CDA is a flawed proof, we are now in a position to evaluate its logical structure. If we take the presentation of CDA already described in Section 2, we can dissect the chain of inference as follows:

* \(P=\) 'The set \(B\) of infinite binary strings is uncountable'
* \(\neg P=\) 'The set \(B\) of infinite binary strings is countable'
* \(Q_{1}=\) 'The strings \(s_{n}\) in \(B\) can be listed as \[s_{1},s_{2},s_{3},\ldots,s_{n},\ldots\] where \(n\in\mathbb{N}\), \(n\geq 1\)'
* \(Q_{2}=\) 'We can picture the set of strings \(s_{n}\) written out in an array: \[s_{n}=a_{n,1}\;a_{n,2}\;a_{n,3}\;\ldots\;a_{n,n}\;\ldots\] where \(n\in\mathbb{N}\), \(n\geq 1\)'
* \(Q_{3}=\) 'We define an "antidiagonal" string \(s_{AD}=d_{1}\;d_{2}\;d_{3}\;\ldots\;d_{n}\;\ldots\) by \[d_{n}=\left\{\begin{array}{l}1,\;\mbox{if}\;a_{n,n}=0\\ 0,\;\mbox{if}\;a_{n,n}=1\end{array}\right.\] \(s_{AD}\) belongs to \(B\), but \(d_{n}\neq a_{n,n}\) for all \(n\in\mathbb{N}\), so it cannot be part of the array'
* \(Q_{4}=\) 'The array is not a complete listing of the elements of \(B\)'

The list of statements forms the logical sequence
\[\neg P\Leftrightarrow Q_{1}\Leftrightarrow Q_{2}\Leftrightarrow Q_{3}\Rightarrow Q_{4}\Leftrightarrow P \tag{3.11}\]
where the connectives linking \(\neg P\) with \(Q_{1}\), \(Q_{2}\) and \(Q_{3}\) are all biconditional, leaving just a single conditional connective between \(Q_{3}\) and \(Q_{4}\) since, in principle, there could be other reasons (not addressed by the proof) why the array is not a complete listing of \(B\). The final connective between \(Q_{4}\) and \(P\) is also biconditional. It is important to understand that the connective between statements \(Q_{2}\) and \(Q_{3}\) is biconditional: the antidiagonal string \(s_{AD}\) can only be defined on the basis of the construction of the array and, in reverse, the definition of \(s_{AD}\) implies the existence of the countable array. The chain of inference (3.11) is an example of (3.8), without a single true intermediate statement underpinning the validity of the proof.
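The propositional schemata above can be checked mechanically. The Python sketch below is an editorial addition: it collapses the chain \(Q_{1},\ldots,Q_{n}\) into a single statement \(Q\) (a simplification made for brevity), verifies that the schema (3.1)-(3.3) is classically valid, and exhibits the situation the text describes for (3.7), where the only consistent assignment leaves \(Q\) false alongside \(\neg P\).

```python
from itertools import product

def implies(a: bool, b: bool) -> bool:
    # Material implication: a => b
    return (not a) or b

# Schema (3.1)-(3.3), with one Q standing in for Q_1,...,Q_n:
# ((~P => Q) and (Q => (R and ~R))) => P holds for every assignment.
print(all(
    implies(implies(not P, Q) and implies(Q, R and not R), P)
    for P, Q, R in product((False, True), repeat=3)
))  # True: the reductio schema is classically valid

# Situation of (3.7): ~P <=> Q with Q entailing a contradiction. The only
# assignment satisfying both premises leaves Q false together with ~P.
print([(P, Q) for P, Q in product((False, True), repeat=2)
       if ((not P) == Q) and implies(Q, False)])  # [(True, False)]
```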
Since we already know that CDA is flawed, it should come as no surprise that its logical structure fails to meet the requirements of a correct proof by contradiction. In fact, this failure could have been used to point to the shortcomings of CDA. Since it is the case that \(Q_{3}\Rightarrow(P\wedge\neg P)\), there are no circumstances under which \(Q_{3}\) can be a true statement. The implications of the flawed nature of CDA as a method of proof are considerable. Diagonalisation arguments have been used extensively by set-theorists and logicians over the years, and quite a number of important results (including Gödel's famous incompleteness theorems [17]) are underpinned by such arguments. This issue was comprehensively analysed in our previous report [15]. With regard to the nondenumerability of the set of real numbers, \(\mathbb{R}\), non-set-theoretical proofs can be found in the literature that come from other branches of mathematics [11]. It will be of interest to verify whether such proofs also lack a reliable logical structure. In our previous report [15], we introduced the definition of _inconceivable_ statements, to be used as a preventative measure against the construction of incorrect proofs:

**Definition 3.1**.: _A mathematical statement \(Q\) is said to be inconceivable when there is another statement \(P\) such that_

_i)_ \((Q\Rightarrow P)\wedge(Q\Rightarrow\neg P)\), _or_

_ii)_ \(Q\Rightarrow\big{(}(P\Rightarrow\neg P)\vee(\neg P\Rightarrow P)\big{)}\)

_Otherwise, the statement \(Q\) is considered conceivable._

This definition led to the formulation of a Principle (of _Conceivable Proof_) that needs a slight alteration, in order to account for proof constructions such as (3.9) and (3.10).

**Principle 3.2** (of Conceivable Proof).: _No mathematical proof by contradiction can be judged valid if (in the absence of any true statement or statements underpinning the proof) its construction includes one or more inconceivable statements; an exception will be when the purpose of the proof is to demonstrate the falsehood of such an inconceivable statement, provided that the resulting contradiction is not conceptually linked to the initial assumption of the proof._

This principle was initially put forward to prevent the construction of erroneous proofs like CDA [15]. Proofs by contradiction are everywhere in the mathematical literature, thus the prevention of mistakes in their formulation seems warranted.

## 4. Denumerability of the Power Set of \(\mathbb{N}\) (\(\mathcal{P}(\mathbb{N})\))

The refutation of CDA reported here, as well as the preceding critical evaluations of the remaining set-theoretical proofs on the uncountability of \(\mathbb{R}\) and \(\mathcal{P}(\mathbb{N})\) [15], do not, just by themselves, prove that these sets are denumerable. Such a conclusion can only be reached with the construction of the relevant proof/s. To this effect, in our previous report we described three independent proofs of the denumerability of \(\mathcal{P}(\mathbb{N})\) [15], whose completion required the formulation of a new theorem (of actual countable infinity) [15], a natural extension of the axiom of infinity [7, 12, 16]. A new proof is presented here that does not make use of such a theorem, and takes full account of all the infinite subsets of \(\mathbb{N}\).
Two preliminary facts, which will be used in its construction, need to be established before dealing with the proof:

_i)_ Firstly, consider a well-known theorem for the union of countable sets [2, 16]:

**Theorem 4.1**.: _If \(A_{n}\) is a countable set for each \(n\in\mathbb{N}\), then the union \(A:=\bigcup_{n=1}^{\infty}A_{n}\) is countable._

In Theorem 4.1, "countable" is understood to include infinitely countable, i.e. denumerable.

_ii)_ Secondly, consider a finite set \(A_{n}\) of \(n\) members, with \(n\in\mathbb{N}\), and also consider its power set, \(\mathcal{P}(A_{n})\), with cardinality given by [1, 15]:
\[\left|\mathcal{P}(A_{n})\right|=2^{n}=\sum_{p=0}^{n}\left(\begin{array}{c}n\\ p\end{array}\right) \tag{4.1}\]
where each binomial coefficient equals the cardinality of a given subset of \(\mathcal{P}(A_{n})\)3. If we name as \(\mathcal{N}(A_{p})\) the subset of \(\mathcal{P}(A_{n})\) formed by all the subsets of \(A_{n}\) with cardinality \(p\), it will be the case that

Footnote 3: In (4.1) it has been assumed, for simplicity, that \(n\) is an even natural, i.e. \(n\equiv 0\ (\mathrm{mod}\ 2)\), so that the binomial expansion of \(2^{n}\) has just one single central term \(\left(\begin{array}{c}n\\ n/2\end{array}\right)\).

\[\left|\mathcal{N}(A_{p})\right|=\left(\begin{array}{c}n\\ p\end{array}\right) \tag{4.2}\]
and we will be able to write that
\[\mathcal{P}(A_{n})=\bigcup_{p=0}^{n}\mathcal{N}(A_{p}) \tag{4.3}\]
and
\[\left|\mathcal{P}(A_{n})\right|=\sum_{p=0}^{n}\left|\mathcal{N}(A_{p})\right|=\sum_{p=0}^{n}\left(\begin{array}{c}n\\ p\end{array}\right) \tag{4.4}\]
since the subsets \(\mathcal{N}(A_{p})\) are pairwise disjoint. A property of the binomial coefficients, to be used in what follows, is the relationship between consecutive coefficients [1], that is
\[\left(\begin{array}{c}n\\ p+1\end{array}\right)=\left(\begin{array}{c}n\\ p\end{array}\right)\cdot\frac{(n-p)}{(p+1)} \tag{4.5}\]
Applying (4.5) to consecutive binomial coefficients beyond \(n/2\), it can easily be deduced that
\[\left(\begin{array}{c}n\\ n/2+d+1\end{array}\right)=\left(\begin{array}{c}n\\ n/2+d\end{array}\right)\cdot q \tag{4.6}\]
where the ratio \(q\) (\(q\in\mathbb{Q}\)) is given by
\[q=\frac{(n-2d)}{[\,n+2\,(d+1)]}=\frac{1-(2/n)\cdot d}{[\,1+(2/n)\cdot(d+1)]} \tag{4.7}\]
with \(0<q<1\), and \(0\leq d\leq n/2-1\) (\(d\in\mathbb{N}\)). The ratio \(q\) takes the limit values:
\[d=0\ \rightarrow\ \left(\begin{array}{c}n\\ n/2+1\end{array}\right)=\left(\begin{array}{c}n\\ n/2\end{array}\right)\cdot q\quad\mbox{with}\quad q=\frac{1}{1+(2/n)} \tag{4.8}\]
\[d=n/2-1\ \rightarrow\ \left(\begin{array}{c}n\\ n\end{array}\right)=\left(\begin{array}{c}n\\ n-1\end{array}\right)\cdot q\quad\mbox{with}\quad q=1/n \tag{4.9}\]
The above results can be illustrated graphically with an example, e.g. \(n=40\) (Figure 1). It is also relevant to examine the values that \(q\) takes for a range of values of \(d\) as a function of \(n\) (Table 1).

With these two facts now established, we are in a position to describe our new proof of the denumerability of the power set \(\mathcal{P}(\mathbb{N})\).

**Theorem 4.2** (Denumerability of the Power Set).: _Let \(\mathbb{N}\) be the set of all natural numbers, \(\mathbb{N}=\{\,0,1,2,3,\cdots,p,\cdots\,\}\). Its power set \(\mathcal{P}(\mathbb{N})\), the set of all subsets of \(\mathbb{N}\), is denumerable._

Proof.: The power set \(\mathcal{P}(\mathbb{N})\) will be the union of all sets \(\mathcal{N}(\mathrm{N}_{p})\), i.e. the sets of all subsets of \(\mathbb{N}\) with cardinality \(p\):
\[\mathcal{P}(\mathbb{N})=\bigcup\nolimits_{p=0}^{\aleph_{0}}\mathcal{N}(\mathrm{N}_{p}) \tag{4.10}\]
where \(\aleph_{0}\) denotes the cardinality of \(\mathbb{N}\), i.e. \(\left|\mathbb{N}\right|=\aleph_{0}\) [7, 12, 16]. Therefore, the sets \(\mathcal{N}(\mathrm{N}_{p})\) will comprise both finite and infinite sets. Since all sets \(\mathcal{N}(\mathrm{N}_{p})\) are pairwise disjoint, the cardinality of \(\mathcal{P}(\mathbb{N})\) can be expressed as the summation of the cardinalities of all sets \(\mathcal{N}(\mathrm{N}_{p})\):
\[\left|\mathcal{P}(\mathbb{N})\right|=\sum_{p=0}^{\aleph_{0}}\left|\mathcal{N}(\mathrm{N}_{p})\right| \tag{4.11}\]
with the total count of sets \(\mathcal{N}(\mathrm{N}_{p})\) being denumerable; i.e., if we denote by \(\mathcal{T}_{\aleph_{0}}\) the set whose members are all the sets \(\mathcal{N}(\mathrm{N}_{p})\):
\[\mathcal{T}_{\aleph_{0}}=\left\{\mathcal{N}(\mathrm{N}_{0}),\,\mathcal{N}(\mathrm{N}_{1}),\,\mathcal{N}(\mathrm{N}_{2}),\,\ldots,\,\mathcal{N}(\mathrm{N}_{p}),\,\ldots,\,\mathcal{N}(\mathrm{N}_{\aleph_{0}})\right\} \tag{4.12}\]
it will be the case that
\[\left|\mathcal{T}_{\aleph_{0}}\right|=\aleph_{0} \tag{4.13}\]
In order to prove that the power set \(\mathcal{P}(\mathbb{N})\) is denumerable, it will be sufficient to prove that each set \(\mathcal{N}(\mathrm{N}_{p})\) is denumerable, that is
\[\left|\mathcal{N}(\mathrm{N}_{p})\right|=\aleph_{0}\ \ \forall p\in\mathbb{N}\ \wedge\ \forall\mathcal{N}(\mathrm{N}_{p}):\ \left|\mathrm{N}_{p}\right|=\aleph_{0} \tag{4.14}\]
It is already well-documented that the set whose members are all the finite subsets of \(\mathbb{N}\), denoted \(\mathcal{F}(\mathbb{N})\), is denumerable [16]. A proof of this statement can easily be constructed using Theorem 4.1 and mathematical induction:

- \(\left|\mathcal{N}(\mathrm{N}_{0})\right|=1\), since the only member of \(\mathcal{N}(\mathrm{N}_{0})\) is the empty set, \(\varnothing\).

- \(\left|\mathcal{N}(\mathrm{N}_{1})\right|=\aleph_{0}\), as the members of \(\mathcal{N}(\mathrm{N}_{1})\) are all the singletons \(\{i\}\), \(\forall i\in\mathbb{N}\).

- \(\left|\mathcal{N}(\mathrm{N}_{2})\right|=\aleph_{0}\). The members of \(\mathcal{N}(\mathrm{N}_{2})\), i.e. the pairs \(\{i,j\}\), \(\forall i,j\in\mathbb{N}\), can be constructed by the union of all the singletons \(\{i\}\) with the singletons \(\{j\}\), provided that, to avoid repetition, \(j>i\). For each singleton \(\{i\}\), a denumerable set of pairs \(\{i,j\}\) will be generated. Since the set of singletons \(\{i\}\) is also denumerable, application of Theorem 4.1 implies that the total of pairs \(\{i,j\}\) is indeed denumerable.

- The inductive step: if it is assumed that the set \(\mathcal{N}(\mathrm{N}_{p})\), \(p\in\mathbb{N}\), is denumerable, this will imply that the set \(\mathcal{N}(\mathrm{N}_{p+1})\) is also denumerable. The members of \(\mathcal{N}(\mathrm{N}_{p})\) will be all the subsets of \(\mathbb{N}\) with cardinality \(p\), e.g. \(\{0,1,2,3,\cdots,p-1\}\). To construct the members of \(\mathcal{N}(\mathrm{N}_{p+1})\), we need to take the union of every member of \(\mathcal{N}(\mathrm{N}_{p})\) (that is, a subset of \(\mathbb{N}\) with cardinality \(p\)) with every singleton \(\{i\}\) such that \(i\) is greater than any of the elements of the given subset, in order to avoid repetitions. Therefore, for every member of \(\mathcal{N}(\mathrm{N}_{p})\), a denumerable list of subsets of \(\mathbb{N}\) with cardinality \(p+1\) will be generated.
Since it has been assumed that \(\mathcal{N}(\mathrm{N}_{p})\) is a denumerable set, the application of Theorem 4.1 leads to the conclusion that \(\mathcal{N}(\mathrm{N}_{p+1})\) is also denumerable.

- A final application of Theorem 4.1 yields the conclusion that \(\mathcal{F}(\mathbb{N})\) is denumerable.

All members of \(\mathcal{F}(\mathbb{N})\) are finite subsets of \(\mathbb{N}\). To construct the infinite subsets of \(\mathbb{N}\), a starting point can be those subsets that are obtained by extracting the differences \(\mathbb{N}\setminus\mathrm{N}_{p}\), that is, extracting from \(\mathbb{N}\) the finite subsets \(\mathrm{N}_{p}\). To illustrate this point, two examples of \(\mathbb{N}\setminus\mathrm{N}_{1}\) are the subsets \(\{1,2,3,4,\ldots,p,\ldots\}\) and \(\{0,2,3,4,\ldots,p,\ldots\}\), while two examples of \(\mathbb{N}\setminus\mathrm{N}_{2}\) are the subsets \(\{2,3,4,\ldots,p,\ldots\}\) and \(\{0,3,4,\ldots,p,\ldots\}\). And so on. Since a one-to-one correspondence can be established between every \(\mathcal{N}(\mathrm{N}_{p})\) and \(\mathcal{N}(\mathrm{N}_{\mathbb{N}\setminus p})\) (where \(\mathrm{N}_{\mathbb{N}\setminus p}\) denotes the set of subsets of \(\mathbb{N}\) obtained by extracting the differences \(\mathbb{N}\setminus\mathrm{N}_{p}\), for a given cardinality \(p\)), i.e. \(\mathcal{N}(\mathrm{N}_{p})\leftrightarrow\mathcal{N}(\mathrm{N}_{\mathbb{N}\setminus p})\), it will always be the case that
\[\left|\mathcal{N}(\mathrm{N}_{p})\right|=\left|\mathcal{N}(\mathrm{N}_{\mathbb{N}\setminus p})\right|=\aleph_{0}\ \ \ \forall p\in\mathbb{N}\wedge p\neq 0 \tag{4.15}\]
Returning to (4.11), expand this statement as follows:
\[\left|\mathcal{P}(\mathbb{N})\right|=\left|\mathcal{N}(\mathrm{N}_{0})\right|+\left|\mathcal{N}(\mathrm{N}_{1})\right|+\cdots+\left|\mathcal{N}(\mathrm{N}_{p})\right|+\cdots+\left|\mathcal{N}(\mathrm{N}_{\mathbb{E},\mathbb{O}})\right|+\cdots+\left|\mathcal{N}(\mathrm{N}_{\mathbb{N}\setminus p})\right|+\cdots+\left|\mathcal{N}(\mathrm{N}_{\mathbb{N}\setminus 1})\right|+\left|\mathcal{N}(\mathrm{N}_{\mathbb{N}\setminus 0})\right| \tag{4.16}\]
where \(\mathrm{N}_{\mathbb{E},\mathbb{O}}\) denotes the set of infinite subsets of \(\mathbb{N}\) with equal numbers of members of \(\mathbb{N}\) missing as showing. Two examples of these subsets are the set of all even numbers, \(\mathbb{E}=\{0,2,4,6,8,\ldots\}\), and the set of all odd numbers, \(\mathbb{O}=\{1,3,5,7,9,\ldots\}\) (notice that \(\mathbb{E}\cup\mathbb{O}=\mathbb{N}\), and \(\mathbb{E}\cap\mathbb{O}=\varnothing\)).
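As an editorial companion to the inductive argument above (our addition, and a standard enumeration rather than the paper's own construction): the finite subsets of \(\mathbb{N}\) admit an explicit bijection with \(\mathbb{N}\), mapping each \(n\) to the set of positions of the 1-bits in its binary expansion, which exhibits the denumerability of \(\mathcal{F}(\mathbb{N})\) directly.

```python
# Standard bijection N <-> F(N): n maps to the positions of the 1-bits of n.
# Offered as a concrete companion to the induction above, not as the
# paper's construction.
def finite_subset(n: int) -> list:
    return [i for i in range(n.bit_length()) if (n >> i) & 1]

for n in range(8):
    print(n, finite_subset(n))
# 0 [], 1 [0], 2 [1], 3 [0, 1], 4 [2], 5 [0, 2], 6 [1, 2], 7 [0, 1, 2]
```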
We know already that some of the sets (of subsets of \(\mathbb{N}\)) are denumerable:
\[\left|\mathcal{P}(\mathbb{N})\right|=1+\aleph_{0}+\cdots+\aleph_{0}+\cdots+\left|\mathcal{N}(\mathrm{N}_{\mathbb{E},\mathbb{O}})\right|+\cdots+\aleph_{0}+\cdots+\aleph_{0}+1 \tag{4.17}\]
Adapting (4.4) from the power set of a finite set, \(A_{n}\), to the power set of \(\mathbb{N}\), the cardinality of \(\mathcal{P}(\mathbb{N})\) will be given by
\[\left|\mathcal{P}(\mathbb{N})\right|=\lim_{n\to\aleph_{0}}\left[\sum_{p=0}^{n}\left(\begin{array}{c}n\\ p\end{array}\right)\right]=\sum_{p=0}^{\aleph_{0}}\left[\lim_{n\to\aleph_{0}}\left(\begin{array}{c}n\\ p\end{array}\right)\right] \tag{4.18}\]
If (4.16) and (4.18) are compared, it becomes possible to derive the cardinality of \(\mathcal{N}(\mathrm{N}_{\mathbb{E},\mathbb{O}})\) as given by
\[\left|\mathcal{N}(\mathrm{N}_{\mathbb{E},\mathbb{O}})\right|=\lim_{n\to\aleph_{0}}\left(\begin{array}{c}n\\ n/2\end{array}\right) \tag{4.19}\]
since \(\left|\mathcal{N}(\mathrm{N}_{\mathbb{E},\mathbb{O}})\right|\) is the central term of the expansion (4.16). For \(\mathcal{P}(\mathbb{N})\) to be uncountable, it will be required that at least one of the members of the expansion (4.16) is uncountable. Accordingly, \(\mathcal{N}(\mathrm{N}_{\mathbb{E},\mathbb{O}})\) will have to be uncountable, as it is the central term of (4.16). If the arithmetic of transfinite cardinals is taken into consideration [7, 12, 16], it will be realised that (4.19) does not generate a transfinite cardinal larger than \(\aleph_{0}\):
\[\left|\mathcal{N}(\mathrm{N}_{\mathbb{E},\mathbb{O}})\right|=\lim_{n\to\aleph_{0}}\left(\begin{array}{c}n\\ n/2\end{array}\right)=\lim_{n\to\aleph_{0}}\left[\frac{n!}{(n/2)!\cdot(n/2)!}\right]=\aleph_{0} \tag{4.20}\]
since \(\aleph_{0}\cdot\aleph_{0}=\aleph_{0}\) [7, 12, 16]; as the limit is taken, the factorial \(n!\) will never be able to grow in value beyond \(\aleph_{0}\). What is true of \(\mathcal{N}(\mathrm{N}_{\mathbb{E},\mathbb{O}})\) will also be true of all the subsets of \(\mathcal{P}(\mathbb{N})\) whose members are infinite subsets of \(\mathbb{N}\). Therefore, all of them will be denumerable. To corroborate this conclusion, we can proceed with the following analysis: assume that \(\mathcal{N}(\mathrm{N}_{\mathbb{E},\mathbb{O}})\) is an uncountable set, so that its cardinality is \(\aleph_{1}\), i.e. the least uncountable cardinal [7, 12, 16]. According to the arithmetic of transfinite cardinals, it is the case that \(\aleph_{1}\cdot k=\aleph_{1}\), \(\forall k\in\mathbb{N}\). Equally, it can be written that \(\aleph_{1}\cdot(1/k)=\aleph_{1}\), \(\forall k\in\mathbb{N}\), \(k\geq 1\). And this last statement can also be extended to the product of \(\aleph_{1}\) by a rational number \(q\), i.e. \(\aleph_{1}\cdot q=\aleph_{1}\), \(\forall q\in\mathbb{Q}\), \(0<q<1\).
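Before (4.7) is applied at the transfinite limit in the next step, the finite-\(n\) recursion itself can be verified mechanically. The sketch below is an editorial sanity check (our addition) of the identity (4.6)-(4.7), exact in rational arithmetic.

```python
from fractions import Fraction
from math import comb

# Check of (4.6)-(4.7) at finite n:
# C(n, n/2 + d + 1) = C(n, n/2 + d) * q,  q = (n - 2d) / (n + 2(d + 1)).
n = 40
for d in range(n // 2):                     # 0 <= d <= n/2 - 1
    q = Fraction(n - 2 * d, n + 2 * (d + 1))
    assert comb(n, n // 2 + d + 1) == comb(n, n // 2 + d) * q
print("(4.6)-(4.7) hold exactly for n =", n)  # at d = n/2 - 1, q = 1/n as in (4.9)
```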
By considering (4.7) together with Table 1, it will then be possible to evaluate the cardinalities of \(\mathcal{N}(\mathrm{N}_{\mathbb{E},\mathbb{O}+1})\) and subsequent subsets of \(\mathcal{P}(\mathbb{N})\):
\[\left|\mathcal{N}(\mathrm{N}_{\mathbb{E},\mathbb{O}+1})\right|=\left|\mathcal{N}(\mathrm{N}_{\mathbb{E},\mathbb{O}})\right|\cdot q=\lim_{n\to\aleph_{0}}\left[\left(\begin{array}{c}n\\ n/2\end{array}\right)\cdot\frac{1}{1+(2/n)}\right]=\aleph_{1} \tag{4.21}\]
\[\left|\mathcal{N}(\mathrm{N}_{\mathbb{E},\mathbb{O}+2})\right|=\left|\mathcal{N}(\mathrm{N}_{\mathbb{E},\mathbb{O}+1})\right|\cdot q=\lim_{n\to\aleph_{0}}\left[\left(\begin{array}{c}n\\ (n/2)+1\end{array}\right)\cdot\frac{1-(2/n)}{1+(4/n)}\right]=\aleph_{1} \tag{4.22}\]
and so on. It is clear that the product of \(\aleph_{1}\) by the corresponding values of the ratio \(q\) would render \(\aleph_{1}\) in all cases, since \(0<q<1\). That is, all the terms in the expansion (4.16) between \(\mathcal{N}(\mathrm{N}_{\mathbb{E},\mathbb{O}})\) and \(\mathcal{N}(\mathrm{N}_{\mathbb{N}\setminus p})\) would have cardinalities that equal \(\aleph_{1}\). And, more significantly, the sets \(\mathcal{N}(\mathrm{N}_{\mathbb{N}\setminus p})\), \(\forall p\in\mathbb{N}\), would also have cardinalities equal to \(\aleph_{1}\), therefore contradicting (4.15). Conclusively, such a contradiction confirms that \(\mathcal{N}(\mathrm{N}_{\mathbb{E},\mathbb{O}})\), and all the subsequent subsets of \(\mathcal{P}(\mathbb{N})\), cannot be uncountable.

It is of interest to consider the cardinality of the last term of the expansion (4.16), i.e. \(\mathcal{N}(\mathrm{N}_{\mathbb{N}\setminus 0})\): if the term \(\mathcal{N}(\mathrm{N}_{\mathbb{N}\setminus 1})\) had cardinality equal to \(\aleph_{1}\), then the cardinality of \(\mathcal{N}(\mathrm{N}_{\mathbb{N}\setminus 0})\) would be given by4
\[\left|\mathcal{N}(\mathrm{N}_{\mathbb{N}\setminus 0})\right|=\left|\mathcal{N}(\mathrm{N}_{\mathbb{N}\setminus 1})\right|\cdot q=\lim_{n\to\aleph_{0}}\left[\aleph_{1}\cdot(1/n)\right]=\aleph_{1}\cdot(1/\aleph_{0})=\aleph_{1} \tag{4.23}\]
contradicting the fact that \(\left|\mathcal{N}(\mathrm{N}_{\mathbb{N}\setminus 0})\right|=1\). Alternatively, it is easy to see that, if \(\mathcal{N}(\mathrm{N}_{\mathbb{E},\mathbb{O}})\) had been a denumerable set, i.e. \(\left|\mathcal{N}(\mathrm{N}_{\mathbb{E},\mathbb{O}})\right|=\aleph_{0}\), then it would also have followed that all the subsequent subsets of \(\mathcal{P}(\mathbb{N})\) were denumerable, and that \(\left|\mathcal{N}(\mathrm{N}_{\mathbb{N}\setminus 0})\right|=1\). Once all subsets of \(\mathcal{P}(\mathbb{N})\) whose members are infinite subsets of \(\mathbb{N}\) have been shown to be denumerable, the denumerability of \(\mathcal{P}(\mathbb{N})\) follows, since it can now be written that
\[\left|\mathcal{P}(\mathbb{N})\right|=1+\aleph_{0}+\cdots+\aleph_{0}+\cdots+\aleph_{0}+\cdots+\aleph_{0}+\cdots+\aleph_{0}+1 \tag{4.24}\]
where, according to (4.13), the total of terms of the summation is denumerable; a final implementation of Theorem 4.1 concludes that \(\left|\mathcal{P}(\mathbb{N})\right|=\aleph_{0}\).
It becomes a corollary of Theorem 4.2 that the set of reals \(\mathbb{R}\) is also denumerable [15]:
\[\left|\mathbb{R}\right|=\left|\mathcal{P}(\mathbb{N})\right|=2^{\aleph_{0}}=\aleph_{0} \tag{4.25}\]
And the same conclusion applies to the power set of \(\mathcal{P}(\mathbb{N})\),
\[\left|\mathcal{P}(\mathcal{P}(\mathbb{N}))\right|=\aleph_{0} \tag{4.26}\]
as well as to subsequent power sets, thus questioning the viability of transfinite cardinals beyond \(\aleph_{0}\).

## 5. Comparing Infinities: Relative Cardinalities

Since, as a consequence of Theorem 4.2, all common infinite sets appear to have the same cardinality \((\aleph_{0})\), a different way of comparing them seems necessary. For this purpose, the concept of relative cardinality was introduced in [15], modified here as follows:

**Definition 5.1** (Relative Cardinality of Finite Sets).: _Consider two finite sets, \(A\) and \(B\), with cardinalities \(\left|A\right|=a\) and \(\left|B\right|=b\), such that \(a<b\). Their relative cardinality is defined as the ratio \(\rho_{A,B}=a/b\)._

**Definition 5.2** (Relative Cardinality of Infinite Sets).: _Consider two sets, \(A\) and \(B\), both denumerable, such that \(A\) is a subset of \(B\), \(A\subseteq B\). Assume their constructions generate formulae, \(\Phi_{A}(n)\) and \(\Phi_{B}(n)\), \(\forall n\in\mathbb{N}\), which render the cardinalities of the respective interim finite sets, in relation to each other._

_The relative cardinality of \(A\) and \(B\) is defined as the limiting ratio_
\[\rho_{A,B}=\lim_{n\to\aleph_{0}}\frac{\Phi_{A}(n)}{\Phi_{B}(n)} \tag{6.1}\]
From both definitions, it will always be the case that \(0\leq\rho_{A,B}\leq 1\). While the definition of relative cardinality for finite sets is elementary, the implementation of (6.1) for infinite sets needs some kind of baseline for \(\Phi_{A}(n)\) and \(\Phi_{B}(n)\) to be truly comparable, hence the requirement that \(A\) be a subset of \(B\). It can be envisaged that the quality of being a finite or an infinite set is an "absolute" property of the set, based on the definition [7]: "A set \(X\) is _finite_ if there is a bijection \(f:n\to X\) for some \(n\in\mathbb{N}\). If there is no such bijection for any \(n\in\mathbb{N}\), \(X\) is _infinite_." The introduction of relative cardinalities, as defined here, brings the possibility of comparing finite sets according to their size. And the same applies to infinite sets, albeit in relative terms. In this sense, the property of being a denumerable set, i.e. countably infinite, is treated as a basic ("absolute") property of the set which differentiates it from any finite set, but not from other denumerable sets. However, the relative cardinality \(\rho_{A,B}\) of two denumerable sets provides a comparison of their "relative" sizes. This point can better be illustrated by relevant examples:

_i)_ Consider the set of natural numbers, \(\mathbb{N}=\{0,1,2,\cdots,n,\cdots\}\), and the set of even numbers, \(\mathbb{E}=\{0,2,4,\cdots,2k,\cdots\}\), \(\forall k\in\mathbb{N}\). \(\mathbb{E}\) is an infinite subset of \(\mathbb{N}\), such that \(|\mathbb{E}|=|\mathbb{N}|=\aleph_{0}\), so they have the same "absolute" cardinality. Nevertheless, their relative cardinality is
\[\rho_{\mathbb{E},\mathbb{N}}=\lim_{n\to\aleph_{0}}\frac{n/2}{n}=0.5 \tag{6.2}\]
therefore, in relative terms, there are half as many even numbers as natural numbers.
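As an editorial companion to example _i)_ (our addition), the partial ratios behind (6.2) can be tabulated at finite cutoffs, counting evens explicitly rather than assuming the formula \(\Phi_{\mathbb{E}}(n)=n/2\):

```python
# Partial ratio Phi_E(n) / Phi_N(n): evens below n over naturals below n.
# For every even cutoff n the ratio is exactly 0.5, matching (6.2).
def partial_ratio(n: int) -> float:
    evens_below_n = sum(1 for k in range(n) if k % 2 == 0)
    return evens_below_n / n

for n in (10, 100, 10_000, 1_000_000):
    print(n, partial_ratio(n))  # 0.5 at every scale
```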
_ii)_ Consider the set of natural numbers, \(\mathbb{N}\), and the set of all integers, positive and negative, \(\mathbb{Z}=\{\cdots,-2,-1,0,1,2,\cdots\}\). \(\mathbb{N}\) is an infinite subset of \(\mathbb{Z}\), such that \(|\mathbb{N}|=|\mathbb{Z}|=\aleph_{0}\), so once more they have the same "absolute" cardinality. To determine their relative cardinality, \(\mathsf{P}_{\mathbb{N},\mathbb{Z}}\), it is necessary to have a suitable formula \(\Phi_{\mathbb{Z}}(n)\). This will be: \(\Phi_{\mathbb{Z}}(n)=2n+1\). Accordingly, \[\mathsf{P}_{\mathbb{N},\mathbb{Z}}=\lim_{n\to\aleph_{0}}\frac{n}{2n+1}=0.5 \tag{6.3}\] so, in relative terms, there are half as many natural numbers as integers. _iii)_ Consider the set of natural numbers, \(\mathbb{N}\), and the set of rational numbers, \(\mathbb{Q}=\{a/b,\ \forall a,b\in\mathbb{Z},\ b\neq 0\}\). \(\mathbb{N}\) is an infinite subset of \(\mathbb{Q}\), such that \(|\mathbb{N}|=|\mathbb{Q}|=\aleph_{0}\), that is, they have the same "absolute" cardinality. In order to determine their relative cardinality, \(\mathsf{P}_{\mathbb{N},\mathbb{Q}}\), it is necessary to have a suitable formula \(\Phi_{\mathbb{Q}}(n)\). Consider first the fractions \(q\) in the interval \((0,1]\), such that \(0<q\leq 1,\ \forall q\in(0,1]\). [Table (6.4), enumerating the fractions \(q\in(0,1]\) by increasing denominator, and the intermediate steps of the derivation are not recoverable from the source.] The resulting counting formula, used in (6.11) below, is \(\Phi_{\mathbb{Q}}(n)=1.26\,n\,[(n^{2}-n)/2]+1\), where the numerical factor incorporates the duplicate-fraction correction \(f\) plotted in Figure 2. Accordingly, \(\mathsf{P}_{\mathbb{N},\mathbb{Q}}=\lim_{n\to\aleph_{0}}n/\Phi_{\mathbb{Q}}(n)=0\), so, in relative terms, there are infinitely more rational numbers than natural numbers. _iv)_ Consider the set of natural numbers, \(\mathbb{N}\), and the set of real numbers, \(\mathbb{R}\). \(\mathbb{N}\) is an infinite subset of \(\mathbb{R}\), such that \(|\mathbb{N}|=|\mathbb{R}|=\aleph_{0}\), that is, they have the same "absolute" cardinality. In order to determine their relative cardinality, \(\mathsf{P}_{\mathbb{N},\mathbb{R}}\), it is first necessary to have a suitable formula \(\Phi_{\mathbb{R}}(n)\). Consider the real numbers \(r\) in the interval \([0,1)\), such that \(0\leq r<1,\ \forall r\in[0,1)\); it will be the case that \[\big(\Phi_{\mathbb{R}}(n)\big)\Big|_{[0,1)}=2^{\,n} \tag{6.8}\] since it is possible to establish a one-to-one correspondence between the set of infinite binary strings and the real numbers in the interval \([0,1)\).
To cover the whole of the number line, it will be necessary to multiply (6.8) by \(2n\), resulting in \[\Phi_{\mathbb{R}}(n)=2n\cdot 2^{\,n}=n\cdot 2^{\,n+1} \tag{6.9}\] Finally, the relative cardinality \(\mathsf{P}_{\mathbb{N},\mathbb{R}}\) will be given by \[\mathsf{P}_{\mathbb{N},\mathbb{R}}=\lim_{n\to\aleph_{0}}\frac{n}{n\cdot 2^{\,n+1}}=0 \tag{6.10}\] This implies that, in relative terms, there are infinitely more real numbers than natural numbers. _v)_ Consider the set of rational numbers, \(\mathbb{Q}\), and the set of real numbers, \(\mathbb{R}\). \(\mathbb{Q}\) is an infinite subset of \(\mathbb{R}\), such that \(|\mathbb{Q}|=|\mathbb{R}|=\aleph_{0}\), that is, they have the same "absolute" cardinality. To determine their relative cardinality, \(\mathsf{P}_{\mathbb{Q},\mathbb{R}}\), it is sufficient to apply (6.1) using the formulae \(\Phi_{\mathbb{Q}}(n)\) and \(\Phi_{\mathbb{R}}(n)\) already obtained. Figure 2. Correction factor \(f\) as a function of \(n\). The result will be \[\mathsf{P}_{\mathbb{Q},\mathbb{R}}=\lim_{n\to\aleph_{0}}\frac{1.26\,n\,[(n^{2}-n)/2]+1}{n\cdot 2^{\,n+1}}=0 \tag{6.11}\] (6.11) implies the existence, in relative terms, of infinitely more real numbers than rational numbers. _vi)_ Consider the set of real numbers, \(\mathbb{R}\), and the set of complex numbers, \(\mathbb{C}\). \(\mathbb{R}\) is an infinite subset of \(\mathbb{C}\), such that \(|\mathbb{R}|=|\mathbb{C}|=\aleph_{0}\), that is, they have the same "absolute" cardinality. To determine their relative cardinality \(\mathsf{P}_{\mathbb{R},\mathbb{C}}\), it is first necessary to evaluate the formula \(\Phi_{\mathbb{C}}(n)\). Since the set of complex numbers can be constructed by the cartesian product of \(\mathbb{R}\) with the set of imaginary numbers \(\mathbb{I}=\{r\cdot i,\ \forall r\in\mathbb{R}\,\wedge\,i=\sqrt{-1}\,\}\), it can be deduced that \[\Phi_{\mathbb{C}}(n)=(n\cdot 2^{\,n+1})\cdot(n\cdot 2^{\,n+1})=n^{2}\cdot 2^{\,2n+2} \tag{6.12}\] Accordingly, the relative cardinality \(\mathsf{P}_{\mathbb{R},\mathbb{C}}\) will be given by \[\mathsf{P}_{\mathbb{R},\mathbb{C}}=\lim_{n\to\aleph_{0}}\frac{n\cdot 2^{\,n+1}}{n^{2}\cdot 2^{\,2n+2}}=\lim_{n\to\aleph_{0}}\frac{1}{n\cdot 2^{\,n+1}}=0 \tag{6.13}\] which implies the existence, again in relative terms, of infinitely more complex numbers than real numbers. Examples _i)_ to _vi)_ illustrate how a concept as simple as the relative cardinality of two given sets \(A\) and \(B\), \(\mathsf{P}_{A,B}\) (Definitions 5.1 and 5.2), is nevertheless capable of providing a powerful quantitative comparison between the relative sizes of sets, of particular significance when dealing with infinite sets. As Hilbert's well-known metaphor of the "Infinity Hotel" [6,9] indicates, infinity is treated mathematically as an "elastic" entity that can be expanded indefinitely to accommodate more and more members (a property that is fully encapsulated by Theorem 4.1). Since the main claim reported here is that all infinite sets are denumerable, i.e. they all have the same "absolute" cardinality \(\aleph_{0}\), it becomes clear that their relative cardinalities offer effective and quantitative means with which to compare them. ## 6. Concluding Remarks The results reported here offer an additional confirmation of the conclusions and implications already reported in [15].
The purge of Cantor's transfinite theory from the fabric of mathematics, although it will undoubtedly be a traumatic and arduous process, will nevertheless bring considerable benefits through the consequent simplification of the axiomatic principles that underpin set theory. Such benefits might propagate into all areas of pure mathematics, so only time will tell what new and exciting findings will be uncovered as a result.
2306.00781
Non-perturbative theory of spontaneous parametric down-conversion in open and dispersive optical systems
We develop a non-perturbative formulation based on the Green-function quantization method, that can describe spontaneous parametric down-conversion in the high-gain regime in nonlinear optical structures with arbitrary amount of loss and dispersion. This formalism opens the way for description and design of arbitrary complex and/or open nanostructured nonlinear optical systems in quantum technology applications, such as squeezed-light generation, nonlinearity-based quantum sensing, and hybrid quantum systems mediated by nonlinear interactions. As an example case, we numerically investigate the scenario of integrated quantum spectroscopy with undetected photons, in the high-gain regime, and uncover novel gain-dependent effects in the performance of the system.
Aleksa Krstić, Frank Setzpfandt, Sina Saravi
2023-06-01T15:12:50Z
http://arxiv.org/abs/2306.00781v3
Non-perturbative theory of spontaneous parametric down-conversion in open and dispersive optical systems ###### Abstract We develop a non-perturbative formulation based on the Green-function quantization method, that can describe spontaneous parametric down-conversion in the high-gain regime in nonlinear optical structures with arbitrary amount of loss and dispersion. This formalism opens the way for description and design of arbitrary complex and/or open nanostructured nonlinear optical systems in quantum technology applications, such as squeezed-light generation, nonlinearity-based quantum sensing, and hybrid quantum systems mediated by nonlinear interactions. As an example case, we numerically investigate the scenario of integrated quantum spectroscopy with undetected photons, in the high-gain regime, and uncover novel gain-dependent effects in the performance of the system. ## I Introduction With the ever growing importance of optical quantum technologies, new ways to leverage nonclassical properties of light are in high demand [1; 2]. Some of the most prevalent methods for generating non-classical light are based on nonlinear sources of photon pairs in which nonlinear optical processes, such as spontaneous parametric down-conversion (SPDC) or spontaneous four-wave mixing (SFWM), are used to convert input light into pairs of photons which exhibit non-classical correlations [3; 4], highly relevant for many applications in quantum technologies. Such sources also offer the advantage that they can be implemented in integrated platforms [2; 4], as well as utilised in hybrid quantum photonic systems, to overcome the limitations of monolithic integrated systems [5]. Apart from generating entangled photon pairs, nonlinear photon-pair sources are also commonly used as a means to obtain heralded single photons [6; 7; 8]. In both cases the sources are operated in the so-called low-gain regime, where the dominant contribution to the output state (apart from the vacuum) is a single photon-pair state [7]. This is achieved by keeping the input pump beam of the nonlinear source at a power that is low enough to keep the multiple photon-pair contributions in the output state negligible. Although such low-gain probabilistic sources of photon pairs are of importance in many areas of quantum technologies, their development in the high-gain regime is also of interest [9]. In the high-gain regime, the SPDC and SFWM sources can be used as sources of squeezed light, where squeezed light has wide applications in continuous variable (CV) quantum computation [10; 11; 12; 13], quantum communication [14], as well as quantum sensing [15; 16; 17]. Another application of high-gain operation of such sources can be for generating multi-photon Fock states [18; 19], which have applications in many branches of quantum technology as well as metrology and fundamental tests of quantum entanglement [20; 21; 22]. The rapidly growing interest in the utilisation of high-gain nonlinear sources of quantum light in nanostructured and hybrid platforms [23; 24; 25; 26; 27; 28; 13; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68; 69; 70; 71; 72; 73; 74; 75; 76; 77; 78] necessitates the development of new theoretical formalisms to describe high-gain SPDC and SFWM, as such systems can be generally open/lossy with highly complex spatial and spectral properties [29; 30]. 
Formalisms for describing SPDC and SFWM in the high-gain regime in closed systems are well-established, but they either neglect loss altogether [31; 32; 33] or can only account for weak losses that have a negligible effect on the modal structure of the system [9; 34]; they are not capable of treating systems where loss is an inherent part of the system's optical properties, such as nanoresonators [35], plasmonic systems [30], or structures with inherently leaky guided modes [36; 37], or strongly lossy systems that can be encountered in quantum sensing applications [38]. Finally, many established high-gain formalisms use a weak-dispersion approximation for the involved modes or rely on approximate expansions to accommodate higher-order dispersions [9; 32; 39], and do not offer a straightforward path to the inclusion of complex dispersion relations, such as those that can appear in photonic crystals with internal loss or gain, where the combination of loss/gain and evanescent modes creates complex dispersion diagrams [40; 41]. The Green's function (GF) quantisation method [42; 43; 44] offers a way to describe light quantisation in optical systems with arbitrary dispersion, loss, and nanostructuring, as the classical GF of the system incorporated into this quantisation approach naturally takes all these effects into account. In contrast to closed systems, where the normal modes of the field are quantised to obtain photon creation/annihilation operators, the GF method quantises the local bosonic excitations of the combined field and medium system, which allows it to naturally describe any form of loss as a coupling between different field-matter modes. A detailed overview of the GF quantisation procedure can be found in Ref. [45]. The GF formalism has already been used to describe photon-pair generation in plasmonic structures [46] and dielectric nanoresonators and nanoparticles [47; 48; 49], as well as in the presence of quantum emitters and significant loss [50; 38; 51], but in all these cases, it has only been used to describe pair generation in the low-gain regime. In this work, we present a formalism, based on the GF quantisation method, to describe high-gain SPDC in arbitrary open and dispersive optical systems, within the undepleted pump approximation. Our non-perturbative formalism allows the exact calculation of the field operators, while intrinsically taking into account arbitrary dispersion and losses of nanostructured systems through the classical electromagnetic GF of the optical system. This method can be of special interest for the description of engineered squeezed-light generation in nanostructured systems in CV quantum computing applications [13], or for systems where losses cannot be treated perturbatively (e.g. nonlinear metasurfaces for quantum light generation [52; 53; 54]), as well as high-gain quantum sensing and imaging applications, where loss is not necessarily a weak effect [15; 16; 38; 51]. Additionally, our formalism may open the path for the high-gain description of hybrid nonlinear systems, which involve direct interfacing of quantum emitters with nonlinear systems [50]. The paper is organised as follows: In Sec. II we introduce the theoretical framework for describing SPDC in arbitrarily dispersive and lossy systems, with the fields quantised via the GF quantisation method. We derive coupled-mode equations that allow the calculation of the full electric field operator at arbitrary times and amounts of gain (within the undepleted pump approximation).
Then, we introduce a modified formalism for the calculation of frequency-domain field components, which is extremely useful in evaluating the spectral properties of the output quantum state. In Sec. III, as an example application scenario, we apply the formalism to the problem of integrated quantum spectroscopy with undetected photons (QSUP) and investigate the effects of a spectrally localised loss on the spectrum of the output photons of a waveguide SPDC source, operating in the high-gain regime. We show that the performance of the sensing system is gain-dependent and its signatures become more prominent as gain is increased, but also that they saturate at even higher gain values. Finally, in Sec. IV we discuss our results and review the advantages and use cases of our non-perturbative formalism. ## II Non-perturbative description of SPDC in open and dispersive systems We are considering a generally inhomogeneous, dispersive and lossy dielectric system, characterised by the linear permittivity \(\varepsilon(\mathbf{r},\omega)=\varepsilon^{\prime}(\mathbf{r},\omega)+i \varepsilon^{\prime\prime}(\mathbf{r},\omega)\), where \(\varepsilon^{\prime}\) and \(\varepsilon^{\prime\prime}\) are its real and imaginary parts, respectively. To make our expressions less cumbersome, we restrict our present analysis to an isotropic system and note that a generalisation to anisotropic systems is possible by using an appropriately generalised GF quantisation procedure, such as the one derived in Ref. [55]. We consider the system to have a second-order nonlinearity, characterised by the second-order nonlinear susceptibility tensor \(\chi^{(2)}_{ijk}(\mathbf{r})\). In the Heisenberg picture, the SPDC interaction Hamiltonian has the following form [56]: \[\hat{H}_{\text{SPDC}}(t)=\] \[-\varepsilon_{0}\int\mathrm{d}\mathbf{r}\,\chi^{(2)}_{ijk}( \mathbf{r})E^{(-)}_{P,i}(\mathbf{r},t)\hat{E}^{(+)}_{j}(\mathbf{r},t)\hat{E}^ {(+)}_{k}(\mathbf{r},t)\] \[+H.c., \tag{1}\] where \(\hat{E}^{(+)}_{j,k}(\mathbf{r},t)\) are Cartesian components of the positive-frequency part of the generated quantum fields, \(E^{(-)}_{P,i}(\mathbf{r},t)\) are the Cartesian components of the negative-frequency part of the pump field and \(\varepsilon_{0}\) is the dielectric permittivity of vacuum. We emphasise that we are working in the undepleted pump approximation and thus assume the pump field to be an undepleted classical pulse, which is a crucial assumption for finding the input-output operator relations later in this work. Treating the extremely high-gain scenarios, in which the pump can also deplete substantially, results in a more complex dynamics for generation of non-Gaussian quantum states [57; 58]. Advancing the GF formalism to go beyond the undepleted pump approximation could be an interesting direction for future works. As will be the convention in the remainder of the paper, repeated Latin indices are implicitly summed over and the domains of integration for spatial and frequency integrals are \((-\infty,\infty)\) and \([0,\infty)\), respectively, unless otherwise noted in the text. We carry out all of our calculations in the Heisenberg picture and assume that the nonlinear interaction is adiabatically "turned on" at \(t\to-\infty\) and adiabatically "turned off" as \(t\to\infty\). 
To allow for the treatment of open and dispersive systems, the electric field operator is quantised using the GF quantisation method [42; 43; 44]: \[\hat{\mathbf{E}}^{(\pm)}(\mathbf{r},t)=\int\mathrm{d}\omega\,\hat{\underline{\mathbf{E}}}^{(\pm)}(\mathbf{r},\omega,t),\] \[\hat{\underline{E}}^{(+)}_{i}(\mathbf{r},\omega,t)=i\mathcal{K}\,\frac{\omega^{2}}{c^{2}}\] \[\qquad\times\int\mathrm{d}\mathbf{r}^{\prime}\,\sqrt{\varepsilon^{\prime\prime}(\mathbf{r}^{\prime},\omega)}\,G_{ij}(\mathbf{r},\mathbf{r}^{\prime},\omega)\,\hat{f}_{j}(\mathbf{r}^{\prime},\omega,t), \tag{2}\] \[\hat{\underline{E}}^{(-)}_{i}(\mathbf{r},\omega,t)=\left(\hat{\underline{E}}^{(+)}_{i}(\mathbf{r},\omega,t)\right)^{\dagger},\] where \(\mathcal{K}=\sqrt{\frac{\hbar}{\pi\varepsilon_{0}}}\), \(\hat{\underline{\mathbf{E}}}^{(\pm)}(\mathbf{r},\omega,t)\) is the amplitude operator of frequency \(\omega\), \(G_{ij}(\mathbf{r},\mathbf{r}^{\prime},\omega)\) are the matrix elements of the dyadic GF of the dielectric system \(\overleftrightarrow{\mathbf{G}}(\mathbf{r},\mathbf{r}^{\prime},\omega)\), and \(\hat{f}_{j}(\mathbf{r}^{\prime},\omega,t)\) are the Cartesian components of the bosonic annihilation operator \(\hat{\mathbf{f}}(\mathbf{r}^{\prime},\omega,t)\). The GF in Eq. 2 is the solution to the classical Helmholtz equation: \[\left(\nabla\times\nabla\times-\frac{\omega^{2}}{c^{2}}\varepsilon(\mathbf{r},\omega)\right)\,\overleftrightarrow{\mathbf{G}}(\mathbf{r},\mathbf{r}^{\prime},\omega)=\overleftrightarrow{I}\,\delta(\mathbf{r}-\mathbf{r}^{\prime}), \tag{3}\] together with the condition that it vanishes at infinity. In the above equation, \(\overleftrightarrow{I}\) is the identity matrix. The operator \(\hat{\mathbf{f}}(\mathbf{r}^{\prime},\omega,t)\) annihilates a local field-matter excitation of frequency \(\omega\), located at \(\mathbf{r}^{\prime}\). Along with its adjoint \(\hat{\mathbf{f}}^{\dagger}(\mathbf{r}^{\prime},\omega,t)\), it makes up the fundamental operator algebra of the GF quantisation method and obeys the canonical commutation relation: \[\left[\,\hat{f}_{i}(\mathbf{r},\omega,t),\,\hat{f}_{\,j}^{\,\dagger}(\mathbf{r}^{\prime},\omega^{\prime},t)\right]=\delta_{ij}\delta(\mathbf{r}-\mathbf{r}^{\prime})\delta(\omega-\omega^{\prime}). \tag{4}\] Using Eq. 4, it can be shown that the amplitude operators \(\hat{\underline{\mathbf{E}}}^{(\pm)}(\mathbf{r},\omega,t)\) obey the following commutation relation: \[\left[\hat{\underline{E}}^{(+)}_{i}(\mathbf{r},\omega,t),\hat{\underline{E}}^{(-)}_{j}(\mathbf{r}^{\prime},\omega^{\prime},t)\right]=\\ \mathcal{K}^{2}\frac{\omega^{2}}{c^{2}}\operatorname{Im}\left[G_{ij}(\mathbf{r},\mathbf{r}^{\prime},\omega)\right]\delta(\omega-\omega^{\prime}), \tag{5}\] where \(\operatorname{Im}\left[G_{ij}(\mathbf{r},\mathbf{r}^{\prime},\omega)\right]\) is the imaginary part of the Green's function. The full derivation of the above relation can be found in Appendix A. Before proceeding, we note that the explicit results of our non-perturbative approach, presented in this paper, are strictly valid for fields in _non-magnetic systems_. However, the main ideas and steps of our formalism could potentially be adapted for magnetic systems by using an appropriate field quantisation, such as one used in Ref. [59].
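For a concrete sense of the central object \(G_{ij}(\mathbf{r},\mathbf{r}^{\prime},\omega)\) and of the role of \(\operatorname{Im}[G]\) in the commutator Eq. 5, the following minimal Python sketch evaluates the analytic GF of a homogeneous, lossy 1D medium (the scalar form used later in Eq. 31) with a single-resonance Lorentz permittivity. The function names, and the sample parameter values borrowed from Sec. III.A, are our own illustrative choices:

```python
import numpy as np

def eps_lorentz(w, w0=2.1, wpl=0.25, gamma=1e-7):
    """Single-resonance Lorentz permittivity, cf. Eq. (32) without the analyte term."""
    return 1.0 + wpl**2 / (w0**2 - w**2 - 1j * gamma * w)

def green_1d(z, zp, w, eps=eps_lorentz, c=1.0):
    """Scalar GF of a homogeneous 1D medium, cf. Eq. (31); loss enters via Im[eps]."""
    k = (w / c) * np.sqrt(eps(w))              # complex wavenumber
    return 1j / (2.0 * k) * np.exp(1j * k * np.abs(z - zp))

G = green_1d(z=0.0, zp=50.0, w=0.3)            # idler central frequency, normalised units
print(G.imag)                                  # Im[G] fixes the commutator in Eq. (5)
```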
### Equations of motion The Cartesian components of \(\hat{\underline{\mathbf{E}}}^{(\pm)}(\mathbf{r},\omega,t)\) evolve in time according to the Heisenberg equation of motion: \[\partial_{t}\hat{\underline{E}}^{(\pm)}_{i}(\mathbf{r},\omega,t)=\\ \frac{1}{i\hbar}\Big[\hat{\underline{E}}^{(\pm)}_{i}(\mathbf{r},\omega,t),\hat{H}_{0}(t)+\hat{H}_{\text{SPDC}}(t)\Big], \tag{6}\] where \(\hat{H}_{0}(t)=\hbar\int\mathrm{d}\mathbf{r}\int\mathrm{d}\omega\,\omega\,\hat{f}_{\,i}^{\,\dagger}(\mathbf{r},\omega,t)\,\hat{f}_{i}(\mathbf{r},\omega,t)\) is the free-field Hamiltonian in the GF quantisation scheme [43]. The resulting differential equations thus contain terms describing both the nonlinear and the free-field evolution. The latter can be eliminated, and the equations of motion made simpler, by introducing the slowly varying _rotating-frame_ operators \(\bar{\mathbf{E}}^{(\pm)}(\mathbf{r},\omega,t)\), where: \[\hat{\underline{\mathbf{E}}}^{(\pm)}(\mathbf{r},\omega,t)=\bar{\mathbf{E}}^{(\pm)}(\mathbf{r},\omega,t)\,e^{\mp i\omega t}, \tag{7a}\] \[\hat{\mathbf{E}}^{(\pm)}(\mathbf{r},t)=\int\mathrm{d}\omega\,\bar{\mathbf{E}}^{(\pm)}(\mathbf{r},\omega,t)\,e^{\mp i\omega t}. \tag{7b}\] The exponential factors account for the free-field component of the evolution, and the rotating-frame operators \(\bar{\mathbf{E}}^{(\pm)}(\mathbf{r},\omega,t)\) evolve according to: \[\partial_{t}\bar{\mathbf{E}}^{(\pm)}(\mathbf{r},\omega,t)=\frac{1}{i\hbar}\Big[\bar{\mathbf{E}}^{(\pm)}(\mathbf{r},\omega,t),\hat{H}_{\text{SPDC}}(t)\Big]. \tag{8}\] A proof of the above equation, as well as a more formal definition of the rotating-frame operators, is given in Appendix B. Rotating-frame creation and annihilation operators \(\bar{\mathbf{f}}^{(\dagger)}(\mathbf{r},\omega,t)\) are defined equivalently to Eq. 7a and are related to the rotating-frame amplitude operators \(\bar{\mathbf{E}}^{(\pm)}(\mathbf{r},\omega,t)\) by the relation: \[\bar{E}^{(+)}_{i}(\mathbf{r},\omega,t)=\\ i\mathcal{K}\,\frac{\omega^{2}}{c^{2}}\int\mathrm{d}\mathbf{r}^{\prime}\,\sqrt{\varepsilon^{\prime\prime}(\mathbf{r}^{\prime},\omega)}\,G_{ij}(\mathbf{r},\mathbf{r}^{\prime},\omega)\bar{f}_{j}(\mathbf{r}^{\prime},\omega,t), \tag{9}\] and its adjoint for \(\bar{E}^{(-)}_{i}(\mathbf{r},\omega,t)\) and \(\bar{f}^{\dagger}_{j}(\mathbf{r}^{\prime},\omega,t)\). The rotating-frame amplitude operators \(\bar{\mathbf{E}}^{(\pm)}(\mathbf{r},\omega,t)\), as well as the rotating-frame creation/annihilation operators \(\bar{\mathbf{f}}^{(\dagger)}(\mathbf{r}^{\prime},\omega,t)\), can be shown to obey commutation relations equivalent to those of their Heisenberg-picture counterparts: \[\left[\bar{f}_{i}(\mathbf{r},\omega,t),\bar{f}^{\dagger}_{j}(\mathbf{r}^{\prime},\omega^{\prime},t)\right]=\delta_{ij}\delta(\mathbf{r}-\mathbf{r}^{\prime})\delta(\omega-\omega^{\prime}), \tag{10a}\] \[\Big[\bar{E}^{(+)}_{i}(\mathbf{r},\omega,t),\bar{E}^{(-)}_{j}(\mathbf{r}^{\prime},\omega^{\prime},t)\Big]=\\ \mathcal{K}^{2}\frac{\omega^{2}}{c^{2}}\operatorname{Im}\left[G_{ij}(\mathbf{r},\mathbf{r}^{\prime},\omega)\right]\delta(\omega-\omega^{\prime}).
\tag{10b}\] The above relations are further discussed in Appendix B and, from this point onward, operators will be considered exclusively in the rotating frame without explicitly noting the fact, e.g., "rotating-frame amplitude operators" will simply be referred to as "amplitude operators", and so on. We find the equations of motion governing the evolution of the Cartesian components of \(\bar{\mathbf{E}}^{(+)}(\mathbf{r},\omega,t)\) from Eq. 8 (the full derivation is given in Appendix C): \[\partial_{t}\bar{E}^{(+)}_{i}(\mathbf{r},\omega,t)=\\ \int\mathrm{d}\mathbf{r}^{\prime}\int\mathrm{d}\omega^{\prime}\,F_{ij}(\mathbf{r},\omega;\mathbf{r}^{\prime},\omega^{\prime};t)\,\bar{E}^{(-)}_{j}(\mathbf{r}^{\prime},\omega^{\prime},t), \tag{11}\] with the corresponding equation for \(\partial_{t}\bar{E}^{(-)}_{i}(\mathbf{r},\omega,t)\) obtained by simply taking the adjoint of Eq. 11. Here we also defined: \[F_{ij}(\mathbf{r},\omega;\mathbf{r}^{\prime},\omega^{\prime};t)=\frac{2i}{\pi}\frac{\omega^{2}}{c^{2}}\operatorname{Im}\left[G_{il}(\mathbf{r},\mathbf{r}^{\prime},\omega)\right]\\ \times E^{(+)}_{P,k}(\mathbf{r}^{\prime},t)\chi^{(2)}_{klj}(\mathbf{r}^{\prime})\,e^{i(\omega+\omega^{\prime})t}. \tag{12}\] ### Input-output relations The obtained equations of motion Eqs. 11 are _linear_ in terms of the amplitude operators \(\bar{E}^{(\pm)}_{i}(\mathbf{r},\omega,t)\).
\tag{13}\] In the above expression, \(\mathcal{A}_{ij}(\mathbf{r},\omega;\mathbf{\xi},\nu;t)\) and \(\mathcal{B}_{ij}(\mathbf{r},\omega;\mathbf{\xi},\nu;t)\) are the _input-output (IO)-coefficients_. In Eq. 13, \(\,\mathbf{\xi}\text{ and }\nu\,\) are spatial and frequency variables, which enumerate the free-field field-matter local excitations. The distribution of the contributing excitations is determined by the IO-coefficients, which are, in turn, determined by the properties of the system under consideration. A similar IO-relation is often derived in existing works on high-gain SPDC in lossless systems, where the output normal-mode photon creation/annihilation operators are expressed as linear combinations of normal-mode photon operators at the input, and is associated with frequency mixing caused by the nonlinear interaction [31; 32]. In the present context, Eq. 13 does indeed represent the output as a linear combination of excitation of different frequencies, manifested by the frequency integral over \(\nu\). However, it also shows the output to be a linear combination of excitations at different spatial positions, manifested by the spatial integral over \(\mathbf{\xi}\). This is a property inherent to the GF quantisation method and is a consequence of the local nature of the fundamental bosonic modes, which itself enables the treatment of arbitrary open optical systems. In the remainder of the paper, to make expressions more compact, we will combine the variables \(\mathbf{\xi}\) and \(\nu\) into the vector \(\mathbf{\Xi}=(\nu,\mathbf{\xi})\) whenever possible and replace \(\int\mathrm{d}\mathbf{\xi}\int\mathrm{d}\nu\) with \(\int\mathrm{d}\mathbf{\Xi}\). Here, we again emphasise that the linearity of Eqs. 11 in terms of the field operators, necessary for establishing the IO-relation Eq. 13, is a direct consequence of the undepleted pump approximation and treating the pump field as a classical pulse (as shown in Appendix C). The input-output relation 13 allows us to reconstruct the field operators at any time \(t\), if the IO-coefficients \(\mathcal{A}_{ij}(\mathbf{r},\omega,\mathbf{\Xi};t)\) and \(\mathcal{B}_{ij}(\mathbf{r},\omega,\mathbf{\Xi};t)\) have been obtained. Moreover, Eq. 13 can be used to express field-dependent quantities directly in terms of the IO-coefficients. To illustrate this, we consider the first-order field correlation function \(g_{ij}^{(1)}(\mathbf{r},\mathbf{r}^{\prime};t,t^{\prime})=\left\langle\hat{E} _{i}^{(-)}(\mathbf{r},t)\hat{E}_{j}^{(+)}(\mathbf{r}^{\prime},t^{\prime})\right\rangle\) as an example. Here, the expectation value is to be taken in the initial state of the system, since we are working in the Heisenberg picture. To write \(g_{ij}^{(1)}(\mathbf{r},\mathbf{r}^{\prime};t,t^{\prime})\) in terms of the IO-coefficients, we expand the field operators using Eq. 7b and then replace the amplitude operators using the IO-relation Eq. 13. If we assume the initial state to be the vacuum, we obtain (see Appendix D for details): \[g_{ij}^{(1)}(\mathbf{r},\mathbf{r}^{\prime};t,t^{\prime})=\iint \mathrm{d}\omega\,\mathrm{d}\omega^{\prime}\ e^{i\omega t-i\omega^{\prime}t^{ \prime}}\\ \times\int\mathrm{d}\mathbf{\Xi}\,\mathcal{A}_{ik}(\mathbf{r}, \omega,\mathbf{\Xi};t)\mathcal{A}_{jk}^{*}(\mathbf{r}^{\prime},\omega^{\prime}, \mathbf{\Xi};t^{\prime}). 
\tag{14}\] If the system is initially not in the vacuum state, e.g., in the case of a seeded nonlinear process, \(g_{ij}^{(1)}(\mathbf{r},\mathbf{r}^{\prime};t,t^{\prime})\) (and any higher-order correlation function) can still be expressed in terms of the IO-coefficients, however, a non-vacuum initial state will also result in a more complicated expression, more details on this are given in Appendix D. To obtain numerical values of \(g_{ij}^{(1)}(\mathbf{r},\mathbf{r}^{\prime};t,t^{\prime})\), one need only find the IO-coefficients at times \(t\) and \(t^{\prime}\) and insert them into Eq. 14. ### Coupled-mode equations for the IO-coefficients In order to calculate the IO-coefficients at a certain time, we find the coupled differential equations governing their time evolution by inserting Eq. 13 into the Heisenberg equations of motion Eq. 11 as an ansatz to obtain: \[\partial_{t}\mathcal{A}_{ij}(\mathbf{r},\omega,\mathbf{\Xi};t)=\] \[\int\mathrm{d}\mathbf{r}^{\prime}\int\mathrm{d}\omega^{\prime}\ F^{*}_{ik}( \mathbf{r},\omega;\mathbf{r}^{\prime},\omega^{\prime};t)\mathcal{B}_{kj}( \mathbf{r}^{\prime},\omega^{\prime},\mathbf{\Xi};t), \tag{15a}\] \[\partial_{t}\mathcal{B}_{ij}(\mathbf{r},\omega,\mathbf{\Xi};t)=\] \[\int\mathrm{d}\mathbf{r}^{\prime}\int\mathrm{d}\omega^{\prime}\ F_{ik}( \mathbf{r},\omega;\mathbf{r}^{\prime},\omega^{\prime};t)\mathcal{A}_{kj}( \mathbf{r}^{\prime},\omega^{\prime},\mathbf{\Xi};t). \tag{15b}\] The boundary conditions for the IO-coefficients are derived by equating Eq. 13 with Eq. 9 and setting \(t\rightarrow-\infty\) on both sides: \[\mathcal{A}_{ij}(\mathbf{r},\omega;\mathbf{\Xi}\,;t\rightarrow- \infty)=0, \tag{16a}\] \[\mathcal{B}_{ij}(\mathbf{r},\omega;\mathbf{\Xi}\,;t\rightarrow- \infty)=\] \[i\mathcal{K}\,\frac{\nu^{2}}{c^{2}}\ \sqrt{\varepsilon^{\prime\prime}(\mathbf{\xi}, \nu)}G_{ij}(\mathbf{r},\mathbf{\xi},\nu)\delta(\omega-\nu). \tag{16b}\] In most practical cases, Eqs. 15 do not have analytical solutions and have to be solved numerically. To that end, we can rewrite them in a form more appropriate for implementation, which has the added benefit of more clearly distinguishing the different contributions to the output field in the IO-relation of Eq. 13. We begin by decomposing the coefficient \(\mathcal{B}_{ij}(\mathbf{r},\omega,\mathbf{\Xi};t)\) in the following way: \[\mathcal{B}_{ij}(\mathbf{r},\omega,\mathbf{\Xi};t)=b^{(0)}_{ij}(\mathbf{r},\omega,\mathbf{\Xi})+B_{ij}(\mathbf{r},\omega,\mathbf{\Xi};t) \tag{17}\] where \(b^{(0)}_{ij}\) and \(B_{ij}\) satisfy the following conditions: \[b^{(0)}_{ij}(\mathbf{r},\omega,\mathbf{\Xi})\equiv\mathcal{B}_{ij}( \mathbf{r},\omega,\mathbf{\Xi};t\rightarrow-\infty)\] \[\partial_{t}B_{ij}(\mathbf{r},\omega,\mathbf{\Xi};t)=\,\partial_{t} \mathcal{B}_{ij}(\mathbf{r},\omega,\mathbf{\Xi};t),\] \[B_{ij}(\mathbf{r},\omega,\mathbf{\Xi};t\rightarrow-\infty)=0.\] After inserting the decomposition Eq. 17 into Eq. 
13 and relabeling \(\mathcal{A}_{ij}\to A_{ij}\) for notational convenience, the IO-relation takes the form: \[\bar{E}^{(+)}_{i}(\mathbf{r},\omega,t)=\,\bar{E}^{(0,+)}_{i}(\mathbf{r}, \omega)+\int\mathrm{d}\mathbf{\Xi}\left(B_{ij}(\mathbf{r},\omega,\mathbf{\Xi};t)\bar{ f}_{j}(\mathbf{\Xi})+A^{*}_{ij}(\mathbf{r},\omega,\mathbf{\Xi};t)\bar{f}^{\dagger}_{j}( \mathbf{\Xi})\right), \tag{18}\] where \(\,\bar{E}^{(0,+)}_{i}(\mathbf{r},\omega)=\int\mathrm{d}\mathbf{\Xi}\,b^{(0)}_{ij} (\mathbf{r},\omega,\mathbf{\Xi})\bar{f}_{j}(\mathbf{\Xi})\) is exactly the free-field (input) amplitude operator of frequency \(\omega\), since \(\,\bar{\mathbf{E}}^{(+)}(\mathbf{r},\omega,t\rightarrow-\infty)=\,\bar{ \mathbf{E}}^{(0,+)}(\mathbf{r},\omega)\). The remaining two terms in Eq. 18 quantify the changes that the free-field component of frequency \(\omega\) undergoes due to the nonlinear interaction through the "new" set of coefficients - \(A_{ij}(\mathbf{r},\omega,\mathbf{\Xi};t)\) and \(B_{ij}(\mathbf{r},\omega,\mathbf{\Xi};t)\). The differential equations coupling the IO-coefficient \(A_{ij}(\mathbf{r},\omega,\mathbf{\Xi};t)\) and the newly introduced \(B_{ij}(\mathbf{r},\omega,\mathbf{\Xi};t)\) are derived by inserting the decomposition Eq. 17 into Eqs. 15: \[\partial_{t}A_{ij}(\mathbf{r},\omega,\mathbf{\Xi};t)=S^{(0)}_{ij}( \mathbf{r},\omega,\mathbf{\Xi};t)+\] \[\int\mathrm{d}\mathbf{r}^{\prime}\int\mathrm{d}\omega^{\prime}\,F^ {*}_{ik}(\mathbf{r},\omega;\mathbf{r}^{\prime},\omega^{\prime};t)B_{kj}( \mathbf{r}^{\prime},\omega^{\prime},\mathbf{\Xi};t), \tag{19a}\] \[\partial_{t}B_{ij}(\mathbf{r},\omega,\mathbf{\Xi};t)=\] \[\int\mathrm{d}\mathbf{r}^{\prime}\int\mathrm{d}\omega^{\prime}\,F _{ik}(\mathbf{r},\omega;\mathbf{r}^{\prime},\omega^{\prime};t)A_{kj}( \mathbf{r}^{\prime},\omega^{\prime},\mathbf{\Xi};t). \tag{19b}\] The coupled quantities in the above system have the initial values: \(A_{ij}(\mathbf{r},\omega,\mathbf{\Xi};t\rightarrow-\infty)=B_{ij}(\mathbf{r}, \omega,\mathbf{\Xi};t\rightarrow-\infty)=0\) and we defined \(S^{(0)}_{ij}(\mathbf{r},\omega,\mathbf{\Xi};t)=\int\mathrm{d}\mathbf{r}^{\prime} \int\mathrm{d}\omega^{\prime}\,F^{*}_{ik}(\mathbf{r},\omega;\mathbf{r}^{ \prime},\omega^{\prime};t)b^{(0)}_{kj}(\mathbf{r}^{\prime},\omega^{\prime},\bm {\Xi})\), which acts as a source term for equation 19a. After taking the frequency integral, it will have the following form: \[S^{(0)}_{ij}(\mathbf{r},\omega,\mathbf{\Xi};t)=\] \[\frac{2\mathcal{K}}{\pi}\frac{\omega^{2}\nu^{2}}{c^{4}}\sqrt{ \varepsilon^{\prime\prime}(\mathbf{\xi},\nu)}\,e^{-i(\omega+\nu)t}\int\mathrm{d} \mathbf{r}^{\prime}\,\chi^{(2)}_{klm}(\mathbf{r}^{\prime})\] \[\times E^{(-)}_{P,k}(\mathbf{r}^{\prime},t)\mathrm{Im}\left[G_{il} (\mathbf{r},\mathbf{r}^{\prime},\omega)\right]G_{mj}(\mathbf{r}^{\prime},\mathbf{ \xi},\nu). \tag{20}\] The systems of coupled equations 15 and 19, along with their respective boundary conditions are equivalent formulations of our theoretical formalism and are the first part of the main results presented in this work. They are valid in arbitrary dispersive and open nanostructured systems, such as photonic crystals, nanoresonators, metasurfaces and waveguides, where the complex spatial and dispersive properties inherent to such devices can all be accounted for in the electromagnetic GF, as well as arbitrary loss, e.g. radiation leakage, material absorption, scattering losses etc. 
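In practice, Eqs. 19 are solved on discretised position and frequency grids, where the kernel \(F\) becomes a (time-dependent) matrix and the integrals become quadrature-weighted matrix products. The following is a minimal, schematic Python sketch of one explicit time step for a scalar field; all array shapes, names and the toy kernel are our own assumptions, and a production solver would rebuild \(F\) and \(S^{(0)}\) from Eqs. 12 and 20 at every step and use a higher-order integrator:

```python
import numpy as np

def euler_step(A, B, F, S0, w, dt):
    """One explicit Euler step of Eqs. (19) for a scalar field.
    A, B, S0: (N, Nxi) arrays over the flattened (r, omega) grid and input labels Xi;
    F: (N, N) discretised kernel of Eq. (12); w: quadrature weight of the r'/omega' sums."""
    dA = S0 + w * (np.conj(F) @ B)   # Eq. (19a)
    dB = w * (F @ A)                 # Eq. (19b)
    return A + dt * dA, B + dt * dB

# toy run with a random static kernel, for illustration only
rng = np.random.default_rng(0)
N, Nxi = 64, 32
F = 1e-3 * (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N)))
S0 = rng.standard_normal((N, Nxi)) + 0j
A = np.zeros((N, Nxi), complex)
B = np.zeros((N, Nxi), complex)
for _ in range(200):
    A, B = euler_step(A, B, F, S0, w=1.0, dt=1e-3)
```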
To summarise the main results derived in this section: in order to find the output electric field operator of a high-gain SPDC system, our proposed formalism consists of solving the coupled equations 19, which are defined by the system's classical Green's function \(G_{ij}(\mathbf{r},\mathbf{r}^{\prime},\omega)\), linear permittivity \(\varepsilon(\mathbf{r},\omega)\) and nonlinear susceptibility \(\chi^{(2)}_{ijk}(\mathbf{r})\). These properties, along with the pump field \(\mathbf{E}^{(\pm)}_{P}(\mathbf{r},t)\), are used to define the integration kernel \(F_{ik}(\mathbf{r},\omega;\mathbf{r}^{\prime},\omega^{\prime};t)\) and the source term \(S^{(0)}_{ij}(\mathbf{r},\omega,\mathbf{\Xi};t)\), given by Eqs. 12 and 20, respectively. Solving the equations provides the IO-coefficients, which are then inserted into the IO-relation Eq. 18 to obtain the output amplitude operators \(\bar{\mathbf{E}}^{(\pm)}(\mathbf{r},\omega,t)\). These, in turn, enable us to reconstruct the full electric field operator as shown in Eq. 7b. The IO-coefficients can also be used to directly calculate scalar quantities related to the field, such as correlation functions, as was given in Eq. 14. ### Frequency domain formulation The formalism developed in the previous section enables straightforward calculation of output fields and field-related quantities and is valid under very general considerations. However, the fields (and thus all field-dependent quantities) are obtained in the _time domain_, and frequency-domain fields and quantities, e.g., the spectra of output photons, are inefficient to compute numerically from the obtained time-domain quantities. This can be seen by examining, for example, the expression for the single-photon spectrum of the electric field calculated using the IO-coefficients of Eq. 18. The spectrum of a quantised electric field, measured at a time \(t\) and position \(\mathbf{r}_{0}\) and summed over all field polarisations, is given by [61]: \[\sigma(\mathbf{r}_{0},\omega_{0},t)=\iint\limits_{-\infty}^{t}\mathrm{d}t^{\prime}\;\mathrm{d}t^{\prime\prime}\;e^{-i\omega_{0}(t^{\prime}-t^{\prime\prime})}\] \[\times\left\langle\hat{\mathbf{E}}^{(-)}(\mathbf{r}_{0},t^{\prime})\cdot\hat{\mathbf{E}}^{(+)}(\mathbf{r}_{0},t^{\prime\prime})\right\rangle, \tag{21}\] where \(\omega_{0}\) is the frequency at which the spectrum is evaluated. To obtain an expression for \(\sigma(\mathbf{r}_{0},\omega_{0},t)\) in terms of the IO-coefficients \(A_{ij}(\mathbf{r},\omega,\mathbf{\Xi};t)\) and \(B_{ij}(\mathbf{r},\omega,\mathbf{\Xi};t)\), we use the first-order correlation function Eq. 14 (remembering that \(\mathcal{A}_{ij}\equiv A_{ij}\)) and insert it into Eq. 21: \[\sigma(\mathbf{r}_{0},\omega_{0},t)=\iint\limits_{-\infty}^{t}\mathrm{d}t^{\prime}\,\mathrm{d}t^{\prime\prime}\] \[\times\iint\mathrm{d}\omega\,\mathrm{d}\omega^{\prime}\;e^{-i(\omega_{0}-\omega)t^{\prime}}\,e^{i(\omega_{0}-\omega^{\prime})t^{\prime\prime}}\] \[\times\int\mathrm{d}\mathbf{\Xi}\,A_{ij}(\mathbf{r}_{0},\omega,\mathbf{\Xi};t^{\prime})A_{ij}^{*}(\mathbf{r}_{0},\omega^{\prime},\mathbf{\Xi};t^{\prime\prime}). \tag{22}\] Although Eq. 22 theoretically allows one to calculate the spectrum, it can be impractical in a realistic case, when the IO-coefficients have to be obtained numerically.
Namely, in order to perform the \(t^{\prime}/t^{\prime\prime}\) and \(\omega/\omega^{\prime}\) integrals with sufficient numerical accuracy, one requires the values of the coefficient \(A_{ij}(\mathbf{r},\omega,\mathbf{\Xi};t)\) at many narrowly spaced times \(t\) in addition to the other variables \(\omega\) and \(\mathbf{\Xi}\), which dramatically increases the computational resources required for such a calculation. The above limitation is a consequence of the individual frequency amplitudes \(\,\mathbf{\tilde{E}}^{(\pm)}(\mathbf{r},\omega,t)\) not being equivalent to the spectral components of the field that are detected in a spectrally resolved measurement in the presence of field sources, such as during a nonlinear field interaction [61; 62]. To overcome these difficulties and further expand the applicability of our formalism, we develop a frequency-domain formulation, which can be applied in the calculation of various spectral quantities, while still rigorously including the effects of arbitrary loss and dispersion. It focuses on the "spectrally relevant" parts of the field, i.e. special field operators which directly contribute to the spectrum. These are defined as _time-frequency_ transforms of the time-domain fields [62] and, as will be shown in the next sections, evolve in time in a way quite similar (although not identical) to \(\,\mathbf{\tilde{E}}^{(\pm)}(\mathbf{r},\omega,t)\). They also have their own IO-relation, which enables us to use them as fundamental field variables when determining the time evolution of frequency-domain quantities. We begin by noting that the field spectrum Eq. 21 can also be written as: \[\sigma(\mathbf{r}_{0},\omega_{0},t)=\left\langle\mathbf{\tilde{E}}^{(-)}( \mathbf{r}_{0},\omega_{0},t)\mathbf{\tilde{E}}^{(+)}(\mathbf{r}_{0},\omega_{0 },t)\right\rangle, \tag{23}\] where the operators \(\mathbf{\tilde{E}}^{(\pm)}(\mathbf{r},\omega,t)\) have been defined as: \[\mathbf{\tilde{E}}^{(\pm)}(\mathbf{r},\omega,t)=\int_{-\infty}^{t}\mathrm{d}t ^{\prime}\;e^{\pm i\omega t^{\prime}}\,\mathbf{\tilde{E}}^{(\pm)}(\mathbf{r}, t^{\prime}). \tag{24}\] We will refer to them as _filtered_ field operators, due to the definition Eq. 24 being reminiscent of the action of a causal, ideally monochromatic filter on the time-domain field. In the absence of an interaction, they reduce to the free-field amplitude operators \(\,\mathbf{\tilde{E}}^{(0,\pm)}(\mathbf{r},\omega)\) as \(t\rightarrow\infty\). For finite \(t\) and/or with sources (i.e. an interaction) present, their interpretation is more involved and is discussed in [61; 62]. The expression for the spectrum in terms of the filtered field Eq. 23 is immediately more appealing than Eq. 21, as it involves the expectation value of operators at a time \(t\), when the spectrum itself is evaluated. Thus, the need for knowing the fields (or, equivalently, the IO-coefficients) at all times prior to \(t\) is eliminated, provided the filtered fields \(\mathbf{\tilde{E}}^{(\pm)}(\mathbf{r},\omega,t)\) can be obtained without much additional computational complexity, which, as will be shown in the following, is indeed the case. As can be noted from Eq. 24, the filtered field operators are a linear transform of the "regular" \(\,\mathbf{\tilde{E}}^{(\pm)}(\mathbf{r},t)\) and thus of \(\,\mathbf{\tilde{E}}^{(\pm)}(\mathbf{r},\omega,t)\), as well. This enables us to formulate an analogous input-output relation for the filtered fields. To that end, we combine Eq. 18 and Eq. 7b and insert the resulting field decomposition into Eq. 
24. Thus we obtain: \[\tilde{E}_{i}^{(+)}(\mathbf{r},\omega,t)=\tilde{E}_{i}^{(0,+)}(\mathbf{r},\omega)+ \int\mathrm{d}\mathbf{\Xi}\left(\tilde{B}_{ij}(\mathbf{r},\omega,\mathbf{\Xi};t)\bar{f}_ {j}(\mathbf{\Xi})+\tilde{A}_{ij}^{\ast}(\mathbf{r},\omega,\mathbf{\Xi};t)\bar{f}_{j}^{ \dagger}(\mathbf{\Xi})\right), \tag{25}\] where we defined: \[\tilde{E}_{i}^{(0,+)}(\mathbf{r},\omega)=\] \[\int_{-\infty}^{t}\mathrm{d}t^{\prime}\int\mathrm{d}\omega^{\prime }\ e^{i(\omega-\omega^{\prime})t^{\prime}}\bar{E}_{i}^{(0,+)}(\mathbf{r},\omega ^{\prime}), \tag{26a}\] \[\tilde{A}_{ij}(\mathbf{r},\omega,\mathbf{\Xi};t)=\] \[\int_{-\infty}^{t}\mathrm{d}t^{\prime}\int\mathrm{d}\omega^{ \prime}\ e^{-i(\omega-\omega^{\prime})t^{\prime}}A_{ij}(\mathbf{r},\omega^{ \prime},\mathbf{\Xi};t^{\prime}),\] (26b) \[\tilde{B}_{ij}(\mathbf{r},\omega,\mathbf{\Xi};t)=\] \[\int_{-\infty}^{t}\mathrm{d}t^{\prime}\int\mathrm{d}\omega^{ \prime}\ e^{i(\omega-\omega^{\prime})t^{\prime}}B_{ij}(\mathbf{r},\omega^{ \prime},\mathbf{\Xi};t^{\prime}). \tag{26c}\] Here, \(\tilde{E}_{i}^{(0,+)}(\mathbf{r},\omega)\) is the filtered analogue of the free-field amplitudes \(\bar{E}_{i}^{(0,+)}(\mathbf{r},\omega)\) and \(\tilde{A}_{ij}(\mathbf{r},\omega,\mathbf{\Xi};t)\) and \(\tilde{B}_{ij}(\mathbf{r},\omega,\mathbf{\Xi};t)\) are the IO-coefficients of the filtered field. For conciseness, we will refer to them as _filtered IO-coefficients_. An expression for the spectrum in terms of the filtered coefficients can be found by grouping time and frequency integrals with their corresponding \(A_{ij}\) coefficient in Eq. 22 and recognising that the resulting integral quantities exactly match the definitions in Eqs. 26. The expression thus obtained is: \[\sigma(\mathbf{r}_{0},\omega_{0},t)=\int\mathrm{d}\mathbf{\Xi}\,\tilde{A}_{ij}( \mathbf{r}_{0},\omega_{0},\mathbf{\Xi};t)\tilde{A}_{ij}^{\ast}(\mathbf{r}_{0}, \omega_{0},\mathbf{\Xi};t). \tag{27}\] As expected, we no longer require knowledge of the IO-coefficients at narrowly spaced times, unlike Eq. 22, - we only need the filtered coefficients at time \(t\). To find their values at a given time, we obtain a new set of coupled equations by combining the definitions Eqs. 26 with the system Eq. 19 (the full derivation can be found in Appendix E): \[\partial_{t}\tilde{A}_{ij}(\mathbf{r},\omega,\mathbf{\Xi};t)=\tilde{ S}_{ij}^{(0)}(\mathbf{r},\omega,\mathbf{\Xi};t)\] \[+\int\mathrm{d}\mathbf{r}^{\prime}\int\mathrm{d}\omega^{\prime}\, \tilde{F}_{ik}^{\ast}(\mathbf{r},\omega;\mathbf{r}^{\prime},\omega^{\prime};t) \tilde{B}_{kj}(\mathbf{r}^{\prime},\omega^{\prime},\mathbf{\Xi};t), \tag{28a}\] \[\partial_{t}\tilde{B}_{ij}(\mathbf{r},\omega,\mathbf{\Xi};t)=\] \[\int\mathrm{d}\mathbf{r}^{\prime}\int\mathrm{d}\omega^{\prime}\, \tilde{F}_{ik}(\mathbf{r},\omega;\mathbf{r}^{\prime},\omega^{\prime};t)\tilde{A }_{kj}(\mathbf{r}^{\prime},\omega^{\prime},\mathbf{\Xi};t), \tag{28b}\] where \(\tilde{S}_{ij}^{(0)}(\mathbf{r},\omega,\mathbf{\Xi};t)\) is derived from the source term \(S_{ij}^{(0)}(\mathbf{r},\omega,\mathbf{\Xi};t)\), present in Eq. 
19a and is defined as: \[\tilde{S}_{ij}^{(0)}(\mathbf{r},\omega,\mathbf{\Xi};t)=2i\mathcal{K} \,\frac{\nu^{2}}{c^{2}}\sqrt{\varepsilon^{\prime\prime}(\mathbf{\xi},\nu)} \tag{29}\] \[\times\int\mathrm{d}\mathbf{r}^{\prime}\,\chi_{klm}^{(2)}( \mathbf{r}^{\prime})G_{mj}(\mathbf{r}^{\prime},\mathbf{\xi},\nu)\,e^{-i(\omega+ \nu)t}\] \[\times\int\mathrm{d}\omega_{p}\,\frac{(\omega_{p}-\nu)^{2}}{c^{2}} \mathcal{E}_{P,k}^{(-)}(\mathbf{r}^{\prime},\omega_{p})G_{il}^{\ast}(\mathbf{r },\mathbf{r}^{\prime},\omega_{p}-\nu)\,e^{i\omega_{p}t}.\] Here, \(\mathcal{E}_{P,k}^{(\pm)}(\mathbf{r},\omega_{p})\) is the Fourier transform of the pump amplitude, defined by \(\mathbf{E}_{P}^{(\pm)}(\mathbf{r},t)=\int\mathrm{d}\omega_{p}\,\mathcal{E}_{P }^{(\pm)}(\mathbf{r},\omega_{p})\,e^{\mp i\,\omega_{p}t}\). The integral kernel \(\tilde{F}_{ik}(\mathbf{r},\omega;\mathbf{r}^{\prime},\omega^{\prime};t)\) which couples the two coefficients in Eq. 28 is given by: \[\tilde{F}_{ik}(\mathbf{r},\omega;\mathbf{r}^{\prime},\omega^{\prime };t)=\frac{2i}{\pi}\int\mathrm{d}\bar{\omega}\,\frac{\bar{\omega}^{2}}{c^{2}}\,e ^{i(\omega-\bar{\omega})t}\] \[\times\mathrm{Im}\left[G_{im}(\mathbf{r},\mathbf{r}^{\prime},\bar{ \omega})\right]\mathcal{E}_{P,l}^{(+)}(\mathbf{r}^{\prime},\omega^{\prime}+ \bar{\omega})\chi_{lmk}^{(2)}(\mathbf{r}^{\prime}), \tag{30}\] and the initial conditions for the filtered coefficients are easily inferred from the definitions 26: \[\tilde{A}_{ij}(\mathbf{r},\omega,\mathbf{\Xi};t\rightarrow-\infty)=\tilde{B}_{ij}( \mathbf{r},\omega,\mathbf{\Xi};t\rightarrow-\infty)\equiv 0.\] The coupled equations Eqs. 28 are the second part of our main results and can be summarised as follows: to find the output field in the frequency domain, one must solve the coupled equations 28 to obtain the filtered IO-coefficients. As was the case in the time-domain formulation, the properties of the optical system and the pump field are embedded in the source term \(\tilde{S}_{ij}^{(0)}(\mathbf{r},\omega,\mathbf{\Xi};t)\), defined in Eq. 29, and the integration kernel \(\tilde{F}_{ik}(\mathbf{r},\omega;\mathbf{r}^{\prime},\omega^{\prime};t)\), given by Eq. 30. The filtered IO-coefficients can then be used to reconstruct the frequency domain field operators using the IO-relation Eq. 25. Alternatively, the filtered IO-coefficients can also be used to directly evaluate frequency-domain expectation values, such as the field spectrum Eq. 27, discussed in the beginning of this section, or field moments of the form \(\left\langle\tilde{E}_{i}^{(-)}(\mathbf{r},\omega,t)\tilde{E}_{j}^{(+)}(\mathbf{r}, \omega^{\prime},t)\right\rangle\) and \(\left\langle\tilde{E}_{i}^{(+)}(\mathbf{r},\omega,t)\tilde{E}_{j}^{(+)}(\mathbf{r}, \omega^{\prime},t)\right\rangle\). The field moments can be used to reconstruct the joint spectral amplitude (JSA) of the output state of the system [9], which is extremely useful when investigating high-gain effects, e.g. squeezing [56]. To verify the results of our frequency-domain approach, we find the low-gain analytical solutions to Eqs. 28 and insert them into Eq. 27 to obtain the low-gain signal photon spectrum. These calculations are shown in Appendix F and the obtained expression for the spectrum, shown in Eq. F5, matches the results obtained in previous works on low-gain SPDC that used the GF quantisation formalism [46; 50]. 
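As a usage illustration of Eq. 27: once Eqs. 28 have been integrated up to the observation time \(t\), the spectrum is a single quadrature over the discretised excitation labels \(\mathbf{\Xi}\). A minimal Python sketch for a scalar field follows; the array shapes and names reflect our discretisation, not the formalism itself:

```python
import numpy as np

def spectrum(A_tilde, dxi):
    """Single-photon spectrum, Eq. (27): sigma(w0) = int dXi |A_tilde(r0, w0, Xi; t)|^2.
    A_tilde: (Nw0, Nxi) array of filtered IO-coefficients at fixed r0 and t."""
    return dxi * np.sum(np.abs(A_tilde) ** 2, axis=-1)

# A_tilde would come from integrating Eqs. (28); here a placeholder array
Nw0, Nxi = 128, 32
A_tilde = np.full((Nw0, Nxi), 0.1 + 0.0j)
sigma = spectrum(A_tilde, dxi=1.0)   # one value per detection frequency w0
```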
## III Integrated quantum spectroscopy with undetected photons in the high-gain regime Quantum spectroscopy with undetected photons (QSUP) is a spectroscopic technique which relies on frequency-entangled, non-degenerate photon pairs: only one of the entangled photons (e.g., the idler) interacts with the measured sample, but the effects of the sample's dispersion and/or absorption can be studied by detecting the other photon (e.g., the signal). This method can be used to overcome the limited availability of high-sensitivity detectors in certain frequency regions, e.g., infrared or terahertz [63; 64; 65]. In Ref. [38], a QSUP scheme was proposed in which a material to be sensed is placed in the near-field of a waveguide SPDC source. In such a configuration, when the material has a spectrally localised absorption line around the frequency range of the idler photons, it also affects the spectral properties of the generated signal photons [38; 50]. In the aforementioned work, the proposed scheme was investigated perturbatively, in the low-gain regime of SPDC. Using our formalism, we can now extend the investigation into the high-gain regime through numerical simulations of integrated QSUP at varying amounts of gain, and show that the spectral sensitivity of the scheme improves at higher parametric gains. We note here that a similar improvement of sensitivity at high gain has already been observed in other schemes involving measurements with undetected photons, such as quantum imaging and optical coherence tomography systems based on nonlinear interferometers and induced coherence [66; 67; 68]. In these types of schemes, a sample is placed between two coherently pumped high-gain SPDC sources and interacts with the output idler arm of the first source after generation. In contrast, the integrated QSUP scheme proposed and investigated in the low-gain regime in [38] involves an analyte sample interacting with the nonlinear waveguide during the pair-production process, so that the spectroscopic information about the sample is imparted onto the spectrum of the produced photon pairs. The frequency-domain formulation of our formalism is ideally suited for investigating this kind of scheme, as it enables us to study the spectral properties of the output photons at high parametric gains in the presence of significant, spectrally varying loss that directly affects the SPDC process at the generation stage. We consider the system shown schematically in Fig. 1: a periodically poled nonlinear waveguide, oriented along the \(z\)-axis, is excited by a pump pulse with a Gaussian temporal envelope. For simplicity, we neglect any polarisation or transverse dependence and assume that the waveguide is homogeneous, dispersive and lossy, characterised by the complex, frequency-dependent permittivity \(\varepsilon(\omega)\), which includes the effects of both the waveguide and an analyte interacting with the near-field of the waveguide modes. Under the assumptions noted here, the GF of the waveguide has the analytical form: \[G(z,z^{\prime},\omega)=\frac{i}{2\frac{\omega}{c}\sqrt{\varepsilon(\omega)}}\,e^{i\frac{\omega}{c}\sqrt{\varepsilon(\omega)}\,|z-z^{\prime}|}.
\tag{31}\] Since many relevant analytical properties of the GF are direct consequences of the Kramers-Kronig relations between the real and imaginary parts of the dielectric permittivity, absorption and dispersion of the model medium must be consistent with the aforementioned relations to ensure physically meaningful results [43]. Accordingly, we use a Lorentz model for the permittivities of the waveguide dielectric and the analyte [69]. The permittivity of the combined waveguide and analyte system has the form: \[\varepsilon(\omega)=1+\frac{\Omega_{Pl}^{2}}{(\Omega_{0}^{2}-\omega^{2}-i \Gamma\omega)}+\alpha\,\varepsilon_{\text{loss}}(\omega) \tag{32}\] where \(\Omega_{Pl}\) is the plasma frequency of the waveguide dielectric, while \(\Omega_{0}\) and \(\Gamma\) are the frequency and width of the dielectric resonance, respectively. The term \(\varepsilon_{\text{loss}}(\omega)=\frac{\Omega_{Pl}^{2}}{(\omega_{\text{loss }}^{2}-\omega^{2}-i\gamma\omega)}\) models the spectrally localised loss of the analyte, whose magnitude is modulated through the unitless factor \(\alpha\). Here, \(\omega_{\text{loss}}\) and \(\gamma\) are the frequency and width of the analyte resonance, respectively. The plasma frequency of the analyte is assumed to be the same as that of the waveguide for simplicity. For the incoming pump pulse, we use the frequency-domain form: \(\mathcal{E}_{P}^{(+)}(z,\omega)=E_{0}\sqrt{\tau_{p}}\,e^{-2\tau_{p}^{2}(\omega _{p0}-\omega)^{2}}\,e^{ik(\omega)z}\). Here, \(\omega_{p0}\) is the pump central frequency, \(\tau_{p}\) is the temporal width of the pulse, \(k(\omega)=\frac{\omega}{c}\sqrt{\varepsilon(\omega)}\) and \(E_{0}\) is a normalisation constant determined by the total energy of the pump pulse \(U_{0}\), where \(U_{0}\propto\left|E_{0}\right|^{2}\).

Figure 1: Schematic representation of the integrated quantum spectroscopy process. A nonlinear waveguide SPDC source of length \(L\) produces signal (blue) and idler (red) photons when excited by a pump pulse (purple). The idler mode, with a larger wavelength and consequently broader mode profile, interacts with the analyte (pale red), which has an absorption line in the frequency range of the idler photons. At the output of the waveguide, the signal and idler photons are separated by a beamsplitter which directs the signal photons into a spectrally-resolving detector while the idler photons remain undetected. The periodic light-blue-green regions represent the periodically poled region of the nonlinear waveguide, allowing efficient phase-matching within the length \(L\).

### Simulation parameters

All of the simulation parameters given here were normalised in the following manner: frequency quantities are expressed in units of \(\omega_{p0}\), temporal quantities in units of \(\frac{1}{\omega_{p0}}\) and spatial quantities in units of \(\frac{2\pi c}{\omega_{p0}}\) (the central wavelength of the pump in vacuum). The nonlinear region of the waveguide is centred at the coordinate origin with the length \(L=3\times 10^{4}\) and is periodically poled with a spatial dependence \(\chi^{(2)}(z)=\chi_{m}^{(2)}\cos\Lambda z\), where \(\chi_{m}^{(2)}\) is the maximum absolute value of the nonlinear permittivity and \(\Lambda\) is the poling period, chosen such that the central phase-matched frequencies of the signal and idler photons are \(\omega_{s0}=0.7\) and \(\omega_{i0}=0.3\), respectively.
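For orientation, the permittivity model of Eq. 32 and the Green's function of Eq. 31 are simple enough to reproduce directly. The sketch below does so in Python under the normalisation above (so that \(\omega/c\) becomes \(2\pi\omega\)), using the numerical parameter values quoted in the following paragraphs; it is an illustration, not the authors' code.

```python
# A minimal sketch of the Lorentz permittivity (Eq. 32) and the analytical 1D
# Green's function of the lossy waveguide (Eq. 31), in the paper's normalised
# units where omega/c -> 2*pi*omega. Parameter values are those quoted below.
import numpy as np

OMEGA_0, OMEGA_PL, GAMMA = 2.1, 0.25, 1e-7   # waveguide dielectric resonance
OMEGA_LOSS, GAMMA_LOSS = 0.3, 2.5e-3         # analyte resonance

def permittivity(w, alpha=0.0):
    """Combined waveguide + analyte permittivity, Eq. (32)."""
    eps_wg = OMEGA_PL**2 / (OMEGA_0**2 - w**2 - 1j * GAMMA * w)
    eps_loss = OMEGA_PL**2 / (OMEGA_LOSS**2 - w**2 - 1j * GAMMA_LOSS * w)
    return 1.0 + eps_wg + alpha * eps_loss

def green_fn(z, zp, w, alpha=0.0):
    """1D Green's function of the homogeneous lossy medium, Eq. (31)."""
    k = 2 * np.pi * w * np.sqrt(permittivity(w, alpha))  # role of (w/c)*sqrt(eps)
    return 1j / (2 * k) * np.exp(1j * k * np.abs(z - zp))
```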
The resonance frequency of the analyte was set to be identical to the idler central phase-matching frequency, \(\omega_{\text{loss}}=\omega_{i0}=0.3\), to make the effects of the loss more prominent. The width of the loss spectrum was chosen to be \(\gamma=2.5\times 10^{-3}\). The waveguide parameters in \(\varepsilon(\omega)\) were set to be \(\Omega_{0}=2.1\), \(\Omega_{Pl}=0.25\) and \(\Gamma=10^{-7}\), in order to satisfy the following conditions:

1. the resonance of the dielectric is far above the frequency region of interest and is narrow enough so that the dielectric is effectively lossless for frequencies around and below \(\omega_{p0}\);
2. the length of the nonlinear region, in combination with the refractive index of the dielectric, ensures a phase-matching bandwidth significantly wider than the width of the loss spectrum;
3. the pump temporal width was chosen to be \(\tau_{p}=2400\), resulting in a pump bandwidth sufficiently narrow compared to the loss spectrum to allow for observing the spectral correlation between the signal photon spectrum and the spectrum of the loss.

The strength of the nonlinear interaction, which contributes to determining the parametric gain, is characterised by the product \(\chi_{m}^{(2)}E_{0}\propto\chi_{m}^{(2)}\sqrt{U_{0}}\), which is varied to simulate SPDC at different values of parametric gain. In our case, this is done by adjusting the pump pulse energy \(U_{0}\), while keeping the maximum nonlinear susceptibility \(\chi_{m}^{(2)}\) constant. Finally, all of our simulations were performed for a sufficiently long time interval, in which the pump has had enough time to completely pass through the structure.

### Lossless medium

We begin our investigation by studying the single-photon spectrum of SPDC at different values of parametric gain in the "lossless" case, i.e. without an analyte (the loss associated with the waveguide dielectric, although negligibly small in the frequency region of interest, is nevertheless accounted for in \(\varepsilon(\omega)\)). For different values of pump pulse energy \(U_{0}\), we calculate the IO-coefficients by numerically solving Eqs. 28, which are then used to evaluate the spectrum, as per Eq. 27. To extract the values of parametric gain associated with different pump intensities, we follow the prescription detailed in Ref. [33]: we plot the dependence of the maximal intensity of the single-photon spectrum \(\mathcal{I}_{m}\) (at \(\omega_{s}=\omega_{s0}=0.7\)) as a function of the pump energy \(U_{0}\) and fit it with the well-known dependence of the single-mode intensity in the case of a two-mode squeezer: \(\mathcal{I}_{m}=a\sinh^{2}\left(b\sqrt{U_{0}}\right)\), where \(a\) and \(b\) are fitting parameters. The parametric gain \(\mathcal{G}\) for a particular value of pump energy is then defined as \(\mathcal{G}=b\sqrt{U_{0}}\). In Fig. 2a, we show the dependence of the spectrum maximum on pump pulse energy for pump pulses of two different bandwidths, defined by their temporal widths \(\tau_{p}=2400\) and \(\tau_{p}=600\), and observe that they both indeed follow the \(\sinh^{2}\) law. The wider-bandwidth pulse, corresponding to \(\tau_{p}=600\), was included in our simulations to make sure our formalism correctly predicts that a wider pump spectrum results in higher parametric gain for the same total pulse energy [33; 56]. The inset of Fig. 2a shows that the maximal intensity is approximately independent of the pump bandwidth at low pump energies, again in accordance with previous observations [33; 56].
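The gain-extraction step lends itself to a brief illustration. The sketch below (a stand-in for the actual pipeline, with synthetic data) fits \(\mathcal{I}_{m}=a\sinh^{2}(b\sqrt{U_{0}})\) to a set of (pump energy, spectral maximum) pairs and reads off \(\mathcal{G}=b\sqrt{U_{0}}\).

```python
# A sketch of the gain-extraction prescription of Ref. [33]: fit the spectral
# maximum versus pump energy with a*sinh^2(b*sqrt(U0)) and define G = b*sqrt(U0).
# The "data" below are synthetic stand-ins for simulated spectra.
import numpy as np
from scipy.optimize import curve_fit

def peak_model(U0, a, b):
    return a * np.sinh(b * np.sqrt(U0))**2

pump_energies = np.linspace(0.1, 10.0, 15)                   # hypothetical U0 grid
rng = np.random.default_rng(0)
peak_intensities = peak_model(pump_energies, 0.8, 1.1) \
    * (1 + 0.02 * rng.standard_normal(pump_energies.size))   # noisy stand-in data

(a_fit, b_fit), _ = curve_fit(peak_model, pump_energies, peak_intensities, p0=(1.0, 1.0))
gain = b_fit * np.sqrt(pump_energies)                        # parametric gain per energy
```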
The dependence of the parametric gain on the pulse energy is shown in Fig. 2b, where the higher amount of parametric gain obtained for wider pump bandwidths is evident. Aside from the behaviour of the spectrum amplitudes, our formalism also correctly predicts the broadening of the single- and two-photon spectra, occurring as gain is increased [33; 56]. The broadening of the two-photon spectrum can be observed by studying the second-order moment of the output field \(N(\omega,\omega^{\prime})=\left\langle\mathbf{\tilde{E}}^{(+)}(\mathbf{r}_{0},\omega,t)\cdot\mathbf{\tilde{E}}^{(+)}(\mathbf{r}_{0},\omega^{\prime},t)\right\rangle\), which is related to the JSA of the output photons [70]. We can calculate the moment directly in terms of the IO-coefficients using the expression: \[\left\langle\mathbf{\tilde{E}}^{(+)}(\mathbf{r}_{0},\omega,t) \mathbf{\tilde{E}}^{(+)}(\mathbf{r}_{0},\omega^{\prime},t)\right\rangle=\] \[\int\mathrm{d}\mathbf{\Xi}\,\tilde{B}_{ij}(\mathbf{r}_{0},\omega, \mathbf{\Xi};t)\tilde{A}_{ij}^{\ast}(\mathbf{r}_{0},\omega^{\prime},\mathbf{\Xi} ;t), \tag{33}\] which is obtained by expanding the filtered field operators using Eq. 25 and taking the expectation value. \(N(\omega_{i},\omega_{s})\), in the case of the wide-band pump (\(\tau_{p}=600\)), is shown in Fig. 3a for two values of parametric gain, and we observe that higher gain indeed results in the moment becoming broader in frequency and, correspondingly, in the output photons exhibiting weaker frequency correlations. Additionally, in Fig. 3b, we show the normalised single-photon spectrum at different values of gain for the wide-band pump. As gain is increased, the spectrum broadens, in accordance with previous results on high-gain lossless SPDC [33; 56].

### Lossy medium

To investigate the effects of increasing gain in QSUP, we introduce the analyte into the material permittivity \(\varepsilon(\omega)\) and calculate signal photon spectra at the output. In Fig. 4a we show the output spectra without the analyte and in Fig. 4b we show the spectra with the analyte present. The effect of the idler loss is immediately evident, as the spectra show a dip in the signal intensity, centred around the frequency \(\omega^{(c)}=0.7\), which corresponds to the resonance frequency of the analyte \(\omega_{\text{loss}}\) through the conservation of energy \(\omega_{p0}=\omega^{(c)}+\omega_{\text{loss}}\). Both sets of spectra were obtained for the narrow-band pump with \(\tau_{p}=2400\). The appearance of the spectral dip in the presence of idler loss is in accordance with the perturbative results of Ref. [38], where the perturbative calculations also suggested that the spectral shape of the dip (i.e., its relative depth and width) is independent of gain. However, using our non-perturbative formalism, we observe that the depth and spectral width of the dip do change as gain is increased. To quantify and investigate this behaviour we calculate the signal photon extinction spectrum (defined as the difference between the lossless and lossy spectra) for different values of gain. The extinction spectra and their properties at different values of gain are shown in Fig. 5. We observe two main tendencies: the maximal extinction (equivalent to the relative depth of the dip in the signal spectrum at \(\omega_{s}=\omega^{(c)}=0.7\)) _increases_ with gain, while the width of the dip _decreases_.
In Fig. 5a, we show extinction spectra obtained at various amounts of gain, which are normalised to 1 to better showcase the change in the spectral width as gain is increased. The lineshape of the analyte absorption spectrum (i.e. \(\text{Im}\left[\varepsilon_{\text{loss}}\right]\)) is also shown for comparison. The dependence of the extinction maximum and its full-width-at-half-maximum (FWHM) on parametric gain is shown in Fig. 5b. We observe that the maximum monotonically increases with gain, but at a progressively slower rate as gain rises, and seems to show signs of saturation at very high gain values. On the other hand, the extinction FWHM decreases with gain (also shown in Fig. 5b) and shows signs of saturation at higher gain values as well.

Figure 2: (**a**) The dependence of the spectral maximum at \(\omega=0.7\) on the total pump energy for the narrow-band pump (red squares) with \(\tau_{p}=2400\) and wide-band pump (blue circles) with \(\tau_{p}=600\). The solid lines indicate the fitted \(\sinh^{2}\left(b\sqrt{U_{0}}\right)\) and the inset shows the intensity dependencies at very low pump pulse energies. In this regime, the behaviour of the spectrum becomes independent of the pump bandwidth. (**b**) The dependence of the observed parametric gain on the square root of the pump energy for the narrow- (red squares) and wide-band pump pulses (blue circles). In accordance with previous works, the wide-band pump results in higher parametric gain for a given pump pulse energy.

Figure 3: (**a**) The normalised second-order field moment \(N(\omega_{i},\omega_{s})\) for the wide-band pump (\(\tau_{p}=600\)) at low parametric gain (\(\mathcal{G}=0.0915\)) and high parametric gain (\(\mathcal{G}=3.54\)). The output state at higher parametric gain exhibits weaker frequency correlations between the photons due to gain-induced broadening. (**b**) Normalised single-photon spectrum of the signal photons at the output (each spectrum normalised to its maximum) for the wide-band pump with \(\tau_{p}=600\) at varying gain.

Both the monotonic increase of the extinction maximum and the decrease of its FWHM can be explained as a consequence of self-seeding being impeded by the analyte loss: idler photons with frequencies within the loss spectrum of the analyte are removed from the idler field before they have a chance to seed the production of further photons at those frequencies. This results in the signal photon intensity at frequencies affected by the loss increasing at a lower rate (with increasing gain) compared to the lossless case. On the other hand, the intensity at frequencies unaffected by the loss increases at the "lossless" rate, resulting in a net increase of the maximum extinction. Analogously, the decreasing extinction FWHM with gain can also be seen as a consequence of self-seeding being impeded in the presence of loss, but here we also need to take into account the shape of the loss spectrum. The lineshape of the analyte absorption spectrum dictates that idler photons experience less loss the further their frequency is from the analyte resonance \(\omega_{\text{loss}}\). The idler photons further from resonance thus have a lower chance of being absorbed before seeding further pairs, leading to the signal photon intensities increasing faster (with gain) for frequencies further away from \(\omega^{(c)}\), resulting in a net narrowing of the extinction spectrum.
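The two quantities tracked in Fig. 5b follow from a simple post-processing step, illustrated below with synthetic stand-in spectra (a sketch, not the analysis code used for the figures).

```python
# A sketch of the post-processing behind Fig. 5: the extinction spectrum is the
# difference between the lossless and lossy signal spectra; its maximum and
# FWHM are then read off on the frequency grid. All data here are synthetic.
import numpy as np

def extinction_stats(freqs, spec_lossless, spec_lossy):
    """Return (maximum extinction, FWHM) of spec_lossless - spec_lossy."""
    ext = spec_lossless - spec_lossy
    peak = ext.max()
    above = freqs[ext >= 0.5 * peak]       # grid points above half maximum
    return peak, above.max() - above.min()

freqs = np.linspace(0.65, 0.75, 501)
lossless = np.exp(-((freqs - 0.7) / 0.02)**2)                        # toy spectrum
lossy = lossless * (1 - 0.3 * np.exp(-((freqs - 0.7) / 0.005)**2))   # toy dip
print(extinction_stats(freqs, lossless, lossy))
```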
As mentioned at the beginning of this section, sensitivity enhancement in the high-gain regime has already been observed for undetected-photon measurement schemes based on induced coherence and nonlinear interferometers. The nature of the enhancement in the case of induced coherence is discussed in detail in Ref. [66], where the authors conclude that it also occurs due to seeding effects: idler photons from the first source, after interacting with the sample, seed further pairs in the second source and thus impart the effects of the sample onto those newly generated photons. The observed "saturation" of both the extinction maximum and FWHM at very high gain has, to our knowledge, not been reported before, but it too can be understood as another consequence of self-seeding, more specifically, high-order self-seeding. In such cases, idler (signal) photons of frequency \(\omega\) seed the generation of further signal (idler) photons, not only at the frequency \(\omega_{p0}-\omega\), but also at frequencies around it. At sufficiently high values of gain, the idler (signal) photons thus generated can seed further signal (idler) photons, and so on. This enables signal and idler photons unaffected by the loss to potentially seed the production of additional photons within the extinction spectrum. Eventually, these high-order effects compensate the gain reduction experienced by signal photons within the extinction region, resulting in the spectral shape of the extinction saturating at very high parametric gain. As these effects correspond to higher-order processes, they only appear at sufficiently high gain values (\(\mathcal{G}>2\) in our case), but become more prominent as gain is further increased.

Figure 4: Normalised single-photon spectrum of the signal photons for the narrow-band pump with \(\tau_{p}=2400\) at varying amounts of parametric gain, (**a**) without the analyte, and (**b**) with the analyte. The analyte loss is centred around \(\omega_{\text{loss}}=0.3\). In the lossy case, each spectrum is normalised to its lossless counterpart with the same gain, in order to better showcase the changes in the spectral shape of the dip.

Figure 5: (**a**) The extinction spectra of the signal photon at different values of parametric gain (solid lines) and the line-shape of the idler loss, namely \(\text{Im}\left[\varepsilon_{\text{loss}}\right]\) (dashed line). The plots are all normalised to their own maximum amplitude to showcase the extinction spectrum becoming narrower as gain is increased. (**b**) The dependence of the extinction maximum (blue circles) and the FWHM of the extinction spectrum (red squares) on parametric gain. The solid lines are interpolation curves to help illustrate the observed tendencies of the extinction spectrum.

### Spatial evolution of the spectrum

The spectra given and discussed in Sec. III.2 and III.3 were obtained assuming a detector located immediately at the output of the nonlinear waveguide; however, our formalism also allows the study of the spectrum of the produced photons throughout the entire length of the nonlinear medium, all obtained by solving Eqs. 28 once, for a given value of parametric gain. Although we do not use the evolution of the spectrum within the nonlinear waveguide to obtain new insight in the context of QSUP, it can be used to showcase that our non-perturbative formalism can predict the spatial behaviour of the field and field-related quantities.
In Fig. 6a, we show the evolution of the signal photon spectrum as the photons propagate through the nonlinear waveguide without the analyte present. At the start of the waveguide, the spectrum has a very wide bandwidth and negligible amplitude; as we go forward in length, we see phase matching taking hold, reducing the bandwidth and increasing the amplitude. Additionally, in Fig. 6b we see the effect of the idler loss accumulating during propagation, causing a dip in spectral intensity that deepens with propagation, in accordance with the discussion in Sec. III.3. Finally, in Fig. 6c, we show the spatial behaviour of the spectral maximum of the signal photons for increasing values of gain, in the lossless case. We observe that, in the low-gain regime, the spectral maximum obeys the well-established quadratic law, where the intensity is \(\propto l^{2}\), \(l\) being the length of the nonlinear region. As gain is increased, the exponential nature of the length dependence of the maximum spectral intensity becomes more apparent.

Figure 6: The normalised spectral intensity of the signal photons as a function of their frequency and position within the nonlinear waveguide in the (**a**) lossless and (**b**) lossy cases. Both plots were obtained for the narrow-band pump (\(\tau_{p}=2400\)) and parametric gain of \(\mathcal{G}=1.33\). (**c**) Normalised maximum spectral intensities of the signal photons as a function of position within the nonlinear waveguide, obtained for the narrow-band pump (\(\tau_{p}=2400\)) in the lossless case, for different values of gain.

## IV Conclusion

In summary, we presented a non-perturbative formalism for the description of high-gain SPDC, applicable to a wide variety of nanostructured and/or open optical systems with arbitrarily complex spatial and spectral properties. Our formalism enables the calculation of the field operators and field-dependent quantities (e.g., correlation functions and spectra) at the output of and within the nonlinear optical system. In contrast to previous methods, our formalism is capable of describing systems with arbitrary amounts of dispersion and loss, through the use of the Green's function quantisation method, which intrinsically takes these effects into account. We presented both a time-domain and a frequency-domain formulation, which makes the formalism applicable to many types of temporal and spectral analyses in the field of quantum technologies. As an example, we used this formalism to investigate quantum spectroscopy with undetected photons in a nonlinear waveguide surrounded by an analyte with a loss spectrum corresponding to the frequencies of one of the output photons. We have thus expanded upon the results of previous low-gain investigations [38] and discovered that the spectral sensitivity of the QSUP scheme is heavily dependent on the amount of parametric gain and can be improved by operating the nonlinear waveguide in the high-gain regime. Although derived here for the case of SPDC, the formalism can be generalised to treat high-gain SFWM in lossy and dispersive systems, since the SFWM interaction Hamiltonian has an operator structure identical to Eq. 1 when considered in the undepleted pump approximation [71]. We believe the formalism presented here will advance the design and implementation of nonlinear sources of quantum light in many emerging quantum technologies, especially ones implemented on nanostructured platforms.
In addition to integrated squeezed light and high-order Fock state sources in nanostructured systems with arbitrary loss and dispersion, other examples include: hybrid systems [5], quantum sensing applications [15] and quantum frequency conversion in the optical domain [72] or the microwave-to-optical domain [73].

###### Acknowledgements.

This work was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under the project identifier 398816777-SFB 1375 (NOA), the German Federal Ministry of Education and Research (BMBF) under the project identifiers 13N14877 (QuantIm4Life), 13N16108 (PhoQuant), 13N15956 (LIVE2QMIC) and the Thuringian Ministry for Economy, Science, and Digital Society and the European Social Funds (2021 FGI 0043 - Quantum Hub Thuringia). Sina Saravi acknowledges funding by the LIGHT profile (FSU Jena).

## Appendix A Commutation relation for the frequency amplitude operators

We begin by expanding the \(\hat{\underline{E}}^{(\pm)}_{i,j}(\mathbf{r},\omega,t)\) operators in the commutator according to Eq. 2: \[\Big{[}\hat{\underline{E}}^{(+)}_{i}(\mathbf{r},\omega,t),\hat{ \underline{E}}^{(-)}_{j}(\mathbf{r}^{\prime},\omega^{\prime},t)\Big{]}=\] \[\mathcal{K}^{2}\,\frac{\omega^{2}}{c^{2}}\frac{\omega^{\prime 2 }}{c^{2}}\iint\mathrm{d}\mathbf{r}_{1}\,\mathrm{d}\mathbf{r}_{2}\] \[\times\sqrt{\varepsilon^{\prime\prime}(\mathbf{r}_{1},\omega)} \sqrt{\varepsilon^{\prime\prime}(\mathbf{r}_{2},\omega^{\prime})}G_{ik}( \mathbf{r},\mathbf{r}_{1},\omega)G^{*}_{jl}(\mathbf{r}^{\prime},\mathbf{r}_{ 2},\omega^{\prime})\] \[\times\Big{[}\,\hat{f}_{k}(\mathbf{r}_{1},\omega,t),\,\hat{f}^{ \dagger}_{\,l}(\mathbf{r}_{2},\omega^{\prime},t)\Big{]}.\] Now, we use the commutation relation Eq. 4 and the properties of the \(\delta\)-functions to obtain: \[\Big{[}\hat{\underline{E}}^{(+)}_{i}(\mathbf{r},\omega,t),\hat{ \underline{E}}^{(-)}_{j}(\mathbf{r}^{\prime},\omega^{\prime},t)\Big{]}=\] \[\mathcal{K}^{2}\,\frac{\omega^{4}}{c^{4}}\delta(\omega-\omega^{ \prime})\] \[\times\int\mathrm{d}\mathbf{r}_{1}\,\varepsilon^{\prime\prime}( \mathbf{r}_{1},\omega)G_{ik}(\mathbf{r},\mathbf{r}_{1},\omega)G^{*}_{jk}( \mathbf{r}^{\prime},\mathbf{r}_{1},\omega).\] The above expression can be simplified by using the following GF identity [42]: \[\frac{\omega^{2}}{c^{2}}\int\mathrm{d}\bar{\mathbf{r}}\,\varepsilon^{ \prime\prime}(\bar{\mathbf{r}},\omega)G_{ik}(\mathbf{r},\bar{\mathbf{r}}, \omega)\,G^{*}_{jk}(\mathbf{r}^{\prime},\bar{\mathbf{r}},\omega)=\] \[\mathrm{Im}\left[G_{ij}(\mathbf{r},\mathbf{r}^{\prime},\omega) \right], \tag{10}\] to finally arrive at Eq. 5.

## Appendix B The rotating frame

The rotating frame operators introduced in Sec. II can be more formally defined through a decomposition of the unitary evolution operator that separates the free-field evolution and the evolution induced by the nonlinear interaction. This procedure is very similar to the definition of the interaction picture of quantum mechanics.
In general, the Heisenberg equation of motion 6 is equivalent to the relation: \[\hat{\underline{E}}^{(+)}_{i}(\mathbf{r},\omega,t)=\hat{U}^{\dagger}(t)\hat {\underline{E}}^{(+)}_{i}(\mathbf{r},\omega)\hat{U}(t), \tag{11}\] where \(\hat{U}(t)=\mathcal{T}\,e^{\frac{1}{i\hbar}\int_{-\infty}^{t}\mathrm{d}t^{ \prime}\left(\hat{H}_{0}+\hat{H}_{\mathrm{SPDC}}(t^{\prime})\right)}\), \(\hat{H}_{0}\) and \(\hat{H}_{\mathrm{SPDC}}(t^{\prime})\) are the free-field and nonlinear interaction Hamiltonians, respectively, both considered in the Schrödinger picture, and \(\mathcal{T}\) denotes the time-ordering superoperator. Lastly, \(\hat{\underline{E}}^{(+)}_{i}(\mathbf{r},\omega)\equiv\hat{\underline{E}}^{( +)}_{i}(\mathbf{r},\omega,t\rightarrow-\infty)\) represents the field amplitude operators before the nonlinear interaction, where we omitted the time \(t\rightarrow-\infty\) for compactness. By using Feynman's disentanglement theorem [74], we can decompose \(\hat{U}(t)\) as \(\hat{U}(t)=\hat{U}_{0}(t)\hat{U}_{\mathrm{SPDC}}(t)\), where: \[\hat{U}_{0}(t) =\mathcal{T}\,e^{\frac{1}{i\hbar}\int_{-\infty}^{t}\mathrm{d}t^{ \prime}\hat{H}_{0}} \tag{12a}\] \[\hat{U}_{\mathrm{SPDC}}(t) =\mathcal{T}\,e^{\frac{1}{i\hbar}\int_{-\infty}^{t}\mathrm{d}t^{ \prime}\tilde{H}_{\mathrm{SPDC}}(t^{\prime})} \tag{12b}\] \[\tilde{H}_{\mathrm{SPDC}}(t) =\hat{U}^{\dagger}_{0}(t)\hat{H}_{\mathrm{SPDC}}(t)\hat{U}_{0}(t). \tag{12c}\] We can use the decomposition of \(\hat{U}(t)\) to rewrite Eq. 11 as: \[\hat{\underline{E}}^{(+)}_{i}(\mathbf{r},\omega,t)=\] \[\hat{U}^{\dagger}_{\mathrm{SPDC}}(t)\hat{U}^{\dagger}_{0}(t)\hat{ \underline{E}}^{(+)}_{i}(\mathbf{r},\omega)\hat{U}_{0}(t)\hat{U}_{\mathrm{SPDC}}(t),\] using \(\hat{H}_{0}=\hbar\int\mathrm{d}\mathbf{r}\int\mathrm{d}\omega\,\omega\,\hat{f}^{ \dagger}_{\,i}(\mathbf{r},\omega)\,\hat{f}_{i}(\mathbf{r},\omega)\) and the definition 2 we then obtain: \[\hat{\underline{E}}^{(+)}_{i}(\mathbf{r},\omega,t)=\] \[\hat{U}^{\dagger}_{\mathrm{SPDC}}(t)\hat{\underline{E}}^{(+)}_{i}( \mathbf{r},\omega)\hat{U}_{\mathrm{SPDC}}(t)\,e^{-i\omega t}.\] If we now define: \[\bar{E}_{i}^{(+)}(\mathbf{r},\omega,t)=\hat{U}_{\text{SPDC}}^{\dagger}(t)\hat{ \underline{E}}_{i}^{(+)}(\mathbf{r},\omega)\hat{U}_{\text{SPDC}}(t), \tag{10}\] we arrive at Eq. 7a. The rotating-frame creation/annihilation operators can be defined in a similar manner to be: \[\bar{f}_{i}^{(\dagger)}(\mathbf{r},\omega,t)=\hat{U}_{\text{SPDC}}^{\dagger}(t )\hat{f}_{i}^{(\dagger)}(\mathbf{r},\omega)\hat{U}_{\text{SPDC}}(t), \tag{11}\] where again we have \(\hat{f}_{i}^{(\dagger)}(\mathbf{r},\omega)\equiv\hat{f}_{i}^{(\dagger)}( \mathbf{r},\omega,t\rightarrow-\infty)\) for compactness. As with the amplitude operators, the rotating-frame annihilation operators are related to their Heisenberg picture counterparts as: \[\hat{f}_{i}(\mathbf{r},\omega,t)=\bar{f}_{i}(\mathbf{r},\omega,t)\,e^{-i \omega t}, \tag{12}\] with the relation for the creation operators obtained by taking the adjoint of Eq. 12. To find the general equation of motion for \(\bar{E}_{i}^{(+)}(\mathbf{r},\omega,t)\), we begin by finding the time derivative of Eq. 10: \[\partial_{t}\,\bar{E}_{i}^{(+)}(\mathbf{r},\omega,t)=\] \[\Big{(}\,\partial_{t}\hat{U}_{\text{SPDC}}^{\dagger}(t)\,\Big{)} \,\underline{\hat{E}}_{i}^{(+)}(\mathbf{r},\omega)\hat{U}_{\text{SPDC}}(t)\] \[+\hat{U}_{\text{SPDC}}^{\dagger}(t)\underline{\hat{E}}_{i}^{(+)}( \mathbf{r},\omega)\,\Big{(}\,\partial_{t}\hat{U}_{\text{SPDC}}(t)\Big{)}\,.\] We then recall Eq.
11b and obtain, after some straightforward calculation: \[\partial_{t}\,\bar{E}_{i}^{(+)}(\mathbf{r},\omega,t)=\] \[\frac{1}{i\hbar}\hat{U}_{\text{SPDC}}^{\dagger}(t)\Big{[} \underline{\hat{E}}_{i}^{(+)}(\mathbf{r},\omega),\tilde{H}_{\text{SPDC}}(t) \Big{]}\hat{U}_{\text{SPDC}}(t).\] Due to the properties of the commutator, \(\hat{U}_{\text{SPDC}}^{(\dagger)}(t)\) can act on the operators inside it independently, giving: \[\partial_{t}\,\bar{E}_{i}^{(+)}(\mathbf{r},\omega,t)=\] \[\frac{1}{i\hbar}\Big{[}\,\bar{E}_{i}^{(+)}(\mathbf{r},\omega,t), \hat{U}_{\text{SPDC}}^{\dagger}(t)\tilde{H}_{\text{SPDC}}(t)\hat{U}_{\text{SPDC }}(t)\Big{]},\] where we used Eq. 12. Also recalling Eq. 11c and the decomposition \(\hat{U}(t)=\hat{U}_{0}(t)\hat{U}_{\text{SPDC}}(t)\), we can identify \(\hat{U}_{\text{SPDC}}^{\dagger}(t)\tilde{H}_{\text{SPDC}}(t)\hat{U}_{\text{SPDC }}(t)\) to be the full, Heisenberg-picture SPDC Hamiltonian given in Eq. 1. Thus we have proven Eq. 8. The commutation relation for the rotating-frame creation/annihilation operators is easily found by replacing the definition Eq. 12 in the commutation relation for the Heisenberg picture operators Eq. 4: \[\Big{[}\bar{f}_{i}(\mathbf{r},\omega,t),\bar{f}_{j}^{\dagger}( \mathbf{r}^{\prime},\omega^{\prime},t)\Big{]}=\] \[\delta_{ij}\delta(\mathbf{r}-\mathbf{r}^{\prime})\delta(\omega- \omega^{\prime})\,e^{i(\omega-\omega^{\prime})t}.\] Since the \(\delta\)-function in the above expression results in \(\omega\equiv\omega^{\prime}\), the exponential factor can be ignored. The same is valid for the commutation relation of the rotating-frame amplitude operators, which is found by replacing Eq. 7a in Eq. 10b: \[\Big{[}\,\bar{E}_{i}^{(+)}(\mathbf{r},\omega,t),\,\bar{E}_{j}^{(- )}(\mathbf{r}^{\prime},\omega^{\prime},t)\Big{]}=\] \[\mathcal{K}^{2}\frac{\omega^{2}}{c^{2}}\,\text{Im}\left[G_{ij}( \mathbf{r},\mathbf{r}^{\prime},\omega)\right]\delta(\omega-\omega^{\prime}) \,e^{i(\omega-\omega^{\prime})t}.\]

## Appendix C Equations of motion for the amplitude operators

To find the commutator \(\Big{[}\,\bar{E}_{i}^{(+)}(\mathbf{r},\omega,t),\hat{H}_{\text{SPDC}}(t) \Big{]}\), we first expand the SPDC Hamiltonian from Eq. 1 in terms of the rotating-frame operators \(\,\bar{\mathbf{E}}^{(\pm)}(\mathbf{r},\omega,t)\): \[\hat{H}_{\text{SPDC}}(t)=-\varepsilon_{0}\int\mathrm{d}\mathbf{r }^{\prime}\iint\mathrm{d}\omega^{\prime}\,\mathrm{d}\omega^{\prime\prime}\, \chi^{(2)}_{ljk}(\mathbf{r}^{\prime})E_{P,l}^{(-)}(\mathbf{r}^{\prime},t)\] \[\quad\times\bar{E}_{j}^{(+)}(\mathbf{r}^{\prime},\omega^{\prime},t )\,\bar{E}_{k}^{(+)}(\mathbf{r}^{\prime},\omega^{\prime\prime},t)\,e^{-i\omega ^{\prime}t}\,e^{-i\omega^{\prime\prime}t}\] \[\quad+H.c.\] Due to the linear properties of the commutator and the fact that amplitude operators with the same sign in the superscript commute, it is sufficient to calculate: \[\Big{[}\,\bar{E}_{i}^{(+)}(\mathbf{r},\omega,t),\,\bar{E}_{j}^{(- )}(\mathbf{r}^{\prime},\omega^{\prime},t)\,\bar{E}_{k}^{(-)}(\mathbf{r}^{ \prime},\omega^{\prime\prime},t)\Big{]}= \tag{13}\] \[\quad\quad\quad\quad\mathcal{K}^{2}\frac{\omega^{2}}{c^{2}}\big{(} \delta(\omega-\omega^{\prime})\text{Im}\left[G_{ij}(\mathbf{r},\mathbf{r}^{ \prime},\omega)\right]\,\bar{E}_{k}^{(-)}(\mathbf{r}^{\prime},\omega^{\prime \prime},t)\] \[\quad\quad\quad\quad+\delta(\omega-\omega^{\prime\prime})\text{Im} \left[G_{ik}(\mathbf{r},\mathbf{r}^{\prime},\omega)\right]\,\bar{E}_{j}^{(-)}( \mathbf{r}^{\prime},\omega^{\prime},t)\big{)},\] where we used Eq. 10b.
To simplify the relations, we assume that the Kleinman symmetry condition is valid (there is negligible dispersion in the nonlinear response), which is a commonly used assumption in many practical scenarios of interest [3]. In this case, we have the permutation symmetry of the nonlinear susceptibility \(\chi^{(2)}_{ljk}=\chi^{(2)}_{lkj}\), hence the two terms on the right-hand-side (r.h.s.) of Eq. 13 will result in identical contributions to the final equation of motion. If Kleinman symmetry is not valid, we can simply keep both terms in Eq. 13, which adds no complexity to the calculation and only makes the expression longer. Thus we can write: \[\partial_{t}\,\bar{E}_{i}^{(+)}(\mathbf{r},\omega,t)= \tag{14}\] \[\quad 2i\frac{\varepsilon_{0}\mathcal{K}^{2}\omega^{2}}{\pi c^{2}} \int\mathrm{d}\mathbf{r}^{\prime}\,\chi^{(2)}_{ljk}(\mathbf{r}^{\prime})E_{P,l}^{(+ )}(\mathbf{r}^{\prime},t)\text{Im}\left[G_{ik}(\mathbf{r},\mathbf{r}^{\prime}, \omega)\right]\] \[\quad\times\iint\mathrm{d}\omega^{\prime}\,\mathrm{d}\omega^{ \prime\prime}\,\delta(\omega-\omega^{\prime\prime})\,\bar{E}_{j}^{(-)}(\mathbf{r}^{ \prime},\omega^{\prime},t)\,e^{i\omega^{\prime}t}\,e^{i\omega^{\prime\prime}t}.\] After performing the \(\omega^{\prime\prime}\)-integral and replacing the value of \(\mathcal{K}\), we obtain the final form of the equation of motion given in Eq. 11.

## Appendix D Derivation of Eq. 14

To find the correlation function \(g^{(1)}_{ij}({\bf r},{\bf r}^{\prime};t,t^{\prime})=\left\langle\hat{E}^{(-)}_{i} ({\bf r},t)\hat{E}^{(+)}_{j}({\bf r}^{\prime},t^{\prime})\right\rangle\) in terms of the IO-coefficients, we begin by expanding the electric field operators in the expectation value according to Eq. 7b: \[\left\langle\hat{E}^{(-)}_{i}({\bf r},t)\hat{E}^{(+)}_{j}({\bf r} ^{\prime},t^{\prime})\right\rangle= \tag{15}\] \[\iint\mathrm{d}\omega\,\mathrm{d}\omega^{\prime}\left\langle\, \bar{E}^{(-)}_{i}({\bf r},\omega,t)\,\bar{E}^{(+)}_{j}({\bf r}^{\prime},\omega ^{\prime},t^{\prime})\right\rangle\,e^{i\omega t}\,e^{-i\omega^{\prime}t^{\prime}}.\] We then expand each of the amplitude operators using the IO-relation Eq. 13. To avoid writing out all of the terms that thus appear, we will only remark that they are each proportional to one of the products: \(\bar{f}^{(\dagger)}_{\alpha}\bar{f}^{(\dagger)}_{\beta}\), where \(\alpha,\beta\) are arbitrary indices. The expectation value of each product is then taken in the initial state of the system (as we are working in the Heisenberg picture). In the present case, we assume the initial state to be the vacuum, which causes the expectation values to evaluate to zero for all terms except ones proportional to \(\bar{f}_{\alpha}\bar{f}^{\dagger}_{\beta}\). Thus we are left with: \[\left\langle\hat{E}^{(-)}_{i}({\bf r},t)\hat{E}^{(+)}_{j}({\bf r }^{\prime},t^{\prime})\right\rangle=\] \[\iint\mathrm{d}\omega\,\mathrm{d}\omega^{\prime}\int\mathrm{d} \mathbf{\Xi}\,\mathrm{d}\mathbf{\Xi}^{\prime}\,\mathcal{A}_{ik}({\bf r}, \omega,\mathbf{\Xi};t)\mathcal{A}^{*}_{jl}({\bf r}^{\prime},\omega^{\prime}, \mathbf{\Xi}^{\prime};t^{\prime})\] \[\times\left\langle\bar{f}_{k}(\mathbf{\Xi})\bar{f}^{\dagger}_{l} (\mathbf{\Xi}^{\prime})\right\rangle\,e^{i\omega t}\,e^{-i\omega^{\prime}t^{ \prime}}, \tag{16}\] where the remaining expectation value can be evaluated using the commutation relation Eq.
10a to result in \(\left\langle\bar{f}_{k}(\mathbf{\Xi})\bar{f}^{\dagger}_{l}(\mathbf{\Xi}^{ \prime})\right\rangle=\delta_{kl}\delta(\mathbf{\Xi}-\mathbf{\Xi}^{\prime}) \equiv\delta_{kl}\delta(\mathbf{\xi}-\mathbf{\xi}^{\prime})\delta(\nu-\nu^{ \prime})\). The \(\mathbf{\Xi}^{\prime}\) integral can then be evaluated to obtain Eq. 14. If the system were initially in a state other than vacuum, the expectation values of the various \(\bar{f}^{(\dagger)}_{\alpha}\bar{f}^{(\dagger)}_{\beta}\) products may result in multiple nonzero terms, which can also be analytically evaluated using Eq. 10a by first expressing the initial state using the free-field creation/annihilation operators \(\bar{f}^{(\dagger)}_{\alpha}\).

## Appendix E Derivation of the coupled equations for the filtered IO-coefficients

To obtain the differential equation governing the evolution of \(\tilde{A}_{ij}(\dots)\), we first take the time derivative of both sides of Eq. 26b: \[\partial_{t}\tilde{A}_{ij}({\bf r},\omega,\mathbf{\Xi};t)=\int\mathrm{d} \omega^{\prime}\,\,e^{-i(\omega-\omega^{\prime})t}A_{ij}({\bf r},\omega^{ \prime},\mathbf{\Xi};t). \tag{17}\] Then we write out \(A_{ij}({\bf r},\omega^{\prime},\mathbf{\Xi};t)\) as the formal solution of Eq. 15a. This is found by integrating both sides of the equation over time: \[A_{ij}({\bf r},\omega,\mathbf{\Xi};t)=\int_{-\infty}^{t}\mathrm{d}t^{\prime} \,S^{(0)}_{ij}({\bf r},\omega,\mathbf{\Xi};t^{\prime})+\int_{-\infty}^{t} \mathrm{d}t^{\prime}\int\mathrm{d}{\bf r}^{\prime}\int\mathrm{d}\omega^{ \prime}\,F^{*}_{ik}({\bf r},\omega;{\bf r}^{\prime},\omega^{\prime};t^{\prime} )B_{kj}({\bf r}^{\prime},\omega^{\prime},\mathbf{\Xi};t^{\prime}),\] where we used the fact that \(A_{ij}({\bf r},\omega,\mathbf{\Xi};t\rightarrow-\infty)=0\). When we introduce this form for \(A_{ij}({\bf r},\omega,\mathbf{\Xi};t)\) into Eq. 17, we obtain two main terms which contribute to the equation of motion. The first term we label as \(\tilde{S}^{(0)}_{ij}({\bf r},\omega,\mathbf{\Xi};t)\) and it has the form: \[\tilde{S}^{(0)}_{ij}({\bf r},\omega,\mathbf{\Xi};t)=\int\mathrm{d}\omega^{ \prime}\,\,e^{-i(\omega-\omega^{\prime})t}\int_{-\infty}^{t}\mathrm{d}t^{\prime }\,S^{(0)}_{ij}({\bf r},\omega^{\prime},\mathbf{\Xi};t^{\prime}).\] We can immediately rewrite it using the full expression for \(S^{(0)}_{ij}({\bf r},\omega^{\prime},\mathbf{\Xi};t^{\prime})\), obtained from Eq. 20: \[\tilde{S}^{(0)}_{ij}({\bf r},\omega,\mathbf{\Xi};t)=\frac{2{ \cal K}}{\pi}\frac{\nu^{2}}{c^{2}}\sqrt{\varepsilon^{\prime\prime}(\mathbf{ \xi},\nu)}\int\mathrm{d}\omega^{\prime}\,\,\frac{\omega^{\prime 2}}{c^{2}}\,e^{-i(\omega-\omega^{ \prime})t}\int_{-\infty}^{t}\mathrm{d}t^{\prime}\,\,e^{-i(\omega^{\prime}+\nu)t ^{\prime}}\] \[\times\int\mathrm{d}{\bf r}^{\prime}\,\chi^{(2)}_{klm}({\bf r}^{ \prime})E^{(-)}_{P,k}({\bf r}^{\prime},t^{\prime})\mathrm{Im}\left[G_{il}({\bf r },{\bf r}^{\prime},\omega^{\prime})\right]G_{mj}({\bf r}^{\prime},\mathbf{\xi},\nu).\] Our next aim is to evaluate the time integral in the above expression. To do that, we expand the pump field into a Fourier integral: \(E^{(-)}_{P,k}({\bf r},t)=\int\mathrm{d}\omega_{p}\,\mathcal{E}^{(-)}_{P,k}({\bf r },\omega_{p})\,e^{i\,\omega_{p}t}\), where \(\mathcal{E}^{(-)}_{P}({\bf r},\omega_{p})\) is the pump amplitude in the frequency domain.
The full form of \(\tilde{S}^{(0)}_{ij}({\bf r},\omega,\mathbf{\Xi};t)\) is now: \[\tilde{S}^{(0)}_{ij}({\bf r},\omega,\mathbf{\Xi};t)=\frac{2{ \cal K}}{\pi}\frac{\nu^{2}}{c^{2}} \sqrt{\varepsilon^{\prime\prime}(\mathbf{\xi},\nu)}\int\mathrm{d}{\bf r }^{\prime}\,\chi^{(2)}_{klm}({\bf r}^{\prime})G_{mj}({\bf r}^{\prime},\mathbf{ \xi},\nu)\] \[\times\int\mathrm{d}\omega^{\prime}\,\frac{\omega^{\prime 2}}{c^{2}} \mathrm{Im}\left[G_{il}({\bf r},{\bf r}^{\prime},\omega^{\prime})\right]\int \mathrm{d}\omega_{p}\,\mathcal{E}^{(-)}_{P,k}({\bf r}^{\prime},\omega_{p})\,e^{-i (\omega-\omega^{\prime})t}\int_{-\infty}^{t}\mathrm{d}t^{\prime}\,\,e^{i( \omega_{p}-\omega^{\prime}-\nu)t^{\prime}}, \tag{18}\] Now we examine the time integral \(\int_{-\infty}^{t}\mathrm{d}t^{\prime}\ e^{i(\omega_{p}-\omega^{\prime}-\nu)t^{ \prime}}\) and introduce a substitution \(t^{\prime}=t-\tau\) which allows us to rewrite it as \(\ e^{i(\omega_{p}-\omega^{\prime}-\nu)t}\int_{0}^{\infty}\mathrm{d}\tau\ e^{i( \omega^{\prime}+\nu-\omega_{p})\tau}\equiv\ e^{i(\omega_{p}-\omega^{\prime}- \nu)t}\zeta(\omega^{\prime}+\nu-\omega_{p})\), where \(\zeta(\omega)\) is a generalised function proportional to the Fourier transform of the Heaviside step function and is closely related to the analytical properties of the GF [75; 76]. One notable analytical property of the GF is [46; 76]: \[\int\mathrm{d}\omega\,\omega^{2}\mathrm{Im}\left[G_{ij}(\mathbf{r},\mathbf{r} ^{\prime},\omega)\right]\zeta(\omega-\omega_{0})=i\pi\omega_{0}^{2}G_{ij}^{*}( \mathbf{r},\mathbf{r}^{\prime},\omega_{0}) \tag{10}\] which we can use to further simplify Eq. 10 by performing the integration over \(\omega^{\prime}\). Thus we arrive at the final form for the filtered source term 29. The second term appearing on the r.h.s. of Eq. 10 is: \[\int\mathrm{d}\omega^{\prime}\ e^{-i(\omega-\omega^{\prime})t}\int_{-\infty}^ {t}\mathrm{d}t^{\prime}\int\mathrm{d}\mathbf{r}^{\prime}\int\mathrm{d}\omega^ {\prime\prime}\,F_{ik}^{*}(\mathbf{r},\omega^{\prime};\mathbf{r}^{\prime}, \omega^{\prime\prime};t^{\prime})B_{kj}(\mathbf{r}^{\prime},\omega^{\prime \prime},\mathbf{\Xi};t^{\prime}).\] If we write out the full form of \(F_{ik}^{*}(\mathbf{r},\omega;\mathbf{r}^{\prime},\omega^{\prime};t^{\prime})\) using Eq. 12, we have: \[-\frac{2i}{\pi}\int\mathrm{d}\omega^{\prime}\ e^{-i(\omega- \omega^{\prime})t}\int_{-\infty}^{t}\mathrm{d}t^{\prime}\int\mathrm{d}\mathbf{ r}^{\prime}\int\mathrm{d}\omega^{\prime\prime}\,\frac{\omega^{\prime 2}}{c^{2}}\chi_{lmk}^{(2)}(\mathbf{r}^{ \prime})\,\mathrm{Im}\left[G_{im}(\mathbf{r},\mathbf{r}^{\prime},\omega^{ \prime})\right]\] \[\times\left(\int\mathrm{d}\omega_{p}\,\mathcal{E}_{P,l}^{(-)}( \mathbf{r}^{\prime},\omega_{p})\,e^{i\,\omega_{p}t^{\prime}}\right)\,e^{-i( \omega^{\prime}+\omega^{\prime\prime})t^{\prime}}B_{kj}(\mathbf{r}^{\prime}, \omega^{\prime\prime},\mathbf{\Xi};t^{\prime}),\] where we immediately expanded the pump field into a Fourier integral. After some reordering of the integrals in the above expression, we can identify: \[\int_{-\infty}^{t}\mathrm{d}t^{\prime}\int\mathrm{d}\omega^{\prime\prime}\ e^{i(\omega_{p}-\omega^{\prime}-\omega^{\prime\prime})t^{ \prime}}B_{kj}(\mathbf{r}^{\prime},\omega^{\prime\prime},\mathbf{\Xi};t^{\prime})= \tilde{B}_{kj}(\mathbf{r}^{\prime},\omega_{p}-\omega^{\prime},\mathbf{\Xi};t),\] where the r.h.s. was established according to Eq. 26c. 
Thus, we now write the term in question as: \[-\frac{2i}{\pi}\int\mathrm{d}\mathbf{r}^{\prime}\int\mathrm{d}\omega^{\prime} \left(\int\mathrm{d}\bar{\omega}\,\frac{\bar{\omega}^{2}}{c^{2}}\chi_{lmk}^{(2 )}(\mathbf{r}^{\prime})\mathrm{Im}\left[G_{im}(\mathbf{r},\mathbf{r}^{\prime}, \bar{\omega})\right]\,e^{-i(\omega-\bar{\omega})t}\mathcal{E}_{P,l}^{(-)}( \mathbf{r}^{\prime},\omega^{\prime}+\bar{\omega})\right)\tilde{B}_{kj}(\mathbf{ r}^{\prime},\omega^{\prime},\mathbf{\Xi};t),\] where the expression in the parentheses exactly corresponds to the definition Eq. 30 and, when the above expression is combined with Eq. 29, we get exactly the equation of motion Eq. 28a. To arrive at this form of the term, we first introduced a variable substitution \(\omega_{p}=\omega^{\prime}+\bar{\omega}\): \[-\frac{2i}{\pi}\int\mathrm{d}\mathbf{r}^{\prime}\int\mathrm{d}\omega^{\prime} \ \frac{\omega^{\prime 2}}{c^{2}}\chi_{lmk}^{(2)}(\mathbf{r}^{\prime}) \mathrm{Im}\left[G_{im}(\mathbf{r},\mathbf{r}^{\prime},\omega^{\prime}) \right]\,e^{-i(\omega-\omega^{\prime})t}\int_{-\omega^{\prime}}^{\infty} \mathrm{d}\bar{\omega}\,\mathcal{E}_{P,l}^{(-)}(\mathbf{r}^{\prime},\bar{ \omega}+\omega^{\prime})\tilde{B}_{kj}(\mathbf{r}^{\prime},\bar{\omega},\mathbf{ \Xi};t).\] The newly introduced \(\bar{\omega}\) has the limits \([-\omega^{\prime},\infty)\), as shown above, which indicates that the filtered coefficients have to be evaluated at certain negative frequency values during the calculation. Although this is also formally allowed by the definitions in Eqs. 26, these negative-frequency contributions can be safely neglected and the lower integration limit set to \(0\) without any loss of accuracy. The reason for this can be seen by examining the source term in 29, where negative values of the variable \(\omega\) would cause the term to oscillate at optical frequencies, thus averaging to \(0\) on the time scales of the SPDC process and resulting in these negative-frequency values of the IO-coefficients not contributing to the final equations of motion. Finally, to make the resulting expression consistent with the notation used in the main text, we exchanged the variables \(\omega^{\prime}\) and \(\bar{\omega}\), yielding the final form of the second term of Eq. 10 given at the start of this paragraph. An analogous procedure can be performed to find the equation of motion for the coefficient \(\tilde{B}_{kj}(\mathbf{r},\omega,\mathbf{\Xi};t)\), after which we obtain the coupled equations 28.

## Appendix F Low-gain solution

To test the validity of the coupled equations Eqs. 28, we find the perturbative solutions for the IO-coefficients in the low-gain regime, when the pump amplitude is sufficiently weak, and use them to calculate the low-gain single-photon spectrum. The single-photon output spectrum in terms of the IO-coefficients is given in Eq. 27, where we see that it is entirely determined by \(\tilde{A}_{kj}(\mathbf{r},\omega,\mathbf{\Xi};t)\).
Thus, an analytical expression for the spectrum in the low-gain regime can be obtained by finding a first-order perturbative solution for the IO-coefficient \(\tilde{A}_{kj}(\mathbf{r},\omega,\mathbf{\Xi};t)\) and replacing it in Eq. 27. To obtain a perturbative expansion of \(\tilde{A}_{kj}(\mathbf{r},\omega,\mathbf{\Xi};t)\), we begin by integrating both sides of Eq. 28a over time, to find the formal form of the solution. This yields: \[\tilde{A}_{ij}(\mathbf{r},\omega,\mathbf{\Xi};t)=\int_{-\infty}^{t}\mathrm{d}t ^{\prime}\,\tilde{S}_{ij}^{(0)}(\mathbf{r},\omega,\mathbf{\Xi};t^{\prime})+ \int_{-\infty}^{t}\mathrm{d}t^{\prime}\int\mathrm{d}\mathbf{r}^{\prime}\int \mathrm{d}\omega^{\prime}\,\tilde{F}_{ik}^{*}(\mathbf{r},\omega;\mathbf{r}^{ \prime},\omega^{\prime};t^{\prime})\tilde{B}_{kj}(\mathbf{r}^{\prime},\omega^ {\prime},\mathbf{\Xi};t^{\prime}). \tag{100}\] Next, we find the formal solution of Eq. 28b in the same manner: \[\tilde{B}_{ij}(\mathbf{r},\omega,\mathbf{\Xi};t)=\int_{-\infty}^{t}\mathrm{d} t^{\prime}\int\mathrm{d}\mathbf{r}^{\prime}\int\mathrm{d}\omega^{\prime}\, \tilde{F}_{ik}(\mathbf{r},\omega;\mathbf{r}^{\prime},\omega^{\prime};t^{\prime} )\tilde{A}_{kj}(\mathbf{r}^{\prime},\omega^{\prime},\mathbf{\Xi};t^{\prime}),\] and insert it into Eq. 100 in order to obtain the exact output fields. However, as will now be discussed, the relevant contributions come from specific limited ranges determined by the spatial and spectral properties of the structure in question, as well as the nonlinear interaction. To determine which range of the spatial variable \(\mathbf{\xi}\) gives a dominant contribution to the output, we examine the low-gain analytical expression for the coefficient \(\bar{A}(z,\omega,\mathbf{\Xi};t\to\infty)\) (since it is the only one contributing to the spectrum), given by simplifying Eq. 15 according to the assumptions made about the waveguide under consideration in Section III: \[\bar{A}(z,\omega,\mathbf{\Xi};t\to\infty) = 2i\kappa\,\frac{\nu^{2}\omega^{2}}{c^{2}}\sqrt{\varepsilon^{\prime \prime}(\nu)} \tag{19}\] \[\times \int\mathrm{d}z^{\prime}\,\chi^{(2)}(z^{\prime})\mathcal{E}_{P}^ {(-)}(z^{\prime},\omega+\nu)\] \[\times G^{*}(z,z^{\prime},\omega)G(z^{\prime},\xi,\nu).\] The only \(\xi\)-dependent factor present, \(G(z^{\prime},\xi,\nu)\), in the case of the GF given by 31, is proportional to \(e^{i\frac{\nu}{c}\sqrt{\varepsilon(\nu)}\left|z^{\prime}-\xi\right|}\). Due to \(\varepsilon(\nu)\) being complex, this factor represents an oscillating function with an exponentially decreasing amplitude in \(\left|z^{\prime}-\xi\right|\), with the decay length \(l_{d}(\nu)=\left(\frac{\nu}{c}\,\mathrm{Im}\left[\sqrt{\varepsilon(\nu)}\right] \right)^{-1}\). Since the variable \(z^{\prime}\) is always confined to the nonlinear region of the waveguide, the effective range of the variable \(\xi\) consists of the nonlinear region itself and a multiple of the length \(l_{d}\) of the linear regions surrounding the nonlinear waveguide. Physically this means that field-matter excitations sufficiently far away from the nonlinear region are unable to influence the nonlinear interaction, due to their effects being damped away by the absorption of the dielectric. In practice, it is sufficient to find the largest possible decay length for the frequency range of interest, \(l_{d,max}\), and fix that as the boundary value for \(\xi\).
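As a rough illustration of this bookkeeping (reusing the `permittivity` sketch from Sec. III and the normalised units in which \(\nu/c\) becomes \(2\pi\nu\)), the boundary estimate could be computed as follows; the frequency band below is an illustrative choice.

```python
# A sketch of the boundary estimate for xi: the decay length
# l_d(nu) = [(nu/c) * Im sqrt(eps(nu))]^(-1), maximised over the band of interest.
import numpy as np

def decay_length(nu, alpha=1.0):
    k0 = 2 * np.pi * nu   # plays the role of nu/c in the normalised units
    return 1.0 / (k0 * np.imag(np.sqrt(permittivity(nu, alpha))))

nu_band = np.linspace(0.25, 0.35, 201)   # illustrative idler band around 0.3
l_d_max = decay_length(nu_band).max()    # boundary value for the xi-range
```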
For materials with small amounts of loss, or where the loss is very narrow-band, these boundary values can be extremely large and could present a problem in a numerical implementation where memory limitations are a factor. The potentially large range of \(\xi\) outside of the nonlinear region can be made more manageable via a variable transformation. We used the double-exponential transformation of the form [77]: \(\xi=\sinh\bigl{(}\frac{\pi}{2}\sinh\theta\bigr{)}\), where \(\theta\) is the substitution variable whose range is chosen to correspond to values of \(\xi\) outside the nonlinear region. With such a transformation, we compress the potentially long range of integration for the variable \(\xi\) in the surrounding linear regions into a much shorter range for the variable \(\theta\). This transformation resembles the coordinate transformation used in implementing perfectly matched layers (PMLs) in computational photonics [78]. Even though the above conclusions were made using the low-gain result, they hold in the high-gain regime as well, as was confirmed by our simulations. This is due to the spatial/absorbing properties of the structure being independent of the pump intensity; thus, the spatial distribution of the initial-time field-matter excitations contributing to the output stays the same, regardless of gain. On the other hand, the relevant frequency range for the contributing initial-time field-matter excitations, i.e. the relevant range of \(\nu\), cannot simply be inferred from the low-gain result, due to the seeding effect that couples neighbouring frequencies in the high-gain regime. Looking again at Eq. 16, we observe the following in the low-gain regime: the range of frequencies \(\nu\) contributing to the output at frequency \(\omega_{0}\) is determined by the spectrum of the pump, i.e. output photons of frequency \(\omega_{0}\) are influenced by field-matter excitations of frequencies within a pump bandwidth of \(\nu_{0}=\omega_{p0}-\omega_{0}\). However, as gain increases, the generated field at frequencies neighbouring \(\omega_{0}\) will start contributing to the intensity at \(\omega_{0}\) through higher-order seeding effects, where those neighbouring frequencies themselves are affected by their own respective neighbouring frequencies further from \(\omega_{0}\). Hence, to have an accurate representation of the intensity at \(\omega_{0}\), one has to include the whole range of frequencies with substantial intensities around \(\omega_{0}\), which are mainly determined by the phase-matching condition. Therefore, in our calculation, for both \(\omega\) and \(\nu\) that appear in Eqs. 28, we consider the whole range of frequencies around the main and the first surrounding peaks of the phase-matching function.
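To illustrate, the following sketch builds the double-exponential grid and the Jacobian needed to weight the transformed \(\xi\)-integral; the \(\theta\)-range is an illustrative choice.

```python
# A sketch of the double-exponential substitution xi = sinh((pi/2) sinh(theta)):
# a modest theta-grid maps onto a very long xi-range, and the Jacobian
# d(xi)/d(theta) supplies the weights for the transformed integral.
import numpy as np

theta = np.linspace(-3.0, 3.0, 257)                      # illustrative range
xi = np.sinh(0.5 * np.pi * np.sinh(theta))
jac = 0.5 * np.pi * np.cosh(theta) * np.cosh(0.5 * np.pi * np.sinh(theta))
# sum(f(xi) * jac) * dtheta then approximates the integral of f over xi
```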
2309.08558
A modern approach to transition analysis and process mining with Markov models: A tutorial with R
This chapter presents an introduction to Markovian modeling for the analysis of sequence data. Contrary to the deterministic approach seen in the previous sequence analysis chapters, Markovian models are probabilistic models, focusing on the transitions between states instead of studying sequences as a whole. The chapter provides an introduction to this method and differentiates between its most common variations: first-order Markov models, hidden Markov models, mixture Markov models, and mixture hidden Markov models. In addition to a thorough explanation and contextualization within the existing literature, the chapter provides a step-by-step tutorial on how to implement each type of Markovian model using the R package seqHMM. The chapter also provides a complete guide to performing stochastic process mining with Markovian models as well as plotting, comparing and clustering different process models.
Jouni Helske, Satu Helske, Mohammed Saqr, Sonsoles López-Pernas, Keefe Murphy
2023-09-02T07:24:32Z
http://arxiv.org/abs/2309.08558v1
# A modern approach to transition analysis and process mining with Markov models: A tutorial with R

###### Abstract

This chapter presents an introduction to Markovian modeling for the analysis of sequence data. Contrary to the deterministic approach seen in the previous sequence analysis chapters, Markovian models are probabilistic models, focusing on the transitions between states instead of studying sequences as a whole. The chapter provides an introduction to this method and differentiates between its most common variations: first-order Markov models, hidden Markov models, mixture Markov models, and mixture hidden Markov models. In addition to a thorough explanation and contextualization within the existing literature, the chapter provides a step-by-step tutorial on how to implement each type of Markovian model using the R package seqHMM. The chapter also provides a complete guide to performing stochastic process mining with Markovian models as well as plotting, comparing and clustering different process models.

## 1 Introduction

In the previous two chapters, we have learned about sequence analysis [1, 2] and its relevance to educational research. This chapter delves into a closely-related method: Markovian models. Specifically, we focus on a particular type of Markovian model, where the data are assumed to be categorical and observed at discrete time intervals, as per the previous chapters about sequence analysis, although in general Markovian models are not restricted to categorical data. One of the main differences between sequence analysis and Markovian modelling is that the former relies on deterministic data mining, whereas the latter uses probabilistic models [3]. Moreover, sequence analysis takes a more holistic approach by analysing sequences as a whole, whereas Markovian modelling focuses on the transitions between states, their probability, and the reasons (covariates) which explain why these transitions happen. We provide an introduction and practical guide to the topic of Markovian models for the analysis of sequence data. While we try to avoid advanced mathematical notations, to allow the reader to continue to other, more advanced sources when necessary, we do introduce the basic mathematical concepts of Markovian models. When doing so, we use the same notation as in the R package seqHMM[4], which we also use in the examples. In particular, we illustrate first-order Markov models, hidden Markov models, mixture Markov models, and mixture hidden Markov models with applications to synthetic data on students' collaboration roles throughout a complete study program. The chapter proceeds to describe the theoretical underpinnings of each method in turn, then showcases each method with code, before presenting some conclusions and further readings. In addition to the aforementioned applications to collaboration roles and achievement sequences, we also provide a demonstration of the utility of Markovian models in another context, namely process mining. In the process mining application, we leverage Markov models and mixture Markov models to explore learning management system logs. Finally, we conclude with a brief discussion of Markovian models in general and provide some recommendations for further reading of advanced topics in this area as a whole.

## 2 Methodological Background

### Markov model

The simple first-order Markov chain or Markov model (MM) can be used to model transitions between successive states.
In the MM, given the current observation, the next observation in the sequence is independent of the past --this is called the _Markov property_. For example, when predicting a student's school success in the fourth year we only need to consider their success in the third year, while their success in the first and second years gives no additional information for the prediction (see Figure 1 for an illustration). As such, the model is said to be memoryless. As an example, consider the data described in Table 1 which includes four sequences of length ten consisting of an alphabet of two types of observed states, low achievement (\(L\)) and high achievement (\(H\)). Here, the individuals are assumed to be independent of one another. Say \(t\) describes the position in the sequence, or in this example, the year (in other words, here \(t\) runs from 1 to 10). If we assume that the probability of observing \(L\) or \(H\) at any given point \(t\) depends on the current observation only, we can estimate the _transition probabilities_ \(a_{LL}\) (from state \(L\) to state \(L\)), \(a_{LH}\) (\(L\) to \(H\)), \(a_{HL}\) (\(H\) to \(L\)), and \(a_{HH}\) (\(H\) to \(H\)) by calculating the number of observed transitions from each state to all states and scaling these with the total number of transitions from that state. Mathematically, we can write the transition probability \(a_{rs}\) from state \(r\) to state \(s\) as \[a_{rs}=P(z_{t}=s\,|\,z_{t-1}=r),\ s,r\in\{L,H\},\] which simply states that the observed state \(z_{t}\) in year \(t\) being \(L\) or \(H\) depends on which of the two states was observed in the previous year \(t-1\). For example, to compute \(a_{LH}=P(z_{t}=H\,|\,z_{t-1}=L)\), the probability of transitioning from the origin state \(L\) to the destination state \(H\), we divide the 12 observed transitions from state \(L\) to state \(H\) by 20, which is the total number of transitions from \(L\) to any state. The basic MM assumes that the transition probabilities remain constant in time (this property is called _time-homogeneity_). This means, for example, that the probability of transitioning from the low-achievement state to the high-achievement state is the same in the ninth year as it was in the second year. We can collect the transition probabilities in a transition matrix (which we call \(A\)) which shows all of the possible transition probabilities between each pair of origin and destination states, as illustrated in Table 2. For example, when a student has low achievement in year \(t\), they have a 40 percent probability of having low achievement in year \(t+1\) and a higher 60 percent probability of transitioning to high achievement instead, regardless of the year \(t\). Notice that the probabilities in each row must add up to 1 (or 100%). \begin{table} \begin{tabular}{l c c c c c c c c c c} \hline \hline & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\ \hline A & L & L & L & H & L & H & L & H & H & H \\ B & L & H & H & L & H & L & H & L & L & H \\ C & H & H & L & H & L & L & H & L & H & H \\ D & H & H & L & L & L & H & L & L & L & H \\ \hline \hline \end{tabular} \end{table} Table 1: Four example sequences of school achievement with individuals A–D across the rows and years 1–10 across the columns. Figure 1: Illustration of the MM. The quantities \(Y_{1}\) to \(Y_{4}\) refer to states at time points 1 to 4. The arrows indicate dependencies between states.
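As a quick check of Table 2, the transition counts can be reproduced in a few lines of base R (a standalone illustration; the seqHMM package used later in this chapter automates this estimation with functions such as build_mm()):

```r
# Count the observed transitions in the four example sequences and
# row-normalise the counts to obtain the transition matrix A (Table 2).
seqs <- list(
  A = c("L", "L", "L", "H", "L", "H", "L", "H", "H", "H"),
  B = c("L", "H", "H", "L", "H", "L", "H", "L", "L", "H"),
  C = c("H", "H", "L", "H", "L", "L", "H", "L", "H", "H"),
  D = c("H", "H", "L", "L", "L", "H", "L", "L", "L", "H")
)
pairs <- do.call(rbind, lapply(seqs, function(s)
  data.frame(from = head(s, -1), to = tail(s, -1))))
counts <- table(pairs$from, pairs$to)  # transition counts per origin state
prop.table(counts, margin = 1)         # transition probabilities, rows sum to 1
```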
Lastly, we need to define probabilities for the starting states of the sequences, i.e., the _initial probabilities_ \[\pi_{s}=P(z_{1}=s),\ s\in\{L,H\}.\] In the example, half of the students have low achievement and the other half have high achievement in the first year, so \(\pi_{L}=\pi_{H}=0.5\). This basic MM is very simple and is often not realistic in the context of educational sciences. We can, however, extend the basic MM in several ways. First of all, we can include covariates to explain the transition and/or initial probabilities. For example, if we think that transitioning from low to high achievement gets harder as the students get older, we may add time as an explanatory variable to the model, allowing the probability of transitioning from low to high achievement to diminish in time. We could also increase the order of the Markov chain, accounting for longer histories. This may be more realistic, but at the same time increasing the order makes the model considerably more complex, the more so the longer the history considered. Secondly, one of the most useful extensions is the inclusion of hidden (or latent) states that cannot be observed directly but can be estimated from the sequence of observed states. An MM with time-constant hidden states is typically called the mixture Markov model (MMM). It can be used to find latent subpopulations, or in other words, to cluster sequence data. A model with time-varying hidden states is called the hidden Markov model (HMM), which allows the individuals to move between the hidden states. Allowing for both time-constant and time-varying hidden states leads to a mixture hidden Markov model (MHMM). Unless otherwise specified, from now on when talking about hidden states we always refer to time-varying hidden states, while time-constant hidden states are referred to as clusters. ### Mixture Markov model Consider a common case in sequence analysis where individual sequences are assumed to be clustered to subpopulations such as those with typically high and low achievement. In the introductory sequence analysis chapter, the clustering of sequences was performed based on a matrix of pairwise dissimilarities between sequences. Alternatively, we can use the MMM to group the sequences based on their initial and transition probabilities, for example, into those who tend to stay in and transition to high achievement states and those that tend to stay in and transition to low achievement states, as illustrated in Table 3. In MMMs, we have a separate transition matrix \(A^{k}\) for each cluster \(k\) (for \(k=1,\ldots,K\) clusters/subpopulations), and the initial state distribution defines the probabilities to start (and stay) in the hidden states corresponding to a particular cluster. This probabilistic clustering provides group membership probabilities for each sequence; these define how likely it is that each individual is a member of each cluster. We can easily add (time-constant) covariates to the model to explain the probabilities of belonging to each cluster. By incorporating covariates in this way we could, for example, find that being in a high-achievement cluster is predicted by gender or family background. 
However, we note that this is distinct from the aforementioned potential inclusion of covariates to explain the transition and/or initial probabilities. \begin{table} \begin{tabular}{l l l} \hline \hline & \(\rightarrow\) Low & \(\rightarrow\) High \\ \hline Low \(\rightarrow\) & 8/20 = 0.4 & 12/20 = 0.6 \\ High \(\rightarrow\) & 10/16 = 0.625 & 6/16 = 0.375 \\ \hline \hline \end{tabular} \end{table} Table 2: Transition matrix showing the probabilities of transitioning from one state to another (low or high achievement). The rows and columns describe the origin state and the destination state, respectively. \begin{table} \begin{tabular}{l l l} \hline \hline Cluster: Low achievement & \(\rightarrow\) Low & \(\rightarrow\) High \\ \hline Low \(\rightarrow\) & 0.8 & 0.2 \\ High \(\rightarrow\) & 0.4 & 0.6 \\ \hline \hline \end{tabular} \begin{tabular}{l l l} \hline \hline Cluster: High achievement & \(\rightarrow\) Low & \(\rightarrow\) High \\ \hline Low \(\rightarrow\) & 0.6 & 0.4 \\ High \(\rightarrow\) & 0.1 & 0.9 \\ \hline \hline \end{tabular} \end{table} Table 3: Two transition matrices showing the probabilities of transitioning from one state of achievement to another in two clusters of Low achievement and High achievement. The rows and columns describe the origin state and the destination state, respectively. An advantage of this kind of probabilistic modelling approach is that we can use traditional model selection methods such as likelihood information criteria or cross-validation for choosing the best model. For example, if the number of subpopulations is not known in advance --as is typically the case-- we can compare models with different clustering solutions (e.g., those obtained with different numbers of clusters, different subsets of covariates, or different sets of initial probabilities) and choose the best-fitting model with, for example, the Bayesian information criterion (BIC) [5]. ### Hidden Markov model The HMM can be useful in a number of cases when the state of interest cannot be directly measured or when there is measurement error in the observations. In HMMs, the Markov chain operates at the level of hidden states, which subsequently generate or emit observed states with different probabilities. For example, think of the progression of a student's ability as a hidden state and school success as the observed state. We cannot measure true ability directly, but we can estimate the student's progress by their test scores, which are emissions of their ability. There is, however, some uncertainty in how well the test scores represent students' true ability. For example, observing low test scores at some point in time does not necessarily mean the student has low ability; they might have scored lower than expected in the test due to other reasons such as being sick at that particular time. Such uncertainty can be reflected in the emission probabilities; for example, in the high-ability state students get high test scores eight times out of ten and low test scores with a 20 percent probability, while in the low-ability state the students get low test scores nine times out of ten and high test scores with a 10 percent probability. These probabilities are collected in an emission matrix as illustrated in Table 4. Again, the full HMM is defined by a set of parameters: the initial state probabilities \(\pi_{s}\), the hidden state transition probabilities \(a_{rs}\), and the emission probabilities of observed states \(b_{s}(m)\). 
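Using the same notation as in the transition and initial probabilities above, but with \(z_{t}\) now denoting the hidden state and \(y_{t}\) the observed state at time \(t\), the emission probabilities can be written as

\[b_{s}(m)=P(y_{t}=m\,|\,z_{t}=s),\]

i.e., the probability that, say, the low-ability hidden state \(s\) produces the observed test score \(m\) in year \(t\).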
What differs from the MM is that in the HMM, the initial state probabilities \(\pi_{s}\) define the probabilities of starting from each _hidden_ state. Similarly, the transition probabilities \(a_{rs}\) define the probabilities of transitioning from one _hidden_ state to another hidden state. The emission probabilities \(b_{s}(m)\) (collected in an emission matrix \(B\)) define the probability of observing a particular state \(m\) (e.g., low or high test scores) given the current hidden state \(s\) (e.g., low or high ability). When being in a certain hidden state, observed states occur randomly, following the emission probabilities. Mathematically speaking, instead of assuming the Markov property directly on our observations, we assume that the observations are conditionally independent given the underlying hidden state. We can visualise the HMM as a directed acyclic graph (DAG) illustrated in Figure 2. Here \(Z\) are the unobserved states (such as ability) which affect the distribution of the observed states \(Y\) (test scores). At each time point \(t\), the state \(z_{t}\) can take one of \(S\) possible values (there are two hidden states in the example of low and high ability, so \(S=2\)), which in turn defines how \(Y_{t}\) is distributed. Figure 2: Illustration of the HMM. The quantities \(Z_{1}\) to \(Z_{4}\) refer to hidden states at time points 1 to 4, while the quantities \(Y_{1}\) to \(Y_{4}\) refer to observed states. The arrows indicate dependencies between hidden and/or observed states. \begin{table} \begin{tabular}{l l l} \hline \hline & Low scores & High scores \\ \hline Low ability & 0.9 & 0.1 \\ High ability & 0.2 & 0.8 \\ \hline \hline \end{tabular} \end{table} Table 4: Emission matrix showing the probabilities of each hidden state (low or high ability) emitting each observed state (low or high test scores). ### Mixture hidden Markov models Combining the ideas of both time-constant clusters and time-varying hidden states leads to the mixture hidden Markov model (MHMM). Here we assume that the population of interest consists of subpopulations, each with their own HMM with varying transition and emission probabilities. For example, we could expect to find underlying groups which behave differently when estimating the progression of ability through the sequence of test scores, such as those that consistently stay on a low-ability or high-ability track (stayers) and those that move between low and high ability (movers). In this case, we need two transition matrices: the stayers' transition matrix allows for no transitions while the movers' transition matrix allows for transitioning between low and high ability, as illustrated in Table 5. Similarly, we need two emission matrices that describe how the observed states are related to hidden states, as illustrated in Table 6. In this example, there is a closer match between low/high ability and low/high test scores in the Stayers cluster in comparison to the Movers cluster. Table 6: Two emission matrices showing the probabilities of each hidden state (low or high ability) emitting each observed state (low or high test scores). Mathematically, when estimating an MHMM we first fix the number of clusters \(K\), and create a joint HMM consisting of \(K\) submodels (HMMs). 
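In slightly simplified notation (ours, not the package's), the MHMM can be viewed as a probabilistic mixture: each observed sequence \(\mathbf{y}_{i}\) is assumed to be generated by one of the \(K\) cluster-specific HMMs, so that

\[P(\mathbf{y}_{i})=\sum_{k=1}^{K} w_{ik}\,P(\mathbf{y}_{i}\,|\,\text{HMM}_{k}),\]

where \(w_{ik}\) is the prior probability that individual \(i\) belongs to cluster \(k\) (possibly depending on covariates) and \(P(\mathbf{y}_{i}\,|\,\text{HMM}_{k})\) is the likelihood of the sequence under the \(k\)th submodel.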
The number of hidden states does not have to be fixed but can vary by submodel, so that the HMMs have more hidden states for some clusters and fewer for others (in our example, because the transition matrix of the Stayers cluster is diagonal, we could also split the cluster into two single-state clusters, one corresponding to low and the other to high ability). This can increase the burden of model selection, so often a common number of hidden states is assumed for each cluster for simplicity. In any case, the initial state probabilities of this joint model define how sequences are assigned to different clusters. We estimate this joint model using the whole data and calculate cluster membership probabilities for each individual. The idea of using mixtures of HMMs has appeared in the literature under various names with slight variations, e.g., [6], [7], and [4]. Notably, MHMMs inherit from MMMs the ability to incorporate covariates to predict cluster memberships. ### Multi-channel sequences There are two options to analyse multi-channel (or multi-domain or multi-dimensional) sequence data with Markovian models. The first option is to combine observed states in different channels into one set of single-channel sequences with an expanded alphabet. This option is simple, and works for MMs, HMMs, MMMs, and MHMMs, but can easily lead to complex models as the number of states and channels increases considerably. The second option, which can only be used when working with HMMs and MHMMs, is to treat the observed states in each channel independently given the current hidden state. This can be easily performed by defining multiple emission probability matrices, one for each channel. The assumption of conditional independence simplifies the model, but is sometimes unrealistic, in which case it is better to resort to the first option and convert the data into single-channel sequences. Both options are discussed further in Chapter 13 [8], a dedicated chapter on multi-channel sequences, where applications of distance-based and Markovian clustering approaches are presented. In this chapter, we henceforth focus on single-channel sequences. ### Estimating model parameters The model parameters, i.e., the elements of the initial probability vectors \(\pi\), transition probability matrices \(A\), and emission probability matrices \(B\), can be estimated from data using various methods. Typical choices are the Baum-Welch algorithm (an instance of the expectation-maximisation, i.e., the EM algorithm) and direct (numerical) maximum likelihood estimation. It is possible to restrict models by setting some parameters to fixed values (typically zeros), for example, to make certain starting states, transitions, or emissions impossible. After the parameter estimation, in addition to studying the estimated model parameters upon convergence, we can, for example, compute cluster-membership probabilities for each individual and find the most probable paths of hidden state sequences using the Viterbi algorithm [9]. These can be further analysed and visualised for interpretation. ## 3 Review of the literature Markovian methods have been used across several domains in education and have gained renewed interest with the surge in learning analytics and educational data mining. Furthermore, the introduction of specialised R packages (e.g., seqHMM[10]) and software applications (e.g., Mplus [11, 12]) have made it easier to implement Markov models. 
One of the most common applications of Markovian methods is the clustering of sequence data [13, 14, 15]. Markov models offer a credible alternative to existing distance-based methods (e.g., optimal matching) and can be used with different sequence types (e.g., multi-channel sequences). Furthermore, Markovian methods offer some advantages in clustering sequential data, such as the inclusion of covariates that can explain why a sequence emerged (e.g., [16]). More importantly, Markovian models are relatively scalable and can be used to cluster large sequence datasets [17]. As Saqr et al. [17] noted, large sequence datasets are hard to cluster using standard methods such as hierarchical clustering, which is memory inefficient and hard to parallelise or scale [18, 19]. Furthermore, distance-based clustering methods are limited by the theoretical maximum dimension of a matrix in R, which is 2,147,483,647 elements (i.e., a maximum of 46,340 sequences). In such cases, Markovian methods may be the solution. Examples of Markovian methods in clustering sequences are plentiful. For example, HMMs have been used to cluster students' sequences of learning management system (LMS) trace data to detect their patterns of activities or what the authors referred to as learning tactics and strategies [15]. Another close example was that of Lopez-Pernas and Saqr [20], who used HMMs to cluster multi-channel data of students' learning strategies across two different tools (an LMS and an automated assessment tool). Other examples include using HMMs in clustering sequences of students' engagement states [21], sequences of students' collaborative roles [16], or sequences of self-regulation [13, 14]. Markovian methods are also popular in studying transitions and have therefore been used across several applications and with different types of data. One of the most common usages is what is known as stochastic process mining, which typically uses first-order Markov models to map students' transitions between learning activities. For example, Matcha et al. [22] used first-order Markov models to study students' processes of transitions between different learning tactics. Other uses include studying the transitions between tactics of academic writing [23], between self-regulated learning events [24], or within collaborative learning settings [25]. Yet, most of such work has been performed by the pMiner R package [26], which was recently removed from The Comprehensive R Archive Network (CRAN) due to slow updates and incompatibility with existing guidelines. This chapter offers a modern alternative that uses modern and flexible methods for fitting, plotting, and clustering stochastic process mining models, as well as the possibility to add covariates to understand "why" different transition patterns emerged. Indeed, transition analysis in general has been a popular use of Markovian models and has been used across several studies. For instance, for the analysis of temporal patterns of students' activities in online learning (e.g., [27]), transitions between latent states [28], or transitions between assignment submission patterns [29]. ## 4 Examples As a first step, we will import all the packages required for our analyses. We have used most of them throughout the book. Below is a brief summary: * qgraph: A package for visualising networks, which can be used to plot transition probabilities [30]. This is used only for the process mining application in Section 4.3. * rio: A package for reading and saving data files with different extensions [31]. 
* seqHMM: A package designed for fitting hidden (latent) Markov models and mixture hidden Markov models for social sequence data and other categorical time series [32]. * tidyverse: A package that encompasses several basic packages for data manipulation and wrangling [33]. * TraMineR: As seen in the introductory sequence analysis chapter, this package helps us construct, analyze, and visualise sequences from time-ordered states or events [34].

```
library(qgraph)
library(rio)
library(seqHMM)
library(tidyverse)
library(TraMineR)
```

Henceforth, we divide our examples into two parts: the first largely focuses on traditional uses of the seqHMM package to fit Markovian models of varying complexity to sequence data; the second presents a demonstration of Markovian models from the perspective of process mining. We outline the steps involved in using seqHMM in general in Section 4.1, demonstrate the application of MMs, HMMs, MMMs, and MHMMs in Section 4.2, and explore process mining using Markovian models in Section 4.3, leveraging much of the steps and code from the previous two sections. We note that different datasets are used in Section 4.2 and Section 4.3; we begin by importing the data required for Section 4.2 and defer the importing of the data used in the process mining application to the later section. With this in mind, we start by using the import() function from the rio package to import our sequence data. Based on the description of the MHMM in [35], we used the seqHMM package to simulate a synthetic dataset (simulated_data) consisting of students' collaboration roles (obtained from [36]) on different courses across a whole program. As with the original data, the simulation was based on a two-channel model (collaboration and achievement), but we only use the collaboration sequences in the following examples, and leave the multi-channel sequence analysis to Chapter 13 [8]. While not available in the original study, we also simulated students' high school grade point average (GPA, for simplicity categorised into three levels) for each student, which will be used to predict cluster memberships. Using this data, we show how the seqHMM package can be used to analyse such sequences. We start with the simple MM, and then transition to HMMs and their mixtures. To be able to use the seqHMM functions we need to convert the imported data to a sequence object using the function seqdef() from the TraMineR package (see Chapter 10 [1] for more information about creating stslist objects). We can also extract the covariate information separately (cov_data).

```
URL <- "https://github.com/sonosoleslp/labook-data/raw/main/"
simulated_data <- import(paste0(URL, "12_longitudinalRoles/simulated_roles.csv"))
roles_seq <- seqdef(simulated_data,
                    var = 3:22,
                    alphabet = c("Isolate", "Mediator", "Leader"),
                    cnames = 1:20)
```

```
[>] 3 distinct states appear in the data:
     1 = Isolate
     2 = Leader
     3 = Mediator
[>] state coding:
       [alphabet]  [label]    [long label]
     1  Isolate    Isolate    Isolate
     2  Mediator   Mediator   Mediator
     3  Leader     Leader     Leader
[>] 200 sequences in the data set
[>] min/max sequence length: 20/20
```

```
cpal(roles_seq) <- c("#FBCE4B", "#F67067", "#5C2262")
cov_data <- simulated_data %>%
  select(ID, GPA) %>%
  mutate(GPA = factor(GPA, levels = c("Low", "Middle", "High")))
```

### 4.1 Steps of estimation We will first briefly introduce the steps of the analysis with the seqHMM package and then show examples of estimating MMs, HMMs, MMMs, and MHMMs. 
#### 4.1.1 Defining the model structure First, we need to create the model object which defines the structure of the model. This can be done by using one of the model building functions of seqHMM. The build functions include build_mm() for constructing the simple MM, build_hmm() for the HMM, build_mmm() for the MMM, and build_mhmm() for the MHMM. The user needs to give the build function the sequence data and the number of hidden states and/or clusters (when relevant). The user can also set restrictions on the models, for example, to forbid some transitions by setting the corresponding transition probabilities to zero. To facilitate the estimation of the parameters of more complex models, the user may also set informative starting values for model parameters. #### 4.1.2 Estimating the model parameters After defining the model structure, the model parameters need to be estimated. The fit_model() function estimates model parameters using maximum likelihood estimation. The function has several arguments for configuring the estimation algorithms. For simple models the default arguments tend to work well enough, but for more complex models the user should adjust the algorithms. This is because the more parameters the algorithm needs to estimate, the higher the risk of not finding the model with the optimal parameter values (the one which maximises the likelihood). In order to reduce the risk of being trapped in a local optimum of the likelihood surface (instead of a global optimum), we advise estimating the model numerous times using different starting values for the parameters. The seqHMM package strives to automate this. One option is to run the EM algorithm multiple times with random starting values for any or all of the initial, transition, and emission probabilities. These are specified in the control_em argument. Although not done by default, this method seems to perform very well as the EM algorithm is relatively fast. Another option is to use a global direct numerical estimation method such as the multilevel single-linkage method. See [4] for more detailed information on model estimation. #### 4.1.3 Examining the results The output of fit_model() contains the estimated model (e.g., stored in fit_hmm$model) as well as information about the estimation of the model, such as the log-likelihood of the final model (fit_hmm$logLik). The print method provides information about the estimated model in a written format, while the plot() function visualises the model parameters as a graph. For HMMs and MHMMs, we can calculate the most probable sequence of hidden states for each individual with the hidden_paths() function. Observed and hidden state sequences can be plotted with the ssplot() function for MMs and HMMs and with the mssplot() function for MMMs and MHMMs. For MMMs and MHMMs, the summary() method automatically computes some features of the models such as standard errors for covariates and prior and posterior cluster membership probabilities for the subjects. ### 4.2 Markov models We now follow the steps outlined above for each model in turn, starting from the most basic Markov model, proceeding through a hidden Markov model and a mixture Markov model, and finally concluding with a mixture hidden Markov model. #### 4.2.1 Markov model We focus on the sequences of collaboration roles, collected in the roles_seq object. 
The build_mm() function only takes one argument, observations, which should be an stslist object created with the seqdef() function from the TraMineR package as mentioned before. We can build an MM as follows:

```
markov_model <- build_mm(roles_seq)
```

For the MM, the build_mm() function estimates the initial probabilities and the transition matrix. Note that the build_mm() function is the only build function that automatically estimates the parameters of the model. This is possible because for the MM the estimation is a simple calculation, while for the other types of models the estimation process is more complex. The user can access the estimated parameters by calling markov_model$initial_probs and markov_model$transition_probs or view them by using the print method of the model:

```
print(markov_model)
```

```
Initial probabilities :
 Isolate Mediator   Leader 
   0.375    0.355    0.270 

Transition probabilities :
          to
from       Isolate Mediator Leader
  Isolate   0.4231    0.478 0.0987
  Mediator  0.1900    0.563 0.2467
  Leader    0.0469    0.428 0.5252
```

We can see that the initial state probabilities are relatively uniform, with a slightly lower probability of starting in the Leader state. In terms of the transition probabilities, the most distinct feature is that it is rare to transition directly from the Leader state to Isolate and vice versa (the estimated probabilities are about 5% and 10%, respectively). It is also more common to drop from Leader to Mediator (43%) than to increase collaboration from Mediator to Leader (25%). Similarly, the probability of moving from Mediator to Isolate is only 19 percent, but there is a 48 percent chance of transitioning from Isolate to Mediator. We can also draw a graph of the estimated model using the plot method, which by default shows the states as pie graphs (for the MM, the pie graphs only consist of one state), transition probabilities as directed arrows, and initial probabilities below each state (see Figure 3).

Figure 3: MM estimated model pie chart.

#### 4.2.2 Hidden Markov models The structure of an HMM is set with the build_hmm() function. In contrast to build_mm(), other build_*() functions such as build_hmm() do not directly estimate the model parameters. For build_hmm(), in addition to observations (an stslist), we need to provide the n_states argument which tells the model how many hidden states to construct. Using again the collaboration roles sequences, if we want to estimate an HMM with two hidden states, we can write:

```
set.seed(1)
hidden_markov_model <- build_hmm(observations = roles_seq, n_states = 2)
```

The set.seed call ensures that we will always end up with the same exact initial model with hidden states in the same exact order even though we use random values for the initial parameters of the model (which is practical for reproducibility). We are now ready to estimate the model with the fit_model() function. The HMM we want to estimate is simple, so we rely on the default values and again use the print method to provide information about the estimated model:

```
fit_hmm <- fit_model(hidden_markov_model)
fit_hmm$model
```

```
Initial probabilities :
State 1 State 2 
  0.657   0.343 

Transition probabilities :
         to
from      State 1 State 2
  State 1  0.9089  0.0911
  State 2  0.0391  0.9609

Emission probabilities :
           symbol_names
state_names Isolate Mediator Leader
    State 1  0.4418    0.525 0.0336
    State 2  0.0242    0.478 0.4980
```

The estimated initial state probabilities show that it is more probable to start from hidden state 1 than from hidden state 2 (66% vs. 34%). 
The high transition probabilities on the diagonal of the transition matrix indicate that the students typically tend to stay in the hidden state they currently are in. Transition probabilities between the hidden states are relatively low and also asymmetric: it is more likely that students move from state 1 to state 2 than from state 2 to state 1. Looking at the emission matrices, we see that the role of the students in state 2 is mostly Leader or Mediator (emission probabilities are 50% and 48%). On the other hand, state 1 captures more of those occasions where students are isolated or exhibit at most a moderate level of participation (mediators). We can also visualise this with the plot() method of seqHMM (see Figure 4):

```
plot(fit_hmm$model, ncol.legend = 4, legend.prop = 0.2,
     edge.label.color = "black", vertex.label.color = "black")
```

The plot mainly shows the same information in graphical form. By default, to simplify the graph, the plotting method combines all states with less than 5% emission probabilities into one category. This threshold can be changed with the combine.slices argument (setting combine.slices = 0 plots all states). For simple models, using n_states is sufficient. It automatically draws random starting values that are then used for the estimation of model parameters. However, as parameter estimation of HMMs and mixture models can be sensitive to the starting values of parameters, it may be beneficial to provide starting values manually using the initial_probs, transition_probs, and emission_probs arguments. This is also necessary in case we want to define structural zeros for some of these components, e.g., if we want to restrict the initial probabilities so that each sequence starts from the same hidden state, or if we want to set an upper diagonal transition matrix, which means that the model does not allow transitioning back to previous states (this is called a left-to-right model) [4]. It is also possible to mix random and user-defined starting values by using simulate_*() functions (e.g., simulate_transition_probs()) for some of the model components and user-defined values for others.

Figure 4: HMM with two hidden states.

In the following example we demonstrate estimating a three-state HMM with user-defined starting values for the initial state probabilities and the transition matrix, but simulate starting values for the emission matrices. For simulating starting values with simulate_emission_probs(), we need to define the number of hidden states and the number of observed symbols, i.e., the length of the alphabet of the sequence data.

```
# Set seed for randomisation
set.seed(1)

# Initial state probability vector, must sum to one
init_probs <- c(0.3, 0.4, 0.3)

# A 3 x 3 transition matrix, each row should sum to one
trans_probs <- rbind(c(0.80, 0.15, 0.05),
                     c(0.20, 0.60, 0.20),
                     c(0.05, 0.15, 0.80))

# Simulate emission probabilities
emission_probs <- simulate_emission_probs(
  n_states = 3,
  n_symbols = length(alphabet(roles_seq)))

# Build the HMM
hidden_markov_model_2 <- build_hmm(roles_seq,
                                   initial_probs = init_probs,
                                   transition_probs = trans_probs,
                                   emission_probs = emission_probs)
```

Our initial probabilities suggest that it is slightly more likely to start from the second hidden state than from the first or the third. Furthermore, the starting values for the transition matrices suggest that staying in hidden states 1 and 3 is more likely than staying in hidden state 2. All non-zero probabilities are, however, mere suggestions and will be estimated with the fit_model() function. 
We now estimate this model 50 times with the EM algorithm using randomised starting values:

```
set.seed(1)
fit_hmm_2 <- fit_model(hidden_markov_model_2,
                       control_em = list(restart = list(times = 50)))
```

We can get the information on the EM estimation as follows:

```
fit_hmm_2$em_results
```

```
$logLik
[1] -3546.155

$iterations
[1] 488

$change
[1] 9.947132e-11

$best_opt_restart
 [1] -3546.155 -3546.155 -3546.155 -3546.155 -3546.155 -3546.155 -3546.155
 [8] -3546.155 -3546.155 -3546.155 -3546.155 -3546.155 -3546.155 -3546.155
[15] -3546.155 -3546.155 -3546.155 -3546.155 -3546.155 -3546.155 -3546.155
[22] -3546.155 -3546.155 -3546.155 -3546.155
```

The output shows the log-likelihood of the final model (logLik), the number of iterations used by the EM algorithm (iterations), and what was the change in the log-likelihood at the final step (change). The most interesting element is the last one: best_opt_restart shows the likelihood for 25 (by default) of the best estimation rounds. 
We advise always checking these to make sure that the best model was found several times from different starting values: this way we can be fairly certain that we have found the actual maximum likelihood estimates of the model parameters (global optimum). In this case all of the 25 log-likelihood values are identical, meaning that it is likely that we have found the best possible model among all HMMs with three hidden states.

Figure 5: HMM with three hidden states.

Interpreting the results in Figure 5 we see that the first hidden state represents about equal amounts of isolate and mediator roles, the second hidden state represents mainly Leader and some Mediator roles, and the third hidden state represents mainly Mediator and partly Leader roles. Interestingly, none of the students start as Mediator/Leader, while of the other two the Isolate/Mediator state is more typical (two thirds). There are no transitions from the first to the second state nor vice versa, and transition probabilities to the second state are considerably higher than away from it. In other words, it seems that the model has two different origin states and one destination state. We can visualise the observed and/or hidden state sequences with the ssplot() function. The ssplot() function can take an stslist object or a model object of class mm or hmm (see Figure 6). Here we want to plot full sequence index plots (type = "I") of both observed and hidden states (plots = "both") and sort the sequences using multidimensional scaling of hidden states (sortv = "mds.hidden"). See the seqHMM manual and visualisation vignette for more information on the different plotting options.

```
ssplot(fit_hmm_2$model,
       # Plot sequence index plot (full sequences)
       type = "I",
       # Plot observed and hidden state sequences
       plots = "both",
       # Sort sequences by the scores of multidimensional scaling
       sortv = "mds.hidden",
       # X axis tick labels
       xtlab = 1:20)
```

Figure 6: Observed and hidden state sequences from the HMM with three hidden states.

By looking at the sequences, we can see that even though none of the students start in hidden state 3, the majority of them transition there. In the end, most students end up alternating between mediating and leadership roles. Is the three-state model better than the two-state model? As already mentioned, we can use model selection criteria to test that. To make sure that the three-state model is the best, we also estimate an HMM with four hidden states and then use the Bayesian information criterion for comparing between the three models. Because the four-state model is more complex, we increase the number of re-estimation rounds for the EM algorithm to 100.

```
# Set seed for randomisation
set.seed(1)

# Build and estimate an HMM with four states
hidden_markov_model_3 <- build_hmm(roles_seq, n_states = 4)
fit_hmm_3 <- fit_model(hidden_markov_model_3,
                       control_em = list(restart = list(times = 100)))
fit_hmm_3$em_results$best_opt_restart
```

```
 [1] -3534.304 -3534.304 -3534.304 -3534.304 -3534.304 -3534.304 -3534.304
 [8] -3534.304 -3534.304 -3534.304 -3534.304 -3534.304 -3534.304 -3534.305
[15] -3534.305 -3534.306 -3534.308 -3534.310 -3534.332 -3534.335 -3534.335
[22] -3534.335 -3534.336 -3534.337 -3534.337
```

The best model was found only 13 times out of 101 estimation rounds from randomised starting values. A cautious researcher might be wise to opt for a higher number of estimation rounds for increased certainty, but here we will proceed to calculating the BIC values. 
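As a reminder, the BIC balances model fit against complexity:

\[\text{BIC} = k\ln(n) - 2\ln(\hat{L}),\]

where \(k\) is the number of freely estimated parameters, \(n\) is the number of observations, and \(\hat{L}\) is the maximised likelihood; lower values indicate a better trade-off between fit and complexity.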
```
BIC(fit_hmm$model)
[1] 7430.028
BIC(fit_hmm_2$model)
[1] 7208.427
BIC(fit_hmm_3$model)
[1] 7259.37
```

Generally speaking, the lower the BIC, the better the model. We can see that the three-state model (fit_hmm_2) has the lowest BIC value, so three hidden states is the best choice (at least among HMMs with 2-4 hidden states).

#### 4.2.3 Mixture Markov models The MMM can be defined with the build_mmm() function. Similarly to HMMs, we need to either give the number of clusters with the n_clusters argument, which generates random starting values for the parameter estimates, or give starting values manually as initial_probs and transition_probs. Here we use random starting values:

```
# Set seed for randomisation
set.seed(123)

# Define model structure (3 clusters)
mmm <- build_mmm(roles_seq, n_clusters = 3)
```

Again, the model is estimated with the fit_model() function:

```
fit_mmm <- fit_model(mmm)
```

The results for each cluster can be plotted one at a time (interactively, the default), or in one joint figure. Here we opt for the latter (see Figure 7). At the same time we also illustrate some other plotting options:

```
plot(fit_mmm$model,
     # Plot all clusters at the same time
     interactive = FALSE,
     # Set the number of rows and columns for cluster plots (one row, three columns)
     nrow = 1, ncol = 3,
     # Omit legends
     with.legend = FALSE,
     # Choose another layout for the vertices (see plot.igraph)
     layout = layout_in_circle,
     # Omit pie graphs from vertices
     pie = FALSE,
     # Set state colours
     vertex.color = cpal(roles_seq),
     # Increase the size of the circle
     vertex.size = 80,
     # Plot state labels instead of initial probabilities
     vertex.label = "names",
     # Choose font colour for state labels
     vertex.label.color = "black",
     # Set state label in the centre of the circle
     vertex.label.dist = 0,
     # Omit labels for transition probabilities
     edge.label = NA)
```

Figure 7: MMM with three clusters.

In Cluster 1, we see low probabilities of downward mobility and high probabilities of upward mobility, so this cluster describes leadership trajectories. In Cluster 2, we can see that the thickest arrows lead to the mediator and isolate roles, so this cluster describes trajectories with less central roles in collaboration. In Cluster 3, we see the highest transition probabilities for entering the mediator role but also some transitions from mediator to leader, so this cluster describes trajectories with more moderate levels of participation in comparison to Cluster 1. This behaviour is easier to see when visualising the sequences in their most probable clusters. The following code plots the sequence distribution plot of each cluster (Figure 8). The plot is interactive, so we need to hit 'Enter' on the console to generate each plot. Alternatively, we can specify which cluster we want to plot using the which.plots argument.

```
cl1 <- mssplot(fit_mmm$model,
               # Plot Y axis
               yaxis = TRUE,
               # Legend position
               with.legend = "bottom",
               # Legend columns
               ncol.legend = 3)
```

Figure 8: State distribution plots by most probable clusters estimated with the mixture Markov model.

We can add covariates to the model to explain cluster membership probabilities. For this, we need to provide a data frame (argument data) and the corresponding formula (argument formula). In the example data we use the data frame called cov_data that we created at the beginning of the tutorial with columns ID and GPA, where the order of the ID variable matches that of the sequence data roles_seq (note that the ID variable is not used in the model building, so the user needs to make sure that both datasets are sorted by ID). We can now use the information about students' GPA level as a predictor of the cluster memberships. 
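Before doing so, it is useful to recall what the covariates do mathematically. In seqHMM, cluster memberships are modelled through a multinomial logistic structure: writing \(\mathbf{x}_{i}\) for the covariate vector of individual \(i\) and \(\boldsymbol{\beta}_{k}\) for the regression coefficients of cluster \(k\) (notation simplified from [4]), the prior probability of belonging to cluster \(k\) is

\[P(\text{cluster } k\,|\,\mathbf{x}_{i})=\frac{\exp(\mathbf{x}_{i}^{\top}\boldsymbol{\beta}_{k})}{\sum_{j=1}^{K}\exp(\mathbf{x}_{i}^{\top}\boldsymbol{\beta}_{j})},\]

with the coefficients of one cluster fixed to zero as the reference. This is exactly the quantity we compute later by dividing exp(coefficients) by its row sums.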
Numerical estimation of complex models from random starting values may lead to convergence issues and other problems in the estimation (you may, for example, get warnings about the EM algorithm failing). To avoid such issues, giving informative starting values is often helpful. This model is more complex than the model without covariates, and estimation from random starting values leads to convergence issues (not shown here). To facilitate model estimation, we use the results from the previous MMM as informative starting values. Here we also remove the common intercept by adding 0 to the formula, which simplifies the interpretation of the covariate effects later (instead of comparing to a reference category, we get separate coefficients for each of the three GPA categories).

```
set.seed(98765)
mmm_2 <- build_mmm(roles_seq,
                   # Starting values for initial probabilities
                   initial_probs = fit_mmm$model$initial_probs,
                   # Starting values for transition probabilities
                   transition_probs = fit_mmm$model$transition_probs,
                   # Data frame for covariates
                   data = cov_data,
                   # Formula for covariates (one-sided)
                   formula = ~ 0 + GPA)
```

Again, the model is estimated with the fit_model() function. Here we use the EM algorithm with 50 restarts from random starting values:

```
set.seed(12345)
fit_mmm_2 <- fit_model(mmm_2,
                       # EM with randomised restarts
                       control_em = list(restart = list(
                         # 50 restarts
                         times = 50,
                         # Store loglik values from all 50 + 1 estimation rounds
                         n_optimum = 51)))
```

```
Warning in fit_model(mmm_2, control_em = list(restart = list(times = 50, :
EM algorithm failed: Estimation of gamma coefficients failed due to
singular Hessian.
```

The model was estimated 50 + 1 times (first from the starting values we provided and then from 50 randomised values). We get one warning about the EM algorithm failing. However, 50 estimation rounds were successful. We can check that the best model was found several times from different starting values (37 times, to be precise):

```
fit_mmm_2$em_results$best_opt_restart
```

```
 [1] -3614.627 -3614.627 -3614.627 -3614.627 -3614.627 -3614.627 -3614.627
 [8] -3614.627 -3614.627 -3614.627 -3614.627 -3614.627 -3614.627 -3614.627
[15] -3614.627 -3614.627 -3614.627 -3614.627 -3614.627 -3614.627 -3614.627
[22] -3614.627 -3614.627 -3614.627 -3614.627 -3614.627 -3614.627 -3614.627
[29] -3614.627 -3614.627 -3614.627 -3614.627 -3614.627 -3614.627 -3614.627
[36] -3614.627 -3614.627 -3619.695 -3624.547 -3624.547 -3624.547 -3624.547
[43] -3624.547 -3624.547 -3624.547 -3624.547 -3624.547 -3624.547 -3631.328
[50] -3637.344      -Inf
```

We can now be fairly certain that the optimal model has been found, and can proceed to interpreting the results. The clusters are very similar to what we found before. We can give the clusters more informative labels and then show state distribution plots in each cluster in Figure 9:

```
cluster_names(fit_mmm_2$model) <- c("Mainly leader",
                                    "Isolate/mediator",
                                    "Mediator/leader")
mssplot(fit_mmm_2$model)
```

Figure 9: State distribution plots by most probable clusters estimated with the mixture Markov model with covariates.

The model summary shows information about parameter estimates of covariates and prior and posterior cluster membership probabilities (these refer to cluster membership probabilities before or after conditioning on the observed sequences, respectively):

```
summary_mmm_2 <- summary(fit_mmm_2$model)
summary_mmm_2
```
```
Covariate effects :
Mainly leader is the reference.

Isolate/mediator :
          Estimate  Std. error
GPALow      1.9221       0.478
GPAMiddle   0.3901       0.314
GPAHigh    -0.0451       0.277

Mediator/leader :
          Estimate  Std. error
GPALow       1.670       0.487
GPAMiddle    0.411       0.312
GPAHigh     -0.667       0.332

Log-likelihood: -3614.627   BIC: 7461.487

Means of prior cluster probabilities :
   Mainly leader Isolate/mediator  Mediator/leader 
           0.244            0.425            0.331 

Most probable clusters :
            Mainly leader Isolate/mediator Mediator/leader
count                  49               87              64
proportion          0.245            0.435            0.32

Classification table :
Mean cluster probabilities (in columns) by the most probable cluster (rows)

                  Mainly leader Isolate/mediator Mediator/leader
Mainly leader           0.91758          0.00136          0.0811
Isolate/mediator        0.00081          0.89841          0.1008
Mediator/leader         0.05902          0.10676          0.8342
```

We will first interpret the information on prior and posterior cluster membership probabilities and then proceed to interpreting covariate effects. Firstly, the means of prior cluster probabilities give information on how likely each cluster is in the whole population of students (24% in Mainly leader, 43% in Isolate/mediator, and 33% in Mediator/leader). Secondly, Most probable clusters shows the group sizes and proportions if each student were classified into the cluster for which they have the highest cluster membership probability. Thirdly, the Classification table shows mean cluster probabilities (in columns) by the most probable cluster (in rows). We can see that the clusters are fairly crisp (the certainty of the cluster memberships is fairly high) because the membership probabilities are large in the diagonal of the table. The uncertainty of the classification is the highest for the Mediator/leader cluster (among those that had the highest membership probability in that cluster, average cluster memberships were 84% for the Mediator/leader cluster, 6% for the Mainly leader cluster, and 10% for the Isolate/mediator cluster) and the lowest for the Mainly leader cluster (92% for the Mainly leader cluster, 8% for the Mediator/leader cluster, and 0.1% for the Isolate/mediator cluster). The part titled Covariate effects shows the parameter estimates for the covariates. Interpretation of the values is similar to that of multinomial logistic regression, meaning that we can interpret the direction and uncertainty of the effect --relative to the reference cluster Mainly leader-- but we cannot directly interpret the magnitude of the effects (the magnitudes are on the log-odds scale). We can see that individuals with low GPA more often end up in the Isolate/mediator cluster and the Mediator/leader cluster in comparison to the Mainly leader cluster (i.e., the standard errors are small in comparison to the parameter estimates), while individuals with high GPA levels end up in the Mediator/leader cluster less often but are not more or less likely to end up in the Isolate/mediator cluster. For categorical covariates such as our GPA variable, we can also easily compute the prior cluster membership probabilities from the estimates with the following call:

```
exp(fit_mmm_2$model$coefficients)/rowSums(exp(fit_mmm_2$model$coefficients))
```

```
          Mainly leader Isolate/mediator Mediator/leader
GPALow       0.07605453        0.5198587       0.4040868
GPAMiddle    0.25090105        0.3705958       0.3785031
GPAHigh      0.40497185        0.3870997       0.2079285
```

The matrix shows the levels of the covariates in the rows and the clusters in the columns. Among the high-GPA students, 41 percent are classified as Mainly leaders, 39 percent as Isolate/mediators, and 21 percent as Mediator/leaders. 
Among middle-GPA students the classification is relatively uniform (25% as Mainly leaders, 37% as Isolate/mediators, and 38% as Mediator/leaders), whereas most of the low-GPA students are classified as Isolate/mediators or Mediator/leaders (52% and 40%, respectively). The summary object also calculates prior and posterior cluster memberships for each student. We omit them here for brevity, but demonstrate that they can be obtained as follows:

```
prior_prob <- summary_mmm_2$prior_cluster_probabilities
posterior_prob <- summary_mmm_2$posterior_cluster_probabilities
```

#### 4.2.4 Mixture hidden Markov models Finally, we will proceed to the most complex of the models, the MHMM. For defining an MHMM, we use the build_mhmm() function. Again, we can use the argument n_states, which is now a vector showing the number of hidden states in each cluster (the length of the vector defines the number of clusters). We will begin by estimating an MHMM with three clusters, each with two hidden states:

```
set.seed(123)
mhmm <- build_mhmm(roles_seq,
                   n_states = c(2, 2, 2),
                   data = cov_data,
                   formula = ~ 0 + GPA)
fit_mhmm <- fit_model(mhmm)
```

```
Error in fit_model(mhmm): EM algorithm failed: Estimation of gamma
coefficients failed due to singular Hessian.
```

In this case, we get an error message about the EM algorithm failing. This means that the algorithm was not able to find parameter estimates from the random starting values the build_mhmm() function generated, and we need to adjust our code. Starting values for the parameters of the MHMM can be given with the arguments initial_probs, transition_probs, and emission_probs. For the MHMM, these are lists of vectors and matrices, one for each cluster. We use the same number of hidden states (two) for each cluster. We define the initial values for the transition and emission probabilities as well as the regression coefficients ourselves. We also restrict the initial state probabilities so that in each cluster every student is forced to start from the same (first) hidden state.

```
set.seed(1)

# Set initial probabilities
init <- list(c(1, 0), c(1, 0), c(1, 0))

# Define own transition probabilities
trans <- matrix(c(0.9, 0.1,
                  0.1, 0.9),
                nrow = 2, byrow = TRUE)
translist <- list(trans, trans, trans)

# Simulate emission probabilities
emiss <- simulate_emission_probs(n_states = c(2, 2, 2),
                                 n_symbols = 3,
                                 n_clusters = 3)
# Overwrite the simulated values with uniform emission probabilities
emiss <- replicate(3, matrix(1/3, 2, 3), simplify = FALSE)

# Define initial values for coefficients
# Here we start from a case where low GPA correlates with Cluster 1,
# whereas middle and high GPA have no effect
beta <- cbind(0, c(-2, 0, 0), c(-2, 0, 0))

# Define model structure
mhmm_2 <- build_mhmm(roles_seq,
                     initial_probs = init,
                     transition_probs = translist,
                     emission_probs = emiss,
                     data = cov_data,
                     formula = ~ 0 + GPA,
                     beta = beta)
```

Now that we have built the MHMM, we can estimate its parameters:

```
set.seed(1)
suppressWarnings(fit_mhmm_2 <- fit_model(
  mhmm_2,
  control_em = list(restart = list(times = 100, n_optimum = 101))))
```

We can now check how many times each log-likelihood value occurred in the 101 estimations:

```
table(round(fit_mhmm_2$em_results$best_opt_restart, 2))
```

```
    -Inf -3672.25 -3595.82 -3588.58 -3584.14 -3526.42 -3525.06 -3519.53 
      56        2        1        3        1        4        1        2 
 -3519.5 -3519.24 
      15       16 
```

The best model was found 16 times out of the 101 estimation rounds, although the second best model with a log-likelihood of -3519.5 is likely almost indistinguishable from the optimal model (-3519.24), as their log-likelihoods are so close to each other. We will start to interpret the model by looking at the sequence plots in each cluster (see Figure 10). The function call is interactive. 
As before, if you only want to plot one cluster you can use the which.plots argument:

```
mssplot(fit_mhmm_2$model, plots = "both", type = "I",
        sortv = "mds.hidden", with.legend = "bottom.combined")
```

Figure 10: MHMM estimated sequence distribution plot with hidden states.

We can also visualise the model parameters in each cluster (see Figure 11):

```
plot(fit_mhmm_2$model,
     vertex.size = 60,
     label.color = "black",
     vertex.label.color = "black",
     edge.color = "lightgray",
     edge.label.color = "black",
     ncol.legend = 1,
     ncol = 3,
     rescale = FALSE,
     interactive = FALSE,
     combine.slices = 0)
```

Figure 11: Transitions between states for each trajectory.

Based on the two plots, we can determine that Cluster 1 describes students who start as leaders but then transition to alternating between mediator and leader roles. Cluster 2 describes students who start by alternating between isolate and mediator roles and then mainly transition to alternating between mediator and leader roles. Cluster 3 describes students who start by alternating between isolate and mediator roles, after which they transition between isolate/mediator and mediator/leader.

```
cluster_names(fit_mhmm_2$model) <- c("Downward transition",
                                     "Upward transition",
                                     "Alternating")
```

With summary(fit_mhmm_2$model) we get the parameter estimates and standard errors for the covariates and information about the clustering:

```
summary(fit_mhmm_2$model)
```

```
Covariate effects :
Downward transition is the reference.

Upward transition :
          Estimate  Std. error
GPALow      -0.455       0.464
GPAMiddle    0.440       0.310
GPAHigh     -2.743       0.727

Alternating :
          Estimate  Std. error
GPALow      1.3560       0.324
GPAMiddle   0.3461       0.316
GPAHigh     0.0468       0.250

Log-likelihood: -3519.243   BIC: 7237.543

Means of prior cluster probabilities :
Downward transition   Upward transition         Alternating 
              0.302               0.181               0.517 

Most probable clusters :
           Downward transition Upward transition Alternating
count                       61                30         109
proportion               0.305              0.15       0.545

Classification table :
Mean cluster probabilities (in columns) by the most probable cluster (rows)

                    Downward transition Upward transition Alternating
Downward transition             0.95727            0.0267      0.0161
Upward transition               0.03007            0.8037      0.1662
Alternating                     0.00975            0.0962      0.8940
```

We can see that the prior probabilities of belonging to each cluster are very different: half of the students can be described as alternating, while of the rest, a downward transition is more typical (31%). Based on the classification table, the Downward transition cluster is rather crisp, while the other two are partly overlapping (see the MMM example for more information on interpreting the classification table). The Covariate effects tables show that, in comparison to the Alternating cluster, students with low GPA are less likely to end up in the Upward or Downward transition clusters, and students with high GPA are less likely to end up in the Upward transition cluster. Again, we can calculate the probabilities of belonging to each cluster by GPA levels:

```
exp(fit_mhmm_2$model$coefficients)/rowSums(exp(fit_mhmm_2$model$coefficients))
```

```
          Downward transition Upward transition Alternating
GPALow              0.1813217        0.11502283   0.7036555
GPAMiddle           0.2521406        0.39144399   0.3564154
GPAHigh             0.4734128        0.03048189   0.4961054
```

The table shows that students with low GPA typically belong to the Alternating cluster (70% probability), while students with high GPA mainly end up in the Downward transition cluster (47%) or the Alternating cluster (50%). 
Most students with middle GPA end up in the Upward transition cluster (39%), but the probabilities are almost as high for the Alternating cluster (36%) and also fairly high for the Downward transition cluster (25%). In light of this, it is worth noting that the covariates do not merely explain the uncovered clusters; as part of the model, they drive the formation of the clusters. In other words, an otherwise identical model without the dependence on the GPA covariate may uncover different groupings with different probabilities. If we are not sure how many clusters or hidden states we expect, or if we wish to investigate different combinations of covariates, we can estimate several models and compare the results with information criteria or cross-validation. Estimating a large number of complex models is, however, very time-consuming. Using prior information for restricting the pool of potential models is useful, and sequence analysis can also be used as a helpful first step [10, 37]. ### 4.3 Stochastic process mining with Markovian models Process mining can be performed using different methods, techniques and algorithms. Yet, MMs offer a very powerful framework for process mining with several advantages over the commonly used methods. First, the approach is more theoretically aligned with the idea of a transition from one action to another and the fact that actions are temporally dependent on each other. Second, MMs allow the data to be clustered into similar transition patterns, a possibility not offered by other process mining methods (see the process mining chapter of this book). Third, contrary to other process mining methods, MMs do not require researchers to exclude --or trim-- a large part of the data to "simplify" the model. For instance, most process mining analyses require an arbitrary cutoff to trim some transitions so that the process model is readable. Most importantly, MMs have several fit statistics that we can use to compare models and judge the model fit, as we have seen before. Several R packages can perform stochastic process mining; in this tutorial we will rely on the same package we discussed earlier and combine it with a powerful visualisation that allows us to effectively visualise complex processes. In the next example, we will analyse data extracted from learning management system logs and offer a detailed guide to process mining. We will also use MMMs to cluster the data into latent patterns of transitions. Given that the traditional plotting function in seqHMM works well with a relatively short alphabet, we will use a new R package called qgraph for plotting. The qgraph package offers powerful visualisations which make plotting easier and more interpretable, especially for larger models. Furthermore, qgraph allows researchers to use a fixed layout for all the plotted networks so the nodes can be compared to each other more easily. Let us now go through the analysis. The next chunk of code imports the prepared sequence data from the sequence analysis chapter. The data belong to a learning analytics course and the events are coded trace logs of students' actions such as _Course view_, _Instructions_, _Practicals_, _Social_, etc. Then, we build a sequence object using the function seqdef() from TraMineR.

```
seq_data <- import(paste0(URL, "1_moduleLAcourse/LMS_data_wide.xlsx"))
seq_data_all <- seqdef(seq_data, var = 7:54)
```

Before proceeding further, it is advisable to visualise the sequences. Figure 12 shows the sequence index plot, sorted according to the first states. 
The data are much larger than the collaboration roles and achievement sequences analysed previously; there are 9478 observations with an alphabet of 12 states. Unlike in the previous example, the sequence lengths vary considerably. Due to this, shorter sequences contain missing values to fill the empty cells in the data frame. However, there are no internal gaps. When creating the sequence object with the seqdef() function, TraMineR allows for distinguishing between real missing values (NA, where the true state is unknown) and technical missing values (void) used to pad the sequences to equal lengths. The seqHMM package is able to account for both types of missing values and treats them slightly differently, for example when calculating the most probable paths of hidden states.

```
seqplot(seq_data_all, type = "I", ncol = 4, sortv = "from.start",
        legend.prop = 0.2, cex.legend = 0.7, border = NA)
```

Figure 12: Sequence index plot for the learning management system logs.

A simple transition analysis can be performed by estimating and plotting the transition probabilities. This can be done with the TraMineR package. Yet, this simple approach has drawbacks, and it is advisable to estimate MMs and use their full power. The next code estimates the transition probabilities of the full dataset (Table 7) using the function seqtrate() from the TraMineR package. As we mentioned earlier, we will use a novel plotting technique that is more suitable for large process models.

\begin{table}
\begin{tabular}{l c c c c c c c c c c c c}
\hline \hline
From\(\backslash\)To & Applications & Assignment & Course\_view & Ethics & Feedback & General & Group\_work & Instructions & La\_type & Practicals & Social & Theory \\
\hline
Applications & 0.46 & 0.07 & 0.13 & 0.01 & 0.01 & 0.19 & 0.05 & 0.01 & 0.01 & 0.05 & 0.00 & 0.00 \\
Assignment & 0.00 & 0.70 & 0.19 & 0.00 & 0.01 & 0.02 & 0.03 & 0.02 & 0.02 & 0.02 & 0.00 & 0.00 \\
Course\_view & 0.01 & 0.07 & 0.35 & 0.01 & 0.03 & 0.03 & 0.28 & 0.10 & 0.02 & 0.08 & 0.02 & 0.01 \\
Ethics & 0.01 & 0.00 & 0.12 & 0.61 & 0.01 & 0.04 & 0.10 & 0.01 & 0.03 & 0.04 & 0.01 & 0.02 \\
Feedback & 0.00 & 0.02 & 0.23 & 0.00 & 0.56 & 0.00 & 0.11 & 0.04 & 0.01 & 0.02 & 0.00 & 0.00 \\
General & 0.04 & 0.05 & 0.18 & 0.01 & 0.00 & 0.49 & 0.06 & 0.06 & 0.05 & 0.03 & 0.01 & 0.02 \\
Group\_work & 0.00 & 0.01 & 0.19 & 0.00 & 0.01 & 0.01 & 0.73 & 0.02 & 0.00 & 0.01 & 0.01 & 0.00 \\
Instructions & 0.00 & 0.02 & 0.33 & 0.00 & 0.03 & 0.04 & 0.12 & 0.37 & 0.02 & 0.03 & 0.04 & 0.00 \\
La\_type & 0.01 & 0.06 & 0.24 & 0.01 & 0.00 & 0.10 & 0.07 & 0.05 & 0.38 & 0.03 & 0.01 & 0.03 \\
Practicals & 0.00 & 0.02 & 0.17 & 0.00 & 0.01 & 0.01 & 0.03 & 0.02 & 0.01 & 0.73 & 0.00 & 0.01 \\
Social & 0.00 & 0.01 & 0.25 & 0.00 & 0.00 & 0.01 & 0.12 & 0.11 & 0.01 & 0.02 & 0.48 & 0.00 \\
Theory & 0.00 & 0.02 & 0.15 & 0.03 & 0.00 & 0.02 & 0.06 & 0.01 & 0.05 & 0.05 & 0.00 & 0.00 \\
\hline \hline
\end{tabular}
\end{table}
Table 7: Transition probabilities

Below, we plot the transition probabilities with the qgraph() function from the qgraph package (Figure 13). We use some arguments to improve the process model visualization. First, we use the argument cut = 0.15 to show the edges with probabilities below 0.15 in lower thickness and colour intensity. This _cut_ makes the graph easier to read and less crowded, and gives emphasis to the edges which matter. The argument minimum = 0.05 hides small edges below the probability threshold of 0.05. We use edge.labels = TRUE to show the transition probabilities as edge labels. The argument color gets the colour palette from the sequence object with the function cpal(), and the argument curveAll = TRUE ensures the graph shows curved edges. The "colorblind" theme makes sure that the colours can be distinguished by everyone regardless of colour vision abilities. Lastly, the mar argument sets the margin of the figure to make all graphical aspects fit within the figure area.
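The chunk that produced Figure 13 is not reproduced above; the following is a sketch of a call consistent with the arguments just described, where overalltransitions is assumed to hold the seqtrate() output, Labelx the state labels, and the margin values are illustrative:

```
# Sketch: estimate and plot the overall transition probabilities (Figure 13).
overalltransitions <- seqtrate(seq_data_all)
Labelx <- alphabet(seq_data_all)
qgraph(overalltransitions, cut = 0.15, minimum = 0.05, labels = Labelx,
       edge.labels = TRUE, edge.label.cex = 0.65, color = cpal(seq_data_all),
       curveAll = TRUE, theme = "colorblind",
       mar = c(4, 3, 4, 3)) # margin values are illustrative
```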
Figure 13: Process map for the overall process.

The seqtrate() function only computes the transition probabilities but does not compute the initial probabilities. While it is not difficult to calculate the proportions of starting in each state, we can also estimate a simple Markov model, which does the same with a short command. We do so using the build_mm() function as per Section 4.2, recalling that build_mm() is distinct from build_hmm(), build_mmm(), and build_mhmm() in that it is the only build function that automatically estimates the parameters of the model. The plotting now includes an extra option, pie = overallmodel$initial_probs, which tells qgraph to use the initial probabilities from the fitted MM as the sizes of the pie charts on the borders of the nodes in Figure 14. For instance, the pie around _Course view_ covers around half of the circle, corresponding to the 0.48 initial probability of starting from _Course view_. Please also note that the graph is otherwise equal to the one generated via seqtrate(), apart from these initial probabilities.

```
overallmodel <- build_mm(seq_data_all)
# The arguments after `labels` were truncated in the source; they are
# restored here following the argument description in the text.
overallplot <- qgraph(overalltransitions, cut = 0.15, minimum = 0.05,
                      labels = Labelx, edge.labels = TRUE,
                      edge.label.cex = 0.65, color = cpal(seq_data_all),
                      curveAll = TRUE, theme = "colorblind",
                      pie = overallmodel$initial_probs)
```

Having plotted the transitions of the full dataset, we can now look for typical transition patterns (i.e., clusters) that are repeated within the data. The procedure is the same as before. In the next example, we use the function build_mmm() to build a model with four clusters as a demonstration. Ideally, researchers need to estimate several models and choose the best one based on model selection criteria (such as BIC) as well as interpretability. The steps involved in fitting the model are as before; we make use of the function fit_model() to estimate the model. The result of running the code will be an MM for each cluster (with distinct initial and transition probabilities).

Given the number of sequences in the dataset, their length, and the number of states, the computational burden is larger than for the previous applications in this chapter. For illustrative purposes, instead of repeated EM runs with random starting values, we use a single EM run followed by global optimisation, using the argument global_step = TRUE. One benefit of this global (and local) step in fit_model() over the EM algorithm is the flexibility to define a maximum runtime (in seconds) for the optimisation process (argument maxtime in control_global). This can be valuable for larger problems with a predefined runtime budget (e.g., on a shared computer cluster). Note, however, that relying on the runtime can lead to non-reproducible results even with a fixed seed if the optimisation terminates due to the time limit. Finally, we run an additional local optimisation step using the results of the global optimisation, for more accurate results. The last argument, threads = 16, instructs fit_model() to use parallel computing for faster fitting (please customise according to the number of cores in your computer).
As for the starting values, note from the code below that we simulate the transition probabilities (with an emphasised diagonal, diag_c = 5) for all four clusters, and compute the initial probabilities from the distribution of states at the first time point. While in theory many global optimisation algorithms should eventually find the global optimum, in practice there are no guarantees that it is found in limited time. Thus, as earlier, it is advisable in practice to try different global/local optimisation algorithms and/or the EM algorithm with different initial values to make it more likely that the global optimum is found (see [4] for further discussion).

Figure 14: Process map for the overall process with initial probabilities.

```
set.seed(1)
trans_probs <- simulate_transition_probs(12, 4, diag_c = 5)
init_probs <- as.numeric(prop.table(table(seq_data_all[, 1])[1:12]))
init_probs <- replicate(4, init_probs, simplify = FALSE)

builtseqLMS <- build_mmm(seq_data_all,
                         transition_probs = trans_probs,
                         initial_probs = init_probs)

fitLMS <- fit_model(builtseqLMS,
                    global_step = TRUE,
                    control_global = list(maxtime = 3600, maxeval = 1e5,
                                          algorithm = "NLOPT_GD_STOGO_RAND"),
                    local_step = TRUE, threads = 16)

fitLMS$global_results$message
fitLMS$logLik
```

```
[1] "NLOPT_SUCCESS: Generic success return value."
[1] -114491.2
```

Before plotting the clusters, let us do some cleanup. First, we extract the transition probabilities of each cluster and assign them to separate variables; that way, they are easier to manipulate and work with. In the same way, we extract the initial probabilities of each cluster.

```
# extract transition probabilities of each cluster
Clusterpt1 <- fitLMS$model$transition_probs$`Cluster 1`
Clusterpt2 <- fitLMS$model$transition_probs$`Cluster 2`
Clusterpt3 <- fitLMS$model$transition_probs$`Cluster 3`
Clusterpt4 <- fitLMS$model$transition_probs$`Cluster 4`

# extract initial probabilities of each cluster
Clusterinitp1 <- fitLMS$model$initial_probs$`Cluster 1`
Clusterinitp2 <- fitLMS$model$initial_probs$`Cluster 2`
Clusterinitp3 <- fitLMS$model$initial_probs$`Cluster 3`
Clusterinitp4 <- fitLMS$model$initial_probs$`Cluster 4`
```

Plotting the process maps can be performed in the same way as before. However, if we need to compare clusters, it is best to use a unified layout. An average layout can be computed with the function averageLayout(), which takes the transition probabilities of the four clusters as input and creates -- as the name implies -- an averaged layout. Another option is to use the layout of the overall plot from the previous example; this can be obtained from the plot object as overallplot$layout. This can be helpful if you would like to plot the four graphs corresponding to each cluster with the same layout as the overall plot (see Figure 15).
```
Labelx <- colnames(Clusterpt1) # we need to get the labels
Averagelayout <- averageLayout(list(Clusterpt1, Clusterpt2,
                                    Clusterpt3, Clusterpt4))
Overalllayout <- overallplot$layout # you can also try this layout from the previous plot

qgraph(Clusterpt1, cut = 0.15, minimum = 0.05, labels = Labelx,
       edge.labels = TRUE, edge.label.cex = 0.65, color = cpal(seq_data_all),
       layout = Averagelayout, pie = Clusterinitp1, curveAll = TRUE,
       theme = "colorblind", title = "Diverse")
qgraph(Clusterpt2, cut = 0.15, minimum = 0.05, labels = Labelx,
       edge.labels = TRUE, edge.label.cex = 0.65, color = cpal(seq_data_all),
       layout = Averagelayout, pie = Clusterinitp2, curveAll = TRUE,
       theme = "colorblind", title = "Assignment-oriented")
qgraph(Clusterpt3, cut = 0.15, minimum = 0.05, labels = Labelx,
       edge.labels = TRUE, edge.label.cex = 0.65, color = cpal(seq_data_all),
       layout = Averagelayout, pie = Clusterinitp3, curveAll = TRUE,
       theme = "colorblind", title = "Practical-oriented")
qgraph(Clusterpt4, cut = 0.15, minimum = 0.05, labels = Labelx,
       edge.labels = TRUE, edge.label.cex = 0.65, color = cpal(seq_data_all),
       layout = Averagelayout, pie = Clusterinitp4, curveAll = TRUE,
       theme = "colorblind", title = "Group-centered")
```

Figure 15: Process maps for each cluster.

Oftentimes, the researcher is interested in comparing two pre-defined fixed groups, e.g., high achievers and low achievers, rather than the computed clusters. In the next example we will compare high to low achievers based on their achievement levels. First, we have to create a separate sequence object for each group. We do this by filtering the existing sequence object, but you can do it in other ways; for instance, you can create two sequence objects from scratch, one for each group. The next step is to build an MM separately for each group.

```
seq_high <- seq_data_all[seq_data$Achievementlevel4 <= 2, ]
seq_low <- seq_data_all[seq_data$Achievementlevel4 > 2, ]

high_mm <- build_mm(seq_high)
low_mm <- build_mm(seq_low)
```

Before plotting the groups, let us do some cleaning, as we did before. First, we get the transition and initial probabilities of each group, and we also compute an average layout; a sketch of such a chunk is shown below. Please note that you can use the layout from the previous examples if you are comparing the models against each other and need a unified framework.
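The cleanup chunk itself is not shown in the text; the following is a minimal sketch consistent with the surrounding description. The names Highprobs, Lowprobs, Highinit, and Lowinit are chosen here to match the difference plot that follows, and we assume that build_mm() stores its estimates in transition_probs and initial_probs, as used for fitLMS$model above.

```
# Extract the estimated probabilities of each group (sketch).
Highprobs <- high_mm$transition_probs
Lowprobs  <- low_mm$transition_probs
Highinit  <- high_mm$initial_probs
Lowinit   <- low_mm$initial_probs
Averagelayout <- averageLayout(list(Highprobs, Lowprobs))

# Plot both groups with the shared layout (Figure 16).
qgraph(Highprobs, cut = 0.15, minimum = 0.05, labels = Labelx,
       edge.labels = TRUE, edge.label.cex = 0.65, layout = Averagelayout,
       color = cpal(seq_data_all), pie = Highinit, curveAll = TRUE,
       theme = "colorblind", title = "High achievers")
qgraph(Lowprobs, cut = 0.15, minimum = 0.05, labels = Labelx,
       edge.labels = TRUE, edge.label.cex = 0.65, layout = Averagelayout,
       color = cpal(seq_data_all), pie = Lowinit, curveAll = TRUE,
       theme = "colorblind", title = "Low achievers")
```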
The plotting itself is done as before (see Figure 16). We can also draw a difference plot (see Figure 17), that is, a plot of what the low achievers do less than the high achievers. In this plot, red edges are negative (events they do less) and blue edges are positive (events they do more than the high achievers). As you can see, the differences are not that large. In fact, much of the literature comparing high and low achievers uses higher thresholds, e.g., top 25% versus bottom 25% or even top 10% versus bottom 10%.

Figure 16: Process maps for high achievers and low achievers using average layout.

```
diffplot <- qgraph(Lowprobs - Highprobs, cut = 0.15, minimum = 0.05,
                   labels = Labelx, edge.labels = TRUE, edge.label.cex = 0.65,
                   layout = Averagelayout, color = cpal(seq_data_all),
                   theme = "colorblind")
```

## 5 Conclusions & further readings

Markovian models provide a flexible model-based approach for analysing complex sequence data. MMs and HMMs have proven useful in many application areas such as biology and speech recognition, and can be a valuable tool in analysing data in educational settings as well. Their mixture variants allow for the representation of complex systems by combining multiple MMs or HMMs, each capturing different aspects of the underlying processes, allowing probabilistic clustering, information compression (e.g., visualisation of multicategory data from multiple domains), and detection of latent features of sequence data (e.g., extraction of different learning strategies). The ability to incorporate covariates in the case of MMMs and MHMMs makes those models even more powerful, and in general MMs and MMMs are useful tools in the field of process mining as well.

The seqHMM package used in the examples supports time-constant covariates for predicting cluster memberships for each individual. In theory, covariates could be used to define transition or emission probabilities as well, leading to subject-specific and possibly time-varying transition and emission probabilities (in the case of time-varying covariates). However, at the time of writing this chapter, these are not supported in seqHMM (this may change in the future). In R, there are at least two other potentially useful packages: for MMs, the dynamic [38] package supports covariates on the transition probabilities with potentially time-varying effects, whereas LMest [39] supports MMs and HMMs with covariates, as well as restricted variants of the MHMM where only the initial and transition matrices vary between clusters. Going beyond the R software, some commercial software also offers tools for analysing Markovian models, including latentGold [40] and Mplus [11].

The conditional independence assumption of observations given the latent states in HMMs can sometimes be unrealistic. In these settings, the so-called double chain MMs can be used [41]; there, the current observation is allowed to depend on both the current state and the previous observation. Some restricted variants of such models are implemented in the march package in R [42]. Finally, variable order MMs extend basic MMs by allowing the order of the MM to vary in time. A TraMineR-compatible implementation of variable-order models can be found in the PST package [43].

Figure 17: Difference between process maps of high achievers and low achievers using average layout.

We encourage readers to read more about how to interpret the results in the original study from which the data for this chapter were drawn [36]. We also encourage readers to learn more about Markovian models in the context of multi-channel sequence analysis in Chapter 13 [8].
2310.02670
Searching 2D-Strings for Matching Frames
We introduce the natural notion of a matching frame in a $2$-dimensional string. A matching frame in a $2$-dimensional $n\times m$ string $M$, is a rectangle such that the strings written on the horizontal sides of the rectangle are identical, and so are the strings written on the vertical sides of the rectangle. Formally, a matching frame in $M$ is a tuple $(u,d,\ell,r)$ such that $M[u][\ell ..r] = M[d][\ell ..r]$ and $M[u..d][\ell] = M[u..d][r]$. In this paper, we present an algorithm for finding the maximum perimeter matching frame in a matrix $M$ in $\tilde{O}(n^{2.5})$ time (assuming $n \ge m$). Additionally, for every constant $\epsilon > 0$ we present a near-linear $(1-\epsilon)$-approximation algorithm for the maximum perimeter of a matching frame. In the development of the aforementioned algorithms, we introduce inventive technical elements and uncover distinctive structural properties that we believe will captivate the curiosity of the community.
Itai Boneh, Dvir Fried, Shay Golan, Matan Kraus, Adrian Miclaus, Arseny Shur
2023-10-04T09:16:31Z
http://arxiv.org/abs/2310.02670v2
# Searching 2D-Strings for Matching Frames

###### Abstract

We introduce the natural notion of a matching frame in a 2-dimensional string. A matching frame in a 2-dimensional \(n\times m\) string \(M\) is a rectangle such that the strings written on the horizontal sides of the rectangle are identical, and so are the strings written on the vertical sides of the rectangle. Formally, a matching frame in \(M\) is a tuple \((u,d,\ell,r)\) such that \(M[u][\ell..r]=M[d][\ell..r]\) and \(M[u..d][\ell]=M[u..d][r]\). In this paper, we present an algorithm for finding the maximum perimeter matching frame in a matrix \(M\) in \(\tilde{O}(n^{2.5})\) time (assuming \(n\geq m\)). Additionally, for every constant \(\varepsilon>0\) we present a near-linear \((1-\varepsilon)\)-approximation algorithm for the maximum perimeter of a matching frame. In the development of the aforementioned algorithms, we introduce inventive technical elements and uncover distinctive structural properties that we believe will captivate the curiosity of the community.

LCP, range queries, 2d strings

## 1 Introduction

We consider the problem of finding a large _matching frame_ in a 2-dimensional string (i.e. a matrix) \(M\) over an alphabet \(\Sigma\). A _frame_ in \(M\) is a rectangle represented by a tuple \((u,d,\ell,r)\) such that \(u<d\) and \(\ell<r\). In this way, \(M[u][\ell]\) and \(M[d][r]\) are the top left corner and the bottom right corner of the rectangle, respectively. A frame is _matching_ if the string written on the top side equals the string written on the bottom side and also the string written on the left side equals the string written on the right side. Formally, \((u,d,\ell,r)\) is a matching frame if \(M[u][\ell..r]=M[d][\ell..r]\) and \(M[u..d][\ell]=M[u..d][r]\) (see Figure 1).

Figure 1: An example of a matching frame \((u,d,\ell,r)=(2,6,3,9)\). The strings on the top and bottom sides of the frame are equal, and the strings on the left and right sides are also equal. The perimeter of the frame is \(2\cdot(6-2+9-3)=20\). The matrix also contains a smaller matching frame.

Throughout the years, a large variety of notions for repetitive sub-structures in strings have been explored [15, 24, 20, 30, 22]. Even recently, new algorithmic and combinatorial results regarding palindromes [7, 29], squares [13], runs [6, 12], and powers [4] have been introduced. Several works have considered generalizations of periodicity to 2-dimensional strings [2, 3, 5, 9, 16, 23]. A matching frame is a natural notion of repetitiveness in 2-dimensional strings. Intuitively, a matching frame consists of a one-dimensional horizontal repetition and a one-dimensional vertical repetition that are linked via the structure of the matrix.

The _perimeter_ of a frame \(F=(u,d,\ell,r)\) is the total number of cells in its marginal rows and columns, i.e. \(\mathsf{per}(F)=2(d-u+r-\ell)\). By a _maximum_ frame (in a set of frames) we always mean the frame with the maximal perimeter in this set. In the _maximum matching frame problem_, the goal is to find a maximum matching frame in a given matrix or report that no matching frame exists. We also consider the \((1-\varepsilon)\)-approximation version of the problem, in which the goal is to find a matching frame with a perimeter within the factor \((1-\varepsilon)\) from the maximum possible.

Our Results. We present algorithms that establish the following bounds on the complexity of the maximum matching frame problem and its approximation version.

**Theorem 1** (Maximum Matching Frame).: _The time complexity of the maximum matching frame problem for an \(n\times m\) matrix \(M\) is \(\tilde{O}(n^{2.5})\) in the case \(m=\Theta(n)\). In the general case, the complexity is \(\tilde{O}(ab\min\{a,\sqrt{b}\})\), where \(a=\min\{n,m\}\) and \(b=\max\{n,m\}\).1_

Footnote 1: Throughout the paper, \(\tilde{O}(f(n))=O(f(n)\cdot\mathsf{polylog}\,n)\)
**Theorem 2** (\((1-\varepsilon)\)-Approximation).: _The time complexity of the \((1-\varepsilon)\)-approximation maximum matching frame problem for an \(n\times m\) matrix \(M\) is \(\tilde{O}(\frac{nm}{\varepsilon^{4}})\)._

Theorem 2 immediately implies the bound for the decision version of the problem.

**Corollary 3**.: _The time complexity of deciding whether an \(n\times m\) matrix \(M\) contains a matching frame is \(\tilde{O}(nm)\)._

We remark that our exact and approximation algorithms can be straightforwardly adapted to find matching frames with the maximum area / the minimum perimeter / the minimum area instead of matching frames with the maximum perimeter.

### Technical Overview

Here we give a high-level overview of the algorithms introduced in the paper.

Maximum Matching Frame. The algorithm for finding a maximum matching frame follows a heavy-light approach. The parameter used to distinguish between heavy and light frames is the _shorter side_ of the frame. A frame \(F=(u,d,\ell,r)\) has _height_ \(d-u\) and _width_ \(r-\ell\). We assume that there is a maximum matching frame having its height smaller than or equal to its width. (Either the input matrix or its transpose satisfies this assumption, so we can apply our algorithm to both matrices and return the better of the two results.) For some integer threshold \(x\), we say that a frame with \(d-u\leq x\) is _short_ (or light); otherwise, it is _tall_ (or heavy). We provide two algorithms: one that returns a maximum _short_ matching frame in \(M\) and another that returns a maximum _tall_ matching frame in \(M\). The larger of the two answers is the maximum matching frame in \(M\).

The algorithm for short frames iterates over all pairs of rows with distance at most \(x\) from each other. Note that there are \(O(nx)\) such pairs. Moreover, under the assumption that some matching frame \(F=(u,d,\ell,r)\) is short, the rows \(u\) and \(d\) used by \(F\) are processed as a pair. When processing a pair, the algorithm decomposes its rows into maximal equal segments. Every segment is processed in linear time to obtain a maximum matching frame that uses a portion of the segment as top and bottom sides (see Section 5.1). The accumulated size of the segments is bounded by \(m\), so the algorithm runs in \(\tilde{O}(n\cdot m\cdot x)\) time.

The algorithm for tall frames (see Section 5.2) first guesses a range \([H/2..H]\) for the height and a range \([W/2..W]\) for the width of a maximum matching frame. As we consider tall frames, the ranges are sufficiently large, so it is easy to find a small set of positions \(\mathcal{P}\) in the matrix \(M\) such that every frame with the height and width from the given ranges contains a position from \(\mathcal{P}\). The algorithm employs a subroutine that, given \(H,W\), and a position \((i,j)\), computes a maximum matching frame among the frames that contain \((i,j)\), have the height in \([H/2..H]\) and the width in \([W/2..W]\). The implementation of this subroutine is the main technical part of the algorithm.
This is done by maintaining and querying a range data structure (see Section 4) that allows one to process pairs of columns and pairs of rows with the position \((i,j)\) between them. There are \(O(W^{2})\) pairs of columns and \(O(H^{2})\) pairs of rows to be processed, which we do in \(\tilde{O}(H^{2}+W^{2})=\tilde{O}(W^{2})\) total time. We also show that \(|\mathcal{P}|=O(\frac{nm}{HW})\), and therefore the running time for one pair of ranges is \(\tilde{O}(nm\frac{W}{H})\). Since \(\max\{n,m\}\geq W\), \(H\geq x\), and \(O(\log n\log m)\) guesses of ranges suffice to cover all tall frames, we reach the desired running time of \(\tilde{O}(nm\frac{\max\{n,m\}}{x})\).

Finally, the algorithm selects the threshold \(x=\sqrt{\max\{n,m\}}\) and applies the algorithms for both the short and the tall case to obtain a running time of \(\tilde{O}(nm\sqrt{\max\{n,m\}})\). Alternatively, one can run the algorithm for short frames alone, setting \(x=\min\{n,m\}\). Taking the better of these two options proves Theorem 1.

#### Approximation Algorithm.

As a preliminary step in our approach for finding a \((1-\varepsilon)\)-approximation to the maximum matching frame, we apply a two-dimensional variant of the so-called _standard trick_ [11, 8] from certain one-dimensional pattern matching problems. In pattern matching, we are given a text \(T[1..n]\) and a pattern \(P[1..m]\) and the goal is to find all the indices \(i\in[n-m+1]\) such that \(T[i..i+m-1]\) "matches" \(P\). The standard trick refers to partitioning \(T\) into \(O(n/m)\) overlapping fragments of size \(\Theta(m)\), such that every match of \(P\) is contained in a fragment. In general, the trick allows us to assume that the length of the text is within a small factor from the length of the pattern.

The two-dimensional variant of the standard trick (see Lemma 17) allows us to assume that both dimensions of the maximum matching frame are within a \(\mathrm{poly}(1-\varepsilon)\) factor of the vertical and the horizontal dimensions of \(M\). This assumption allows us to focus on matching frames with sides that are relatively "close" to the boundaries of \(M\); we call such frames _large_. The algorithm uses a carefully selected threshold for being close to the boundaries that guarantees the following two properties: (1) the maximum matching frame is large, and (2) the perimeter of every large frame approximates the perimeter of the maximum matching frame. With that, the problem boils down to deciding whether there exists a large matching frame. The main technical novelty of the approximation algorithm is solving this decision problem in near-linear time.

The algorithm for the above decision problem consists of two main components. The first component (see Section 6.3) is an \(\tilde{O}(1)\) time procedure that, given a triplet \((u,d,\ell)\), decides if there is an integer \(r\) such that \((u,d,\ell,r)\) is a large matching frame. Unfortunately, applying this procedure to every triplet would result in an \(\Omega(n^{2}m)\) time algorithm. The second component (see Section 6.2) of the algorithm is the retrieval of a set of \(\tilde{O}(nm)\) triplets such that if some large matching frame exists, there must also be a large matching frame derived from one of these triplets.

We conclude by presenting the combinatorial structure that allows us to consider \(\tilde{O}(nm)\) triplets in the second component. Consider a triplet \((u,d,\ell)\) and let \(k\) be the largest integer such that \(M[u][\ell..k]=M[d][\ell..k]\) (let \(S\) denote this string).
Assuming there exists an index \(r\) such that \((u,d,\ell,r)\) is a large matching frame, one has \(r\leq k\). Observe that if there is an index \(d^{\prime}<d\) that is close to the bottom boundary of \(M\) such that \(M[d^{\prime}][\ell..k]=S\), then \((u,d^{\prime},\ell,r)\) is also a large matching frame. Therefore, the triplet \((u,d,\ell)\) can be removed from the set of triplets that have to be processed. We say that a triplet that is not eliminated due to this reasoning is _interesting_. Surprisingly, the number of interesting triplets is bounded by \(O(nm\log n)\) (see Section 6.1). This combinatorial observation is the main novelty of the approximation algorithm.

## 2 Preliminaries

We use range notation for integers and strings. We write \([i..j]\) and \([i..j)\) for the sets \(\{i,\ldots,j\}\) and \(\{i,\ldots,j-1\}\) respectively (assuming \(i\leq j\)). Further, we abbreviate \([1..n]\) to \([n]\).

A string \(S[1..n]=S[1]S[2]\ldots S[n]\) is a sequence of characters from an alphabet \(\Sigma\). We also write \(\overleftarrow{S[1..n]}=S[n]S[n{-}1]\ldots S[1]\). For every \(i\leq j\in[n]\), \(S[i..j]=S[i]S[i+1]\ldots S[j]\) is a _substring_ of \(S\). The substring is called a _prefix_ (resp., a _suffix_) of \(S\) if \(i=1\) (resp., \(j=n\)). We assume \(\Sigma\) to be linearly ordered, so we can speak about the _lexicographic order_ on strings.

An \(n\times m\) _matrix_ \(M\) is a 2-dimensional array of symbols from \(\Sigma\) (in other words, a 2-dimensional string). We say that the _size_ of \(M\) is \(|M|=n\cdot m\), the number of cells in \(M\). We denote a horizontal substring of \(M\) as \(M[i][j_{1}..j_{2}]=M[i][j_{1}]M[i][j_{1}+1]\ldots M[i][j_{2}]\). Similarly, we denote a vertical substring as \(M[i_{1}..i_{2}][j]=M[i_{1}][j]M[i_{1}+1][j]\ldots M[i_{2}][j]\).

### Suffix Arrays, Longest Common Prefix

For a tuple of strings \(\mathcal{S}=(S_{1},S_{2},\ldots,S_{n})\), the _lexicographically sorted array_ \(\mathsf{LSA}_{\mathcal{S}}\) is an array of length \(n\) that stores the lexicographic order of the strings in \(\mathcal{S}\). Formally, \(\mathsf{LSA}_{\mathcal{S}}[i]=j\) if \(S_{j}\) is the \(i\)th string in \(\mathcal{S}\) according to the lexicographic order. For a string \(S[1..n]\), the _suffix array_ \(\mathsf{SA}_{S}\) of \(S\) is the \(\mathsf{LSA}\) of all suffixes of \(S\). Formally, for every \(i\in[n]\) let \(S_{i}=S[i..n]\) and let \(\mathcal{S}_{S}=(S_{1},S_{2},\ldots,S_{n})\); then \(\mathsf{SA}_{S}=\mathsf{LSA}_{\mathcal{S}_{S}}\). The suffix arrays were introduced by Manber and Myers [25] and became ubiquitous in string algorithms. The array can be constructed in linear time and space by many algorithms [18, 19, 21, 27, 28].

**Lemma 4**.: _Given a string \(S[1..n]\) over an integer alphabet of size polynomial in \(n\), the suffix array of \(S\) can be constructed in \(O(n)\) time and space._

An important computational primitive is a data structure for computing the length of the _longest common prefix_ of two strings \(S[1..n]\) and \(T[1..m]\), given as \(\mathsf{LCP}(S,T)=\max\{\ell\in[\min\{n,m\}]\mid S[1..\ell]=T[1..\ell]\}\). An \(\mathsf{LCP}\) data structure \(\mathsf{LCP}_{\mathcal{S}}\) for a set of strings \(\mathcal{S}=\{S_{1},S_{2},\ldots,S_{n}\}\) supports queries in the form "given two indices \(i,j\in[n]\), report \(\mathsf{LCP}(S_{i},S_{j})\)". We denote by \(\mathsf{LCP}(S)\) the \(\mathsf{LCP}\) data structure for the set of suffixes of a given string \(S[1..n]\).
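For a small example, let \(S=\mathtt{aba}\). Its suffixes are \(S_{1}=\mathtt{aba}\), \(S_{2}=\mathtt{ba}\) and \(S_{3}=\mathtt{a}\); since \(\mathtt{a}<\mathtt{aba}<\mathtt{ba}\) in the lexicographic order, we get \(\mathsf{SA}_{S}=[3,1,2]\). Moreover, \(\mathsf{LCP}(S_{1},S_{3})=1\) while \(\mathsf{LCP}(S_{1},S_{2})=0\).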
It is known that the following can be obtained by applying the LCA data structure of [17] to the suffix tree of [30].

**Lemma 5**.: _There is an \(\mathsf{LCP}\) data structure with \(O(1)\) query time. The data structure can be constructed from \(S\) in \(\tilde{O}(n)\) time and uses \(O(n)\) space._

The following facts are straightforward.

**Fact 6**.: _Let \(\mathcal{S}=(S_{1},S_{2},\ldots,S_{n})\) be a tuple of lexicographically ordered strings. For every \(i\leq j\leq k\in[n]\), it holds that \(\min(\mathsf{LCP}(S_{i},S_{j}),\mathsf{LCP}(S_{j},S_{k}))\geq\mathsf{LCP}(S_{i},S_{k})\)._

**Fact 7**.: _Let \(\mathcal{S}=(S_{1},S_{2},\ldots,S_{n})\) be a tuple of strings and let \(P[1..m]\) be a string. Consider the set \(\mathsf{Occ}(\mathcal{S},P)=\{k\mid S_{k}[1..m]=P\}\). There is a pair of integers \(i,j\in[n]\) such that \(\mathsf{Occ}(\mathcal{S},P)=\mathsf{LSA}_{\mathcal{S}}[i..j]\). Furthermore, there is an \(O(\log n)\) time algorithm that given \(k\), \(m\), \(\mathsf{LSA}_{\mathcal{S}}\), and \(\mathsf{LCP}_{\mathcal{S}}\) computes \(i\) and \(j\) such that \(\mathsf{Occ}(\mathcal{S},S_{k}[1..m])=\mathsf{LSA}_{\mathcal{S}}[i..j]\)._

The algorithm specified in Fact 7 can be obtained in two steps as follows. First, apply a binary search on \(\mathsf{LSA}_{\mathcal{S}}\) for an index \(k^{\prime}\) that satisfies \(\mathsf{LSA}_{\mathcal{S}}[k^{\prime}]=k\) (note that the lexicographic order between two strings in \(\mathcal{S}\) can be decided using an \(\mathsf{LCP}\) query). Then, apply a binary search in both directions of \(k^{\prime}\) to find the minimal and maximal indices \(i\in[1..k^{\prime}]\) and \(j\in[k^{\prime}..n]\) such that \(S_{\mathsf{LSA}[i]}[1..m]=S_{k}[1..m]\) and \(S_{\mathsf{LSA}[j]}[1..m]=S_{k}[1..m]\).

**Definition 8** (Fingerprint of a string).: _For a tuple \(\mathcal{S}\) and a string \(P=S_{k}[1..m]\), we define the fingerprint of \(P\) in \(\mathcal{S}\) to be the tuple \((i,j,m)\) with \(i\) and \(j\) being the indices specified in Fact 7._

**Fact 9**.: _Given three strings \(S_{1},S_{2}\) and \(S_{3}\), the condition \(\mathsf{LCP}(S_{1},S_{2})>\mathsf{LCP}(S_{1},S_{3})\) implies \(\mathsf{LCP}(S_{1},S_{3})=\mathsf{LCP}(S_{2},S_{3})\)._
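To make Fact 7 and Definition 8 concrete, consider \(S=\mathtt{banana}\). Its suffixes in lexicographic order are \(\mathtt{a},\mathtt{ana},\mathtt{anana},\mathtt{banana},\mathtt{na},\mathtt{nana}\), so \(\mathsf{SA}_{S}=[6,4,2,1,5,3]\). For \(P=\mathtt{ana}=S_{4}[1..3]\), the suffixes having \(P\) as a prefix are \(S_{4}\) and \(S_{2}\); they occupy the contiguous range \(\mathsf{SA}_{S}[2..3]\), so the fingerprint of \(P\) is \((2,3,3)\).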
### Orthogonal Range Queries

Our algorithms make extensive use of _orthogonal range queries_ data structures. For a positive integer dimension \(d\), a \(d\)-dimensional data structure stores a set \(\mathcal{P}\subseteq\mathbb{R}^{d}\) of \(d\)-dimensional points. The data structure supports queries regarding an input \(d\)-dimensional orthogonal range \(R=[a_{1}..b_{1}]\times[a_{2}..b_{2}]\times\ldots\times[a_{d}..b_{d}]\). For a point \(p=(x_{1},x_{2},\ldots,x_{d})\) one has \(p\in R\) if \(x_{i}\in[a_{i}..b_{i}]\) for every \(i\in[1..d]\). Additionally, every point \(p\in\mathcal{P}\) is associated with a value \(v(p)\in\mathbb{R}\). In our work, we are interested in the following queries.

1. \(\mathsf{Empty}(R)\): Is there a point in \(R\cap\mathcal{P}\)?
2. \(\mathsf{Report}(R)\): Return \(R\cap\mathcal{P}\).
3. \(\mathsf{Maximum}(R)\): Return \(\operatorname{argmax}_{p\in R\cap\mathcal{P}}v(p)\).
4. \(\mathsf{Minimum}(R)\): Return \(\operatorname{argmin}_{p\in R\cap\mathcal{P}}v(p)\).

Data structures for orthogonal range queries are a fundamental object in computational geometry with plenty of implementations that are optimized for various parameters in various settings. In this work, we are using an orthogonal range queries data structure [31, 10, 14, 26] with the following running times.

**Lemma 10**.: _For every integer \(d\in O(1)\), a set of \(n\) points in \(\mathbb{R}^{d}\) can be preprocessed in \(\tilde{O}(n)\) time to support \(\mathsf{Empty}\), \(\mathsf{Report}\), \(\mathsf{Maximum}\), and \(\mathsf{Minimum}\) range queries in \(\tilde{O}(1)\) time. The data structure uses \(\tilde{O}(n)\) space._

Throughout the paper, we often query a range data structure for the point \(p=(x_{1},x_{2},\ldots,x_{d})\in R\cap\mathcal{P}\) that maximizes or minimizes one of the coordinates \(x_{i}\). These queries can be supported by constructing an additional copy of the data structure with \(v(p)=x_{i}\).

## 3 Data Structures

When looking for matching frames in an \(n\times m\) matrix \(M\), we make use of the following data structures, which all our algorithms create during their preprocessing phase.

* For each column \(\ell\in[m]\) we use
  * a lexicographically sorted array \(\mathsf{LSA}^{\ell}_{\mathsf{rows}}\) of the strings \(\{M[i][\ell..m]\mid i\in[n]\}\) (see Figure 3(a));
  * an \(\mathsf{LCP}\) structure \(\mathsf{LCP}^{\ell}_{\mathsf{rows}}\) over \(\mathsf{LSA}^{\ell}_{\mathsf{rows}}\);
  * a range query structure \(D^{\ell}_{\mathsf{rows}}\), containing all pairs \(\{(i,I^{i,\ell}_{\mathsf{rows}})\mid i\in[n]\}\), where \(I^{i,\ell}_{\mathsf{rows}}\) is the index of the string \(M[i][\ell..m]\) in \(\mathsf{LSA}^{\ell}_{\mathsf{rows}}\) (see Figure 3(b)).

  In addition, we build the same three structures for the set of all strings of the form \(\overleftarrow{M[i][1..\ell]}\), denoted as \(\overleftarrow{\mathsf{LSA}}{}^{\ell}_{\mathsf{rows}}\), \(\overleftarrow{\mathsf{LCP}}{}^{\ell}_{\mathsf{rows}}\) and \(\overleftarrow{D}{}^{\ell}_{\mathsf{rows}}\).
* Symmetrically, for each row \(u\in[n]\) we use
  * a lexicographically sorted array \(\mathsf{LSA}^{u}_{\mathsf{columns}}\) of the strings \(\{M[u..n][i]\mid i\in[m]\}\);
  * an \(\mathsf{LCP}\) structure \(\mathsf{LCP}^{u}_{\mathsf{columns}}\) over \(\mathsf{LSA}^{u}_{\mathsf{columns}}\);
  * a range query structure \(D^{u}_{\mathsf{columns}}\), containing all pairs \(\{(i,I^{u,i}_{\mathsf{columns}})\mid i\in[m]\}\), where \(I^{u,i}_{\mathsf{columns}}\) is the index of the string \(M[u..n][i]\) in \(\mathsf{LSA}^{u}_{\mathsf{columns}}\).

  In addition, we build the same three structures for the set of all strings of the form \(\overleftarrow{M[1..u][i]}\), denoted as \(\overleftarrow{\mathsf{LSA}}{}^{u}_{\mathsf{columns}}\), \(\overleftarrow{\mathsf{LCP}}{}^{u}_{\mathsf{columns}}\) and \(\overleftarrow{D}{}^{u}_{\mathsf{columns}}\).

In the remainder of this section we show how to construct the data structures for the rows in \(\tilde{O}(nm)\) time. The data structures for the columns can be built similarly. First we create the string \(M_{\mathsf{rows}}=M[1][1..m]\cdot\$_{1}\cdot M[2][1..m]\cdot\$_{2}\cdot\ldots\cdot M[n][1..m]\cdot\$_{n}\), where \(\$_{1}<\cdots<\$_{n}\) are distinct characters not in \(\Sigma\), and build its suffix array \(\mathsf{SA}_{\mathsf{rows}}\) using Lemma 4. The algorithm initializes \(\mathsf{LSA}^{\ell}_{\mathsf{rows}}\) for every \(\ell\in[m]\) as an empty array and then uses \(\mathsf{SA}_{\mathsf{rows}}\) to populate these arrays. Namely, we scan \(\mathsf{SA}_{\mathsf{rows}}\) from left to right. Note that the suffix starting at position \(i\) of \(M_{\mathsf{rows}}\) corresponds to the horizontal substring \(M[j][\ell..m]\) such that \(j=\lceil i/(m+1)\rceil\) and \(\ell=i\bmod(m+1)\) (unless \(\ell=0\), in which case position \(i\) holds a separator and is skipped). When scanning \(M[j][\ell..m]\), the algorithm appends the string \(M[j][\ell..m]\) to \(\mathsf{LSA}^{\ell}_{\mathsf{rows}}\).
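For instance, if \(m=4\), then the suffix of \(M_{\mathsf{rows}}\) starting at position \(i=7\) satisfies \(j=\lceil 7/5\rceil=2\) and \(\ell=7\bmod 5=2\), so it corresponds to the horizontal substring \(M[2][2..4]\); since the separators \(\$_{1},\ldots,\$_{n}\) are distinct, a comparison of two such suffixes is always resolved no later than at the first separator.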
Then, the algorithm constructs \(\mathsf{LCP}(M_{\mathsf{rows}})\) using Lemma 5. Using this structure, one can compute any \(\mathsf{LCP}\) query within any array \(\mathsf{LSA}^{\ell}_{\mathsf{rows}}\) in constant time. Indeed, in order to obtain the \(\mathsf{LCP}\) of \(M[i][\ell..m]\) and \(M[j][\ell..m]\), one can query \(\mathsf{LCP}(M_{\mathsf{rows}})\) with the pair of indices \((i-1)(m+1)+\ell\) and \((j-1)(m+1)+\ell\).

In order to construct \(D^{\ell}_{\mathsf{rows}}\), we view \(\mathsf{LSA}^{\ell}_{\mathsf{rows}}\) as a permutation of indices and compute the inverse permutation \(\mathsf{ILSA}^{\ell}_{\mathsf{rows}}\). Then we generate all the points \((i,I^{i,\ell}_{\mathsf{rows}})=(i,\mathsf{ILSA}^{\ell}_{\mathsf{rows}}[i])\) and build a 2-dimensional orthogonal range data structure over these points using Lemma 10. We obtain the same data structures for strings of the form \(\overleftarrow{M[i][1..\ell]}\) by running the same procedures over the string \(\overleftarrow{M_{\mathsf{rows}}}\).

Complexity. The time complexity for constructing the suffix array and an \(\mathsf{LCP}\) data structure is \(\tilde{O}(nm)\) via Lemma 4 and Lemma 5. The time required to invert a permutation is linear, and the orthogonal range data structure over \(n\) points is built in \(\tilde{O}(n)\) time (Lemma 10) for each \(\ell\in[m]\). Thus, the overall time complexity of the described preprocessing is \(\tilde{O}(nm)\).

## 4 The Segment Compatibility Data Structure

In this section we present the _segment compatibility data structure_ (SCDS), which is at the core of our maximum matching frame algorithm (see Section 5.2). We start with technical definitions (see Figure 3).

Segment, aligned pair, compatible pairs. A horizontal (resp. vertical) _segment_ is a triplet \((i,j_{1},j_{2})\) (resp. \((i_{1},i_{2},j)\)) with \(j_{1}<j_{2}\) (resp. \(i_{1}<i_{2}\)). It represents the horizontal (resp. vertical) segment in the plane connecting the points \((i,j_{1})\) and \((i,j_{2})\) (resp. \((i_{1},j)\) and \((i_{2},j)\)). A pair of horizontal segments \((s_{1},s_{2})\) is _aligned_ if \(s_{1}=(i_{1},j_{1},j_{2})\) and \(s_{2}=(i_{2},j_{1},j_{2})\) for some \(i_{1}<i_{2},j_{1}<j_{2}\in\mathbb{N}\). Such a pair has _distance_ \(|i_{2}-i_{1}|\). Symmetrically, a pair of vertical segments \((s_{1},s_{2})\) is aligned if \(s_{1}=(i_{1},i_{2},j_{1})\) and \(s_{2}=(i_{1},i_{2},j_{2})\) for some \(i_{1}<i_{2},j_{1}<j_{2}\in\mathbb{N}\). Such a pair has _distance_ \(|j_{2}-j_{1}|\). An aligned pair of horizontal segments \((i_{1},j_{1},j_{2})\) and \((i_{2},j_{1},j_{2})\) and an aligned pair of vertical segments \((a_{1},a_{2},b_{1})\) and \((a_{1},a_{2},b_{2})\) are _compatible_ if and only if \(a_{1}\leq i_{1}\leq i_{2}\leq a_{2}\) and \(j_{1}\leq b_{1}\leq b_{2}\leq j_{2}\) (see Figure 3).

The SCDS stores aligned pairs of horizontal segments and supports two operations:

1. \(\mathsf{Construct}(S)\): given a set \(S\) of aligned pairs of horizontal segments, initialize the data structure with all pairs from \(S\).
2. \(\mathsf{MaxCompatible}(v_{1},v_{2})\): given an aligned pair \((v_{1},v_{2})\) of vertical segments, return a pair \((h_{1},h_{2})\) with the maximum distance among the stored pairs compatible with \((v_{1},v_{2})\), or return null if no stored pair is compatible with \((v_{1},v_{2})\).

**Lemma 11**.: _The SCDS can be implemented in \(\tilde{O}(|S|)\) time for the \(\mathsf{Construct}\) operation and \(\tilde{O}(1)\) time for the \(\mathsf{MaxCompatible}\) operation._
Proof.: We provide an implementation for both desired operations.

\(\mathsf{Construct}(S)\). Let \(P=\big((i_{1},j_{1},j_{2}),(i_{2},j_{1},j_{2})\big)\in S\) be any aligned pair of horizontal segments. We define a 4-dimensional point \(\mathsf{point}(P)=(i_{1},i_{2},j_{1},j_{2})\) with the value \(v(\mathsf{point}(P))=i_{2}-i_{1}\). Then we construct a 4-dimensional range query data structure \(D\) for the points \(\{\mathsf{point}(P)\mid P\in S\}\).

\(\mathsf{MaxCompatible}(v_{1},v_{2})\). Let us denote the input as \((v_{1},v_{2})=\big((a_{1},a_{2},b_{1}),(a_{1},a_{2},b_{2})\big)\). Let \(R=([a_{1},a_{2}],[a_{1},a_{2}],[-\infty,b_{1}],[b_{2},\infty])\) be a 4-dimensional range. It is clear that a pair \(P\) is compatible with \((v_{1},v_{2})\) if and only if \(\mathsf{point}(P)\in R\). The algorithm queries \(D\) for \(\mathsf{Maximum}(R)\) and returns its output.

Due to Lemma 10, the running time of all the operations is as required.

## 5 Maximum Matching Frame

In this section we prove Theorem 1, describing an algorithm with the announced time complexity. We assume that the input matrix \(M\) contains a maximum matching frame \((u,d,\ell,r)\) whose height is smaller than or equal to its width, i.e. \(d-u\leq r-\ell\). To cover the complementary case, the algorithm is applied both to the original matrix \(M\) and to its transpose \(M^{T}\), and the maximum result is reported. Our algorithm distinguishes between two types of matching frames: _short_ frames of height at most \(x\) (for some parameter \(x\) determined later) and _tall_ frames of height larger than \(x\), and solves for each type separately. Then, the algorithm returns the maximum of both solutions, which is a maximum matching frame of \(M\).

### Algorithm for Short Frames

In this section we prove the following lemma:

**Lemma 12**.: _There is an algorithm that for a given \(x\in[n]\) finds, in \(\tilde{O}(n\cdot m\cdot x)\) time, a maximum matching frame of height at most \(x\)._

Proof.: For every two rows \(u^{\prime},d^{\prime}\in[n]\) such that \(d^{\prime}\in[u^{\prime}+1..u^{\prime}+x]\), the algorithm works as follows. First, the algorithm finds all maximal ranges \([a..b]\) such that \(M[u^{\prime}][a..b]=M[d^{\prime}][a..b]\). By "maximal" we mean that a range cannot be extended to the right or to the left while keeping the equality. Note that all maximal ranges are disjoint. For \(k\in[m]\) we denote the vertical string \(M[u^{\prime}..d^{\prime}][k]\) by \(S_{k}\).

Let \([a..b]\) be a maximal range. For every vertical string \(S_{k}\) with \(k\in[a..b]\) we find its leftmost and rightmost occurrences in the range \([a..b]\). This is achieved by initializing an empty dictionary \(D_{a,b}\) and scanning the range \([a..b]\) left to right. For each \(k\in[a..b]\) the algorithm computes the fingerprint \(f\) in \(\mathsf{LSA}^{u^{\prime}}_{\mathsf{columns}}\) of the string \(S_{k}\) (see Definition 8). If \(f\) is not in \(D_{a,b}\), we add \(f\) to \(D_{a,b}\) and update both the leftmost and rightmost occurrence of \(S_{k}\) to be \(k\). If \(f\) is already in \(D_{a,b}\), we update the rightmost occurrence of \(S_{k}\) to be \(k\). After completing the scan, the algorithm finds a vertical string \(S_{k}\) such that the distance between the leftmost occurrence \(\ell^{\prime}\) and the rightmost occurrence \(r^{\prime}\) of \(S_{k}\) is maximal. We call the frame \((u^{\prime},d^{\prime},\ell^{\prime},r^{\prime})\) the \((a,b)\)-range candidate of \((u^{\prime},d^{\prime})\).
If \(\ell^{\prime}=r^{\prime}\), we say that there is no \((a,b)\)-range candidate of \((u^{\prime},d^{\prime})\). Among all maximal ranges \([a..b]\), an \((a,b)\)-range candidate with the maximal perimeter is the \((u^{\prime},d^{\prime})\)-candidate (if there are no \((a,b)\)-range candidates for \((u^{\prime},d^{\prime})\), there is no \((u^{\prime},d^{\prime})\)-candidate). Among all pairs of rows \((u^{\prime},d^{\prime})\), the algorithm outputs a \((u^{\prime},d^{\prime})\)-candidate with the maximal perimeter. If there are no \((u^{\prime},d^{\prime})\)-candidates, the algorithm returns \(\mathsf{null}\).

Correctness. Let \(F^{\prime}=(u^{\prime},d^{\prime},\ell^{\prime},r^{\prime})\) be the frame returned by the algorithm. We claim that \(F^{\prime}\) is matching. Indeed, it is the \((a,b)\)-range candidate of \((u^{\prime},d^{\prime})\) for some range \([a..b]\) such that \(a\leq\ell^{\prime}<r^{\prime}\leq b\). Then, the equality \(M[u^{\prime}][a..b]=M[d^{\prime}][a..b]\) implies \(M[u^{\prime}][\ell^{\prime}..r^{\prime}]=M[d^{\prime}][\ell^{\prime}..r^{\prime}]\). The vertical strings \(M[u^{\prime}..d^{\prime}][\ell^{\prime}]\) and \(M[u^{\prime}..d^{\prime}][r^{\prime}]\) are equal as the leftmost and the rightmost occurrences of the same vertical string.

Let \(F=(u,d,\ell,r)\) be a maximum perimeter matching frame among the frames of height at most \(x\). When the algorithm iterates over the rows \(u,d\), it identifies a range \([a..b]\) such that \(a\leq\ell<r\leq b\). Let \(\hat{F}=(u,d,\hat{\ell},\hat{r})\) be the \((a,b)\)-range candidate of \((u,d)\). Since \(F\) is a valid choice for this candidate, the inequality \(r-\ell\leq\hat{r}-\hat{\ell}\) holds, implying \(\mathsf{per}(F)\leq\mathsf{per}(\hat{F})\leq\mathsf{per}(F^{\prime})\).

Complexity. The time complexity for a pair of rows \((u^{\prime},d^{\prime})\) is as follows. Identifying the maximal ranges takes \(O(m)\) time. A maximal range \([a..b]\) requires \(O(b-a)\) dictionary operations, each taking \(\tilde{O}(1)\) time, using, for example, an AVL tree [1]. Since all the maximal ranges of \((u^{\prime},d^{\prime})\) are disjoint, their lengths sum to at most \(m\), leading to the running time \(\tilde{O}(m)\) for \((u^{\prime},d^{\prime})\). Since \(d^{\prime}\in[u^{\prime}+1..u^{\prime}+x]\), there are \(O(n\cdot x)\) pairs of rows to process. Therefore, the total running time of the algorithm is \(\tilde{O}(n\cdot m\cdot x)\).

### Algorithm for Tall Frames

In this section, we prove the following lemma:

**Lemma 13**.: _There is an algorithm that for a given \(x\in[n]\) finds, in \(\tilde{O}(\frac{n\cdot m^{2}}{x})\) time, a maximum matching frame of height at least \(x\)._

Given a frame \(F=(u,d,\ell,r)\) and a position \(p=(i,j)\) such that \(i\in[u..d]\) and \(j\in[\ell..r]\), we say that \(p\) is _contained_ in \(F\) and \(F\) _contains_ \(p\). We introduce an algorithm that finds a maximum matching frame containing a given position. Then we describe a small set of positions such that every tall frame contains a position from this set. Running the mentioned algorithm for each position of this set, we get the frame required by Lemma 13.

Maximum matching frame containing a given position. We start by describing a subroutine finding a maximum matching frame containing a given position. For the sake of efficiency, we bound both the width and the height of the frames processed in one pass. The runtime of the subroutine is quadratic in the bounds on the width and height.
**Lemma 14**.: _Given a position \((i,j)\) in \(M\) and a pair of positive integers \((H,W)\in[n]\times[m]\), there is an algorithm that finds, in \(\tilde{O}(H^{2}+W^{2})\) time, a maximum matching frame \((u,d,\ell,r)\) with the following properties:_

1. _the position \((i,j)\) is contained in \((u,d,\ell,r)\),_
2. \(d-u\in[H/2..H]\)_,_
3. \(r-\ell\in[W/2..W]\)_._

Proof.: Intuitively, the algorithm iterates over all pairs \(\ell\) and \(r\) satisfying Properties 1 and 3. For every such pair, the algorithm computes the maximal aligned agreement between the columns \(\ell\) and \(r\) intersecting the \(i\)th row. The algorithm constructs an SCDS over all such pairs (here the SCDS of Section 4 is used with the roles of horizontal and vertical segments exchanged, which is possible since the construction is fully symmetric). Then the algorithm iterates over all pairs \(u\) and \(d\) satisfying Properties 1 and 2 and computes the maximal aligned agreement between the rows \(u\) and \(d\) intersecting the \(j\)th column. The algorithm queries the SCDS to find a maximum matching frame \((u,d,\ell,r)\) for some \(\ell\) and \(r\) processed in the previous step. The following describes the technical details of implementing the above approach.

For every pair \((\ell,r)\in[m]^{2}\) such that \(r-\ell\in[W/2..W]\) and \(j\in[\ell..r]\), the algorithm finds the maximal aligned agreement between the columns \(\ell\) and \(r\) intersecting the \(i\)th row by executing two \(\mathsf{LCP}\) queries. First the algorithm queries \(\mathsf{LCP}^{i}_{\mathsf{columns}}\) to obtain the maximal \(d^{\prime}\) such that \(M[i..d^{\prime}][\ell]=M[i..d^{\prime}][r]\). Similarly, the algorithm queries \(\overleftarrow{\mathsf{LCP}}{}^{i}_{\mathsf{columns}}\) to obtain the minimal \(u^{\prime}\) such that \(M[u^{\prime}..i][\ell]=M[u^{\prime}..i][r]\). Then the algorithm stores the pair of segments \(s_{1}=(u^{\prime},d^{\prime},\ell)\) and \(s_{2}=(u^{\prime},d^{\prime},r)\). To conclude this part, the algorithm constructs an \(\mathsf{SCDS}\) over all stored pairs.

Next, the algorithm iterates over all pairs \((u,d)\in[n]^{2}\) such that \(d-u\in[H/2..H]\) and \(i\in[u..d]\). For each such pair, the algorithm queries the data structures \(\mathsf{LCP}^{j}_{\mathsf{rows}}\) and \(\overleftarrow{\mathsf{LCP}}{}^{j}_{\mathsf{rows}}\) (similarly to the above computation of vertical agreements), obtaining the minimal \(\ell^{\prime}\) and the maximal \(r^{\prime}\) such that \(M[u][\ell^{\prime}..r^{\prime}]=M[d][\ell^{\prime}..r^{\prime}]\). The algorithm then constructs the horizontal aligned pair of segments \(s^{h}_{1}=(u,\ell^{\prime},r^{\prime})\) and \(s^{h}_{2}=(d,\ell^{\prime},r^{\prime})\). The algorithm queries the \(\mathsf{SCDS}\) for \((s^{v}_{1},s^{v}_{2})\leftarrow\mathsf{MaxCompatible}(s^{h}_{1},s^{h}_{2})\). Let \(s^{v}_{1}=(t_{1},t_{2},\ell)\) and \(s^{v}_{2}=(t_{1},t_{2},r)\). We call the frame \((u,d,\ell,r)\) the _\((u,d)\)-optimal frame_. If the query \(\mathsf{MaxCompatible}(s^{h}_{1},s^{h}_{2})\) returns \(\mathsf{null}\), there is no \((u,d)\)-optimal frame. The algorithm reports the \((u,d)\)-optimal frame with the maximum perimeter among all pairs \((u,d)\). If for every \((u,d)\) there is no \((u,d)\)-optimal frame, the algorithm returns \(\mathsf{null}\).

Correctness. Clearly, every frame \((u,d,\ell,r)\) identified by the algorithm satisfies Properties 1-3. We proceed to show that it is a matching frame. Recall that \((u,d,\ell,r)\) was obtained from two compatible pairs of segments \(s^{v}_{1}\), \(s^{v}_{2}\) and \(s^{h}_{1}\), \(s^{h}_{2}\).
Notice that for the pair \(s^{v}_{1}=(u_{v},d_{v},\ell)\) and \(s^{v}_{2}=(u_{v},d_{v},r)\) to be compatible with \(s^{h}_{1}=(u,\ell_{h},r_{h}),s^{h}_{2}=(d,\ell_{h},r_{h})\), the inequalities \(u_{v}\leq u\) and \(d_{v}\geq d\) must hold. By the construction of \(s^{v}_{1}\) and \(s^{v}_{2}\) we have \(M[u_{v}..d_{v}][\ell]=M[u_{v}..d_{v}][r]\), and then \(M[u..d][\ell]=M[u..d][r]\). In a similar way, one can prove \(M[u][\ell..r]=M[d][\ell..r]\), showing that \((u,d,\ell,r)\) is a matching frame, as required.

To conclude the correctness of our algorithm, we need to show that some maximum matching frame satisfying Properties 1-3 is \((u,d)\)-optimal for some \((u,d)\). Let \((u_{t},d_{t},\ell_{t},r_{t})\) be a maximum matching frame satisfying Properties 1-3. For \((u_{t},d_{t})\), the algorithm creates the horizontal aligned pair \(s^{h}_{1}=(u_{t},\ell_{h},r_{h}),s^{h}_{2}=(d_{t},\ell_{h},r_{h})\). Since \(M[u_{t}][\ell_{t}..r_{t}]=M[d_{t}][\ell_{t}..r_{t}]\), we have \(\ell_{h}\leq\ell_{t}\) and \(r_{h}\geq r_{t}\). By a similar argument, when constructing the \(\mathsf{SCDS}\), the algorithm creates a vertical aligned pair \(s^{v}_{1}=(u_{v},d_{v},\ell_{t})\), \(s^{v}_{2}=(u_{v},d_{v},r_{t})\) with \(u_{v}\leq u_{t}\) and \(d_{v}\geq d_{t}\). Denote the output of \(\mathsf{MaxCompatible}(s^{h}_{1},s^{h}_{2})\) by \(\big((u^{\prime},d^{\prime},\ell^{\prime}),(u^{\prime},d^{\prime},r^{\prime})\big)\). It is satisfied that \(r^{\prime}-\ell^{\prime}\geq r_{t}-\ell_{t}\) since the pair \((s^{v}_{1},s^{v}_{2})\) is compatible with \((s^{h}_{1},s^{h}_{2})\). Then \((u_{t},d_{t},\ell^{\prime},r^{\prime})\) is a matching frame with perimeter \(2(d_{t}-u_{t}+r^{\prime}-\ell^{\prime})\geq 2(d_{t}-u_{t}+r_{t}-\ell_{t})\). Due to the maximality of the perimeter of \((u_{t},d_{t},\ell_{t},r_{t})\), we have that \((u_{t},d_{t},\ell^{\prime},r^{\prime})\) is a maximum matching frame with the required properties.

Complexity. It can be easily shown that there are \(O(W^{2})\) pairs \((\ell,r)\) satisfying \(r-\ell\leq W\) and \(j\in[\ell..r]\). Similarly, there are \(O(H^{2})\) pairs \((u,d)\) satisfying \(d-u\leq H\) and \(i\in[u..d]\). By Lemma 11, the construction of the \(\mathsf{SCDS}\) takes \(\tilde{O}(W^{2})\) time. The algorithm then applies \(O(H^{2})\) queries to the \(\mathsf{SCDS}\), and the overall complexity is \(\tilde{O}(W^{2}+H^{2})\).

The following observation yields a sparse set of positions such that every frame of height at least \(h\) and width at least \(w\) contains a position in this set. The size of the set is \(O(\frac{nm}{hw})\).

**Observation 15**.: _Given a pair of positive integers \((h,w)\in[n]\times[m]\), every frame \((u,d,\ell,r)\) with \(d-u\geq h\) and \(r-\ell\geq w\) contains a position \((i,j)\) with \(i\bmod h=0\) and \(j\bmod w=0\)._

Indeed, \([u..d]\) contains at least \(h+1\) consecutive integers and hence a multiple of \(h\); similarly, \([\ell..r]\) contains a multiple of \(w\).

Now we can prove Lemma 13:

Proof of Lemma 13.: The algorithm iterates over all pairs \(H,W\in\{x\cdot 2^{i}\mid i\geq 1\}\) such that \(H\leq W\leq 2m\). For a pair \((H,W)\), the algorithm runs the subroutine from Lemma 14 for every position \((i,j)\in[n]\times[m]\) such that \(i\bmod H/2=0\) and \(j\bmod W/2=0\). Finally, the algorithm reports the maximum matching frame among all outputs of this subroutine.

Correctness. Since every instance of the subroutine from Lemma 14 reports a matching frame or a null, the algorithm also reports a matching frame (or a null). Let \(F=(u,d,\ell,r)\) be a maximum matching frame of height at least \(x\).
Let \(W\) (resp. \(H\)) be the smallest number in \(\{x\cdot 2^{i}\mid i\geq 1\}\) which is at least \(r-\ell\) (resp. \(d-u\)). By Observation 15, the algorithm runs the subroutine for the pair \((H,W)\) and some position \((i,j)\) contained in \(F\). This instance of the subroutine reports a matching frame \(F^{\prime}\) with \(\mathsf{per}(F^{\prime})\geq\mathsf{per}(F)\). Therefore, the algorithm returns a matching frame with a perimeter at least as large as the perimeter of \(F\).

Complexity. For a given pair \((H,W)\), applying the subroutine of Lemma 14 on each position \((i,j)\in[n]\times[m]\) such that \(i\bmod\frac{H}{2}=0\) and \(j\bmod\frac{W}{2}=0\) costs \(\tilde{O}\big(\frac{nm}{HW}(W^{2}+H^{2})\big)=\tilde{O}\big(nm\frac{W}{H}\big)\) time. Since \(x\leq H\leq W\leq 2m\), the algorithm runs in \(\tilde{O}(\frac{nm^{2}}{x})\) time. Note that the number of pairs \((H,W)\) such that \(H,W\in\{x\cdot 2^{i}\mid i\geq 1\}\) and \(H\leq W\leq 2m\) is \(O(\log^{2}m)\). Therefore, a multiplicative factor of \(O(\log^{2}m)\) is added to the total time complexity.

### Combining the Short and Tall Algorithms

In this section, we combine the results of Section 5.1 and Section 5.2 to prove Theorem 1.

Proof of Theorem 1.: Applying the algorithm of Lemma 12 and the algorithm of Lemma 13 with the same threshold \(x\) and reporting the maximum perimeter frame between both outputs yields an algorithm with running time \(\tilde{O}(nmx+\frac{nm^{2}}{x})\). By setting \(x=\sqrt{m}\), we obtain an algorithm with time complexity \(\tilde{O}(nm\cdot\sqrt{m})\). Recall that we need to run the algorithm on \(M^{T}\) as well, which yields a running time of \(\tilde{O}(nm\cdot\sqrt{n})\). In total, running the combined algorithm on both \(M\) and \(M^{T}\) takes \(\tilde{O}(nm\cdot\sqrt{\max\{n,m\}})\) time.

We now describe an alternative algorithm using only Lemma 12. Notice that every frame has \(d-u\leq\min\{n,m\}\). Therefore, applying Lemma 12 with \(x=\min\{n,m\}\) yields an algorithm that outputs the maximum matching frame with time complexity \(\tilde{O}(nm\cdot\min\{n,m\})\). Since this expression is symmetric with respect to \(n\) and \(m\), applying the same algorithm on \(M^{T}\) does not affect the asymptotic running time. Choosing the faster of the two above algorithms yields Theorem 1.

## 6 Approximation Version

In the \((1-\varepsilon)\)-approximation version of the problem, the goal is as follows: given a matrix with a maximum matching frame \(F\), find a matching frame \(F^{\prime}\) with \(\mathsf{per}(F^{\prime})\geq(1-\varepsilon)\mathsf{per}(F)\). Our algorithm reduces the problem to multiple instances of a certain decision problem defined below. The reduction is described in Appendix B and the decision problem is solved in Section 6.3.

Decision problem. The input for this problem is a matrix \(M\) and an _inner rectangle_ \((u_{\Box},d_{\Box},\ell_{\Box},r_{\Box})\) in \(M\) (see Figure 6). A frame \((u,d,\ell,r)\) in \(M\) is _surrounding_ if \((u_{\Box},d_{\Box},\ell_{\Box},r_{\Box})\) is fully contained inside \((u,d,\ell,r)\). Formally, \(u<u_{\Box}\leq d_{\Box}<d\) and \(\ell<\ell_{\Box}\leq r_{\Box}<r\). The goal in this version of the problem is to output a surrounding matching frame \((u,d,\ell,r)\) of \(M\), or report that a surrounding matching frame does not exist in \(M\). In Section 6.3, we show that this problem can be solved in near-linear time, by proving the following lemma.
Given an \(n\times m\) matrix \(M\) with an inner rectangle \((u_{\Box},d_{\Box},\ell_{\Box},r_{\Box})\), there is an algorithm that finds, in \(\tilde{O}(nm)\) time, a surrounding matching frame in \(M\) or reports that no such frame exists. Via an application of a 2-dimensional variant of the so-called _standard trick_, we obtain the following reduction. For every \((h,w)\in[0..\log_{a}n]\times[0..\log_{a}m]\) (with \(a=1+\varepsilon\)), there is a set \(\mathcal{M}_{h,w}\) of sub-matrices and an inner rectangle \(R_{\Box}=(u_{\Box},d_{\Box},\ell_{\Box},r_{\Box})\) satisfying the following properties. 1. \(|\mathcal{M}_{h,w}|=O\big{(}\frac{nm}{\varepsilon^{2}a^{h+w}}\big{)}\). 2. For every sub-matrix \(M^{\prime}\in\mathcal{M}_{h,w}\), \(|M^{\prime}|=O(a^{h+w})\). 3. For every frame \((u,d,\ell,r)\) with \(d-u\in[a^{h}..a^{h+1}-1]\) and \(r-\ell\in[a^{w}..a^{w+1}-1]\) there is a sub-matrix \(M^{\prime}\in\mathcal{M}_{h,w}\) such that \((u,d,\ell,r)\) is a surrounding frame in \(M^{\prime}\) with respect to the inner rectangle \(R_{\Box}\). 4. For every surrounding frame \(F\) in any \(M^{\prime}\in\mathcal{M}_{h,w}\), \(\mathsf{per}(F)\geq(1-\varepsilon)\big{(}2(a^{w+1}+a^{h+1})\big{)}\). The inner rectangle \(R_{\Box}\) and the endpoints of the sub-matrices in \(\mathcal{M}_{h,w}\) can be obtained in \(O(|\mathcal{M}_{h,w}|)\) time given \(h\) and \(w\). The proof of Lemma 4.2 appears in Appendix B. With that, we are ready to prove Theorem 4.2. Proof of Theorem 4.2. The algorithm works as follows. For every pair \((h,w)\in[\log_{a}n]\times[\log_{a}m]\), the algorithm obtains \(\mathcal{M}_{h,w}\) and \(R_{\Box}\) and applies Lemma 16 on every \(M^{\prime}\in\mathcal{M}_{h,w}\) with the inner rectangle \(R_{\Box}\). The algorithm returns the maximum matching frame obtained from an application of Lemma 16 across all sub-matrices in \(\mathcal{M}_{h,w}\) over all values of \((h,w)\). Correctness. Let \(F=(u,d,\ell,r)\) be a maximum matching frame in \(M\). There is a pair \((h,w)\in[\log_{a}n]\times[\log_{a}m]\) such that \(d-u\in[a^{h}..a^{h+1}-1]\) and \(r-\ell\in[a^{w}..a^{w+1}-1]\). Due to Property 3 of Lemma 4.2, there is a sub-matrix \(M^{\prime}\in\mathcal{M}_{h,w}\) that contains \(F\) as a surrounding frame. The algorithm of Lemma 16 returns a surrounding frame \(F^{\prime}\) of \(M^{\prime}\), and by Property 4 of Lemma 4.2, \(\mathsf{per}(F^{\prime})\geq(1-\varepsilon)\big{(}2(a^{w+1}+a^{h+1})\big{)}\). Finally, notice that \(\mathsf{per}(F)\leq 2(a^{w+1}+a^{h+1})\), which concludes the correctness of the algorithm. Complexity. Given \(h\) and \(w\), the running time of the algorithm that obtains \(\mathcal{M}_{h,w}\) and the suitable \(R_{\Box}\) is \(O(|\mathcal{M}_{h,w}|)\subseteq O(nm/\varepsilon^{2})\) by Property 1 of Lemma 4.2. Due to Properties 1 and 2 of Lemma 4.2, the sum of the sizes of the matrices in \(\mathcal{M}_{h,w}\) is \(O\left(\frac{nm}{\varepsilon^{2}}\right)\). Hence, applying Lemma 16 on all \(M^{\prime}\in\mathcal{M}_{h,w}\) takes \(\tilde{O}\left(\frac{nm}{\varepsilon^{2}}\right)\) time. Recall that there are \(O(\log_{1+\varepsilon}n\cdot\log_{1+\varepsilon}m)=O(\frac{1}{\varepsilon^{2}}\log n\cdot\log m)\) values of \(h\) and \(w\). Thus, the total running time of the algorithm is \(\tilde{O}(\frac{nm}{\varepsilon^{4}})\).

### Interesting Pairs and Interesting Triplets

In order to prove Lemma 16, we introduce and study the following notion (see Figure 5).
Given a tuple \((S_{1},\ldots,S_{n})\) of strings, we call a pair \((i,j)\) _interesting_ if \(i<j\) and for every \(\ell\in[i+1..j-1]\) one has \(\mathsf{LCP}(S_{i},S_{\ell})<\mathsf{LCP}(S_{i},S_{j})\). Trivially, all pairs of the form \((i,i+1)\) are interesting for any tuple. The next lemma bounds the number of interesting pairs; this bound is tight, as shown in Appendix C. For any tuple \((S_{1},\ldots,S_{n})\) of strings, the number of interesting pairs is \(O(n\log n)\). For a given tuple \((S_{1},\ldots,S_{n})\), fix an integer \(\ell\in[1..\lceil\log n\rceil]\) and consider the set \(\mathcal{I}_{\ell}=\{(i,j)\mid(i,j)\text{ is interesting and }j-i\in[2^{\ell-1}..2^{\ell}-1]\}\). We say that a pair \((i,j)\in\mathcal{I}_{\ell}\) is of the _first type_ if \(i=\max\{i^{\prime}\mid(i^{\prime},j)\in\mathcal{I}_{\ell}\}\) and of the _second type_ otherwise. The following claim is crucial. All pairs of the first type from \(\mathcal{I}_{\ell}\) have different second components; all pairs of the second type from \(\mathcal{I}_{\ell}\) have different first components. Proof. The first statement stems directly from the definition of the first type. Let us prove the second one. Assume by contradiction that \((i,j),(i,j^{\prime})\in\mathcal{I}_{\ell}\) are pairs of the second type, with \(j^{\prime}<j\). As \((i,j)\) is not of the first type, \(\mathcal{I}_{\ell}\) contains a pair \((i^{\prime},j)\) with \(i^{\prime}>i\) (see Figure 2). We prove the following sequence of inequalities, leading to a contradiction: \[\mathsf{LCP}(S_{i},S_{i^{\prime}})\stackrel{{(1)}}{{<}}\mathsf{LCP}(S_{i},S_{j^{\prime}})\stackrel{{(2)}}{{=}}\mathsf{LCP}(S_{j^{\prime}},S_{j})\stackrel{{(3)}}{{=}}\mathsf{LCP}(S_{i^{\prime}},S_{j^{\prime}})\stackrel{{(4)}}{{<}}\mathsf{LCP}(S_{i^{\prime}},S_{j})\stackrel{{(5)}}{{=}}\mathsf{LCP}(S_{i},S_{i^{\prime}}).\] Since \(2^{\ell-1}\leq j-i^{\prime}\), \(2^{\ell-1}\leq j^{\prime}-i\) and \(j-i<2^{\ell}\leq j-i^{\prime}\ +\ j^{\prime}-i\), we have \(i^{\prime}<j^{\prime}\). Since \((i,j^{\prime})\) is an interesting pair and \(i^{\prime}\in[i{+}1..j^{\prime}{-}1]\), we obtain (1) by the definition of an interesting pair. Since \((i,j)\) is an interesting pair, every \(k\in[i{+}1..j{-}1]\) satisfies \(\mathsf{LCP}(S_{i},S_{k})<\mathsf{LCP}(S_{i},S_{j})\). Hence, by Fact 3 we have \(\mathsf{LCP}(S_{i},S_{k})=\mathsf{LCP}(S_{k},S_{j})\). We obtain (2) and (5) by setting \(k=j^{\prime}\) and \(k=i^{\prime}\) respectively. Finally, \((i^{\prime},j)\) is an interesting pair, and \(j^{\prime}\in[i^{\prime}+1..j-1]\). So, the definition of an interesting pair gives us (4), and then Fact 3 implies (3). The claim above shows that \(\mathcal{I}_{\ell}\) contains at most \(n\) pairs of the first type and at most \(n\) pairs of the second type. As \(\ell\) takes \(\lceil\log n\rceil\) values, the lemma follows. Figure 2: Interesting pairs. Each bold line depicts the \(\mathsf{LCP}\) of the given string with the string \(S_{i}\). To relate interesting pairs to our decision problem we need one more notion. Let \(M\) be an \(n\times m\) matrix and \(\ell\in[m]\). A triplet \((u,d,\ell)\) is called _interesting_ if the pair \((u,d)\) is interesting for the tuple \((M[1][\ell..m],\ldots,M[n][\ell..m])\).

### Finding All Interesting Triplets

In this section we show how to efficiently find all interesting triplets for a matrix, proving the following lemma. All interesting triplets for an \(n\times m\) matrix \(M\) can be found in \(\tilde{O}(nm)\) time. We assume that the data structures described in Section 3 are constructed. We process each \(\ell\in[m]\) independently, computing all interesting triplets of the form \((u,d,\ell)\).
By Definition 21, such a triplet is interesting if the pair \((u,d)\) is interesting for the tuple \(\mathcal{S}=(S_{1},\ldots,S_{n})\), where \(S_{i}=M[i][\ell..m]\). Below we work with this fixed tuple \(\mathcal{S}\). The algorithm scans \(\mathcal{S}\) string by string; while processing \(S_{i}\), the algorithm finds all the interesting pairs \((i,j)\). For \(i<j\in[n]\), let \(L(i,j)\) be the maximum \(\mathsf{LCP}\) value between \(S_{i}\) and any \(S_{k}\) for \(k\in[i+1..j]\). Let \(I(i,j)=\min\{k\in[i+1..j]\mid\mathsf{LCP}(S_{i},S_{k})=L(i,j)\}\) be the minimum index \(k\) attaining this maximum \(\mathsf{LCP}\) value. Using the function \(I(i,j)\) we characterize the set of interesting pairs that share the first index \(i\). For \(i\in[n]\), let \(j_{1}>j_{2}>\cdots>j_{z}\) be the second coordinates of all interesting pairs of the form \((i,j)\). Then 1. \(j_{1}=I(i,n)\); 2. \(j_{k}=I(i,j_{k-1}-1)\) for every \(k\in[2..z]\). Proof. (1) We need to prove that \((i,I(i,n))\) is interesting and that there is no interesting pair \((i,j^{\prime})\) with \(j^{\prime}>I(i,n)\). By the definitions of \(L(i,n)\) and \(I(i,n)\), for every \(j^{\prime}<I(i,n)\) we have \(\mathsf{LCP}(S_{i},S_{j^{\prime}})<L(i,n)=\mathsf{LCP}(S_{i},S_{I(i,n)})\), so \((i,I(i,n))\) is interesting. Now consider a pair \((i,j^{\prime})\) with \(j^{\prime}>I(i,n)\). The same definitions imply \(\mathsf{LCP}(S_{i},S_{j^{\prime}})\leq L(i,n)=\mathsf{LCP}(S_{i},S_{I(i,n)})\), so the pair \((i,j^{\prime})\) is not interesting, and we have \(j_{1}=I(i,n)\) as required. (2) Let \(k\in[2..z]\). Similarly to the previous case, we argue that the pair \((i,I(i,j_{k-1}-1))\) is interesting and no pair \((i,j^{\prime})\) such that \(I(i,j_{k-1}-1)<j^{\prime}<j_{k-1}\) is interesting. Hence \(I(i,j_{k-1}-1)\) follows \(j_{k-1}\) in the list of second coordinates of interesting pairs of the form \((i,j)\), i.e., \(j_{k}=I(i,j_{k-1}-1)\). Having established the importance of \(I(i,j)\) and \(L(i,j)\), we proceed to show how to compute them efficiently. Given \(i\) and \(j\), \(L(i,j)\) can be computed in \(\tilde{O}(1)\) time. Proof. Consider the tuple \((S_{i},\ldots,S_{j})\). Note that if this tuple is sorted lexicographically, then the maximum \(\mathsf{LCP}\) value with \(S_{i}\) is reached by one of its neighbors \(S_{j_{\mathsf{left}}}\) and \(S_{j_{\mathsf{right}}}\) in the sorted tuple. Let \(S_{j_{\mathsf{left}}}\) and \(S_{j_{\mathsf{right}}}\) be the left (smaller) and right (larger) neighbors of \(S_{i}\) respectively in the sorted tuple; one of the neighbors may be absent. Then, \(L(i,j)=\max\{\mathsf{LCP}(S_{i},S_{j_{\mathsf{left}}}),\mathsf{LCP}(S_{i},S_{j_{\mathsf{right}}})\}\). The algorithm retrieves \(j_{\mathsf{left}}\) and \(j_{\mathsf{right}}\) using range queries on \(D^{\ell}_{\mathsf{rows}}\) as detailed below. Recall that \(I^{x,\ell}_{\mathsf{rows}}\) denotes the index of \(S_{x}\) in \(\mathsf{LSA}^{\ell}_{\mathsf{rows}}\). Note that \(I^{j_{\mathsf{right}},\ell}_{\mathsf{rows}}\) is the minimal value satisfying \(I^{x,\ell}_{\mathsf{rows}}>I^{i,\ell}_{\mathsf{rows}}\) with \(x\in[i+1..j]\). Hence, in order to get \(j_{\mathsf{right}}\) one can query \(D^{\ell}_{\mathsf{rows}}\) for a point \((x,I^{x,\ell}_{\mathsf{rows}})\) in the range \([i+1..j]\times[I^{i,\ell}_{\mathsf{rows}}+1..\infty]\) that minimizes \(I^{x,\ell}_{\mathsf{rows}}\); the first coordinate of this point is \(j_{\mathsf{right}}\).
Symmetrically, in order to get \(j_{\mathsf{left}}\) one can query \(D^{\ell}_{\mathsf{rows}}\) for a point \((x,I^{x,\ell}_{\mathsf{rows}})\) in the range \([i+1..j]\times[1..I^{i,\ell}_{\mathsf{rows}}-1]\) that maximizes \(I^{x,\ell}_{\mathsf{rows}}\); the first coordinate of this point is \(j_{\mathsf{left}}\). Given that \(j_{\mathsf{right}}\) and \(j_{\mathsf{left}}\) were retrieved, one is able to query the \(\mathsf{LCP}\) data structure for \(\mathsf{LCP}(S_{i},S_{j_{\mathsf{right}}})\) and \(\mathsf{LCP}(S_{i},S_{j_{\mathsf{left}}})\), and output the maximum as \(L(i,j)\). Using two range queries and two \(\mathsf{LCP}\) queries for this process, the running time is \(\tilde{O}(1)\) and the lemma follows. Given \(i\) and \(j\), \(I(i,j)\) can be computed in \(\tilde{O}(1)\) time. Proof. The algorithm starts by applying Lemma 24 to obtain \(L(i,j)\). Let \(P=S_{i}[1..L(i,j)]\) be the prefix of length \(L(i,j)\) of \(S_{i}\). Note that by definition, \(I(i,j)\) is the minimal index \(k\in[i+1..j]\) such that \(S_{k}[1..L(i,j)]=P\). Using Fact 7, the algorithm finds a pair of indices \(i_{P},j_{P}\) such that \(S_{z}[1..L(i,j)]=P\) if and only if \(I_{\mathsf{rows}}^{z,\ell}\in[i_{P}..j_{P}]\). After finding \(i_{P}\) and \(j_{P}\), the algorithm formulates \(I(i,j)\) as a range query in \(D_{\mathsf{rows}}^{\ell}\): it is the minimal value \(k\in[i+1..j]\) such that \(I_{\mathsf{rows}}^{k,\ell}\in[i_{P}..j_{P}]\). The algorithm obtains \(I(i,j)\) by querying \(D_{\mathsf{rows}}^{\ell}\) for the point \((k,I_{\mathsf{rows}}^{k,\ell})\) in the range \([i+1..j]\times[i_{P}..j_{P}]\) with the minimal value of \(k\) (i.e., the minimal first coordinate). The algorithm outputs \(k\) as \(I(i,j)\). The lemma follows since obtaining \(L(i,j),i_{P},j_{P}\) takes \(\tilde{O}(1)\) time (see Lemma 24 and Fact 7), and a single range query to \(D_{\mathsf{rows}}^{\ell}\) also takes \(\tilde{O}(1)\) time. We are now ready to present the algorithm proving Lemma 22. Proof of Lemma 22. Let \(\ell\) be fixed and let \(\mathcal{S}=(S_{1},\ldots,S_{n})\) be defined as above. When processing \(S_{i}\), the algorithm starts by finding \(j_{1}=I(i,n)\) using Lemma 25 and reports \((i,j_{1})\) as an interesting pair (see Lemma 23). Then, the algorithm proceeds iteratively. As long as \(j_{k}\neq i+1\), the algorithm finds \(j_{k+1}=I(i,j_{k}-1)\) using Lemma 25 and reports \((i,j_{k+1})\) as an interesting pair. Note that the algorithm is guaranteed to terminate, as \((i,i+1)\) is an interesting pair for every \(i\). Note that we find \(j_{1}\) in \(\tilde{O}(1)\) time, and then proceed to find \(j_{k+1}\) from \(j_{k}\) in \(\tilde{O}(1)\) time. In general, we spend \(\tilde{O}(1)\) time per interesting pair. By Lemma 19, the number of such pairs is \(\tilde{O}(n)\). Multiplying this by the \(m\) choices for \(\ell\), we obtain the time bound \(\tilde{O}(nm)\) as required.

### Algorithm for the Decision Variant

In this section we prove Lemma 16, presenting the required algorithm. The algorithm starts by modifying \(M\) as follows. For every \((i,j)\in[u_{\Box}..d_{\Box}]\times[\ell_{\Box}..r_{\Box}]\), we set \(M[i][j]\leftarrow\$_{i,j}\), with \(\$_{i,j}\) being a unique symbol not in \(\Sigma\). Since none of the changed symbols belongs to a side of a surrounding frame, this modification does not affect surrounding matching frames. From now on we assume that all positions of the inner rectangle contain unique symbols. We make the following claim.
If a matrix \(M\) with an inner rectangle \((u_{\Box},d_{\Box},\ell_{\Box},r_{\Box})\) contains a surrounding matching frame \((u,d,\ell,r)\), then it contains a surrounding matching frame \((u^{\prime},d^{\prime},\ell,r)\) such that \((u^{\prime},d^{\prime},\ell)\) is an interesting triplet. Proof. Let \((u,d,\ell,r)\) be a surrounding matching frame in \(M\). We denote \(S_{h}=M[u][\ell..r]=M[d][\ell..r]\). Let \(u^{\prime}\) be the maximal index \(u^{\prime}\in[u..u_{\Box}-1]\) such that \(M[u^{\prime}][\ell..r]=S_{h}\) and let \(d^{\prime}\) be the minimal index \(d^{\prime}\in[d_{\Box}+1..d]\) such that \(M[d^{\prime}][\ell..r]=S_{h}\). Then \((u^{\prime},d^{\prime},\ell,r)\) is a surrounding frame by definition. Since \(M[u^{\prime}][\ell..r]=M[d^{\prime}][\ell..r]=S_{h}\) by construction, and \(M[u..d][\ell]=M[u..d][r]\) implies \(M[u^{\prime}..d^{\prime}][\ell]=M[u^{\prime}..d^{\prime}][r]\), the frame \((u^{\prime},d^{\prime},\ell,r)\) is matching. Finally, consider an arbitrary \(d^{\prime\prime}\in[u^{\prime}+1..d^{\prime}-1]\); we show that \(M[d^{\prime\prime}][\ell..r]\neq S_{h}\). If \(d^{\prime\prime}<u_{\Box}\) or \(d^{\prime\prime}>d_{\Box}\), this condition holds by the choice of \(u^{\prime}\) and \(d^{\prime}\) respectively. Otherwise the condition is guaranteed by the uniqueness of the symbols of the inner rectangle. Hence \[\mathsf{LCP}(M[u^{\prime}][\ell..m],M[d^{\prime\prime}][\ell..m])<|S_{h}|\leq\mathsf{LCP}(M[u^{\prime}][\ell..m],M[d^{\prime}][\ell..m])\] and the triplet \((u^{\prime},d^{\prime},\ell)\) is interesting by definition. The Algorithm. Recall that at the beginning, for every \((i,j)\in[u_{\Box}..d_{\Box}]\times[\ell_{\Box}..r_{\Box}]\), the algorithm sets \(M[i][j]\leftarrow\$_{i,j}\). Then the algorithm applies the preprocessing described in Section 3 and finds all interesting triplets in \(\tilde{O}(nm)\) time by applying Lemma 22. The final ingredient we need in order to report the existence of a surrounding matching frame is a mechanism for verifying, given an interesting triplet \((u,d,\ell)\), whether there is a surrounding matching frame \((u,d,\ell,r)\). For this purpose, we present the following lemma. Given an interesting triplet \((u,d,\ell)\) of \(M\), there is an algorithm that outputs an integer \(r\) such that \((u,d,\ell,r)\) is a surrounding matching frame, or reports that such an \(r\) does not exist. The algorithm runs in \(\tilde{O}(1)\) time. Proof. First, the algorithm eliminates the triplet \((u,d,\ell)\) if \(u\geq u_{\Box}\), if \(d\leq d_{\Box}\), or if \(\ell\geq\ell_{\Box}\). Otherwise, the goal is to find a value \(r\) such that: 1. \(r\geq r_{\Box}+1\), 2. \(M[u][\ell..r]=M[d][\ell..r]\), and 3. \(M[u..d][r]=M[u..d][\ell]\). The algorithm queries the \(\mathsf{LCP}\) data structure for \(L_{u,d}=\mathsf{LCP}(M[u][\ell..m],M[d][\ell..m])\). By the definition of \(\mathsf{LCP}\), we have \(M[u][\ell..r]=M[d][\ell..r]\) if and only if \(r\leq\ell+L_{u,d}-1\). In conclusion, Properties 1 and 2 are satisfied if and only if \(r\in[r_{\Box}+1..\ell+L_{u,d}-1]\). Let \(S_{v}=M[u..d][\ell]\). Due to Fact 7, there is a pair of indices \(i_{v},j_{v}\) such that \(M[u..d][r]=S_{v}\) if and only if \(r\in\mathsf{LSA}^{u}_{\mathtt{columns}}[i_{v}..j_{v}]\). The algorithm finds \(i_{v}\) and \(j_{v}\) using Fact 7.
The algorithm checks for the existence of a value \(r\) satisfying Properties 1-3 by querying \(D^{u}_{\mathtt{columns}}\) for the existence of a point within the range \([r_{\Box}+1..\ell+L_{u,d}-1]\times[i_{v}..j_{v}]\). If the query outputs a point \((r,I^{u,r}_{\mathtt{columns}})\), the algorithm reports that \((u,d,\ell,r)\) is a surrounding matching frame. Otherwise, the algorithm reports that there is no value of \(r\) such that \((u,d,\ell,r)\) is a surrounding matching frame. The algorithm performs a constant number of local comparisons to verify that the values of \(u\), \(d\) and \(\ell\) are not disqualified from participating in a surrounding matching frame. Then, the algorithm performs a single \(\mathsf{LCP}\) query, finds \(i_{v}\) and \(j_{v}\), and executes a single query on \(D^{u}_{\mathtt{columns}}\). The overall time complexity is therefore \(\tilde{O}(1)\). We are finally ready to prove Lemma 16. Proof of Lemma 16. After finding all interesting triplets, the algorithm applies Lemma 27 to every interesting triplet \((u,d,\ell)\) to check if there is a surrounding matching frame \((u,d,\ell,r)\). If some application of Lemma 27 outputs a surrounding matching frame \((u,d,\ell,r)\), the algorithm outputs this frame. If no surrounding matching frame has been found, it follows from Lemma 26 that no surrounding matching frame exists, and the algorithm reports so accordingly. The time complexity for the preprocessing and for finding all the interesting triplets is \(\tilde{O}(nm)\) by Section 3 and Lemma 22, respectively. Verifying each of the \(\tilde{O}(nm)\) interesting triplets is done in \(\tilde{O}(1)\) time by Lemma 27. The overall time complexity is \(\tilde{O}(nm)\), as required.
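As a complement to the near-linear algorithms above, the following brute-force routine makes the definitions of a matching frame and its perimeter concrete. It is a quartic-time reference implementation of ours, useful only as a test oracle on small matrices; none of these function names appear in the text.

```python
# A naive O((nm)^2 (n+m)) reference implementation of the objects studied above,
# intended only as a test oracle for the fast algorithms; all names are ours.

def is_matching_frame(M, u, d, l, r):
    """Check M[u][l..r] == M[d][l..r] and M[u..d][l] == M[u..d][r] (0-indexed)."""
    if not (u < d and l < r):
        return False
    top = M[u][l:r + 1]
    bottom = M[d][l:r + 1]
    left = [M[i][l] for i in range(u, d + 1)]
    right = [M[i][r] for i in range(u, d + 1)]
    return top == bottom and left == right

def max_matching_frame_bruteforce(M):
    """Return a matching frame (u, d, l, r) of maximum perimeter, or None."""
    n, m = len(M), len(M[0])
    best, best_per = None, -1
    for u in range(n):
        for d in range(u + 1, n):
            for l in range(m):
                for r in range(l + 1, m):
                    if is_matching_frame(M, u, d, l, r):
                        per = 2 * ((d - u) + (r - l))
                        if per > best_per:
                            best, best_per = (u, d, l, r), per
    return best
```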
2302.02461
Dramatic enhancement of visible-light absorption in TiO2 by adding Bi
TiO2 is a wide band-gap semiconductor that has been intensively investigated for photocatalysis and water-splitting. However, weak light absorption in the visible region of the spectrum poses stringent limitations on its practical application. Doping of TiO2 with N or transition-metal impurities has been explored to shift the onset of optical absorption to the visible region, yet with limited success. Based on hybrid density functional calculations, we propose adding Bi to TiO2, in the form of dilute Ti_(1-x)Bi_(x)O2 alloys, to efficiently shift the optical absorption to the visible region. Compared to N, Bi introduces an intermediate valence band that is significantly higher in the band gap, and leaves the conduction band almost unchanged, leading to a remarkable redshift in the absorption coefficient to cover almost all the visible-light spectrum. Comparing formation enthalpies, our results show that adding Bi costs significantly less energy than N in oxidizing conditions, and that Ti_(1-x)Bi_(x)O2 might make a much more efficient photocatalyst than TiO_(2-y)N_(y) for water splitting.
Fernando P. Sabino, Anderson Janotti
2023-02-05T19:10:41Z
http://arxiv.org/abs/2302.02461v1
# Dramatic enhancement of visible-light absorption in TiO\({}_{\text{2}}\) by adding Bi ###### Abstract TiO\({}_{\text{2}}\) is a wide band-gap semiconductor that has been intensively investigated for photocatalysis and water-splitting. However, weak light absorption in the visible region of the spectrum poses stringent limitations on its practical application. Doping of TiO\({}_{\text{2}}\) with N or transition-metal impurities has been explored to shift the onset of optical absorption to the visible region, yet with limited success. Based on hybrid density functional calculations, we propose adding Bi to TiO\({}_{\text{2}}\), in the form of dilute Ti\({}_{\text{1-x}}\)Bi\({}_{\text{x}}\)O\({}_{\text{2}}\) alloys, to efficiently shift the optical absorption to the visible region. Compared to N, Bi introduces an intermediate valence band that is significantly higher in the band gap, and leaves the conduction band almost unchanged, leading to a remarkable redshift in the absorption coefficient to cover almost all the visible-light spectrum. Comparing formation enthalpies, our results show that adding Bi costs significantly less energy than N in oxidizing conditions, and that Ti\({}_{\text{1-x}}\)Bi\({}_{\text{x}}\)O\({}_{\text{2}}\) might make a much more efficient photocatalyst than TiO\({}_{\text{2-y}}\)N\({}_{\text{y}}\) for water splitting. ## I Introduction Solar energy is an abundant, clean, and renewable energy resource, and our ability to efficiently harness and use it is key for an energy-sustainable future [1]. Artificial photosynthesis, and photocatalysis in general, is a promising technology to harvest and store solar energy, producing hydrogen or hydrocarbons via water splitting or CO\({}_{\text{2}}\) transformation using a photoelectrochemical device [2; 3]. It remains a significant challenge to fabricate an efficient and stable solar-energy conversion device. Controlling the semiconductor properties in these devices is of primary concern in the development of new materials for solar-energy conversion. Among the semiconductor materials that have been explored as photoelectrodes, TiO\({}_{\text{2}}\) stands out as being composed of earth-abundant and nontoxic elements, and being photochemically stable under acidic and basic conditions [4; 5; 6; 7]. However, the large band gaps of 3.0 eV for rutile and 3.2 eV for anatase severely limit the performance of TiO\({}_{\text{2}}\), since only 5% of the solar spectrum can in principle be utilized [8; 9; 6]. Attempts have been made to extend light absorption in TiO\({}_{\text{2}}\) to the visible region by adding impurities (doping) or forming dilute alloys, while maintaining the photochemical stability and low cost. Adding N has been considered one of the most effective ways to bring light absorption in TiO\({}_{\text{2}}\) to the visible range [10; 11; 12; 13; 14; 15]. The 2\(p\) orbitals of the N substituting for O couple with the O 2\(p\) orbitals and lead to N-related bands above the original valence band, resulting in photoabsorption in the upper part of the visible spectrum. Similar effects have also been proposed to occur for C and S additions [16; 15]. Transition-metal and noble-metal additions, such as Cr, Co, V, Fe, Au, Ag, Cu, Pt and Pd [17; 18; 19; 7; 20], have also been proposed to bring the optical absorption in TiO\({}_{\text{2}}\) into the visible region.
In the cases that have been tried in the laboratory [10; 11; 12; 13; 17; 20], the reported enhancements in the visible-light absorption are not substantial, and are limited to wavelengths shorter than 450 nm (i.e., photon energies higher than 2.75 eV). Such slight improvements were attributed in part to the low solubility and insufficient redshift of the band gap in the case of N [10; 11; 12; 13], or, in the case of transition metals, to the introduction of localized in-gap states close to the conduction band and to incorporation on the surface, blocking the catalytic sites [17; 20]; the visible-light responsiveness of TiO\({}_{\text{2}}\) with noble-metal additions was attributed to noble-metal-related surface plasmons [20] instead of absorption in the TiO\({}_{\text{2}}\). More recently, addition of post-transition metals to TiO\({}_{\text{2}}\), such as Bi, has also been considered, aiming at enhancing visible-light absorption and improving its photocatalytic efficiency [21; 22; 23; 24; 25]. Earlier work on the effects of adding Bi to TiO\({}_{\text{2}}\) reported an increase by a factor of 10 in the hydrogen photo-generation rate and a remarkable photocurrent enhancement, with optimum results obtained for 1 mol% Bi content [21]. However, it remains unclear if these improvements came from Bi at the surface or Bi incorporated in the bulk. Subsequent experiments on Bi-added TiO\({}_{\text{2}}\) thin films and nanoparticles also reported improvements in photocatalytic efficiency, yet the proposed mechanisms either involved the assumption of a Bi-related band near the TiO\({}_{\text{2}}\) conduction band [22; 23] or Bi-metal/Bi\({}_{\text{2}}\)O\({}_{\text{3}}\) formation at the surface [25]. The microscopic mechanism, local structure, and effects of Bi on the electronic structure of TiO\({}_{\text{2}}\) are yet to be resolved. Inspired by these earlier promising results, we performed density functional theory (DFT) and hybrid functional calculations to study the structural, electronic, and optical properties of dilute Ti\({}_{1\text{-x}}\)Bi\({}_{\text{x}}\)O\({}_{2}\) alloys, in the rutile and anatase phases. For comparison, we also performed calculations for dilute TiO\({}_{\text{2-y}}\)N\({}_{\text{y}}\) alloys, focusing on the formation enthalpy of these two systems as a function of Bi and N concentrations, and on their electronic and optical properties. We find that Bi introduces a partially occupied intermediate valence band, detached from the original O 2\(p\) valence band, lying almost in the middle of the band gap, leaving the conduction band unchanged. This intermediate valence band leads to high absorption coefficients in the visible range, making Ti\({}_{1\text{-x}}\)Bi\({}_{\text{x}}\)O\({}_{2}\) much more robust for visible-light water splitting than TiO\({}_{\text{2-y}}\)N\({}_{\text{y}}\) alloys. ## II Computational approach The calculations are based on density functional theory [26; 27] within the Perdew, Burke, and Ernzerhof exchange and correlation functional revised for solids (PBEsol) [28] and the hybrid functional of Heyd, Scuseria, and Ernzerhof (HSE06) [29; 30], implemented with projector augmented-wave (PAW) potentials [31] in the VASP code [32; 33]. The stress tensor and the atomic forces were relaxed using PBEsol with a cutoff energy of 620 eV for the plane-wave basis set.
We employed a **k**-point mesh of 5\(\times\)5\(\times\)9 for integrations over the Brillouin zone of the 6-atom primitive cell of rutile TiO\({}_{2}\), and maintained the same **k**-mesh density for anatase and for the supercells containing Bi or N. Since PBEsol severely underestimates band gaps, we employ the HSE06 hybrid functional to describe the electronic properties of rutile and anatase TiO\({}_{2}\), Ti\({}_{1\text{-x}}\)Bi\({}_{\text{x}}\)O\({}_{2}\) and TiO\({}_{\text{2-y}}\)N\({}_{\text{y}}\). In this approximation, the exchange potential is divided into short- and long-range parts by a screening parameter \(\omega=0.206\text{\AA}^{-1}\) [29; 30]. In the short-range part, non-local Hartree-Fock exchange is mixed with semi-local PBE exchange [34] in a ratio of 25%/75%; the long-range part is described by the PBE functional. For the electronic-structure calculations with the HSE06 hybrid functional, the cutoff energy was reduced to 470 eV. The density of states (DOS) for the primitive cells and for the supercells representing the alloys was calculated using \(\Gamma\)-centered **k**-point meshes that are equivalent to an 11\(\times\)11\(\times\)17 mesh for the 6-atom primitive cell of rutile TiO\({}_{2}\). The dilute alloys in the rutile and anatase phases, \(r\)-Ti\({}_{1\text{-x}}\)Bi\({}_{\text{x}}\)O\({}_{2}\), \(a\)-Ti\({}_{1\text{-x}}\)Bi\({}_{\text{x}}\)O\({}_{2}\), \(r\)-TiO\({}_{\text{2-y}}\)N\({}_{\text{y}}\), and \(a\)-TiO\({}_{\text{2-y}}\)N\({}_{\text{y}}\), were simulated using special quasi-random structures [35; 36] to represent random arrangements of Bi or N. For the rutile phase we used a supercell of 192 atoms in which we replaced 1, 2, and 3 Ti with Bi atoms to simulate concentrations of 1.6%, 3.1%, and 4.7%; equivalently, we replaced 2, 4, and 6 O with N atoms. For the anatase phase, we used a supercell of 108 atoms in which we replaced 1 and 2 Ti with Bi atoms to simulate concentrations of 2.8% and 5.6%; equivalently, we replaced 2 and 4 O with N atoms. Since Bi and N are aliovalent species in TiO\({}_{2}\), the calculations of the density of states and optical properties of Ti\({}_{1\text{-x}}\)Bi\({}_{\text{x}}\)O\({}_{2}\) and TiO\({}_{\text{2-y}}\)N\({}_{\text{y}}\) alloys were performed for charge-compensated closed-shell systems, i.e., with the intermediate valence bands completely filled, assuming that, in practice, donor defects such as oxygen vacancies will be present as compensation centers. This avoids the difficulties of having to calculate dielectric matrices for metallic systems. The optical properties were computed with the tetrahedron smearing method; phonon-assisted transitions and exciton effects were neglected, as these effects do not affect our conclusions. Finally, due to the high computational cost of calculating the dielectric matrix with the required high-density **k**-point mesh for the alloy supercells using HSE06, we performed these calculations with PBEsol and applied a scissors operator, shifting the optical absorption coefficient by the difference between the band gaps obtained with HSE06 and PBEsol. This approach is not expected to affect the intensity of the absorption coefficient near the band gap or elsewhere.
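As a small illustration of the scissors correction just described, the following sketch rigidly shifts a model PBEsol absorption spectrum by the HSE06-PBEsol band-gap difference. The toy spectrum, the grid, and all variable names are ours, not data or code from this work.

```python
# Minimal sketch of the scissors correction described above: the PBEsol
# absorption spectrum alpha(E) is rigidly shifted by the band-gap difference
# delta = Eg(HSE06) - Eg(PBEsol). The arrays and the toy onset are placeholders.
import numpy as np

def apply_scissors(energy_eV, alpha_pbesol, eg_hse, eg_pbesol):
    """Return alpha evaluated on the same grid after a rigid shift by delta."""
    delta = eg_hse - eg_pbesol
    # alpha_corrected(E) = alpha_pbesol(E - delta); below the data range -> 0
    return np.interp(energy_eV - delta, energy_eV, alpha_pbesol,
                     left=0.0, right=alpha_pbesol[-1])

# Example with a fictitious square-root-like absorption onset:
E = np.linspace(0.0, 6.0, 601)
alpha = np.where(E > 1.8, np.sqrt(np.clip(E - 1.8, 0.0, None)), 0.0)
alpha_hse_like = apply_scissors(E, alpha, eg_hse=3.10, eg_pbesol=1.80)
```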
## III Results and discussion

### Structural properties of Ti\({}_{1\text{-x}}\)Bi\({}_{\text{x}}\)O\({}_{2}\) and TiO\({}_{\text{2-y}}\)N\({}_{\text{y}}\) alloys and their formability

The most common phase of bulk TiO\({}_{2}\) is rutile (\(r\)-TiO\({}_{2}\)), whereas anatase (\(a\)-TiO\({}_{2}\)) is mostly found in thin-film and nanostructure forms. \(r\)-TiO\({}_{2}\) belongs to the \(P4_{2}/mnm\) space group and contains 6 atoms in the primitive cell, while \(a\)-TiO\({}_{2}\) belongs to the \(I4_{1}/amd\) space group and contains 12 atoms in the primitive cell. In both phases, each Ti atom is surrounded by six O atoms forming edge-sharing octahedra, which are almost perfect in rutile and highly distorted in anatase. Each O atom is bonded to three Ti atoms in planar configurations. Based on atomic radii [37], Bi (1.6 Å) is expected to incorporate on the Ti (1.4 Å) sites, while N (0.65 Å) is expected to substitute for O (0.60 Å). The calculated lattice parameters for \(r\)-TiO\({}_{2}\) and \(a\)-TiO\({}_{2}\), listed in Table 1, are in good agreement with the experimental data [38; 39]. Adding Bi to TiO\({}_{2}\), with each Bi substituting on a Ti site, leads to a sizable increase in lattice parameters, attributed to the large atomic radius of Bi compared to that of Ti. In contrast, adding N, replacing O, leads to only a slight increase in lattice parameters and in the volume per formula unit. For example, for a N concentration of \(y=5.6\%\) in \(a\)-TiO\({}_{\text{2-y}}\)N\({}_{\text{y}}\), we find a volume expansion of only 0.20%, while for a Bi concentration of \(x=5.6\%\) in \(a\)-Ti\({}_{1\text{-x}}\)Bi\({}_{\text{x}}\)O\({}_{2}\), the volume increases by 8.6%. Neglecting the charge state of the Bi addition, the formation enthalpy of the dilute Ti\({}_{1\text{-x}}\)Bi\({}_{\text{x}}\)O\({}_{2}\) alloys is calculated using: \[\Delta H_{f}(\text{Ti}_{1-x}\text{Bi}_{x}\text{O}_{2}) =E_{tot}(\text{Ti}_{1-x}\text{Bi}_{x}\text{O}_{2})-E_{tot}(\text{TiO}_{2})+n[E_{tot}(\text{Ti})-E_{tot}(\text{Bi})]+n(\mu_{\text{Ti}}-\mu_{\text{Bi}}), \tag{1}\] where \(E_{tot}(\text{Ti}_{1-x}\text{Bi}_{x}\text{O}_{2})\) is the total energy of the supercell representing the alloy, \(E_{tot}(\text{TiO}_{2})\) is the total energy of pristine TiO\({}_{2}\) in the same supercell size, and \(n\) is the number of Ti atoms replaced by Bi. The chemical potentials \(\mu_{\text{Ti}}\) and \(\mu_{\text{Bi}}\) are referenced to the total energies of the Ti and Bi bulk metallic phases (\(E_{tot}(\text{Ti})\) and \(E_{tot}(\text{Bi})\)), with \(\mu_{\text{Ti}}\leq 0\) and \(\mu_{\text{Bi}}\leq 0\). A similar expression is used to calculate the formation enthalpy of the dilute TiO\({}_{2\text{-y}}\)N\({}_{y}\) alloys. The chemical potentials \(\mu_{\text{Ti}}\), \(\mu_{\text{O}}\), \(\mu_{\text{Bi}}\), and \(\mu_{\text{N}}\) are restricted by the stability of TiO\({}_{2}\) and by the formation of the secondary phases Bi\({}_{2}\)O\({}_{3}\) and TiN. For example, in the O-rich limit condition we have: \(\mu_{\text{O}}=0\), \(\mu_{\text{Ti}}=\Delta H_{f}(\text{TiO}_{2})\), \(\mu_{\text{Bi}}=\frac{1}{2}\Delta H_{f}(\text{Bi}_{2}\text{O}_{3})\), and \(\mu_{\text{N}}=0\); in the O-poor limit we have: \(\mu_{\text{O}}=\frac{1}{2}\Delta H_{f}(\text{TiO}_{2})\), \(\mu_{\text{Ti}}=0\), \(\mu_{\text{Bi}}=0\), and \(\mu_{\text{N}}=\Delta H_{f}(\text{TiN})\). Here \(\Delta H_{f}(\text{TiO}_{2})\), \(\Delta H_{f}(\text{Bi}_{2}\text{O}_{3})\), and \(\Delta H_{f}(\text{TiN})\) are the formation enthalpies of TiO\({}_{2}\), Bi\({}_{2}\)O\({}_{3}\), and TiN, respectively.
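The bookkeeping behind Eq. (1) and the chemical-potential limits above can be condensed into a few lines. The sketch below is ours; the energies are placeholders standing in for the DFT total energies and formation enthalpies described in Sec. II.

```python
# Sketch of Eq. (1) and the O-rich/O-poor chemical-potential limits. All
# energies (eV) are placeholders for DFT values; only the bookkeeping of the
# formation-enthalpy expression is illustrated here.

def dH_alloy(E_alloy, E_TiO2, E_Ti_bulk, E_Bi_bulk, n, mu_Ti, mu_Bi):
    """Eq. (1): formation enthalpy of a Ti(1-x)Bi(x)O2 supercell with n Bi atoms."""
    return (E_alloy - E_TiO2) + n * (E_Ti_bulk - E_Bi_bulk) + n * (mu_Ti - mu_Bi)

def chemical_potential_limits(dHf_TiO2, dHf_Bi2O3):
    """mu values (relative to the elemental reference phases) in the two limits."""
    o_rich = {"mu_O": 0.0, "mu_Ti": dHf_TiO2, "mu_Bi": 0.5 * dHf_Bi2O3}
    o_poor = {"mu_O": 0.5 * dHf_TiO2, "mu_Ti": 0.0, "mu_Bi": 0.0}
    return o_rich, o_poor
```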
The results for the alloy formation enthalpy (\(\Delta H_{f}\)) as a function of the concentration of Bi and N in the O-rich and O-poor limit conditions are shown in Fig. 1(a). The chemical potential \(\mu_{\text{O}}\) plays an important role in the incorporation of Bi and N in TiO\({}_{2}\). Since Bi occupies the Ti site, its incorporation is most favorable in O-rich (Ti-poor) conditions, while the incorporation of N is most favorable in O-poor (Ti-rich) conditions since it sits on the O site. As seen in Fig. 1(a), we find that the formation enthalpy of the dilute Ti\({}_{1\text{-x}}\)Bi\({}_{\text{x}}\)O\({}_{2}\) alloys varies over a wider range from the O-rich to the O-poor limit condition compared to the TiO\({}_{2\text{-y}}\)N\({}_{y}\) alloys. A crossing in the formation enthalpy plot as a function of \(\mu_{\text{O}}\) is thus expected. This crossing indicates the \(\mu_{\text{O}}\) value above which the formation enthalpy of the Ti\({}_{1\text{-x}}\)Bi\({}_{\text{x}}\)O\({}_{2}\) alloy is lower than that of the TiO\({}_{2\text{-y}}\)N\({}_{y}\) alloy for the same Bi and N concentration. Since the formation enthalpy varies linearly with concentration in the dilute regime considered here, the \(\mu_{\text{O}}\) value at the crossing does not depend on the Bi and N content, but it is different for the two phases. We find that \(\mu_{\text{O}}=-2.73\) eV for rutile and \(-2.62\) eV for anatase. Considering typical growth conditions of TiO\({}_{2}\) thin films by molecular beam epitaxy [12; 13], for example, with temperature in the range of \(400-700^{\circ}\)C and pressure of \(10^{-3}-10^{-7}\) torr, \(\mu_{\text{O}}\) for O\({}_{2}\) gas falls in the range of \(-1.7\) eV to \(-0.9\) eV, being closer to the O-rich limit, and favoring Bi incorporation. The results in Fig. 1(a) also show that it costs less energy to incorporate Bi in the octahedral chemical environment of rutile than in the distorted octahedral environment of the anatase phase. In contrast, the formation enthalpy for the incorporation of N in \(r\)-TiO\({}_{2}\) and \(a\)-TiO\({}_{2}\) is almost the same for a given value of \(\mu_{\text{O}}\), which we attribute to the similarity between the anion chemical environments in rutile and anatase.
\begin{table} \begin{tabular}{l c c c c} Material & \(x\) or \(y\) & \(a_{0}\) & \(c_{0}\) & Volume \\ & (\%) & (Å) & (Å) & (Å\({}^{3}\)/f.u.) \\ \hline \(r\)-TiO\({}_{2}\) & & 4.584 & 2.937 & 30.86 \\ \(r\)-TiO\({}_{2}\) (exp.) & & 4.594 & 2.959 & 31.22 \\ \(r\)-Ti\({}_{1\text{-x}}\)Bi\({}_{x}\)O\({}_{2}\) & 1.6 & 4.593 & 2.945 & 31.06 \\ \(r\)-Ti\({}_{1\text{-x}}\)Bi\({}_{x}\)O\({}_{2}\) & 3.1 & 4.604 & 2.952 & 31.28 \\ \(r\)-Ti\({}_{1\text{-x}}\)Bi\({}_{x}\)O\({}_{2}\) & 4.7 & 4.617 & 2.960 & 31.55 \\ \(r\)-TiO\({}_{2\text{-y}}\)N\({}_{y}\) & 1.6 & 4.584 & 2.939 & 30.87 \\ \(r\)-TiO\({}_{2\text{-y}}\)N\({}_{y}\) & 3.1 & 4.585 & 2.939 & 30.90 \\ \(r\)-TiO\({}_{2\text{-y}}\)N\({}_{y}\) & 4.7 & 4.587 & 2.940 & 30.93 \\ \hline \(a\)-TiO\({}_{2}\) & & 3.765 & 9.538 & 33.81 \\ \(a\)-TiO\({}_{2}\) (exp.) & & 3.784 & 9.515 & 34.06 \\ \(a\)-Ti\({}_{1\text{-x}}\)Bi\({}_{x}\)O\({}_{2}\) & 2.8 & 3.782 & 9.579 & 34.25 \\ \(a\)-Ti\({}_{1\text{-x}}\)Bi\({}_{x}\)O\({}_{2}\) & 5.6 & 3.800 & 9.624 & 36.73 \\ \(a\)-TiO\({}_{2\text{-y}}\)N\({}_{y}\) & 2.8 & 3.767 & 9.543 & 33.86 \\ \(a\)-TiO\({}_{2\text{-y}}\)N\({}_{y}\) & 5.6 & 3.768 & 9.545 & 33.88 \\ \end{tabular} \end{table} Table 1: Lattice parameters \(a_{0}\) and \(c_{0}\) and volume of \(r\)-TiO\({}_{2}\) (rutile), \(a\)-TiO\({}_{2}\) (anatase), and the Ti\({}_{1\text{-x}}\)Bi\({}_{x}\)O\({}_{2}\) and TiO\({}_{2\text{-y}}\)N\({}_{y}\) alloys for different Bi and N concentrations (\(x\) or \(y\)). The experimental data for \(r\)-TiO\({}_{2}\) and \(a\)-TiO\({}_{2}\) from Refs. [38; 39] are also listed. Figure 1: Formation enthalpy (\(\Delta H_{f}\)) of Ti\({}_{1\text{-x}}\)Bi\({}_{x}\)O\({}_{2}\) and TiO\({}_{2\text{-y}}\)N\({}_{y}\) alloys. (a) Formation enthalpy as a function of Bi and N content for the alloys in the rutile (\(r\)-) and anatase (\(a\)-) phases. The continuous lines refer to the O-rich limit condition, while the dashed lines refer to the O-poor limit condition. (b) Formation enthalpy as a function of the chemical potential \(\mu_{\text{O}}\) for Bi and N concentrations of 3.13% for \(r\)-Ti\({}_{1\text{-x}}\)Bi\({}_{x}\)O\({}_{2}\) and \(r\)-TiO\({}_{2\text{-y}}\)N\({}_{y}\), and 2.8% for the alloys in the anatase phase. The crossing point in \(\mu_{\text{O}}\) indicates the chemical potential value above which Ti\({}_{1\text{-x}}\)Bi\({}_{x}\)O\({}_{2}\) alloys have a lower formation enthalpy than TiO\({}_{2\text{-y}}\)N\({}_{y}\), showing that the former are more favorable to form in more oxidizing conditions. Both Ti\({}_{1\text{-x}}\)Bi\({}_{\text{x}}\)O\({}_{2}\) and TiO\({}_{2\text{-y}}\)N\({}_{\text{y}}\) alloys have been demonstrated [10; 11; 12; 22; 23; 24; 25], with Bi and N concentrations of up to a few atomic percent. For Bi, a maximum solubility of around 5% was found before the formation of a secondary phase occurs [24]. These dilute concentrations are consistent with the values of the formation enthalpy shown in Fig. 1. Our results show a large variation of the formation enthalpy of Ti\({}_{1\text{-x}}\)Bi\({}_{\text{x}}\)O\({}_{2}\) with \(\mu_{\text{O}}\), and indicate that, to maximize the Bi concentration, O-rich growth or deposition conditions should be employed.

### Electronic structure and optical properties of Ti\({}_{\text{1-x}}\)Bi\({}_{\text{x}}\)O\({}_{2}\) and TiO\({}_{\text{2-y}}\)N\({}_{\text{y}}\) alloys

Regarding the effects of adding Bi on the electronic and optical properties of TiO\({}_{2}\), our results show that Bi leads to a remarkably larger redshift of the band gap and optical absorption than N.
The calculated density of states (DOS) of dilute Ti\({}_{1\text{-x}}\)Bi\({}_{\text{x}}\)O\({}_{2}\) and TiO\({}_{2\text{-y}}\)N\({}_{\text{y}}\) with x = y = 4.7% for rutile and x = y = 2.8% for anatase are shown in Fig. 2. In TiO\({}_{2}\), the band gap of \(\sim\)3 eV separates the occupied valence band, derived mostly from O \(2p\) orbitals, from the unoccupied conduction band, derived mostly from the Ti \(3d\) orbitals. For \(r\)-TiO\({}_{2}\), the calculated band gap is 3.10 eV, and 3.35 eV for \(a\)-TiO\({}_{2}\), compared to the experimental values of 3.02 eV [8] and 3.20 eV, respectively. When Bi is incorporated into TiO\({}_{2}\), forming a dilute Ti\({}_{1\text{-x}}\)Bi\({}_{\text{x}}\)O\({}_{2}\) alloy, a partially occupied intermediate band is created in the band gap, closer to the valence band. Figure 3: Schematic representation of the coupling between the Bi \(6s\) and O \(2p\) bands forming the intermediate valence band in dilute Ti\({}_{1\text{-x}}\)Bi\({}_{\text{x}}\)O\({}_{2}\) alloys, considerably reducing the excitation energy to the conduction band, yet leaving the position of the conduction band unchanged. Figure 2: Electronic density of states (DOS) for dilute Ti\({}_{1\text{-x}}\)Bi\({}_{\text{x}}\)O\({}_{2}\) and TiO\({}_{2\text{-y}}\)N\({}_{\text{y}}\) alloys in the (a) rutile and (b) anatase phases. The zero of the energy axis is set to the top of the intermediate valence band in the Ti\({}_{1\text{-x}}\)Bi\({}_{\text{x}}\)O\({}_{2}\) alloys. The DOS of Ti\({}_{1\text{-x}}\)Bi\({}_{\text{x}}\)O\({}_{2}\) and TiO\({}_{2\text{-y}}\)N\({}_{\text{y}}\) in the same phase were aligned using the low-lying O \(2s\) bands located around \(-19\) eV. The results show that Ti\({}_{1\text{-x}}\)Bi\({}_{\text{x}}\)O\({}_{2}\) alloys show smaller energy differences between the intermediate valence band and the conduction band than TiO\({}_{2\text{-y}}\)N\({}_{\text{y}}\) alloys. In these calculations electrons were added to have fully occupied intermediate valence bands with Bi and N, representing electronically compensated dilute alloys without explicitly adding the compensating centers, such as oxygen vacancies [40]. This intermediate band is derived from the coupling between the O \(2p\) and the low-lying Bi \(6s\) bands; the latter
Our results also show that adding Bi not only enables visible-light absorption over a wider range in the spectrum, but also leads to a significantly increase in the absorption coefficient near the effective band gap (between the intermediate valence band and the conduction band), compared to adding N. The calculated absorption coefficients for Ti1-xBixO2, TiO2-yNy, and TiO2 for comparison, are shown in Fig. 4. Having the system electronically compensated facilitates these calculations, and the results were averaged over the three orthogonal directions, which can be argued to better represent the absorption in polycrystalline films or nanostructures. For bulk TiO2, the optical band gap does not coincide with the fundamental band gap, as observed for several other oxides, including SnO2, In2O3, PbO2 [41; 42; 43; 44]. For \(r\)-TiO2, the dipole transition from valence-band maximum to conduction-band minimum at \(\Gamma\) is forbidden due to symmetry considerations; however, transitions in the vicinity of the \(\Gamma\) point are allowed (yet with amplitude smaller than \(\sim\)10\({}^{-4}\) cm\({}^{-1}\)) and, due to the relatively small dispersion of the valence and conduction bands, the onset in the optical absorption coefficient is slightly shifted, by \(\sim\)0.1 eV, to higher energies with respect to the fundamental band gap, as shown in Fig. 4(a). For \(a\)-TiO2, the dipole matrix element for the minimum-energy transition from valence to conduction band is also forbidden by symmetry, and due to larger dispersion of the valence and conduction bands, the optical gap is shifted to higher energies by \(\sim\)0.4 eV with respect to the fundamental band gap, as shown in Fig. 4(b). These results corroborate the fact that TiO2 by itself is so inefficient for visible-light photoelectrochemical processes. For dilute TiO2-yNy alloys, the N-related intermediate valence band leads to a reduction in the effective band gap, indicating that the alloy absorbs visible light, yet restricted to photons of relatively high energies. As shown in Fig. 4, \(r\)-TiO2-yNy alloys start absorbing visible light at \(\sim\)2.7 eV (459 nm), while \(a\)-TiO2-yNy alloys start at \(\sim\)2.6 eV (477 nm). The position of the onset of optical absorption varies only slightly with N concentration; nevertheless, the amplitude of the absorption coefficients increases with N content as seen in Fig. 4. Experimentally, it is known that adding N to \(a\)-TiO2 leads to visible light absorption starting at around 2.5 eV [10; 11]. This is in good agreement with our calculations considering that the calculated band gap of \(a\)-TiO2 using HSE06 is 0.15 eV higher than the experimental value. In the case of dilute Ti1-xBixO2 alloys, the redshift in the absorption coefficient is significantly larger than in TiO2-yNy. The Bi-related intermediate valence band lies almost in the middle of TiO2 band gap, and the predicted onset of optical absorption occurs at 2.0 eV (620 nm) in \(r\)-Ti1-xBixO2 and 1.7 eV (719 nm) in \(a\)-Ti1-xBixO2. It the later, it covers almost all the visible spectrum. Similar to the N case, the position of the onset of optical absorption does not change with Bi content in the dilute regime considered here, yet high amplitudes in the ab Figure 4: Calculated absorption coefficient (averaged over the three cartesian directions) as a function of photon energy for TiO2, Ti1-xBixO2 and TiO2-yNy with Bi and N concentrations of 1.6 and 4.7% in rutile and 2.8 and 5.6% in anatase crystal structure. 
Our results explain experimental observations of Bi-doped TiO\({}_{2}\) indicating a band gap of 2.05 eV (600 nm) [23], for which the microscopic mechanism had not been addressed. The combination of high optical absorption in the visible for both the rutile and anatase phases of dilute Ti\({}_{1\textrm{-x}}\)Bi\({}_{\textrm{x}}\)O\({}_{2}\) alloys, and the fact that adding Bi does not affect the position of the conduction band, makes these alloys promising candidates for photocatalysis and water splitting. The comparison with TiO\({}_{2\textrm{-y}}\)N\({}_{\textrm{y}}\) clearly shows the superiority of adding Bi instead of N, leading to significantly higher absorption over a wider range in the visible, almost reaching the IR region of the spectrum [Fig. 4(b)].

## Conclusions

In summary, using DFT and hybrid functional calculations we show that adding Bi leads to significantly more efficient visible-light absorption than adding N to TiO\({}_{2}\). Our results for the \(a\)-Ti\({}_{1\textrm{-x}}\)Bi\({}_{\textrm{x}}\)O\({}_{2}\) alloy with 5.6% Bi show an onset of optical absorption at \(\sim\)1.7 eV, which is near the upper limit of the IR spectrum, thus efficiently covering almost the whole visible region. As with N, adding Bi does not affect the position of the conduction band, offering an optimum straddling of the redox potentials for water splitting. The incorporation of Bi is predicted to be most favorable in oxidizing conditions, in contrast to N incorporation. Our results not only explain the available experimental data on Bi-doped TiO\({}_{2}\), but also provide the microscopic mechanisms for the observed enhancement of visible-light absorption, calling for further experimental efforts to study the stability and performance of Bi-added TiO\({}_{2}\) in photocatalysis and water splitting.

## Acknowledgments

This work was supported by the NSF Early Career Award grant no. DMR-1652994, the Extreme Science and Engineering Discovery Environment (XSEDE) supported by National Science Foundation grant number ACI-1053575, and the Information Technologies (IT) resources at the University of Delaware. FPS acknowledges support from FAPESP grant no. 2019/21656-8.
2306.10592
Conditional expectation using compactification operators
The separate tasks of denoising, least squares expectation, and manifold learning can often be posed in a common setting of finding the conditional expectations arising from a product of two random variables. This paper focuses on this more general problem and describes an operator theoretic approach to estimating the conditional expectation. Kernel integral operators are used as a compactification tool, to set up the estimation problem as a linear inverse problem in a reproducing kernel Hilbert space. This equation is shown to have solutions that allow numerical approximation, thus guaranteeing the convergence of data-driven implementations. The overall technique is easy to implement, and its successful application to some real-world problems is also shown.
Suddhasattwa Das
2023-06-18T16:11:40Z
http://arxiv.org/abs/2306.10592v4
# Conditional expectation using compactification operators ###### Abstract The separate tasks of denoising, least squares expectation, and manifold learning can often be posed in a common setting of finding the conditional expectations arising from a product of two random variables. This paper focuses on this more general problem and describes an operator theoretic approach to estimating the conditional expectation. Kernel integral operators are used as a compactification tool, to set up the estimation problem as a linear inverse problem in a reproducing kernel Hilbert space. This equation is shown to have solutions that allow numerical approximation, thus guaranteeing the convergence of data-driven implementations. The overall technique is easy to implement, and its successful application to some real-world problems is also shown. _MSC 2020 classification_: 46E27, 46E22, 62G07, 62G05 _Keywords_: Markov kernel, statistical denoising, compact operators, RKHS ## Introduction. In many experiments, due to the uncertainty in the parameters of the setup, the outcome has to be interpreted as a conditional expectation. This is owing to the fact that the measured outcome, which is ideally the "typical" or "true" outcome, is in fact conditional on the prevailing parameters. This notion of conditional expectation has different interpretations in various contexts, such as mean curves, least-squares fitting, and denoising. We present an operator theoretic approach to the problem of finding conditional expectations, which also provides a robust technique for denoising. The technique uses ideas from both kernel mean embedding and kernel-based operator compactification. It has an easy data-driven adaptation, which we also prove is convergent in the limit of large data. To give our discussion a firmer mathematical footing we make the following assumption: **Assumption 1**.: _There is a compact metric space \(X\) and a topological space \(Y\), each equipped with their Borel \(\sigma\)-algebras \(\Sigma_{X}\) and \(\Sigma_{Y}\) respectively. There is a probability measure \(\mu\) on the product space \((X\times Y,\Sigma_{X\times Y})\)._ We interpret \(X\) as the space being directly observed, and \(Y\) as the space from which a random input prior parameter is drawn. The observation or measurement being performed is via the following function: **Assumption 2**.: _There is an unknown function \(f:X\times Y\to\mathbb{R}\) which lies in the space \(C\left(X;C(Y)\right)\)._ Our focus is on the conditional expectation of such functions: \[\bar{f}:=\mathbb{E}^{\mu}_{;X}(f)\in L^{1}(\mu_{X}),\quad\bar{f}(x):=\int_{y\in Y}f(x,y)\,d\mu(y|x). \tag{1}\] In spite of the simplicity of (1), \(\bar{f}\) has the problem of not being well defined at every point of \(X\). In general, \(\bar{f}\) is only an \(L^{1}\) equivalence class and, without further assumptions, has no guarantee of having a continuous representative. We later introduce some technical but broad assumptions which enable these conditional expectations to be continuous functions. At the moment we look at some examples. Examples. The situation described in Assumptions 1 and 2 arises in many contexts. 1. Additive noise: Consider the situation where \(\bar{f}\) is a random variable (r.v.) on \(X\), \(Y=\mathbb{R}^{p}\) is endowed with a zero-mean distribution \(\mu_{Y}\), and \(f(x,y)=\bar{f}(x)+y\) is a contamination of \(\bar{f}\) with noise \(y\).
Let the product space \(X\times Y\) be endowed with the product measure \(\mu=\mu_{X}\times\mu_{Y}\). Then the task of denoising is about recovering \(\bar{f}\), which is related to \(f\) and \(\mu\) via (1). 2. Pull-backs : Suppose \((\Omega,\tilde{\mu})\) is a probability measure, and \(\mathcal{X}:\Omega\to X\) and \(\mathcal{Y}:\Omega\to Y\) are two random variables. Let \(H_{\mathcal{X}}\) be the subspace of \(L^{2}(\tilde{\mu})\) consisting of functions of the form \(\{\phi\circ\mathcal{X}\,:\,\phi\in L^{2}\left(\mathcal{X}_{*}\tilde{\mu}\right)\}\). Thus these are the square integrable functions which factor through \(\mathcal{X}\). Alternatively, these are the pullbacks of square integrable functions on \(X\), under \(\mathcal{X}\). A space \(H_{\mathcal{X}\times\mathcal{Y}}\) can be defined similarly. Now set \(\mu=(\mathcal{X}\times\mathcal{Y})_{*}\tilde{\mu}\), the push forward of \(\tilde{\mu}\) onto \(X\times Y\). Then for any function \(f\in L^{2}(\mu;\mathbb{R})\), the pullback \(f\circ(\mathcal{X}\times\mathcal{Y})\) lies in \(H_{\mathcal{X}\times\mathcal{Y}}\). Then \[\left(\mathbb{E}_{;X}^{\mu}f\right)\circ\mathcal{X}=\mathbb{E}\left(f\circ(\mathcal{X}\times\mathcal{Y})|\mathcal{X}\right)=\operatorname{proj}_{H_{\mathcal{X}}}\left(f\circ(\mathcal{X}\times\mathcal{Y})\right).\] Thus the conditional expectation from (1) pulls back to an equality of random variables. This situation is the focus of the _input model uncertainty_ -problem in statistics. The two r.v.-s \(\mathcal{X},\mathcal{Y}\) represent subsystems of the larger system \(\Omega\), and \(f\) is a statistic depending on the outcomes of \(\mathcal{X}\) and \(\mathcal{Y}\). Then \(\mathbb{E}\left(f\circ(\mathcal{X}\times\mathcal{Y})|\mathcal{X}\right)\) can be interpreted to be the mean value of the statistic \(f\), as the input parameter \(\mathcal{Y}\) is varied. The second equality in the equation above also implies that the conditional expectation may be derived as a least squares estimate, with a proper choice of norm. 3. Manifold learning : The notions of principal curves and manifolds are used to formulate manifold learning within a statistical context. Principal curves capture the notion of a curve passing through the center of a distribution. While there is no unique definition, the most common practice is to define the "central" curve via saddle points of the mean-squared projection distance. A commonly used definition is that of an expectation minimizing function from a manifold \(M\). In Section 5, we convert this expectation minimization problem into a conditional expectation problem, by assuming an unknown prior distribution from which the data-points are generated. In Section 5 we investigate a few real-world manifestations of these scenarios, using the theoretical and numerical tools that we build. Despite its importance in several applications, robust estimation techniques are yet to be fully explored. We next try to understand the challenges associated with this task. Challenges. There are multiple objectives one needs to be careful about in any such technique : 1. Smoothness : the estimated conditional expectation function should preferably have some degree of regularity. 2. Consistency : the outcome of the estimation technique should converge to the truth with more data. 3. Data-driven : ideally the technique should not assume any prior distribution. 4. Robustness : The problem with trying to approximate the integral in (1) is that there may not be a sufficient number of samples along each leaf. The technique should have some robustness to this undersampling problem.
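To make the additive-noise instance (example 1) and the undersampling issue of objective 4 concrete, here is a minimal Python sketch; the choice of \(\bar{f}\), the noise level, and the bin-averaging baseline are illustrative assumptions, not constructions from the paper.

```python
import numpy as np

# Additive-noise instance of Assumptions 1-2 (example 1):
# X = [0,1], Y = R with a zero-mean Gaussian law, f(x,y) = fbar(x) + y.
rng = np.random.default_rng(0)
N = 2000
fbar = lambda x: np.sin(2 * np.pi * x)     # stands in for the unknown conditional expectation
x = rng.uniform(0.0, 1.0, N)               # samples of mu_X
y = rng.normal(0.0, 0.5, N)                # samples of mu_Y
f = fbar(x) + y                            # the observed values f(x_n, y_n)

# Naive baseline: average the observations falling into each bin of X.
# Each bin plays the role of a "leaf" {x} x Y; with few samples per bin
# the averages are noisy, which is the undersampling problem of objective 4.
bins = np.linspace(0.0, 1.0, 21)
idx = np.digitize(x, bins) - 1
fbar_hat = np.array([f[idx == b].mean() if np.any(idx == b) else np.nan
                     for b in range(len(bins) - 1)])
```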
We now take a brief look at a few important paradigms developed to address this estimation problem. Most of them lack in addressing one or more of the above four objectives. Related work. While Assumptions 1 and 2 describe a basic scenario in many theoretical and real-world situations, they do not provide a recipe for estimating \(\bar{f}\) from (1). Several ideas have been proposed, which we organize into five classes. Nearest-neighbor based techniques (e.g. 1, 2, 3) are data-driven and easily scalable with large data, but lack a framework to guarantee consistency. The idea is to denoise a local piece of the data cloud by taking an average of nearest neighbors, via a technique known as Non-Local-Means (NLM) [2, 3]. In spite of different weighting schemes being proposed, these techniques also suffer from the presence of bias. A second class of techniques uses principal component analysis and its statistical properties (e.g. 4). However, these techniques are framed in a set of assumptions more restrictive than our general case. A third important class of techniques is based on the notion of _principal curves_ (e.g. 5). Here, the target function is set to be a curve passing through the middle of a distribution. One then uses any of the vast number of gradient-descent-based methods to minimize the squared divergence from the mean curve. Principal curve estimation techniques come with all the advantages and disadvantages of gradient-descent learning, and also lack a firm footing in probabilistic assumptions. A fourth class of techniques relies on concepts from traditional Harmonic analysis. The most notable is a technique named _conditional mean embedding_ (6), which sets up the estimation as a linear-algebraic concept. Another idea is to assume a hypothesis space, and a noise prior on the coefficients with respect to a basis or frame (e.g. 7, 8). Our proposed technique falls under a fifth class of techniques - kernel based methods. Several ideas have been proposed for applying kernel-based techniques to conditional expectation estimation. Some examples are the use of kernels for non-isotropic mixing spatial data [9], using local linear kernel estimators [10], or adaptation of ideas from kernel density estimation [11]. We use a combination of compactification and reproducing kernel Hilbert space theory to perform our estimation. Outline. We describe the technical details next in Section 2. We set up the problem in a hypothesis space called a reproducing kernel Hilbert space. The target function is to be determined via a linear inverse problem. To make the problem robust to finite rank approximations, we use compact operators on both sides of the equation. Readers looking for the algorithmic implementation and convergence guarantees are directed to Section 4. We demonstrate some applications in Section 5 and provide some discussions in Section 5.3. Finally, the proof of Theorem 1 is given in Section 3, and the proofs of the supporting lemmas are collected in Section 6. ## 2 The technique. We begin by being more specific about the measure \(\mu\). Let \(\operatorname{Prob}(X)\) and \(\operatorname{Prob}(Y)\) respectively denote the space of probability measures on the measurable spaces \((X,\Sigma_{X})\) and \((Y,\Sigma_{Y})\). Suppose \(\mu_{X}\in\operatorname{Prob}(X)\), and one has a continuous map \(m:\operatorname{supp}(\mu_{X})\to\operatorname{Prob}(Y)\).
This leads to a probability measure \(\mu\) on \((X\times Y,\Sigma_{X\times Y})\), defined as \[\mu(A):=\int_{x\in X}m\left\{y\in Y\,:\,(x,y)\in A\right\}d\mu_{X}(x),\quad\forall A\in\Sigma_{X\times Y}.\] We shall denote the set of all such measures \(\mu\) as \(\mathcal{M}\). We make the formal assumption : **Assumption 3**.: _The probability measure \(\mu\) from Assumption 2 lies in \(\mathcal{M}\)._ For measures \(\mu\) constructed this way, \(\mu_{X}\) coincides with the projection of \(\mu\) onto \(X\). For a general measure \(\mu\in\operatorname{Prob}(X\times Y)\), the conditional measures \(\{\mu_{x}\,:\,x\in X\}\) are guaranteed to exist only up to a set of \(\mu_{X}\) measure zero. Assumption 3 ensures that these conditional measures are not only defined on all points of \(\operatorname{supp}(\mu_{X})\), they also vary continuously. More importantly, we get the following very important regularity result on conditional expectations : **Lemma 2.1**.: _Suppose Assumption 1 holds, let \(f\) be a function in \(C\left(X;C(Y)\right)\), and let \(\mu\) be a probability measure in \(\mathcal{M}\). Then the conditional expectation (1) can be realized as a function in \(C\left(\operatorname{supp}(\mu_{X})\right)\)._ Lemma 2.1 is proved in Section 6.1. We next introduce our main tool : _kernel functions_. Kernel. A kernel on the space \(X\) is a bivariate function \(k:X\times X\to\mathbb{R}\), which is supposed to be a measure of similarity between points on \(X\). Bivariate functions such as distances and inner-products are examples of kernels. Kernel based methods offer a non-parametric approach to learning, and have been used with success in many diverse fields such as spectral analysis [12, 13], discovery of spatial patterns [e.g. 14, 15], the discovery of periodic and chaotic components of various real world systems [e.g. 16, 17], and even abstract operator valued measures [18]. We shall use the widely used _Gaussian_ kernels, defined as \[k_{\text{Gauss},\delta}(x,y):=\exp\left(-\frac{1}{\delta}\operatorname{dist}(x,y)^{2}\right),\quad\forall x,y\in X. \tag{2}\] Gaussian kernels have been shown to have deep connections with the geometry or topology of the underlying space [e.g. 19, 20, 21, 22, 23]. Gaussian kernels have the important property of being _strictly positive definite_, which means that given any distinct points \(x_{1},\ldots,x_{N}\) in \(X\) and numbers \(a_{1},\ldots,a_{N}\) in \(\mathbb{R}\), the sum \(\sum_{i=1}^{N}\sum_{j=1}^{N}a_{i}a_{j}k(x_{i},x_{j})\) is non-negative, and zero iff all the \(a_{i}\)-s are zero. Closely associated to kernels are kernel integral operators (k.i.o.). Given a probability measure \(\nu\) on \(X\), one has an integral operator associated to a continuous kernel \(k\), defined as \[K^{\nu}:L^{2}(\nu)\to C(X),\quad(K^{\nu}\phi)(x):=\int_{X}k(x,y)\phi(y)d\nu(y).\] If the kernel \(k\) is \(C^{r}\), then its image consists of \(C^{r}\) functions. For this reason, k.i.o.-s are also known as smoothing operators. In fact, under mild assumptions, k.i.o.-s embed functions in \(L^{2}(\nu)\) into function spaces of higher regularity, called _RKHS_. Recall that a kernel \(k\) is _symmetric_ if for every \(x,x^{\prime}\in X\), \(k(x,x^{\prime})=k(x^{\prime},x)\). Symmetric kernels allow the use of tools from RKHS theory, which we briefly review next. Convolution of kernels. Suppose \(\beta\) is a fixed measure, and \(k_{1},k_{2}\) are two kernels on \(X\).
Then these two kernels can be combined in a procedure called _convolution_ to get a kernel \[\left(k_{1}\star_{\beta}k_{2}\right)\in C\left(X\times X;\mathbb{R}\right),\quad\left(k_{1}\star_{\beta}k_{2}\right)(x,z):=\int k_{1}(x,y)k_{2}(y,z)d\beta(y).\] Let \(K_{1}^{\beta},K_{2}^{\beta}\) be the integral operators corresponding to these kernels. Then note that \[\left(K_{1}^{\beta}K_{2}^{\beta}\phi\right)(x)=\int\int k_{1}(x,y)k_{2}(y,z)\phi(z)d\beta(z)d\beta(y)=\int\int k_{1}(x,y)k_{2}(y,z)\phi(z)d\beta(y)d\beta(z)=\int\left(k_{1}\star_{\beta}k_{2}\right)(x,z)\phi(z)d\beta(z).\] Thus the operator that we get by composing two kernel integral operators is also a kernel integral operator, whose kernel is the convolution of the kernels of the two operators being composed. RKHS. A reproducing kernel Hilbert space or _RKHS_ is a Hilbert space of continuous functions, in which pointwise evaluations are bounded linear functionals. Any continuous, symmetric, strictly positive definite kernel \(k\) (such as (2)) induces an RKHS which contains linear sums of the form \(\sum_{n=1}^{N}a_{n}k(\cdot,x_{n})\), in which the inner product is given by \[\left\langle\sum_{n=1}^{N}a_{n}k(\cdot,x_{n}),\sum_{m=1}^{M}b_{m}k(\cdot,y_{m})\right\rangle=\sum_{n=1}^{N}\sum_{m=1}^{M}a_{n}^{*}b_{m}k(x_{n},y_{m}).\] The full details of the construction of this space \(\mathcal{H}\) can be found in any standard literature [e.g. 24]. The functions \(k(\cdot,x_{n})\) are called the _sections_ of the kernel \(k\). The kernel sections are members of the RKHS and span the RKHS. One of the defining properties of an RKHS is the _reproducing_ property : \[\langle k(\cdot,x),f\rangle=f(x),\quad\forall x\in X,\,\forall\,f\in\mathcal{H}.\] When an RKHS is used as the hypothesis space in a learning problem, the target function is assumed to be a finite sum \(\sum_{n=1}^{N}a_{n}k(\cdot,x_{n})\) of the kernel sections. Let \(\nu\) be any probability measure on \(X\), and \(K^{\nu}\) be the kernel integral operator associated to \(k\) and \(\nu\). Then it is well known that the image of \(K^{\nu}\) lies in \(\mathcal{H}\). We denote this image as \(\mathcal{H}_{\nu}\). For example, let \(\nu\) be a discrete measure \(\nu=\sum_{n=1}^{N}w_{n}\delta_{x_{n}}\), i.e., an aggregate of Dirac-delta measures supported on discrete points \(x_{n}\) along with weights \(w_{n}>0\) which sum to \(1\). Then \(\mathcal{H}_{\nu}\) is precisely the span of the kernel sections \(\{k(\cdot,x_{n})\,:\,n=1,\ldots,N\}\). The theory of RKHS remains in the background of our work. Our key idea is rather a simple application of the theory of the decomposition of a measure into its conditional measures. We describe this next. Kernel smoothing. Suppose \(\alpha\) is a probability measure on \(X\), absolutely continuous with respect to \(\mu_{X}\). Then : \[\int_{X}\bar{f}(x)d\alpha(x)=\int_{X}\int_{Y}f(x,y)d\mu(y|x)d\alpha(x)=\int_{X}\int_{Y}f(x,y)d\mu(y|x)\frac{d\alpha}{d\mu_{X}}(x)d\mu_{X}(x)=\int_{X}\int_{Y}f(x,y)\frac{d\alpha}{d\mu_{X}}(x)d\mu(y|x)d\mu_{X}(x)=\int_{X\times Y}\biggl{[}f(x,y)\frac{d\alpha}{d\mu_{X}}(x)\biggr{]}d\mu(x,y).\] By this trivial manipulation, an integral of \(\bar{f}\), which we consider to be unknown, against the measure \(\alpha\), which we also assume to be unknown, is converted into an integral over the joint domain \(X\times Y\). We now repeat this idea not for a single such absolutely continuous probability measure \(\alpha\), but for an entire parameterized family.
Such a family of absolutely continuous probability measures is realized by a Markov integral operator, built from a _Markov transition kernel_ as defined below : \[p:X\times X\to\mathbb{R}_{0}^{+},\quad\forall\,x\in X,\,\int p(x,x^{\prime})d\mu_{X}(x^{\prime})=1.\] Note that for every \(x\in X\), the kernel section \(p(x,\cdot)\) is a non-negative function with integral equal to one. Thus it can be interpreted as a probability density. Then we have an associated integral operator \(P^{\mu_{X}}:L^{2}(\mu_{X})\to C(X)\), whose action on any \(\psi\in L^{2}(\mu_{X})\) is given by \[\left(P^{\mu_{X}}\psi\right)(z):=\int_{X}p(z,x)\psi(x)d\mu_{X}(x)=\left\langle p(z,\cdot),\psi\right\rangle,\quad\forall\,z\in X.\] An important kernel for us will be a Markov normalized version of the Gaussian kernel (2). Given a measure \(\beta\) on \(X\) and a bandwidth parameter \(\delta>0\), define \[k_{\text{Gauss},\delta}^{\text{symm},\beta}:X\times X\to\mathbb{R},\quad k_{\text{Gauss},\delta}^{\text{symm},\beta}(x,x^{\prime}):=k_{\text{Gauss},\delta}(x,x^{\prime})/\int_{X}k_{\text{Gauss},\delta}(x,x^{\prime\prime})d\beta(x^{\prime\prime}). \tag{3}\] We denote the integral operator by \(\mathcal{G}_{\delta}^{\beta}\). Then given any Markov kernel \(p\), the composite operator \(P^{\mu_{X}}\mathcal{G}_{\delta}^{\mu_{X}}\) has as its kernel the convolved kernel \[\tilde{p}_{\delta}:=p\star_{\mu_{X}}k_{\text{Gauss},\delta}.\] This kernel \(\tilde{p}_{\delta}\) on \(X\) has the following trivial extension to a transition kernel \[q_{\delta}:X\times(X\times Y)\to\mathbb{R}_{0}^{+},\quad q_{\delta}(x,x^{\prime},y):=\tilde{p}_{\delta}(x,x^{\prime})=\int p(x,x^{\prime\prime})\exp\left(-\frac{1}{\delta}\operatorname{dist}^{2}(x^{\prime\prime},x^{\prime})\right)d\mu_{X}(x^{\prime\prime}).\] While \(q_{\delta}\) is not a kernel in the true sense, it still generates an integral operator-like action \[Q_{\delta}^{\mu}:L^{2}(\mu)\to C(X),\quad\left(Q_{\delta}^{\mu}\phi\right)(x):=\int_{X\times Y}q_{\delta}(x,x^{\prime},y)\phi(x^{\prime},y)d\mu(x^{\prime},y).\] This trivial extension allows us to write \[P^{\mu_{X}}\mathcal{G}_{\delta}^{\mu_{X}}\,\bar{f}=P^{\mu_{X}}\mathcal{G}_{\delta}^{\mu_{X}}\mathbb{E}_{;X}^{\mu}f=Q_{\delta}^{\mu}f. \tag{4}\] The simple identity in (4) underlines our main idea. It is based on the compactness of integral operators. Approximations of measures. From a practical point of view, neither of the measures \(\mu\) or \(\mu_{X}\) is known explicitly. Instead they would be approximated by measures \(\alpha\) and \(\nu\) respectively, which we assume satisfy the following : **Assumption 4**.: _There are two probability measures \(\alpha\) and \(\nu\) supported on \(X\times Y\) and \(X\) respectively, such that \(\nu\) is absolutely continuous with respect to \(\alpha_{X}:=(\operatorname{proj}_{X})_{*}\,\alpha\), the projection of \(\alpha\) to \(X\). Moreover, \(d\nu/d\alpha_{X}\) is a function bounded away from zero._ The measure \(\alpha\) is meant to be an approximator of \(\mu\). Similar to (4), we have \[P^{\alpha_{X}}\mathcal{G}_{\delta}^{\alpha_{X}}\mathbb{E}_{;X}^{\alpha}f=Q_{\delta}^{\alpha}f. \tag{5}\] Note that instead of trying to determine the actual conditional expectation \(\bar{f}\), we approximate a smoothed version of it. This converts the \(L^{1}\) function \(\bar{f}\) into a function which is well-defined pointwise and is continuous.
Thus \(C(X)\) becomes the common space in which both \(\mathcal{G}_{\delta}^{\mu_{X}}\mathbb{E}_{;X}^{\mu}f\) and \(\mathcal{G}_{\delta}^{\alpha_{X}}\mathbb{E}_{;X}^{\alpha}f\) can be compared. This avoids the theoretical challenge of comparing conditional measures, which not only may not converge pointwise, but may not converge in distribution either [e.g. 38, Cor 4.1]. Furthermore, the bandwidth parameter \(\delta\) also controls the \(L^{2}\) distance of \(\mathcal{G}_{\delta}^{\mu_{X}}\mathbb{E}_{;X}^{\mu}f\) from \(\mathbb{E}_{;X}^{\mu}f\). Null hypothesis. A kernel is said to be _cc-universal_ [see 25] if \(\mathcal{H}_{\mu_{X}}\) is dense in \(C(\operatorname{supp}(\nu))\), the space of continuous functions on the support of \(\nu\). Kernels such as \(k_{\text{Gauss}}\) are of the form \(k(x,y)=\psi(x-y)\), with \(\psi\) being a bounded, continuous, integrable function and a Fourier transform of some Borel measure. It has been shown in [25] that such kernels are cc-universal. With this in mind, we assume : **Assumption 5**.: _The smoothed conditional expectation \(\mathcal{G}_{\delta}^{\mu_{X}}\bar{f}\) lies in \(\mathcal{H}_{\mu_{X}}\)._ Assumption 5 might seem artificial, as the choice of the kernel is independent of the unknown function \(f\); thus the function \(\bar{f}\) need not lie in \(\mathcal{H}_{\mu_{X}}\). Our first claim in Theorem 1 does not require Assumption 5; the second uses it to provide a stronger statement. In addition, Assumption 5 is not restrictive due to the density of the RKHS in \(C(X)\). Given any RKHS \(\mathcal{H}\) and an \(f\in C(X)\), the sequence of norms \[a_{n}:=\inf\left\{\left\|h\right\|_{\mathcal{H}}\,:\,h\in\mathcal{H}_{\mu_{X}},\,\left\|f-h\right\|_{C(X)}<\frac{1}{n}\right\},\quad n=1,2,\ldots\] is called the rate of approximation of \(f\). Each of these RKHS approximations can be used as a candidate satisfying Assumption 5. Its oscillatory nature, captured by \(\left\|f\right\|_{\mathcal{H}}\), determines the tuning of the experiment parameters. Thus, an RKHS supported on the observed data-space \(X\) will be our choice of hypothesis space in the estimation of the conditional expectation. An RKHS provides several advantages : it has a Hilbert space structure, pointwise evaluation is a bounded operation, and under mild conditions, RKHSs are dense in the space of continuous functions. More importantly, the conditional expectation operator has been shown to be well approximated in operator norm by Hilbert Schmidt operators [26]. This gives kernel-based techniques a clear edge over other techniques. Finally, an RKHS can in many situations be endowed with a Banach algebra structure [27], thus enriching it further for Harmonic analysis. Our main result below is in terms of an _\(\epsilon\)-regularized_ least squares solution to a linear inverse problem \(Ma=b\). This is the solution \(a=\left(M^{T}M+\epsilon\,\mathrm{Id}\right)^{-1}M^{T}b\). **Theorem 1**.: _Suppose Assumptions 1, 2, 3 and 4 hold. Then if \(\phi_{\alpha,\nu,\epsilon}\in L^{2}(\nu)\) is the \(\epsilon\)-regularized least-squares solution in \(\phi\) to the equation_ \[P^{\alpha_{X}}K^{\nu}\phi=Q_{\delta}^{\alpha}f \tag{6}\] _then_ \[\lim_{\alpha\rightarrow\mu}\lim_{\epsilon\to 0^{+}}\left\|K^{\nu}\phi_{\alpha,\nu,\epsilon}-\mathcal{G}_{\delta}^{\alpha_{X}}\bar{f}\right\|_{L^{2}(\nu)}=0.
\tag{7}\] _Furthermore, if Assumption 5 holds, then_ \[\lim_{\nu\rightarrow\mu_{X}}\lim_{\alpha\rightarrow\mu,\,\epsilon\to 0^{+}}\left\|K^{\nu}\phi_{\alpha,\nu,\epsilon}-\mathcal{G}_{\delta}^{\mu_{X}}\bar{f}\right\|_{\mathcal{H}}=0. \tag{8}\] Theorem 1 is proved in Section 3. Remark. Since an RKHS is continuously embedded in \(C(X)\), (8) implies \[\lim_{\nu\rightarrow\mu_{X}}\lim_{\alpha\rightarrow\mu,\,\epsilon\to 0^{+}}\left\|K^{\nu}\phi_{\alpha,\nu,\epsilon}-\mathcal{G}_{\delta}^{\mu_{X}}\bar{f}\right\|_{C(X)}=0, \tag{9}\] a guarantee of uniform convergence to the \(\delta\)-smoothed version of the expectation operator. Remark. A major difference of Claim (ii) from Claim (i) is that the former involves a joint limit \(\alpha\rightarrow\mu\) and \(\epsilon\to 0^{+}\). This convergence implies that the equation is stable under finite rank approximation, a fact we utilize in Theorem 2 later in Section 4. Commutations. The following diagram in (10) illustrates the operator theoretic commutations used in our scheme. (10) The blue loop expresses the identity in (5). The smoothing operator \(Q_{\delta}^{\alpha}\) is shown as the composite of the conditional expectation operator, and the smoothing operator \(P^{\alpha_{X}}\). The map \(B_{\alpha,\nu,\epsilon}\) shown in red is the linear map that provides the \(\epsilon\)-regularized least squares solution to (6). It is explicitly constructed later in (13), and explored in more detail in Section 3. The commutation shown in brown is the action of the smoothing operator on the result of the linear inverse problem. As a result, \(T_{\alpha,\nu,\epsilon},\bar{T}_{\alpha,\nu,\epsilon}\) are respectively the \(L^{2}(\alpha_{X})\) and continuous versions of the estimation technique. The various colored paths represent not only the mathematical aspect of commutation loops, but also the practical aspects of the technique. In a data-driven application, both \(\alpha,\nu\) are sampling measures built from data. The Markov operator \(\bar{Q}_{\delta}^{\alpha}\) then behaves similarly to a moving average, and helps overcome the scarcity of samples along individual leaves of the partition \(\{\{x\}\times Y\,:\,x\in X\}\). The commutation ensures that although \(\bar{Q}_{\delta}^{\alpha}\) is easily constructible from samples of the function \(f\in C\left(X;C(Y)\right)\), it bears a meaningful relation with the conditional expectation. The green loop represents the difference between in-sample and out-of-sample extensions. Although \(\tilde{K}^{\nu}\) and \(K^{\nu}\) are related by the simple inclusion map \(\iota_{\nu}\), they have different implementations. While \(\tilde{K}^{\nu}\) is expressed as the composition of \(K^{\nu}\) with \(\iota_{\nu}\), it has a more direct and immediate evaluation. We next prove Theorem 1, by taking a closer look at the operators and spaces in the background. ## 3 A closer look at Theorem 1. We prove Theorem 1 in this section, and begin by being more particular with our notation. In the various diagrams below, we use dashed, green arrows to indicate that a new operator is being defined via construction. Given any continuous kernel \(k\) and a finite measure \(\nu\) supported on \(X\), one has the following diagram of spaces and operators : Here \(\iota_{\nu}:C(X)\to L^{2}(\nu)\) is the inclusion of continuous maps into the space of square integrable maps.
The operators \(\bar{K}^{\nu}\) and \(\tilde{K}^{\nu}\) are respectively the pre- and post-compositions of \(K^{\nu}\) with \(\iota_{\nu}\). We use the analogous notation for \(\bar{P}\) and \(\tilde{P}\) too. With this notation in mind, (10) will be rewritten as (11) We have named the composite operator \(\mathcal{G}_{\delta}^{\alpha_{X}}\mathbb{E}_{;X}^{\alpha}\) as \(\mathbb{E}_{;X}^{\alpha,\delta}\). Let \(P^{\alpha_{X}}\) and \(K^{\nu}\) respectively be the kernel integral operators corresponding to the Markov kernel \(p\) and the symmetric s.p.d. kernel \(k\), and the probability measures \(\alpha_{X}\) and \(\nu\). This leads to (12) The operator \(A_{\alpha,\nu}\) so constructed is effectively the left hand side of (6). Its \(\epsilon\)-regularized pseudo-inverse is the operator \(B_{\alpha,\nu,\epsilon}\) given by (13) The operators \(A_{\alpha,\nu}\) and \(B_{\alpha,\nu,\epsilon}\) are the building blocks for constructing the solution to (6), as shown in (10). We next explore a consequence of Assumption 4. The condition of absolute continuity implies the following commuting diagram (14) The map \(j_{\nu\to\alpha}\) is a simple inclusion map, and is built on the fact that any function which is \(L^{2}(\nu)\) integrable is also \(L^{2}(\alpha_{X})\) integrable. The simple commutation in (14) has several important consequences throughout our proof. The first is **Lemma 3.1**.: _The operator \(A_{\alpha,\nu}\) is a compact and injective operator._ Lemma 3.1 is proved in Section 6.2. To get more out of (14), we expand it, and adding these commutations to (11) gives : (15) This allows us to write \[\bar{Q}^{\alpha}_{\delta}=\tilde{P}^{\alpha_{X}}\iota_{\alpha}K^{\nu}\left(\tilde{K}^{\nu}\right)^{-1}\iota_{\nu}\mathbb{E}^{\alpha,\delta}_{;X}=A_{\alpha,\nu}\left(\tilde{K}^{\nu}\right)^{-1}\iota_{\nu}\mathbb{E}^{\alpha,\delta}_{;X},\] where the last equality follows from (12). Applying \(\tilde{K}^{\nu}B_{\alpha,\nu,\epsilon}\) on both sides gives : \[\tilde{K}^{\nu}B_{\alpha,\nu,\epsilon}\bar{Q}^{\alpha}_{\delta}=\tilde{K}^{\nu}B_{\alpha,\nu,\epsilon}A_{\alpha,\nu}\left(\tilde{K}^{\nu}\right)^{-1}\iota_{\nu}\mathbb{E}^{\alpha,\delta}_{;X}. \tag{16}\] **Lemma 3.2**.: _The operator \(B_{\alpha,\nu,\epsilon}\) is compact. Moreover, the following limit holds pointwise :_ \[\lim_{\epsilon\to 0^{+}}B_{\alpha,\nu,\epsilon}A_{\alpha,\nu}=\mathrm{Id}_{L^{2}(\nu)}\,.\] Lemma 3.2 is proved in Section 6.3. Using Lemma 3.2 one can establish the limits of (16) : \[\lim_{\epsilon\to 0^{+}}\tilde{K}^{\nu}B_{\alpha,\nu,\epsilon}\bar{Q}^{\alpha}_{\delta}f=\tilde{K}^{\nu}\left(\tilde{K}^{\nu}\right)^{-1}\iota_{\nu}\mathbb{E}^{\alpha,\delta}_{;X}f=\iota_{\nu}\mathbb{E}^{\alpha,\delta}_{;X}f. \tag{17}\] To study the limit of (17), we use the following lemma. **Lemma 3.3**.: _For every \(f\in C(X;C(Y))\), \(\mathbb{E}^{\alpha,\delta}_{;X}f\) converges uniformly to \(\mathbb{E}^{\mu,\delta}_{;X}f\) as \(\alpha\) converges weakly to \(\mu\)._ Lemma 3.3 is proved in Section 6.4.
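In finite dimensions, the \(\epsilon\)-regularized pseudo-inverse \(B_{\alpha,\nu,\epsilon}\) of (13) reduces to a ridge-type linear solve. The following is a minimal sketch of that construction; the function name and the dense solve are illustrative choices, not constructions from the paper.

```python
import numpy as np

def regularized_lsq(A: np.ndarray, b: np.ndarray, eps: float) -> np.ndarray:
    """eps-regularized least-squares solution of A a = b, i.e. the matrix
    analogue of (13): a = (A^T A + eps * Id)^{-1} A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + eps * np.eye(n), A.T @ b)

# For small eps, regularized_lsq(A, A @ v, eps) is close to v, the
# finite-dimensional counterpart of the pointwise limit in Lemma 3.2.
```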
Lemma 3.3 applied to (17) gives : \[\begin{split}\lim_{\alpha\to\mu}\lim_{\epsilon\to 0^{+}}\tilde{K}^{\nu}B_{\alpha,\nu,\epsilon}\bar{Q}^{\alpha}_{\delta}f&=\lim_{\alpha\to\mu}\iota_{\nu}\mathbb{E}^{\alpha,\delta}_{;X}f,\quad\text{ by (17)}\\ &=\iota_{\nu}\lim_{\alpha\to\mu}\mathbb{E}^{\alpha,\delta}_{;X}f\\ &=\iota_{\nu}\mathbb{E}^{\mu,\delta}_{;X}f,\quad\text{ by Lemma 3.3.}\end{split} \tag{18}\] Equation (18) states that if the least squares solution \[\phi_{\alpha,\nu,\epsilon}:=B_{\alpha,\nu,\epsilon}\bar{Q}^{\alpha}_{\delta}f\] is smoothed using \(K^{\nu}\), then the resulting function converges in \(L^{2}(\nu)\) norm to the smoothed conditional expectation. This proves the first claim of Theorem 1. To proceed with the next part of the theorem, we reuse the notation \(K^{\nu}\) to also denote the map of \(L^{2}(\nu)\) into \(\mathcal{H}_{\nu}\). Thus we have the following commutation of maps Since \(\mathrm{supp}(\nu)\subseteq\mathrm{supp}(\mu_{X})\), the space \(\mathcal{H}_{\nu}\) is a subspace of \(\mathcal{H}_{\mu_{X}}\). Thus there is a projection between these spaces. The following commutation provides an alternate interpretation of this projection. (19) Next we state another important consequence of the hypothesis in Assumption 5. **Lemma 3.4**.: _Suppose Assumptions 1, 2, 3, 4 and 5 hold. Then_ \[\lim_{\alpha\to\mu,\,\epsilon\to 0^{+}}B_{\alpha,\nu,\epsilon}A_{\alpha,\nu}\left(\tilde{K}^{\nu}\right)^{-1}\iota_{\nu}\mathbb{E}^{\mu,\delta}_{;X}f=\left(\tilde{K}^{\nu}\right)^{-1}\iota_{\nu}\mathbb{E}^{\mu,\delta}_{;X}f\quad\text{in }L^{2}(\nu).\] Lemma 3.4 is proved in Section 6.5. This leads to \[K^{\nu}B_{\alpha,\nu,\epsilon}\bar{Q}_{\delta}^{\alpha}f=K^{\nu}B_{\alpha,\nu,\epsilon}A_{\alpha,\nu}\left(\tilde{K}^{\nu}\right)^{-1}\iota_{\nu}\mathbb{E}_{;X}^{\alpha,\delta}f,\quad\text{by (16)},\] which by Lemmas 3.3 and 3.4 satisfies \[K^{\nu}B_{\alpha,\nu,\epsilon}\bar{Q}_{\delta}^{\alpha}f\xrightarrow{\epsilon\to 0^{+},\,\alpha\to\mu}K^{\nu}\left(\tilde{K}^{\nu}\right)^{-1}\iota_{\nu}\mathbb{E}_{;X}^{\mu,\delta}f=\operatorname{proj}_{\mathcal{H}_{\nu}}\mathbb{E}_{;X}^{\mu,\delta}f,\quad\text{by (19)},\] and \[\operatorname{proj}_{\mathcal{H}_{\nu}}\mathbb{E}_{;X}^{\mu,\delta}f\xrightarrow{\nu\to\mu_{X}}\mathbb{E}_{;X}^{\mu,\delta}f.\] Thus \(K^{\nu}\phi_{\alpha,\nu,\epsilon}\) converges to \(\mathcal{G}_{\delta}^{\mu_{X}}\bar{f}\) in RKHS norm. This completes the proof of Theorem 1. In the next section we look at a practical implementation of this scheme, and the accompanying guarantee of convergence. ## 4 Numerical implementation. In a data-driven implementation, the inputs to any numerical recipe are a dataset, along with some algorithmic parameters. We assume that the data originates as follows : **Assumption 6**.: _There is a sequence of points \((x_{n},y_{n})\in X\times Y\) for \(n=1,2,3,\ldots\), equidistributed with respect to the probability measure \(\mu\) from Assumption 1._ The concept of equidistribution is a major relaxation of the i.i.d. assumption. Such an assumption has been utilized with great success in the theoretical understanding of numerical methods for timeseries which have strong correlations, such as those arising from dynamical systems [e.g. 12, 18, 13]. Algorithm 1 below presents our main procedure. **Algorithm 1**.: _RKHS representation of conditional expectation._ * _Input._ _A sequence of pairs_ \(\{(x_{n},y_{n})\,:\,n=1,\ldots,N\}\) _with_ \(x_{n}\in\mathbb{R}^{d}\) _and_ \(y_{n}\in\mathbb{R}\)_._ * _Parameters._ 1. _Choice of RKHS kernel_ \(k:\mathbb{R}^{d}\times\mathbb{R}^{d}\to\mathbb{R}^{+}\)_._ 2. _Smoothing parameter_ \(\delta>0\)_._ 3.
_Sub-sampling parameter_ \(M\in\mathbb{N}\) _with_ \(M<N\)_._ * _Output._ _A vector_ \(\vec{a}=(a_{1},\ldots,a_{M})\in\mathbb{R}^{M}\) _such that_ \[\left(\mathbb{E}_{;X}^{\mu,\delta}f\right)(x)\approx\sum_{m=1}^{M}a_{m}k(x,x_{m}),\quad\forall x\in\mathbb{R}^{d}.\] * _Steps._ 1. _Compute a Gaussian Markov kernel matrix using (3) :_ \[[G_{\delta}]\in\mathbb{R}^{N\times N},\quad[G_{\delta}]_{i,j}=k_{\text{Gauss},\delta}^{\text{symm},\beta}(x_{i},x_{j}). \tag{20}\] 2. _Compute a Markov kernel matrix_ \([P]\in\mathbb{R}^{N\times N}\) _as_ \([P]_{i,j}:=p(x_{i},x_{j})\)_._ 3. _Compute the kernel matrix_ \([K]\in\mathbb{R}^{N\times M}\) _as_ \([K]_{i,j}=k(x_{i},x_{j})\)_._ 4. _Find a vector_ \(\vec{a}\in\mathbb{R}^{M}\) _as the_ \(\epsilon\)_-regularized least-squares solution to the equation_ \[[P]\,[K]\,\vec{a}=[P]\,[G_{\delta}]\,\vec{y},\] _where_ \(\vec{y}:=(y_{1},\ldots,y_{N})\)_._ Algorithm 1 has two components, the choice of an RKHS kernel, and the creation of a Markov kernel which approximates the smoothing operator. We usually choose \(p\) to be the Markov normalized Gaussian kernel from (3), \[p(z,x):=\exp\left(-\operatorname{dist}(z,x)^{2}/\delta\right)/\int_{X}\exp\left(-\operatorname{dist}(z,y)^{2}/\delta\right)d\mu_{X}(y).\] Theorem 2 below provides an interpretation of the output vector \(\vec{a}\) from Algorithm 1, and the nature of the convergence of the results. **Theorem 2**.: _Suppose Assumptions 1, 2, 3, 5 and 6 hold. Let \(\vec{a}\) be the output when Algorithm 1 is applied to the data \((x_{n},y_{n})_{n=1}^{N}\). Then_ \[\lim_{M\to\infty}\lim_{N\to\infty,\,\epsilon\to 0^{+}}\left\|\sum_{m=1}^{M}a_{m}k(\cdot,x_{m})-\mathcal{G}_{\delta}^{\mu_{X}}\,\bar{f}\right\|_{\mathcal{H}}=0. \tag{21}\] Note that Theorem 2 is independent of the choice of the kernel \(k\) in Algorithm 1. Algorithm 1 itself can be carried out on any dataset \((x_{n},y_{n})\), irrespective of whether any of Assumptions 1, 2, 5 and 6 hold. Assumptions 1 and 6 are needed to place the dataset in context, and Assumptions 2 and 5 are required to guarantee convergence. Theorem 2 is proved in Section 6.6, and is a direct consequence of Theorem 1. We next apply Algorithm 1 to a few practical problems. ## 5 Examples. Our choice of kernel in all the experiments is the _diffusion_ kernel (e.g. 28, 29). Among its various constructions, we choose the following : \[k_{\operatorname{diff},\epsilon}^{\mu}(x,y)=\frac{k_{\operatorname{Gauss},\epsilon}(x,y)}{\deg_{\!\mathrm{f}}(x)\deg_{\!\mathrm{r}}(y)}, \tag{22}\] \[\deg_{\!\mathrm{r}}(x):=\int_{X}k_{\operatorname{Gauss},\epsilon}(x,y)d\mu(y),\quad\deg_{\!\mathrm{f}}(x):=\int_{X}k_{\operatorname{Gauss},\epsilon}(x,y)\frac{1}{\deg_{\!\mathrm{r}}(y)}d\mu(y).\] Diffusion kernels have been shown to be good approximants of the local geometry in various situations (e.g. 21, 19, 30, 31), and are a natural choice for non-parametric learning. They have the added advantage of being symmetrizable : \[\rho(x)k_{\operatorname{diff},\epsilon}^{\mu}(x,y)\rho(y)^{-1}=\tilde{k}_{\operatorname{diff},\epsilon}^{\mu}(x,y)=\frac{k_{\operatorname{Gauss},\epsilon}(x,y)}{\left[\deg_{\!\mathrm{r}}(x)\deg_{\!\mathrm{r}}(y)\deg_{\!\mathrm{f}}(x)\deg_{\!\mathrm{f}}(y)\right]^{1/2}}, \tag{23}\] where \[\rho(z)=\deg_{\!\mathrm{f}}(z)^{1/2}/\deg_{\!\mathrm{r}}(z)^{1/2}.\] The kernel \(\tilde{k}_{\operatorname{diff},\epsilon}^{\mu}\) from (23) is clearly symmetric. Since it is built from the s.p.d. kernel \(k_{\operatorname{Gauss},\epsilon}\), \(\tilde{k}_{\operatorname{diff},\epsilon}^{\mu}\) is s.p.d.
too, and thus generates an RKHS of its own. Moreover, the kernel \(k_{\operatorname{diff},\epsilon}^{\mu}\) can be symmetrized by a degree function \(\rho\), which is both bounded and bounded away from \(0\). Such a kernel will be called _RKHS-like_. Let \(M_{\rho}\) be the multiplication operator with \(\rho\). Then \[\operatorname{ran}K^{\mu}_{\operatorname{diff},\epsilon}=\operatorname{ran}M_{\rho}\circ\tilde{K}^{\mu}_{\operatorname{diff},\epsilon}.\] Again, because of the properties of \(\rho\), both \(M_{\rho}\) and its inverse are bounded operators. Thus there is a bijection between the RKHS generated by \(\tilde{k}^{\mu}_{\operatorname{diff},\epsilon}\), and the range of the integral operator \(K^{\mu}_{\operatorname{diff},\epsilon}\). ### Denoising. As explained in Section 1, denoising is a particular instance of Assumptions 1 and 2. We illustrate an application of Algorithm 1 to continuous images in Figure 1. The task of denoising discontinuous images involves many other considerations such as edge detection, and is postponed to a later study. Each of the RGB components of a continuous image may be considered as points on the graph of a continuous map \(\bar{f}:[0,1]^{2}\to\mathbb{R}\). The points correspond to the image under \(\bar{f}\) of points on a rectangular lattice within \([0,1]^{2}\). The noise can be considered as an addition from a Gaussian random variable drawn from \(\mathbb{R}\). We chose for \(\bar{f}\) the function \[\bar{f}:[0,1]^{2}\to\mathbb{R},\quad\bar{f}(x_{1},x_{2}):=\cos\left(2\pi\kappa x_{1}\right)+e^{\sin(2\pi\kappa x_{2})}, \tag{24}\] where \(\kappa\in\mathbb{N}\) is an index for the \(C^{1}\) norm of the image. Theorem 2 states that the convergence or accuracy of the results is dependent on increasing the number of data samples. This presents a problem in image denoising, as the number of data samples is exactly the number of pixels, and is usually fixed and limited. As a result, the outcome of the numerical procedure becomes sensitive to the smoothing parameter \(\delta\) and the \(C^{1}\) norm of the true image. ### Principal curves - electrostatic charge. Given any \(C^{2}\) curve \(\lambda:[0,1]\to\mathbb{R}^{+}\), one can define the function \[f:[0,1]\times\mathbb{R}\to\mathbb{R},\quad(x,y)\mapsto\lambda(x)+y/\rho(x),\quad\rho(x):=\frac{3}{2+C|\lambda^{\prime\prime}(x)|},\quad C:=4/\left\|\lambda^{\prime\prime}\right\|_{\sup}. \tag{25}\] Equation (25) is a simplified model of electrostatic charge distribution on curved surfaces. The function \(\rho\) controls the spread or variance of points around the mean value. By design, \(\rho\) has a range in \([0.5,1.5]\). For our test case, we choose for \(\lambda\) the function \[\lambda(x)=\exp\left(\sin(2\pi x)^{2}\right),\quad\forall x\in[0,1]. \tag{26}\] The two derivatives of \(\lambda\) are : \[\lambda^{\prime}(x)=2\pi\lambda(x)\sin(4\pi x),\qquad\lambda^{\prime\prime}(x)=8\pi^{2}\lambda(x)\cos(4\pi x)+4\pi^{2}\lambda(x)\sin(4\pi x)^{2}=4\pi^{2}\lambda(x)\left[2\cos(4\pi x)+\sin(4\pi x)^{2}\right]=4\pi^{2}\lambda(x)\left[2-\left[\cos(4\pi x)-1\right]^{2}\right].\] Let us assign spaces and measures \[X=[0,1],\,\mu_{X}=\operatorname{Leb}_{[0,1]},\,Y=\mathbb{R},\,\mu_{Y}\sim N(0,1),\,\mu=\mu_{X}\times\mu_{Y}.\] Then Algorithm 1 applied to data points distributed according to the push-forward of \(\mu\) under \(f\) should yield an approximation of \(\bar{f}=\mathbb{E}^{\mu}_{;X}f\), which according to (25) coincides with \(\bar{f}=\lambda\).
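A minimal sketch of how data for this experiment may be generated, following (25)-(26); the sample size, the random seed, and the numerical approximation of \(\left\|\lambda^{\prime\prime}\right\|_{\sup}\) are illustrative assumptions.

```python
import numpy as np

# Data for the principal-curve experiment, following (25)-(26).
rng = np.random.default_rng(1)
N = 5000

lam = lambda x: np.exp(np.sin(2 * np.pi * x) ** 2)                                # (26)
lam2 = lambda x: 4 * np.pi**2 * lam(x) * (2 - (np.cos(4 * np.pi * x) - 1) ** 2)   # lambda''
C = 4.0 / np.abs(lam2(np.linspace(0.0, 1.0, 100001))).max()   # ~ 4 / ||lambda''||_sup
rho = lambda x: 3.0 / (2.0 + C * np.abs(lam2(x)))             # spread, ranging in [0.5, 1.5]

x = rng.uniform(0.0, 1.0, N)      # mu_X = Leb[0,1]
y = rng.normal(0.0, 1.0, N)       # mu_Y = N(0,1)
f = lam(x) + y / rho(x)           # scattered data; E[f | x] = lambda(x)
```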
Also note that \(1/\rho(x)\) is the standard deviation of the conditional measure \(\mu(\cdot|x)\), so the conditional variance is itself a conditional expectation determined by \(\rho\), namely, \(\mathbb{E}^{\mu}_{;X}\left|f-\bar{f}\right|^{2}=1/\rho^{2}\). See Figure 2 for the results of our algorithm applied to data. ### Conclusions. These two relatively simple problems from the fields of image denoising and manifold learning were a test for Algorithm 1, and the convergence guarantees of Theorems 1 and 2. This also leads to several future directions of work, for a finer performance of numerical methods. 1. Tuning parameter \(\delta\) : The main challenge in the scheme based on (4) is to average out the effect of the \(Y\)-variable, within the integrals \[\left(Q^{\mu}f\right)(z)=\int_{X\times Y}p(z,x)f(x,y)d\mu(x,y)\approx\int_{B(z,\delta^{\prime})}p(z,x)\int_{Y}f(x,y)d\mu(y|x)d\mu_{X}(x).\]

Figure 1: Denoising a monochromatic image. Such an image can be expressed as a continuous function of x–y coordinates. The mathematical formulation of the problem is described in Section 5.1. The test-image shown here is described by (24). The parameter \(\kappa\) is an index of the \(C^{1}\) norm of the function. The first row shows that Algorithm 1 performs reasonably well for \(\kappa=2\) on a \(50\times 50\) pixel image, but the performance deteriorates when \(\kappa\) is increased. The third row shows a much improved result for the more detailed image when the size is increased to \(75\times 75\).

Here \(\delta^{\prime}\) is the effective radius of integration, which goes to zero as \(\delta\) goes to zero. The smaller \(\delta\) is, the more total samples are needed so that sufficiently many of them fall within the ball \(B(z,\delta^{\prime})\). Thus a larger smoothing radius \(\delta\) leads to a faster convergence with \(N\). On the other hand, a larger \(\delta\) and \(N\) also increase the condition number of the matrix \([P]\), which could adversely affect the accuracy of the solution to the linear least squares problem (6). There is no recipe for tuning these parameters that would work uniformly well in all applications. 2. Images with limited resolution : Given a fixed resolution for an image, say \(m\times n\) pixels, one would be limited to \(N=mn\) data samples, as each pixel would correspond to a data point \(x_{n}\) drawn uniformly from the unit square. As seen in Figure 1, this could lead to a deterioration in performance as the image function becomes more oscillatory. A remedy is to first increase the resolution of the image using the capability of RKHS for accurate out-of-sample evaluations. This would be the subject of a more thorough and focused study in a subsequent work. 3. Subsampling : While it is desirable to use all available data samples when approximating \(Q\), one has the flexibility of choosing a subset of the data-points for approximating \(K^{\mu}\), as this is an integral operator on the lower dimensional space \(X\). As long as the subsampling measure \(\nu\) converges weakly to \(\mu_{X}\), any subsampling strategy would suffice.

Figure 2: Principal curve estimation. Section 5.2 presents an example of a principal curve problem, from data-points scattered around a "true" or "principal" curve. Equation (25) is a realization of Assumptions 1 and 2, and presents a simplified view of electrostatic charge distribution along a wire. We assume that the function \(\lambda\) takes the form in (26). The left panels above show the results of applying Algorithm 1 to data equidistributed with respect to this distribution, to recover the conditional expectation as a function over \(X=[0,1]\). The results show a close match with the true mean, which is simply the curve \(\lambda\).
The results also visibly improve as the number of samples is increased. The right panel shows a repeated use of Algorithm 1 to reconstruct the variance as a function over \(X=[0,1]\). Again, the results show a strong match with the true function, which is determined by \(\rho\). 4. Choice of kernel : The question of which kernel would be optimal in a learning problem has been a long standing question, with no clear and unique answer [e.g. 32, 33, 34]. While we have explained the reason behind our choice of using diffusion kernels, various other adaptive kernels could be good candidates, such as variable bandwidth kernels [e.g. 35, 36] and dynamics adapted kernels [e.g. 37, 12]. Algorithm 1 does not specify the kernel, and any RKHS-like kernel such as the diffusion kernel would be sufficient. 5. Finally, our results depend on the function \(f\) having a continuous conditional expectation \(\bar{f}\). This condition is violated in images with background and foreground objects, as well as in audio streams involving human speech or ambient sounds. Adapting these situations to fit Assumptions 1-4 is an interesting and promising direction of research. ## 6 Appendix. ### Proof of Lemma 2.1 Fix an \(\epsilon>0\) and an \(x\in\operatorname{supp}(\mu_{X})\). We prove the lemma by finding a neighborhood of \(x\) in \(\operatorname{supp}(\mu_{X})\) such that for every \(x^{\prime}\) drawn from this neighborhood, \(\left|\int_{Y}f(x^{\prime},\cdot)d\mu_{x^{\prime}}-\int_{Y}f(x,\cdot)d\mu_{x}\right|<2\epsilon\). By Assumption 2 there is a neighborhood \(U\) of \(x\) in \(X\) such that \[\left\|f(x,\cdot)-f(x^{\prime},\cdot)\right\|_{C(Y)}<\epsilon,\quad\forall x^{\prime}\in U.\] Next, by Assumption 3, there is a neighborhood \(V\) of \(x\) in \(X\) such that \[\left|\int f(x,\cdot)dm(x)-\int f(x,\cdot)dm(x^{\prime})\right|<\epsilon,\quad\forall x^{\prime}\in V\cap\operatorname{supp}(\mu_{X}).\] Fix an \(x^{\prime}\in U\cap V\cap\operatorname{supp}(\mu_{X})\). Then \[\left|\int_{Y}f(x^{\prime},\cdot)d\mu_{x^{\prime}}-\int_{Y}f(x,\cdot)d\mu_{x}\right|=\left|\int_{Y}f(x^{\prime},\cdot)dm(x^{\prime})-\int_{Y}f(x,\cdot)dm(x)\right|=\left|\int_{Y}\left[f(x^{\prime},\cdot)-f(x,\cdot)\right]dm(x^{\prime})+\int_{Y}f(x,\cdot)d\left[m(x^{\prime})-m(x)\right]\right|\leq\int_{Y}\left|f(x^{\prime},\cdot)-f(x,\cdot)\right|dm(x^{\prime})+\left|\int_{Y}f(x,\cdot)d\left[m(x^{\prime})-m(x)\right]\right|<2\epsilon.\] This completes the proof of Lemma 2.1. ### Proof of Lemma 3.1 Equations (12) and (14) together give : Since the kernel is s.p.d., \(\tilde{K}^{\nu}\) is an invertible operator. Since \(\nu<<\alpha_{X}\), the map \(j_{\nu\to\alpha}\) is injective. By its Markovian property, \(P^{\alpha_{X}}\) is also injective. Finally, the restriction \(\iota_{\alpha}:C(X)\to L^{2}(\alpha_{X})\) is also injective. Thus their composition \(\iota_{\alpha}P^{\alpha_{X}}j_{\nu\to\alpha}\tilde{K}^{\nu}\) is also injective. This equals \(A_{\alpha,\nu}\), which must therefore be injective. According to (12), \(A_{\alpha,\nu}\) is the composition of the integral operator \(\tilde{P}^{\alpha_{X}}\) along with the bounded operators \(\iota_{\alpha}\) and \(K^{\nu}\). This makes \(A_{\alpha,\nu}\) a compact operator. ### Proof of Lemma 3.2.
Since by Lemma 3.1 \(A_{\alpha,\nu}\) is compact, by Schauder's theorem its adjoint \(A_{\alpha,\nu}^{*}\) is compact too. By (13), \(B_{\alpha,\nu,\epsilon}\) is the composition of \(A_{\alpha,\nu}^{*}\) with the inverse of \(A_{\alpha,\nu}^{*}A_{\alpha,\nu}+\epsilon\,\mathrm{Id}\). Now \(A_{\alpha,\nu}^{*}A_{\alpha,\nu}+\epsilon\,\mathrm{Id}\) is a symmetric positive definite, bounded operator, whose spectrum lies in \([\epsilon,\infty)\). This makes its inverse bounded. Thus \(B_{\alpha,\nu,\epsilon}\) is compact too. Next, by Lemma 3.1, \(A_{\alpha,\nu}\) has the SVD \[A_{\alpha,\nu}=\sum_{n=1,2,\ldots}\sigma_{n}\left|u_{n}\right\rangle\left\langle v_{n}\right|,\] where \(\left\{u_{n}\right\}_{n=1,2,\ldots}\) is an orthonormal basis for \(L^{2}(\alpha_{X})\), \(\left\{v_{n}\right\}_{n=1,2,\ldots}\) is an orthonormal basis for \(\left(\ker A_{\alpha,\nu}\right)^{\perp}\), and \(\sigma_{1}\geq\sigma_{2}\geq\ldots\) are the singular values of \(A_{\alpha,\nu}\). By Lemma 3.1, the kernel \(\ker A_{\alpha,\nu}\) is trivial. Thus \(\left\{v_{n}\right\}_{n=1,2,\ldots}\) is an orthonormal basis for the entire \(L^{2}(\nu)\). Continuing to utilize this expansion, we get compact diagonal operators. As a result, \[B_{\alpha,\nu,\epsilon}=\left[A_{\alpha,\nu}^{*}A_{\alpha,\nu}+\epsilon\,\mathrm{Id}\right]^{-1}A_{\alpha,\nu}^{*}=\sum_{n=1,2,\ldots}\frac{\sigma_{n}}{\sigma_{n}^{2}+\epsilon}\left|v_{n}\right\rangle\left\langle u_{n}\right|\] and \[B_{\alpha,\nu,\epsilon}A_{\alpha,\nu}=\sum_{n=1,2,\ldots}\frac{\sigma_{n}^{2}}{\sigma_{n}^{2}+\epsilon}\left|v_{n}\right\rangle\left\langle v_{n}\right|,\quad B_{\alpha,\nu,\epsilon}A_{\alpha,\nu}-\operatorname{Id}_{L^{2}(\nu)}=-\sum_{n=1,2,\ldots}\frac{\epsilon}{\sigma_{n}^{2}+\epsilon}\left|v_{n}\right\rangle\left\langle v_{n}\right|. \tag{27}\] This completes the proof of the lemma. ### Proof of Lemma 3.3. Let \(\left\{\alpha_{N}\right\}_{N=1}^{\infty}\) be a sequence of measures in \(\mathcal{M}\) converging weakly to \(\mu\). It has to be shown that the continuous functions \(g_{N}:=\mathbb{E}_{;X}^{\alpha_{N},\delta}f\) converge uniformly to \(g=\mathbb{E}_{;X}^{\mu,\delta}f\). By Assumption 1, \(\operatorname{supp}(\mu_{X})=X\) and is a compact metric space. At this point we recall the following elementary result from Real Analysis : **Lemma 6.1**.: _Let \(X\) be a compact metric space, and \(g_{n}\) be a sequence of Lipschitz functions converging to another Lipschitz function \(g\) pointwise on a dense subset of \(X\). Then the \(g_{n}\) converge uniformly to \(g\)._ In our case, the functions are uniformly Lipschitz by the following lemma. **Lemma 6.2**.: _Let \(\kappa\) be a kernel lying in the space \(\operatorname{Lip}(X;C(X))\). Fix a Borel measure \(\beta\) on \(X\), and denote the corresponding integral operator by \(\mathcal{K}^{\beta}\). Then for any \(\phi\in L^{2}(\beta)\),_ \[\left|\left(\mathcal{K}^{\beta}\phi\right)(x)-\left(\mathcal{K}^{\beta}\phi\right)(x^{\prime})\right|\leq\left\|\kappa\right\|_{Lip}\operatorname{dist}(x,x^{\prime})\left\|\phi\right\|_{L^{2}(\beta)}.\] This completes the proof of Lemma 3.3. ### Proof of Lemma 3.4. Let \(h\) denote the function \(\big{(}\tilde{K}^{\nu}\big{)}^{-1}\iota_{\nu}\mathbb{E}_{;X}^{\mu,\delta}f\). Let \(\{\alpha_{N}\}_{N=1}^{\infty}\) be a sequence of probability measures on \(X\times Y\), converging weakly to \(\mu\).
The claim of the lemma can be restated as \[\forall\theta>0,\ \exists N_{0}\in\mathbb{N},\,\epsilon_{0}>0\ \text{s.t.}\ \ \forall N>N_{0},\,\forall\epsilon\in(0,\epsilon_{0}),\ \|(B_{\alpha_{N},\nu,\epsilon}A_{\alpha_{N},\nu}-\operatorname{Id})\,h\|_{L^{2}(\nu)}<\theta. \tag{28}\] In the proof of Lemma 3.2, the variables \(\sigma_{n},v_{n}\) that appear in (27) depend on the measure \(\alpha\). To indicate this dependency, we change the notation to \(\sigma_{n,N},v_{n,N}\). By the spectral convergence of \(P^{\alpha_{N,X}}\) to \(P^{\mu_{X}}\) [e.g. 12, Prop 25; 23, Prop 13], the singular vectors and singular values of the operator \(A_{\alpha_{N},\nu}\) converge to those of \(A_{\mu,\nu}\). Thus the \(v_{n,N}\) are right singular vectors of \(A_{\alpha_{N},\nu}\). Similarly, we have right singular vectors \(v_{n}\) of \(A_{\mu,\nu}\). The function \(h\) can be expanded along both these bases as \[h=\sum_{n=1,2,\ldots}a_{n,N}v_{n,N},\quad h=\sum_{n=1,2,\ldots}a_{n}v_{n}.\] Since \(\mathbb{E}_{;X}^{\mu,\delta}f\) is assumed to be in the RKHS, by [13, Thm 2.1], for each index \(n\), \(\lim_{N\to\infty}a_{n,N}=a_{n}\). At this point, we take note of the fact that \(B_{\alpha_{N},\nu,\epsilon}A_{\alpha_{N},\nu}\) is bounded in norm by some constant \(\Gamma\), for every \(N\in\mathbb{N}\). Since \(h\) has a bounded \(L^{2}(\nu)\) norm, there are \(M_{0},N_{0}\in\mathbb{N}\) such that \[\sum_{n\geq M_{0}}|a_{n,N}|^{2}<\frac{1}{2(\Gamma+1)}\theta,\quad\forall N>N_{0}.\] Then by (27) we have \[\big{(}B_{\alpha_{N},\nu,\epsilon}A_{\alpha_{N},\nu}-\operatorname{Id}\big{)}\,h=-\epsilon\sum_{n<M_{0}}\frac{a_{n,N}}{\sigma_{n,N}^{2}+\epsilon}v_{n,N}+\big{(}B_{\alpha_{N},\nu,\epsilon}A_{\alpha_{N},\nu}-\operatorname{Id}\big{)}\sum_{n\geq M_{0}}a_{n,N}v_{n,N}.\] Note that the first sum on the RHS converges to zero as \(\epsilon\to 0^{+}\). Thus there is an \(\epsilon_{0}\) such that the first sum is less than \(\theta/2\) in norm, for all \(\epsilon\in(0,\epsilon_{0})\). The norm of the second term is less than \(\theta/2\) by design. Thus the condition (28) holds, and this completes the proof of the lemma. ### Proof of Theorem 2. The timeseries \(\big{(}x_{n},y_{n}\big{)}_{n=1}^{N}\) leads to the following _sampling_ measures : \[\mu_{M}:=\frac{1}{M}\sum_{m=1}^{M}\delta_{x_{m}},\quad\mu_{N}:=\frac{1}{N}\sum_{n=1}^{N}\delta_{x_{n}},\quad\bar{\mu}_{N}:=\frac{1}{N}\sum_{n=1}^{N}\delta_{(x_{n},y_{n})}.\] These three measures respectively play the role of \(\nu\), \(\alpha_{X}\) and \(\alpha\) from Assumption 4. Since \(\nu\) is built from a subsample of the support of \(\alpha_{X}\), Assumption 4 is fulfilled. Thus all the criteria for Theorem 1 (ii) are fulfilled, and (8) applies. The equidistribution assumed in Assumption 6 implies that \(\mu_{M}\), \(\mu_{N}\) and \(\bar{\mu}_{N}\) converge weakly to \(\mu_{X}\), \(\mu_{X}\) and \(\mu\) respectively. With these choices of \(\nu,\alpha\), \(L^{2}(\nu)\) and \(L^{2}(\alpha_{X})\) are isomorphic to \(\mathbb{C}^{M}\) and \(\mathbb{C}^{N}\) respectively. The integral operators \(P\) and \(K^{\nu}\) take the form of the \(N\times N\) matrix \([P]\) and the \(N\times M\) matrix \([K]\). Equation (8) thus takes the form of (21).
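For concreteness, the following is a compact numpy sketch of Algorithm 1, assuming a Gaussian kernel for both the Markov kernel \(p\) and the RKHS kernel \(k\), and an empirical row-sum normalization in place of (3); all names are illustrative choices, not prescribed by the paper.

```python
import numpy as np

def gaussian_gram(X1, X2, delta):
    """Gram matrix exp(-||x - x'||^2 / delta) between two point sets."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / delta)

def conditional_expectation_rkhs(x, y, delta, eps, M):
    """Sketch of Algorithm 1: returns (a, x_sub) such that
    sum_m a[m] * k(., x_sub[m]) approximates the smoothed conditional expectation."""
    N = len(x)
    X = np.asarray(x, dtype=float).reshape(N, -1)
    x_sub = X[:M]                                   # sub-sampled points (the measure nu)

    G = gaussian_gram(X, X, delta)
    G = G / G.sum(axis=1, keepdims=True)            # empirical Markov normalization, cf. (3)
    P = G                                           # p taken as the Markov-normalized Gaussian
    K = gaussian_gram(X, x_sub, delta)              # [K] in R^{N x M}

    A = P @ K                                       # [P][K]
    b = P @ (G @ np.asarray(y, dtype=float))        # [P][G_delta] y
    a = np.linalg.solve(A.T @ A + eps * np.eye(M), A.T @ b)  # eps-regularized least squares
    return a, x_sub

def evaluate(a, x_sub, x0, delta):
    """Out-of-sample evaluation: sum_m a[m] * k(x0, x_m)."""
    X0 = np.asarray(x0, dtype=float).reshape(-1, x_sub.shape[1])
    return gaussian_gram(X0, x_sub, delta) @ a
```

Applied, for instance, to data generated as in the principal-curve sketch of Section 5.2, evaluating \(\sum_{m}a_{m}k(x_{0},x_{m})\) via `evaluate` should approximately recover \(\lambda(x_{0})\).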
2310.14012
Locked entropy in partially coherent fields
We introduce a taxonomy for partially coherent optical fields spanning multiple degrees of freedom (DoFs) based on the rank of the associated coherence matrix (the number of non-zero eigenvalues). When the DoFs comprise two spatial modes and polarization, a fourfold classification emerges, with rank-1 fields corresponding to fully coherent fields. We demonstrate theoretically and confirm experimentally that these classes have heretofore unrecognized different properties. Specifically, whereas rank-2 fields can always be rendered separable with respect to their DoFs via a unitary transformation, rank-3 fields are always non-separable. Consequently, the entropy for a rank-2 field can always be concentrated into a single DoF (thus ridding the other DoF of statistical fluctuations), whereas some entropy is always 'locked' in one DoF of a rank-3 field.
Mitchell Harling, Varun A. Kelkar, Kimani C. Toussaint, Jr., Ayman F. Abouraddy
2023-10-21T13:52:14Z
http://arxiv.org/abs/2310.14012v1
# Locked entropy in partially coherent optical fields ###### Abstract We introduce a taxonomy for partially coherent optical fields spanning multiple degrees of freedom (DoFs) based on the rank of the associated coherence matrix (the number of non-zero eigenvalues). When the DoFs comprise two spatial modes and polarization, a fourfold classification emerges, with rank-1 fields corresponding to fully coherent fields. We demonstrate theoretically and confirm experimentally that these classes have heretofore unrecognized different properties. Specifically, whereas rank-2 fields can always be rendered separable with respect to their DoFs via a unitary transformation, rank-3 fields are always non-separable. Consequently, the entropy for a rank-2 field can always be concentrated into a single DoF (thus ridding the other DoF of statistical fluctuations), whereas some entropy is always 'locked' in one DoF of a rank-3 field. The study of optical coherence and the statistical fluctuations in optical fields extends back to the pioneering work of Zernike [1], and subsequently reached maturation in the work of Wolf and Mandel [2; 3; 4]. Recently, new insights into optical coherence have been brought to light [5; 6] by exploiting the mathematical correspondence between the coherence matrix for classical optical fields involving multiple degrees of freedom (DoFs) [7; 8; 9] and the density operator representing multipartite quantum mechanical states. This correspondence has led to the coinage of the term 'classical entanglement' [10; 11; 12; 13; 14] to describe optical fields that are _not_ separable with respect to their DoFs, in analogy with quantum entanglement that is intrinsic to non-separable multipartite quantum states. The concept of classical entanglement has helped solve problems with regard to Mueller matrices [15], determine the maximum achievable Young double-slit interference visibility [16], and enable the characterization of quantum optical communications channels [17], among many other applications [18; 19; 20; 21; 22; 23; 24; 25; 26]. The study of classical entanglement in optical fields is enriched by the possibility of implementing inter-DoF (or global) unitary transformations ('unitaries' henceforth for brevity [16; 27]), including entangling and disentangling unitaries; e.g., a spatial light modulator can entangle or disentangle polarization and spatial modes [28]. This feature is central to the recent demonstration of entropy swapping [29; 30; 31], which refers to the reversible reallocation of statistical fluctuations from one DoF to another in a partially coherent field. For example, starting with a _polarized_ but spatially _incoherent_ field (the entropy is confined to the spatial DoF), a global unitary can convert the field to one that is _unpolarized_ but spatially _coherent_ (the entropy has been swapped to the polarization DoF with no loss of energy). A similar approach enables entropy concentration, whereby the entropy shared among the DoFs can be optimally transferred into a single DoF via a unitary [30]. Here we uncover a surprising feature of partially coherent optical fields that places a constraint on entropy concentration under arbitrary global unitaries [16; 29]. For concreteness, we examine a canonical optical field model having two binary DoFs, and introduce a fourfold taxonomy based on the _coherence rank_ of the associated \(4\!\times\!4\) coherence matrix, which corresponds to the number of its non-zero eigenvalues (from 1 to 4).
While the rank-1 class embraces all coherent fields, rank-2 through rank-4 classes comprise partially coherent fields. We find that fields of different ranks have altogether different characteristics that have not been investigated to date. Specifically, we find that the potential for concentrating the field entropy into a single DoF depends crucially on the rank. Most conspicuously, the entropy of rank-2 fields - no matter how _high_ - can _always_ be concentrated into a single DoF, thereby leaving the other DoF free of statistical fluctuations [Fig. 1]. Indeed, there always exists a global unitary that renders the field separable with respect to its DoFs, with all the initial entropy concentrated into a single DoF. In stark contrast, it is _impossible_ to concentrate all the entropy of rank-3 fields - no matter how _low_ - into one DoF, and residual fluctuations must be retained by the other DoF, which we call 'locked entropy' [Fig. 1]. This stems from the fact that rank-3 fields possess a _fundamentally non-separable_ structure that cannot be eliminated unitarily. We demonstrate these effects experimentally using optical fields defined by polarization and two spatial modes as the binary DoFs of interest. These results open a new window onto understanding the dynamics of optical coherence upon traversing optical systems or media that couple multiple DoFs, and suggest new applications that may exploit the coherence rank in optical imaging and communications. **Vector-space formulation of partially coherent optical fields.** The most general state of an optical field characterized by a binary DoF is described by a \(2\times 2\) coherence matrix. The polarization coherence matrix is \(\mathbf{G}_{\mathrm{p}}\!=\!\left(\begin{array}{cc}G^{\mathrm{HH}}&G^{\mathrm{HV}}\\ G^{\mathrm{VH}}&G^{\mathrm{VV}}\end{array}\right)\), where \(G^{ij}\!=\!\left\langle E^{i}(E^{j})^{*}\right\rangle\), \(i,j\!=\!\mathrm{H},\mathrm{V}\), and \(E^{i}\) is a scalar field component at a point. Similarly, the spatial coherence matrix at two points \(a\) and \(b\) in a scalar field is \(\mathbf{G}_{\mathrm{s}}\!=\!\left(\begin{array}{cc}G_{aa}&G_{ab}\\ G_{ba}&G_{bb}\end{array}\right)\), where \(G_{kl}\!=\!\left\langle E_{k}E_{l}^{*}\right\rangle\), \(k,l\!=\!a,b\), and \(E_{k}\) is the scalar field at a point. The polarization entropy is \(S_{\mathrm{p}}\!=\!-\lambda_{1}\log_{2}\lambda_{1}-\lambda_{2}\log_{2}\lambda_{2}\), where \(\lambda_{1}\) and \(\lambda_{2}\) are the eigenvalues of \(\mathbf{G}_{\mathrm{p}}\); the spatial entropy \(S_{\mathrm{s}}\) associated with \(\mathbf{G}_{\mathrm{s}}\) is similarly defined. In general \(0\!\leq\!S_{\mathrm{p}},S_{\mathrm{s}}\!\leq\!1\), with \(S_{\mathrm{p}},S_{\mathrm{s}}\!=\!0\) in the case of fully coherent fields (no statistical fluctuations) [32]. The maximum entropy is 1 bit when the field is unpolarized or spatially incoherent \(\mathbf{G}_{\mathrm{p}},\mathbf{G}_{\mathrm{s}}\!=\!\frac{1}{2}\mathcal{I}\) (where \(\mathcal{I}\) is the identity matrix).
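As a minimal numerical illustration of these definitions (ours, not the authors'; the eigenvalue threshold is an assumption), the entropy of a coherence matrix follows directly from its eigenvalues:

```python
import numpy as np

def entropy_bits(G: np.ndarray, tol: float = 1e-12) -> float:
    """Entropy -sum_i l_i log2 l_i of a Hermitian, unit-trace coherence matrix."""
    lam = np.linalg.eigvalsh(G)
    lam = lam[lam > tol]            # discard numerically zero eigenvalues
    return float(-(lam * np.log2(lam)).sum())

G_p = 0.5 * np.eye(2)               # unpolarized light
print(entropy_bits(G_p))            # -> 1.0, the maximum for a binary DoF
```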
Taking _both_ DoFs (i.e., two points in a vector field), the first-order coherence is described by a 4\(\times\)4 coherence matrix \(\mathbf{G}\)[6; 8; 16],

\[\mathbf{G}=\left(\begin{array}{cccc}G^{\mathrm{HH}}_{aa}&G^{\mathrm{HV}}_{aa}&G^{\mathrm{HH}}_{ab}&G^{\mathrm{HV}}_{ab}\\ G^{\mathrm{VH}}_{aa}&G^{\mathrm{VV}}_{aa}&G^{\mathrm{VH}}_{ab}&G^{\mathrm{VV}}_{ab}\\ G^{\mathrm{HH}}_{ba}&G^{\mathrm{HV}}_{ba}&G^{\mathrm{HH}}_{bb}&G^{\mathrm{HV}}_{bb}\\ G^{\mathrm{VH}}_{ba}&G^{\mathrm{VV}}_{ba}&G^{\mathrm{VH}}_{bb}&G^{\mathrm{VV}}_{bb}\end{array}\right), \tag{1}\]

where \(G^{ij}_{kl}\!=\!\left\langle E^{i}_{k}(E^{j}_{l})^{*}\right\rangle\), \(i,j\!=\!\mathrm{H},\mathrm{V}\), and \(k,l\!=\!a,b\). The coherence matrices \(\mathbf{G}\), \(\mathbf{G}_{\mathrm{s}}\), and \(\mathbf{G}_{\mathrm{p}}\) are all Hermitian, positive semi-definite, unity-trace matrices. A \(4\!\times\!4\) unitary \(\hat{U}\) spanning both DoFs [16] diagonalizes \(\mathbf{G}\): \(\mathbf{G}_{\mathrm{D}}\!=\!\hat{U}\mathbf{G}\hat{U}^{\dagger}\!=\!\mathrm{diag}\{\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{4}\}\), with \(\sum_{j}\lambda_{j}\!=\!1\), and the field can carry up to 2 bits of entropy \(S\!=\!-\sum_{j=1}^{4}\lambda_{j}\log_{2}\lambda_{j}\), where \(0\!\leq\!S\!\leq\!2\) and \(\mathrm{diag}\{\cdot\}\) refers to a diagonal matrix with the listed elements along the diagonal. If, and only if, \(\lambda_{1}\lambda_{4}\!=\!\lambda_{2}\lambda_{3}\) can \(\mathbf{G}_{\mathrm{D}}\) be separated into a direct product with respect to the two DoFs, \(\mathbf{G}_{\mathrm{D}}\!=\!\mathrm{diag}\{\psi_{a},\psi_{b}\}\!\otimes\!\mathrm{diag}\{\gamma^{\mathrm{H}},\gamma^{\mathrm{V}}\}\), where each 2\(\times\)2 coherence matrix corresponds to one DoF [33]. The condition \(\lambda_{1}\lambda_{4}\!=\!\lambda_{2}\lambda_{3}\) therefore delineates optical fields that can - in principle - be rendered separable with respect to their DoFs via unitaries.

We introduce the reduced coherence matrices that result from 'tracing out' one DoF from \(\mathbf{G}\): the reduced spatial coherence matrix \(\mathbf{G}_{\mathrm{s}}^{\mathrm{red}}\) after tracing out polarization, and the reduced polarization coherence matrix \(\mathbf{G}_{\mathrm{p}}^{\mathrm{red}}\) after tracing over space. We define entropies \(S_{\mathrm{s}}\) and \(S_{\mathrm{p}}\) for \(\mathbf{G}_{\mathrm{s}}^{\mathrm{red}}\) and \(\mathbf{G}_{\mathrm{p}}^{\mathrm{red}}\), respectively; in general, \(S\!\leq\!S_{\mathrm{s}}+S_{\mathrm{p}}\), with equality occurring only when the field is separable. Crucially, whereas \(S\) is invariant with respect to global unitaries, \(S_{\mathrm{s}}\) and \(S_{\mathrm{p}}\) are _not_. Indeed, whereas \(\mathbf{G}\) suffices to completely identify the field coherence, these reduced matrices do _not_ [6; 16; 29; 34].

**Coherence rank and entropy concentration.** We classify these optical fields into four families according to their _coherence rank_, defined as the number of non-zero eigenvalues of \(\mathbf{G}\). Rank-1 fields, \(\left\{\lambda\right\}\!=\!\left\{1,0,0,0\right\}\), comprise fully coherent fields, \(S\!=\!0\) (no statistical fluctuations). It is _always_ possible to render rank-1 fields separable via a unitary: \(\mathbf{G}\!\rightarrow\!\mathbf{G}_{\mathrm{D}}\!=\!\mathrm{diag}\{1,0\}\otimes\mathrm{diag}\{1,0\}\), whereupon both DoFs are fully coherent.
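To make these definitions concrete, the following Python sketch (an illustration, not part of the paper) computes the coherence rank, the total entropy \(S\), the separability condition \(\lambda_{1}\lambda_{4}=\lambda_{2}\lambda_{3}\), and the reduced matrices and their entropies by partial trace, assuming the basis ordering (\(a\)H, \(a\)V, \(b\)H, \(b\)V) implied by Eq. (1):

```python
import numpy as np

def entropy(eigvals, eps=1e-12):
    """S = -sum(lam * log2(lam)) over the non-zero eigenvalues."""
    lam = np.real(np.asarray(eigvals))
    lam = lam[lam > eps]
    return float(-np.sum(lam * np.log2(lam)))

def coherence_rank(G, eps=1e-9):
    """Number of non-zero eigenvalues of the coherence matrix G."""
    return int(np.sum(np.linalg.eigvalsh(G) > eps))

def reduced_matrices(G):
    """Partial traces of the 4x4 G (space (x) polarization ordering):
    G_s_red traces out polarization; G_p_red traces out space."""
    G4 = G.reshape(2, 2, 2, 2)                 # indices (k, i, l, j): k,l = a,b; i,j = H,V
    G_s_red = np.trace(G4, axis1=1, axis2=3)   # 2x2 spatial matrix
    G_p_red = np.trace(G4, axis1=0, axis2=2)   # 2x2 polarization matrix
    return G_s_red, G_p_red

# Example: the diagonalized rank-2 field G_D = diag{1/2, 1/2, 0, 0}
lam = np.array([0.5, 0.5, 0.0, 0.0])
G_D = np.diag(lam)
print(coherence_rank(G_D), entropy(lam))          # rank 2, S = 1 bit
print(np.isclose(lam[0]*lam[3], lam[1]*lam[2]))   # separability: lam1*lam4 == lam2*lam3
G_s, G_p = reduced_matrices(G_D)                  # diag{1,0} and diag{1/2,1/2}
print(entropy(np.linalg.eigvalsh(G_s)),           # S_s = 0: spatially coherent
      entropy(np.linalg.eigvalsh(G_p)))           # S_p = 1: unpolarized
```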
Partially coherent rank-2 fields, \(\left\{\lambda\right\}\!=\!\left\{\lambda_{1},\lambda_{2},0,0\right\}\), with entropy in the range \(0\!<\!S\!\leq\!1\), can _always_ be transformed unitarily into the separable form: \(\mathbf{G}_{\mathrm{D}}\!=\!\mathrm{diag}\{1,0\}\otimes\mathrm{diag}\{\lambda_{1},\lambda_{2}\}\). This corresponds to a partially polarized field that is fully coherent spatially (\(S_{\mathrm{p}}\!=\!S\) and \(S_{\mathrm{s}}\!=\!0\)). Alternatively, the field can be converted into a fully polarized field that is partially coherent spatially (\(S_{\mathrm{p}}\!=\!0\) and \(S_{\mathrm{s}}\!=\!S\)). In general, the entropy of a rank-2 field is shared between the two DoFs. Nevertheless, even in its highest-entropy state \(S\!=\!1\), \(\left\{\lambda\right\}\!=\!\left\{\frac{1}{2},\frac{1}{2},0,0\right\}\), such fields can always be rendered separable such that one DoF is fully coherent (ridding it completely of statistical fluctuations), with the 1 bit of field entropy concentrated in the other DoF [29; 30; 31]; see Fig. 1.

In stark contrast, the coherence matrices associated with rank-3 fields, \(\left\{\lambda\right\}\!=\!\left\{\lambda_{1},\lambda_{2},\lambda_{3},0\right\}\), whose entropy is in the range \(0\!<\!S\!\leq\!1.585\), can_not_ be expressed as a direct product \(\left(\lambda_{1}\lambda_{4}\!=\!0\!\neq\!\lambda_{2}\lambda_{3}\right)\); that is, rank-3 fields are _never_ separable with respect to their DoFs. This fundamental non-separability is independent of the values \(\left\{\lambda\right\}\) and is solely a consequence of the rank of \(\mathbf{G}\). This hitherto unrecognized feature has important consequences for entropy concentration: it prevents ridding either DoF altogether of statistical fluctuations. Indeed, after concentrating the entropy into one DoF, a residual amount of entropy is retained that we refer to as _locked entropy_. The entropy in a rank-3 field must always be shared between the DoFs no matter how low \(S\) is. Even when \(S\!<\!1\), it is impossible to realize the condition \(S_{\mathrm{p}}\!=\!S\) and \(S_{\mathrm{s}}\!=\!0\) (or \(S_{\mathrm{p}}\!=\!0\) and \(S_{\mathrm{s}}\!=\!S\)) unitarily, which is attainable for rank-2 fields _of the same entropy_ [Fig. 1]. Furthermore, when \(S\!>\!1\) one cannot concentrate 1 bit of entropy in one of the DoFs. Defining the function \(f(x)\!=\!-x\log_{2}x-\left(1\!-x\right)\log_{2}\left(1\!-x\right)\), the minimum entropy that is locked in one DoF is \(S_{\rm min}\!=\!f(\lambda_{1}\!+\!\lambda_{2})\), in which case the entropy concentrated into the other DoF is \(S_{\rm max}\!=\!f(\lambda_{1}\!+\!\lambda_{3})\).

Figure 1: Starting with a non-separable field with 1 bit of entropy (\(S\!=\!1\), left) that is unpolarized \(S_{\rm p}\!=\!1\) and spatially incoherent \(S_{\rm s}\!=\!1\), a unitary can reversibly convert it into one of two forms depending on the rank of \(\mathbf{G}\). For a rank-2 field, the entropy can _always_ be fully concentrated into one DoF, leaving the other DoF free of statistical fluctuations. For a rank-3 field, entropy can _never_ be fully concentrated in one DoF. There always remains ‘locked entropy’ in the other DoF.

Rank-4 fields, \(\{\lambda\}\!=\!\{\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{4}\}\), can sometimes be unitarily rendered separable with respect to their DoFs depending on the eigenvalues, and they thus share the properties of rank-2 or rank-3 fields.
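The locked-entropy bounds quoted above follow directly from the eigenvalues. A small numerical sketch (assuming, as the formulas suggest, that the eigenvalues are sorted in descending order):

```python
import numpy as np

def f(x):
    """Binary entropy f(x) = -x log2(x) - (1-x) log2(1-x), with f(0) = f(1) = 0."""
    x = np.clip(x, 1e-12, 1.0 - 1e-12)
    return float(-x*np.log2(x) - (1 - x)*np.log2(1 - x))

# A rank-3 field with eigenvalues sorted descending: lam1 >= lam2 >= lam3 > 0 = lam4
lam = np.array([0.5, 0.3, 0.2])
S     = -np.sum(lam * np.log2(lam))  # total, unitarily invariant entropy
S_min = f(lam[0] + lam[1])           # entropy locked in one DoF
S_max = f(lam[0] + lam[2])           # entropy concentrated in the other DoF
print(S, S_min, S_max)

# Maximum-entropy rank-3 field {1/3, 1/3, 1/3, 0}: S = log2(3) ~ 1.585 and the
# locked entropy is f(2/3) ~ 0.92, the value quoted later for Fig. 3(f).
print(np.log2(3), f(2/3))
```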
We will not examine rank-4 fields here and focus instead on delineating the characteristics of rank-2 and rank-3 fields.

**Experiment.** We first prepare and characterize representative rank-2 and rank-3 fields [Fig. 2]. Starting from unpolarized, spatially incoherent light from an LED (wavelength 625 nm), we select two spatial modes using slits at points \(a\) and \(b\) that are sufficiently separated to guarantee mutual incoherence [Fig. 2(a)]. For a rank-2 field \(\mathbf{G}=\frac{1}{2}\mathrm{diag}\{1,0,1,0\}\), the source configuration along with the measured coherence matrix are shown in Fig. 2(b), and the corresponding results for the rank-3 field with \(\mathbf{G}\!=\!\frac{1}{3}\mathrm{diag}\{1,0,1,1\}\) are shown in Fig. 2(c). The rank-2 field is prepared by placing a polarizer at both \(a\) and \(b\), yielding \(S\!=\!1\): the field is polarized \(S_{\rm p}\!=\!0\) but spatially incoherent \(S_{\rm s}\!=\!1\). The rank-3 field is prepared by placing a linear polarizer at \(b\) only (the field at \(a\) remains unpolarized) to yield \(S\!=\!1.585\): the field is partially polarized and partially coherent spatially. Throughout, \(\mathbf{G}\) is reconstructed via optical coherence matrix tomography (OCmT) [Fig. 2(a)], which extends to optical fields with multiple DoFs [34; 35] the analogous procedure of quantum state tomography [36; 37; 38]; see Supplementary for further experimental details.

Figure 2: (a) Schematic of the OCmT measurement scheme used to measure coherence matrices; L: spherical lens (focal length \(f\!=\!30\) cm); F: spectral filter; \(\lambda/4\): quarter-wave plate; P: linear polarizer; C: CMOS camera. (b) Source preparation for a rank-2 field and the measured \(4\times 4\) coherence matrix \(\mathbf{G}\). (c) Same as (b) for a rank-3 field.

The impact of the coherence rank on the limits of entropy concentration is illustrated in Fig. 3. We consider iso-entropy rank-2 [Fig. 3(a)] and rank-3 [Fig. 3(b)] fields. We make use of an entropy converter that unitarily couples the two DoFs [Fig. 3(c)], which comprises a half-wave plate (HWP) W\({}_{1}\) in path \(a\) oriented at \(45^{\circ}\) with respect to H (H\(\rightarrow\)V, V\(\rightarrow\)H), a polarizing beam splitter (PBS) that couples modes \(a\) and \(b\) and produces modes \(a^{\prime}\) and \(b^{\prime}\), followed by a HWP W\({}_{2}\) in mode \(a^{\prime}\) in one of two orientations: at \(0^{\circ}\) with H (H\(\rightarrow\)H and V\(\rightarrow-\)V), and at \(45^{\circ}\) with H (H\(\rightarrow\)V, V\(\rightarrow\)H). The first orientation minimizes the entropy in the spatial DoF (entropy concentration), while the second orientation swaps the entropy of the spatial and polarization DoFs (entropy swapping).

Either binary DoF (polarization or spatial modes) can support up to 1 bit of entropy. We thus first prepare rank-2 and rank-3 fields with \(S\!=\!0.75\) [Fig. 3(d)]. For the rank-2 field, the entire entropy can be concentrated in the spatial DoF, \(S_{\rm s}\!=\!0.75\) (partially coherent spatially) and \(S_{\rm p}\!=\!0\) (fully polarized). Using the first setting for W\({}_{2}\), the entropy converter minimizes the spatial entropy: \(S_{\rm s}\!\rightarrow\!0\) (spatially coherent) and \(S_{\rm p}\!\rightarrow\!0.75\) (partially polarized). The second setting for W\({}_{2}\) swaps the entropy between the DoFs, which yields here the same result as that of entropy concentration with the first setting.

The corresponding results for the rank-3 field are entirely in contrast to those for the iso-entropy \(S\!=\!0.75\) rank-2 field. The rank-3 source configuration yields theoretical values of \(S_{\rm s}\!=\!0.6\) (partially coherent spatially) and \(S_{\rm p}\!=\!0.38\) (partially polarized); see Supplementary. The first setting minimizes the spatial entropy but can_not_ concentrate all the entropy into the polarization DoF; rather, some entropy remains locked in the spatial DoF \(S_{\rm s}\!\rightarrow\!0.38\). The second setting for the entropy converter swaps the entropy between the DoFs: \(S_{\rm s}\!\rightarrow\!0.38\) (partially coherent spatially) and \(S_{\rm p}\!\rightarrow\!0.6\) (partially polarized).

Similar results are obtained when the initial field has a total of 1 bit of entropy, \(S\!=\!1\) [Fig. 3(e)]. Whereas the entire entropy can be concentrated in either DoF in the case of a rank-2 field, this cannot be achieved for the iso-entropy rank-3 field, and some entropy must remain locked in either DoF. Finally, the entropy of rank-3 fields can exceed 1 bit (whereas that of rank-2 fields cannot). In Fig. 3(f) we repeat the measurements with a maximum-entropy rank-3 field, \(S\!=\!1.585\). Here the locked entropy in the spatial DoF is \(S_{\rm s}\!=\!f(\frac{2}{3})\!=\!0.92\).

Figure 3: Unitary entropy conversion for rank-2 and rank-3 fields. (a) Source configurations for rank-2 and (b) rank-3 fields. P: Linear polarizer oriented along H; N: neutral density filter. (c) Setup for entropy conversion. W: Half-wave plate; PBS: polarizing beam splitter. (d-f) From left to right: \(\mathbf{G}\) reconstructed before the entropy converter; \(\mathbf{G}^{\prime}\) after the entropy converter with W\({}_{2}\) oriented at \(0^{\circ}\); and \(\mathbf{G}^{\prime}\) with W\({}_{2}\) at \(45^{\circ}\). All matrices are measurements, and the fidelity throughout was \(>\!98\%\) with respect to theoretical expectations (see Supplementary). (d) Rank-2 and rank-3 fields with \(S\!\approx\!0.75\); (e) same as (d) for \(S\!\approx\!1\); and (f) rank-3 field with \(S\!\approx\!1.585\).

The field rank can be identified by reconstructing \(\mathbf{G}\), as shown in Fig. 3. Nevertheless, information concerning the coherence rank can be deduced by observing the visibility of the spatial interference fringes produced when the fields at \(a\) and \(b\) are superposed after a polarization projection. Two theorems (see Supplementary for proofs) help establish a strategy for this approach.

**Theorem 1**.: _For a vector optical field supported on two spatial points with a coherence matrix \(\mathbf{G}\), if there exists a polarization projection along vector \(\mathbf{P}\) along which the field is spatially coherent (i.e., it can produce spatial interference fringes with \(100\%\) visibility), then \(\mathcal{R}(\mathbf{G})\leq 3\)._

**Theorem 2**.: _For a vector optical field supported on two points with a coherence matrix \(\mathbf{G}\), if there exist two orthogonal polarization projections \(\mathbf{P}\) and \(\mathbf{Q}\) along which the field is spatially coherent (i.e., it can produce \(100\%\)-visibility spatial interference fringes), then \(\mathcal{R}(\mathbf{G})\!\leq\!2\)._

In other words, identifying an orthogonal pair of polarization projections that both yield a spatially coherent field indicates that the field is either rank-1 or rank-2. Identifying only a single polarization projection that yields a spatially coherent field indicates that the field is rank-3. There is _no_ polarization projection for a rank-4 field that yields a spatially coherent field.
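These theorems suggest a simple numerical test: project the polarization of \(\mathbf{G}\) onto a unit vector and check whether the resulting \(2\times 2\) spatial matrix is rank-1 (zero determinant), the condition for \(100\%\)-visibility fringes. A sketch with invented example fields, not the measured matrices:

```python
import numpy as np

def projected_spatial_matrix(G, p):
    """2x2 spatial coherence matrix after projecting the polarization onto unit
    vector p = (p_H, p_V); basis ordering (aH, aV, bH, bV) as in Eq. (1)."""
    G4 = G.reshape(2, 2, 2, 2)                      # indices (k, i, l, j)
    return np.einsum('i,kilj,j->kl', np.conj(p), G4, p)

def fully_spatially_coherent(G, p, tol=1e-9):
    """True if the projected field gives 100%-visibility fringes, i.e. the
    2x2 spatial matrix has rank 1 (vanishing determinant)."""
    Gs = projected_spatial_matrix(G, p)
    return abs(np.linalg.det(Gs)) < tol * max(abs(np.trace(Gs))**2, 1e-30)

H, V = np.array([1.0, 0.0]), np.array([0.0, 1.0])
Gs_coh = 0.5 * np.ones((2, 2))   # fully coherent spatial 2x2 block

# Rank-2: spatially coherent, partially polarized -> both H and V projections work
G2 = np.kron(Gs_coh, np.diag([0.75, 0.25]))
print(fully_spatially_coherent(G2, H), fully_spatially_coherent(G2, V))  # True True

# Rank-3 (eigenvalues {0.5, 0.3, 0.2, 0}): only the H projection works
G3 = 0.5*np.kron(Gs_coh, np.diag([1.0, 0.0])) \
     + np.kron(np.diag([0.3, 0.2]), np.diag([0.0, 1.0]))
print(fully_spatially_coherent(G3, H), fully_spatially_coherent(G3, V))  # True False
```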
We demonstrate these results experimentally in Fig. 4 with pairs of iso-entropy rank-2 and rank-3 fields. After the field is prepared, it is directed through the entropy converter shown in Fig. 3(c), and then the field is globally projected onto a prescribed polarization. We search for pairs of directions along which the resulting scalar field yields spatial interference fringes with \(100\%\) visibility. We start with a pair of fields at \(S\!\approx\!0.75\) [Fig. 4(a)]. The rank-2 field is prepared by projecting the polarization at \(45^{\circ}\) with respect to H and adjusting the amplitude of one spatial mode to obtain the targeted entropy (see Supplementary for the full coherence matrices associated with the fields in Fig. 4). After the entropy converter with W\({}_{2}\) oriented at \(0^{\circ}\), no spatial interference fringes of high visibility are observed at any polarization projection. After setting W\({}_{2}\) at \(45^{\circ}\), the polarization projections along H and V yield high-visibility spatial interference fringes, as expected for a rank-2 field. We contrast these observations with those for an iso-entropy rank-3 field \(S\!\approx\!0.75\). This field is prepared by projecting the polarization at \(a\) alone along \(45^{\circ}\) and adjusting the amplitude at \(b\) to obtain the target entropy. After the entropy converter with W\({}_{2}\) oriented at \(0^{\circ}\), no spatial interference fringes are observed at any polarization projection. However, after setting W\({}_{2}\) at \(45^{\circ}\), projecting the polarization along H yields a field that produces high-visibility spatial interference fringes. The corresponding polarization projection along V does _not_ yield a spatially coherent field, and no interference fringes can be observed.

We increase the field entropy for an iso-entropy pair of rank-2 and rank-3 fields to \(S\!\approx\!1\) (the maximum entropy for rank-2) [Fig. 4(b)], and observe similar results to those for the lower-entropy fields [Fig. 4(a)]. Despite the higher entropy, we can still identify a pair of polarization projections for the rank-2 field that result in spatial coherence, whereas only a single polarization projection is identified for the rank-3 field.

Figure 4: Identifying the coherence rank through the spatial coherence after a polarization projection. (a,b) From left to right: the source preparation; optimal interference fringes along the H and V polarization projections after the entropy converter in Fig. 3(c), with W\({}_{2}\) oriented at \(0^{\circ}\) with H; and optimal interference fringes along the H and V polarization projections with W\({}_{2}\) oriented at \(45^{\circ}\). (a) Iso-entropy rank-2 and rank-3 fields with \(S\!=\!0.75\). (b) Same as (a) for \(S\!=\!1\).

**Discussion.** The approach outlined here in terms of coherence matrices [39; 40; 41; 8] reveals features that are difficult to discern otherwise when extended to multiple DoFs. The analysis and experiments suggest a wealth of fundamental questions regarding the statistical behavior of optical fields: How does the rank vary spatially across a vector optical field? How does the spatial distribution of the rank evolve with free propagation? How is the coherence rank affected by optical nonlinearities?
Although we have couched the coherence matrix here in terms of polarization and spatial modes, this description can be extended to other DoFs, including higher-dimensional DoFs (e.g., orbital angular momentum), and even continuous DoFs after implementing the Schmidt decomposition to obtain an effective finite-dimensional representation [42; 43; 25; 44; 45]. This is particularly relevant in light of recent realizations of optical fields in which the spatial, temporal, and polarization DoFs are all coupled [46; 47; 48; 49]. In addition to the intrinsic interest of the coherence rank as a potential thermodynamic variable for electromagnetic fields, it may also serve as an integer identifier of the global properties of the field to be exploited for communications schemes using partially coherent light [50]. In conclusion, we have presented a classification scheme of partially coherent optical fields based on the rank of the \(4\times 4\) coherence matrix for two binary DoFs. This classification unveils surprising structural distinctions: _all_ rank-2 fields are fundamentally separable whereas all rank-3 fields are intrinsically _non_-separable. Consequently, the entropy in rank-2 fields - no matter how high - can always be concentrated into one DoF, thereby leaving the other DoF free of statistical fluctuations. In contrast, in a rank-3 field the entropy - no matter how low - cannot be fully concentrated into one DoF, and locked entropy remains associated with the other DoF. ###### Acknowledgements. We thank C. Okoro, M. Yessenov, and A. Dogariu for useful discussions and assistance. This work was funded by the US Office of Naval Research (ONR) under contracts N00014-17-1-2458 and N00014-20-1-2789.
2304.07697
Dependence of trefoil vortex knots upon the initial vorticity profile
Six sets of Navier-Stokes trefoil vortex knots in $(2\pi)^3$ domains show how the shape of the initial profile influences the evolution of the enstrophy $Z$, helicity ${\cal H}$ and dissipation scales. Significant differences develop even when all have the same three-fold symmetric trajectory, the same initial circulation and the same range of viscosities $\nu$. Maps of the helicity density $h=u\cdot\omega$ onto vorticity isosurface patches show where $h\lesssim0$ sheets form during reconnection. For the Gaussian/Lamb-Oseen profile the helicity ${\cal H}$ grows significantly, with only a brief spurt of enstrophy growth as thin braids form and then decay during reconnection. The remaining profiles are algebraic. For the untruncated algebraic cases, $h<0$ vortex sheets form in tandem with $\nu$-independent convergence of $\sqrt{\nu}Z(t)$ at a common $t_x$. For those with the broadest wings, enstrophy growth accelerates during reconnection, leading to approximately $\nu$-independent convergent finite-time dissipation rates $\epsilon=\nu Z$. By mapping terms from the budget equations onto centerlines, the origins of the divergent behavior are illustrated. For Lamb-Oseen, six locations of centerline convergence form, with local negative helicity dissipation, $\epsilon_h<0$, and small, but positive, $h$. Later, the sum of these localized patches of $\epsilon_h<0$ leads to a positive increase in the global ${\cal H}$ and suppression of enstrophy production. For the algebraic profiles there are only three locations of centerline convergence, each with spans of less localized $\epsilon_h<0$ and some $h<0$. These spans could be the seeds for the $h<0$ vortex sheets that form in the lower half of the trefoil as the $\sqrt{\nu}Z(t)$ phase begins, and can explain the accelerated growth of the enstrophy and the evidence for finite-time energy dissipation $\Delta E_\epsilon$, despite the initial symmetries.
Robert M. Kerr
2023-04-16T05:02:36Z
http://arxiv.org/abs/2304.07697v1
# Dependence of trefoil vortex knots upon the initial vorticity profile.

###### Abstract

Six sets of Navier-Stokes trefoil vortex knots in \((2\pi)^{3}\) domains show how the shape of the initial profile influences the evolution of the enstrophy \(Z\), helicity \(\mathcal{H}\) and dissipation scales. Significant differences develop even when all have the same three-fold symmetric trajectory, the same initial circulation and the same range of viscosities \(\nu\). Maps of the helicity density \(h=u\cdot\omega\) onto vorticity isosurface patches show where \(h\lesssim 0\) sheets form during reconnection. For the Gaussian/Lamb-Oseen profile the helicity \(\mathcal{H}\) grows significantly, with only a brief spurt of enstrophy growth as thin braids form and then decay during reconnection. The remaining profiles are algebraic. For the untruncated algebraic cases, \(h<0\) vortex sheets form in tandem with \(\nu\)-independent convergence of \(\sqrt{\nu}Z(t)\) at a common \(t_{x}\). For those with the broadest wings, enstrophy growth accelerates during reconnection, leading to approximately \(\nu\)-independent convergent finite-time dissipation rates \(\epsilon=\nu Z\). By mapping terms from the budget equations onto centerlines, the origins of the divergent behavior are illustrated. For Lamb-Oseen, six locations of centerline convergence form, with local negative helicity dissipation, \(\epsilon_{h}<0\), and small, but positive, \(h\). Later, the sum of these localized patches of \(\epsilon_{h}<0\) leads to a positive increase in the global \(\mathcal{H}\) and suppression of enstrophy production. For the algebraic profiles there are only three locations of centerline convergence, each with spans of less localized \(\epsilon_{h}<0\) and some \(h<0\). These spans could be the seeds for the \(h<0\) vortex sheets that form in the lower half of the trefoil as the \(\sqrt{\nu}Z(t)\) phase begins, and can explain the accelerated growth of the enstrophy and the evidence for finite-time energy dissipation \(\Delta E_{\epsilon}\), despite the initial symmetries.

## I Background

For the incompressible, three-dimensional Navier-Stokes equations the three significant quadratic integrated diagnostics of the velocity \(u\) and vorticity \(\omega\) are: the kinetic energy with \(E\sim 0.5u^{2}\); the enstrophy with \(Z\sim\omega^{2}\); and the helicity \(\mathcal{H}\). \(\mathcal{H}\) is the global integral of the helicity density \(h=\mathbf{u}\cdot\mathbf{\omega}\) and can take either sign. Equations representing their budgets are defined in section II.

The robust relationship between the energy \(E\) and enstrophy \(Z\) is well-known. Given a viscosity \(\nu\), the energy decays at the dissipation rate \(\epsilon=\nu Z\): \(dE/dt=-\epsilon\). The importance of \(\epsilon\) for turbulent flows is that irregularity of the vorticity can lead to very large enstrophy and an energy dissipation rate \(\epsilon\) that is large enough to support a finite, Reynolds number-independent energy dissipation. This is known as a _dissipation anomaly_, defined as the finite integral

\[\Delta E_{\epsilon}=\int_{0}^{T_{\epsilon}}\epsilon\,dt>0\quad\text{in a finite time $T_{\epsilon}$}\,. \tag{1}\]

This is observed in many laboratory and environmental turbulent flows. This relation between irregular vorticity and turbulent decay is robust, but has this caveat: Can a smooth initial state far from boundaries numerically generate \(\nu\to 0\) finite \(\Delta E_{\epsilon}\) without either forcing or a parameterized dissipation \(\epsilon\)?
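To make the bookkeeping in (1) concrete, the finite-time dissipated energy can be estimated from a sampled enstrophy history \(Z(t)\) by simple quadrature. A minimal Python sketch with an invented toy history (not data from the simulations):

```python
import numpy as np

def dissipation_anomaly(t, Z, nu):
    """Delta E_eps = int_0^{T_eps} eps dt, with eps(t) = nu * Z(t), Eq. (1),
    estimated by the trapezoidal rule from a sampled enstrophy history."""
    return np.trapz(nu * np.asarray(Z), t)

# Toy histories: the reconnection peak is given a 1/nu scaling so that
# eps = nu*Z has a nu-independent part, which is the signature a dissipation
# anomaly would leave in this diagnostic as nu decreases.
t = np.linspace(0.0, 10.0, 501)
for nu in (3.3e-4, 1.7e-4, 8.4e-5):
    Z = 124.0 + (0.01/nu) * np.exp(-((t - 8.0)/2.0)**2)
    print(nu, dissipation_anomaly(t, Z, nu))
```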
Could a better understanding of the helicity density \(h\) help? What is known is that without viscosity, that is for the inviscid \(\nu=0\) Euler equations, the global helicity \(\mathcal{H}\) is preserved, in addition to the energy \(E\). On that basis it has been proposed that \(\mathcal{H}\) can constrain nonlinear Euler growth of the enstrophy \(Z\). However, could the formation of local \(h<0\) along a vortex lead to an alternative scenario?

Trefoil vortex knots are an initial state that is inherently helical, self-reconnecting, and mathematically compact, meaning that they can be isolated far from boundaries. The goal of this paper is to revisit recent trefoil knot simulations [1; 2; 3; 4] to ascertain why different initial vorticity profiles generate starkly contrasting answers to those questions. Before the results in papers [1; 2; 3; 4], the most that numerics had been able to tell us about the role of helicity is that for single-signed helical Fourier modes, energy dissipation can be suppressed for a short time [5]. These flows then evolve into traditional decaying numerical turbulence, without any further insight into whether \(h\) has a role in either achieving, or suppressing, finite energy dissipation as the viscosity decreases.

Could trefoil vortex knots robustly overcome those limitations? Here, robustly means: are the numerics adequate to reach consistent conclusions? One conclusion coming from comparing the recent trefoil papers is that the results are not robust. With different initial states or numerics, different trends are observed for the evolution of the enstrophy \(Z(t)\) and helicity \(\mathcal{H}(t)\), particularly as reconnection begins and immediately afterward.

To illustrate the differences, figures 1 and 2 compare \(Z(t)\) and \(\mathcal{H}(t)\) for two sets of calculations with the same circulation \(\Gamma=1\) (7) and the same three-fold symmetric trajectories, but representing different initial core profiles: respectively, evolution using a Gaussian/Lamb-Oseen (10) core profile, as recently reported [3], and from a \(p_{r}=1\) algebraic core (9) that has already provided evidence for a dissipation anomaly, that is, finite \(\Delta E_{\epsilon}\) (1) [2]. Another difference in their initialization is the vortex core width.

How do \(Z(t)\) and \(\mathcal{H}(t)\) evolve for these cases? At very early times and for all the profiles, \(Z(t)\) decreases, meaning more enstrophy dissipation than production. This similarity between the two continues only until \(t=0.4\), after which \(Z(t)\) and \(\mathcal{H}(t)\) diverge slowly until the innermost (centerline) vorticity isosurfaces begin to reconnect at a common time of \(t_{r}\approx 4\). Then as \(t\to t_{r}\), the differences become dramatic.

For Lamb-Oseen, after some enstrophy growth at \(t\sim t_{r}\), its enstrophy \(Z(t)\) decreases again while the helicity \(\mathcal{H}\) grows, with thin vortex bridges and braids forming, as previously observed [3] and discussed in section III.3.

In contrast, for the three-fold symmetric trefoils with a \(p_{r}=1\) algebraic profile, while reconnection begins at the same \(t_{r}\), it is not completed until a somewhat later time of \(t_{x}\). Figure 3a defines \(t_{x}\) as the time when there is \(\nu\)-independent convergence of the 'reconnection-enstrophy' \(\sqrt{\nu}Z(t)\), a convergence that has previously been associated with the formation of vortex sheets [2]. Figure 16 in section III.2 goes further, showing that the vortex sheets have \(h<0\).
However, convergence of \(\sqrt{\nu}Z(t)\) is not convergence of the dissipation rates \(\epsilon(t)=\nu Z(t)\). What has been found for algebraic trefoils with perturbations, in far larger domains, is that convergence of \(\epsilon(t)=\nu Z(t)\) in a finite time is possible [2]. Can the algebraic calculations reported here develop finite-time convergence of \(\epsilon(t)=\nu Z(t)\), despite the three-fold symmetry and a tighter domain? They do, with figure 3b providing evidence for weak convergence of the dissipation rates \(\epsilon(t)=\nu Z(t)\) at \(t_{\epsilon}\approx 2t_{x}\). In figure 2 this is accompanied by a modest increase in \(\mathcal{H}(t)\) at the higher Reynolds numbers before \(\mathcal{H}\) decays. This is discussed in section III.4.

To complete the discussion of profiles, a set of calculations using the \(p_{r}=2\) Rosenhead-regularized profile (9) of a point vortex [6] is discussed in section III.5. The mathematics community calls this the Kaufman-Sculley profile and it will be designated as the K-S-R profile here. The shape of the central core is intermediate between the two others, but its overall behavior is closer to that of the \(p_{r}=1\) algebraic profile.

Given these differences in the \(Z(t)\) and \(\mathcal{H}(t)\) evolution, these questions can be asked (tentative answers in parentheses).

* Can the \(t\sim 0\) origins of the divergent behavior be identified? (The Rayleigh inflection-point instability discussed in section II.2.)
* What are the differences in the post-reconnection \(t>t_{x}\) dissipative structures? (Sheets lead to a _dissipation anomaly_; braids and bridges do not.)
* Are there diagnostics for identifying the intervening, divergent \(0<t<t_{r}\) dynamics? (Mapping terms in the enstrophy and helicity budgets onto the vortices' centerlines.)

To reduce the number of possible sources for those differences, all of the new calculations are three-fold symmetric and run in \((2\pi)^{3}\) periodic domains. This ensures that the only differences between each set of trefoils are the choices of their initial vorticity profiles and their widths.

Figure 4 provides an early-time, three-dimensional perspective on the vorticity isosurfaces at \(t=1.2\) for algebraic case r1d015 and Lamb-Oseen Gd05. In terms of the overall structure they are almost identical. Perhaps the only identifiable difference is the different positions of the maximum of vorticity \(\omega_{m}=\|\omega\|_{\infty}\), indicated by \(\mathbf{X}\). For the algebraic case on the left, \(\omega_{m}\) is co-located with the blue triangle, the maximum of helicity \(h_{mx}\). For Lamb-Oseen on the right, \(\omega_{m}\) is at the maroon diamond, a local minimum of the helicity flux (6), \(\min(h_{f})\). However, on the centerlines their respective enstrophy and helicity density budgets are quite different.

The paper is organized as follows. After the introduction of the profile-dependent evolution of the primary global diagnostics, and their early vorticity isosurfaces, the governing and budget equations are given. Next are the steps required to initialize the vortices, including how the raw, unbalanced mapped vorticity fields are made incompressible. Once the initial profiles are defined, recent mathematics for determining their stability is referenced and a new set of diagnostics is defined that maps the terms from the enstrophy and helicity budget equations (5,6) onto the evolving centerline trajectories.
Up to \(t=3.6\), both helicity-mapped vorticity isosurfaces and mapped centerline budgets are used in the comparisons between the evolution of cases Gd05 (Gaussian/Lamb-Oseen) and r1d015 (\(p_{r}=1\), \(r_{o}=0.015\) algebraic). The \(t<t_{r}=4\) differences in the budget terms lead to profound differences in the \(t\gtrsim 4\) dissipative structures and dissipation rates \(\epsilon(t)\). For Lamb-Oseen at and after reconnection there are thin bridges, then braids and decaying dissipation rates, while for all of the algebraic calculations vortex sheets start to form, with \(\sqrt{\nu}Z(t)\) convergence for \(t_{x}\lesssim 1.5t_{r}\) and, for the widest initial algebraic profiles, \(\nu\)-independent dissipation rates \(\epsilon\) that approximately converge at \(t_{\epsilon}\approx 2.5t_{r}\).

Figure 4: Three-dimensional vorticity isosurfaces with mapped helicity at \(t=1.2\) for two of the three-fold symmetric trefoils. (a) From the \(p_{r}=1\), \(r_{o}=0.015\) algebraic (9) calculation (r1d015). (b) Lamb-Oseen profile (10) (Gd05). The primary extrema of interest, the maximum vorticity, the minima and maxima of the helicity, and the maximum velocity, are indicated in both frames, with symbols in the legends. In addition, each frame indicates the three-dimensional positions of the \(s_{f}\), local \(\min(h_{f})\), and their opposing \(s_{o}\) points, the closest points in 3D on their opposite loops. For the algebraic case, the \(s_{d}\) are the local \(\min(\epsilon_{h})\). These are also marked on the \(t=1.2\) centerline budget profiles in figures 9 and 11 and will be used for reference at later times.

Figure 3: For the case and viscosities in figure 2: (a) time dependence of the reconnection-enstrophy \(\sqrt{\nu}Z(t)\), with convergence at \(t_{x}=6\) that is used to define the end of the first reconnection; (b) the dissipation rate \(\epsilon(t)=\nu Z\), whose convergence at \(t\approx 10\) is used to define the dissipation anomaly \(\Delta E_{\epsilon}\) (1).

Figure 2: Time dependence of (a) the enstrophy \(Z(t)\) and (b) the global helicity \(\mathcal{H}(t)\) for algebraic (9) case r1d015, with \(p_{r}=1\), \(r_{o}=0.015\) and \(r_{e}=0.08\), at several viscosities (in legend) with Reynolds numbers [24000 12000 6000 3000].

## II Equations, numerics, initial conditions, centerline maps, stability

The governing equations are the incompressible Navier-Stokes equations: for the velocity

\[\frac{\partial\mathbf{u}}{\partial t}+(\mathbf{u}\cdot\nabla)\mathbf{u}=-\nabla p+\underbrace{\nu\triangle\mathbf{u}}_{\text{viscous drag}},\qquad\nabla\cdot\mathbf{u}=0\,; \tag{2}\]

and the vorticity \(\mathbf{\omega}=\nabla\times\mathbf{u}\)

\[\frac{\partial\mathbf{\omega}}{\partial t}+(\mathbf{u}\cdot\nabla)\mathbf{\omega}=(\mathbf{\omega}\cdot\nabla)\mathbf{u}+\nu\triangle\mathbf{\omega},\qquad\nabla\cdot\mathbf{\omega}=0\,. \tag{3}\]

**Numerics.** All of the calculations are done in \((2\pi)^{3}\) periodic boxes with a 2/3rds-dealiased pseudo-spectral code and a high-wavenumber cutoff filter [7; 8]. These features remove aliasing errors and absorb high-wavenumber fluctuations that would otherwise be reflected (in Fourier space) from the abrupt high-wavenumber cut-off. Extensive tests showed that with these features the calculations do at least as well as a calculation on a mesh that is 1.5 times greater. Some tests, such as doubling the mesh and comparing the maximum vorticities, have been repeated here. Based on this past experience, the evolution of the global helicity and enstrophy shown for all cases can be trusted.
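To make the 2/3-rule concrete, here is a minimal sketch of the dealiasing mask for an \(n^{3}\) pseudo-spectral mesh (an illustration of the standard rule, not the production code):

```python
import numpy as np

def dealias_mask(n):
    """Boolean 2/3-rule mask for an n^3 mesh: Fourier modes with any wavenumber
    component above n/3 are zeroed, removing aliasing errors from the
    quadratic (advection) terms."""
    k = np.fft.fftfreq(n, d=1.0/n)              # integer wavenumbers
    kx, ky, kz = np.meshgrid(k, k, k, indexing='ij')
    kmax = n // 3
    return (np.abs(kx) <= kmax) & (np.abs(ky) <= kmax) & (np.abs(kz) <= kmax)

# Applied in Fourier space after every evaluation of the nonlinear term, e.g.:
#   omega_hat *= dealias_mask(n)
```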
For the more detailed analysis on vortex lines and three-dimensional graphics, the algebraic r1d015 \(\nu=1.6\)e-4 statistics are reliable for all times, but those with \(\nu=8.4\)e-5 are given only to \(t=3.6\). The detailed results for case G1e3d05 \(\nu=8.4\)e-4 can be trusted up to \(t=4.4\), but not for \(t\geq 4.8\). Five initial profiles are discussed, each run for at least three viscosities. A larger number of profiles were done before choosing these five, so in the interest of economy and ease of use, the vorticity graphics for cases other than Gd05 and r1d015 use \(512^{3}\) meshes. Several of the smallest viscosity calculations, and all of the Lamb-Oseen calculations, are from \(1024^{3}\) mesh calculations.

The continuum equations for the densities of the energy, enstrophy and helicity, \(e=\frac{1}{2}|\mathbf{u}|^{2}\), \(\zeta=|\mathbf{\omega}|^{2}\) and \(h=\mathbf{u}\cdot\mathbf{\omega}\), with their production, flux and dissipation rates are:

\[\frac{\partial e}{\partial t}+(\mathbf{u}\cdot\nabla)e=-\nabla\cdot(\mathbf{u}p)+\nu\triangle e-\underbrace{\nu(\nabla\mathbf{u})^{2}}_{\epsilon=\text{dissipation}=\nu Z},\qquad E=\tfrac{1}{2}\int\mathbf{u}^{2}dV\,; \tag{4}\]

\[\frac{\partial\zeta}{\partial t}+(\mathbf{u}\cdot\nabla)|\mathbf{\omega}|^{2}=\underbrace{2\mathbf{\omega}\mathbf{S}\mathbf{\omega}}_{\zeta_{p}=\text{production}}+\nu\triangle|\mathbf{\omega}|^{2}-\underbrace{2\nu(\nabla\mathbf{\omega})^{2}}_{\epsilon_{\omega}=Z-\text{dissipation}},\qquad Z=\int\mathbf{\omega}^{2}dV\,; \tag{5}\]

\[\frac{\partial h}{\partial t}+(\mathbf{u}\cdot\nabla)h=\underbrace{-\mathbf{\omega}\cdot\nabla\Pi}_{h_{f}=\omega-\text{transport}}+\underbrace{\nu\triangle h}_{\nu-\text{transport}}-\underbrace{2\nu\text{tr}(\nabla\mathbf{\omega}\cdot\nabla\mathbf{u}^{T})}_{\epsilon_{h}=\mathcal{H}-\text{dissipation}},\qquad\mathcal{H}=\int\mathbf{u}\cdot\mathbf{\omega}dV\,. \tag{6}\]

\(\Pi=p-\tfrac{1}{2}\mathbf{u}^{2}\neq p_{h}\) is not the pressure head \(p_{h}=p+\tfrac{1}{2}\mathbf{u}^{2}\). While the global energy \(E\) and helicity \(\mathcal{H}\) are inviscid invariants [11], their inviscid Lagrangian local densities \(e\) and \(h\) can change due to the pressure gradient \(-\nabla p\) and the \(\omega\)-transport \(h_{f}\), respectively. Under \(\nu\neq 0\) Navier-Stokes, both the helicity flux \(h_{f}\) and dissipation \(\epsilon_{h}\) can generate local negative helicity \(h\!<\!0\). Note that \(h\) is not locally Galilean invariant due to \(h_{f}\).

**Role for \(\mathbf{h<0}\)?** Can local \(h\!<\!0\) break helicity's constraint upon the nonlinear growth of the enstrophy \(Z\)? Section II.4 shows how this question can be addressed by mapping the budget terms onto the vorticity centerlines.

Another set of short-time inviscid conservation laws are the circulations \(\Gamma_{i}\) for closed loops \(\mathcal{C}_{i}\) about those trajectories:

\[\Gamma_{i}=\oint_{\mathcal{C}_{i}}\mathbf{u}\cdot d\mathbf{r}_{i}\quad\text{where}\quad\mathbf{r}_{i}\ \ \text{is a closed loop about}\ \ \mathcal{C}_{i}\,. \tag{7}\]

With the appropriate choice of the closed loop, \(\Gamma_{i}\) can be preserved during Navier-Stokes reconnection for very short times. Could this constraint have additional consequences?

### Initial conditions

Four elements are used to define an incompressible vortex knot.

1. The \(\mathbf{x}(\phi)\) trajectory of the centerline of the vortex knot (8).
2.
The vorticity profile \(|\omega(\rho)|\), with the distance \(\rho\) defined as the distance between a given mesh point \(\mathbf{x}\) and the nearest point on the trajectory \(\mathbf{x}(\phi)\): \(\rho=|\mathbf{x}-\mathbf{x}(\phi)|\).
 1. The profiles are either algebraic (9), with a chosen power-law \(p_{r}\), or Gaussian/Lamb-Oseen (10).
 2. Each \(|\omega(\rho)|\) has two parameters: a radius \(r_{o}\) and the centerline vorticity \(\omega_{o}\).
 * The final \(\omega_{o}\) are chosen so that the circulation \(\Gamma\equiv 1\) (7) after step 4.
 * In this paper \(\Gamma=1\) and \(r_{f}=1\) are fixed, so the nonlinear timescale for all the calculations is \(t_{NL}=1\) (8).
3. The chosen profile is mapped onto a Cartesian mesh using previous algorithms [1], with the direction of vorticity given by the centerline direction: \(\hat{\omega}(\rho)=\hat{\omega}(\mathbf{x}(\phi))\).
4. Finally, we need to remove the non-solenoidal components of the raw vorticity field by projection. This also makes the velocity field incompressible. Except for the Lamb-Oseen profile, this operation invariably leads to reductions in the values of the maximum vorticity \(\omega_{m}\) and the enstrophy \(Z\).

The initial trajectory \(\mathbf{\xi}_{0}(\phi)=[x(\phi),y(\phi),z(\phi)]\) of all the trefoils in this paper is defined over \(\phi=1:4\pi\) by this closed double loop, with \(r_{f}=1\) and \(r_{1}=0\):

\[\begin{array}{rl}x(\phi)=&r(\phi)\cos(\alpha)\\ y(\phi)=&r(\phi)\sin(\alpha)\qquad z(\phi)=a\cos(\alpha)\\ \mbox{where}\ \ r(\phi)=&r_{f}+r_{1}a\cos(\phi)+a\sin(w\phi+\phi_{0})\\ \mbox{and}\ \ \ \ \alpha=&\phi+a\cos(w\phi+\phi_{0})/(wr_{f})\\ \mbox{with}\ \ t_{NL}=&r_{f}^{2}/\Gamma\mbox{ the nonlinear time-scale,}\\ \mbox{and}\ \ \ \ \ r_{e}=&(\Gamma/(\pi\omega_{m}))^{1/2}\mbox{ the effective radius.}\end{array} \tag{8}\]

The four algebraic Rosenhead-regularized profiles \(\omega_{\mbox{raw}}(\rho)\) are parameterized by a radius \(r_{o}\), maximum/centerline vorticity \(\omega_{o}\) and a power law \(p_{r}\):

\[\omega_{\mbox{raw}}(\rho)=\omega_{o}\frac{(r_{o}^{2})^{p_{r}}}{(\rho^{2}+r_{o}^{2})^{p_{r}}}\,. \tag{9}\]

For a columnar vortex, (14) suggests that the \(p_{r}=2\) K-S-R profile is stable unless there are perturbations with high azimuthal wavenumber \(m\) (13). The 'broader' \(p_{r}=1\) algebraic profile has been used as the second initialization step of several earlier papers [1; 2; 12]. The Gaussian/Lamb-Oseen profile is

\[\omega_{\mbox{raw}}(\rho)=\omega_{o}\exp(-(\rho/r_{o})^{2})\quad\mbox{for}\quad\rho<\rho_{+}\,. \tag{10}\]

This definition of the Lamb-Oseen profile has these advantages: \(\omega_{m}=\omega_{o}\) and the effective radius \(r_{e}=r_{o}\), without the factor of 2 required by the Lamb-Oseen profile in current use [3]. The only difference between that profile and (10) is that the core in figure 5 is \(\sqrt{2}\) wider. This, along with a different definition of the enstrophy \(Z\) (5) (a factor of 2), yields enstrophy and helicity evolution that are (in appearance) nearly identical to theirs [3].

Table 1 gives the details of the 5 initial profiles: the parameters \(r_{o}\) and \(\omega_{o}\) for the profile formulae (9,10) and the generated raw enstrophies \(Z_{o}\); then the divergence-free \(t=0\) values: the effective radii \(r_{e}\) (8), vorticity maxima \(\omega_{m}\) and enstrophies \(Z(0)\). The viscosities are given in the figure legends.
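For concreteness, a minimal Python sketch of initialization steps 1 and 2 follows. The amplitude \(a\), winding \(w=1.5\) and phase \(\phi_{0}\) are illustrative assumptions (the text fixes \(r_{f}=1\) and \(r_{1}=0\)); and since the printed \(z(\phi)=a\cos(\alpha)\) in (8) appears to be an extraction artifact, the standard torus-knot choice \(z(\phi)=a\cos(w\phi+\phi_{0})\) is used here:

```python
import numpy as np

def trefoil_centerline(n=2048, r_f=1.0, r_1=0.0, a=0.3, w=1.5, phi0=0.0):
    """Closed double-loop trefoil centerline xi_0(phi), phi in [0, 4*pi), per Eq. (8)."""
    phi   = np.linspace(0.0, 4.0*np.pi, n, endpoint=False)
    r     = r_f + r_1*a*np.cos(phi) + a*np.sin(w*phi + phi0)
    alpha = phi + a*np.cos(w*phi + phi0) / (w*r_f)
    x, y  = r*np.cos(alpha), r*np.sin(alpha)
    z     = a*np.cos(w*phi + phi0)      # assumed form; see lead-in
    return np.stack([x, y, z], axis=1)  # (n, 3) points

def omega_algebraic(rho, omega_o, r_o, p_r):
    """Algebraic (Rosenhead-regularized) core profile, Eq. (9)."""
    return omega_o * (r_o**2)**p_r / (rho**2 + r_o**2)**p_r

def omega_lamb_oseen(rho, omega_o, r_o, rho_plus=0.75):
    """Gaussian/Lamb-Oseen core profile with the rho_+ cut-off, Eq. (10)."""
    return np.where(rho < rho_plus, omega_o * np.exp(-(rho/r_o)**2), 0.0)

curve = trefoil_centerline()  # centerline points to be mapped onto the mesh (step 3)
```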
An additional, inherent parameter is the maximum radius \(\rho_{+}\) used to map \(\omega_{\mbox{raw}}(\rho)\) onto the Cartesian mesh in step 3. Empirically, the trefoils' evolution is independent of \(\rho_{+}\) so long as the circulation \(\Gamma=1\) and \(\rho_{+}\sim 0.5-1\) (the trefoil radius is \(r_{f}=1\)), with \(\rho_{+}\geq 0.75\) for all cases here except one in the appendix: case r1d015dm025, with \(\rho_{+}=0.025\) and evolution that is similar to Lamb-Oseen.

**Initial profiles.** The specific profiles listed are: Lamb-Oseen (case Gd05), two broad algebraic \(p_{r}=1\) cases (r1d015, r1d006) and two K-S-R \(p_{r}=2\) cases (r2d05, r2d1). Most of the analysis figures are taken from the highest Reynolds number calculations of the Lamb-Oseen (Gd05) and the \(p_{r}=1\), \(r_{o}=0.015\) 'broad' algebraic profile (r1d015). Figure 4 compares their slightly evolved \(t=1.2\) three-dimensional helicity-mapped vorticity isosurfaces. Figure 5 compares the \(t=0\) profiles of \(\omega_{y}(z)\) for three of the profiles in table 1, each taken through the \(\min(\omega_{y})\) positions in their \(y=0\), \(x-z\) planes, as in figure 7.

Figure 5: \(t\)=0, \(\omega_{y}(z)\) profiles through the \(\min(\omega_{y})\) of the \(y=0\) \(x-z\) plane for three of the cases from table 1. All except one curve are taken after the non-solenoidal Fourier components have been removed. The profiles are for the \(r_{o}=0.05\) Lamb-Oseen case (10) (Gd05) and two of the algebraic profiles that use the Rosenhead regularization (9). r2d05: \(p_{r}=2\), \(r_{o}=0.05\), referred to as K-S-R, and r1d015: \(p_{r}=1\), \(r_{o}=0.015\). The other curve is the ‘raw’ \(p_{r}=1\), \(d=0.015\) curve, taken through its pre-Fourier-projected \(\omega_{y}\) field. (a) The primary figure shows the full profiles in \(z\). (b) The lower-left inset focuses upon the \(z>0.1\) wings with small \(\omega_{y}\). Note the slight \(\omega_{y}>0\) overshoot at the boundaries of the Lamb-Oseen profile. This is the likely seed for the oscillations about \(\omega_{y}=0\) in figure 8.

\begin{table}
\begin{tabular}{l c c c c c c c c c c}
Cases & \(p_{r}\) & \(n^{3}\) & \(r_{o}\) & \(\omega_{o}\) & \(Z_{o}\) & \(r_{e}\) & \(\omega_{m}\) & \(Z(0)\) & \(\nu\)’s & t-3D-\(\omega\) \\
Gd05 & \(-\) & \(1024^{3}\) & 0.05 & 130 & 1057 & 0.05 & 130 & 1055 & 5e-4, 1.7e-4, 8.4e-5 & \(t\leq 4.4\) \\
r2d05 & 2 & \(512^{3}\) & 0.05 & 64.3 & 326 & 0.07 & 62 & 306 & 3.3e-4, 1.7e-4, 8.4e-5 & \(t\leq 5.2\) \\
r2d1 & 2 & \(512^{3}\) & 0.1 & 17.85 & 97.1 & 0.14 & 17.3 & 96.5 & 3.3e-4, 1.7e-4, 8.4e-5 & all times \\
r1d006 & 1 & \(1024^{3}\) & 0.006 & 554 & 333 & 0.053 & 138 & 229 & 1.7e-4, 8.4e-5 & only \(Z\), \(\mathcal{H}\) \\
r1d015 & 1 & \(1024^{3}\) & 0.015 & 100 & 138 & 0.078 & 56 & 124 & 1.7e-4 & \(t\leq 6\) \\
r1d015 & 1 & \(512^{3}\) & 0.015 & 100 & 138 & 0.078 & 56 & 124 & 3.3e-4, 4.2e-5 & only \(Z\), \(\mathcal{H}\) \\
r1d015 & 1 & \(1024^{3}\) & 0.015 & 100 & 138 & 0.078 & 56 & 124 & 8.4e-5 & \(t\leq 3.6\) \\
r1d015dm025 & 1 & \(512^{3}\) & 0.015 & 182 & 362 & 0.056 & 102 & 325 & 8.4e-5 & \(t\leq 10\) \\
\end{tabular}
\end{table}

Table 1: Raw core radius \(r_{o}\) and vorticity \(\omega_{o}\) parameters, the resulting enstrophy \(Z_{o}\), then the effective radii \(r_{e}\) (8), maximum vorticity \(\omega_{m}\) and enstrophy \(Z\) after the fields are made divergence-free. The t-3D-\(\omega\) column is the last time for which detailed three-dimensional graphics were made for those cases.
The global enstrophy \(Z(t)\) and helicity \(\mathcal{H}(t)\) are reliable for all cases listed. The only Lamb-Oseen case is labeled Gd05 and the algebraic cases are labeled by the power-law \(p_{r}\) as in r1d015: (r1\(\equiv p_{r}\)=1) and raw core radii (d015=\(r_{o}=0.015\)). Last is r1d015dm025: (r1\(\equiv p_{r}\)=1) with radii (d015=\(r_{o}=0.015\)) and a \(\rho_{+}=0.025\) cut-off.

Both the main figure\({}^{\dagger}\) and the ‘wings’ inset show that all of the \(t=0\) algebraic profiles have smooth extended wings that never overshoot the \(\omega_{y}=0\) axis. In contrast, on the outer edge of the Lamb-Oseen profile there is some overshoot, consistent with what has been seen before when Gaussian-like profiles are used for anti-parallel reconnection [7; 9].

The source of the Lamb-Oseen overshoot is the combined effect of the steepness of the outer edge of the L-O profile and a limitation of the algorithm (here and in [3]) that is used to map the \(\omega_{\rm raw}(\rho)\) field onto the Cartesian mesh in step 3. The mapping problem arises when the directions \(\hat{\omega}\) of neighboring mesh points come from different positions on the centerline, which is common when the distance \(\rho\) from the centerline is large. The steepness problem arises when finite \(|\omega|\) points are next to points with \(|\omega|\approx 0\); the mapped field sees these as finite jumps. Combined, in step 4 the projection of the mapped field can generate overshoots to negative values on the profile's edge, overshoots whose magnitude is a function of the curvature of the centerline and the steepness of \(|\omega|(\rho)\) near its outer edge, \(\rho\sim\rho_{+}\).

It has been claimed that a curved coordinate system that accommodates internal twist [13] can yield divergence-free fields. That is the trajectory (8) used here, with zero internal twist; and because the vortices are thinner than in my earlier papers, the trajectory source points \(\mathbf{x}(\phi)\) are adjacent for neighbouring mesh points, so the raw vorticity fields are divergence-free. However, these are not the \(t=0\) initial fields of the simulations. This is because the profiles have sharp cut-offs at \(\rho=\rho_{+}\), and when imported into a Fourier code those interfaces generate Gibbs fluctuations, leaving the investigator two choices: either remove those fluctuations with a Fourier filter, or continue with that background noise. Figure 1 quantifies that noise.

To demonstrate the importance of excessive steepness, one can decrease the maximum radius \(\rho_{+}\) on an otherwise smooth profile. In section A a \(\rho_{+}=0.025\) variant of the \(p_{r}=1\), \(r_{o}=0.015\) case is given whose \(Z(t)\) and \(\mathcal{H}(t)\) evolution has similarities with that of Lamb-Oseen in figure 1. Further implications of this could be the topic of another paper.

Figure 6: To show how stability is determined using the \(t\)=0 Richardson functions \(J(\rho)\) (12) for the Lamb-Oseen (10) and K-S-R (9) profiles with \(r_{o}=1\). (a) First, their \(\Omega(\rho)\) and \(\Omega^{\prime}(\rho)\) profiles are similar. (b) What is important is how \(J(\rho)\) asymptotes as \(\rho\to\infty\). For Lamb-Oseen, \(J(\rho)\to 0\) from (15), suggesting instability. For K-S-R, it is almost always stable by (14), as \(J(\rho)\to r_{o}^{2}\), which is finite.

Figure 8: \(\omega_{y}\) at \(t=1.2\) on the \(y=0\), \(x\!-\!z\) plane from the Lamb-Oseen profile (10) Gd05 calculation. (a) Contour plot with local \(\min(\omega_{y})\) indicated. A few \(|\omega_{y}|\sim 0.001\) contours are included.
(b) \(\omega_{y}(z)\) profiles through those minima at \(x=1.52\) and \(x=0.65\). The positive overshoots of \(\omega_{y}(z)\) show the magnitude of the \(|\omega_{y}|\sim 0\) contours on the left. \(\dagger\) Note that the \(y=0\), \(x-z\) plane negative \(\omega_{y}\) extrema are not at the positions of the global \(\max(|\omega|)\) for these fields. Figure 7: \(\omega_{y}\) at \(t=2.4\) on the \(y=0\), \(x\!-\!z\) plane from algebraic case r1d015 with \(p_{r}=1\) and \(r_{o}=0.015\) (9). (a) Contour plot with local \(\min(\omega_{y})\) indicated. \(|\omega_{y}|\sim 0\) contours do not appear. (b) \(\omega_{y}(z)\) profiles through those minima at \(x=1.58\) and \(x=0.81\). First full \(\omega_{y}(z)\), then focus on small \(\omega_{y}\). Contours and profiles at \(t=1.2\) are similar. Figure 9: \(t=1.2\) center-line budget profiles for algebraic case r1d015, \(p_{r}=1\) with \(r_{o}=0.015\), \(\nu=1.6\)e-4 of \(h\), \(\epsilon_{h}\), \(|\omega|\), \(h_{f}\), \(\epsilon_{\zeta}\) and \(\zeta_{p}\). (a) \(h\) and \(\epsilon_{h}\) (6). (b) \(|\omega|\). (c) \(h_{f}\). (d) Production \(\zeta_{p}\) and dissipation of \(\epsilon_{\zeta}\) of the enstrophy (5). Each frame has three vertical maroon lines at the \(s_{f}\) positions of the local \(\min(h_{f})\). Frame (a) has two additional sets: \(s_{d}\) positions of the local \(\min(\epsilon_{h})\); \(s_{o}\) positions that oppose the \(s_{f}\). All of the algebraic \(0.4<t\lesssim 2.4\) budget profiles are similar to these. Figure 11: \(t=1.2\)\(r_{o}=0.05\) Lamb-Oseen budget profiles. These are very different than the \(t=1.2\) algebraic budget profiles in figure 9. In (a) there are six positions with strong negative helicity dissipation, local \(\min(\epsilon_{h})\) and local \(\min(h)\). The positions are separated into two sets of three. The \(s_{f}\) in maroon are at the strongest \(\min(\epsilon_{h})\), adjacent to the local \(\min(h_{f})\) (\(h_{f}\) panel is not shown). The \(s_{o}\) in turquoise are the points that oppose the \(s_{f}\) in 3D figure 4. In (b), all six positions are at very large positive gradients of \(\zeta_{p}\) between local \(\min(\zeta_{p})\) and \(\max(\zeta_{p})\). Strong local \(\min(\zeta_{p})\) means strong local centerline compression. The \(s_{f}\) are also at \(\max(\epsilon_{\zeta})\) positions, maxima of the enstrophy dissipation. Figure 12: \(t=2.4\)\(r_{o}=0.05\) Lamb-Oseen centerline budget profiles. (a) \(h(s)\), \(\epsilon_{h}(s)\), \(s_{f}\) (maroon) for local \(\min(h_{f})\) and the \(s_{f}\)’s opposing \(s_{o}\) (turquoise) are marked. The \(\epsilon_{h}(s)\) profiles are three-fold symmetric again and more like the algebraic profiles at \(t=1.2\) and \(t=2.4\) and Lamb-Oseen at \(t=0.4\). (b) However, there are still six positions of local \(\min(\zeta_{p})<0\) compression: The three \(s_{f}\) and three \(s_{o}\). Having this many local compression locations is why the post-reconnection Lamb-Oseen vortex structures in section III.3 are braids, not the sheets generated by the algebraic profiles. Figure 14: Vorticity centerline budget profiles at \(t=2.4\) of \(h\), \(\epsilon_{h}\), \(|\omega|\), \(h_{f}\), \(\epsilon_{\zeta}\) and \(\zeta_{p}\), case r1d015. Added to each panel are three sets of three vertical lines. Maroon lines at the local min(\(h_{f}\)). Yellow for local min(\(\epsilon_{h}\)) and turquoise for the \(s_{o}\), the points opposing the \(s_{f}\). 
The \(s_{f}\) points are on one side of each reconnection, with the \(s_{d}-s_{o}\) zones representing the other side of those reconnections.

Figure 13: A \(t=2.4\) mapped-helicity \(\omega\)-isosurface from the r1d015 \(p_{r}=1\), \(r_{o}=0.015\) algebraic (9) calculation at the beginning of the initial phase of reconnection. Symbols (from the legend) show the three-dimensional positions of the basic \(u\), \(\omega\) and \(h\) extrema as well as extrema from the enstrophy and helicity budget equations (5,6). Plus, from their centerline positions in figure 14, the \(s_{f}\) (maroon) positions of local \(\min(h_{f})\), the \(s_{o}\) (turquoise) positions that oppose the \(s_{f}\) and the \(s_{d}\) (yellow) positions of the local \(\min(\epsilon_{h})\). Each is in sets of three associated with the 1st, 2nd and 3rd local centerline \(\min(h_{f})\) positions. There is a cluster of \(\omega_{m}\) (**X**), \(\max(\epsilon_{\zeta})\) and \(s_{f}(h_{f}^{2})=5.9\) on the left. Another cluster is next to \(u_{m}\) with \(\min(h_{f})\), \(\min(\zeta_{p})\) and \(s_{f}(h_{f}^{2})=11.7\). And one at the bottom with \(\min(h)\) and \(\max(\zeta_{p})\) with \(s_{d}(h_{f}^{2})=2.3\) and \(s_{o}=3.2\), both \(\mathcal{O}\)’s. The \(s_{d}\) and \(s_{o}\) with the same symbols are approaching one another on the same centerline spans of the trefoil. The best diagnostic for the Biot-Savart evolution of the vortex centerlines over this period is the separation of the three color-coded \(\mathcal{O}\)’s on the left from \(t=1.2\) to \(2.4\) and then \(3.6\).

### Rayleigh stability criterion

The stability of different core profiles \(\omega(\rho)\) can be determined using the \(J(\rho)\) (12) stability functions. The \(J(\rho)\) are a type of Richardson number, derived for columnar vortices [14] by extending an earlier result for shears on boundary layers. Recent analysis [15] that determines and uses the \(J(\rho)\) begins with the azimuthal profiles of the velocity \(u(\rho)\), vorticity \(\omega(\rho)\) and the pressure \(p\):

\[u=V(\rho)e_{\theta},\quad\omega=W(\rho)e_{z},\quad p=P(\rho)\,. \tag{11}\]

\(P\) is determined up to an additive constant by centrifugal balance \(\rho P^{\prime}(\rho)=V^{2}(\rho)\). Then, by introducing the angular velocity \(\Omega(\rho)=V(\rho)/\rho\) and \(\Phi(\rho)=2\Omega(\rho)W(\rho)\), one can define these \(\mathcal{C}^{\infty}\) and \(\mathcal{C}^{1}\) functions:

\[\Phi(\rho)=2\Omega(\rho)\omega(\rho)\quad\text{and}\quad J(\rho)=\frac{\Phi(\rho)}{\Omega^{\prime}(\rho)^{2}},\quad\rho>0\,. \tag{12}\]

Next, consider a small, but not tiny, perturbation of one Fourier mode:

\[\mathbf{u}(\rho,\theta,z,t)=u_{m,k}(\rho,t)e^{im\theta}e^{ikz},\quad\mathbf{\omega}(\rho,\theta,z,t)=\omega_{m,k}(\rho,t)e^{im\theta}e^{ikz}\,, \tag{13}\]

for which stability is determined by

\[\frac{k^{2}}{m^{2}}J(\rho)\geq\tfrac{1}{4}\quad\text{for all $\rho>0$}\,. \tag{14}\]

Figure 6 shows \(J(\rho)\), and how it is determined, for the Lamb-Oseen (10) and \(p_{r}=2\) algebraic (9) profiles for the same \(\omega_{o}=1\) and \(r_{o}=1\). What is important is their different \(\rho\to\infty\) behavior. For the Lamb-Oseen profile

\[J_{G}(\rho)\to\frac{\rho^{4}}{r_{o}^{2}}e^{-(\rho/r_{o})^{2}}\to 0\,, \tag{15}\]

implying that the inequality (14) is always violated as \(\rho\to\infty\). Whereas for the K-S-R \(p_{r}=2\) algebraic profile,

\[\frac{k^{2}}{m^{2}}J(\rho)\to\frac{(k^{2}r_{o}^{2})}{m^{2}}\quad\text{as}\quad\rho\to\infty\,. \tag{16}\]

This says that unless \(m\) is large for \(kr_{o}\sim 1\), that is, unless the azimuthal wavelength is small, then \((k^{2}/m^{2})J(\rho)\geq\tfrac{1}{4}\) can be satisfied for all \(\rho\).
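As a quick numerical check of this criterion, the sketch below (an illustration, not the paper's code) builds \(\Omega(\rho)\) from an axisymmetric \(\omega(\rho)\) and evaluates \(J(\rho)\) per (12) for the two profiles; the large-\(\rho\) tails reproduce the limits (15) and (16):

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def J_profile(rho, omega):
    """J = Phi / Omega'^2 with Phi = 2*Omega*omega, Eq. (12), computed numerically.
    Omega = V/rho with V(rho) = (1/rho) * int_0^rho omega(s) s ds (axisymmetric).
    The first few points near the axis are inaccurate; only the tail is used."""
    circ   = cumulative_trapezoid(omega * rho, rho, initial=0.0)
    Omega  = circ / rho**2
    dOmega = np.gradient(Omega, rho)
    return 2.0 * Omega * omega / dOmega**2

r_o, omega_o = 1.0, 1.0
rho  = np.linspace(1e-3, 12.0, 6000)
J_G  = J_profile(rho, omega_o * np.exp(-(rho/r_o)**2))            # Lamb-Oseen, Eq. (10)
J_KS = J_profile(rho, omega_o * r_o**4 / (rho**2 + r_o**2)**2)    # K-S-R, Eq. (9), p_r = 2
print(J_G[-1], J_KS[-1])  # ~0, so (14) is violated; vs ~r_o**2, finite, per (16)
```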
An example of such a small perturbation is the slight overshoot of the Lamb-Oseen profile in the inset of figure 5, probably generated by the solenoidal projection in initialization step 4 of section II.1.

Can the respective algebraic and Lamb-Oseen \(J(\rho)\) stability curves in figure 6 foretell whether their evolution diverges at early times? The first test, in figures 7 (r1d015, \(t=2.4\)) and 8 (Gd05, \(t=1.2\)), considers vertical profiles of \(\omega_{y}\) taken through \(y=0\), \(x-z\) slices. For K-S-R, \(J(\rho)\to r_{o}^{2}>0\), so stability is expected unless \(m\) is large, as demonstrated by the \(\omega_{y}\) contours in figure 7. For Lamb-Oseen, \(J(\rho)\to 0\) (\(<\tfrac{1}{4}\)), and because there is a small perturbation present, instability is possible, as demonstrated by the irregular \(\omega_{y}\sim 0\) contours in figure 8. What is less clear for Lamb-Oseen is how tiny the perturbations must be to create instability [15], as discussed in section IV.2.

### Effect of being stable or unstable

Do the stability differences indicated by figures 6, 7 and 8 yield differences in the subsequent evolution of the Lamb-Oseen and algebraic cases? One difference between the respective \(x-z\) slices (figures 7 and 8) is that the algebraic contours in figure 7 do not generate oppositely-signed contours. In contrast, Lamb-Oseen in figure 8 does, as shown by the \(|\omega_{y}|\sim 0\) contours and the \(\omega_{y}(z)\) slice on the right. These fluctuations of oppositely-signed \(\omega_{y}\) are a source of local interactions that could be the origin of the \(t=1.2\) differences between the algebraic and Lamb-Oseen centerline budget profiles in figures 9 and 11, respectively. This is discussed further in section III.

### Mapping budget terms onto centerline vortices

While single-color helicity isosurfaces [1] suggested that helicity has a role in reconnection, the mapped \(h\)-vorticity isosurfaces used by two 2021 trefoil papers [3; 4] are a better tool. In particular, small values of localized oppositely-signed helicity \(h<0\) indicated where reconnection was forming. There are similar yellow-to-red \(h<0\) patches at \(t=1.2\) in figure 4, for both the algebraic and Lamb-Oseen cases, and for all cases, up to \(t=3.6\), there are similar \(h<0\) patches on their inner, higher-\(\omega\) isosurfaces.

However, are the observed \(t\leq 3.6\) differences sufficient for identifying the origins of the post-reconnection differences in the evolution of the algebraic and Lamb-Oseen calculations? Given how small those \(t\leq 3.6\) inner isosurface differences are, they are not. Why are the surface helicities of the different cases qualitatively similar? Likely because, before reconnection begins, similar long-range Biot-Savart terms dominate the surface helicity dynamics for all cases. Therefore, what is needed are new diagnostics related to what is within the isosurfaces to explain the major differences in the \(t>3.6\) enstrophy and helicity evolution in figures 1 and 2; that is, another set of pre-reconnection diagnostics is required.

Because these are questions about the evolution of the local helicity \(h(\mathbf{x},t)\), which is controlled by its budget equation (6), one alternative set of diagnostics is to instead map the primary terms from the enstrophy and helicity density budget equations (5,6) onto the isosurfaces.
The variations of these terms upon the isosurfaces are very small, so they are not useful for analysing the dynamics by themselves. However, this exercise indicated that the local variations are strongest near the centerlines, suggesting that a better way to visualize the budget terms would be to map them onto the vorticity centerlines directly, if the centerlines can be identified. If successful, this would provide us with an analysis tool that is both local (at a point) and global (between distant points on the centerline). To identify centerlines one must first choose appropriate seed points \(\mathbf{x}_{\omega}(0)\) within a vorticity isosurface, then trace the vortex lines emanating from those points using a streamline function, giving trajectories \(\mathbf{x}_{\omega}\in\mathcal{C}\) obeying

\[\mathbf{\xi}_{\omega}(s)=\frac{d\mathbf{x}_{\omega}(s)}{ds}=\mathbf{\omega}(\mathbf{x}_{\omega}(s))\,,\ \ \text{whose lengths are}\ \ L_{\omega}=\oint|\mathbf{\xi}_{\omega}(s)|\,ds\,. \tag{17}\]

In [2; 12] the position of the maximum vorticity was used as the seed. With more experience, it has been found that seeding at either the maximum or minimum of helicity, then using the \(-\mathbf{\omega}(\mathbf{x})\) direction in (17), yields trajectories that stay within the observed isosurfaces. This is the practice in this paper. In all cases, the trajectories do not close upon themselves perfectly, which is only relevant for determining the topological numbers, twist, helicity and self-linking as in earlier work [2; 12]; that is not an objective of this paper. Once the trajectories have been defined, the profiles of important dynamical terms are mapped onto those curves to determine how those properties are related to one another. Note that because these vortex lines are almost closed upon themselves, initially the integral of the stretching \(u_{s,s}=d\mathbf{u}/ds\cdot\hat{\omega}\) on the \(\omega\)-line is identically zero:

\[\oint_{0}^{L_{\omega}}u_{s,s}ds=u(L_{\omega})-u(0)\equiv 0\,. \tag{18}\]

Due to this, any stretching along this line at \(t=0\) is balanced by equal compression somewhere else. For these vortices, that compression also immediately yields an increase in the local enstrophy dissipation and negative helicity dissipation rates, \(\epsilon_{\zeta}\) and \(-\epsilon_{h}\), as well as a very early decrease in the enstrophy and increase in the helicity: \(dZ/dt|_{t=0}<0\) and \(d\mathcal{H}/dt|_{t=0}>0\), as seen in figures 1 (Lamb-Oseen) and 2 (algebraic), and more so for the larger \(\nu\) Lamb-Oseen calculations than the others.

### Using these tools as time progresses.

The six terms from the enstrophy and helicity budget equations that are mapped onto the centerlines are arranged into four panels:
1. The helicity density \(h\) (cyan) and its dissipation rate \(\epsilon_{h}\) (yellow).
2. The vorticity magnitude \(|\omega|=\sqrt{\zeta}\) (black).
3. Helicity flux \(h_{f}\) (maroon), which includes a pressure gradient.
4. Enstrophy density dissipation \(\epsilon_{\zeta}\) (red) and production \(\zeta_{p}\) (lime).

All four panels appear in figures 9, 10, 14 and 18. For figures 11 (Gd05, \(t=1.2\)), 12 (Gd05, \(t=2.4\)) and 15 (r1d015, \(t=3.6\)), some panels are not shown. In particular, panel b) with \(|\omega|\) is not shown because its \(s\)-profile closely follows that for the helicity \(h\). Figures with all, or most, of these six mapped terms are teamed with relevant three-dimensional helicity-mapped vorticity isosurfaces.
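To make the tracing and mapping steps concrete, here is a minimal sketch of (17) applied to a gridded vorticity field, followed by sampling a budget term along the resulting curve. The array layout, the \((2\pi)^{3}\) periodic wrap and the integrator settings are assumptions of this sketch, not the tracer actually used for the figures.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.interpolate import RegularGridInterpolator

def trace_centerline(axes, omega, seed, s_max=60.0, sign=-1.0):
    """Integrate dx/ds = sign * omega(x(s)) (17) from a seed point.
    axes: (x, y, z) 1D grid axes covering the (2*pi)^3 periodic box;
    omega: vorticity field, shape (nx, ny, nz, 3). sign=-1 follows -omega,
    the seeding practice described above."""
    comps = [RegularGridInterpolator(axes, omega[..., c],
             bounds_error=False, fill_value=None) for c in range(3)]
    def rhs(s, x):
        xp = np.mod(x, 2.0 * np.pi)   # periodic wrap (approximate at the box edge)
        return sign * np.array([f(xp)[0] for f in comps])
    sol = solve_ivp(rhs, (0.0, s_max), np.asarray(seed, dtype=float),
                    max_step=0.05, rtol=1e-6)
    return sol.t, sol.y.T             # parameter s and points x_omega(s)

def map_onto_centerline(axes, field, points):
    """Sample a scalar budget term (h, eps_h, zeta_p, ...) at centerline points."""
    f = RegularGridInterpolator(axes, field, bounds_error=False, fill_value=None)
    return f(np.mod(points, 2.0 * np.pi))
```

A useful sanity check on the returned points is the near-closure of the trajectory and of the stretching integral (18), \(\oint u_{s,s}\,ds\approx 0\).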
The following markers indicate the locations of the primary extrema in three-dimensional space: \(\omega_{m}=\max|\omega|\) (**X**), \(u_{m}=\max|u|\) (green \(+\)), \(\max(h)\), and \(\min(h)\) (red \(\blacktriangledown\)), plus the budget-equation extrema given in the figure legends.

## III Results

The comparisons between helicity-mapped vorticity isosurfaces and the mapped centerline budget terms are presented chronologically:
* Early times for algebraic and Lamb-Oseen (\(t=0.4,1.2\)).
* Algebraic mid-reconnection \(t=2.4\) and pre-reconnection \(t=3.6\), with the first appearance of extended \(h<0\) vortex sheets.
* After \(t=3.6\), the algebraic and Lamb-Oseen vortical structures and global evolution of \(Z(t)\) and \(\mathcal{H}(t)\) diverge, as shown by figures 1 and 2.
* \(t\geq 3.6\) Lamb-Oseen Gd05: in figure 20, reconnection with vorticity bridges, localized sheets, then \(t=4.4\) braids.
* \(t\geq 4.8\) algebraic reconnection with broad \(h<0\) \(\omega\)-sheets leading to wrapping and accelerated enstrophy growth.
* Finally, a short discussion of the K-S-R \(p_{r}=2\) r2d05 case (section III.5).

Figure 15: Vorticity centerline profiles and an isosurface plot at \(t=3.6\) for case r1d015. Budget profiles: \(h\), \(\epsilon_{h}\), \(h_{f}\), \(\epsilon_{\zeta}\) and \(\zeta_{p}\), with added vertical dashed lines in each panel for these local positions: \(s_{f}\) (maroon, \(\min(h_{f})\)); \(s_{d}\) (yellow, \(\min(\epsilon_{h})\)); and, in the upper-left panel, \(s_{g}\) (green) for the \(s_{d}\) opposing points. The \(s_{f}\) are also at \(\min(\zeta_{p})\) and are at two of the \(\max(\epsilon_{\zeta})\) positions, local enstrophy dissipation peaks. The \(s_{d}\) are also at the local minima of the helicity \(\min(h)<0\), at cross-overs between secondary local \(\min(\zeta_{p})\) to \(\max(\zeta_{p})\) and at two of the local \(\max(\epsilon_{\zeta})\) positions, and are co-located with the opposing positions to the \(s_{f}\). The \(s_{g}\) oppose the \(s_{d}\) and nearly coincide with the \(s_{f}\). Where might reconnection form? The positioning of the \(s_{f}\) and \(s_{d}\), plus their opposing points, suggests that reconnection would form between the \(s_{f}\) and \(s_{d}\). Consequences: Local \(\zeta_{p}<0\) means that \(du_{s}/ds<0\), and due to incompressibility this implies the existence of stretching perpendicular to the vorticity at these points: the stretching needed to create the \(h<0\) vortex sheets. The upper-right panel uses a larger vorticity (\(\omega=0.2\omega_{m}\)) isosurface than in figure 16 to show continuity with the earlier inner isosurface evolution. The labels for the auxiliary symbols are in figure 16.

### Early times (\(t=0.4,1.2\)): profile-dependent evolution and differences.

To begin, recall that for the \(t=1.2\) isosurfaces in figure 4 (cases r1d015, Gd05), the only clear difference between the frames is the position of the vorticity maximum \(\omega_{m}\). Can the centerline budget maps identify any greater differences at early times? First the similarities at very early times are given, then the differences. The centerline maps for the corresponding earliest times in figures 9 (\(t=1.2\), algebraic) and 10 (\(t=0.4\), Lamb-Oseen) are similar. While the strongest local \(\max(h)\) and local \(\max(|\omega|)\) are near to one another, other local extrema are associated with local \(\min(h_{f})\), the vortical helicity flux, indicated by dashed maroon lines at local \(s_{f}\). Positions of local helicity dissipation minima (\(\min(\epsilon_{h})<0\)) are near the \(s_{f}\), and the positions of local compression, \(\min(\zeta_{p})<0\), are on the \(s_{f}\).
This suggests that the dominant dynamics at these points is local compression with pinching on the vortices. However, starting at \(t=1.2\), the centerline dynamics of the two profiles diverge.
* For **algebraic case r1d015**, the alignments in figure 9 persist from \(t=0.4\) until the reconnection time of \(t_{r}\sim 4\) is approached.
* However, for **Lamb-Oseen at \(t=1.2\)** the corresponding budgets in figure 11 are very different, showing six locations with roughly equivalent variations of the positive and negative helicity dissipation \(\epsilon_{h}\) at six significant local \(\min(h_{f})\) positions, split into two sets of three: maroon \(s_{f}\) and turquoise \(s_{o}\).

In figure 11a the \(s_{f}\) positions at local \(\min(h_{f})<0\) (not shown) are also at the largest dips of \(h\sim 0\) and the strongest local \(\min(\epsilon_{h})\). In (b), the \(s_{f}\) are not exactly on local \(\min(\zeta_{p})\), but on the adjacent large positive gradients and local enstrophy dissipation peaks: \(\max(\epsilon_{\zeta})\). These \(s_{f}\) can be viewed as one side of the developing reconnection sites. The turquoise \(s_{o}\) positions that oppose the \(s_{f}\) positions in figure 4 are the other side of the developing reconnections. They are also secondary local \(\min(\epsilon_{h})\), secondary local dips in \(h\), and near secondary local \(\min(\zeta_{p})\). This means that all six positions (the \(s_{f}\) and \(s_{o}\)) are sitting at or near local compressive \(\min(\zeta_{p})<0\).

Having multiple points of local compression at an early time has a significant effect upon the enstrophy growth (or decay). At \(t=1.2\) and \(2.4\), the localized pinching enhances the localized dissipation of both helicity \(\epsilon_{h}\) and enstrophy \(\epsilon_{\zeta}\), which also suppresses the \(\zeta_{p}\) terms needed to enhance enstrophy growth, before that growth has even begun. A likely source of this localization of the dynamics is the interactions between the primary vorticity and the oppositely-signed flotsam seen in figure 8. That is, the origin of this localized dynamics is the amplification of that noise by instability, as previously suggested [9] and discussed here in section II.2.

The \(t=2.4\) Lamb-Oseen centerline budget profiles in figure 12 show some return to normal. They have similarities with the \(t=0.4\) Lamb-Oseen profiles in figure 10 and the pre-reconnection algebraic profiles for \(t\leq 3.6\). While there are only three local \(\min(\epsilon_{h})\) and \(\min(h_{f})\), in the right frame there still is strong compression with local \(\min(\zeta_{p})<0\) at all six of the former (\(t=1.2\)) \(\min(h_{f})\) positions: the three current (\(t=2.4\)) \(s_{f}\) positions and their three \(s_{o}\) opposing positions. In addition, the magnitudes of the enstrophy production \(\zeta_{p}\) and dissipation \(\epsilon_{\zeta}\) terms are tempered, being a factor of 5 less than at \(t=1.2\). This localized dynamics is only temporarily stronger than the long-range Biot-Savart interactions: once that dynamics dissipates, the Biot-Savart interactions again control the large scales and the evolution of the centerline trajectory. However, the dynamics along the centerlines is permanently affected. When reconnection bridges do form, with some enstrophy growth, it is entirely concentrated at the locations in figure 11, not over the entire trefoil.
There is rapid post-reconnection dissipation of the vorticity in the bridges, leading to divergent evolution of the enstrophy \(Z(t)\) and the helicity \(\mathcal{H}(t)\); this is explained further in section III.3.

Figure 16: A \(t=3.6\) mapped-helicity \(\omega\)-isosurface for case r1d015 with a color-coded centerline from three perspectives. Symbols show the three-dimensional positions of the basic \(u\), \(\omega\) and \(h\) extrema as well as extrema from the enstrophy and helicity budget equations (5,6), plus the \(s_{f}\) (maroon) positions of local min(\(h_{f}\)) and the \(s_{d}\) (yellow) positions of the local min(\(\epsilon_{h}\)), which also oppose the \(s_{f}\) (the \(s_{o}\) in figure 15). (a) is a plan view perspective with faint \(h\lesssim 0\) yellow sheets extending out from the lower reddish ring, followed by two side views of the same: (b) shows the entire domain; (c) shows only \(z<-0.8\) with the lower emerging ring, below the **X** position of \(\omega_{m}\) at \((x,y,z)\)=(-1.37, -0.25, -0.39). The centerline vortex has mapped helicity ranging from red (\(h=-13\)) to blue (\(h=26\)). By using a small \(\omega\sim 1.4\sim 0.03\omega_{m}\) vorticity isosurface, a gradation can be seen in the lower \(h<0\) zone from a red \(h\sim-0.4\) inward-facing half to the yellow-green \(h\lesssim 0\) outward half. This is the first step in the formation of the yellow negative helicity \(h\lesssim 0\) vortex sheets at later times. It is rotated to the right to give some 3D perspective of the yellow lobes on the right and above.

### Mid-reconnection \(t=2.4\), 3.6, with the algebraic case spawning sheets.

In the \(t\leq 3.6\) period before reconnection begins, there are few differences between the inner, larger \(\omega\) isosurfaces of cases r1d015 and Gd05. However, there are significant differences between their pre-reconnection budget profiles. Significant enough that for this mid-reconnection phase, the evolution of algebraic case r1d015 and that of Lamb-Oseen case Gd05 are considered separately: the algebraic case in this section and Lamb-Oseen in section III.3.

To follow the evolution of the r1d015 isosurfaces and budgets between \(t=1.2\), 2.4 and 3.6, three sets of three-fold positions are indicated on each: the \(s_{f}\) at local \(\min(h_{f})\); the \(s_{d}\) at local \(\min(\epsilon_{h})\); and points opposing either the \(s_{f}\) (the \(s_{o}\)) or the \(s_{d}\) (the \(s_{g}\)). These are in addition to the usual extrema: \(\max|u|\), \(\max|\omega|\), \(\max(h)\), \(\min(h)\), \(\min(\epsilon_{h})\), \(\min(h_{f})\), \(\max(\epsilon_{\zeta})\) and the \(\min\) and \(\max(\zeta_{p})\). Once defined, the \(s_{f}\), \(s_{d}\) and \(s_{o}/s_{g}\) can be used to follow the evolution of the isosurfaces and budget profiles of the r1d015 calculation at \(t=1.2\), 2.4 and 3.6 as follows (a sketch of how the opposing points can be located numerically is given after this list):
* At the points of closest approach, the \(s_{f}\) and \(s_{o}\), the isosurfaces are drawn together over time.
* At the same time, the \(s_{d}\) and \(s_{o}\) approach one another along the centerline until they coincide at \(t=3.6\).
* These locations can help identify where there are spans of \(\epsilon_{h}<0\) and \(h<0\) along the centerline. So at \(t=2.4\) and 3.6, besides the local \(\min(\epsilon_{h})<0\) at the \(s_{d}\), there are also growing, smaller peaks of \(\epsilon_{h}<0\) next to the \(s_{f}\) and, between the \(s_{f}+s_{d}\) pairs, growing \(s\)-spans of \(\epsilon_{h}\lesssim 0\), on both the isosurfaces and the centerlines, as in figures 13, 14a and 15a,b, with some \(h\lesssim 0\) at the \(s_{d}\).
* At \(t=3.6\) the \(s_{o}\) are co-located with the \(s_{d}\), with the \(s_{g}\) nearly co-located with the \(s_{f}\), as shown in figure 15a. The spans of \(\epsilon_{h}<0\) and \(h\lesssim 0\) from \(t=2.4\) are now concentrated at the \(s_{d}\) points, with \(\epsilon_{h}<0\) and \(h<0\) being particularly deep at those points. There is also local \(\epsilon_{h}<0\) at the \(s_{f}\), with \(\epsilon_{h}\approx 0\) between the \(s_{f}\) and the next \(s_{d}\).
* For example, \(\epsilon_{h}\approx 0\) between \(s_{f}\)=6.3 and \(s_{d}\)=9.2. Another \(\epsilon_{h}\approx 0\) span that started at \(t=2.4\) with \(20\epsilon_{h}<-5\) at \(s_{f}\)=0.4 to \(s_{d}\)=2.3 goes at \(t=3.6\) to \(s_{d}\)=3.3.
* These small patches of \(h<0\) and \(\epsilon_{h}<0\) on spans of the centerlines and inner isosurface are not evidence for \(h<0\) vortex sheets. The patches are even similar to Lamb-Oseen as reconnection begins at \(t=3.6\) in section III.3. Instead, the patches of \(\epsilon_{h}\lesssim 0\) could be evidence of where \(h<0\) vortex structures are being created.
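Since the opposing points recur in everything that follows, here is a minimal sketch of one way to locate them: for a marked parameter value on the centerline, find the closest point in 3D whose parameter lies outside a guard band around the mark. The guard-band fraction is an illustrative assumption, not a value from the calculations.

```python
import numpy as np

def opposing_point(s, x, s_mark, guard=0.2):
    """Locate the point opposing x(s_mark) on a closed centerline:
    the nearest point in 3D whose parameter differs from s_mark by more
    than guard * L along the curve (so the same local span is excluded).
    s: (N,) parameter values; x: (N, 3) centerline points."""
    L = s[-1] - s[0]
    i = np.argmin(np.abs(s - s_mark))
    d = np.linalg.norm(x - x[i], axis=1)
    ds = np.abs(s - s_mark)
    ds = np.minimum(ds, L - ds)            # periodic distance along the curve
    d[ds < guard * L] = np.inf             # mask out the same local span
    j = int(np.argmin(d))
    return s[j], x[j]
```

Applied at each \(s_{f}\) this returns the corresponding \(s_{o}\); applied at each \(s_{d}\) it returns the \(s_{g}\).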
**How the \(h<0\) isosurface vorticity forms:**

* \(h<0\) formation. While at \(t=2.4\) there are spans of \(h(s)<0\) in figure 14, this does not translate into significant \(\pm\) variations of \(h\) on the \(t=2.4\) isosurface or signs of vortex sheets. It is not until \(t=3.6\) that significant dips of \(h<-5\) appear at the \(s_{d}\) locations, on both the centerline and the inner (large \(\omega\)) isosurface in figure 15(a,b).
* What is new in 3D at \(t=3.6\) is extensive \(h<0\) on parts of the smaller vorticity magnitude outer isosurfaces in figure 16: red for strong \(h<0\) along the red-coded centerline in the lower (\(z<-0.8\)) portion of the trefoil, and yellow \(h\lesssim 0\) helicity on the other side of those isosurfaces, with faint signs of shed vorticity. This is a trend that continues to later times, as illustrated in figure 21 at \(t=4.8\).
* **Relation between \(h<0\) centerlines and isosurface zones.** The red on the isosurface is associated with the broader spans of centerline \(\epsilon_{h}(s)\lesssim 0\) that connect the \(s_{f}\) and \(s_{d}\) local positions. Example: follow the maroon \(s_{f}\) \(\star\) through where the loops cross, then down to the yellow \(s_{d}\) \(\diamond\); or from the maroon \(s_{f}\) \(\circ\) to the yellow \(s_{d}\) \(\star\) underneath the maroon \(\star\).
* All of these correspond to \(\epsilon_{h}\sim 0\) spans between all six local \(\min(\epsilon_{h})\) at the \(s_{f}\) and \(s_{d}\) in figure 15a.
* The reddish \(h<0\) patches extend over roughly 2/3rds of these spans on the lower (\(z<-0.7\)) part of the isosurface.
* The reddish zones smoothly transition into the yellowish, more sheet-like outer surfaces.
* This is illustrated further at \(t=4.8\) with the red patches in figures 21 and 23.
* Further \(t=3.6\) figures from different perspectives and different cropping levels will appear shortly.

Figure 17: Two \(t=3.6\) Lamb-Oseen isosurfaces with different vorticity thresholds. (a) The primary \(\omega=19\) isosurface is similar to the higher-\(\omega\) algebraic isosurface in figure 15.
Additional markers indicate the three-dimensional locations of the \(s_{d}\) (yellow), local min(\(\epsilon_{h}\)); the \(s_{f}^{+}\) (cobalt), the local max(\(h_{f}\)) points; and the \(s_{o}^{+}\) (turquoise), points opposing the \(s_{f}^{+}\) that are also min(\(h\)) \(<0\) and min(\(\zeta_{p}\)) points. Reconnection is commencing between the \(s_{f}^{+}\) and \(s_{o}^{+}\) points. The local \(s_{d}\) (yellow) min(\(\epsilon_{h}\)) sit in strongly positive \(h>0\) zones, not \(h<0\) as for the algebraic calculations or Lamb-Oseen for \(t\leq 2.4\). (b) The second isosurface uses a very small \(\omega=1.7\) to show that the outer edges of the isosurface are shedding sheets with slightly negative helicity.

Figure 18: \(t=3.6\) Lamb-Oseen (Gd05) (10) centerline budget profiles. The \(s_{d}\) (yellow/maroon) at local \(\min(\epsilon_{h})\), co-located with local \(\max(\epsilon_{\zeta})\) and \(\max(|\omega|)\), are in large \(h>0\) zones far from the reconnections. The \(s_{f}^{+}\) (cobalt) are at local \(\max(h_{f})\) points and co-located with local \(\max(\zeta_{p})\) and secondary velocity minima. The \(s_{o}^{+}\) (turquoise) points oppose the \(s_{f}^{+}\) and are co-located with \(\min(h)<0\) and \(\min(\zeta_{p})\) points. Reconnection is commencing between the \(s_{f}^{+}\) and their opposing \(s_{o}^{+}\) points.

Figure 19: For Lamb-Oseen at \(t=4.0\) there are two isosurfaces surrounding the centerline vortex line. (a) The primary isosurface shows the overall structure using a very small vorticity of \(\omega=9.3=0.014\omega_{m}\). (b) Shows a \(\omega=37\) isosurface that focuses upon the lower-left reconnection site between the two loops of the centerline to highlight one of the reconnection bridges.

Figure 20: \(t=4.4\) Lamb-Oseen isosurfaces. (a-c) Three views of the isosurfaces, with the bottom two focusing upon the smallest structures. (a) The primary \(t=4.4\) isosurface shows the overall structure with \(\omega=49=0.015\omega_{m}\,(=312)\) to show how braids are forming from bridges, as seen for previous Lamb-Oseen calculations. (b) Shows the full length of one of the double braids, including where it attaches to the new upper and lower vortex rings. Similar to \(t=4.29\) of figure 18 from [3]. (c) Focuses on one end as that double braid winds around the primary vortex.

### Gaussian/Lamb-Oseen reconnection: braid formation.

In section III.1, the early divergence of the \(t=1.2\) Lamb-Oseen budget profiles from the algebraic profiles was shown in figures 11 (Gd05) and 9 (r1d015), respectively. This section gives the effect of that early divergent dynamics upon Lamb-Oseen as reconnection begins, starting at \(t=3.6\) with figures 17 and 18; \(t=3.6\) is the last time that a single centerline could be identified for case Gd05. The Lamb-Oseen analysis ends with the \(t=4\) and \(4.4\) isosurfaces in figures 19 and 20. These show how the trefoil then breaks into two vortex rings, connected first by what could be described as bridges, then as braids.

The two Lamb-Oseen \(t=3.6\) isosurfaces in figure 17 are:
1. A primary, higher magnitude \(\omega=19\) isosurface that shows continuity with the earlier Biot-Savart evolution and has minimal differences with the \(t=3.6\) inner algebraic structure in figure 15.
2. A lower magnitude \(\omega=1.7\) isosurface that shows how the Lamb-Oseen profile reconnection begins on the outer wings, with sheets shedding with some \(h\lesssim 0\).
These sheets with bits of \(h\leq 0\) are localized around the reconnection points, unlike the broad \(h<0\) isosurface zones of the r1d015 algebraic trefoil in figure 16. The \(t=3.6\) budget profiles and isosurfaces in figures 17 and 18 have three sets of primary local positional marks, \(s_{d}\), \(s_{f}^{+}\) and \(s_{o}^{+}\), plus the \(s_{f}\).
* The \(s_{d}\) in yellow (with embedded maroon \(s_{f}\)) are at local min(\(\epsilon_{h}\))+min(\(h_{f}\)) positions. The \(s_{d}\) are exactly on local max(\(|\omega|\)) and max(\(\epsilon_{\zeta}\)), the maximum enstrophy dissipation.
* The \(s_{f}^{+}\) in cobalt are at the local max(\(h_{f}\)) and are coincident with local max(\(\zeta_{p}\)). Local \(\zeta_{p}>0\) implies stretching, suggesting that these positions could be the seeds for the bridges that form during reconnection.
* The third set, the \(s_{o}^{+}\) in turquoise, are at the points opposing the \(s_{f}^{+}\). The \(s_{o}^{+}\) are also local min(\(h\)) and min(\(\zeta_{p}\)), local compression, suggesting that there is pinching on the trefoil vortex at the other end of the nascent bridges.
* All of this is consistent with active reconnection at these positions.
* What can the \(t=3.6\) markers tell us about the separation of the trefoil into two rings?
* The cobalt max(\(h_{f}\)) \(s_{f}^{+}\) points with large \(\zeta_{p}>0\) become one end of the bridges, with their opposing turquoise \(s_{o}^{+}\) at the other end.
* The \(s_{d}\) yellow min(\(\epsilon_{h}\)) points are on what becomes the upper (u) ring, with magnitudes \(h_{u}>0\).
* The turquoise \(s_{o}^{+}\)/min(\(h\)) points become the lower (\(\ell\)) ring, with some \(h(s_{o}^{+})<0\) appearing on the localized vortex sheets in figure 17b, such as to the left of \(\omega_{m}\) (**X**).
* What develops out of this \(t=3.6\) state?
* At \(t=4\) in figure 19, short, flattened bridges are generated as the trefoil begins to separate into two rings.
* The positions of \(\omega_{m}\), \(u_{m}\), \(h_{mx}\) and \(h_{mn}\) are all on the bridges.
* At \(t=4.4\), in figure 20, the new upper (blue) and lower (red) rings are separating, with each bridge splitting into two braids.
* The positions of \(\omega_{m}\), \(h_{mx}\) and \(h_{mn}\) are on the lower ring and \(u_{m}\) is on the upper ring.
* Figures 19 and 20 are roughly equivalent to the \(Re=12000\) figures at the same times for a previous trefoil calculation using Lamb-Oseen profiles [3], including the splitting of each bridge into two braids.
* So providing further Gaussian/Lamb-Oseen graphics and discussion in this paper is unnecessary.

\(\bullet\) Summary of how the Lamb-Oseen budget profiles in figures 11, 12 and 18 can explain the evolution of the global enstrophy \(Z(t)\) and the helicity \(\mathcal{H}(t)\) in figure 1:
* Starting at \(t=0\), when \(\int ds\,\zeta_{p}\equiv 0\), for the spans with local compression, \(\zeta_{p}<0\), the viscous terms and \(\epsilon_{\zeta}\) are enhanced, resulting in \(Z(t)\) decreasing for at least short \(t\gtrsim 0\) times for all cases and viscosities \(\nu\).
* Between \(t=2.4\) and \(3.6\), the global enstrophy production and its dissipation rate are approximately equal to their centerline integrals: \(Z_{p}=\int dV\zeta_{p}\sim\int ds\,\Gamma\zeta_{p}\) and \(\epsilon_{Z}=\int dV\epsilon_{\zeta}\sim\int ds\,\Gamma\epsilon_{\zeta}\), with \(Z_{p}\) and \(\epsilon_{Z}\) roughly balancing one another in figures 12 and 18 (\(t=2.4\), 3.6), giving \(dZ/dt=Z_{p}-\epsilon_{Z}\approx 0\) over the temporal span of \(2.4\leq t\leq 3.6\).
The result is relatively steady enstrophy \(Z(t)\) over those times in figure 1.
* At \(t=3.6\) in figure 18, at the locations of positive, not negative, spikes in \(h_{f}\), there are sharp positive spikes in the enstrophy production \(\zeta_{p}\).
* These spikes of \(\zeta_{p}>0\) continue through \(t=4\), generating the brief enstrophy spurt in figure 1. This spurt is when the bridges form, shown in figures 17 and 19.
* Then, as the strong centerline enstrophy dissipation \(\epsilon_{\zeta}\) in figure 18 takes over, the centerline spikes of local \(h_{f}>0\), \(\zeta_{p}>0\) and \(\omega=\sqrt{\zeta}\) are dissipated, along with the temporal spikes of \(Z(t)\) in figure 1.
* For \(\mathcal{H}(t)\), except at \(t\sim 1.2\) as in figure 11, its \(t\leq 3.6\) evolution is dominated by the strongly localized negative helicity dissipation \(\epsilon_{h}\), which removes \(h<0\), thereby leading to increasing \(\mathcal{H}(t)>0\). After \(t=3.6\), as dissipation removes the small amounts of \(h<0\) associated with the bridges, \(\mathcal{H}(t)\) increases further.
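As a check on this balance, the centerline approximations to the global production and dissipation are simple quadratures. A minimal sketch, with the circulation \(\Gamma\) and the mapped centerline profiles supplied by the user:

```python
import numpy as np

def centerline_enstrophy_balance(s, zeta_p, eps_zeta, Gamma):
    """Approximate the global enstrophy balance from centerline profiles:
    Z_p ~ Gamma * int zeta_p ds, eps_Z ~ Gamma * int eps_zeta ds,
    dZ/dt ~ Z_p - eps_Z. A near-zero difference reproduces the flat
    Z(t) seen for 2.4 <= t <= 3.6."""
    Zp = Gamma * np.trapz(zeta_p, s)
    epsZ = Gamma * np.trapz(eps_zeta, s)
    return Zp, epsZ, Zp - epsZ
```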
### Algebraic reconnection scaling with \(h<0\) \(\omega\)-sheets.

Due to the constraints imposed upon the calculations in this paper, three-fold symmetry and a \((2\pi)^{3}\) domain, it has been a surprise that the algebraic profile cases have generated finite-time, finite energy dissipation \(\Delta E_{\epsilon}\) (1), as shown in figures 2 and 25 by the finite-time convergence of the dissipation rates \(\epsilon(t)=\nu Z\) of the broadest profiles, cases r1d015 and r2d1, at least for a short range of viscosities. The evidence for finite \(\Delta E_{\epsilon}\) in the earlier perturbed trefoil calculations [2] could only be achieved by using very large domains. Furthermore, for all of the algebraic profile calculations there are vortex sheets and convergent \(\sqrt{\nu}Z\), such as in figure 3 (r1d015) and the examples in section III.5, although with profile-dependent convergence times \(t_{x}>t_{r}\).

What are the underlying structures and dynamics that allow the subsequent enstrophy growth to accelerate and form finite \(\Delta E_{\epsilon}\) for these cases? Figures 16 and 15 at \(t=3.6\) show where, and how, the conditions for generating negative helicity vortex sheets originate. This section extends that analysis to \(t=4.8\) to show how the sheets then expand and contribute to the enstrophy growth: growth that can lead to finite-time energy dissipation. The gradual changes at the intermediate times of \(t=4\) and \(t=4.4\) are skipped. The important differences with the Lamb-Oseen calculation are also highlighted.

The three-dimensional structure at \(t=4.8\) is illustrated in figures 21 and 22 using several perspectives of two vorticity isosurfaces and red \(h<0\) hash marks. Mapped-\(h\) is on the broader isosurface with a lower vorticity, \(\omega=0.64\approx 0.02\omega_{m}\), while a higher vorticity \(\omega=14\) monochrome isosurface encases the centerline vortex, with the red hash marks indicating the \(\epsilon_{h}\lesssim 0\) spans on the centerline from which the sheets are shed. Figure 21 shows the entire structures from two perspectives. To clearly see the yellow \(h\lesssim 0\) sheets, figure 22 lops off upper parts of the trefoil.

**t=4.8 r1d015 centerline budgets** Similar to how figure 16 at \(t=3.6\) marks in red the centerline spans with the strongest \(\min(\epsilon_{h})<0\), figure 21 at \(t=4.8\) marks those spans with red hashes. The extent of these spans, on both the centerline in figure 23 and the isosurfaces, is indicated at one end by the green \(s_{g}\); each span then continues \(2/3\)rds of the way to an \(s_{d}\) mark from another \(s_{d}-s_{g}\) pair. The maroon \(s_{f}\) positions are no longer part of the ongoing reconnection, but are on an \(h>0\) zone that is becoming an upper vortex ring, while the red hashes and the \(s_{d}\) and \(s_{g}\) marks are becoming part of a lower ring; the side view in figure 21b shows this more clearly. Further remarks:
* In figure 23a the \(s_{d}\) mark the primary \(\min(h)<0\) positions, and in 23c the positions of \(\max(\epsilon_{\zeta})\), enstrophy dissipation.
* The \(\epsilon_{h}\lesssim 0\) spans with red hashes show that the reconnection between the loops is between segments on those loops and is not simply point-to-point as with Lamb-Oseen.
* The yellow vortex sheets at \(t=4.8\) now encompass almost the entire interior within the trefoil.
* Comparing figure 21 to Lamb-Oseen in figure 18, the only similarity is that reconnection is forming between a primary marker and its opposing point. However, the primary Lamb-Oseen reconnection markers are not the \(s_{d}\), but the \(s_{f}^{+}\) at local \(\max(h_{f})\) points: locations with stretching, \(\zeta_{p}>0\), not compression. This is part of the dynamics responsible for why the algebraic and Lamb-Oseen reconnection structures are so different.
* While Lamb-Oseen creates isolated braids that quickly dissipate and shut down enstrophy production, the algebraic profiles shed vortex sheets, sheets whose mutual interactions can accelerate enstrophy production.

In figure 22 the upper, blue \(h>0\) zone has been lopped off to reveal the full extent and nature of the vortex sheets.

**Centerline budgets and bridge formation.** Up through \(t=3.6\) the centerline budget profiles have largely been used to identify the origins of the divergent evolution between the two types of initial vorticity profiles. What can the \(t=3.6\) centerline budgets tell us about the dynamics and structures during the next phase?

First question: Why is so little negative helicity (\(h<0\)) seen on the centerlines, despite the presence of neighboring \(h<0\) vortex sheets? A likely contributing factor is that the spans of strong \(\epsilon_{h}<0\) on the centerlines can act as sponges that remove centerline \(h<0\).

Second: What is the local dynamics when the trefoil starts to break into two rings? At \(t=3.6\), the three \(s_{d}\) and the opposing \(s_{f}\)-\(s_{g}\) are all locations with local \(\min(h_{f})\) and \(\min(\zeta_{p})\), indicating local compression and pinching along the vortex lines on both sides of the developing reconnection bridges, probably due in part to the interactions between the bridges' two ends in three dimensions.

Third: For how long does this compression/pinch persist? In the \(t=4.8\) figure 23, the local \(\min(h_{f})\) and \(\min(\zeta_{p})\) diagnostics that foreshadowed reconnection for \(t\leq 3.6\) still have coincident large negative spikes. However, these are now located within the developing upper ring, far from the three developing reconnections, and unlike at \(t=3.6\), they are not adjacent to \(s\)-spans with significant enstrophy production, \(\zeta_{p}>0\).

Fourth: Even as the compression/pinch dynamics subsides at \(t\sim 4.8\), the enstrophy continues to grow. On the centerlines this is because the yellow, local \(\min(\epsilon_{h})\) \(s_{d}\) points still have local enstrophy production maxima, \(\max(\zeta_{p})>0\).
Overall, it is because for \(t\geq 4.8\), most of the enstrophy production comes from the growth of the \(h<0\) vortex sheets that now envelop the lower ring and the bridges connecting the upper and lower rings.

Why is the creation of \(h<0\) sheets so important? Start with these two reasons. First, by creating \(h<0\) zones, the vorticity in the \(h>0\) zones can grow; this breaks the early, pre-viscous, helicity conservation constraint upon vorticity growth. Second, by spreading the vorticity into sheets, the enstrophy in figure 2 can continue to grow during the first phase of reconnection, unlike the Lamb-Oseen enstrophy in figure 1. This sets up the next stage, as those sheets begin to interact with one another at \(t=6\).

**t=6** The last set of r1d015 isosurfaces are for \(t=6\) in figure 24. Instead of finding a centerline vortex, there is a higher vorticity isosurface within the low vorticity isosurface. This \(t=6\) figure represents when the first phase of reconnection ends, defined as the time \(t_{x}\) when the \(\sqrt{\nu}Z(t)\) converge in figure 3 and the shedding of \(h<0\) sheets has ended. The views of the isosurfaces at \(t=6\) in figure 24 are similar to those at \(t=4.8\) in figures 21 and 22: (a) a side view of the entire trefoil; and (b) a plan view of the lower ring, taken from the subdomain outlined in 24a. There are differences, however. The side view in figure 24a shows that the legs of the lower ring have separated from the upper ring, with connecting bridges whose inner, large-\(\omega\) isosurfaces are winding around one another, such as in the upper right, with some wrapping of the helicity-mapped isosurface about the core. This has some similarities to the Lamb-Oseen upper and lower rings in figure 19, with connecting bridges at \(t=4\), bridges whose ends then wrap about the rings in figure 20. Except that for Lamb-Oseen the bridges transform into isolated braids in figure 20, not broad vortex sheets.

What the experiments can visualize with Lagrangian markers are only the strongest isolated vortices. What those experiments miss are the low vorticity sheets, like those at \(t\geq 4.8\) in figure 22. In this sense, the algebraic large-\(\omega\) bridges in figure 24a are a better representation of recent directly observed experimental vortices [16; 17] than Lamb-Oseen bridges, such as in figures 19 and 20.

The plan view in figure 24b shows the beginnings of the next phase, with changes in the pigmentation on the sheets of the lower ring as they start wrapping around one another. The pigmentation changes from almost all yellow, and some red, at \(t=4.8\) in figure 22 to pigmentation at \(t=6\) in figure 24b that varies from red to yellow to green. Along the leg that runs from the lower right to the upper left, there is orange (\(h<0\)) coming out of the bridge in the lower right, yellow (\(h\lesssim 0\)) on the shed sheet in the middle, then green (\(h\gtrsim 0\)) on the left that is wrapping around the bridge and another sheet. This variation in color suggests that the sign of the vortical velocity \(\mathbf{u}\cdot\hat{\omega}\) is also changing, which implies stretching along the legs. Given that these stretched sheets are wrapping around the bridges and their neighboring sheets, a configuration has been created with all the elements required to invoke the Lundgren model [19] for stretched spiral vortices. This is the only analytic model that generates the growth of enstrophy required to generate a -5/3 Kolmogorov-like spectrum.
This also implies the generation of a dissipation anomaly (1). Work on the details of the responsible inter-sheet dynamics is in progress.

Figure 21: Two views of the same \(t=4.8\) isosurfaces from the \(p_{r}=1\), \(r_{o}=0.015\) (r1d015) calculation from different elevation angles: (a) a planar view and (b) a side view. \(t=4.8>t_{r}\sim 4\) represents the middle of the initial phase that ends with the first reconnection at \(t_{x}\)=6. Isosurfaces: a blue inner \(\omega=14\) surface and a small \(\omega=0.65=0.02\omega_{m}\) isosurface with mapped helicity. The positions of \(\omega_{m}\), max(\(h\)), min(\(h\)) and \(u_{m}\) are given along with extrema of terms from the enstrophy and helicity budgets. The red hashes indicate where sheets arise from the marked centerline spans of \(\epsilon_{h}<0\) in budget figure 23a, plus three triplets of local positions \(s_{f}\), \(s_{d}\) and \(s_{g}\) at local min(\(h_{f}\)), min(\(\epsilon_{h}\)) and its opposing points. The symbols given in the legend are also used in figure 23. In (a) the overall structure of the lobes is emphasized. (b) shows that the red hashes are all in the lower portion and represent where a separate lower vortex ring is forming. The origins and location of the yellow regions are given in the next figure.

Figure 22: Two views of the \(t=4.8\) lower region for \(z<-1.1\) and -0.65 respectively, with each perspective dominated by yellow \(h\lesssim 0\): (a) looking down; (b) looking up with the domain flipped across a line from [x y]=[-1 1.5] (green triangle) to the [x y]=[1 -1] corner, with some of the upper \(h>0\) zone included. It is also rotated a bit about the \(z\)-axis to give a flavor of how the legs of the lower ring are connecting with the bridge. Gray is where we are looking through both the lower yellow and upper blue. Some of the \(h>0\) zone is included to show that while the \(h\lesssim 0\) sheets are being shed from the lower \(h<0\) centerline, they extend up to the upper \(h>0\) blue-marked centerline. The orange \(s_{d}\) and the opposing green \(s_{g}\), both marked with \(\diamond\)’s, are highlighted to show how the legs might be starting to wind around each other.

Figure 23: Vorticity centerline profiles at \(t=4.8\) for case r1d015. Budget profiles: \(h\), \(\epsilon_{h}\), \(h_{f}\), \(\epsilon_{\zeta}\) and \(\zeta_{p}\), with added vertical dashed lines for these local positions: \(s_{f}\) (maroon, \(\min(h_{f})\)), \(s_{d}\) (yellow, \(\min(\epsilon_{h})\)), and \(s_{g}\) (green) for the \(s_{d}\) opposing points. The \(s_{f}\) are also at \(\min(\zeta_{p})\) and at large local enstrophy dissipation \(\epsilon_{\zeta}\) positions. The \(s_{d}\) are at secondary \(\min(\zeta_{p})\) and at local \(\max(\epsilon_{\zeta})\) positions. The \(\epsilon_{h}(s)\lesssim 0\) spans over which the \(h<0\) sheets are being shed are indicated by thick, dashed red lines that are to the right of each \(s_{g}\). Reconnection is forming between spans near each \(s_{d}\) and the red hashed patches on the opposing loops with green \(s_{g}\) symbols at one end. For example: the yellow diamond at \(s_{d}=4.7\) and the span next to the green diamond at \(s_{g}=16\).

Figure 24: Two \(t=6\) r1d015 isosurface perspectives at \(t_{x}\), the end of the first reconnection, as defined by figure 3a. This is when the dissipation in figure 3b begins to accelerate, with convergence of \(\epsilon=\nu Z\) at \(t\approx 10\).
There are two isosurfaces: an inner \(\omega=12\) blue one that encases the centerline, and an outer \(\omega=1.5\) one with helicity mapping. The two perspectives are similar to those at \(t=4.8\): (a) is a side view similar to that in figure 21; (b) is a cropped plan view, similar to figure 22 but with the helicity brightened. A box is drawn on both frames to show where the subdomain in (b) has been taken from the full domain in (a). In (a) the dominant structure is the pure blue \(\omega=12\) centerline isosurface with three bridges connecting the separating upper and lower vortex rings. This illustrates what direct experimental visualizations of cores are probably observing [16]. The plan view shows what those experiments cannot see: lower \(\omega\) magnitude \(h\lesssim 0\) vortex sheets. Two differences with figure 22 are that the sheets shed from the legs change pigmentation along their length, and that they are wrapping around one another at the bridges. The ‘left’ bridge has the \(\min(h)\) (red \(\blacktriangledown\)) mark. The ‘right’ bridge has the \(\omega_{m}\) (**X**) and \(u_{m}\) (green \(+\)) marks. The color change on the bottom leg is from orange \(h<0\) at the (**X,+**) ‘right’ bridge to green at the ‘left’ bridge, with the ‘left’ green wrapping around the ‘left’ bridge in the upper left, and green from the leg on the right wrapping about the ‘right’ bridge and some of the \(y\)-axis leg.

Figure 25: For case r2d1, the algebraic K-S-R profile (9) with \(p_{r}=2\) and \(r_{o}=0.1\): (a) evolution of the dissipation rate \(\epsilon(t)=\nu Z\), with approximate convergence at \(t_{e}=10.75\) and convergence of the reconnection-enstrophy \(\sqrt{\nu}Z(t)\) at \(t_{x}=5.45\) in the inset, and (b) the helicity \(\mathcal{H}\) for different viscosities. These curves are similar to those for case r1d015 in figures 2 and 3.

### Reconnection-dissipation structures for K-S-R \(p_{r}=2\)

To finish the cases, a few results from the two K-S-R \(p_{r}=2\) cases r2d1 and r2d05 are included. Recall that due to stability (14), these profiles are stable unless the azimuthal wavenumber \(m\) (13) is very large. For case r2d1, the evolution of \(Z\), \(\sqrt{\nu}Z\) and \(\mathcal{H}\) mirrors that of case r1d015 in figure 2. This includes strong convergence of \(\sqrt{\nu}Z\) at the same time of \(t_{x}\simeq 6\), and approximate convergence of the dissipation rate \(\epsilon=\nu Z\) at \(t_{e}\approx 10\), with similar post-reconnection \(\mathcal{H}(t)\) growth, then decay. The evolution of its three-dimensional structures is also similar.

The calculations with thinner initial algebraic cores (r2d05 and r1d006) behave differently. Both generate \(\sqrt{\nu}Z\) convergence, but earlier than r2d1 and r1d015, and both fail to generate dissipation rate \(\epsilon\) convergence. And for r2d05, the post-reconnection vortex structures in figure 27 have similarities with the Lamb-Oseen braids in figure 20. These final results are likely due to the constraints imposed by the three-fold symmetry and the confined \((2\pi)^{3}\) periodic domain. It has previously been shown that if the core thickness is thinner [1] or the Reynolds number is higher [2], larger domains are required to get convergence of \(\sqrt{\nu}Z\), and that by breaking these constraints [2], the calculation can attain the accelerated enstrophy growth required first for \(\sqrt{\nu}Z(t)\) convergence, then for approximate convergence of the dissipation rates \(\epsilon=\nu Z\) by a \(\nu\)-independent time.
This is not possible for the final r2d05 and r1d006 calculations due to those constraints. A full discussion of these questions using new calculations in larger domains and a wider range of viscosities will be in a paper in preparation.

Figure 27: For r2d05, side views at \(t=4.4\) and \(5.2\). (a) At time \(t=t_{x}=4.4\), when the \(\sqrt{\nu}Z(t)\) cross, a vortex sheet is being generated. (b) These become connecting bridges at \(t=5.2\). High \(\omega\) isosurfaces are used instead of vortex lines to indicate the centerlines.

Figure 26: For case r2d05, algebraic with \(p_{r}=2\) and \(r_{o}=0.05\), for different viscosities: (a) convergence of \(\sqrt{\nu}Z(t)\) at \(t=4.45\), (b) evolution of the dissipation rate \(\epsilon(t)=\nu Z\) as an inset, and (c) the helicity \(\mathcal{H}\). Case r1d006 (\(p_{r}=1\), \(r_{o}=0.006\)) has similar \(Z(t)\), \(\sqrt{\nu}Z(t)\) and \(\mathcal{H}(t)\) evolution and incipient vortex sheets because for both, the \((2\pi)^{3}\) domain is too restrictive when the core radius is very thin.

## IV Summary

### Concluding remarks.

The critical points in this paper are:
* Demonstrating that the enstrophy and helicity at reconnection depend upon the initial vorticity profile when vortex knots have the same initial trajectory and circulation.
* Vortex centerline diagnostics that demonstrate how the evolution for different initial profiles diverges.
* Explaining the structural differences that form during the first reconnection: vortex bridges/braids for the Gaussian/Lamb-Oseen profile and vortex sheets for all the algebraic profiles.
* Not covered are the interactions between the vortex sheets of the widest algebraic profiles that lead to \(\nu\)-independent convergence of \(\epsilon\) and finite \(\Delta E_{\epsilon}\) (1). That will be the topic of another paper that extends to later times the previous calculations of perturbed trefoil knots in domains that grow as the viscosity decreases [2].

Only the two outlying cases (Gd05 and r1d015) have been discussed in detail. For each, these are the critical questions: 1) Is it subject to infinitesimal instabilities? 2) How does its \(t=0\) stability influence its reconnection-time behavior? 3) And does that behavior allow finite energy dissipation to form, or not?

The answer to 1) comes from recent mathematics [15] that shows that initial profiles can be subject to instabilities when the initial state has small, but not tiny, perturbations. If so, then the mathematics of instabilities upon a columnar vortex [14], illustrated in figure 6, can be used to show that for almost all wavenumbers, there is a Richardson number dependent instability (12), as in figure 5. This develops despite the Lamb-Oseen profile being the usually successful and favorite choice of the engineering community. The resulting instability-induced proliferation of \(\omega=0\) contours is illustrated by the \(t=1.2\) \(\omega_{y}\) cross-section in figure 8, a property previously observed for perturbed anti-parallel vortices [7; 9]. In contrast, the regularized \(p_{r}=1\) and \(p_{r}=2\) algebraic profiles (9) are almost always stable, with a comparison \(\omega_{y}\) cross-section given in figure 7.

How can those small \(t\gtrsim 0\) differences be the origin of the dramatic post-reconnection differences? New diagnostics are required because with the usual diagnostics of \(Z(t)\) and \(\mathcal{H}(t)\), there are few differences between cases until reconnection truly begins.
The most that the mapped-helicity isosurfaces can tell us about the dynamics is that around regions of negative helicity \(h<0\), sometimes just spots of yellow or red, viscous reconnection develops as the nonlinear timescale of \(t_{r}\sim 4\) is approached. What the isosurfaces cannot explain is why the new structures that are generated are so different: bridges and braids for Lamb-Oseen and isosurface sheets for all of the algebraic profiles. What is needed is a set of diagnostics that can follow the dynamics of the interiors before the enstrophy \(Z(t)\) and the helicity \(\mathcal{H}(t)\) diverge after \(t\sim t_{r}\).

2a) The terms from the enstrophy and helicity budget equations (5,6) are another set of diagnostics that might provide evidence for the early origins of the differences between cases. These could be mapped onto isosurfaces, as done for the helicity, or onto the centerlines. When mapped onto the isosurfaces, their variations are too weak to be useful. In contrast, when mapped onto the centerline vortices (17), the variations are substantial.

2b) The chosen centerline diagnostics in this paper are \(h\), \(\epsilon_{h}\), \(|\omega|=\sqrt{\zeta}\), \(h_{f}\), \(\epsilon_{\zeta}\) and \(\zeta_{p}\), arranged into four panels, plus vertical dashed lines in every panel at positions related to local extrema. This includes the positions of local \(\min(h_{f})\), local \(\min(\epsilon_{h})\) and their nearest positions on the opposite loop of the trefoil. By following and comparing their extrema between the panels and the isosurfaces, a picture of the evolution emerges.

The diagnostics that carry the most information at early times are the centerline positions of local \(\min(h_{f})\), the \(h\)-flux minima (6). At the earliest times shown, \(t=1.2\) for the r1d015 algebraic profile and \(t=0.4\) for Lamb-Oseen case Gd05, the local \(\min(h_{f})\) can be matched with several extrema: local minima and maxima of the helicity dissipation \(\epsilon_{h}\) and minima of the enstrophy production \(\zeta_{p}\) (5), as given in figures 9 and 10. For algebraic case r1d015, from \(t=1.2\) to when reconnection begins, the relative centerline positions of these extrema are stable, allowing the \(h<0\) zones on the new lower ring to gradually shed \(h<0\) vortex sheets.

In the period \(t=1.2\) to \(2.4\), the relative positions on the Lamb-Oseen centerline profiles are not stable. Figure 11 at \(t=1.2\) has six roughly equivalent positive and negative excursions of \(\epsilon_{h}\) around positions of local compression, local \(\min(\zeta_{p})<0\), likely due to local interactions with the instability-induced, oppositely-signed patches shown in figure 8. Three are associated with the \(s_{f}\) points, the other three with their \(s_{o}\) opposing points. The Lamb-Oseen \(s_{f}\) points return to something akin to normal for the budget curves at \(t=2.4\) in figure 12. However, the damage has been done, and when reconnection begins at \(t=3.6\), the reconnection structures form only between the \(t=1.2\) extrema points.

3) It is these differences in the respective \(t\leq 2.4\) budgets that determine whether the post-reconnection structures are braids or sheets, and whether finite energy dissipation can form. Post-reconnection, Lamb-Oseen first generates bridges, as at \(t=4\) in figure 19, then progresses to braids at \(t=4.4\) in figure 20, with only a short-lived growth in the enstrophy \(Z(t)\) and energy dissipation \(\epsilon(t)\) in figure 1 before \(Z\) and \(\epsilon\) decay.
This contrasts with the algebraic profiles, which do not have this instability or any excessive local compression. Due to this, the helicity transport \(h_{f}\) is able to spread \(h<0\) along the centerline, from which \(h<0\) vortex sheets can be shed as the trefoil self-reconnects, as shown in figure 16a,c at \(t=3.6\) and figures 21 and 22 at \(t=4.8\). Figure 24 at \(t=6\) shows how those sheets, when interacting, can allow the enstrophy growth to accelerate and convergent energy dissipation rates \(\epsilon\) to be achieved, leading to evidence for a dissipation anomaly with finite \(\Delta E_{\epsilon}\) (1). The only evidence for bridges or braids from the algebraic calculation comes from internal higher-\(\omega\) isosurfaces, as in figure 24.

### Discussion

The centerline budget diagnostics introduced here will next be applied to extensions of, or variations upon, two existing calculations. First, extensions of the earlier, perturbed trefoils in very large domains [2] to higher Reynolds numbers and later times. Second, versions of recent calculations of interacting orthogonal vortices [18]. For both, approximately convergent \(\nu\)-independent dissipation rates \(\epsilon=\nu Z\) develop after the interacting vortices flatten, \(\nu\)-independent convergent \(\sqrt{\nu}Z\) is observed at \(t_{x}\), and the sheets wrap around one another. On the orthogonal isosurfaces, the mapped helicity indicates that within that wrapping, the vortex stretching is vortical. These observations are consistent with the Lundgren spiral vortex model [19] for generating a -5/3 energy spectrum. At the time (circa 1982), a mechanism for creating wrapped and stretched vortex sheets within a turbulent flow had not been demonstrated, although in retrospect this is probably what stills [20] taken from the earliest color, three-dimensional animations of interacting vortices are showing.

The recent orthogonal vortices [18] were initialized with a Lamb-Oseen profile and did not develop \(t\gtrsim 0\) negatively-signed ghost vortices, probably because those vortex tubes were not curved, but straight, so were not modified by the solenoidal projection as in initialization step 4 in section II.1. This means they lacked a perturbation on their outer edge similar to that in figure 5, with the only perturbations being inherently numerical and tiny. The additional analysis [15] given after stating the stability function \(J(\rho)\) (12) for columnar vortices [14] says that tiny perturbations should not generate strong instabilities. That is, if a Lamb-Oseen profile is applied to straight vortex tubes, there will not be any instabilities capable of generating negatively-signed ghosts like those in figure 8 and earlier work [9].

**Other Lamb-Oseen calculations.** In the recent review [21] of the state of numerical vortex reconnection, a reconnection-to-bridges-to-braids cascade paradigm was presented based upon the results from Lamb-Oseen profile calculations, without any examples given of a second step in that cascade. Given the contrasting enstrophy evolution of the algebraic calculations, how should that paradigm be changed? The changes are substantial, with the algebraic alternative being a two-step process instead of a cascade. First, the period that ends at \(t_{x}\) with \(\sqrt{\nu}Z(t)\) convergence, generation of \(h<0\) vortex sheets and completion of the first reconnection.
Next, the period \(t_{x}<t\lesssim t_{\epsilon}\approx 2t_{x}\) during which the sheets wrap around one another, leading to convergent \(\epsilon=\nu Z\). As that large \(\epsilon\) persists, finite-time, finite \(\Delta E_{\epsilon}\) (1) forms. Furthermore, because that review [21] focuses upon their recent trefoil calculation [3] as the latest support for the reconnection-to-braids paradigm, it is fair to ask whether the instabilities identified here extend to all the cited Gaussian/Lamb-Oseen calculations in that review. They probably do, going back to the first in 1989 [22]. The effects of such instabilities were first clearly identified for an Euler calculation using an elongated Gaussian profile [7] and were then clarified by 2013 anti-parallel analysis [9] that shows \(t\sim 0\) \(\omega=0\) contours that are more intense than those in figure 8. If the authors of that recent review [21] disagree with the analysis behind that conclusion, what would be useful would be a submission to Physical Review Fluids that applied the centerline diagnostics introduced here to another one of their recent calculations.

## Acknowledgements

I would like to thank the Isaac Newton Institute for Mathematical Sciences for support and hospitality during the programme _Mathematical Aspects of Fluid Turbulence: where do we stand?_ in 2022, when work on this paper was undertaken and supported by grant number EP/R014604/1, including interactions with, among others, A. Leonard and M. Musso. I thank E. Brambley at Warwick for clarifying crucial elements during the final preparation. Computing resources have been provided by the Scientific Computing Research Technology Platform at the University of Warwick.
2302.04010
Transport properties of a 1000-nm HgTe film: the interplay of surface and bulk carriers
We report on a systematic study of the transport properties of a 1000-nm HgTe film. Unlike thinner, strained HgTe films, which are known as high-quality three-dimensional (3D) topological insulators, the film under study is much thicker than the limit of pseudomorphic growth of HgTe on a CdTe substrate. Therefore, it is expected to be fully relaxed and to have the band structure of bulk HgTe, i.e., a zero gap semiconductor. Nevertheless, owing to the band inversion, the two-dimensional (2D) topological surface states are still expected to exist. To check this claim we studied the classical and quantum transport response of the system. We demonstrate that by tuning the top-gate voltage one can change from electron-dominated to hole-dominated transport. The highest electron mobility is found to be more than $300 \times 10^3$ cm$^2$/Vs. The system exhibits Shubnikov-de Haas (SdH) oscillations with a complicated pattern, with up to 5 independent frequencies in the corresponding Fourier spectra. They are attributed to the topological surface states, Volkov-Pankratov states and spin-degenerate bulk states in the accumulation layer near the gate. The observed peculiarities of the quantum transport are the strong SdH oscillations of the Hall resistance and the suppressed oscillatory response of the topological surface states.
M. L. Savchenko, D. A. Kozlov, N. N. Mikhailov, S. A. Dvoretsky, Z. D. Kvon
2023-02-08T11:50:03Z
http://arxiv.org/abs/2302.04010v1
# Transport properties of a 1000-nm HgTe film: the interplay of surface and bulk carriers

###### Abstract

We report on a systematic study of the transport properties of a 1000-nm HgTe film. Unlike thinner, strained HgTe films, which are known as high-quality three-dimensional (3D) topological insulators, the film under study is much thicker than the limit of pseudomorphic growth of HgTe on a CdTe substrate. Therefore, it is expected to be fully relaxed and to have the band structure of bulk HgTe, i.e., a zero gap semiconductor. Nevertheless, owing to the band inversion, the two-dimensional (2D) topological surface states are still expected to exist. To check this claim we studied the classical and quantum transport response of the system. We demonstrate that by tuning the top-gate voltage one can change from electron-dominated to hole-dominated transport. The highest electron mobility is found to be more than \(300\times 10^{3}\,\mathrm{cm}^{2}/\mathrm{Vs}\). The system exhibits Shubnikov-de Haas (SdH) oscillations with a complicated pattern, with up to 5 independent frequencies in the corresponding Fourier spectra. They are attributed to the topological surface states, Volkov-Pankratov states and spin-degenerate bulk states in the accumulation layer near the gate. The observed peculiarities of the quantum transport are the strong SdH oscillations of the Hall resistance and the suppressed oscillatory response of the topological surface states.

## I Introduction

HgTe and HgCdTe crystals and films have been intensively studied for more than 50 years [1; 2]. At first, the most intriguing phenomenon in these systems was a zero or close to zero bulk band gap. It results in a sharp increase of the spin-orbit corrections to the Hamiltonian, which modifies the dispersion of the system and makes it possible to study ultrarelativistic particles in the solid state and to check the theoretical predictions related to their properties. Despite difficulties related to the poor quality and instability of the first HgTe devices, the most important characteristic of the HgTe semiconductor, its dispersion, was obtained, but only for the bulk carriers of the system. It was successfully established how the dispersion depends on structure composition and temperature. Moreover, it has been predicted [3; 4] and accidentally found [5] that HgTe has peculiar non-degenerate surface states that are now called "topological surface states". Following the strong increase of the crystal quality of HgCdTe systems and the clarification of the surface-state properties, especially spin-momentum locking and linear dispersion, there was a reopening of the rich physics in HgTe-based structures [6; 7]. It is now well known that there are topologically non-trivial surface states in strained 2D or quasi-2D HgTe films, which exist at all experimentally accessible Fermi level positions regardless of the existence of a bulk band gap [8; 9; 10]. However, apart from magneto-optic measurements [11; 12; 13; 14; 15] and a short transport report [8], there has been no systematic study of the transport properties of bulk HgTe in which the topological surface states are taken into account. Compared to the previously studied 80- and 200-nm HgTe films, a much thicker 1000-nm film has trivial bulk 3D carriers that can interact with the topological surface states and modify their transport response. Moreover, the large spatial separation of the surface states results in their full electrostatic decoupling, making this object promising for studying only one spin-degenerate topological surface.
In this paper, we report a study of the transport properties of a 1000-nm HgTe film. The analysis and comparison of classical and quantum magnetotransport allow us to identify several groups of carriers. According to our results, the system can be tuned from a low-mobility mixed electron and hole bulk transport regime to the 2D mode, in which high-mobility electrons or holes located in the accumulation layer near the gate dominate the transport and exhibit pronounced SdH oscillations. ## II Methods Measurements are carried out on 1000-nm HgTe films grown by molecular beam epitaxy on a GaAs(013) substrate under the same conditions and with the same layer ordering as for the usual 80- and 200-nm films [9; 10]. However, unlike the previously studied systems, the present structure is clearly a 3D system in terms of the bulk sub-band formation. In Fig. 1 (a) we schematically show a cross-sectional view of the system under study. The HgTe film is placed between thin Cd\({}_{0.6}\)Hg\({}_{0.4}\)Te buffer layers; a Ti/Au gate has been deposited on the 200+200 nm Si\({}_{3}\)N\({}_{4}\)+SiO\({}_{2}\) insulator grown by a low-temperature chemical vapor deposition process. The approximate thickness limit for pseudomorphic growth of a HgTe film on a CdTe substrate with a 0.3% larger lattice constant is about 100 - 150 nm (according to [8] and our experience). We therefore believe that our 10 times thicker HgTe film is fully relaxed to its own lattice constant and is a zero-gap semiconductor [1]. The studied Hall bars have a 50 \(\mu\)m current channel and distances of 100 and 250 \(\mu\)m between the potential probes. Transport measurements were performed using a standard lock-in technique with a driving current in the range of 10\({}^{-10}\) - 10\({}^{-7}\) A in a perpendicular magnetic field \(B\) at a temperature of 0.2 K. The current frequency for the transport measurements was 12 Hz. For the capacitance measurements we mixed the dc bias \(V_{\text{g}}\) with a small ac voltage \(V_{\text{ac}}\) and measured the ac current flowing across the device phase-sensitively. The total capacitance measured in such a way between the metallic top gate and a two-dimensional electron-hole system depends, besides the geometric capacitance, on the quantum capacitance \(e^{2}D\), connected in series and reflecting the finite density of states \(D\) of the system [16; 17]; \(e\) is the elementary charge, \(D\) is the thermodynamic density of states. The ac frequency for the capacitance measurements was in the range of 0.2 - 3 kHz. The frequency independence of the measured resistance and capacitance \(C\) was verified, excluding both the existence of leakage currents and resistive effects. The parasitic capacitance of our setup is about 20 pF. ## III Results and discussion The system under study can host several groups of carriers. There are trivial 3D bulk electrons and holes. An accumulation layer induced by the gate voltage can host either electrons or holes of 2D nature. Besides them, the system is expected to host the 2D topological surface states at all gate voltages [4]. These states are non-degenerate and located near the top (closer to the gate) and bottom (closer to the substrate) surfaces. Additionally, one may expect to detect the response from spin-degenerate Volkov-Pankratov states [4].
They have the same origin as the topological surface states [18; 19]; however, they form only if there is a smooth transition between topologically trivial and non-trivial materials, or if there is strong enough band bending near the boundary of materials with the opposite topology [19]. Note that the high density of states of the 3D carriers pins the Fermi level in the bulk. Therefore, the gate voltage changes the charge state of the system primarily near the gate. ### Classical transport and Drude fitting In Fig. 1 (b) we show examples of the gate voltage dependences of the longitudinal resistance \(\rho_{\text{xx}}\) (black line, left axis, zero magnetic field) and the Hall resistance \(\rho_{\text{xy}}\) (red line, right axis, magnetic field \(B=0.5\) T). The gate voltage dependence of the capacitance \(C\) is presented in panel (c). A similar picture was observed earlier on thinner HgTe films [9; 10] and has the following explanation. The measured capacitance is represented as two capacitors connected in series. The first capacitor reflects the geometric capacitance, whose value is determined by the distance from the gate to the center of the carrier wave function. The second capacitor represents the quantum capacitance, and its value is proportional to the density of states. In our system, due to screening effects, the measured capacitance is sensitive to the density of states of the carriers closest to the gate, i.e., those in the accumulation layer or the topological surface electrons. Carriers located in the bulk have practically no influence on the measured capacitance. At large positive gate voltages the system exhibits electron-dominated transport. Moving \(V_{\text{g}}\) to lower values, we decrease the electron density and increase the longitudinal and Hall resistance, while the capacitance goes down because of a small increase in the distance from the electrons to the gate. A sharp increase of the capacitance in the region from 7.5 to 5 V indicates that the Fermi level enters the valence band for the carriers located near the gate. The capacitance increase is governed by the roughly ten times higher effective mass, and consequently density of states, of holes compared to electrons in HgTe [20]. However, the sign of the Hall resistance still indicates electron-dominated transport, pointing to the co-existence of electrons and holes in this region of gate voltages. A further gate voltage decrease results in both an electron density decrease and a hole density increase.

Figure 1: (a) Schematic cross-section of the structure under study. The 1000-nm HgTe film is placed between thin Cd\({}_{0.6}\)Hg\({}_{0.4}\)Te barrier layers covered by a 200+200 nm Si\({}_{3}\)N\({}_{4}\)+SiO\({}_{2}\) insulator and a metallic gate. Bright red lines represent the surface states on the top and bottom surfaces of the HgTe film. (b) The gate voltage dependences of the longitudinal resistance \(\rho_{\text{xx}}\) (black line, left axis, zero magnetic field) and the Hall resistance \(\rho_{\text{xy}}\) (red line, right axis, magnetic field \(B=0.5\) T). (c) The gate voltage dependence of the capacitance \(C\) measured at a frequency of 222 Hz. The arrow indicates the saturation of the sharp increase of \(C\) that corresponds to the Fermi level position near the valence band top.

Since the electron mobility is higher compared to the hole one, the resistance maximum \(\rho_{\rm xx}^{\rm max}\) is to the left of \(E_{\rm v}\), at about \(V_{\rm g}^{\rm max}\approx 4\,\)V.
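The series combination of geometric and quantum capacitance described above is simple to model numerically. The Python sketch below is a minimal illustration (the function name and all parameter values are our own assumptions, not the device's): it shows how a roughly tenfold higher density of states pulls the measured capacitance toward the geometric value, which is the mechanism invoked to locate the valence band top.

```python
import numpy as np

E_CHARGE = 1.602176634e-19  # elementary charge, C

def measured_capacitance(c_geom, dos, area):
    """Geometric capacitance in series with the quantum capacitance e^2*D*A.

    c_geom : geometric capacitance (F)
    dos    : thermodynamic density of states D (states per J per m^2)
    area   : gated area (m^2)
    """
    c_quantum = E_CHARGE**2 * dos * area
    return 1.0 / (1.0 / c_geom + 1.0 / c_quantum)

# Illustrative values only: electron-like vs. ~10x larger hole-like DOS.
c_geom = 50e-12  # F (assumed)
for dos in (8e35, 8e36):
    c = measured_capacitance(c_geom, dos, area=1e-8)
    print(f"D = {dos:.0e} 1/(J m^2): C = {c * 1e12:.1f} pF")
```

Running this gives roughly 40 pF for the lower density of states and close to the 50 pF geometric value for the higher one, mimicking the sharp capacitance rise seen when the Fermi level enters the valence band.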
At about \(-3\,\)V the Hall resistance changes its sign, confirming the transition to hole-dominated transport, though electrons are still present in the system (see below). Thus, depending on \(V_{\rm g}\), either holes or electrons dominate the transport response. In Fig. 2 (a) and (b) we show examples of the magnetic field dependences of the longitudinal resistance \(\rho_{\rm xx}\) and the Hall resistance \(\rho_{\rm xy}\) measured at \(V_{\rm g}=-20\), \(-6\), \(6\) and \(20\,\)V. There is strong positive magnetoresistance in \(\rho_{\rm xx}(B)\) at all gate voltages (see Supplementary Fig. S2), which together with the nonlinear \(\rho_{\rm xy}(B)\) indicates the co-existence of several types of carriers in the system. At high magnetic fields \(\rho_{\rm xy}\) is determined by the total charge carrier density, resulting in a different sign of the Hall resistance for high positive gate voltages (where there are only electrons) and negative gate voltages (where holes dominate the transport). On the contrary, in weak magnetic fields and at all gate voltages a positive or near-zero slope of \(\rho_{\rm xy}(B)\) is observed (see Supplementary Fig. S2). According to the multi-component Drude model [21; 22; 9; 23], in weak magnetic fields the slope of the Hall resistance is determined, to a large extent, by carriers with high mobility, and it indicates the presence of high-mobility electrons (at all gate voltages) on a background of electrons or holes with much lower mobility. Within the model, the conductivity tensor components are equal to the sums of the partial conductivities: the diagonal component of the conductivity tensor is \(\sigma_{\rm xx}(B)=\sum en_{\rm i}\mu_{\rm i}/\left(1+(\mu_{\rm i}B)^{2}\right)\), and the Hall conductivity is \(\sigma_{\rm xy}(B)=\sum\text{sign}(i)en_{\rm i}\mu_{\rm i}^{2}B/\left(1+(\mu_{\rm i}B)^{2}\right)\), where \(n_{\rm i}\) and \(\mu_{\rm i}\) are the density and mobility of the carriers and \(\text{sign}(i)\) denotes the sign of the carriers in the Hall signal. The Drude model works for any number of groups of carriers; however, the model tolerance is too high to distinguish more than two groups reliably. Therefore, we fit our classical magnetotransport data with the two-component Drude model. It allows us to discern holes and electrons at negative gate voltages, and two types of electrons at positive gate voltages. For the latter case the simulated curves (red lines in Fig. 2) follow the experimental data nearly ideally. In comparison to the electron side, at negative gate voltages the discrepancy between experiment and fitting is larger. At the optimal set of fitting parameters, the simulated resistance at zero magnetic field is higher than in the experiment, indicating a possible underestimation of the carrier mobility in this region. The most likely reason for the poor fit is the dependence of the carrier mobility on the magnetic field, which is not accounted for by the model. Another possible reason for the poor fits is the presence of a third group of carriers. Next, we will analyze the parameters obtained from the fitting and determine the more likely cause. The gate voltage dependences of the electron (\(n_{\rm Drude}^{(1)}\), \(n_{\rm Drude}^{(2)}\)) and hole (\(p_{\rm Drude}\)) densities obtained from the Drude fitting, as well as their mobilities (\(\mu_{\rm e}^{(1)}\), \(\mu_{\rm e}^{(2)}\), \(\mu_{\rm h}\)), are shown in Fig. 3.
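Since the text gives the multi-component Drude formulas explicitly, a short numerical sketch may be helpful. The Python snippet below (our own helper names; the carrier parameters are illustrative assumptions, not the paper's fit results) evaluates the two-component partial conductivities and inverts the tensor to obtain \(\rho_{\rm xx}\) and \(\rho_{\rm xy}\), which is essentially the forward model that the fitting procedure iterates over.

```python
import numpy as np

E = 1.602176634e-19  # elementary charge, C

def drude_sigma(B, carriers):
    """Multi-component Drude conductivities per the formulas in the text.

    carriers: list of (n_i [m^-2], mu_i [m^2/Vs], sign_i) tuples, where
    sign_i is the sign of that carrier group's Hall contribution.
    """
    sxx = np.zeros_like(B, dtype=float)
    sxy = np.zeros_like(B, dtype=float)
    for n, mu, sign in carriers:
        sxx += E * n * mu / (1.0 + (mu * B) ** 2)
        sxy += sign * E * n * mu**2 * B / (1.0 + (mu * B) ** 2)
    return sxx, sxy

def to_resistivities(sxx, sxy):
    """Invert the 2x2 conductivity tensor to rho_xx, rho_xy."""
    denom = sxx**2 + sxy**2
    return sxx / denom, sxy / denom

# Illustrative two-component example (values assumed): high-mobility
# electrons coexisting with low-mobility holes, as on the hole side.
B = np.linspace(0.01, 5.0, 500)  # tesla
carriers = [(3e15, 20.0, +1),    # electrons: 3e11 cm^-2, 2e5 cm^2/Vs
            (2e16, 1.5, -1)]     # holes:     2e12 cm^-2, 1.5e4 cm^2/Vs
rho_xx, rho_xy = to_resistivities(*drude_sigma(B, carriers))
```

Wrapping `drude_sigma` in a least-squares routine over \((n_{\rm i},\mu_{\rm i})\) reproduces the kind of fit shown by the red lines in Fig. 2.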
At negative \(V_{\rm g}\), where holes (red spheres) and electrons (black spheres) coexist, the performed fitting provides their total densities (\(n_{\rm Drude}\) and \(p_{\rm Drude}\)) and average mobilities (\(\mu_{\rm e}\) and \(\mu_{\rm h}\)). At positive \(V_{\rm g}\) the first electron density \(n_{\rm Drude}^{(1)}\) (orange circles) increases nearly linearly with the gate voltage, while the second one, \(n_{\rm Drude}^{(2)}\) (green circles), is found to be gate independent, so the total electron density \(n_{\rm Drude}=n_{\rm Drude}^{(1)}+n_{\rm Drude}^{(2)}\) also increases linearly with \(V_{\rm g}\), as expected. The mobility of the first group of electrons \(\mu_{\rm e}^{(1)}\) (orange circles) is about 10 times higher compared to the mobility of the second group \(\mu_{\rm e}^{(2)}\) (green circles). The maximum value of the averaged electron mobility \(\mu_{\rm e}=\sum\mu_{\rm e}^{(i)}n_{\rm e}^{(i)}/(n_{\rm e}^{(1)}+n_{\rm e}^{(2)})\) is about 3\(\times 10^{5}\,\)cm\({}^{2}\)/Vs, and the maximum hole mobility values are around \(2\times 10^{4}\,\)cm\({}^{2}\)/Vs, which is consistent with previous studies of the thinner 80- and 200-nm HgTe films [9; 10], indicating the high quality of the growth despite the relaxation of the HgTe lattice. The total electron density \(n_{\rm Drude}\) depends linearly on \(V_{\rm g}\) on the positive-gate-voltage side with a slope of about \(4.5\times 10^{10}\,\)cm\({}^{-2}\)/V, which is within the range calculated from the electrostatics of the device (\(4.9\times 10^{10}\,\)cm\({}^{-2}\)/V if the carriers are located at the interface between CdHgTe and HgTe). The electron filling rate shows a tendency to decrease at \(V_{g}\lesssim 5\ldots 7.5\,\)V, which is consistent with our valence-band-top mapping obtained from the capacitance measurements: when electrons and holes coexist, they share the total filling rate proportionally to their densities of states, resulting in the decrease of the electron filling rate. In the region of \(V_{g}=-5...5\,\)V the fitting gives inadequate results because of the vicinity of the charge neutrality point. At larger negative gate voltages, the fitting works satisfactorily again, giving a qualitatively correct \(p_{\rm Drude}(V_{g})\) dependence, though the slope is smaller than expected (\(d(n_{\rm Drude}-p_{\rm Drude})/dV_{g}=4\times 10^{10}\,\)cm\({}^{-2}\)/V for \(V_{g}=-13\ldots-7\,\)V). Keeping all this in mind, we plot an extrapolated density \(p_{\rm Drude}\) in Fig. 3 (b) with a dashed line, with the expected \(E_{\rm v}\) point located at \(6\,\)V.

Figure 2: Examples of the magnetic field dependences of \(\rho_{\rm xx}\) (a) and \(\rho_{\rm xy}\) (b) measured at different gate voltages. Red solid lines represent the two-component Drude fittings.

### Quantum transport The system under study exhibits pronounced Shubnikov-de Haas (SdH) oscillations, shown in Figs. 4 and 6. Their analysis in 3D systems makes it possible to map out the Fermi surface, whereas in 2D structures the carrier density can be extracted directly from the oscillation period. Carriers in the studied devices have a De Broglie wavelength that is much smaller than the thickness of the HgTe film, so they should be considered bulk carriers. Due to the zero energy gap and the arbitrary initial distribution of the electrostatic potential, bulk electrons and holes can exist simultaneously.
At the same time, high-mobility 2D carriers are present in the system too, namely: topological surface states at any gate voltage, as well as electrons and holes in the accumulation layer at non-zero gate voltages. Moreover, magnetotransport measured in parallel magnetic fields (see Supplementary Fig. S1) displays no SdH oscillations, meaning that the SdH oscillations observed in perpendicular magnetic fields should be treated as coming from the 2D carriers only. Let us first focus on positive gate voltages. Fig. 4 (a) demonstrates examples of the SdH oscillations measured in \(\rho_{\mathrm{xx}}(B)\) at \(V_{\mathrm{g}}=20\), \(18\) and \(16\) V. The values of the gate voltages in the figure are deliberately chosen close to each other in order to demonstrate the evolution of the oscillation pattern. However, the analysis of the oscillations was also carried out in a wider gate voltage range (see Supplementary Fig. S3 and S4).

Figure 3: The results of the two-component Drude fitting of the classical magnetotransport data. (a) The gate voltage dependences of the average electron \(\mu_{\mathrm{e}}\) (black spheres) and hole \(\mu_{\mathrm{h}}\) (red spheres) mobilities, and the partial electron mobilities \(\mu_{\mathrm{e}}^{(1)}\) (orange circles) and \(\mu_{\mathrm{e}}^{(2)}\) (green circles). (b) The gate voltage dependences of the total electron \(n_{\mathrm{Drude}}\) (black spheres) and hole \(p_{\mathrm{Drude}}\) (red spheres) densities, and the partial electron densities \(n_{\mathrm{Drude}}^{(1)}\) (orange circles) and \(n_{\mathrm{Drude}}^{(2)}\) (green circles). A solid black line illustrates the filling rate in the structure \(\alpha=4.4\times 10^{11}\,\mathrm{cm}^{-2}/\mathrm{V}\), dashed black and red lines correspond to the suggested density behavior, and the \(E_{\mathrm{v}}\) position comes from the capacitance measurements in Fig. 1 (c).

Each
In contrast, the third groups of carriers (\(f_{4}^{\rm e}\) and \(f_{5}^{\rm e}\)) shows different behaviour, typical for Rashba-splitted electrons [24; 25]: at small magnetic fields it shows a beating oscillation pattern which is reflected by two closely spaced Fourier peaks with a transition to a separated Landau levels at higher fields with the frequency \(f_{7}^{\rm e}=f_{4}^{\rm e}+f_{5}^{\rm e}\). We also found that \(2f_{6}^{\rm e}\neq f_{8}^{\rm e}\) (upper traces in the Fig. 5). The eighth Fourier peak has no harmonics of either higher or lower frequency and therefore corresponds to a fourth group of carriers without spin degeneracy, i.e., topological surface electrons. One should note that the amplitude of this peak is too small for carriers of a such high density and respective contribution to the total conductivity. Although the nature of this phenomenon is not clear, the trend was already observed during the comparison of \(80\,\mathrm{nm}\) HgTe films, where the topological electrons on the top surface made the main contribution to the SdH oscillations [9; 23], with \(200\,\mathrm{nm}\), where their contribution was already several times weaker compared to other carriers [10]. Extrapolating this trend to the fully relaxed \(1000\,\mathrm{nm}\) film under study, one should expect that the peak in the Fourier spectrum from the topological electrons might be significantly damped. Thus, we believe that electron SdH oscillations are formed by four groups of carriers, three of which have spin degeneracy and one does not. The existence of 2D carriers with spin degeneracy is not surprising. They can be both bulk electrons in the accumulation layer formed at positive gate voltages (see insert in the Fig. 4) or Volkov-Pankratov states [4] (VPS). The latter have the same origin as the topological surface states [18; 19], but they form only if there is a smooth transition between topologically trivial and non-trivial materials, or if there is strong enough band bending near the boundary of materials with the opposite topology [19]. We expect to have a sharp interface between HgTe and CdHgTe barriers since it can be controlled well during the epitaxy growth [15], and suggest the induced by the gate band bending creates conditions for the VPS formation. For our study, the difference between VPS and bulk electrons in the accumulation layer is immaterial. Therefore, for convenience and to avoid confusion with bulk electrons located far from the gate outside the accumulation layer, we will refer to all three detected groups of spin-degenerated carriers as accumulation layer electrons. The accumulation layer electrons (ALE) together with topological surface states (TSS) give a self-consistent picture. First, we introduce their partial densities as \(n_{\rm ALE}^{\rm k=1\ldots 3}\) and \(n_{\rm TSS}\) accordingly. The electron density of each group can be determined by the formula \(g_{s}^{i}\frac{e}{\hbar}f_{i}^{\rm e}\), where \(i=1\ldots 8\) is the peak index, \(g_{s}^{i}\) is the appropriate spin degeneracy, \(h\) is the Planck constant: \(n_{\rm ALE}^{\rm 1}=2\frac{e}{\hbar}f_{1}^{\rm e}=\frac{e}{\hbar}f_{2}^{\rm e}\), \(n_{\rm ALE}^{\rm 2}=2\frac{e}{\hbar}f_{3}^{\rm e}=\frac{e}{\hbar}f_{6}^{\rm e}\), \(n_{\rm ALE}^{\rm 3}=\frac{e}{\hbar}f_{4}^{\rm e}+\frac{e}{\hbar}f_{5}^{\rm e}=\frac{e}{\hbar}f_{7 }^{\rm e}\) and \(n_{\rm TSS}=\frac{e}{\hbar}f_{8}^{\rm e}\). The density dependencies on the gate voltage obtained in this manner are shown in Fig. 7(a). 
The total density of 2D carriers obtained, \(n_{\rm SdH}^{\rm\Sigma}\), agrees well with the density \(n_{\rm Drude}^{\rm(1)}\) of high-mobility electrons determined from the two-component Drude model fitting for positive gate voltages. Second, carriers with a higher density also have a higher filling rate, which indicates their closer location to the gate, in line with the model of the triangular-potential accumulation layer.

Figure 5: (a) The gate voltage dependence of the SdH frequencies and their superpositions. Insert: the suggested band diagram of the accumulation layer for positive gate voltages. \(E_{i=1\ldots 3}\) denote the edges of the electron sub-bands in the accumulation layer, DP is the Dirac point of the topological surface electrons, \(E_{\rm F}\) is the Fermi level. (b) The normalized difference between the indicated SdH frequencies (shifted on the vertical axis by 0.5 each for clarity). Error bars come from the halfwidths of the corresponding peaks.

Third, only one group of accumulation layer electrons, the one with the highest density, exhibits Rashba splitting. This fact is consistent with the assumption that this group of electrons is closest to the gate (with the exception of the topological electrons, for which the Rashba splitting is irrelevant), where the electric field is strongest. To sum up the electron side, we found that four groups of carriers (three groups of accumulation layer electrons and one topological) contribute to the SdH oscillations, and their total density matches the density of high-mobility carriers obtained from the two-component Drude fitting. The low-mobility electrons, apparently located in the bulk and not affected by the gate, act as a background. We also do not see any traces of the back-surface topological electrons. This seems to be due to their low density. Now we switch to the negative gate voltages. The obtained \(\rho_{\rm xx}(B)\) traces for \(V_{\rm g}=-20,-10\) and \(-5\) V are shown in Fig. 6 (a). Following the same procedure as for the electron side, we extracted the oscillatory part of the conductivity, shown in Fig. 6 (b). The Fourier spectra of the oscillations are shown in Fig. 6 (c). Two distinct peaks can be identified, marked \(f_{1}^{\rm h}\) and \(f_{2}^{\rm h}\), which correspond to the formation of spin-degenerate and resolved Landau levels, respectively. The spin-degenerate frequency \(f_{1}^{\rm h}\) is seen better at gate voltages closer to zero and at lower \(B\), while the \(f_{2}^{\rm h}\) frequency is more pronounced under the opposite conditions, reflecting the change in the relation between the Zeeman and orbital splittings. The gate voltage dependences of \(f_{2}^{\rm h}\) and \(2f_{1}^{\rm h}\) are shown in Fig. 6 (d), where it is clearly seen that the ratio \(f_{2}^{\rm h}/f_{1}^{\rm h}=2\) holds. Additionally, the normalized difference between the indicated SdH frequencies \((f_{2}^{\rm h}-2f_{1}^{\rm h})/f_{2}^{\rm h}\) also shows an almost zero value in Fig. 6 (e), proving that these peaks have the same origin. In single-component 2D systems, the SdH oscillation frequency reflects the charge carrier density of electrons or holes. In multi-component systems, where holes coexist with electrons, as in HgTe films with a thickness of 80-200 nm [9; 26; 27; 10], the period of oscillations in the valence band reflects the differential density of holes and electrons, i.e., \((p-n)_{\rm SdH}=2(e/h)f_{1}^{\rm h}\). The dependence \((p-n)_{\rm SdH}(V_{g})\) obtained in this manner is shown in Fig. 7 (a).
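The conversion from SdH frequencies to densities described above is mechanical enough to sketch in code. The following Python fragment is a minimal version of the pipeline (function names and the degree-5 background polynomial are our own assumptions; the paper does not specify its smoothing procedure): resample \(\sigma_{\rm xx}\) on a uniform \(1/B\) grid, subtract the monotonic part, Fourier-transform, and convert a peak frequency \(f\) (in tesla) into a density via \(n=g_{s}(e/h)f\).

```python
import numpy as np

E = 1.602176634e-19   # elementary charge, C
H = 6.62607015e-34    # Planck constant, J*s

def sdh_fourier(B, sigma_xx, n_points=4096):
    """Fourier spectrum of the normalized conductivity oscillations in 1/B."""
    inv_b = 1.0 / B[::-1]            # ascending 1/B grid (B assumed ascending)
    sig = sigma_xx[::-1]
    grid = np.linspace(inv_b.min(), inv_b.max(), n_points)
    sig_u = np.interp(grid, inv_b, sig)
    smooth = np.polyval(np.polyfit(grid, sig_u, 5), grid)  # crude <sigma_xx>
    osc = (sig_u - smooth) / sig_u[0]                      # Delta_sigma / sigma^0
    spec = np.abs(np.fft.rfft(osc * np.hanning(n_points)))
    freqs = np.fft.rfftfreq(n_points, d=grid[1] - grid[0])  # in tesla
    return freqs, spec

def density_from_frequency(f_tesla, spin_degeneracy=2):
    """Carrier density n = g_s * (e/h) * f for a 2D subband (in m^-2)."""
    return spin_degeneracy * E / H * f_tesla
```

With `spin_degeneracy=2` this gives the ALE densities; with `spin_degeneracy=1` it gives \(n_{\rm TSS}\) from \(f_{8}^{\rm e}\); and on the hole side the same relation applied to \(f_{1}^{\rm h}\) yields the differential density \((p-n)_{\rm SdH}=2(e/h)f_{1}^{\rm h}\).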
It is clearly seen that \((p-n)_{\rm SdH}\) shows systematically lower values than \(p_{\rm Drude}\). One can extrapolate both the \((p-n)_{\rm SdH}(V_{g})\) and \(n_{\rm SdH}^{\Sigma}(V_{g})\) dependences to zero and find that they cross the horizontal axis at the same point, namely at 3 V. Apparently, this is the charge neutrality point CNP\({}_{\text{2D}}\) for all 2D carriers (both electrons and holes located in the accumulation layer and topological electrons). The position of this point is consistent with the valence band top position, located to the right of CNP\({}_{\text{2D}}\), at 5...7.5 V. Fig. 7 (b) summarizes our findings. Here we show the hole \(p_{\rm Drude}\) and total electron \(n_{\rm Drude}\) densities obtained from the Drude fitting, as well as the partial 2D \(n_{\rm 2D}=n_{\rm Drude}^{(1)}\) and 3D \(n_{\rm 3D}=n_{\rm Drude}^{(2)}\) densities, for the electrons located near the gate and in the bulk, respectively. At high positive \(V_{\rm g}\) there is a mixture of 3D and 2D electrons in the system. The latter are high-mobility electrons consisting of topological surface electrons, Volkov-Pankratov electrons and trivial electrons in the accumulation layer. A change of the gate voltage results in a change of the profile of the electrostatic potential of the accumulation layer, while the Fermi level of HgTe is pinned by the 3D bulk electrons of constant density \(n_{\rm 3D}\). At lower gate voltages, at \(V_{\rm g}=5\ldots 7.5\) V, we start to introduce 2D holes into the accumulation layer. From our data it is not clear whether electrons and holes in the accumulation layer co-exist at \(V_{g}<5\) V; however, it is possible (for instance, a coexistence of 2D holes and topological electrons).

Figure 6: (a) Examples of the Shubnikov-de Haas oscillations measured in \(\rho_{\rm xx}(B)\) at negative gate voltages. (b) The corresponding normalized conductivity oscillations \(\Delta\sigma_{\rm xx}/\sigma_{\rm xx}^{0}=(\sigma_{\rm xx}\,-<\sigma_{\rm xx}>)/\sigma_{\rm xx}^{0}\) on a \(1/B\) scale, where \(<\sigma_{\rm xx}>\) is the monotonic part of the conductivity and \(\sigma_{\rm xx}^{0}\) is the conductivity at zero field. (c) The corresponding normalized Fourier spectra of the conductivity oscillations. The solid lines correspond to fits by Gaussian functions. The center of each peak is indicated by the corresponding frequency \(f_{i}^{\rm h}\). (d) The gate voltage dependence of the SdH frequencies \(f_{i}^{\rm h}\) and their superpositions. (e) The normalized difference between the indicated SdH frequencies \((f_{2}^{\rm h}-2f_{1}^{\rm h})/f_{2}^{\rm h}\) proves they have the same origin. Error bars come from the halfwidths of the corresponding peaks.

On the
Perhaps, their proximity to the scattering centers (the CdHgTe/HgTe interface) plays a role here, and the topological protection against backscattering [7, 6], which may increase the transport scattering time \(\tau_{tr}\), has a little or no effect on the quantum time \(\tau_{q}\), responsible for the SdH oscillations amplitude. The second feature for discussion is anomalously strong SdH oscillations observed in the Hall resistance at negative gate voltages (Fig. 8). In comparison to oscillations in \(\rho_{\text{xx}}\), they are characterized by the same period and nearly the same amplitude while having an opposite phase [28, 29, 30, 31]. In general, the SdH oscillations of both diagonal and Hall components of the resistivity tensor from the oscillatory density of states [32]. How Figure 7: (a) The comparison of electron and hole densities obtained from SdH oscillations and from the Drude fitting (data from Fig. 3 (b)). The densities obtained from the Drude fitting shown with red spheres (\(p_{\text{Drude}}\)) and orange circles (\(n_{\text{Drude}}^{(1)}\)). The SdH oscillations period reflects the densities of 2D carriers located in the vicinity of the gate, namely in the accumulation layer and topological surface states, shown by triangles. In the valence band it reflects the differential denisty \((p-n)_{\text{SdH}}\) (magenta), while on the electron side we are able distinguish three types of the accumulation layer electrons (\(n_{\text{ALE}}^{1,3}\), olive, violet and brown, the color matches with the ones in the Fig. 5) and topological surface electrons (\(n_{\text{TSS}}\), cyan). The total 2D electron density \(n_{\text{SdH}}^{\Sigma}=\sum n_{\text{ALE}}^{1,3}+n_{\text{TSS}}\) satisfactorily matches to the high-mobility electron density \(n_{\text{Drude}}^{(1)}\). (b) The combined gate voltage – density map, \(n_{\text{Drude}}\) and \(p_{\text{Drude}}\) have the same color code as in Fig. 3 (b) and are shown as black and red spheres, respectively. Orange circles represent 2D electrons of density \(n_{\text{2D}}\), green circles – bulk 3D electrons of density \(n_{\text{3D}}\) (see text for details). ever, the oscillating part of \(\rho_{\rm xy}\) survives only in the case of short-range scattering [28; 32], that we expect as the main scattering mechanism in the accumulation layer. Next, according to the theories [28; 29; 31], the amplitude of oscillations in \(\rho_{\rm xy}\) is additionally damped in comparison to the one in \(\rho_{\rm xx}\) with a damping factor of \(1/\mu B\). In our case \(1/\mu B\ll 1\), however the observed oscillation amplitude is nearly the same for both \(\rho_{\rm xx}\) and \(\rho_{\rm xy}\), which is anomalous. Some differences between experiment and theory for the SdH amplitude in thinner, \((5-30)\,\)nm-thick, HgTe systems was also observed in [31], but apart from our findings, there the main ratio \(\Delta\rho_{\rm xx}\gg\Delta\rho_{\rm xy}\) holds. We also note that, despite thinner, 20-, 80-, and 200-nm HgTe quantum wells host electrons and holes of similar properties, we did not observed such pronounced SdH oscillations of \(\rho_{\rm xy}\) there. ## IV Conclusion In summary, we have shown that both 2D and 3D carriers may present, depending on the Fermi level position in 1000-nm HgTe film. The 2D carriers are located on the interface between HgTe and CdHgTe (topological electrons) or in the vicinity of it (trivial electrons or holes, or Volkov-Pankratov electrons), in formed by the gate voltage accumulation layer. 
Both 2D electrons and holes exhibit pronounced Shubnikov-de Haas oscillations, sensitive to the perpendicular component of the magnetic field. The 3D electrons are located in the bulk, act as a separate classical conductive channel and pin the Fermi level, making this system an ideal candidate for further studies of the quantum Hall effect reservoir model [33; 34]. ## V Acknowledgements The work is supported by RFBR Grant No. 18-32-00138.
2308.11320
Continuous Variable Quantum Key Distribution in Multiple-Input Multiple-Output Settings
We investigate quantum key distribution (QKD) in optical multiple-input-multiple-output (MIMO) settings. Such settings can prove useful in dealing with harsh channel conditions as in, e.g., satellite-based QKD. We study a $2\times2$ setting for continuous variable (CV) QKD with Gaussian encoding, heterodyne detection, and reverse reconciliation. We present our key rate analysis for this system and compare it with single-mode and multiplexed CV QKD scenarios. We show that we can achieve multiplexing gain using multiple transmitters and receivers even if there is some crosstalk between the two channels. In certain cases, when there is nonzero correlated excess noise in the two received signals, we can even surpass the multiplexing gain.
Shradhanjali Sahu, Ahmed Lawey, Mohsen Razavi
2023-08-22T09:52:33Z
http://arxiv.org/abs/2308.11320v1
# Continuous Variable Quantum Key Distribution in Multiple-Input Multiple-Output Settings ###### Abstract We investigate quantum key distribution (QKD) in optical multiple-input-multiple-output (MIMO) settings. Such settings can prove useful in dealing with harsh channel conditions as in, e.g., satellite-based QKD. We study a \(2\times 2\) setting for continuous variable (CV) QKD with Gaussian encoding, heterodyne detection, and reverse reconciliation. We present our key rate analysis for this system and compare it with single-mode and multiplexed CV QKD scenarios. We show that we can achieve multiplexing gain using multiple transmitters and receivers even if there is some crosstalk between the two channels. In certain cases, when there is nonzero correlated excess noise in the two received signals, we can even surpass the multiplexing gain. Quantum key distribution, multiple input multiple output, Quantum Cryptography, CV QKD ## I Introduction Quantum key distribution (QKD) enables two remote parties to securely exchange a secret key in the presence of potential eavesdroppers [1]. Point-to-point QKD links are, however, limited, in terms of secret key rate (SKR) versus distance, to fundamental bounds that depend on the transmissivity of the channel [2] and the noise in the system. To improve the total SKR, we can use multiplexing techniques to create multiple parallel channels for key exchange [3]. Ideally these channels need to be independent, in which case the SKR would increase linearly with the number of channels. In some scenarios, however, this may not be easily feasible. For example, in satellite-based QKD [4] with spatial multiplexing, beam scattering and the scintillation effects in the atmospheric part of the link may result in crosstalk between parallel channels [5]. The same situation may happen in QKD over multi-core fiber [6]. In such cases, the key rate per single-mode channel may drop considerably because of the crosstalk noise generated by other channels. The above scenarios resemble the situation we face in the well-studied multiple-input multiple-output (MIMO) channels in wireless communications. In this paper, we investigate how MIMO techniques can help QKD systems in harsh channel conditions. Satellite-to-ground links are often highly lossy for QKD purposes. The geometric loss in free space, augmented by beam wandering and fading effects in the atmospheric part of the link, as well as the limited size of antennas, leads to a considerable amount of loss. For example, the total channel loss observed in recent experiments done by the Chinese satellite, Micius, is roughly 30-40 dB [7]. This can go up to 85 dB for geostationary satellites [8]. This has resulted in focusing mainly on discrete-variable (DV) protocols for satellite-based QKD [9], and in discounting the continuous-variable (CV) options, which often do not perform well when channel loss is high [10]. CV-QKD, however, offers some advantages over DV-QKD if its similarity to coherent communications systems is properly exploited. This could allow us to use well-known techniques in wireless communications to improve system performance. In particular, the combination of CV-QKD with MIMO could be interesting because we can recover phase and amplitude information that are critical to the operation of CV systems. In this work, we examine the performance of CV-QKD systems in MIMO settings.
For simplicity, we consider the \(2\times 2\) scenario, and model the attack by generalising the ideas in the entangling cloner attack [11]. We work out the SKR, and its dependence on relevant parameters, and show how such parameters can be estimated from the corresponding covariance matrix. Our results show that not only the SKR improves in this multi-antenna setting but also the system could become more resilient to loss. The rest of the paper is organized as follows. In Sec. II, we describe the setting of interest in more detail, followed by our security analysis in Sec. III. We present and discuss our numerical results in Sec. IV and conclude the paper in Sec. V. ## II Problem Description In this work, we consider a CV-QKD system over a \(2\times 2\) MIMO channel with two transmitters and two receivers; see Fig. 1. Each transmitter independently uses Gaussian encoding to exchange data with its intended receiver. The channel connecting transmitters and receivers could, however, cause interference such that the received signal on each receiver may contain components from both transmitters. For example, in the satellite-based CV QKD with two transmitting antennas, if the corresponding ground stations are in the vicinity of each other, they may each capture part of the signal intended for the other receiver. The additional fading and phase distortions that may also happen in the atmospheric part of the link can further adversely affect system performance. For instance, even if we use only one transmitting antenna, but two nearby ground stations to improve our collection efficiency, the signal received on each telescope may have different phase and amplitude. A simple summation of the two received signals will not then necessarily provide good correlation with the transmitted signal. Similarly, in the \(2\times 2\) case, treating the received signals independently as in two multiplexed systems could see substantial decrease in the SKR due to the crosstalk noise coming from the other channel. In both cases, proper MIMO based postprocessing is needed to extract key information from the received signals. Our objective is to find the SKR achievable in the MIMO setting when such considerations are accounted for. Here is the detailed description of the protocol used in our MIMO setting, which is based on the CV QKD protocol with Gaussian modulated coherent states and heterodyne detection [12]. There are three parts in our protocol as follows: **(a) Quantum communication:** In this part of the protocol, quantum states are sent over the quantum channel by Alice, the transmitter node, to Bob, the receiver node, to be used for secret key generation. Here, Alice prepares two coherent states, \(|x_{a_{1}}+ip_{a_{1}}\rangle\) and \(|x_{a_{2}}+ip_{a_{2}}\rangle\), where \(a_{1}=\{x_{a_{1}},p_{a_{1}}\}\) and \(a_{2}=\{x_{a_{2}},p_{a_{2}}\}\), respectively, represent two sets of i.i.d Gaussian random variables with \(0\) mean and variances \(V_{a_{1}}\) and \(V_{a_{2}}\). Upon receiving the signals, Bob performs a heterodyne measurement on them to, respectively, obtain \(b_{1}=\{x_{b_{1}},p_{b_{1}}\}\) and \(b_{2}=\{x_{b_{2}},p_{b_{2}}\}\). This part will be repeated many times to come up, in the asymptotic case, with an infinitely large set of data points. **(b) MIMO processing:** In this step, Alice and Bob choose how they wish to process the two sets of data that they have shared via part (a). 
Here, we consider two cases: **Case 1, Selection diversity:** Alice and Bob, consider four sets of data \(\{(a_{1},b_{1})\}\), \(\{(a_{1},b_{2})\}\), \(\{(a_{2},b_{1})\}\) or \(\{(a_{2},b_{2})\}\) to extract the secret key from. The option that offers the highest SKR, let's denote it by \(\{(a,b)\}\), is chosen for key exchange. This is akin to selection diversity techniques for receiver spatial diversity in wireless communications. **Case 2, Full MIMO:** Alice and Bob consider the full set of input-output data, \(\{(a_{1},a_{2}),(b_{1},b_{2})\}\), to obtain secret keys. This step should provide us with a recipe for how to process the above set of data to come up with a correlated set of continuous data points, \(\{(a,b)\}\), that can be used by Alice and Bob for key extraction. In our work, we focus on the maximum SKR that can, in principle, be achieved using the ideal MIMO processing. **(c) Classical post-processing:** Once the MIMO processing approach is chosen, discretization, reverse reconciliation and privacy amplification steps are performed on \(\{(a,b)\}\) as for the single-mode CV QKD [12], to obtain a secret key. In our security analysis, the equivalent entanglement based (EB) picture is considered as shown in Fig. 1. In the EB scenario, Alice prepares two two-mode squeezed vacuum (TMSV) states. She measures one mode of each TMSV state using heterodyne detection to project the other mode into coherent states, which she sends to Bob over the quantum channel. The covariance matrix for the initial state of optical modes \(A\) and \(A^{\prime}\), respectively, represented, in Fig. 1, by annihilation operators \((\hat{a}_{1},\hat{a}_{2})\) and \((\hat{a}^{\prime}_{1},\hat{a}^{\prime}_{2})\), is given by \[\boldsymbol{\gamma_{AA^{\prime}}}=\begin{pmatrix}V_{a_{1}}\mathds{1}_{2}&k \sigma_{z}\\ k\sigma_{z}&V_{a_{1}}\mathds{1}_{2}\end{pmatrix}\oplus\begin{pmatrix}V_{a_{2}} \mathds{1}_{2}&l\sigma_{z}\\ l\sigma_{z}&V_{a_{2}}\mathds{1}_{2}\end{pmatrix}, \tag{1}\] where \(k=\sqrt{V_{a_{1}}^{2}-1}\) and \(l=\sqrt{V_{a_{2}}^{2}-1}\). Here, \(\oplus\) refers to the direct sum, \(\mathds{1}_{n}\) is the identity operator of dimension \(n\), and \(\sigma_{z}\) is Pauli operator \(Z\). Note that the first block in \(\boldsymbol{\gamma_{AA^{\prime}}}\) corresponds to \((\hat{a}_{1},\hat{a}^{\prime}_{1})\), whereas the second to \((\hat{a}_{2},\hat{a}^{\prime}_{2})\). ## III Security Analysis The secret key rate for a CV-QKD system with reverse reconciliation, in the asymptotic limit where infinitely many signals are exchanged, is given by [12] \[K=\max\{0,\beta I(a:b)-\chi_{BE}\}, \tag{2}\] where \(I(a:b)\) is the mutual information between Alice's and Bob's data after MIMO processing, \(\chi_{BE}\) is Holevo bound on Eve's accessible information on Bob's measurements and \(\beta\leq 1\) is the reconciliation efficiency. In a real experiment, \(I(a:b)\) can statistically be found using the data exchanged between Alice and Bob. The key problem is then to upper bound the amount of information that has leaked to Eve, i.e., \(\chi_{BE}\). This gives us a lower bound on the key rate, which will specify the amount of privacy amplification needed in the protocol. In order to upper bound \(\chi_{BE}\), one can use the optimality of Gaussian attacks, which will provide us with a recipe to bound \(\chi_{BE}\) using the measured elements of the covariance matrix between Alice and Bob. 
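For concreteness, the initial covariance matrix of Eq. (1) can be assembled in a few lines. The following numpy sketch uses our own helper names; the \((x,p)\)-per-mode quadrature ordering is an assumption about conventions (consistent with the symplectic form used later in Sec. III-B), and the printed check is merely illustrative.

```python
import numpy as np

I2 = np.eye(2)
SZ = np.diag([1.0, -1.0])  # Pauli Z

def gamma_aa_prime(va1, va2):
    """Covariance matrix of Eq. (1) for Alice's two TMSV states, in SNU,
    with (x, p) ordering per mode and mode order (a1, a1', a2, a2')."""
    k = np.sqrt(va1**2 - 1.0)
    l = np.sqrt(va2**2 - 1.0)
    tmsv = lambda v, c: np.block([[v * I2, c * SZ], [c * SZ, v * I2]])
    g = np.zeros((8, 8))
    g[:4, :4] = tmsv(va1, k)  # first block: (a1, a1')
    g[4:, 4:] = tmsv(va2, l)  # second block: (a2, a2')
    return g

# Illustrative check: the matrix is 8x8 and symmetric.
g = gamma_aa_prime(5.0, 5.0)
print(g.shape, np.allclose(g, g.T))
```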
To get an insight into how the key rate behaves, here we consider a special case, which resembles an extension of the entangling cloner attack [11]. Below we explain this attack, for which we calculate the corresponding covariance matrix and the SKR that can be achieved in Cases 1 and 2 explained in Sec. II.

Fig. 1: Entanglement-based picture of the two-mode CV QKD protocol.

### _Eavesdropping attack_ Here, we take a multimode extension of the entangling cloner attack [11] as a possible candidate for the optimal attack by Eve for Gaussian encoding. In general, the maximum number of modes required to purify Alice and Bob's modes is less than or equal to twice the number of Alice's modes [13]. As a result, for our \(2\times 2\) MIMO channel, it is sufficient to consider four additional environment/Eve's inputs for the optimal unitary dilation of the two-mode channel. For this attack, we assume that Eve uses two TMSV states, with variances \(V_{e_{1}}\) and \(V_{e_{2}}\), as the initial state in the EB picture of Fig. 1. One leg of each TMSV state interacts with Alice's signal via a unitary operator \(\mathbf{U}\), while the other can be stored in quantum memories. This unitary operation effectively replaces the beam splitter in the single-mode case. Due to the extremality of Gaussian states [14] and the optimality of Gaussian attacks [15; 16], and the condition that Eve's state purifies the global state, such a Gaussian attack by Eve maximizes her accessible information. In Fig. 1, the unitary channel transformation matrix \(\mathbf{U}\) maps input annihilation operators to output ones as follows: \[\begin{pmatrix}\hat{b}_{1}\\ \hat{b}_{2}\\ \hat{e}_{1}^{\prime}\\ \hat{e}_{2}^{\prime}\end{pmatrix}=\begin{pmatrix}u_{11}&u_{12}&u_{13}&u_{14}\\ u_{21}&u_{22}&u_{23}&u_{24}\\ u_{31}&u_{32}&u_{33}&u_{34}\\ u_{41}&u_{42}&u_{43}&u_{44}\end{pmatrix}\begin{pmatrix}\hat{a}_{1}^{\prime}\\ \hat{a}_{2}^{\prime}\\ \hat{e}_{1}\\ \hat{e}_{2}\end{pmatrix}, \tag{3}\] where the relevant operators are all specified in Fig. 1. We refer to the upper left \(2\times 2\) block of \(\mathbf{U}\) as the \(\mathbf{H}\)-matrix, which effectively replicates the MIMO channel of the classical case. ### _Covariance matrix calculations_ An alternative way of representing the input-output relationship in (3) is to use the symplectic transformation \(\mathbf{\hat{r}_{f}}=\mathbf{S_{fi}}\mathbf{\hat{r}_{i}}\), where \(\mathbf{\hat{r}_{i}}=[\mathbf{\hat{r}_{a_{1}^{\prime}}},\mathbf{\hat{r}_{a_{2}^{\prime}}},\mathbf{\hat{r}_{e_{1}}},\mathbf{\hat{r}_{e_{2}}}]^{T}\) is the input quadrature operator vector and \(\mathbf{\hat{r}_{f}}=[\mathbf{\hat{r}_{b_{1}}},\mathbf{\hat{r}_{b_{2}}},\mathbf{\hat{r}_{e_{1}^{\prime}}},\mathbf{\hat{r}_{e_{2}^{\prime}}}]^{T}\) is the output quadrature operator vector, with \(\mathbf{\hat{r}_{z}}=[\hat{x}_{z},\hat{p}_{z}]^{T}\), for any annihilation operator \(\hat{z}\), being the vector of the corresponding canonical operators. Here, the symplectic orthogonal matrix \(\mathbf{S_{fi}}\) is given by: \[\mathbf{S_{fi}}=\begin{pmatrix}\mathbf{S_{11}}&\mathbf{S_{12}}&\mathbf{S_{13}}&\mathbf{S_{14}}\\ \mathbf{S_{21}}&\mathbf{S_{22}}&\mathbf{S_{23}}&\mathbf{S_{24}}\\ \mathbf{S_{31}}&\mathbf{S_{32}}&\mathbf{S_{33}}&\mathbf{S_{34}}\\ \mathbf{S_{41}}&\mathbf{S_{42}}&\mathbf{S_{43}}&\mathbf{S_{44}}\end{pmatrix}, \tag{4}\] where \[\mathbf{S}_{mn}=\begin{pmatrix}\Re\{u_{mn}\}&-\Im\{u_{mn}\}\\ \Im\{u_{mn}\}&\Re\{u_{mn}\}\end{pmatrix},\quad m,n=1,\ldots,4. \tag{5}\]
Using the transformation in (4), we can now obtain the corresponding covariance matrix between Bob's modes, \(\mathbf{B}\), and the two modes that Alice retains in the EB picture, \(\mathbf{A}\), as follows: \[\mathbf{\gamma_{AB}}=\mathrm{Tr}_{E}[\mathbf{S}(\mathbf{\gamma_{AA^{\prime}}}\oplus\mathbf{\gamma_{E}})\mathbf{S}^{T}], \tag{6}\] where \(\mathbf{S}=\mathbbm{1}_{4}\oplus\mathbf{S_{fi}}\), to which the necessary re-arrangements must be applied to match the coordinates of the other matrices in (6), \(\mathbf{\gamma_{AA^{\prime}}}\) is Alice's initial covariance matrix given in (1), \(\mathbf{\gamma_{E}}=V_{e_{1}}\mathbbm{1}_{2}\oplus V_{e_{2}}\mathbbm{1}_{2}\), and \(\mathrm{Tr}_{E}\left[\ast\right]\) is obtained by excluding the rows and columns corresponding to Eve's modes. Using (3)-(6), the covariance matrix for modes \(\mathbf{A}\) and \(\mathbf{B}\), with the rows and columns of blocks ordered as the modes \((a_{1},b_{1},a_{2},b_{2})\), is given by: \[\mathbf{\gamma_{AB}}=\begin{pmatrix}\mathbf{\gamma_{A_{1}}}&\mathbf{\gamma_{A_{1}B_{1}}}&\mathbf{\gamma_{A_{1}A_{2}}}&\mathbf{\gamma_{A_{1}B_{2}}}\\ \mathbf{\gamma_{A_{1}B_{1}}}&\mathbf{\gamma_{B_{1}}}&\mathbf{\gamma_{A_{2}B_{1}}}&\mathbf{\gamma_{B_{1}B_{2}}}\\ \mathbf{\gamma_{A_{1}A_{2}}}&\mathbf{\gamma_{A_{2}B_{1}}}&\mathbf{\gamma_{A_{2}}}&\mathbf{\gamma_{A_{2}B_{2}}}\\ \mathbf{\gamma_{A_{1}B_{2}}}&\mathbf{\gamma_{B_{1}B_{2}}}&\mathbf{\gamma_{A_{2}B_{2}}}&\mathbf{\gamma_{B_{2}}}\end{pmatrix}, \tag{7}\] where the covariance matrix elements are given by: \[\mathbf{\gamma_{A_{1}}}=V_{a_{1}}\mathbbm{1}_{2},\mathbf{\gamma_{A_{2}}}=V_{a_{2}}\mathbbm{1}_{2},\] \[\mathbf{\gamma_{A_{1}B_{1}}}=k\mathbf{F}(u_{11}),\mathbf{\gamma_{A_{1}B_{2}}}=k\mathbf{F}(u_{21}),\] \[\mathbf{\gamma_{A_{2}B_{1}}}=l\mathbf{F}(u_{12}),\mathbf{\gamma_{A_{2}B_{2}}}=l\mathbf{F}(u_{22}),\mathbf{\gamma_{A_{1}A_{2}}}=0, \tag{8}\] with \[\mathbf{F}(u)=\begin{pmatrix}\Re\{u\}&\Im\{u\}\\ \Im\{u\}&-\Re\{u\}\end{pmatrix}, \tag{9}\] and \[\mathbf{\gamma_{B_{1}}}=\delta_{1}\mathbbm{1}_{2},\mathbf{\gamma_{B_{2}}}=\mu_{1}\mathbbm{1}_{2},\mathbf{\gamma_{B_{1}B_{2}}}=\begin{pmatrix}\nu_{1}&\nu_{3}\\ -\nu_{3}&\nu_{1}\end{pmatrix}, \tag{10}\] with \[\delta_{1}=\mathbf{v_{1}^{\dagger}v_{1}}=|u_{11}|^{2}f_{1}+|u_{12}|^{2}f_{2}+1+\xi_{b_{1}},\] \[\mu_{1}=\mathbf{v_{2}^{\dagger}v_{2}}=|u_{21}|^{2}f_{1}+|u_{22}|^{2}f_{2}+1+\xi_{b_{2}},\] \[\nu_{1}+i\nu_{3}=\mathbf{v_{1}^{\dagger}v_{2}}=u_{11}^{*}u_{21}f_{1}+u_{12}^{*}u_{22}f_{2}+\xi_{b_{1}b_{2}}, \tag{11}\] where \(\xi_{b_{1}}\) and \(\xi_{b_{2}}\) are, respectively, the excess noise at Bob's first and second receiver, \(\xi_{b_{1}b_{2}}\) represents a cross-correlation excess noise term between the two receivers, and \[f_{1}=(V_{a_{1}}-1),\quad f_{2}=(V_{a_{2}}-1),\] \[\mathbf{v_{1}}=[\sqrt{V_{e_{1}}}u_{13}\quad\sqrt{V_{e_{2}}}u_{14}\quad\sqrt{V_{a_{1}}}u_{11}\quad\sqrt{V_{a_{2}}}u_{12}]^{T},\] \[\mathbf{v_{2}}=[\sqrt{V_{e_{1}}}u_{23}\quad\sqrt{V_{e_{2}}}u_{24}\quad\sqrt{V_{a_{1}}}u_{21}\quad\sqrt{V_{a_{2}}}u_{22}]^{T}. \tag{12}\] Given that all elements of the covariance matrix can be measured in a real experiment, there are several interesting observations we can make from the covariance matrix elements: * It is interesting to see that the MIMO channel, \(\mathbf{H}\), can be obtained directly from the elements presented in (8). This channel parameter estimation relies fully on the quantum communication part of the protocol, and can therefore be used for bounding the Holevo information term.
* Similar to the single-mode CV-QKD case, here we have defined the excess noise at each receiver as the difference between the noise measured at each receiver and the minimum noise that we expect assuming that no Eve is present, i.e., in our model, when \(V_{e_{1}}=V_{e_{2}}=1\) in shot-noise units (SNU). * We, however, have a new excess noise observable, \(\xi_{b_{1}b_{2}}\), that does not exist in single-mode CV-QKD. This new parameter models the correlated excess noise between the two receivers. It would be interesting to see how this parameter affects system performance. We refer to the cases where \(\xi_{b_{1}b_{2}}\neq 0\) as the colored, or correlated, excess noise case. This is in contrast to the typical case in classical MIMO, where the noise at the receiver is modelled by i.i.d. random variables, and we will see how it can actually offer an additional improvement in performance. While the values observed for \(\xi_{b_{1}}\) and \(\xi_{b_{2}}\) can, in principle, be any non-negative real numbers, the values that \(\xi_{b_{1}b_{2}}\) can take should satisfy the following restrictions: 1. All symplectic eigenvalues of \(\boldsymbol{\gamma_{AB}}\) should be greater than or equal to 1 [17; 18]. 2. By applying the Cauchy-Schwarz inequality \[|\boldsymbol{v_{1}^{\dagger}v_{2}}|^{2}\leq(\boldsymbol{v_{1}^{\dagger}v_{1}})(\boldsymbol{v_{2}^{\dagger}v_{2}})\] (13) to (11), we obtain the following relation for the permissible region of \(\xi_{b_{1}b_{2}}\): \[\nu_{1}^{2}+\nu_{3}^{2}\leq\delta_{1}\mu_{1}.\] (14) ### _Key rate analysis_ Based on the channel model given in Sec. III-A and the covariance matrix obtained in Sec. III-B, we can now work out the SKR for the two cases introduced in Sec. II. **Case 1:** Here, we calculate the key rate for one of the four cases mentioned under selection diversity. The SKR for the other cases can be calculated similarly to the case \(\{a_{1},b_{1}\}\) described here. The secret key rate for this case is given by: \[K(a_{1},b_{1})=\beta I(a_{1}:b_{1})-\chi(b_{1}:E), \tag{15}\] where \(E\) represents all four quantum modes that Eve has access to in Fig. 1. In this case, we assume the state shared by Alice and Bob is Gaussian, in which case the mutual information term is given by: \[I(a_{1}:b_{1})=\frac{1}{2}\log_{2}\left[\frac{\mathrm{Det}[V_{A_{1}}]\mathrm{Det}[V_{B_{1}}]}{\mathrm{Det}[V_{A_{1},B_{1}}]}\right], \tag{16}\] where \(V_{A_{1}}\), \(V_{B_{1}}\), and \(V_{A_{1}B_{1}}\) are, respectively, the covariance matrix of Alice's mode \(A_{1}\), of Bob's mode \(B_{1}\), and the joint covariance matrix of the modes \(A_{1}\) and \(B_{1}\). These matrices can be found by applying a heterodyne measurement to the covariance matrix \(\boldsymbol{\gamma_{AB}}\), given as \[\boldsymbol{\gamma_{AB_{\rm het}}}=\frac{\boldsymbol{\gamma_{AB}}+\mathds{1}_{8}}{2}, \tag{17}\] and keeping the relevant modes. The Holevo bound on Eve's information about Bob's measurement is given by: \[\begin{split}\chi(b_{1}:E)&=S(E)-S(E|b_{1})\\ &=S(A_{1}B_{1}A_{2}B_{2})-S(A_{1}A_{2}B_{2}|b_{1}),\end{split} \tag{18}\] where \(S\) represents the von Neumann entropy function. There are standard techniques to calculate this function for Gaussian states, which are summarised in Appendix A. The two entropy terms above can then be found numerically using (23)-(25). Note that, in practice, using selection diversity may make sense if one channel is considerably better than the other one, as otherwise part of the signal sent by the second transmitter could enter as noise on the first receiver.
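Evaluating the entropy terms in (18) reduces to finding symplectic eigenvalues and summing \(g(\lambda)\), as detailed in Appendix A. Below is a minimal Python sketch of that computation, assuming the \((x,p)\)-per-mode quadrature ordering; the function names are our own.

```python
import numpy as np

def symplectic_eigenvalues(V):
    """Symplectic eigenvalues of a 2N x 2N covariance matrix V, found as
    the positive eigenvalues of i*Omega*V (cf. Appendix, Eq. (23))."""
    n_modes = V.shape[0] // 2
    omega = np.kron(np.eye(n_modes), np.array([[0.0, 1.0], [-1.0, 0.0]]))
    ev = np.linalg.eigvals(1j * omega @ V)
    lam = np.sort(np.abs(ev))
    return lam[::2]  # eigenvalues come in +/- pairs; keep one per pair

def g_func(lam):
    """Entropy contribution g(lambda) of Eq. (24); g(1) = 0 by convention."""
    lam = np.atleast_1d(lam).astype(float)
    out = np.zeros_like(lam)
    m = lam > 1.0 + 1e-12
    lp, lm = (lam[m] + 1.0) / 2.0, (lam[m] - 1.0) / 2.0
    out[m] = lp * np.log2(lp) - lm * np.log2(lm)
    return out

def von_neumann_entropy(V):
    """Von Neumann entropy of a Gaussian state with covariance matrix V."""
    return float(np.sum(g_func(symplectic_eigenvalues(V))))
```

For instance, \(S(A_{1}B_{1}A_{2}B_{2})\) is `von_neumann_entropy(gamma_AB)`, and the conditional term follows from the conditioned covariance matrix discussed in the Appendix.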
Alternatively, one may consider the multiplexing option, in which case the total key rate is given by \(K_{total}=K(a_{1}:b_{1})+K(a_{2}:b_{2})\) or \(K_{total}=K(a_{1}:b_{2})+K(a_{2}:b_{1})\), depending on the channel conditions. **Case 2:** Using the full information available to the users, the SKR is given by: \[K(a_{1},a_{2}:b_{1},b_{2})=\beta I(a_{1},a_{2}:b_{1},b_{2})-\chi(b_{1},b_{2}:E). \tag{19}\] Again, under the Gaussian assumption for the joint state of Alice and Bob, the mutual information for this case is given by: \[I(a_{1},a_{2}:b_{1},b_{2})=\frac{1}{2}\log_{2}\left[\frac{\mathrm{Det}[V_{A_{1},A_{2}}]\mathrm{Det}[V_{B_{1},B_{2}}]}{\mathrm{Det}[V_{A_{1},A_{2},B_{1},B_{2}}]}\right]. \tag{20}\] All relevant covariance matrices in the above equation can be found using the covariance matrix \(\boldsymbol{\gamma_{AB_{\rm het}}}\) given in (17). The Holevo bound is also given by: \[\begin{split}\chi(b_{1},b_{2}:E)&=S(E)-S(E|b_{1},b_{2})\\ &=S(A_{1}B_{1}A_{2}B_{2})-S(A_{1}A_{2}|b_{1},b_{2}),\end{split} \tag{21}\] where, again, by finding the relevant symplectic eigenvalues, we can use (23)-(25) to numerically calculate the entropy terms in the above equation. Note that the key rate in (19) is the maximum that can be obtained in the considered MIMO setting, conditioned on using the optimal MIMO postprocessing applicable to the measured data points. ## IV Key Rate Simulation: Results and Discussion In this section, we compare the key rate of the \(2\times 2\) MIMO CV-QKD with that of single-mode CV-QKD. We consider a MIMO channel in which the line-of-sight link carries the same power as the crosstalk one, given by the following channel state matrix: \[\boldsymbol{H}=\sqrt{\frac{T}{2}}\begin{pmatrix}1&i\\ i&1\end{pmatrix}, \tag{22}\] where \(T\) models the channel transmissivity. We assume the excess noise \((\xi_{b_{1}},\xi_{b_{2}})\) at both receivers to be constant and not dependent on the channel loss. For our simulation, we have taken \(\xi_{b_{1}}=\xi_{b_{2}}=0.001\) SNU, and the reconciliation efficiency \(\beta\) is \(0.95\). We optimize the SKR over \(V_{a_{1}}\) and \(V_{a_{2}}\) under the constraint that the maximum total power is \(2\times 4.7\) SNU. This enables the comparison between the single-mode and two-mode MIMO cases, as 4.7 SNU is the optimum power transmitted in the single-mode case at high losses for our chosen parameter values. It turns out that the optimum power allocation in the MIMO setup is the same as an equal power allocation between the two transmitted modes. Let us begin by looking into the effect of the correlated noise parameter, \(\xi_{b_{1}b_{2}}\), on the key rate. Fig. 2 shows the SKR vs \(\xi_{b_{1}b_{2}}\). It can be seen that, for the particular channel considered here, the SKR is minimum when \(\xi_{b_{1}b_{2}}=0\). The SKR is maximum when \(\xi_{b_{1}b_{2}}\) reaches the values on the circular boundary, which satisfy the condition (14). This suggests that the worst-case scenario for Alice and Bob, for this channel, happens when the attack by Eve leaves no excess correlation in the noise terms. In the rest of the paper, unless otherwise noted, we therefore assume \(\xi_{b_{1}b_{2}}=0\), which corresponds to the minimum SKR for the given channel parameters. For the colored noise environment, we obtain the SKR plots, optimized over the input modulation variances \(V_{a_{1}},V_{a_{2}}\), by taking \(\xi_{b_{1}b_{2}}=0.0006+i0.00079\), which lies on the circular boundary of values that lead to the maximum SKR. In Fig.
3, we compare the optimized SKR vs the channel loss parameter, \(1/T\), in different scenarios of interest. These cases include (labels on the graph correspond to the labels below in the text):

(a) The SKR (brown curve) when one of the transmitters is in use, while the other mode is off. In this case, there is no interference/crosstalk from the other channel, but the channel transmissivity is equal to \(H_{11}=\sqrt{T/2}\), which accounts for the fact that the channel suffers from scattering issues.

(b) The total SKR (cyan curve) in the multiplexed scenario in the presence of crosstalk noise, i.e., the sum of \(K(a_{1},b_{1})\) and \(K(a_{2},b_{2})\), calculated using (15), when \(V_{a_{1}},V_{a_{2}}\) are varied up to \(4.7\) SNU. The direct channel transmissivities are \(H_{11}=H_{22}=\sqrt{T/2}\), while the overall channel \(H\)-matrix is the same as that given in (22). Because the crosstalk contribution from the other channel is treated as noise, this SKR behaves poorly compared to the other cases.

(c) The SKR (blue curve) when the channel is a single-mode CV-QKD channel with transmissivity \(\sqrt{T}\), in the absence of crosstalk from any other channel. This is the maximum secret key rate that can be obtained in a single-input single-output (SISO) CV QKD channel when there are no scattering and no crosstalk issues in the channel.

(d) The SKR (orange curve) in the \(2\times 2\) MIMO channel when the noise at the receiver is i.i.d., i.e., \(\xi_{b_{1}b_{2}}=0\). The SKR in this case turns out to be equal to the sum of the key rates of both channels when the MIMO channel is converted into two independent channels via the singular value decomposition (SVD) of the \(H\)-matrix [19]. This is twice the SKR of the SISO case (c), as the singular values of the \(H\)-matrix we have considered are equal to the transmissivity of the single-mode channel. It is thus equivalent to a multiplexing gain of factor 2 compared to the SISO secret key rate of case (c). This is considerably better than case (a), which also accounts for the additional scattering issues in the SISO case for the particular channel matrix considered here.

(e) The SKR (black curve) in the MIMO channel in the presence of crosstalk and correlated/colored noise, i.e., when \(\xi_{b_{1}b_{2}}\) takes an optimal non-zero value. This offers a key rate slightly higher than the i.i.d. noise scenario of case (d), as per the results in Fig. 2. Thus the gain factor is more than 2 in this case, compared to the SISO SKR of case (c).

From our numerical simulations, we observe that in scenarios with some crosstalk between the employed channels, MIMO processing can significantly improve the performance compared to multiplexing techniques, where post-processing is done on two separate single-mode CV-QKD systems [3]. In the latter case, the crosstalk signal from one channel is treated as excess noise in the other system, which can considerably hamper its performance. When we do the MIMO post-processing, a new set of observables, including the crosstalk coefficients and the correlations that may be observed between the two received signals, can help us with the key extraction. ## V Conclusions We proposed CV QKD over a \(2\times 2\) MIMO setting. Setups like this could become relevant in satellite-based QKD. We performed the security analysis for a Gaussian-encoded protocol and investigated the SKR-versus-loss behaviour for a particular MIMO channel with an equal distribution of power between the line-of-sight and crosstalk links.
It turned out that by using the full MIMO processing power we could mostly remove the crosstalk effect and recover the multiplexing gain of two. Even more, when there was correlated excess noise between the two receivers, we could obtain a slightly higher secret key rate. While this work needs to be extended to account for channels of higher dimensions, among other things, our results suggest that MIMO techniques can offer a promising approach to improving SKR performance in scenarios where phase and amplitude distortions are inevitable.

Figure 3: The secret key rate (SKR) versus channel loss, \(1/T\), for different scenarios. Labels are defined in the text.

## Appendix Here, we summarize the key techniques for calculating the von Neumann entropy of a Gaussian state. **1) Entropy of Gaussian states:** The von Neumann entropy of an \(N\)-mode Gaussian state \(\rho\), with a covariance matrix \(\mathbf{V}_{\rho}\), is given by: \[S(\rho)=\sum_{n=1}^{N}g(\lambda_{n}), \tag{23}\] where \[g(\lambda_{n})=\frac{\lambda_{n}+1}{2}\log_{2}\left[\frac{\lambda_{n}+1}{2}\right]-\frac{\lambda_{n}-1}{2}\log_{2}\left[\frac{\lambda_{n}-1}{2}\right], \tag{24}\] and the \(\lambda_{n}\)s are the symplectic eigenvalues of \(\mathbf{V}_{\rho}\). These can be found as the positive eigenvalues of the matrix \(i\mathbf{\Omega}\mathbf{V}_{\rho}\), where \(\mathbf{\Omega}=\bigoplus_{i=1}^{n}\begin{pmatrix}0&1\\ -1&0\end{pmatrix}\). **2) Conditional covariance matrix:** Given a heterodyne measurement on a single mode \(B\) of an \(N\)-mode Gaussian system \(AB\), with covariance matrix \(\gamma_{AB}\), the conditional covariance matrix, denoted by \(\gamma_{A|B}\), is given by [20]: \[\gamma_{A|B}=\gamma_{A}-\gamma_{AB}(\gamma_{B}+\mathds{1}_{2})^{-1}\gamma_{AB}^{T}, \tag{25}\] where \(\gamma_{A}\) (\(\gamma_{B}\)) is the covariance matrix for system \(A\) (\(B\)). ## Acknowledgement M.R. is grateful to Masoud Ghalaii for fruitful discussions. All data generated in this paper can be reproduced by the provided methodology and equations.
2303.13645
More assistance of entanglement, less rounds of classical communication
Classical communication plays a crucial role in distinguishing locally a class of quantum states. Despite considerable advances, we have very little knowledge about the number of measurement and communication rounds needed to implement a discrimination task by local quantum operations and classical communications (in short, LOCC). In this letter, we are able to show the relation between the number of rounds and the local discrimination of a set of pure bipartite orthogonal quantum states. To demonstrate the possible strong dependence on the round number, we consider a class of orthogonal product states in $d\otimes d$, which require at least $2d-2$ rounds of classical communication. Curiously, the round number can be reduced to $d$ with the assistance of one ebit of entanglement as a resource, and can be reduced further with the assistance of more entanglement. We are also able to show that the number of LOCC rounds needed for a discrimination task may depend on the amount of entanglement assistance.
Atanu Bhunia, Indranil Biswas, Indrani Chattopadhyay, Debasis Sarkar
2023-03-23T20:09:18Z
http://arxiv.org/abs/2303.13645v1
# More assistance of entanglement, less rounds of classical communication ###### Abstract Classical communication plays a crucial role in distinguishing locally a class of quantum states. Despite considerable advances, we have very little knowledge about the number of measurement and communication rounds needed to implement a discrimination task by local quantum operations and classical communications (in short, LOCC). In this letter, we are able to show the relation between the number of rounds and the local discrimination of a set of pure bipartite orthogonal quantum states. To demonstrate the possible strong dependence on the round number, we consider a class of orthogonal product states in \(d\otimes d\), which require at least \(2d-2\) rounds of classical communication. Curiously, the round number can be reduced to \(d\) with the assistance of one ebit of entanglement as a resource, and can be reduced further with the assistance of more entanglement. We are also able to show that the number of LOCC rounds needed for a discrimination task may depend on the amount of entanglement assistance. pacs: 03.67.Mn.; 03.65.Ud. ## 1 Introduction Nonlocal properties of quantum systems include a class distinct from Bell nonlocality. Specifically, when a set of orthogonal quantum states cannot be perfectly distinguished by local operations and classical communications (LOCC), it reflects another nonlocal feature of quantum physics [1]. Local distinguishability of quantum states refers to the task of distinguishing a state from a set of prespecified orthogonal states shared among parties separated by arbitrary distances, LOCC being the only legitimate class of operations [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30]. The nonlocality of sets of orthogonal quantum states can be used for various practical purposes, such as data hiding, quantum secret sharing, and so on. The study of the local distinguishability of orthogonal quantum states, and of the relationship between quantum nonlocality and entanglement, has received considerable attention in the past two decades [31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49]. In quantum information processing, one of the most important physical scenarios occurs when a multipartite system is distributed among different parties separated by arbitrary distances. The parties perform multiple rounds of local measurements on their respective subsystems, each time globally broadcasting their measurement outcomes. The other parties are required to choose their measurement setups depending on the outcomes and continue the process as required. This class of operations is known as LOCC. From an experimental perspective, LOCC operations have a natural attraction, since local quantum measurements are much easier to perform on a system than their nonlocal counterparts. On an even more fundamental level, LOCC is linked to the very notion of entanglement, as entanglement is precisely the multipartite correlation that cannot be generated by LOCC. However, despite this general feature, the class of LOCC is still not satisfactorily understood. One largely overlooked question is how the number of measurement and communication rounds allowed in an LOCC process affects what tasks the parties are able to perform. In other words, what is the cost of the LOCC round number to accomplish a given task?
Here we are asking for the number of times the parties must make a local measurement and use the classical channel to communicate their results. If the channel has some finite capacity, then this question generalizes the question of the minimum classical communication cost in performing some LOCC tasks, a vitally important issue in its own right. Thus, the LOCC round number can be seen as a cost for both classical communications and quantum operations. There are relatively few studies conducted on the round number. Bennett et al. have proven that two-way LOCC is strictly more powerful than just one-way LOCC [2]. On the other hand, for entanglement manipulation of pure bipartite states, Lo and Popescu [3] showed that two-way communications are equivalent to one-way communications, and one-way communications are provably better than no communication. For the task of distinguishing states, Xin and Duan have constructed a collection of states in \(m\otimes n\) systems that needs at least \(2\;\min\{m,n\}-2\) rounds of classical communication in order to be perfectly distinguished [4]. These findings demonstrate that the exact relationship between the round number and task achievability is a highly non-trivial issue and, in fact, contains some surprising results. In this letter, we study the effect of classical communication on a local discrimination task. We show that, in general, many rounds of classical communication are necessary. We demonstrate this result by constructing a class of \(d\otimes d\) pure orthogonal states which requires at least \(2d-2\) rounds of classical communication to achieve perfect local discrimination. In some sense, our result exhibits that two-way classical communication can effectively increase local distinguishability. Furthermore, we show that the round number of the discrimination task can be brought down by the assistance of entanglement. Interestingly, we observe that the round number can be reduced to \(d\) with the support of one ebit of entanglement as a resource, and it can be decreased further by using more resources. Throughout this letter, we do not normalize states and operators, for simplicity. Every bipartite pure state can be written as \(\left|\psi\right\rangle=\sum_{i,j}m_{ij}|i\rangle|j\rangle\in\mathbb{C}^{m}\otimes\mathbb{C}^{n}\), where \(|i\rangle\) and \(|j\rangle\) are the computational bases of \(\mathbb{C}^{m}\) and \(\mathbb{C}^{n}\), respectively. There exists a one-to-one correspondence between the state \(|\psi\rangle\) and the \(m\times n\) matrix \(M=(m_{ij})\). If \(\text{rank}(M)=1\), then \(|\psi\rangle\) is a product state, and if \(\text{rank}(M)>1\) then \(|\psi\rangle\) is an entangled state. Also, \(\langle\psi_{1}\mid\psi_{2}\rangle=\text{Tr}\left(M_{1}^{\dagger}M_{2}\right)\), where \(\langle\psi_{1}\mid\psi_{2}\rangle\) is the inner product of \(|\psi_{1}\rangle\) and \(|\psi_{2}\rangle\). We first review some definitions which we will use in the following discussions. _Definition 1._ [47] If all the POVM elements of a measurement corresponding to a discrimination task for a given set of states are proportional to the identity matrix, then such a measurement is not useful for extracting information for this task and is called a _trivial measurement_. On the other hand, if not all POVM elements of a measurement are proportional to the identity matrix, then the measurement is said to be a _nontrivial measurement_.
_Definition 2._ [47] Consider a measurement to distinguish a fixed set of pairwise orthogonal quantum states. After performing that measurement, if the postmeasurement states are also pairwise orthogonal to each other, then such a measurement is said to be an _orthogonality-preserving measurement_ (OPM). _Definition 3._ The number of rounds of classical communication required for a discrimination task is the number of times the parties globally broadcast their measurement outcomes after performing local measurements on their respective subsystems. ## 2 Distinguishability by minimum classical rounds Here we construct a set of orthogonal pure product states which requires a minimum number of rounds of classical communication for the corresponding discrimination task. For better understanding, we first provide an example in \(\mathbb{C}^{6}\bigotimes\mathbb{C}^{6}\) and then generalize the result to higher dimensions. We represent a quantum state \(\left|i\pm\overline{i+1}\right\rangle|j\rangle\) by a rectangle, where \(\left|i\pm\overline{i+1}\right\rangle=\frac{1}{\sqrt{2}}(|i\rangle\pm|i+1\rangle)\), for integer \(i\). _Proposition 1._ The \(36\) states in \(6\otimes 6\), \[\begin{array}{ll}|a\pm b\rangle=\frac{1}{\sqrt{2}}(|a\rangle\pm|b\rangle),0\leq a<b\leq 5,\\ |\phi_{1,2}\rangle=|0\rangle_{A}|0\pm 1\rangle_{B},&|\phi_{3,4}\rangle=|0\rangle_{A}|2\pm 3\rangle_{B},\\ |\phi_{5,6}\rangle=|0\rangle_{A}|4\pm 5\rangle_{B},&|\phi_{7,8}\rangle=|1\pm 2\rangle_{A}|0\rangle_{B},\\ |\phi_{9,10}\rangle=|3\pm 4\rangle_{A}|0\rangle_{B},&|\phi_{11}\rangle=|5\rangle_{A}|0\rangle_{B},\\ |\phi_{12,13}\rangle=|1\rangle_{A}|1\pm 2\rangle_{B},&|\phi_{14,15}\rangle=|1\rangle_{A}|3\pm 4\rangle_{B},\\ |\phi_{16}\rangle=|1\rangle_{A}|5\rangle_{B},&|\phi_{17,18}\rangle=|2\pm 3\rangle_{A}|1\rangle_{B},\\ |\phi_{19,20}\rangle=|4\pm 5\rangle_{A}|1\rangle_{B},&|\phi_{21,22}\rangle=|2\rangle_{A}|2\pm 3\rangle_{B},\\ |\phi_{23,24}\rangle=|2\rangle_{A}|4\pm 5\rangle_{B},&|\phi_{25,26}\rangle=|3\pm 4\rangle_{A}|2\rangle_{B},\\ |\phi_{27}\rangle=|5\rangle_{A}|2\rangle_{B},&|\phi_{28,29}\rangle=|3\rangle_{A}|3\pm 4\rangle_{B},\\ |\phi_{30}\rangle=|3\rangle_{A}|5\rangle_{B},&|\phi_{31,32}\rangle=|4\pm 5\rangle_{A}|3\rangle_{B},\\ |\phi_{33,34}\rangle=|4\rangle_{A}|4\pm 5\rangle_{B},&|\phi_{35}\rangle=|5\rangle_{A}|4\rangle_{B},\\ |\phi_{36}\rangle=|5\rangle_{A}|5\rangle_{B},&\end{array} \tag{1}\] need at least ten rounds of classical communication to be distinguished by LOCC. _Proof:_ Suppose Alice goes first, and let \(A_{m}\) denote Alice's POVM element with outcome \(m\), such that the postmeasurement states \(\left\{A_{m}\otimes I_{B}\left|\phi_{i}\right\rangle,i=1,\ldots,36\right\}\) are mutually orthogonal. Because \(a_{ij}=0\) is necessary and sufficient for \(a_{ji}=0\), \(i<j\), we will only show \(a_{ij}=0\), \(i<j\). Now, considering the states \(\left|\phi_{1,12}\right\rangle\), we have \(\left\langle 0\left|A_{m}\right|1\right\rangle_{A}\left\langle 0+1|1+2\right\rangle_{B}=0\), which implies \(a_{01}=a_{10}=0\). In the same way, for the states \(\left|\phi_{3,21}\right\rangle,\left|\phi_{3,28}\right\rangle,\left|\phi_{5,33}\right\rangle\) and \(\left|\phi_{5,36}\right\rangle\), we have \(a_{02}=a_{20}=0\), \(a_{03}=a_{30}=0\), \(a_{04}=a_{40}=0\) and \(a_{05}=a_{50}=0\), respectively.
Similarly, if we choose the states \(\left|\phi_{12,21}\right\rangle,\left|\phi_{14,28}\right\rangle,\left|\phi_{14,33}\right\rangle\) and \(\left|\phi_{16,36}\right\rangle\), we obtain \(a_{12}=a_{21}=0\), \(a_{13}=a_{31}=0\), \(a_{14}=a_{41}=0\) and \(a_{15}=a_{51}=0\), respectively. Now, considering the states \(\left|\phi_{21,28}\right\rangle\), we have \(\left\langle 2\left|A_{m}\right|3\right\rangle_{A}\left\langle 2+3|3+4\right\rangle_{B}=0\), which implies \(a_{23}=a_{32}=0\). In a similar manner, by considering \(\left|\phi_{23,33}\right\rangle,\left|\phi_{23,36}\right\rangle,\left|\phi_{28,33}\right\rangle,\left|\phi_{30,36}\right\rangle\) and \(\left|\phi_{33,36}\right\rangle\), we have \(a_{24}=a_{42}=0\), \(a_{25}=a_{52}=0\), \(a_{34}=a_{43}=0\), \(a_{35}=a_{53}=0\) and \(a_{45}=a_{54}=0\), respectively. Therefore, all the off-diagonal elements of \(A_{m}\) vanish, i.e., \(A_{m}\) is diagonal. Next, considering \(\left|\phi_{7,8}\right\rangle\), we get \(\left\langle 1+2\left|A_{m}\right|1-2\right\rangle_{A}\left\langle 0|0\right\rangle_{B}=0\), i.e., \(\left\langle 1\left|A_{m}\right|1\right\rangle-\left\langle 2\left|A_{m}\right|2\right\rangle=0\). Thus, \(a_{11}=a_{22}\). For the states \(\left|\phi_{9,10}\right\rangle,\left|\phi_{17,18}\right\rangle\) and \(\left|\phi_{19,20}\right\rangle\), we finally get \(a_{11}=a_{22}=\cdots=a_{55}\). Therefore, \(A_{m}=\) diag\(\left(\alpha_{0},\beta,\beta,\ldots,\beta\right)\). If possible, let us assume that \(\alpha_{0}\neq 0\) and \(\beta\neq 0\). Then, after Alice's measurement, Bob should perform a nontrivial operation on his own subsystem according to Alice's result. We denote Bob's operator by \(B_{n}\). As we have discussed above, by choosing suitable pairs of states we can conclude that all the off-diagonal elements of \(B_{n}\) are equal to \(0\). Similarly, for the diagonal elements, if we consider the states \(\left|\phi_{1,2}\right\rangle,\left|\phi_{3,4}\right\rangle,\left|\phi_{5,6}\right\rangle,\left|\phi_{12,13}\right\rangle\) and \(\left|\phi_{14,15}\right\rangle\), we finally get \(b_{00}=b_{11}=\cdots=b_{55}\). Therefore, \(B_{n}\) is proportional to the identity operator, i.e., \(B_{n}=\gamma_{0}I\), which is a trivial operator; this contradicts our assumption. So, either \(\alpha_{0}=0\) or \(\beta=0\). Notice that this argument also shows that these states cannot be distinguished locally if Bob goes first. It is now clear that if Alice goes first with a trivial operator, i.e., \(\alpha_{0}=\beta=1\), then the above set of states cannot be distinguished. So, Alice has to perform a nontrivial measurement first, and this happens only when exactly one of \(\alpha_{0}\), \(\beta\) is nonzero. Hence Alice's measurement can only have the two outcome operators \(A_{1}=\) diag\(\left(1,0,\ldots,0\right)\) and \(A_{2}=\) diag\(\left(0,1,1,\ldots,1\right)\). If the outcome \(A_{1}\) clicks, Bob is able to distinguish the remaining states by projecting onto \(\left|0\pm 1\right\rangle,\left|2\pm 3\right\rangle\) and \(\left|4\pm 5\right\rangle\). If the measurement outcome is \(A_{2}\), it isolates the remaining \(30\) states, and the system is effectively \(5\otimes 6\). It is then Bob's turn to measure. Following the method used above, we can prove that Bob's measurement must be \(E_{1}=\) diag\(\left(1,0,\ldots,0\right)\) and \(E_{2}=\) diag\(\left(0,1,\ldots,1\right)\). By induction, we find that the number of rounds needed for distinguishing is \(10\). This completes the proof.\(\blacksquare\) Obviously, the states of the above set constitute a basis.
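As a quick numerical sanity check on this last statement (not part of the original proof), one can verify that the 36 states in (1) are pairwise orthogonal and hence form an orthonormal basis of \(\mathbb{C}^{6}\otimes\mathbb{C}^{6}\). A minimal numpy sketch, with our own enumeration of the states:

```python
import numpy as np

d = 6
e = np.eye(d)
sup = lambda i, j, s: (e[i] + s * e[j]) / np.sqrt(2)   # |i ± j>

pairs = []                                             # (Alice part, Bob part)
for i, j in [(0, 1), (2, 3), (4, 5)]:                  # |0>|i ± (i+1)>
    pairs += [(e[0], sup(i, j, s)) for s in (1, -1)]
for i, j in [(1, 2), (3, 4)]:                          # |i ± (i+1)>|0>
    pairs += [(sup(i, j, s), e[0]) for s in (1, -1)]
pairs += [(e[5], e[0])]
for i, j in [(1, 2), (3, 4)]:                          # |1>|i ± (i+1)>
    pairs += [(e[1], sup(i, j, s)) for s in (1, -1)]
pairs += [(e[1], e[5])]
for i, j in [(2, 3), (4, 5)]:                          # |i ± (i+1)>|1>
    pairs += [(sup(i, j, s), e[1]) for s in (1, -1)]
for i, j in [(2, 3), (4, 5)]:                          # |2>|i ± (i+1)>
    pairs += [(e[2], sup(i, j, s)) for s in (1, -1)]
pairs += [(sup(3, 4, s), e[2]) for s in (1, -1)]       # |3 ± 4>|2>
pairs += [(e[5], e[2])]
pairs += [(e[3], sup(3, 4, s)) for s in (1, -1)]       # |3>|3 ± 4>
pairs += [(e[3], e[5])]
pairs += [(sup(4, 5, s), e[3]) for s in (1, -1)]       # |4 ± 5>|3>
pairs += [(e[4], sup(4, 5, s)) for s in (1, -1)]       # |4>|4 ± 5>
pairs += [(e[5], e[4]), (e[5], e[5])]

states = np.array([np.kron(a, b) for a, b in pairs])
assert states.shape == (36, 36)
print(np.allclose(states @ states.T, np.eye(36)))      # True: orthonormal basis
```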
It is not possible to distinguish the above class of states with a smaller number of rounds. Also, it is noted that if we omit some states from the set or add some states to it, the minimum bound on the round number will change. Next, we generalize the result to arbitrarily large dimensions. _Proposition 2._ The \(d^{2}\) states in a \(d\otimes d\) system, where \(d\) is even, \[\left|a\pm b\right\rangle=\frac{1}{\sqrt{2}}(\left|a\right\rangle\pm\left|b\right\rangle),\ 0\leq a<b,\] \[\left|\phi_{i+1,i+2}\right\rangle=|0\rangle_{A}|i\pm(i+1)\rangle_{B},\ i=0,2,\ldots,d-2,\] \[\left|\phi_{d+i,d+i+1}\right\rangle=|i\pm(i+1)\rangle_{A}|0\rangle_{B},\ i=1,3,\ldots,d-3,\] \[\left|\phi_{2d-1}\right\rangle=|d-1\rangle_{A}|0\rangle_{B},\] \[\left|\phi_{2d+i-1,2d+i}\right\rangle=|1\rangle_{A}|i\pm(i+1)\rangle_{B},\ i=1,3,\ldots,d-3,\] \[\left|\phi_{3d-2}\right\rangle=|1\rangle_{A}|d-1\rangle_{B},\] \[\left|\phi_{3d+i-3,3d+i-2}\right\rangle=|i\pm(i+1)\rangle_{A}|1\rangle_{B},\ i=2,4,\ldots,d-2,\] \[\left|\phi_{4d+i-5,4d+i-4}\right\rangle=|2\rangle_{A}|i\pm(i+1)\rangle_{B},\ i=2,4,\ldots,d-2,\] \[\left|\phi_{5d+i-8,5d+i-7}\right\rangle=|i\pm(i+1)\rangle_{A}|2\rangle_{B},\ i=3,5,\ldots,d-3,\] \[\left|\phi_{6d-9}\right\rangle=|d-1\rangle_{A}|2\rangle_{B},\] \[\left|\phi_{6d+i-11,6d+i-10}\right\rangle=|3\rangle_{A}|i\pm(i+1)\rangle_{B},\ i=3,5,\ldots,d-3,\] \[\left|\phi_{7d-12}\right\rangle=|3\rangle_{A}|d-1\rangle_{B},\] \[\left|\phi_{7d+i-15,7d+i-14}\right\rangle=|i\pm(i+1)\rangle_{A}|3\rangle_{B},\ i=4,6,\ldots,d-2,\] \[\left|\phi_{8d+i-19,8d+i-18}\right\rangle=|4\rangle_{A}|i\pm(i+1)\rangle_{B},\ i=4,6,\ldots,d-2,\] \[\left|\phi_{9d+i-24,9d+i-23}\right\rangle=|i\pm(i+1)\rangle_{A}|4\rangle_{B},\ i=5,7,\ldots,d-3,\] \[\left|\phi_{10d-25}\right\rangle=|d-1\rangle_{A}|4\rangle_{B},\] \[\left|\phi_{10d+i-29,10d+i-28}\right\rangle=|5\rangle_{A}|i\pm(i+1)\rangle_{B},\ i=5,7,\ldots,d-3,\] \[\left|\phi_{11d-30}\right\rangle=|5\rangle_{A}|d-1\rangle_{B},\] \[\vdots\] \[\left|\phi_{d^{2}-5,d^{2}-4}\right\rangle=|(d-2)\pm(d-1)\rangle_{A}|d-3\rangle_{B},\] \[\left|\phi_{d^{2}-3,d^{2}-2}\right\rangle=|d-2\rangle_{A}|(d-2)\pm(d-1)\rangle_{B},\] \[\left|\phi_{d^{2}-1}\right\rangle=|d-1\rangle_{A}|d-2\rangle_{B},\] \[\left|\phi_{d^{2}}\right\rangle=|d-1\rangle_{A}|d-1\rangle_{B}, \tag{2}\] need at least \(2d-2\) rounds of classical communication to be distinguished by LOCC. _Proof:_ See the supplementary information [50] for an explicit description of the proof. From the above construction, it is not very difficult to find a set which requires a fixed number of rounds of classical communication for its discrimination task. The main factor which plays an important role in this structure is quantum superposition. In the next section, we construct an entanglement-assisted discrimination protocol for the above set of states with a smaller number of communication rounds. ## 3 Reducing classical rounds by one ebit We now consider a discrimination protocol for the above class of states using entanglement as a resource. _Proposition 3._ The set of states (1) needs only six rounds of communication for its local discrimination task by consuming one copy of a \(2\otimes 2\) maximally entangled state as a resource. _Proof:_ First of all, we assume that the one ebit of entanglement shared between Alice and Bob is \(\left|\psi\right\rangle_{ab}\). Therefore, the initial state shared among them is \(\left|\phi\right\rangle_{AB}\otimes\left|\psi\right\rangle_{ab}\), where \(\left|\phi\right\rangle\) is one of the states from (1).
_Round 1._ Bob performs the measurement \[\mathcal{B}\equiv\{B_{1}:=\mathbb{P}\left[(|0\rangle,|1\rangle,|2\rangle)_{B};|0\rangle_{b}\right]+\\ \mathbb{P}\left[(|3\rangle,|4\rangle,|5\rangle)_{B};|1\rangle_{b}\right],\\ B_{2}:=\mathbb{I}-B_{1}\},\] where \(\mathbb{P}(\cdot)\) represents the projection operator. Alice and Bob then perform a sequence of measurements to locally distinguish the class of states (1). The complete description of the proof is in the supplementary information [50]. Next, we generalize the result to arbitrarily large dimensions. _Proposition 4._ The set of states (2) needs only \(d\) rounds of communication for its local discrimination task by consuming one copy of a \(2\otimes 2\) maximally entangled state as a resource. _Proof:_ See the supplementary information [50] for a complete description of the proof. ## 4 Reducing classical rounds by two ebits We now present a method to locally distinguish the above class of orthogonal product states in \(d\otimes d\) with multiple copies of \(2\otimes 2\) maximally entangled states. We consider multicopy resource-assisted discrimination of the nonlocal set. Recall that this set of operations strictly includes the set of LOCC operations. Our result, however, establishes that, given two copies of the Bell state, the number of classical communication rounds of the local discrimination task can be reduced further. _Proposition 5._ The set of states (1) needs only four rounds of communication for its local discrimination task by consuming two copies of \(2\otimes 2\) maximally entangled states as a resource. _Proof:_ First of all, let the state with \(2\) ebits of entanglement shared between Alice and Bob be \(\left|\psi_{1}\right\rangle_{a_{1}b_{1}}\otimes\left|\psi_{2}\right\rangle_{a_{2}b_{2}}\), where each of \(\left|\psi_{1}\right\rangle_{a_{1}b_{1}}\) and \(\left|\psi_{2}\right\rangle_{a_{2}b_{2}}\) carries one ebit of entanglement. Therefore, the initial state shared among them is \[\left|\phi\right\rangle_{AB}\otimes\left|\psi_{1}\right\rangle_{a_{1}b_{1}}\otimes\left|\psi_{2}\right\rangle_{a_{2}b_{2}},\] where \(\left|\phi\right\rangle\) is one of the states from (1). _Round 1._ Bob performs the measurements \[\mathcal{B}\equiv\{B_{1}:=\mathbb{P}\left[(|0\rangle,|1\rangle,|2\rangle)_{B};|0\rangle_{b}\right]+\mathbb{P}\left[(|3\rangle,|4\rangle,|5\rangle)_{B};|1\rangle_{b}\right],\\ B_{2}:=\mathbb{I}-B_{1}\}\\ \mathcal{C}\equiv\{C_{1}:=\mathbb{P}\left[(|0\rangle,|4\rangle,|5\rangle)_{B};|0\rangle_{b}\right]+\mathbb{P}\left[(|1\rangle,|2\rangle,|3\rangle)_{B};|1\rangle_{b}\right],\\ C_{2}:=\mathbb{I}-C_{1}\}\] Alice and Bob then perform sequences of measurements to distinguish the class of states (1). The complete description of the proof is in the supplementary information [50]. We have presented a distinguishing method which uses two or more low-dimensional entanglement resources instead of a single high-dimensional entanglement resource. We think that this method is more efficient and saves resources. _Proposition 6._ The set of states (2) needs only \(d-2\) rounds of communication for its discrimination task by consuming two copies of \(2\otimes 2\) maximally entangled states as a resource. _Proof:_ See the supplementary information [50] for a complete description of the proof. The number of rounds of classical communication can be decreased further by using a larger amount of entanglement resource for this discrimination task.
In this particular task, it can be checked that by using \(3\) ebits of entanglement resource the round number can be brought down from \(d\) to \(d-4\). ## 5 Conclusions In this letter, we have investigated the number of measurement and communication rounds needed to implement a discrimination task by local quantum operations and classical communications (LOCC). In particular, we have constructed a special set of \(d\otimes d\) states which requires at least \(2d-2\) rounds of classical communication for perfect discrimination.

Figure 2: Product states representation in \(\mathbb{C}^{6}\bigotimes\mathbb{C}^{6}\) following Bob's first measurement with outcome \(B_{1}\). The labels on the right column correspond to Alice's and Bob's assisted systems.

Our result indicates that classical communication plays a crucial role in local discrimination. Next, with entanglement as a resource to distinguish orthogonal quantum states, we have presented a method based on multiple copies of low-dimensional entanglement resources instead of a single high-dimensional entanglement resource. Remarkably, we have observed that the amount of classical communication can be reduced further with the help of entanglement assistance. The results can lead to a better understanding of the relationship between classical communication and entanglement resources. However, there are still some questions worth exploring. Firstly, is it possible to extend the whole scenario to the multipartite case, and what entanglement resource would give an advantage? Secondly, is it possible to get the same advantages in the discrimination task by using a smaller amount of entanglement resource? ###### Acknowledgements. The authors AB and IB acknowledge the support from UGC, India. The authors IC and DS acknowledge the work as part of QUest initiatives by DST India.
2309.06575
Percolation of 'Civilisation' in a Homogeneous Isotropic Universe
In this work, we consider the spread of a 'civilisation' in an idealised homogeneous isotropic universe where all the planets of interest are habitable. Following a framework that goes beyond the usual idea of percolation, we investigate the behaviour of the number of colonised planets with time, and the total colonisation time for three types of universes. These include static, dark energy-dominated, and matter-dominated universes. For all these types of universes, we find a remarkable fit with the Logistic Growth Function for the number of colonised planets with time. This is in spite of the fact that for the matter- and dark-energy dominated universes, the space itself is expanding. For the total colonisation time, $T$, the case for a dark energy-dominated universe is marked with divergence beyond the linear regime characterised by small values of the Hubble parameter, $H$. Not all planets in a spherical section of this universe can be 'colonised' due to the presence of a shrinking Hubble sphere. In other words, the recession speeds of other planets go beyond the speed of light making them impossible to reach. On the other hand, for a matter-dominated universe, while there is an apparent horizon, the Hubble sphere is growing instead of shrinking. This leads to a finite total colonisation time that depends on the Hubble parameter characterising the universe; in particular, we find $T\sim H$ for small $H$ and $T\sim H^2$ for large $H$.
Allan L. Alinea, Cedrix Jake C. Jadrin
2023-08-27T12:13:54Z
http://arxiv.org/abs/2309.06575v1
# Percolation of 'Civilisation' in a Homogeneous Isotropic Universe ###### Abstract In this work, we consider the spread of a 'civilisation' in an idealised homogeneous isotropic universe where all the planets of interest are habitable. Following a framework that goes beyond the usual idea of percolation, we investigate the behaviour of the number of colonised planets with time, and the total colonisation time for three types of universes. These include static, dark energy-dominated, and matter-dominated universes. For all these types of universes, we find a remarkable fit with the Logistic Growth Function for the number of colonised planets with time. This is in spite of the fact that for the matter- and dark energy-dominated universes, the space itself is expanding. For the total colonisation time, \(T\), the case for a dark energy-dominated universe is marked with divergence beyond the linear regime characterised by small values of the Hubble parameter, \(H\). Not all planets in a spherical section of this universe can be 'colonised' due to the presence of a shrinking Hubble sphere. In other words, the recession speeds of other planets go beyond the speed of light, making them impossible to reach. On the other hand, for a matter-dominated universe, while there is an apparent horizon, the Hubble sphere is growing instead of shrinking. This leads to a finite total colonisation time that depends on the Hubble parameter characterising the universe; in particular, we find \(T\sim H\) for small \(H\) and \(T\sim H^{2}\) for large \(H\). **keywords:**_percolation, homogeneous isotropic universe, dark energy-dominated, matter-dominated, Hubble horizon, FLRW metric_ ## 1 Introduction The question of whether we are alone or not in the Universe dates back to antiquity; perhaps, as early as the times when modern humans started wondering about the night sky. With the realisation that the sun is simply one of the seemingly countless stars out there, many of which host their own solar systems [1, 2, 3], we gained the dazzling insight that the probability of the existence of life beyond the Sun's influence cannot be zero. In fact, counting the number of solar systems and planets in the _Goldilocks zone_, there could be a multitude of planets in our galaxy and beyond harbouring life [4, 5]. Pushing this idea further, with about a 14-billion-year-old Universe based on the \(\Lambda\)CDM model [6, 7, 8], life on other planets may not be limited to plants and wild animals alone, but could include civilisations more technologically advanced than ours. In spite of this, we have yet to find a definitive proof for the existence of alien life, much less make contact with intelligent extraterrestrial beings. It is possible that advanced alien civilisations, should they exist, prefer not to interfere with human affairs or those of other intelligent beings, in general; letting them take their own course of development [9, 10]. Or, perhaps more reasonably, the density of 'living' planets in the Universe is extremely low [11]. The extremely large distances between these 'living' planets and an even more sparse distribution of putative intelligent civilisations effectively hinder the discovery of alien life and communication amongst intelligent beings in different solar systems. Possibly, it is also for this main reason that the colonisation of other planets and the propagation of civilisations in the network of stars constituting the Universe is extremely slow.
Although it is about 14 billion years old, it may not be old enough for life to significantly propagate to habitable or terraformable planets.
_et al._[2] have shown that the _et al._[2] have shown that the _et al._[2] have shown that the _et al._[2] have shown that the _et al._[2] have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the _et al._ have shown that the 
## 2 Static Universe

We consider an _ideal_ static universe whose only constituents are habitable planets represented by cells in an empty three-dimensional space. Considering a spherical section of this universe, we have the following general algorithm for our simulation:

* Randomly distribute \(N\) number of cells inside the spherical section following a _uniform_ probability distribution corresponding to a homogeneous isotropic universe. In our program, this is done using the Permuted Congruential Generator [35] for the cell coordinates \((x,y,z)\) with the constraint \(x^{2}+y^{2}+z^{2}<r^{2}\), where \(r\) is the radius of the spherical section.
* Initially, the centremost cell represents the only 'living' planet; all other cells are yet to be occupied. The time counter for the spread of civilisation is set to zero and the counter for the number of occupied cells is set to one. The civilisation from this cell then propagates following a simplified 2D pattern shown in _Fig._ 1; the actual simulation involves a 3D space.
* From the centremost cell (1), the civilisation travels at constant speed, \(v\), to the nearest cell (2), where a new civilisation is established; ideally, within effectively zero amount of time upon arrival, as in the usual percolation of forest fire in a rectangular lattice.
* With two cells (1 and 2) now occupied in the figure, these two civilisations propagate to their corresponding nearest target cells (3 and 4, respectively). The main idea is, whenever a cell is occupied, its civilisation spreads to the nearest cell at the same speed \(v\).
* Programmatically, each cell has several flags and state variables indicating whether they are (a) inhabited or not, (b) targeted or not by a neighbouring 'civilised' cell, and (c) measuring the time of travel to a neighbouring cell. Each time a new 'uncivilised' cell is 'civilised', the time counter and the number of occupied cells are incremented based on (c) and by one, respectively. The cells are stored in an array of structures with these flags.
* Because neighbouring cells have different distances from the cells targeting them, they are not reached at the same time. In the figure, cell 2 reaches its target cell 3 first, before cell 1 reaches its target cell 4, because the distance between cells 2 and 3 is smaller than that between cells 1 and 4. The propagation of civilisation continues from here following the same logic until all cells are occupied or 'civilised' (a minimal code sketch of this procedure is given below).

Note that in our model, all occupied cells are _undying_. Once a cell is occupied, it remains alive and a source of spreading civilisation throughout the duration of the simulation. Furthermore, each occupied cell only targets one cell at a time for the propagation of its civilisation--the nearest unoccupied cell. Given the highly unlikely scenario where two neighbouring nearest cells are of (nearly) equal distance from an occupied cell, only one of these two becomes a target; the target is chosen randomly. For the time of travel from one occupied cell located at a position represented by a vector, \(\vec{r}_{i}\), measured from the centremost cell, to the nearest unoccupied (target) cell at \(\vec{r}_{j}\), we have \(|\vec{r}_{j}-\vec{r}_{i}|/v\). Certainly, because \(v\) is constant, two or more neighbouring occupied cells will, in general, not arrive simultaneously at their target cells, as illustrated in _Fig._ 1. This serves to remind us that, unlike the usual percolation on a 2D square lattice (modeling, for instance, the spread of forest fire), our cells are not on a regular lattice. This leads to non-uniform time steps between occupations of neighbouring cells. Footnote 6: We leave it for future work to deal with the death of a civilisation, variation in planet habitability, limited propagation, etc.

Figure 1: Simplified illustration for the occupation of cells or 'planets' in a static universe. The centremost cell is occupied first and 'life' or 'civilisation' propagates towards neighbouring cells following the order (1) \(\rightarrow\) (2) \(\rightarrow\) (3) \(\rightarrow\) (4).

Having established our algorithm above, we perform a simulation involving \(N=5000\) cells located in a sphere with a radius of \(L=5.0\) units. For simplicity, we set the scale factor in the metric given by (1) and the constant speed of propagation to unity (\(v=1\)). Not to be confused with the speed of light, \(c=1\), usually employed in General Relativity and Quantum Field Theory, the choice of units, \(v=1\), simply implies that our unit of distance is the same as the unit of time; e.g., \(1\,\mbox{km}=1\,\mbox{s}\). _Figure_ 2 shows our simulation result for the number of cells occupied with respect to time. This is an averaged result over 500 trials. For each trial, \(5000\) cells are randomly distributed inside the sphere using the Permuted Congruential Generator [35] and civilisation is allowed to spread from the centremost cell with the intention to occupy all cells.
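The code sketch promised above follows; it is a minimal, event-driven Python rendering of the algorithm, not the authors' program (function names are ours; numpy's default generator is PCG64, a permuted congruential generator as in [35]). One simplification: a source whose target is colonised by someone else first retargets upon arrival rather than mid-flight.

```python
import heapq
import numpy as np

rng = np.random.default_rng(0)  # PCG64 under the hood

def sample_cells(n, radius):
    """Uniformly sample n cell positions inside a sphere (rejection method)."""
    pts = []
    while len(pts) < n:
        p = rng.uniform(-radius, radius, 3)
        if p @ p < radius ** 2:
            pts.append(p)
    return np.array(pts)

def colonise(cells, v=1.0):
    """Spread 'civilisation' from the centremost cell; each occupied cell
    targets its nearest unoccupied cell at speed v. Returns occupation times."""
    n = len(cells)
    start = int(np.argmin(np.einsum('ij,ij->i', cells, cells)))
    occupied = {start}
    times = [0.0]
    events = []                          # heap of (arrival time, target, source)

    def launch(src, t0):
        free = [j for j in range(n) if j not in occupied]
        if free:
            dists = np.linalg.norm(cells[free] - cells[src], axis=1)
            k = int(np.argmin(dists))
            heapq.heappush(events, (t0 + dists[k] / v, free[k], src))

    launch(start, 0.0)
    while events:
        t, j, src = heapq.heappop(events)
        if j in occupied:                # beaten to it: pick a new target
            launch(src, t)
            continue
        occupied.add(j)
        times.append(t)
        launch(src, t)                   # the source spreads again...
        launch(j, t)                     # ...and so does the new colony
    return np.array(times)

times = colonise(sample_cells(500, 5.0))  # smaller N than the paper's 5000
```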
With \(500\) trials, the standard errors of the mean times to reach cells \(2\) to \(5000\), divided by the corresponding mean times, are all below \(0.5\%\); e.g., the average time to reach cell \(5000\) is \(\bar{t}=15.58\) units with a standard error of \(\sigma_{\bar{t}}=0.019\) corresponding to \(\sigma_{\bar{t}}/\bar{t}\approx 0.12\%\). In other words, for the purposes of this study, \(500\) trials seem to be more than good enough for the 'smooth' plot shown in _Fig._ 2. Focusing on the behaviour of the spread of civilisation shown in the figure, we find a slow start. With a few sources of civilisations, the spread is correspondingly slow. Then it picks up speed as more and more cells are occupied, corresponding to many sources of civilisations. As is evident in the figure, the maximum rate of propagation of civilisation happens when around half of all cells are occupied; about \(n=2500\), corresponding to the approximate time \(t\approx 7\) units. Beyond this, while there are more sources of civilisations, there are fewer cells for them to spread to in our sphere of interest. The graph of the number of occupied cells, \(n\), tapers off with \(t\) near \(n=N=5000\). Note that increasing the speed, \(v\), for the propagation of civilisation can only increase the number of occupied cells with time. But the behaviour of \(n\) with \(t\) remains the same. Informally, the 'S'-shape curve in _Fig._ 2 is simply 'compressed' to the left with higher \(v\). Knowing this, our ideal model for the spread of civilisation in a static universe appears to follow the Logistic Growth Model [34]. It seems to stand side-by-side with the other applications of this model such as population growth [36], spread of communicable disease [37], and chemical reactions in a closed system [38], amongst others. The prominent behaviour of these dynamical systems is a relatively slow start in the propagation due to limited sources (e.g., disease carriers). Then, as the number of sources grows, with many others yet to be reached by these sources, the propagation speeds up. This happens until there are significantly more sources than those that can be infected or occupied in cases of disease spread and colonisation, respectively. Eventually, the propagation slows down with fewer cells to occupy (colonisation) or fewer reactants to react (chemical reactions) until the elements of the system are exhausted. Indeed, the behaviour of \(n\) with respect to \(t\) most likely follows that of the Logistic Growth Model. Curve fitting our simulation result with the logistic function given by \[n(t)=\frac{N_{0}}{1+e^{-k(t-t_{0})}}, \tag{2}\] where \(N_{0},\,k\), and \(t_{0}\) are the adjustable parameters, yields the equation for the best fit curve indicated in _Fig._ 2. The corresponding coefficient of determination, \(R^{2}=0.9996\), is extremely close to unity, indicating an excellent fit.

Figure 2: Behaviour of the number of cells occupied with time in a static universe, for \(N=5000\) cells and \(500\) trials.

In the next two sections, we shall see if the Logistic Growth Model is still followed when the space itself is expanding; that is, homogeneously and isotropically for dark energy-dominated and matter-dominated universes.
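Fitting Eq. (2) to the simulated occupation curve is straightforward with scipy; a minimal sketch (reusing `times` from the snippet above; the initial guesses are ours):

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, N0, k, t0):
    return N0 / (1.0 + np.exp(-k * (t - t0)))   # Eq. (2)

n = np.arange(1, len(times) + 1)                # occupied-cell count at each time
p, _ = curve_fit(logistic, times, n,
                 p0=(float(n[-1]), 1.0, float(np.median(times))))
res = n - logistic(times, *p)
R2 = 1.0 - res.var() / n.var()                  # coefficient of determination
print(p, R2)
```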
## 3 Dark Energy-Dominated Universe

Let us consider a dark energy-dominated universe characterised by a constant Hubble parameter, \(H\equiv\dot{a}/a\), where the dot indicates a derivative with respect to time. Such a universe corresponds to an exponentially expanding spacetime driven by a constant dark energy, \(\Lambda\), in Einstein's general theory of relativity [6, 21]. Indeed, with \(H=\text{const.}\), we see that \[\frac{da}{a}=H\,dt\quad\Rightarrow\quad a\sim e^{Ht}. \tag{3}\] Situating (idealised) habitable planets or cells in this universe for colonisation, as in the immediately preceding section, implies that the 'physical' distances between the cells increase with time, even in the absence of individual random motion. It is akin to two dots on the surface of a rubber balloon 'moving' farther apart as the balloon is inflated. As such, the time it would take for one civilisation to reach a nearby cell and colonise it is larger compared to that in a static universe, given the same propagation speed, \(v\). In particular, let us consider two cells located at \(\vec{r}_{i}\) and \(\vec{r}_{j}\), respectively, measured from the centremost cell. Noting that our space is coordinatised by the comoving coordinates indicated in (1), the time, \(t_{j}\), it takes to reach cell \(j\) from cell \(i\) starting at time \(t_{i}\), is given by \[|\vec{r}_{j}-\vec{r}_{i}|=\int_{t_{i}}^{t_{j}}dt\ \frac{v}{a}. \tag{4}\] Needless to say, if \(a\) is unity, it reduces to the case of the static universe in the immediately preceding section; i.e., \(|\vec{r}_{j}-\vec{r}_{i}|=v(t_{j}-t_{i})\). Using the relation for the scale factor given by (3) above, we find upon integration, \[t_{j}=-\frac{1}{H}\ln\bigg{(}e^{-Ht_{i}}-\frac{H}{v}|\vec{r}_{j}-\vec{r}_{i}|\bigg{)}. \tag{5}\] If both \(|\vec{r}_{j}-\vec{r}_{i}|\) and \(v\) are set to unity, for instance, then for \(H=0.1\) and \(t_{i}=0\), we get \(t_{j}\approx 1.05\), as opposed to the case of the static universe where \(t_{j}=1.00\). Footnote 7: We are omitting here explicit mention of units for brevity. As in the previous section, \(v\) is set to unity, leading to equality in distance and time units. _Figure_ 3 shows our simulation result for the number of colonised cells with time in a dark energy-dominated universe characterised by a constant Hubble parameter. Here, \(N=5000\) cells and the number of trials is \(500\) for each series, as in Sec. 2. With reference to the static universe (\(H=0.00\)), we see a deviation to the right of the number of colonised cells with time. In other words, it takes a longer time to colonise a given number of cells from the centremost cell due to the expansion of the Universe, and this time increases with increasing \(H\). Nonetheless, looking at the figure, the behaviour of \(n\) with \(t\) seems to still follow the Logistic Growth Model, as in the static universe. In fact, fitting this function to the data points, we find that the coefficients of determination remain extremely close to unity as we vary \(H\) above from \(0.00\) to \(0.06\). For \(H=0.06\) (see _Fig._ 4), where the deviation is expected to be the highest among the three (non-static universe) series in _Fig._ 3, we find \(R^{2}=0.9992\approx 1\), suggesting an excellent fit. Footnote 1: As in the case of the static universe, with \(500\) trials, the standard errors of the mean times to reach cells \(2\) to \(5000\), divided by the corresponding mean times, are all below \(0.5\%\); e.g., the average time to reach cell \(5000\) is \(\bar{t}=30.36\) units with a standard error of \(\sigma_{\bar{t}}=0.089\) corresponding to \(\sigma_{\bar{t}}/\bar{t}\approx 0.29\%\).
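Equation (5) translates directly into code, including the unreachability condition on the argument of the logarithm; a small sketch (the function name is ours):

```python
import numpy as np

def arrival_time_de_sitter(t_i, dist, H, v=1.0):
    """Eq. (5): arrival time over comoving distance dist, departing at t_i,
    for constant Hubble parameter H. Returns None when the argument of the
    logarithm is non-positive, i.e., the target lies beyond the horizon."""
    arg = np.exp(-H * t_i) - (H / v) * dist
    return None if arg <= 0.0 else -np.log(arg) / H

print(arrival_time_de_sitter(0.0, 1.0, 0.1))   # ~1.05, as quoted in the text
```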
Figure 4: Behaviour of the number of cells occupied with time in a dark energy-dominated universe, for \(H=0.060\), \(N=5000\) cells, and \(500\) trials.

Figure 5: Behaviour of the time to occupy _all_ cells with respect to the Hubble parameter, in a dark energy-dominated universe, for \(N=5000\) cells and \(500\) trials.

Beyond \(H=0.06\), however, we see an effective breakdown of the Logistic Growth Model. This is because for larger values of \(H\), all cells in our sphere can no longer be colonised. This behaviour is apparent in _Fig._ 5, showing the time to occupy _all_ cells, \(T\), with respect to the Hubble parameter. As the expansion rate increases, \(T\) correspondingly increases; that is, in a seemingly linear manner for small \(H\), but it goes up fast beyond the linear regime. Above \(H=0.06\), there seems to be an asymptote where \(T\) diverges. Our curve fitting function for \(T\), with parameters \(t_{0}\) and \(d\), borrowed from (5), captures this behaviour with \(R^{2}=0.9990\): \[T=-\frac{1}{H}\ln\big{(}e^{-Ht_{0}}-Hd\big{)}. \tag{6}\] Footnote 2: To make sense of the data here on a galactic scale, consider a sphere the size of the Milky Way galaxy—radius of about \(5\times 10^{20}\) m—with only 5000 habitable planets as in our simulation. For the expansion rate of \(H=0.01\) with \(v\) corresponding to one percent the speed of light, the total colonisation time of 17 units in the figure simply converts to about 17 million years; _i.e._, 1 time unit \(\approx\) 1 million years. For \(H\approx 0\), its Taylor series expansion yields \(T\sim H\), confirming the mentioned linear behaviour. Beyond this, there is an asymptote at the value of \(H\) for which \(e^{-Ht_{0}}-Hd\) vanishes. For the results in _Fig._ 5, we have an effective asymptote at \(H=0.06367\). The existence of a horizon is apparent in (5), describing the time, \(t_{j}\), it takes to reach cell \(j\) from cell \(i\) starting at time \(t_{i}\). Given a fixed value of \(H\), we find \(t_{j}\rightarrow\infty\) as the argument of the (natural) logarithmic function goes to zero; that is, \[e^{-Ht_{i}}\rightarrow\frac{H}{v}|\vec{r}_{j}-\vec{r}_{i}|. \tag{7}\] Early on, civilisation can propagate from the centremost cell to the neighbouring cells because the inter-cell physical distances are still small. But as time goes by, the universe expands, and these inter-cell distances increase. Colonisation of new cells takes longer and longer until such time that the next neighbouring cell to be colonised goes beyond the horizon; it requires a propagation velocity higher than \(v\) to be reached. Certainly, one may consider a greater propagation velocity for the civilisation to spread to all cells. Following this line of thinking, however, will only increase the value of \(H\) in _Fig._ 5 at which the total colonisation time diverges. To see this, we recall the idea of a Hubble horizon [6, 21] corresponding to a Hubble sphere with comoving radius defined as \[R_{H}\equiv\frac{c}{aH}, \tag{8}\] where \(c\) is the maximal _causal_ velocity that is physically attainable; i.e., the velocity of light. In accord with Hubble's law, the effective recession velocity of the objects situated on the Hubble sphere relative to its centre is \(c\). It follows that the total colonisation time, even in our idealised scenario where the spread of civilisation is made easy by simple propagation to neighbouring cells, is bound to diverge, either with increasing \(H\) or with increasing radius of our chosen spherical section of a dark energy-dominated universe.
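The location of the asymptote of Eq. (6) is the root of \(e^{-Ht_{0}}-Hd=0\) and is easily found numerically; a sketch with illustrative parameter values (the values of `t0` and `d` below are ours for illustration, not the paper's fitted values):

```python
import numpy as np
from scipy.optimize import brentq

def T_dark_energy(H, t0, d):
    return -np.log(np.exp(-H * t0) - H * d) / H     # Eq. (6)

t0, d = 2.0, 13.8                                   # illustrative values only
H_star = brentq(lambda H: np.exp(-H * t0) - H * d, 1e-6, 1.0)
print(H_star, T_dark_energy(0.9 * H_star, t0, d))   # T blows up as H -> H_star
```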
Furthermore, because of this divergence, the breakdown of the Logistic Growth Model for the number of occupied cells with time seems inevitable. ## 4 Matter-Dominated Universe For a matter-dominated universe, the Hubble parameter is related to the scale factor as \(H^{2}\propto a^{-3}\), by virtue of the Friedmann equation [6, 21, 22]. With the definition \(H\equiv\dot{a}/a\), we find \(a\propto t^{\frac{2}{3}}\). Following the same logic as in the immediately preceding section, given this relationship between \(a\) and \(t\), we obtain the time to reach cell \(j\) from cell \(i\) given by \[t_{j}=\frac{2}{3H_{0}}\bigg{[}(1+\frac{3}{2}H_{0}t_{i})^{\frac{1}{3}}+\frac{H_{0}}{2v}|\vec{r}_{j}-\vec{r}_{i}|\bigg{]}^{3}-\frac{2}{3H_{0}}, \tag{9}\] where \(H_{0}\) is the initial Hubble parameter. Needless to say, \(H\) is no longer constant, unlike that of a dark energy-dominated universe. In our simulation, \(H_{0}\) is chosen to coincide with the corresponding constant \(H\) for a dark energy-dominated universe, for the purpose of comparison. _Figure_ 6 shows our simulation results for the number of occupied cells with time for different values of \(H_{0}\). Similar to the dark energy-dominated universe, the time it takes to occupy a given number of cells is higher for a higher initial value of the Hubble parameter. Furthermore, a short visual inspection tells us that the graphs seem to follow the same pattern; that is, a logistic growth function. Indeed, the coefficients of determination in the curve fitting using this function remain very close to unity for all the series in the figure; for the highest \(H_{0}\) in the graph, \(R^{2}=0.9994\), indicating an excellent fit. Having said this, we notice that on average, the time it takes to occupy a certain number of cells is lower compared to that of the dark energy-dominated universe. For instance, the times to reach 4500 cells for \(H=0.06\) are 11.5 and 12.6 (time units) for the matter- and dark energy-dominated universes, respectively. This is justified by the fact that although both universes start with the same value of the Hubble parameter, that of the dark energy-dominated universe is constant while the other one decreases with time; in particular, for the latter, \(H\sim 1/t\). Physically, beyond the initially set time and Hubble parameter, colonisation is faster for a matter-dominated universe because it expands slower compared to a dark energy-dominated universe. Consistent with this observation, the time to occupy _all_ cells in our simulation is correspondingly lower for a matter-dominated universe; see _Fig._ 7. Furthermore, the figure shows that while \(T\) for a dark energy-dominated universe grows fast in the region \(0.0<H\leq 0.06\) and even 'blows up' around \(H=0.06\), for a matter-dominated universe, \(T\) seems to follow a linear behaviour with \(H_{0}\). In fact, as can be seen in _Fig._ 8, it crosses the 'dangerous' mark around \(H=0.06\) for the dark energy-dominated universe without problem; that is, as far as \(H_{0}=1.0\) in our simulation. Our curve fitting function with parameters \(t_{0}\) and \(d\), borrowed from (9), captures the behaviour of \(T\) with \(H_{0}\) as \[T=\frac{2}{3H_{0}}\bigg{[}(1+\frac{3}{2}H_{0}t_{0})^{\frac{1}{3}}+\frac{H_{0}d}{2v}\bigg{]}^{3}-\frac{2}{3H_{0}}. \tag{10}\] For the result shown in _Fig._ 8, we have \(R^{2}=1.0000\), signifying an excellent fit.
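Equations (9) and (10) are likewise straightforward to evaluate; a sketch (names and the illustrative `t0`, `d` values are ours) that also makes the limiting behaviour easy to probe numerically:

```python
def arrival_time_matter(t_i, dist, H0, v=1.0):
    """Eq. (9): arrival time over comoving distance dist, departing at t_i."""
    b = (1.0 + 1.5 * H0 * t_i) ** (1.0 / 3.0) + H0 * dist / (2.0 * v)
    return (2.0 / (3.0 * H0)) * (b ** 3 - 1.0)

def T_matter(H0, t0, d, v=1.0):
    """Eq. (10): total colonisation time, matter-dominated case. It grows
    linearly in H0 for small H0 and quadratically for large H0, where the
    bracket is dominated by (H0*d/(2*v))**3."""
    b = (1.0 + 1.5 * H0 * t0) ** (1.0 / 3.0) + H0 * d / (2.0 * v)
    return (2.0 / (3.0 * H0)) * (b ** 3 - 1.0)

for H0 in (0.01, 0.1, 1.0, 10.0):     # T/H0**2 flattens out for large H0
    print(H0, T_matter(H0, t0=2.0, d=13.8) / H0 ** 2)
```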
Figure 6: Behaviour of the number of cells occupied with time in a matter-dominated universe, for varying values of the _initial_ Hubble parameter; \(N=5000\) cells and \(500\) trials.

Figure 7: Comparison of the behaviour of the total times to occupy all cells with respect to the (initial) Hubble parameter, in a matter-dominated universe and a dark energy-dominated universe, for \(N=5000\) cells and \(500\) trials.

Figure 8: Behaviour of the time to occupy all cells with respect to the Hubble parameter, in a matter-dominated universe, for \(N=5000\) cells and \(500\) trials.

In agreement with our simulation result and the aforementioned observations, for small values of \(H_{0}\), the Taylor series expansion of (10) tells us that \(T\approx(t_{0}+d/v)+\mathcal{O}(H_{0})\); that is, a linear behaviour. Furthermore, for large values of \(H_{0}\), the time to colonise all cells scales as \(H_{0}^{2}\); that is, a dominantly quadratic behaviour. Lastly, in contrast to (6) for a dark energy-dominated universe, the relationship given by (10) for a matter-dominated universe has no asymptote marking the divergence of \(T\). In spite of these observations, one may still ask whether the time for a civilisation to propagate to _all_ cells can remain finite given a finite number of cells but with a large finite Hubble parameter. In hindsight, following the same logic as in the previous section, the fact that a matter-dominated universe also expands homogeneously and isotropically seems to point to the existence of a Hubble horizon. Beyond the Hubble sphere, the cells recede from the centre at a velocity larger than that of light in accord with Hubble's law. Equation (8) for the Hubble horizon, applied to a matter-dominated universe, seems to indicate an apparent horizon that could give rise to a divergent \(T\); indeed, a short inspection of (8) tells us that \(R_{H}\neq\infty\). However, we cut short this line of thinking for a divergent \(T\) by recalling that for a matter-dominated universe, \(a\propto t^{\frac{2}{3}}\). This certainly leads to a finite \(R_{H}\sim t^{\frac{1}{3}}\) but the Hubble sphere is _growing_. This is in contrast to the case of a dark energy-dominated universe where the comoving Hubble sphere is _shrinking_. Now, multiplying \(R_{H}\) by \(a\) to get the 'proper' Hubble radius and taking the time derivative of the result yields \(c=\frac{d}{dt}\left(aR_{H}\right)\). We find that although there is an apparent Hubble horizon, the expansion rate of the Hubble sphere relative to its centre is equal to \(c\)! Consequently, given enough time, cells initially located beyond the horizon eventually enter the Hubble sphere, enabling effectively all cells in a spherical section of a matter-dominated universe to be colonised. ## 5 Concluding Remarks In the quest to understand the colonisation potential of an advanced civilisation across the vast expanse of habitable planets in our universe, we ponder in this work interesting questions from the perspective of computational physics. These questions involve the behaviour of the number of planets colonised with time, and the total colonisation time with respect to the Hubble parameter describing the universe. Inspired by Percolation Theory, we investigate these questions within an idealised scenario involving a homogeneous isotropic universe within which habitable planets are embedded. We study three types of universes, namely, static, dark energy-dominated, and matter-dominated.
We find that the growth in the number of colonised planets with time in a spherical section of a static universe follows the Logistic Growth Model. Such a behaviour confirms our expectation based on its other applications such as the propagation of fire, the spread of disease, and chemical reactions. In contrast, the same might not be anticipated for dark energy- and matter-dominated universes because they are characterised by an expanding space. Surprisingly, our results indicate that even with this expansion, the behaviour of the number of colonised planets with time fits well with the logistic growth function, albeit with a slower occupation or colonisation rate and a caveat for a dark energy-dominated universe. The case of a dark energy-dominated universe is characterised by a divergent total colonisation time. While it behaves linearly for low values of the Hubble parameter describing the universe, beyond some cutoff, it blows up, marking a breakdown from the Logistic Growth Model. The underlying reason is the existence of a Hubble horizon corresponding to a shrinking Hubble sphere; planets beyond it 'move' faster than the speed of light, preventing further colonisation. For a matter-dominated universe, the total colonisation time behaves linearly for small values of the Hubble parameter and quadratically for large values of this parameter. Our simulation results suggest that the total colonisation time for a spherical section of this universe remains finite for arbitrarily large values of the initial Hubble parameter. This is in spite of the existence of an apparent Hubble horizon. We reason that while there is a finite Hubble horizon for a matter-dominated universe, it is growing instead of shrinking as in the case of a dark energy-dominated universe. Since its growth is faster than the rate of colonisation, the colonisation of all planets in a spherical section of this universe seems to be always possible. In spite of the limited, idealised framework of this study, we find interesting results about the colonisation of planets in the universe from the perspective of computational physics. However, our humble beginning leaves a lot of avenues for future exploration. These may include factors involving (a) planet habitability, (b) the death or survival rate of civilisations, and (c) multiple starting civilisations, amongst others. The first two are certainly related, and the addition of the third one requires more programming resources. Focusing on the death rate alone for simplicity, we speculate that for a constant probability of death upon arrival at previously uninhabited planets, the first-order linear behaviour of \(T\) with \(H\), for small \(H\), would remain the same, albeit \(n\) would grow slower with \(t\), for both matter- and dark energy-dominated universes3. Spherical sections of both static and matter-dominated universes would continue to be fully occupied given enough time. However, for a dark energy-dominated universe, the breakdown from the Logistic Growth Model for \(n\) with respect to \(t\) would occur earlier, given the slower effective propagation rate of civilisation with a shrinking Hubble sphere. Holding on tight with our effective extension of Percolation Theory, we hope to gain deeper insight with the inclusion of these factors in the study of the spread of civilisation in a homogeneously and isotropically expanding Universe.
Footnote 3: If the constant death rate corresponds to effectively lowering the propagation velocity of civilisation in the curve fitting functions for \(T\) with respect to \(H\) for both matter- and dark energy-dominated universes, then the first-order linear behaviour of \(T\) with \(H\), for small \(H\), remains the same, based on (6) and (10). ## Acknowledgement CJC Jadrin would like to express his sincere gratitude to the Department of Science and Technology Science Education Institute (DOST-SEI) for the financial support during the conduct of this study.
2307.10863
Period-like polynomials for $L$-series associated with half-integral weight cusp forms
Given the L-series of a half-integral weight cusp form, we construct a cohomology class with coefficients in a finite dimensional vector space in a way that parallels the Eichler cohomology in the integral weight case. We also define a lift of half-integral weight cusp forms to integral weight modular forms that is compatible with the $L$-series of the respective forms.
James Branch, Nikolaos Diamantis, Wissam Raji, Larry Rolen
2023-07-20T13:33:20Z
http://arxiv.org/abs/2307.10863v3
# Period-like polynomials for \(L\)-series associated with half-integral weight cusp forms ###### Abstract. Given the \(L\)-series of a half-integral weight cusp form, we construct a cohomology class with coefficients in a finite dimensional vector space in a way that parallels the Eichler cohomology in the integral weight case. We also define a lift of half-integral weight cusp forms to integral weight cusp forms that is compatible with the \(L\)-series of the respective forms. ## 1. Introduction The Dirichlet series associated by Shimura to half-integral weight modular forms in the last section of his original paper [7] has not received as much attention as its integral weight counterpart. Partly because of its failure to possess an Euler product, it has not been extensively studied from arithmetic and algebraic perspectives that have a long history in the case of integral weight modular forms. In view of this, the two main purposes of this note are: (i) to associate to the \(L\)-series of a half-integral weight cusp form a cohomology class with coefficients in a _finite_ dimensional vector space in a way that parallels the Eichler cohomology of the integral weight case. This includes the construction of Eichler integrals and a period-like polynomial. (ii) to define a lift of half-integral weight cusp forms to integral weight cusp forms that is compatible with the \(L\)-series of the respective forms. This section presents special cases of the two main results of the note. The first one is obtained from Theorem 3.1 in the special case \(N=4\), \(k\) such that \(4|(k-\frac{5}{2})\) and \(a=k-2\): **Theorem 1.1**.: _Let \(k\in\frac{1}{2}+\mathbb{Z}\) such that \(k-\frac{5}{2}\in 4\mathbb{N}\) and let \(f\) be a cusp form of weight \(k\) for \(\Gamma_{0}(4)\) such that \(f(-1/(4z))=(-2iz)^{k}f(z).\) For each \(z\) in the upper half-plane \(\mathfrak{H}\) define the "Eichler integral"_ \[F(z)=\Gamma(k-1)\int_{z}^{i\infty}f(w)\left(\sum_{n=0}^{k-\frac{5}{2}}\left(\frac{(4i)^{n}}{n!\Gamma(k-1-n)}+\frac{4^{k-n-\frac{11}{4}}i^{\frac{5}{2}-n}w^{-\frac{1}{2}}}{(k-\frac{5}{2}-n)!\Gamma(n+\frac{3}{2})}\right)z^{n}w^{k-2-n}\right)dw.\] _Then,_ 1. _For each_ \(z\in\mathfrak{H}\)_,_ \(P(z):=F(z)-F(-1/(4z))(-2iz)^{k-\frac{5}{2}}\) _is a polynomial of degree at most_ \(k-\frac{5}{2}\) _in_ \(z\)_._ 2. _We have_ \[P(z)=i^{k-1}\Gamma(k-1)\sum_{n=0}^{k-\frac{5}{2}}\left(\frac{4^{n}\Lambda_{f}(k-1-n)}{n!\Gamma(k-1-n)}+(-1)^{n+1}\frac{4^{k-n-\frac{11}{4}}\Lambda_{f}(k-\frac{3}{2}-n)}{(k-\frac{5}{2}-n)!\Gamma(n+\frac{3}{2})}\right)z^{n},\] _where_ \(\Lambda_{f}(s)\) _is the_ \(L\)_-series of_ \(f\) _(to be defined precisely in the next section)._ The polynomial \(P\) shares some of the defining features of the _period polynomial_ of integral weight forms: it encodes certain values of \(\Lambda_{f}\) inside the interval \([1,k-1]\) and satisfies one of the period relations. Indeed, in Prop. 3.2 we will show that \(P\) matches exactly the \((k-\frac{3}{2})\)-th partial sum in the Taylor expansion of the ("symmetrised" version of the) Eichler cocycle, as extended to all real weights in [1]. Since the period polynomial then equals the value at the Fricke involution of an Eichler cocycle (based at \(i\infty\)), we think of \(P\) as an analogue of the period polynomial for half-integral weight cusp forms. A first, to our knowledge, attempt to develop a cohomology for \(L\)-series of half-integral weight cusp forms was made in [2].
Its main construction encodes \(L\)-values inside \([1,k-1]\) and satisfies one of the period relations too. However, the analogue of \(P\) in [2] belongs to an infinite-dimensional space whereas \(P\) is a polynomial of degree \(\leq k-\frac{5}{2}.\) Another difference is that, as will be seen in the general form of the theorem in the sequel, our polynomial \(P\) can be made to encode values of \(\Lambda_{f}(s)\) at a larger class of finite "arithmetic sequences" inside \([1,k-1]\). The second main result presented here is a special case of Theorem 4.1: **Theorem 1.2**.: _Let \(k\in\frac{1}{2}+\mathbb{Z}\) such that \(k-\frac{5}{2}\in 4\mathbb{N}\). For each cusp form \(f\) of weight \(k\) for \(\Gamma_{0}(4)\) such that \(f(-1/(4z))=(-2iz)^{k}f(z)\) there exists a unique pair \((g,h)\) of cusp forms of (integral) weight \(k-\frac{1}{2}\) and level \(4\) such that, for each \(n=0,\ldots,k-\frac{5}{2}\), we have_ \[\left(\frac{2^{2n}}{n!\Gamma(k-1-n)}+(-1)^{n+1}\frac{2^{k-5-2n}}{(k-\frac{5}{2}-n)!\Gamma(n+\frac{3}{2})}\right)\Lambda_{f}\left(k-\frac{5}{4}-n\right)\\ =\frac{1}{\Gamma(k-1)}\binom{k-\frac{5}{2}}{n}\left(i^{-n+\frac{1}{2}+k}\Lambda_{g}(k-n-\frac{3}{2})+i^{n-\frac{1}{2}-k}\overline{\Lambda_{h}(k-n-\frac{3}{2})}\right) \tag{1.1}\] The main characteristic of the "lift" induced by Theorem 1.2 is that it is compatible, in the sense of eq. (1.1), with the \(L\)-series of the half-integral weight form and that of the corresponding integral weight forms. On the other hand, there does not seem to be any compatibility with the Hecke action, and the "lifted" forms are not explicitly given in terms of Fourier expansions, as was the case for the Shimura lift. The identity of Theorem 1.2 expresses the "critical" values of \(L\)-series of a half-integral weight cusp form directly in terms of \(L\)-values of _integral_ weight forms. If our lift were compatible with the Hecke action, then we could immediately deduce algebraicity results about the \(L\)-values of some half-integral weight Hecke eigenforms from the corresponding results in the integral weight case. However, algebraic properties so similar to those of the integral-weight \(L\)-values are not expected for \(L\)-series of half-integral weight forms. Therefore, additional input is required to derive algebraic information from the translation of half-integral weight \(L\)-values to integral weight \(L\)-values provided by Theorem 1.2. It was mentioned above that our lift is not given explicitly via Fourier expansions. Nevertheless, it can be expressed through the explicit inverse of the Eichler-Shimura map given in [6]. Theorem 5.4 allows us to obtain the integral weight lift of a given form \(f\) of half-integral weight \(k\) from \(k-3/2\) values of the \(L\)-series associated with \(f\). To formulate and prove this result we develop, in Sect. 5, a reformulation of our lift of Theorem 4.1 which may be of independent interest. ## 2. Terminology and notation We first fix the terminology and the notation we will be using. They will mostly be consistent with those of [7] and [2]. Let \(k\in\frac{1}{2}+\mathbb{Z}\) and \(N\in 4\mathbb{N}.\) We let \(\left(\frac{c}{d}\right)\) be the Kronecker symbol. For an odd integer \(d\), we set \[\epsilon_{d}:=\begin{cases}1&\text{ if }d\equiv 1\bmod 4,\\ i&\text{ if }d\equiv 3\bmod 4,\end{cases} \tag{2.1}\] so that \(\epsilon_{d}^{2}=\left(\frac{-1}{d}\right)\). We set the implied logarithm to equal its principal branch so that \(-\pi<\arg(z)\leq\pi\).
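As a small sanity check of (2.1) (in Python, used here purely for illustration), one can verify \(\epsilon_{d}^{2}=\left(\frac{-1}{d}\right)\) via the standard evaluation \(\left(\frac{-1}{d}\right)=(-1)^{(d-1)/2}\) for odd \(d\), which depends only on \(d\bmod 4\); the last lines also illustrate the principal-branch convention for complex powers.

```python
import cmath

def epsilon(d):
    """epsilon_d of eq. (2.1), defined for odd integers d."""
    assert d % 2 != 0
    return 1 if d % 4 == 1 else 1j

for d in (1, 3, 5, 7, -1, -3, 101, 103):
    assert epsilon(d) ** 2 == (-1) ** ((d - 1) // 2), d
print("epsilon_d^2 = (-1/d) verified on the sample.")

# Principal branch: z**s = exp(s Log z) with -pi < arg(z) <= pi, which is
# what Python's complex power implements away from the branch cut.
z, s = 1 + 2j, 2.5
assert abs(z ** s - cmath.exp(s * cmath.log(z))) < 1e-12
```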
We define the action \(|_{k}\) of \(\Gamma_{0}(N)\) on smooth functions \(f\) on \(\mathfrak{H}\) as follows: \[(f|_{k}\gamma)(z):=\left(\frac{c}{d}\right)\epsilon_{d}^{2k}(cz+d)^{-k}f( \gamma z)\qquad\text{ for all }\gamma=\begin{pmatrix}*&*\\ c&d\end{pmatrix}\in\Gamma_{0}(N). \tag{2.2}\] Further, let \(W_{N}=\begin{pmatrix}0&-1/\sqrt{N}\\ \sqrt{N}&0\end{pmatrix}\) and \(\Gamma_{0}(N)^{*}=\langle W_{N},\Gamma_{0}(N)\rangle.\) We set \[(f|_{k}W_{N})(z):=(-i\sqrt{N}z)^{-k}f(-1/(Nz)). \tag{2.3}\] We extend the action to \(\mathbb{C}[\Gamma_{0}(N)^{*}]\) by linearity. For \(n\in\mathbb{Z}\) we let, as usual, \[(f|_{n}\gamma)(z):=(cz+d)^{-n}f(\gamma z)\qquad\text{ for all }\gamma=\begin{pmatrix}*&* \\ c&d\end{pmatrix}\in\operatorname{SL}_{2}(\mathbb{R}). \tag{2.4}\] If \(\Gamma\) is either a subgroup of finite index in \(\operatorname{SL}_{2}(\mathbb{Z})\) or \(\Gamma_{0}^{*}(N)\) for some \(N\), we will denote the space of cusp forms of weight \(k\) for a group \(\Gamma\) by \(S_{k}(\Gamma)\). We let \(T,S\) and \(U\) be the following elements of \(\operatorname{SL}_{2}(\mathbb{Z})\): \[T=\begin{pmatrix}1&1\\ 0&1\end{pmatrix},\qquad S=\begin{pmatrix}0&-1\\ 1&0\end{pmatrix},\qquad U=TS=\begin{pmatrix}1&-1\\ 1&0\end{pmatrix}.\] For \(k\in\frac{1}{2}\mathbb{Z}\) and \(f,g\in S_{k}(\Gamma)\), we define the Petersson scalar product as \[(f,g)=\int_{\Gamma\backslash\mathfrak{H}}f(z)\overline{g(z)}y^{k}\frac{dxdy }{y^{2}}\quad\text{where }z=x+iy.\] For \(k\in\frac{1}{2}\mathbb{Z}\) and a character \(\chi\) on \(\Gamma=\Gamma_{0}(N)\) or \(\Gamma_{0}^{*}(N)\), we set \[f|_{k,\chi}\gamma:=\overline{\chi(\gamma)}f|_{k}\gamma\qquad\text{ for all }\gamma\in\Gamma. \tag{2.5}\] Let \(\lambda\) be the width of the cusp \(\infty\). For integral or half-integral weights \(k\), we attach to \[f(z)=\sum_{n\geq 1}a_{f}(n)e^{\frac{2\pi inz}{\lambda}}\in S_{k}(\Gamma)\] the \(L\)_-series_ \[L_{f}(s)=\sum_{n\geq 1}\frac{a_{f}(n)}{n^{s}}.\] This is absolutely convergent for \(\Re(s)\gg 1\) and can be analytically continued to the entire complex plane. Its "completed" version is \[\Lambda_{f}(s)=\frac{\Gamma(s)\lambda^{s}}{(2\pi)^{s}}\sum_{n\geq 1}\frac{a_{f}(n )}{n^{s}}=\int_{0}^{\infty}f(it)t^{s}\frac{dt}{t}. \tag{2.6}\] It satisfies the functional equation \[\Lambda_{f}(s)=N^{\frac{k}{2}-s}\Lambda_{f|_{k}W_{N}}(k-s),\ \ \text{if}\ k\in\frac{1}{2}+\mathbb{Z}\ \text{and}\ \Lambda_{f}(s)=i^{k}N^{\frac{k}{2}-s}\Lambda_{f|_{k}W_{N}}(k-s),\ \ \text{if}\ k\in \mathbb{Z}. \tag{2.7}\] For \(z,w\) not necessarily non-negative integers set \[\binom{z}{w}:=\frac{\Gamma(z+1)}{\Gamma(w+1)\Gamma(z-w+1)}.\] ## 3. An analogue of the period polynomial Let \(f\) be a cusp form of weight \(k\in\frac{1}{2}+\mathbb{Z}\) for \(\Gamma_{0}(N)\) such that \(f|_{k}W_{N}=f\). Fix \(a\in[0,2k-9/2]\) and set \[P_{a}(z):=\int_{0}^{\infty}f(w)\Phi_{a}(z,w)dw,\] where \[\Phi_{a}(z,w):=\sum_{n=0}^{k-5/2}\left[\binom{k-2}{n}(iNz)^{n}w^{a-n}+\frac{i^ {k}}{\sqrt[4]{N}}\binom{k-2}{n+1/2}(iz)^{n}(-Nw)^{2k-9/2-a-n}\right].\] **Theorem 3.1**.: _Let \(k\in\frac{1}{2}+\mathbb{Z}\) with \(k>5/2\). Suppose that \(f\in S_{k}(\Gamma_{0}^{*}(N))\) and \(a\in[0,2k-9/2]\). With the above notation, set_ \[F_{a}(z):=\int_{z}^{\infty}f(w)\Phi_{a}(z,w)dw.\] 1. 
_For_ \(z\in\mathfrak{H}\) _we have_ \[-i^{k-5/2}F_{a}|_{5/2-k}W_{N}+F_{a}=P_{a}.\] _Therefore, if_ \(\chi\) _is a character on_ \(\Gamma_{0}^{*}(N)\) _such that_ \(\chi(W_{N})=i^{\frac{5}{2}-k}\)_, then_ \[P_{a}|_{5/2-k,\chi}(W_{N}+1)=0. \tag{3.1}\] _In particular, if_ \(4|(k-5/2)\)_, then_ \[P_{a}|_{5/2-k}(W_{N}+1)=0.\] 2. _For each_ \(z\in\mathbb{C}\)_,_ \[P_{a}(z)=i^{a+1}\sum_{n=0}^{k-5/2}\left[\binom{k-2}{n}N^{n}\Lambda_{f}(a+1-n)+\binom{k-2}{n+1/2}N^{2k-a-n-\frac{19}{4}}i^{2n-k+\frac{1}{2}}\Lambda_{f}(2k-7/2-a-n)\right]z^{n}. \tag{3.2}\] Proof.: We first show that \[-(i\sqrt{N}z)^{k-5/2}(-i\sqrt{N}w)^{k-2}\Phi_{a}(W_{N}z,W_{N}w)=\Phi_{a}(z,w). \tag{3.3}\] Indeed, the left-hand side is \[-(i\sqrt{N}z)^{k-5/2}(-i\sqrt{N}w)^{k-2}\sum_{n=0}^{k-5/2}\bigg{[}\binom{k-2}{n}(iz)^{-n}(-Nw)^{n-a}+\frac{i^{k}}{\sqrt[4]{N}}\binom{k-2}{n+1/2}(iNz)^{-n}w^{n-2k+9/2+a}\bigg{]}.\] Observe that \[\binom{k-2}{k-5/2-n}=\binom{k-2}{n+1/2},\] so that the change of variables \(n\mapsto k-5/2-n\) yields \[-(i\sqrt{N}z)^{k-5/2}(-i\sqrt{N}w)^{k-2}\sum_{n=0}^{k-5/2}\bigg{[}\binom{k-2}{n+1/2}(iz)^{n-k+5/2}(-Nw)^{k-5/2-n-a}+\frac{i^{k}}{\sqrt[4]{N}}\binom{k-2}{n}(iNz)^{n-k+5/2}w^{-k+2-n+a}\bigg{]}. \tag{3.4}\] Since \(w\in\mathfrak{H}\), we have, for every \(t\in\mathbb{R}\), \((-w)^{t}=e^{-\pi it}w^{t}.\) A routine computation shows \[-(i\sqrt{N}z)^{k-5/2}(-i\sqrt{N}w)^{k-2}(iz)^{n-k+5/2}(-Nw)^{k-5/2-n-a}=\frac{i^{k}}{\sqrt[4]{N}}(iz)^{n}(-Nw)^{2k-9/2-a-n}\] and similarly: \[-(i\sqrt{N}z)^{k-5/2}(-i\sqrt{N}w)^{k-2}\frac{i^{k}}{\sqrt[4]{N}}(iNz)^{n-k+5/2}w^{2-k-n+a}=(iNz)^{n}w^{a-n}.\] Therefore, (3.4) equals \(\Phi_{a}(z,w)\), establishing (3.3). With (3.3) we now have: \[i^{k-5/2}(F_{a}|_{5/2-k}W_{N})(z) = (i\sqrt{N}z)^{k-5/2}\int_{W_{N}z}^{\infty}f(w)\Phi_{a}(W_{N}z,w)dw \tag{3.6}\] \[= (i\sqrt{N}z)^{k-5/2}\int_{z}^{0}f(W_{N}w)\Phi_{a}(W_{N}z,W_{N}w)d(W_{N}w)\] \[= -(i\sqrt{N}z)^{k-5/2}\int_{z}^{0}f(w)(-i\sqrt{N}w)^{k}\Phi_{a}(W_{N}z,W_{N}w)\frac{dw}{(-i\sqrt{N}w)^{2}}\] \[= \int_{z}^{0}f(w)\Phi_{a}(z,w)dw=F_{a}(z)-P_{a}(z)\] as required. For (3.1), we have from (3.6), \[P_{a}|_{\frac{5}{2}-k,\chi}(1+W_{N})=F_{a}|_{\frac{5}{2}-k,\chi}(1-W_{N})|_{\frac{5}{2}-k,\chi}(1+W_{N})=F_{a}|_{\frac{5}{2}-k,\chi}(1-W_{N}^{2})=0.\] To show (3.2), we expand the defining expression for \(P_{a}\) and use the integral formula for \(\Lambda_{f}\) in (2.6). This theorem and (2.7) show that \(P_{a}\) can be thought of as a "period polynomial" encoding the \(L\)-values of \(f\) at \(a+1,a,a-1,\cdots,a-k+7/2\). Among the various choices of \(a\), the most "canonical" is \(a=k-2\) because then our "period polynomial" \(P_{a}\) becomes entirely consistent with the Eichler cohomology attached to general weight cusp forms, as in [1]. Specifically, the Eichler cocycle on which that cohomology is based is induced by the assignment (Section 2.2 of [1]) to \(f\in S_{k}(\Gamma)\) of the map \[\Gamma\ni\gamma\longrightarrow\psi_{f,\gamma}^{\infty}(z):=\int_{\gamma^{-1}i\infty}^{i\infty}f(w)(w-z)^{k-2}dw,\] where \(\psi_{f,\gamma}^{\infty}\) is defined on the _lower_ half-plane \(\bar{\mathfrak{H}}\). For \(\gamma=W_{N}\), the integral giving \(\psi_{f,W_{N}}^{\infty}\left(x\right)\) is well-defined for \(x>0\) as well. We then have the following relation between our "period polynomial" \(P_{k-2}\) and \(\psi_{f,W_{N}}^{\infty}\left(x\right)\).
**Proposition 3.2**.: _For each \(x>1\), we have_ \[P_{k-2}(ix)=\psi_{f,W_{N}}^{\infty}\left(Nx\right)-i^{\frac{5}{2}-k}\psi_{f,W _{N}}^{\infty}\left(1/x\right)(\sqrt{N}x)^{k-\frac{5}{2}}+O(x^{k-\frac{3}{2}}).\] Proof.: We first note that \[\psi_{f,W_{N}}^{\infty}\left(Nx\right)-i^{\frac{5}{2}-k}\psi_{f,W _{N}}^{\infty}\left(1/x\right)(\sqrt{N}x)^{k-\frac{5}{2}}\\ =i^{k-1}\int_{0}^{\infty}f(it)\left((1+iNxt^{-1})^{k-2}-i^{\frac{ 5}{2}-k}(1+ix^{-1}t^{-1})^{k-2}(\sqrt{N}x)^{k-\frac{5}{2}}\right)t^{k-2}dt. \tag{3.7}\] By Taylor's formula we have, for each \(M\in\mathbb{N}\), \[(1+iNxt^{-1})^{k-2}=\\ \sum_{n=0}^{M-1}\binom{k-2}{n}(iNxt^{-1})^{n}+O_{M}\left(\int_{0} ^{1}(1-y)^{M-1}(1+iNyxt^{-1})^{k-2-M}(iNxt^{-1})^{M}dy\right).\] If \(M>k-2\) the error term is \(O_{M}((xt^{-1})^{M})\). Likewise, again if \(M>k-2\), \((1+ix^{-1}t^{-1})^{k-2}\) is \(O_{M}((x^{-1}t^{-1})^{M})\). Therefore for \(M=k-\frac{3}{2}\), we deduce, for \(x>1\), \[\psi_{f,W_{N}}^{\infty}\left(Nx\right)-i^{\frac{5}{2}-k}\psi_{f,W _{N}}^{\infty}\left(1/x\right)(\sqrt{N}x)^{k-\frac{5}{2}}\\ =i^{k-1}\sum_{n=0}^{k-\frac{5}{2}}\binom{k-2}{n}\left((iNx)^{n}-i ^{\frac{5}{2}-k}(ix^{-1})^{n}(\sqrt{N}x)^{k-\frac{5}{2}}\right)\Lambda_{f}(k-1 -n)+O(x^{k-\frac{3}{2}}) \tag{3.8}\] where the implied constant is independent of \(x\). The change of variables \(n\to k-\frac{5}{2}-n\), followed by (2.7) in the second sum, implies the result. ### Comparison with the period function of [2] Another function encoding special values of \(L_{f}\) is given in [2]. The result in [2] is stated for cusp forms on Hecke groups, but, thanks to the embedding of \(S_{k}(\Gamma_{0}^{*}(N))\) into a space of cusp forms for Hecke groups ((8.1) of [2]) it can be formulated for the modular forms studied in Theorem 3.1: **Proposition 3.3**.: _[_2_]_ _Let \(k\in\frac{1}{2}+\mathbb{Z}\) and \(f(z)=\sum a_{f}(n)e^{2\pi inz}\in S_{k}(\Gamma_{0}(N))\) such that \(f|_{k}W_{N}=f\). For each \(z\in\mathfrak{H}\) set_ \[\mathcal{E}_{f}^{*}(z)=\frac{1}{\sqrt{\pi}}\sum_{n\geq 1}\frac{a_{f}(n)}{n^{k-1}} \left(e^{-2\pi inz}\Gamma\left(\frac{1}{2},-2\pi inz\right)-\frac{1}{\sqrt{-2 \pi inz}}\right).\] _Then, for all \(z\in\mathfrak{H},\)_ \[\left(\mathcal{E}_{f}^{*}|_{2-k}(1-W_{N})\right)(z)=\sum_{n=0}^{k-\frac{3}{2}} \left(\frac{L_{f}(k-n-1)}{\Gamma(n+1)}+\frac{L_{f}(k-n-\frac{1}{2})}{\Gamma(n+ \frac{1}{2})}\left(\frac{2\pi z}{i}\right)^{-\frac{1}{2}}\right)\left(\frac{2 \pi z}{i}\right)^{n}. \tag{3.9}\] The values of \(L_{f}(s)\) at \(k-1,\ldots\frac{3}{2}\) appearing in the right-hand side of (3.9) are also encoded by \(P_{k-2}\). However, the function on the right-hand side of (3.9) does not belong to a finite-dimensional space closed under the action of the group. Further, the "Eichler integral" \(\mathcal{E}_{f}^{*}(z)\) is defined as a series. To complete the comparison of our construction with the "period function" of [2], we show how the main piece of \(\mathcal{E}_{f}^{*}(z)\) can nevertheless be expressed as an integral too. 
**Proposition 3.4**.: _With the notation of Theorem 3.1, for each \(z\in\mathfrak{H}\),_ \[\mathcal{E}_{f}^{*}(z)=\alpha_{k}(-iz)^{\frac{1}{2}}\int_{z}^{i\infty}F_{f}(w)(w-z)^{k-\frac{5}{2}}dw-\frac{1}{\sqrt{-2\pi^{2}iz}}L_{f}\left(k-\frac{1}{2}\right)\] _where_ \[F_{f}(w):=\int_{0}^{\infty}f\left(xw\right)x^{k-\frac{3}{2}}(x+1)^{-\frac{1}{2}}dx\] _and \(\alpha_{k}=(-2\pi i)^{k-1}/(\pi^{\frac{1}{4}}(k-\frac{5}{2})!).\)_ Proof.: We first recall ([5], (8.19.1), (8.19.3)) that, for \(\operatorname{Re}(w)>0,\) \[\Gamma\left(\frac{1}{2},w\right)=w^{\frac{1}{2}}\int_{1}^{\infty}e^{-wt}t^{-\frac{1}{2}}dt.\] Therefore, for \(z\in\mathfrak{H},\) \[\sum_{n\geq 1}\frac{a_{f}(n)}{n^{k-1}}\frac{e^{-2\pi inz}}{\sqrt{\pi}}\Gamma\left(\frac{1}{2},-2\pi inz\right)=\frac{1}{\sqrt{\pi}}\sum_{n\geq 1}\frac{a_{f}(n)}{n^{k-1}}e^{-2\pi inz}\left(-2\pi inz\right)^{\frac{1}{2}}\int_{1}^{\infty}e^{2\pi inzt}t^{-\frac{1}{2}}dt\\ =\left(\frac{-2\pi iz}{\sqrt{\pi}}\right)^{\frac{1}{2}}\int_{1}^{\infty}t^{-\frac{1}{2}}\left(\sum_{n\geq 1}\frac{a_{f}(n)}{n^{k-\frac{3}{2}}}e^{2\pi inz(t-1)}\right)dt\] By the theory of usual (integral weight) Eichler integrals, followed by the changes of variables \(x=t-1\) and \(w_{1}=w/x\), this equals \[\left(\frac{-2\pi iz}{\sqrt{\pi}}\right)^{\frac{1}{2}}\int_{1}^{\infty}t^{-\frac{1}{2}}\frac{(-2\pi i)^{k-\frac{3}{2}}}{\Gamma(k-\frac{3}{2})}\int_{z(t-1)}^{i\infty}f(w)\left(w-z(t-1)\right)^{k-\frac{5}{2}}dwdt\\ =\alpha_{k}z^{\frac{1}{2}}\int_{0}^{\infty}x^{k-\frac{3}{2}}(x+1)^{-\frac{1}{2}}\int_{z}^{i\infty}f(xw_{1})\left(w_{1}-z\right)^{k-\frac{5}{2}}dw_{1}dx\] A change in the order of integration implies the formula. ## 4. An Eichler cocycle In this section, we will first use Theorem 3.1 to construct an Eichler cocycle, with coefficients in the space \(\mathbb{C}_{k-5/2}[z]\). We maintain the notation and assumptions of the last section. Since \(4|N\), \(\Gamma_{0}(N)\) is torsion-free and hence free on a set of generators \(\{\gamma_{j}\}_{j=1}^{2g+h-1}\), where \(\gamma_{1}=T\), \(g\) is the genus and \(h\) the number of inequivalent cusps of \(\Gamma_{0}(N)\) ([3], Prop. 2.4). We also have \(W_{N}\Gamma_{0}(N)=\Gamma_{0}(N)W_{N}\) and thus \(\Gamma_{0}^{*}(N)=\langle\Gamma_{0}(N),W_{N}\rangle\) is generated by \(\{\gamma_{j}\}\cup\{W_{N}\}\) with the only relation \(W_{N}^{4}=1\). From the above, we first deduce that there is always a character \(\chi\) on \(\Gamma_{0}^{*}(N)\) as prescribed in Theorem 3.1, that is, one with \(\chi(W_{N})=i^{5/2-k}\). Indeed, since \(i^{4(5/2-k)}=1\), the character induced by the assignment \(\chi(W_{N})=i^{5/2-k}\) and \(\chi(\gamma_{i})=1\) for \(i=1,\dots,2g+h-1\) is well-defined. Further, let, for \(a\in[0,2k-\frac{9}{2}]\), \(P_{a}\) be the polynomial defined in (3.2). Consider the polynomial \[\hat{P}_{a}(z):=P_{a}(2z/\sqrt{N}).\] It is then easy to deduce from Theorem 3.1 that \[\hat{P}_{a}|_{\frac{5}{2}-k,\chi}(W_{4}+1)=0 \tag{4.1}\] for any character \(\chi\) on \(\Gamma_{0}^{*}(4)\) such that \(\chi(W_{4})=i^{\frac{5}{2}-k}\). (Since there is no risk of confusion, we use the same notation for both characters.) Recall that \(\Gamma_{0}(4)\) is freely generated by \[T,\begin{pmatrix}1&0\\ 4&1\end{pmatrix},\] and thus \(\Gamma_{0}^{*}(4)\) is generated by \(T\) and \(W_{4}\) with the only relation \(W_{4}^{4}=1\).
We consider the map \[\hat{\pi}_{f}:\Gamma_{0}^{*}(4)\to\mathbb{C}_{k-5/2}[z]\] induced by the \(1\)-cocycle condition from the values \[\hat{\pi}_{f}(W_{4})=\hat{P}_{a}\qquad\text{ and }\hat{\pi}_{f}(T)=0.\] Then \(\hat{\pi}_{f}\) is well-defined since \[\hat{\pi}_{f}(W_{4}^{2})=\hat{\pi}_{f}(W_{4})|_{\frac{5}{2}-k,\chi}W_{4}+\hat{\pi}_{f}(W_{4})=\hat{P}_{a}|_{\frac{5}{2}-k}W_{4}+\hat{P}_{a}=0\] by (4.1). This cocycle induces a non-trivial class in \(H^{1}_{\text{par}}(\Gamma_{0}^{*}(4),\mathbb{C}_{k-5/2}[z])\) where the action of \(\Gamma_{0}^{*}(4)\) on \(\mathbb{C}_{k-5/2}[z]\) is \(|_{5/2-k,\chi}\). On the other hand, according to the Eichler-Shimura isomorphism, there is an isomorphism \[\phi:S_{k-1/2}(\Gamma_{0}^{*}(4),\chi)\oplus\overline{S_{k-1/2}(\Gamma_{0}^{*}(4),\bar{\chi})}\longrightarrow H^{1}_{\text{par}}(\Gamma_{0}^{*}(4),\mathbb{C}_{k-5/2}[z])\] induced by the assignment of the following map \(\phi(g,\bar{h}):\Gamma_{0}^{*}(4)\to\mathbb{C}_{k-5/2}[z]\) to \((g,\bar{h})\): \[\phi(g,\bar{h})(\gamma)=\int_{\infty}^{\gamma^{-1}\infty}g(w)(w-z)^{k-5/2}dw+\int_{\infty}^{\gamma^{-1}\infty}\overline{h(w)}(\bar{w}-z)^{k-5/2}d\bar{w}.\] Therefore, there are unique \(g\in S_{k-1/2}(\Gamma_{0}^{*}(4),\chi)\), \(h\in S_{k-1/2}(\Gamma_{0}^{*}(4),\bar{\chi})\) and a polynomial \(Q\) in \(\mathbb{C}_{k-5/2}[z]\) such that \[\hat{\pi}_{f}(\gamma)=\phi(g,\bar{h})(\gamma)+Q|_{\frac{5}{2}-k,\chi}(\gamma-1)\] for all \(\gamma\in\Gamma_{0}^{*}(4)\). Since \(\hat{\pi}_{f}(T)=\phi(g,\bar{h})(T)=0\), the polynomial \(Q\) should vanish too, because it would otherwise have infinitely many zeros. Therefore, for each \(f\in S_{k}(\Gamma_{0}^{*}(4),\chi)\), there are unique \(g\in S_{k-1/2}(\Gamma_{0}^{*}(4),\chi)\), \(h\in S_{k-1/2}(\Gamma_{0}^{*}(4),\bar{\chi})\) such that, for all \(\gamma\in\Gamma_{0}^{*}(4)\), \[\hat{\pi}_{f}(\gamma)=\phi(g,\bar{h})(\gamma). \tag{4.2}\] An application of the binomial theorem combined with the integral form of \(\Lambda_{g}(s),\Lambda_{h}(s)\) implies that \(\phi(g,\bar{h})(W_{4})\) can be expressed as a polynomial whose coefficients involve the critical \(L\)-values of \(g,\bar{h}\). In our case this gives \[\phi(g,\bar{h})(W_{4})\\ =-\sum_{n=0}^{k-5/2}\binom{k-5/2}{n}(-z)^{n}\left(i^{k+n-3/2}\Lambda_{g}(k-n-3/2)+i^{3/2-n-k}\overline{\Lambda_{h}(k-n-3/2)}\right). \tag{4.3}\] Comparing coefficients with the expression for \(\hat{\pi}_{f}(W_{4})=\hat{P}_{a}\) in (3.2), we deduce **Theorem 4.1**.: _Let \(k\in\frac{1}{2}+\mathbb{Z}\) with \(k>\frac{5}{2}.\) For each cusp form \(f\) of weight \(k\) for \(\Gamma_{0}(N)\) such that \(f(-1/(Nz))=(-i\sqrt{N}z)^{k}f(z)\), a character \(\chi\) on \(\Gamma_{0}^{*}(4)\) such that \(\chi(W_{4})=i^{\frac{5}{2}-k}\) and for each \(a\in[0,2k-\frac{9}{2}]\), there exists a unique pair \((g,h)\) of cusp forms of (integral) weight \(k-\frac{1}{2}\), level \(4\) and character \(\chi\) such that, for each \(n=0,\ldots,k-\frac{5}{2}\), we have_ \[i^{a+1}2^{n}\left(\binom{k-2}{n}N^{\frac{n}{2}}\Lambda_{f}(a+1-n)+i^{2n+\frac{1}{2}-k}\binom{k-2}{n+\frac{1}{2}}N^{2k-a-\frac{3n}{2}-\frac{19}{4}}\Lambda_{f}(2k-\frac{7}{2}-a-n)\right)\\ =\binom{k-\frac{5}{2}}{n}\left(i^{-n+\frac{1}{2}+k}\Lambda_{g}(k-n-\frac{3}{2})+i^{n-\frac{1}{2}-k}\overline{\Lambda_{h}(k-n-\frac{3}{2})}\right) \tag{4.4}\] Theorem 1.2 is a special case of this for \(N=4\), \(k\) such that \(4|(k-\frac{5}{2})\) and \(a=k-\frac{9}{4}\). ### A special case In low dimensions, Theorem 4.1 can assume a simpler form.
For example, we can specialise to \(k\) such that \(\dim S_{k-1/2}(\Gamma_{0}^{*}(4))=1\) and \(a=k-9/4\). Then, if \(g\) is a normalised eigenform in \(S_{k-1/2}(\Gamma_{0}^{*}(4))\), which, in particular, implies that \(g\) has real Fourier coefficients at infinity, Theorem 4.1 becomes **Corollary 4.2**.: _Let \(k\in\frac{5}{2}+4\mathbb{N}\) such that \(S_{k-1/2}(\Gamma_{0}^{*}(4))\) is \(1\)-dimensional, spanned by a normalised cusp form \(g\). For each cusp form \(f\) of weight \(k\) for \(\Gamma_{0}(N)\) such that \(f(-1/(Nz))=(-i\sqrt{N}z)^{k}f(z)\) there exists a \(\lambda_{f}\in\mathbb{C}\) such that, for each \(n=0,\ldots,k-\frac{5}{2}\), we have_ \[\Lambda_{f}\left(k-\frac{5}{4}-n\right)=C_{k,N,n}\Lambda_{g}\left(k-\frac{3}{2}-n\right) \tag{4.5}\] _where_ \[C_{k,N,n}=\binom{k-\frac{5}{2}}{n}2^{-n}\left(i^{\frac{7}{4}-n}+\lambda_{f}i^{n-k+\frac{3}{4}}\right)\left[\binom{k-2}{n}N^{\frac{n}{2}}+(-1)^{n+1}\binom{k-2}{n+\frac{1}{2}}N^{k-\frac{5}{2}-\frac{3n}{2}}\right]^{-1}.\] **Remark.** In view of the corollary, it is tempting to try to deduce algebraic dependence relations for \(L\)-values of \(f\) from the corresponding properties for integral weight forms (e.g. via Manin's periods' theorem). However, the number of \(L\)-values of \(f\) whose algebraic dependence we would like to derive is the same as the number of independent \(L\)-values of \(g\) in the RHS of (4.5). In the case of \(k=13/2\) for instance, the two values for odd \(n\in\{0,\ldots,4\}\) are essentially the same because of the functional equation of \(\Lambda_{f}(s)\), leaving us with a single value of \(\Lambda_{f}(s)\), trivially accounted for by the single constant \(\lambda_{f}\). ## 5. An explicit form of the lift The lift described in Theorem 4.1 can be made explicit via the explicit inverse of the Eichler-Shimura map found in [6]. To this end we reformulate the construction leading to Theorem 4.1 in a way that parallels the decomposition of the classical period polynomial into an "even" and "odd" part. For simplicity, in this section, we will be working with weight \(k\) such that \(4|(k-\frac{5}{2})\). We will first review some cohomological constructions used in [6] and some notation from [4] which we will then apply to half-integral weight cusp forms. We first consider the Hecke group \(H(2)\) generated by the images of \(T^{2}\) and \(S\) under the natural projection of \(\operatorname{SL}_{2}(\mathbb{Z})\) onto \(\operatorname{PSL}_{2}(\mathbb{Z})\). Since \(4|(k-\frac{5}{2})\), it will be legitimate to use the same notation for elements of \(\operatorname{SL}_{2}(\mathbb{Z})\) and their images in \(\operatorname{PSL}_{2}(\mathbb{Z})\). The group \(H(2)\) has only the relation \(S^{2}=1\) (see Sect. 5 of [2] for a summary of some basic properties of the Hecke groups). It is easy to see that a set of representatives of \(H(2)\backslash\operatorname{PSL}_{2}(\mathbb{Z})\) is \[\{1,T,U\},\qquad\text{where }U=TS.\] Let \(x\in\operatorname{PSL}_{2}(\mathbb{Z})\). If \(H(2)x=H(2)\) (resp. \(H(2)T,H(2)U\)), set \(u(x)=1\) (resp. \(T,U\)). Some values of \(u\) that will be used tacitly below are: \(u(T^{-1})=u(T),u(TST^{-1})=U.\) For each \(x,g\in\operatorname{PSL}_{2}(\mathbb{Z})\), set \[\kappa_{x,g}:=u(x)gu(xg)^{-1}. \tag{5.1}\] One notices that, if \(x\in H(2)\), \(\kappa_{x,g}=gu(g)^{-1}\) and, if \(x,g\in H(2)\), \(\kappa_{x,g}=g\).
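The coset map \(u\) and the cocycle \(\kappa_{x,g}\) of (5.1) are straightforward to compute in practice. Below is a minimal Python sketch; it rests on the standard fact (an assumption not spelled out in the text) that \(H(2)=\langle T^{2},S\rangle\) is the theta group, whose elements are exactly the matrices congruent to the identity or to \(S\) modulo \(2\).

```python
import numpy as np

T = np.array([[1, 1], [0, 1]])
S = np.array([[0, -1], [1, 0]])
U = T @ S                      # U = TS, as in the text
I2 = np.eye(2, dtype=int)

def inv(m):
    """Inverse in SL_2(Z): [[a, b], [c, d]] -> [[d, -b], [-c, a]]."""
    a, b, c, d = m.ravel()
    return np.array([[d, -b], [-c, a]])

def in_H2(m):
    """Membership in H(2): congruent to 1 or S mod 2 (theta-group criterion,
    assumed here; it is insensitive to the sign ambiguity in PSL_2)."""
    r = np.mod(m, 2)
    return np.array_equal(r, np.mod(I2, 2)) or np.array_equal(r, np.mod(S, 2))

def u(x):
    """The coset representative in {1, T, U} with H(2) x = H(2) u(x)."""
    for name, rep in (("1", I2), ("T", T), ("U", U)):
        if in_H2(x @ inv(rep)):
            return name, rep
    raise ValueError("no representative found")

def kappa(x, g):
    """kappa_{x, g} = u(x) g u(x g)^{-1}, eq. (5.1)."""
    return u(x)[1] @ g @ inv(u(x @ g)[1])

# The values quoted in the text: u(T^{-1}) = T and u(T S T^{-1}) = U.
print(u(inv(T))[0], u(T @ S @ inv(T))[0])  # -> T U
print(kappa(inv(T), S))  # kappa_{T^{-1}, S}; here it equals the identity
```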
Further, if \(H(2)x=H(2)x^{\prime}\), then \(\kappa_{x,g}=\kappa_{x^{\prime},g}\) and thus \(\kappa_{x,g}\) is well-defined as a function of \((H(2)\backslash\operatorname{PSL}_{2}(\mathbb{Z}))\times\operatorname{PSL}_{2 }(\mathbb{Z}).\) Finally, \(\kappa\) satisfies the relation \[\kappa_{x,g_{1}g_{2}}=\kappa_{x,g_{1}}\kappa_{xg_{1},g_{2}} \tag{5.2}\] Next, we consider the space \[I_{k}:=\operatorname{Ind}_{H(2)}^{\operatorname{PSL}_{2}(\mathbb{Z})}( \mathbb{C}_{k-5/2}[z])=\{f:H(2)\backslash\operatorname{PSL}_{2}(\mathbb{Z}) \to\mathbb{C}_{k-5/2}[z]\}.\] Since \(\operatorname{PSL}_{2}(\mathbb{Z})\) acts on \(\mathbb{C}_{k-5/2}[z]\), there is an action of \(\operatorname{PSL}_{2}(\mathbb{Z})\) on \(I_{k}\), given for each \(v\in I_{k}\) by \[(v||g)(x):=v(xg^{-1})|_{\frac{5}{2}-k}g\quad\text{for all }x\in H(2)\backslash \operatorname{PSL}_{2}(\mathbb{Z}),g\in\operatorname{PSL}_{2}(\mathbb{Z}).\] Let \(\sigma:H(2)\to I_{k}\) be a \(1\)-cocycle with values in \(I_{k}\) such that \(\sigma(T)=0\). Then, as in the case of the classical period polynomial, one sees directly by the cocycle relation that \(\sigma(S)\) belongs to the space \[W:=\{v\in I_{k};v||(S+1)=v||(U^{2}+U+1)=0\}\] called the space of _period polynomials_ in [6]. (Note that the condition \(v||(-1)=v\) included in the definition of that space in [6] is not needed here because \(k-\frac{5}{2}\) is even.) We also define the following subspace of \(W\): \[C:=\{P||(1-S);P\in I_{k},P||T=P\}.\] As in the classical case again, the spaces \(W\) and \(C\) can be decomposed as a direct sum of the \(\pm\)-eigenspaces of a certain involution. Specifically, let \(\epsilon=\left(\begin{smallmatrix}-1&0\\ 0&1\end{smallmatrix}\right)\) and let \(\epsilon\) act on a \(v\in I_{k}\) so that \[(v||\epsilon)(x):=v(\epsilon x\epsilon)|_{\frac{5}{2}-k}\epsilon\quad\text{ for all }x\in H(2)\backslash\operatorname{PSL}_{2}(\mathbb{Z}).\] Then \(I_{k}\) decomposes into \(\pm\)-eigenspaces under the action of \(\epsilon\), denoted \(I_{k}^{\pm}\). Further, since \(W\) (resp. \(C\)) is closed under the action of \(\epsilon\), it also decomposes into \(\pm\)-eigenspaces \(W^{\pm}\) (resp. \(C^{\pm}\)). We denote that \(\pm\)-component of \(\sigma(S)\) by \(\sigma(S)^{\pm}\), i.e., \[\sigma(S)^{\pm}=\frac{1}{2}\left(\sigma(S)\pm\sigma(S)||\epsilon\right)\in W^ {\pm}.\] The Eichler-Shimura theorem can be formulated as a pair of isomorphisms to (quotients of) \(W^{\pm}\). We present it here in the arrangement of [6] (Theorem 2.1) as applied to our setting. First, for each cusp form \(g\) of weight \(k-\frac{1}{2}\) for \(H(2)\), we let \(\rho_{g}\) be the element of \(I_{k}\) defined by \[\rho_{g}(\gamma)(z)=\int_{0}^{i\infty}\left(g\Big{|}_{k-\frac{1}{2}}\gamma \right)(w)(w-z)^{k-\frac{5}{2}}dw\quad\text{for all }\gamma\in H(2)\backslash \operatorname{PSL}_{2}(\mathbb{Z}). \tag{5.3}\] Each polynomial \(\rho_{g}(\gamma)(z)\) can be expanded as \[\rho_{g}(\gamma)(z)=\sum_{n=0}^{k-\frac{5}{2}}(-1)^{n}\binom{k-\frac{5}{2}}{ n}r_{\gamma,n}(g)z^{k-\frac{5}{2}-n},\quad\text{for }r_{\gamma,n}(g)=i^{n+1}\int_{0}^{\infty}\left(g\Big{|}_{k-\frac{1}{2}}\gamma \right)(it)t^{n}dt \tag{5.4}\] Further, it can be shown that \(\rho_{g}\in W\) and hence that it can be written as a sum of its \(+\) and \(-\) components \(\rho_{g}^{\pm}\in W^{\pm}\). 
With this notation we have **Theorem 5.1**.: _([6], Theorem 2.1 (Eichler-Shimura)) The assignments \(g\to\rho_{g}^{+}\) and \(g\to\rho_{g}^{-}\) induce isomorphisms_ \[\rho^{+}:S_{k-\frac{1}{2}}(H(2))\cong W^{+}/C^{+}\quad\text{and }\rho^{-}:S_{k-\frac{1}{2}}(H(2))\cong W^{-}/C^{-}.\] Since \(H(2)\) is \(\operatorname{SL}_{2}(\mathbb{Z})\)-conjugate to \(\Gamma_{0}(2)\), Prop. 4.4 of [6] shows that \[C_{H(2)}^{-}\simeq(C_{\Gamma_{0}(2)})^{-}=\{0\}.\] Let \(k\in\frac{1}{2}+\mathbb{Z}\) with \(k>\frac{5}{2}\) and \(4|(k-\frac{5}{2})\). We will construct an element of \(W^{-}\) based on a cusp form \(f\) of weight \(k\) for \(\Gamma_{0}(N)\) such that \(f(-1/(Nz))=(-i\sqrt{N}z)^{k}f(z)\). In the last section, we defined the map \(\hat{\pi}_{f}:\Gamma_{0}^{*}(4)\to\mathbb{C}_{k-5/2}[z]\) induced by the \(1\)-cocycle condition from the values \(\hat{\pi}_{f}(W_{4})=\hat{P}_{a}\) and \(\hat{\pi}_{f}(T)=0.\) Since \(4|(k-\frac{5}{2})\), there is a well-defined parabolic \(1\)-cocycle \(\pi_{f}^{\prime}:H(2)\to\mathbb{C}_{k-5/2}[z]\) induced by the \(1\)-cocycle relation from the values \[\pi_{f}^{\prime}(S)(z)=\hat{P}_{a}(z/2)\qquad\text{ and }\pi_{f}^{\prime}(T)=0.\] This cocycle induces a non-trivial class in \(H^{1}_{\text{par}}(H(2),\mathbb{C}_{k-5/2}[z])\) where the action of \(H(2)\) on \(\mathbb{C}_{k-5/2}[z]\) is \(|_{5/2-k}\). We now define a \(1\)-cocycle of \(\operatorname{PSL}_{2}(\mathbb{Z})\) with coefficients in \(I_{k}\) induced by the cocycle \(\pi_{f}^{\prime}\). For each \(g\in\operatorname{PSL}_{2}(\mathbb{Z})\) let \(\tilde{\pi}_{f}(g)\) be the element of \(I_{k}\) such that \[\tilde{\pi}_{f}(g)(x):=\pi_{f}^{\prime}(\kappa_{x,g^{-1}}^{-1})|_{\frac{5}{2}-k }u(x)\quad\text{for all }x\in H(2)\backslash\operatorname{PSL}_{2}(\mathbb{Z}). \tag{5.5}\] By Shapiro's lemma or, directly with (5.1), we can see that it is indeed a parabolic \(1\)-cocycle. In particular, there is a \(v_{T}\in I_{k}\) such that \(\tilde{\pi}_{f}(T)=v_{T}||(T-1)\), namely the map with values \[v_{T}(1)=v_{T}(T)=0\quad\text{and }v_{T}(U)=\pi^{\prime}_{f}(S)=\hat{P}_{a}( \cdot/2). \tag{5.6}\] Indeed, directly with (5.5), we have, for each \(x\in\{1,T\}\): \[\tilde{\pi}_{f}(T)(x)=0=v_{T}(xT^{-1})|_{\frac{5}{2}-k}T-v_{T}(x)\] and, with the cocycle relation for \(\pi^{\prime}_{f}\), \[\tilde{\pi}_{f}(T)(U)=\pi^{\prime}_{f}(UTU^{-1})|_{\frac{5}{2}-k}U=\pi^{\prime} _{f}(U)|_{\frac{5}{2}-k}(T-1)=\pi^{\prime}_{f}(S)|_{\frac{5}{2}-k}(T-1)\] We finally define a cocycle \(\pi_{f}\) differing from \(\tilde{\pi}_{f}\) by a coboundary and vanishing at \(T\): For all \(\gamma\in\operatorname{PSL}_{2}(\mathbb{Z})\), we set \[\pi_{f}(\gamma):=\tilde{\pi}_{f}(\gamma)-v_{T}||(\gamma-1). \tag{5.7}\] Then, as mentioned above, \(\pi_{f}(S)\in W\) and \(\pi_{f}(S)=\pi^{+}_{f}(S)+\pi^{-}_{f}(S)\), for \[\pi_{f}(S)^{\pm}=\frac{1}{2}\left(\pi_{f}(S)\pm\pi_{f}(S)||\epsilon\right)\in W ^{\pm}.\] We can now state and prove another version of our lift of half-integral weight forms. **Proposition 5.2**.: _Let \(k\in\frac{1}{2}+\mathbb{Z}\) such that \(k>\frac{5}{2}\) and \(4|(k-\frac{5}{2})\). 
For each \(f\in S_{k}(\Gamma_{0}(N))\) such that \(f|_{k}W_{N}=f\) and for each \(a\in[0,2k-\frac{9}{2}]\), there is a \(g\in S_{k-\frac{1}{2}}(\Gamma_{0}(4))\) such that \(g|_{k-\frac{1}{2}}W_{4}=g\) and, for all odd \(n\in\{1,\dots,k-\frac{7}{2}\}\),_ \[i^{a+1}\left(\binom{k-2}{n}N^{\frac{n}{2}}\Lambda_{f}(a+1-n)+(-1)^{n+1}\binom{k-2}{n+\frac{1}{2}}N^{2k-a-\frac{3n}{2}-\frac{19}{4}}\Lambda_{f}(2k-\frac{7}{2}-a-n)\right)\\ =\binom{k-\frac{5}{2}}{n}i^{-1-n}2^{k-\frac{3}{2}-n}\Lambda_{g}\left(k-\frac{3}{2}-n\right) \tag{5.8}\] Proof.: By Theorem 5.1, there exists a \(g_{1}\in S_{k-\frac{1}{2}}(H(2))\) such that \[\pi_{f}(S)^{-}=\rho_{g_{1}}^{-}. \tag{5.9}\] Since \(H(2)\epsilon T\epsilon=H(2)T\) and \(H(2)\epsilon U\epsilon=H(2)T^{-1}S^{-1}=H(2)U\), we see that for each \(x\in H(2)\backslash\operatorname{PSL}_{2}(\mathbb{Z})\), \(\pi_{f}(S)^{-}(x)\) (resp. \(\rho_{g_{1}}^{-}(x)\)) is the part of the polynomial \(\pi_{f}(S)(x)\) (resp. \(\rho_{g_{1}}(x)\)) corresponding to its odd powers. By the definition of \(\pi_{f}\), \(\pi_{f}(S)(1)=P_{a}(z/\sqrt{N})\) and (3.2) shows that the \(n\)-th coefficient of \(\pi_{f}(S)(1)\) equals the left-hand side of (5.8). By (5.4), the \(n\)-th coefficient of \(\rho_{g_{1}}(1)\) equals \[(-1)^{n}\binom{k-\frac{5}{2}}{n}i^{1-n}\Lambda_{g_{1}}\left(k-\frac{3}{2}-n\right).\] Since, by the analogue of the proposition in Section 8 of [2] for integral weights, the function \(g\) such that \(g(z):=g_{1}(2z)\) is a weight \(k-\frac{1}{2}\) cusp form for \(\Gamma_{0}^{*}(4)\) and \(\Lambda_{g_{1}}(s)=2^{s}\Lambda_{g}(s)\), we obtain the expression in the right-hand side of (5.8). With (5.9), we deduce the result. We will identify explicitly the integral weight cusp form to which a half-integral weight form is lifted according to Prop. 5.2. We will use a theorem of [6] providing explicit inverses for the Eichler-Shimura maps \(\rho^{\pm}\). To state it, we introduce some additional notation. Firstly, for each \(\gamma\in H(2)\backslash\operatorname{PSL}_{2}(\mathbb{Z})\), the polynomials \(\rho_{g}^{\pm}(\gamma)\) can be written as \[\rho_{g}^{\pm}(\gamma)(z)=\sum_{n=0}^{k-\frac{5}{2}}(-1)^{n}\binom{k-\frac{5}{2}}{n}r_{\gamma,n}^{\pm}(g)z^{k-\frac{5}{2}-n}\] where \(r_{\gamma,n}^{+}(g)=0\) (resp. \(r_{\gamma,n}^{-}(g)=0\)) when \(n\) is odd (resp. even).
Then, for each \(\gamma\in H(2)\backslash\operatorname{PSL}_{2}(\mathbb{Z})\) and \(0\leq n\leq k-\frac{5}{2}\) we denote by \(R_{\gamma,n}^{\pm}\) the unique element of \(S_{k-\frac{1}{2}}(H(2))\) such that \[r_{\gamma,n}^{\pm}(h)=(h,R_{\gamma,n}^{\pm})\quad\text{for all }h\in S_{k-\frac{1}{2}}(H(2)),\] and we set \[s_{\gamma,n}^{\pm}(h)=\sum_{j=0}^{n}\binom{n}{j}(-1)^{n-j}r_{\gamma,j}^{\pm}(h).\] With this notation, Theorem 6.1 of [6] reads, in our case, as **Theorem 5.3**.: _[_6_]_ _For each \(g_{1}\in S_{k-\frac{1}{2}}(H(2))\), we have_ \[g_{1}=\frac{2}{3}(2i)^{1-k}\sum_{\gamma\in H(2)\backslash\operatorname{PSL}_{2}(\mathbb{Z})}\sum_{n=0}^{k-\frac{5}{2}}\binom{k-\frac{5}{2}}{n}s_{\gamma U^{-1},n}^{-}(g_{1})R_{\gamma,n}^{+}.\] To use this theorem, we first compute \(\pi_{f}(S)(\gamma)\) for \(\gamma\in H(2)\backslash\operatorname{PSL}_{2}(\mathbb{Z})\) according to (5.7), (5.6) and the \(1\)-cocycle relation of \(\pi_{f}^{\prime}\): \[\pi_{f}(S)(1)=\pi_{f}^{\prime}(\kappa_{1,S^{-1}}^{-1})-v_{T}(S^{-1})|_{\frac{5}{2}-k}S+v_{T}(1)=\pi_{f}^{\prime}(S)\] \[\pi_{f}(S)(T)=\pi_{f}^{\prime}(\kappa_{T,S^{-1}}^{-1})|_{\frac{5}{2}-k}T-v_{T}(TS^{-1})|_{\frac{5}{2}-k}S+0=\pi_{f}^{\prime}(1)-\pi_{f}^{\prime}(S)|_{\frac{5}{2}-k}S=\pi_{f}^{\prime}(S)\] \[\pi_{f}(S)(U)=\pi_{f}^{\prime}(\kappa_{U,S^{-1}}^{-1})|_{\frac{5}{2}-k}U-v_{T}(US^{-1})|_{\frac{5}{2}-k}S+v_{T}(U)=\pi_{f}^{\prime}(T)-0+\pi_{f}^{\prime}(S)=\pi_{f}^{\prime}(S). \tag{5.10}\] Therefore, if \(g_{1}\) is the weight \(k-\frac{1}{2}\) cusp form for \(H(2)\) induced from \(f\) by the proof of Prop. 5.2, then (5.9) implies that, for each \(\gamma\in H(2)\backslash\operatorname{PSL}_{2}(\mathbb{Z})\), the \(n\)-th coefficient of \(\rho_{g_{1}}^{-}(\gamma)\) (for \(n\) odd) equals the \(n\)-th coefficient of \(\pi_{f}^{\prime}(S)=P_{a}(z/\sqrt{N})\). Therefore, with (5.9) and (3.2), we deduce that, for each \(\gamma\in H(2)\backslash\operatorname{PSL}_{2}(\mathbb{Z})\) and odd \(j\), \[-\binom{k-\frac{5}{2}}{j}r_{\gamma,k-\frac{5}{2}-j}^{-}(g_{1})=i^{a+1}\left(\binom{k-2}{j}N^{\frac{j}{2}}\Lambda_{f}(a+1-j)+(-1)^{j+1}\binom{k-2}{j+\frac{1}{2}}N^{2k-a-\frac{3j}{2}-\frac{19}{4}}\Lambda_{f}(2k-\frac{7}{2}-a-j)\right).\] This, in turn, implies that, for each \(n=0,\ldots k-\frac{5}{2}\), \[s_{n}^{-}:=s_{\gamma,n}^{-}(g_{1})=i^{a+1+2n}\sum_{\begin{subarray}{c}j=0\\ j\text{ odd}\end{subarray}}^{n}\binom{n}{j}\binom{k-\frac{5}{2}}{j}^{-1}\\ \times\left(\binom{k-2}{j+\frac{1}{2}}N^{\frac{1}{2}(k-\frac{5}{2}-j)}\Lambda_{f}(a+\frac{7}{2}-k+j)+\binom{k-2}{j}N^{\frac{k}{2}-a-1+\frac{3j}{2}}\Lambda_{f}\left(k-1-a+j\right)\right). \tag{5.11}\] Recall the cusp form \(g(z)=g_{1}(2z)\) of weight \(k-\frac{1}{2}\) for \(\Gamma_{0}^{*}(4)\). Then, with Theorem 5.3, we deduce **Theorem 5.4**.: _Let \(k\in\frac{1}{2}+\mathbb{Z}\) such that \(k>\frac{5}{2}\) and \(4|(k-\frac{5}{2})\). For each \(f\in S_{k}(\Gamma_{0}(N))\) such that \(f|_{k}W_{N}=f\) and for each \(a\in[0,2k-\frac{9}{2}]\), the "lift" \(g\in S_{k-\frac{1}{2}}(\Gamma_{0}(4))\) of Prop. 5.2 is given by_ \[g=2(2i)^{1-k}\sum_{n=0}^{k-\frac{5}{2}}\binom{k-\frac{5}{2}}{n}s_{n}^{-}R_{n}^{+}\] _where \(s_{n}^{-}\) is given by (5.11) and \(R_{n}^{+}(z):=R_{\gamma,n}^{+}(2z)\)._
2303.02990
Spinor domain wall and test fermions on an arbitrary domain wall
We consider a spinor domain wall embedded in a five-dimensional spacetime with a nondiagonal metric. The corresponding plane symmetric solutions for linear and nonlinear spinor fields with different parameters are obtained. It is shown that in the general case the metric functions and spinor fields do not possess $Z_2$ symmetry with respect to the domain wall. We study the angular momentum density of the domain wall arising because of the presence of the spinor field creating the wall. The properties of test fermions located on an arbitrary domain wall are considered. The concepts of the ``second spin'' (arising due to the properties of the Lorentz group generators in a five-dimensional spacetime) and of the ``second magnetic field'' (representing the components $F_{i 5}$ of the electromagnetic field five-tensor) are introduced. We find eigenspinors of the ``second spin'' and show that some of them represent the Bell states. In the nonrelativistic limit we derive the Pauli equation for the test fermions on the domain wall which contains an extra term describing the interaction of a spin-$1/2$ particle with the ``second magnetic field''; this allows the possibility of an experimental verification of the existence of extra dimensions.
Vladimir Dzhunushaliev, Vladimir Folomeev, Dina Zholdakhmet
2023-03-06T09:35:19Z
http://arxiv.org/abs/2303.02990v2
# Spinor domain wall and test fermions on an arbitrary domain wall ###### Abstract We consider a spinor domain wall embedded in a five-dimensional spacetime with a nondiagonal metric. The corresponding plane symmetric solutions for linear and nonlinear spinor fields with different parameters are obtained. It is shown that in the general case the metric functions and spinor fields do not possess \(Z_{2}\) symmetry with respect to the domain wall. We study the angular momentum density of the domain wall arising because of the presence of the spinor field creating the wall. The properties of test fermions located on an arbitrary domain wall are considered. The concepts of the "second spin" (arising due to the properties of the Lorentz group generators in a five-dimensional spacetime) and of the "second magnetic field" (representing the components \(F_{i5}\) of the electromagnetic field five-tensor) are introduced. We find eigenspinors of the "second spin" and show that some of them represent the Bell states. In the nonrelativistic limit we derive the Pauli equation for the test fermions on the domain wall which contains an extra term describing the interaction of a spin-1/2 particle with the "second magnetic field"; this allows the possibility of an experimental verification of the existence of extra dimensions. Thick domain wall solutions, Dirac equation, "second spin", "second magnetic field", test fermions, Pauli equation, experimental verification ## I Introduction The study of self-consistent solutions to the Einstein-Dirac equations is a fascinating and very difficult problem. Such solutions are few at present. The main difficulty is that spinors in the Dirac equation have a spin and hence there are no spherically symmetric solutions: they must be at least axially symmetric, which greatly complicates deriving solutions describing a gravitating spinor field. To simplify matters, one can take two spinor fields with oppositely directed spins, as is done, for example, in Refs. [1; 2] for particlelike solutions. In cosmology, classical spinor fields have been considered in Refs. [3; 4; 5; 6]. In astrophysics, stars supported by spinor fields have been studied in Refs. [7; 8; 9; 10; 11; 12]. [Note here that Refs. [10; 12] deal with a single spinor field describing spinning (axially symmetric) configurations.] Of special interest is the hypothesis, called the brane world scenario, according to which we live on a thin leaf (brane) embedded into some multidimensional space (bulk). In constructing brane models, one usually either uses various scalar fields (for a review, see Ref. [13]) or performs modeling within some extended theories of gravity (see the review [14]). However, it is of some interest to employ other fundamental fields as a matter source supporting the brane. In this connection, in the present paper we continue our previous investigations begun in Refs. [15; 16] where thick brane solutions supported by nonlinear spinor fields have been considered (see also the recent work [17] where five-dimensional domain wall solutions supported by a nonlinear spinor field are under investigation). Working within the Einstein-Dirac theory, in the first part of the current paper, we study five-dimensional plane symmetric solutions describing domain walls.
The main distinctive feature of the present study is the use of a nondiagonal metric; this logically follows from the fact that a spinor field must possess a spin whose presence will in general result in the appearance of a nondiagonal component in the metric which describes a rotation. In the second part of the paper, we study the properties and behavior of _test_ fermions living on _any_ domain wall. It will be shown that the spatial part of the five-dimensional spin tensor splits into an ordinary spin and a "second spin" which is represented by the components \(\Sigma_{5i}\) of the five-dimensional spin tensor. In order to study the characteristics of the motion of the test fermions on the domain wall, we will obtain a nonrelativistic approximation for the five-dimensional Dirac equation (the Pauli equation) which will contain extra terms (compared with the Pauli equation in our four-dimensional spacetime). One of these terms involves the component \(A_{5}\) of the five-potential of an electromagnetic field, and another one contains the direct interaction between a spin and a "second magnetic field" which is introduced as a vector \(\mathcal{H}_{i}=F_{i5}\), where \(F_{AB}\) is the electromagnetic field five-tensor. It is remarkable that the presence of such an interaction may lead to experimentally measurable consequences; this in turn may permit one to verify the hypothesis that our world is a domain wall (or a brane) embedded in some multidimensional space (bulk). It is worth mentioning that, strictly speaking, the configurations studied here cannot yet be referred to as branes, since this would require demonstrating that the domain walls under consideration are able to trap zero modes of various matter fields corresponding to particles or fields in the Standard Model. This question must be considered separately. ## II Field equations We work within the five-dimensional theory of gravitation with a source of matter in the form of a nonlinear spinor field \(\psi\) whose Lagrangian is (hereafter we work in units \(8\pi G=c=\hbar=1\)) \[\mathcal{L}_{m}=\frac{\imath}{2}\left(\bar{\psi}\not{\nabla}\psi-\bar{\psi}\overleftarrow{\not{\nabla}}\psi\right)-m\bar{\psi}\psi+V(\bar{\psi},\psi),\] where \(m\) is some parameter and the potential \(V(\bar{\psi},\psi)\) is understood to be so chosen that the condition \(\bar{\psi}\frac{\partial V}{\partial\psi}=2V\) holds. In what follows, we will use the potential \[V=\frac{\lambda}{2}\left(\bar{\psi}\psi\right)^{2},\] where \(\lambda\) is some parameter. The corresponding five-dimensional Einstein and Dirac equations are \[E_{ab}\equiv R_{ab}-\frac{1}{2}\eta_{ab}R+\eta_{ab}\Lambda-T_{ab}=0, \tag{1}\] \[\left(\imath\Gamma^{a}e_{a}{}^{A}D_{A}-m+\frac{\partial V}{\partial\bar{\psi}}\right)\psi=0, \tag{2}\] where \(a,b=\bar{0},\bar{1},\bar{2},\bar{3},\bar{5}\) are the Lorentz indices; \(A=0,1,2,3,5\) is the world index; \(e^{a}{}_{A}\) is the 5-bein; \(\Gamma^{a}\) are the five-dimensional Dirac matrices in flat Minkowski space; \(D_{A}\psi=\left(\partial_{A}-\frac{1}{4}\omega_{A}{}^{ab}\Gamma_{ab}\right)\psi\) is the covariant derivative of the spinor \(\psi\); \(\Gamma_{ab}=\frac{1}{2}\left(\Gamma_{a}\Gamma_{b}-\Gamma_{b}\Gamma_{a}\right)\); \(\not{\nabla}\psi=e^{A}_{a}\Gamma^{a}D_{A}\psi\); \(\Lambda\) is the cosmological constant; \(\eta_{ab}=\text{diag}\left(1,-1,-1,-1,-1\right)\) is the five-dimensional covariant Minkowski metric.
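As a consistency check (a minimal numpy sketch, not part of the derivation), one can verify that the flat-space matrices \(\Gamma^{a}\) written out below satisfy the five-dimensional Clifford algebra \(\{\Gamma^{a},\Gamma^{b}\}=2\eta^{ab}\), which underlies the definitions just given:

```python
import numpy as np

# Pauli matrices and 2x2 blocks.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)

# Flat-space Dirac matrices in the block form adopted in this paper.
G0 = np.block([[Z2, I2], [I2, Z2]])
G123 = [np.block([[Z2, -s], [s, Z2]]) for s in (s1, s2, s3)]
G5 = np.block([[-1j * I2, Z2], [Z2, 1j * I2]])
G = [G0, *G123, G5]

eta = np.diag([1.0, -1.0, -1.0, -1.0, -1.0])

# Check the Clifford algebra {Gamma^a, Gamma^b} = 2 eta^{ab} I_4.
for a in range(5):
    for b in range(5):
        anti = G[a] @ G[b] + G[b] @ G[a]
        assert np.allclose(anti, 2.0 * eta[a, b] * np.eye(4)), (a, b)
print("Five-dimensional Clifford algebra verified.")
```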
According to the textbook [18], the energy-momentum tensor for the spinor field is taken in the form \[T_{a}{}^{A}=\frac{\imath}{2}\bar{\psi}\left(\Gamma^{A}e_{a}{}^{B}+\Gamma_{a}g ^{AB}\right)D_{B}\psi-\frac{\imath}{2}D_{B}\bar{\psi}\left(\Gamma^{A}e_{a}{}^ {B}+\Gamma_{a}g^{AB}\right)\psi-e_{a}{}^{A}\mathcal{L}_{m}, \tag{3}\] where \(\Gamma^{A}=e_{a}{}^{A}\Gamma^{a}\) are the five-dimensional Dirac matrices in a curved spacetime; \(g^{AB}=e_{a}{}^{A}e_{b}{}^{B}\eta^{ab}\) is the five-dimensional contravariant metric tensor; \(\bar{\psi}=\psi^{\dagger}\Gamma^{\bar{0}}\) is the Dirac conjugate spinor; \(D_{A}\bar{\psi}=\bar{\psi}\left(\overleftarrow{\partial}_{A}+\frac{1}{4} \omega_{A}{}^{ab}\Gamma_{ab}\right)\) with \(\bar{\psi}\overleftarrow{\partial}_{A}=\partial_{A}\bar{\psi}\). Notice here that our definition of the energy-momentum tensor (3) has the opposite sign compared with Ref. [18] in order to be consistent with the definitions for \(R_{ab}\) from Ref. [19]. The five-dimensional Dirac matrices in flat Minkowski space are \[\Gamma^{\bar{0}}=\begin{pmatrix}0&\mathbb{I}_{2\times 2}\\ \mathbb{I}_{2\times 2}&0\end{pmatrix},\quad\Gamma^{\bar{i}}=\begin{pmatrix}0&- \sigma_{\bar{i}}\\ \sigma_{\bar{i}}&0\end{pmatrix}\text{ with }\bar{i}=1,2,3,\quad\Gamma^{\bar{5}}= \begin{pmatrix}-\imath\mathbb{I}_{2\times 2}&0\\ 0&\imath\mathbb{I}_{2\times 2}\end{pmatrix},\] where \(\mathbb{I}_{2\times 2}\) is the \(2\times 2\) unit matrix, and \(\sigma_{\bar{i}}\) are the Pauli matrices \[\sigma_{\bar{1}}=\begin{pmatrix}0&1\\ 1&0\end{pmatrix},\quad\sigma_{\bar{2}}=\begin{pmatrix}0&-\imath\\ \imath&0\end{pmatrix},\quad\sigma_{\bar{3}}=\begin{pmatrix}1&0\\ 0&-1\end{pmatrix}.\] We seek a wall-like solution of the system (1) and (2). To do this, let us choose the following orthonormal 5-bein: \[e_{A}^{0}dx^{A}=\chi(r)dt,\quad e_{A}^{1}dx^{A}=\phi(r)dx,\quad e_{A}^{2}dx^{A}= \phi(r)dy,\quad e_{A}^{3}dx^{A}=\xi(r)dt+\eta(r)dz,\quad e_{A}^{5}dx^{A}=dr, \tag{4}\] such that \[ds^{2}\equiv\eta_{ab}\left(e_{A}^{a}dx^{A}\right)\left(e_{B}^{b}dx^{B}\right)= \left(\chi^{2}-\xi^{2}\right)dt^{2}-\phi^{2}\left(dx^{2}+dy^{2}\right)-\eta^{2 }dz^{2}-dr^{2}-2\,\xi\eta\,dt\,dz. \tag{5}\] In turn, for the spinor field, we employ the Ansatz \[\psi=e^{\imath(\Omega t+Mz)}\begin{pmatrix}A(r)\\ 0\\ B(r)\\ 0\end{pmatrix}, \tag{6}\] where \(\Omega\) and \(M\) are some constants. Substituting this Ansatz and 5-bein from (4) in Eqs. 
(1) and (2) and taking into account (3), one can obtain the following set of the Einstein-Dirac equations: \[\frac{\xi^{\prime\prime}}{\xi} =-\frac{5}{12}\frac{\xi^{\prime 2}}{\chi^{2}}-\frac{5}{12}\frac{ \xi^{2}}{\chi^{2}}\frac{\eta^{\prime 2}}{\eta^{2}}+\frac{\eta^{\prime 2}}{\eta^{2}}+ \frac{1}{3}\frac{\phi^{\prime 2}}{\phi^{2}}+\frac{2}{3}\frac{\eta^{\prime}}{ \eta}\frac{\phi^{\prime}}{\phi}-\frac{5}{3}\frac{\eta^{\prime}}{\eta}\frac{ \chi^{\prime}}{\chi}+\frac{2}{3}\frac{\phi^{\prime}}{\phi}\frac{\chi^{\prime} }{\chi}+\frac{5}{6}\frac{\xi\eta^{\prime}\xi^{\prime}}{\eta\chi^{2}}-\frac{ \eta^{\prime}}{\eta}\frac{\xi^{\prime}}{\xi}-2\frac{\xi^{\prime}}{\xi}\frac{ \phi^{\prime}}{\phi}+\frac{\xi^{\prime}}{\xi}\frac{\chi^{\prime}}{\chi}\] \[+\frac{B^{2}\left[\Omega\eta\left(\xi+3\chi\right)-M\left(\xi^{2 }+\xi\chi-3\chi^{2}\right)\right]}{\eta\xi\chi}+\Lambda\Big{\}}, \tag{7}\] \[\frac{\chi^{\prime\prime}}{\chi} =\frac{7}{12}\frac{\xi^{\prime 2}}{\chi^{2}}+\frac{7}{12}\frac{ \xi^{2}}{\chi^{2}}\frac{\eta^{\prime 2}}{\eta^{2}}+\frac{1}{3}\frac{\phi^{\prime 2}}{ \phi^{2}}+\frac{2}{3}\frac{\eta^{\prime}}{\eta}\frac{\phi^{\prime}}{\phi}- \frac{2}{3}\frac{\eta^{\prime}}{\eta}\frac{\chi^{\prime}}{\chi}-\frac{4}{3} \frac{\phi^{\prime}}{\phi}\frac{\chi^{\prime}}{\chi}-\frac{7}{6}\frac{\xi\eta ^{\prime}\xi^{\prime}}{\eta\chi^{2}}\] \[-\frac{1}{2}\frac{AB\left(\eta\xi^{\prime}-\xi\eta^{\prime}\right) }{\eta\chi}-\frac{1}{3}\left\{\frac{A^{2}\left[-M\left(2\xi+\chi\right)+2\eta \left(\Omega+\lambda\chi B^{2}\right)\right]}{\eta\chi}+\frac{B^{2}\left[2 \Omega\eta+M\left(\chi-2\xi\right)\right]}{\eta\chi}-\Lambda\right\},\] (8) \[\frac{\eta^{\prime\prime}}{\eta} =-\frac{5}{12}\frac{\xi^{\prime 2}}{\chi^{2}}-\frac{5}{12}\frac{ \xi^{2}}{\chi^{2}}\frac{\eta^{\prime 2}}{\eta^{2}}+\frac{1}{3}\frac{\phi^{\prime 2}}{ \phi^{2}}-\frac{4}{3}\frac{\eta^{\prime}}{\eta}\frac{\phi^{\prime}}{\phi}- \frac{2}{3}\left(\frac{\eta^{\prime}}{\eta}\frac{\chi^{\prime}}{\chi}-\frac{ \phi^{\prime}}{\phi}\frac{\chi^{\prime}}{\chi}\right)+\frac{5}{6}\frac{\eta^{ \prime}\xi^{\prime}}{\eta\chi^{2}}\] \[-\frac{1}{2}\frac{AB\left(\xi\eta^{\prime}-\eta\xi^{\prime}\right) }{\eta\chi}-\frac{1}{3}\left\{\frac{A^{2}\left[M\left(\xi+2\chi\right)-\eta \left(\Omega-2\lambda\chi B^{2}\right)\right]}{\eta\chi}-\frac{B^{2}\left[ \Omega\eta+M\left(2\chi-\xi\right)\right]}{\eta\chi}-\Lambda\right\},\] (9) \[\frac{\phi^{\prime\prime}}{\phi} =\frac{1}{12}\frac{\xi^{\prime 2}}{\chi^{2}}+\frac{1}{12}\frac{\xi^{2}}{ \chi^{2}}\frac{\eta^{\prime 2}}{\eta^{2}}-\frac{2}{3}\frac{\phi^{\prime 2}}{\phi^{2}}- \frac{1}{3}\left(\frac{\eta^{\prime}}{\eta}\frac{\phi^{\prime}}{\phi}-\frac{ \eta^{\prime}}{\eta}\frac{\chi^{\prime}}{\chi}+\frac{\phi^{\prime}}{\phi}\frac{ \chi^{\prime}}{\chi}\right)-\frac{1}{6}\frac{\xi\eta^{\prime}\xi^{\prime}}{ \eta\chi^{2}}\] \[-\frac{1}{3}\left\{\frac{B^{2}\left[M\left(\xi+\chi\right)-\Omega \eta\right]}{\eta\chi}-\frac{A^{2}\left[M\left(\chi-\xi\right)+\eta\left(\Omega -2\lambda\chi B^{2}\right)\right]}{\eta\chi}-\Lambda\right\},\] (10) \[A^{\prime} =-A\left[\frac{\left(\xi+2\chi\right)\eta^{\prime}}{4\eta\chi}- \frac{\xi^{\prime}}{4\chi}+\frac{\phi^{\prime}}{\phi}+\frac{\chi^{\prime}}{2 \chi}-m\right]-2\lambda A^{2}B+\Omega\frac{B}{\chi}-M\frac{B\left(\xi+\chi\right) }{\eta\chi},\] (11) \[B^{\prime} =B\left[\frac{\left(\xi-2\chi\right)\eta^{\prime}}{4\eta\chi}- \frac{\xi^{\prime}}{4\chi}-\frac{\phi^{\prime}}{\phi}-\frac{\chi^{\prime}}{2 \chi}-m\right]+2\lambda AB^{2}-\Omega\frac{A}{\chi}+M\frac{A\left(\xi-\chi \right)}{\eta\chi}, 
\tag{12}\] where the prime denotes differentiation with respect to the fifth coordinate \(r\). The above gravitational equations represent respectively the following combinations of the Einstein equations (1): \[-\frac{1}{3}E_{\bar{0}\bar{0}}+\frac{2}{3}E_{\bar{1}\bar{1}}-\frac{2}{3}E_{\bar{3}\bar{3}}+E_{\bar{0}\bar{3}}=0,\quad\frac{2}{3}E_{\bar{0}\bar{0}}+\frac{2}{3}E_{\bar{1}\bar{1}}+\frac{1}{3}E_{\bar{3}\bar{3}}=0,\quad-\frac{1}{3}E_{\bar{0}\bar{0}}+\frac{2}{3}E_{\bar{1}\bar{1}}-\frac{2}{3}E_{\bar{3}\bar{3}}=0,\quad-E_{\bar{0}\bar{0}}-E_{\bar{1}\bar{1}}+E_{\bar{3}\bar{3}}=0.\] This enabled us to write them in a form where each equation contains the second derivative of only one metric function. In turn, in addition to these gravitational equations, we also have the constraint equation [the \((\bar{5}\bar{5})\)-component of the Einstein equations (1)] \[\frac{\xi^{2}\eta^{\prime 2}}{4\eta^{2}\chi^{2}}-\frac{\xi\eta^{\prime}\xi^{\prime}}{2\eta\chi^{2}}+\frac{\xi^{\prime 2}}{4\chi^{2}}+\frac{2\eta^{\prime}\phi^{\prime}}{\eta\phi}+\frac{\phi^{\prime 2}}{\phi^{2}}+\frac{\eta^{\prime}\chi^{\prime}}{\eta\chi}+2\frac{\phi^{\prime}\chi^{\prime}}{\phi\chi}\] \[=\Lambda-2mAB+2\lambda A^{2}B^{2}-\Omega\frac{A^{2}+B^{2}}{\chi}+M\frac{A^{2}\left(\xi-\chi\right)+B^{2}\left(\xi+\chi\right)}{\eta\chi}, \tag{13}\] which contains only first derivatives of the metric functions. It will be used below in assigning boundary conditions.

## III Domain wall solutions

The equations (7)-(12) permit a number of solutions given below. To derive these solutions, we will begin from the fact that in the neighbourhood of the domain wall \(r=0\) the solutions can be represented as a power series in \(r\), \[\xi\approx\xi_{0}+\frac{\xi_{2}}{2}r^{2},\quad\chi\approx\chi_{0}+\frac{\chi_{2}}{2}r^{2},\quad\eta\approx\eta_{0}+\frac{\eta_{2}}{2}r^{2},\quad\phi\approx\phi_{0}+\frac{\phi_{2}}{2}r^{2}, \tag{14}\] \[A\approx A_{0}+A_{1}r,\quad B\approx B_{0}+B_{1}r. \tag{15}\] These expansions will be used as boundary conditions in solving the equations (7)-(12).

### The case of \(\Omega=M=0\)

Consider first the simplest case of static solutions with \(\Omega=M=0\). For this case, the Dirac equations (11) and (12) can be integrated analytically in the form \[AB=\frac{C}{\chi\eta\phi^{2}},\] where \(C\) is an integration constant. In this case, it can be shown that the solutions for the metric functions are linearly dependent: \[\xi=\alpha\chi,\quad\eta=\beta\chi,\quad\phi=\gamma\chi, \tag{16}\] where \(\alpha,\beta\), and \(\gamma\) are some constants, and \(0<\alpha<1\) to ensure that the signature of the metric (5) remains unchanged. As a result, the Einstein equations (7)-(10) reduce to the single equation \[\frac{\chi^{\prime\prime}}{\chi}+\left(\frac{\chi^{\prime}}{\chi}\right)^{2}+\frac{2\tilde{C}^{2}\lambda}{3\chi^{8}}-\frac{\Lambda}{3}=0, \tag{17}\] and the constraint equation (13) yields \[6\left(\frac{\chi^{\prime}}{\chi}\right)^{2}+\frac{2\tilde{C}m}{\chi^{4}}-\frac{2\tilde{C}^{2}\lambda}{\chi^{8}}-\Lambda=0, \tag{18}\] where the new constant \(\tilde{C}=C/\left(\beta\gamma^{2}\right)\) is introduced.
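Note that the constraint (18) is a first integral of Eq. (17): differentiating the left-hand side of (18) and using (17), one finds that its derivative equals \(-4(\chi^{\prime}/\chi)\) times the left-hand side itself, so the constraint is preserved along any solution of (17) once it is imposed at \(r=0\). This can be checked numerically; the following minimal sketch (the parameter values \(\Lambda=1\), \(\lambda=-1\), \(m=0.5\), \(\chi_{0}=1\) are illustrative assumptions of this sketch, not values taken from the text) integrates Eq. (17) with \(\tilde{C}\) fixed by (18) at \(r=0\) and monitors the constraint residual:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (assumptions of this sketch): Lambda > 0 and lambda < 0.
Lam, lam, m, chi0 = 1.0, -1.0, 0.5, 1.0

# The constraint (18) at r = 0 (where chi' = 0) is quadratic in Ctil = C/(beta*gamma^2);
# its root reproduces the expression for Ctil quoted in the next paragraph.
Ctil = m * chi0**4 / (2 * lam) * (1 + np.sqrt(1 - 2 * lam * Lam / m**2))

def rhs(r, y):
    chi, dchi = y
    # Eq. (17): chi''/chi + (chi'/chi)**2 + 2*Ctil**2*lam/(3*chi**8) - Lam/3 = 0
    ddchi = -dchi**2 / chi - 2 * Ctil**2 * lam / (3 * chi**7) + (Lam / 3) * chi
    return [dchi, ddchi]

sol = solve_ivp(rhs, [0.0, 3.0], [chi0, 0.0], rtol=1e-10, atol=1e-12)

# Residual of the first integral (18): stays at the level of the integration error.
chi, dchi = sol.y
residual = 6 * (dchi / chi)**2 + 2 * Ctil * m / chi**4 - 2 * Ctil**2 * lam / chi**8 - Lam
print("max |constraint residual| =", np.abs(residual).max())
```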
Taking into account the boundary conditions (14), one can find the following particular solution of the equation (17) that also satisfies the constraint equation (18) (this is achieved by suitably adjusting the integration constant \(\tilde{C}\)): \[\chi=\left\{-\frac{\tilde{C}\,m\left[\sqrt{1-\frac{2\lambda\Lambda}{m^{2}}}\cosh\left(2\sqrt{\frac{2\Lambda}{3}}r\right)-1\right]}{\Lambda}\right\}^{1/4}\quad\text{ with }\quad\tilde{C}=\frac{m\chi_{0}^{4}}{2\lambda}\left(1+\sqrt{1-\frac{2\lambda\Lambda}{m^{2}}}\right).\] It is evident that this solution is symmetric with respect to the domain wall located at \(r=0\). (Notice here that, using this solution, one can find the spinor functions \(A\) and \(B\) in an analytical form; however, since these expressions are too cumbersome, we do not show them here.) The corresponding solutions are exemplified in Fig. 1. It is worth noting here that, in the case under consideration, the solutions for the spinor functions \(A\) and \(B\) are asymmetric, while their product \(AB\equiv\tilde{C}/\chi^{4}\) is symmetric with respect to the domain wall (see the right panel of Fig. 1). Asymptotically (as \(r\rightarrow\pm\infty\)), the spinor functions \(A\) and \(B\) tend to zero, and the metric functions exhibit an exponential growth. Accordingly, the scalar curvature \[R =\frac{8}{3}\Lambda-\frac{16}{3}\lambda A^{2}B^{2}+\frac{2}{3}\frac{\Omega}{\chi}\left(A^{2}+B^{2}\right)-\frac{2}{3}\frac{M}{\eta\chi}\left[A^{2}\left(\xi-\chi\right)+B^{2}\left(\xi+\chi\right)\right]+\frac{1}{6}\frac{\xi^{2}\eta^{\prime 2}}{\eta^{2}\chi^{2}}+\frac{1}{6}\frac{\xi^{\prime 2}}{\chi^{2}}+\frac{2}{3}\frac{\phi^{\prime 2}}{\phi^{2}}\] \[+\left(\frac{4}{3}\frac{\phi^{\prime}}{\phi}-\frac{1}{3}\frac{\xi\xi^{\prime}}{\chi^{2}}\right)\frac{\eta^{\prime}}{\eta}+\left(\frac{4}{3}\frac{\phi^{\prime}}{\phi}+\frac{2}{3}\frac{\eta^{\prime}}{\eta}\right)\frac{\chi^{\prime}}{\chi}\] behaves asymptotically as \(R\rightarrow(10/3)\Lambda\); for \(\Lambda>0\) under consideration, this corresponds to an asymptotically anti-de Sitter spacetime. Particularly simple is the case of a linear spinor field (i.e., when \(\lambda=0\)). For this case, Eq. (17) has the following particular solution: \[\chi=\chi_{0}\sqrt{\cosh\left(\sqrt{\frac{2\Lambda}{3}}\,r\right)}.\] Substituting this in the Dirac equations (11) and (12) (which are now decoupled for the functions \(A\) and \(B\)) and taking into account the need to satisfy the constraint equation (13), one has the following solutions for the spinor fields (asymmetric with respect to the domain wall): \[A=A_{0}e^{mr}\operatorname{sech}\Bigg{(}\sqrt{\frac{2\Lambda}{3}}r\Bigg{)},\quad B=\frac{\Lambda}{2A_{0}m}e^{-mr}\operatorname{sech}\Bigg{(}\sqrt{\frac{2\Lambda}{3}}r\Bigg{)}.\] Asymptotically, on both sides of the domain wall, the spinor fields behave as \[\text{as}\quad r\to\pm\infty:\quad A\to 2A_{0}\exp\Bigg{[}\mp\Bigg{(}\sqrt{\frac{2\Lambda}{3}}\mp m\Bigg{)}\,r\Bigg{]},\quad B\to\frac{\Lambda}{A_{0}m}\exp\Bigg{[}\mp\Bigg{(}\sqrt{\frac{2\Lambda}{3}}\pm m\Bigg{)}\,r\Bigg{]}.\] Hence we see that regular solutions for the spinor fields are possible only if \[\sqrt{\frac{2\Lambda}{3}}-m>0.\]

### The case of nonzero \(\Omega\) and \(M\)

When the free parameters \(\Omega\) and \(M\) are nonzero, the solutions for the metric functions are no longer symmetric with respect to the domain wall. In this case regular solutions exist both for simultaneously nonzero \(\Omega\) and \(M\) and in the case when one of these parameters is zero.
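The closed-form expressions of the linear case can be checked directly before turning to numerics for the general case. Under the ansatz (16) with \(\Omega=M=\lambda=0\), Eq. (11) reduces to \(A^{\prime}=-A\left(2\chi^{\prime}/\chi-m\right)\), and the constraint (18) involves \(\tilde{C}/\chi^{4}=AB\). A minimal sketch follows; the values \(\Lambda=1\), \(m=0.5\), \(A_{0}=\chi_{0}=1\) are illustrative assumptions of this sketch (they satisfy \(\sqrt{2\Lambda/3}-m>0\)):

```python
import numpy as np

# Linear spinor field (lambda = 0): check the closed-form chi, A, B of Sec. III.A.
Lam, m, A0, chi0 = 1.0, 0.5, 1.0, 1.0
k = np.sqrt(2 * Lam / 3)                       # note k - m > 0 for regularity

r = np.linspace(-5, 5, 4001)
chi = chi0 * np.sqrt(np.cosh(k * r))
A = A0 * np.exp(m * r) / np.cosh(k * r)
B = Lam / (2 * A0 * m) * np.exp(-m * r) / np.cosh(k * r)

dr = r[1] - r[0]
dA, dchi = np.gradient(A, dr), np.gradient(chi, dr)

# Reduced Dirac equation A' = -A*(2*chi'/chi - m) under the ansatz (16)
dirac_res = dA + A * (2 * dchi / chi - m)
# Constraint (18) with lambda = 0 and Ctil/chi**4 = A*B
constr_res = 6 * (dchi / chi)**2 + 2 * m * A * B - Lam
print(np.abs(dirac_res).max(), np.abs(constr_res).max())  # both ~ 0 up to finite differences
```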
Using the boundary conditions (14) and (15), we obtain numerical solutions to the equations (7)-(12), which are exemplified in Fig. 2. In the case of a linear spinor field (i.e., when \(\lambda=0\)), there are also regular solutions, shown in Fig. 3. It is seen from the graphs given in Figs. 2 and 3 that, as in the case of \(\Omega=M=0\) considered above, we deal with an asymptotically anti-de Sitter spacetime.

### The case without spinor field

In the absence of the spinor field (but when \(\Lambda\) is present) there are no regular solutions to Eqs. (7)-(12).

### Three-dimensional energy density of the system

In this subsection we consider the question of the energy per unit volume of the domain wall, which is defined as \[E_{\text{3D}}=\int_{-\infty}^{\infty}T^{t}_{\ t}\sqrt{{}^{5}g}\,dr=\frac{1}{2}\int_{-\infty}^{\infty}\frac{\phi^{2}}{\chi}\Big{\{}B^{2}\left[M\xi\left(\xi+\chi\right)-\Omega\eta\left(\xi+2\chi\right)\right] \tag{19}\] \[+A^{2}\left[M\xi\left(\chi-\xi\right)+\eta\left(\Omega\xi-2\Omega\chi+4\lambda B^{2}\chi^{2}\right)\right]+AB\,\eta\left(\xi\chi^{\prime}-\chi\xi^{\prime}\right)\Big{\}}\,dr,\] where \({}^{5}g=\chi^{2}\eta^{2}\phi^{4}\) is the determinant of the metric (5). It is evident that for a physically realistic system this quantity must be finite. Taking into account that asymptotically the metric functions diverge exponentially (an anti-de Sitter spacetime) and the spinor functions decrease exponentially, it is necessary to follow the behavior of the integrand depending on the values of the free parameters of the system. To do this, let us write down the corresponding asymptotic (as \(r\rightarrow\infty\)) expressions for the metric functions and spinor fields: \[\xi\rightarrow\xi_{\infty}e^{\sqrt{\Lambda/6}\,r},\quad\chi\rightarrow\chi_{\infty}e^{\sqrt{\Lambda/6}\,r},\quad\eta\rightarrow\eta_{\infty}e^{\sqrt{\Lambda/6}\,r},\quad\phi\rightarrow\phi_{\infty}e^{\sqrt{\Lambda/6}\,r}, \tag{20}\] \[A\to C_{1}e^{-\left(m+\sqrt{3\Lambda/2}\right)r}+C_{2}e^{-\left(\sqrt{2\Lambda/3}-m\right)r},\quad B\rightarrow\tilde{C}_{1}e^{-\left(m+\sqrt{2\Lambda/3}\right)r}, \tag{21}\] where \(\xi_{\infty},\chi_{\infty},\eta_{\infty},\phi_{\infty}\) are some constants, \(C_{1}\) and \(C_{2}\) are integration constants, and \[\tilde{C}_{1}=C_{1}\frac{\left(12m+\sqrt{6\Lambda}\right)\eta_{\infty}\xi_{\infty}}{6\left[\Omega\eta_{\infty}-M\left(\xi_{\infty}+\chi_{\infty}\right)\right]}. \tag{22}\] It is seen from Eq. (21) that in the general case an asymptotically decaying solution for the function \(A\) can exist only when \[0<m<\sqrt{\frac{2}{3}\Lambda}. \tag{23}\] Next, substituting (20) and (21) in Eq. (19), we have the following asymptotic (as \(r\rightarrow\infty\)) form for the expression in the integrand: \[\frac{1}{2}\frac{\phi_{\infty}^{2}}{\chi_{\infty}}\Big{\{}C_{2}^{2}\left[\Omega\eta_{\infty}\left(\xi_{\infty}-2\chi_{\infty}\right)+M\xi_{\infty}\left(\chi_{\infty}-\xi_{\infty}\right)\right]e^{-\left(\sqrt{\Lambda/6}-2m\right)r}\] \[-\tilde{C}_{1}^{2}\left[\Omega\eta_{\infty}\left(\xi_{\infty}+2\chi_{\infty}\right)-M\xi_{\infty}\left(\chi_{\infty}+\xi_{\infty}\right)\right]e^{-\left(\sqrt{\Lambda/6}+2m\right)r}\Big{\}}. \tag{24}\] (A similar expression can also be obtained for the case of \(r\rightarrow-\infty\).)
Hence we see that to obtain an asymptotically decaying expression, it is necessary that \[m<\frac{1}{2}\sqrt{\frac{\Lambda}{6}}.\] Since this condition is stronger than (23), the resulting restriction on the system parameters is therefore \[0<m<\frac{1}{2}\sqrt{\frac{\Lambda}{6}}.\] When this condition is fulfilled, the expression in the integrand of Eq. (19) will decrease asymptotically, and the corresponding integral will be finite. One further remark may be made to complete this subsection. The formulas obtained above are invalid for the case considered in Sec. III.1 where \(\Omega=M=0\) [cf. Eqs. (22) and (24)]. However, proceeding in a similar manner, one can show that the energy of the configuration from Sec. III.1 is also finite.

### Angular momentum density of the domain wall

According to the definition of the angular momentum five-tensor, \[M_{AB}=x_{A}T^{t}_{\ B}-x_{B}T^{t}_{\ A},\] one can introduce the angular momentum density three-vector \[L_{i}=\frac{1}{2}\epsilon_{ijk}M^{jk},\] as well as the "second angular momentum density three-vector" \[\mathcal{L}_{i}=M_{i5}.\] Using the components of the energy-momentum tensor (A1)-(A6) given in Appendix A, one can find the following expressions for the components of the angular momentum density \(L_{x,y}\) and the "second angular momentum density" \(\mathcal{L}_{z}\): \[L_{x,y} =\begin{pmatrix}-y\\ x\end{pmatrix}T^{t}_{\ z}, \tag{25}\] \[\mathcal{L}_{z} =-\,rT^{t}_{\ z}. \tag{26}\] In cylindrical coordinates, Eq. (25) yields \[L_{\varphi}=\rho T_{\ z}^{t}\quad\text{with}\quad\rho=\sqrt{x^{2}+y^{2}}.\] The typical spatial distributions of the component \(T_{\ z}^{t}\) [which is given by the expression (A2)] appearing in Eqs. (25) and (26) are plotted in Fig. 4.

Figure 4: The distributions of the component \(T_{\ z}^{t}\) for the solutions represented in Fig. 2 (for the nonlinear spinor field; shown by the solid line) and in Fig. 3 (for the linear spinor field; shown by the dashed line).

Notice that for the case considered in Sec. III.1 where \(\Omega=M=0\) and the metric functions are linearly dependent, the expression (A2) is identically zero. The physical reason for the appearance of the angular momenta \(L_{\varphi}\) and \(\mathcal{L}_{z}\) is the existence of the current density \(j^{A}=\bar{\psi}\Gamma^{A}\psi\), which has the following nonvanishing components: \[j^{t}=\frac{1}{\chi}\left(A^{2}+B^{2}\right),\quad j^{z}=\frac{A^{2}}{\eta}\left(1-\frac{\xi}{\chi}\right)-\frac{B^{2}}{\eta}\left(1+\frac{\xi}{\chi}\right).\]

## IV Spin of test fermions on the domain wall

In this section we consider the properties of spin operators of test fermions confined on the domain wall. In order to distinguish these fermions from the spinor field \(\psi\) supporting the domain wall, we denote them as \(\chi\). (Note that in this section we use the letters \(\chi,\eta\), and \(\phi\) to denote the spinor functions. We have used the same symbols above for the metric functions, but this should not lead to misunderstanding.) The consideration given below can also be applied to any domain wall, not necessarily to the domain walls obtained above. In a four-dimensional spacetime, three-dimensional spin operators can be represented by one three-vector \(\Sigma_{i}\) dual to the \(i,j=1,2,3\) components of the generators of the Lorentz algebra, \[\Sigma_{i}=\frac{1}{2}\epsilon_{ijk}\sigma_{jk}=\frac{\imath}{4}\epsilon_{ijk}\left[\gamma_{j},\gamma_{k}\right].\]
This construction can be adapted to a five-dimensional spacetime where the domain wall under consideration is embedded. In the five-dimensional spacetime, the generators of the Lorentz algebra are \[\Sigma^{ab}=\frac{\imath}{4}\left[\gamma^{a},\gamma^{b}\right]\quad\text{with}\quad a,b=0,1,2,3,5.\] Generalising the construction of the spin in a four-dimensional spacetime, the spin operators in a five-dimensional spacetime will be the matrices \[\Sigma^{\alpha\beta}=\frac{\imath}{4}\left[\gamma^{\alpha},\gamma^{\beta}\right]\quad\text{with}\quad\alpha,\beta=1,2,3,5.\] Unlike in the four-dimensional case, these matrices cannot be dualized to a vector; for this reason, the spin operators now form not a vector but a spatial tensor \(\Sigma^{\alpha\beta}\). Being antisymmetric, this tensor is equivalent to two three-vectors \[\Sigma_{i}=\frac{1}{2}\epsilon_{ijk}\Sigma_{jk},\quad\mathcal{S}_{i}=\Sigma_{5i}\] with \(i,j,k=1,2,3\). The operators \(\Sigma_{i}\) have the same form as the standard spin operators in a four-dimensional spacetime. In turn, the operators \(\mathcal{S}_{i}\) have the form \[\mathcal{S}_{i}=\frac{1}{2}\begin{pmatrix}0&\sigma_{i}\\ \sigma_{i}&0\end{pmatrix}.\] The commutation relations for these operators are \[[\Sigma_{i},\Sigma_{j}]= \imath\epsilon_{ijk}\Sigma_{k}, \tag{27}\] \[[\mathcal{S}_{i},\mathcal{S}_{j}]= \imath\epsilon_{ijk}\Sigma_{k},\] (28) \[[\Sigma_{i},\mathcal{S}_{j}]= \imath\epsilon_{ijk}\mathcal{S}_{k}. \tag{29}\] The operators \(\Sigma_{i}\) are the standard operators of the projection of the spin on the \(x^{i}\)-axis. Let us call the operators \(\mathcal{S}_{i}\) the operators of the projection of the "second spin" on the \(x^{i}\)-axis. It follows from the commutation relation (28) that the projections of the "second spin" on the \(x^{i}\) and \(x^{j}\) axes (with \(i\neq j\)) cannot be measured simultaneously. Also, it is seen from Eq. (29) that eigenvalues of the operators \(\Sigma_{i}\) and \(\mathcal{S}_{j}\) can be measured simultaneously only if \(i=j\). The eigenvalue problem for the operator of the "second spin" is \[\mathcal{S}_{i}\chi_{i,n,\pm}=s_{i,\pm}\chi_{i,n,\pm},\] where \(s_{i,\pm}\) are eigenvalues and \(\chi_{i,n,\pm}\) are eigenspinors of the operator of the "second spin"; the index \(i\) describes the components of the "second spin" along the \(x,y,z\) axes, the index \(n\) corresponds to the number of the eigenspinor, and \(\pm\) corresponds to the projection of the "second spin" on the \(i\)-axis. Solving this problem, we get the following eigenvalues and eigenspinors: \[s_{x,-}= -\frac{1}{2},\chi_{x,1,-}^{T}=\left\{-1,0,0,1\right\},\chi_{x,2,-}^{T}=\left\{0,-1,1,0\right\}, \tag{30}\] \[s_{x,+}= +\frac{1}{2},\chi_{x,1,+}^{T}=\left\{1,0,0,1\right\},\chi_{x,2,+}^{T}=\left\{0,1,1,0\right\},\] (31) \[s_{y,-}= -\frac{1}{2},\chi_{y,1,-}^{T}=\left\{\imath,0,0,1\right\},\chi_{y,2,-}^{T}=\left\{0,-\imath,1,0\right\},\] (32) \[s_{y,+}= +\frac{1}{2},\chi_{y,1,+}^{T}=\left\{-\imath,0,0,1\right\},\chi_{y,2,+}^{T}=\left\{0,\imath,1,0\right\},\] (33) \[s_{z,-}= -\frac{1}{2},\chi_{z,1,-}^{T}=\left\{0,1,0,1\right\},\chi_{z,2,-}^{T}=\left\{-1,0,1,0\right\},\] (34) \[s_{z,+}= +\frac{1}{2},\chi_{z,1,+}^{T}=\left\{0,-1,0,1\right\},\chi_{z,2,+}^{T}=\left\{1,0,1,0\right\}.
\tag{35}\] Note that the eigenspinors are orthogonal to each other (for brevity, we omit the indices \(i\) and \(\pm\)): \[\bar{\chi}_{1}\chi_{2}=0.\]

### Eigenspinors of the operator of the "second spin" and the Bell states

In this subsection we show that some eigenspinors of the operator of the "second spin" (30)-(33) are the Bell states. To do this, let us introduce quantum states \(\left|0\right\rangle\) and \(\left|1\right\rangle\) for some quantum system that can take only two possible states. For example, this can be the projection of the spin on the \(z\)-axis. These quantum states can be written in the form of the following Weyl spinors: \[\left|0\right\rangle=\begin{pmatrix}0\\ 1\end{pmatrix},\left|1\right\rangle=\begin{pmatrix}1\\ 0\end{pmatrix}.\] Then, apart from a normalization factor, the eigenspinors (30)-(35) can be represented as \[\chi_{x,1,\pm}= \left|0\right\rangle\otimes\left|0\right\rangle\pm\left|1\right\rangle\otimes\left|1\right\rangle, \tag{36}\] \[\chi_{x,2,\pm}= \left|0\right\rangle\otimes\left|1\right\rangle\pm\left|1\right\rangle\otimes\left|0\right\rangle,\] (37) \[\chi_{y,1,\pm}= \left|0\right\rangle\otimes\left|0\right\rangle\mp\imath\left|1\right\rangle\otimes\left|1\right\rangle,\] (38) \[\chi_{y,2,\pm}= \left|0\right\rangle\otimes\left|1\right\rangle\pm\imath\left|1\right\rangle\otimes\left|0\right\rangle,\] (39) \[\chi_{z,1,\pm}= \mp\left|1\right\rangle\otimes\left|0\right\rangle+\left|0\right\rangle\otimes\left|0\right\rangle=\left(\mp\left|1\right\rangle+\left|0\right\rangle\right)\otimes\left|0\right\rangle,\] (40) \[\chi_{z,2,\pm}= \pm\left|1\right\rangle\otimes\left|1\right\rangle+\left|0\right\rangle\otimes\left|1\right\rangle=\left(\pm\left|1\right\rangle+\left|0\right\rangle\right)\otimes\left|1\right\rangle. \tag{41}\] The quantum states (36)-(39) are called the Bell states or sometimes the EPR states or EPR pairs (after Bell or Einstein, Podolsky, and Rosen, who first pointed out the strange properties of such states). Strictly speaking, the entangled states (36)-(39) differ from the Bell states by normalization factors on the right-hand sides of Eqs. (36)-(39). In quantum computation, the Bell states describe an entangled pair of qubits. The difference of the quantum states (36)-(39) from those given by Eqs. (40)-(41) is that the former cannot be represented as a tensor product of states, while the latter, as one sees from the right-hand sides of Eqs. (40)-(41), can be represented as tensor products of some quantum states. It must be mentioned here that all the quantum states of the spin \(\Sigma_{i}\) can be represented as tensor products: \[\chi_{x,\pm}= \left(\left|0\right\rangle+\left|1\right\rangle\right)\otimes\left(\left|0\right\rangle\pm\left|1\right\rangle\right),\quad\chi_{y,\pm}= \left(\left|0\right\rangle+\left|1\right\rangle\right)\otimes\left(\left|0\right\rangle\pm\imath\left|1\right\rangle\right),\] \[\chi_{z,+}= \left(\left|0\right\rangle+\left|1\right\rangle\right)\otimes\left|1\right\rangle,\quad\quad\quad\quad\chi_{z,-}=\left(\left|0\right\rangle+\left|1\right\rangle\right)\otimes\left|0\right\rangle.\]
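The algebra (27)-(29), the eigensystem (30)-(35), and the Clifford property of the flat five-dimensional Dirac matrices introduced in Sec. II can all be verified directly; a minimal numpy sketch (the operators are entered in their explicit matrix forms given above):

```python
import numpy as np
from itertools import product

sig = [np.array([[0, 1], [1, 0]], complex),
       np.array([[0, -1j], [1j, 0]], complex),
       np.array([[1, 0], [0, -1]], complex)]
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), complex)

# Flat five-dimensional Dirac matrices in the representation of Sec. II
G = [np.block([[Z2, I2], [I2, Z2]])]                      # Gamma^0
G += [np.block([[Z2, -s], [s, Z2]]) for s in sig]         # Gamma^1, Gamma^2, Gamma^3
G += [np.block([[-1j * I2, Z2], [Z2, 1j * I2]])]          # Gamma^5
eta = np.diag([1.0, -1.0, -1.0, -1.0, -1.0])

# Clifford algebra: {Gamma^a, Gamma^b} = 2 eta^{ab} * identity
for a, b in product(range(5), repeat=2):
    assert np.allclose(G[a] @ G[b] + G[b] @ G[a], 2 * eta[a, b] * np.eye(4))

# Ordinary spin and "second spin" in their explicit forms
Sigma = [0.5 * np.block([[s, Z2], [Z2, s]]) for s in sig]
S = [0.5 * np.block([[Z2, s], [s, Z2]]) for s in sig]

eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1.0, -1.0

comm = lambda a, b: a @ b - b @ a
for i, j in product(range(3), repeat=2):
    rhs_Sigma = sum(1j * eps[i, j, k] * Sigma[k] for k in range(3))
    rhs_S = sum(1j * eps[i, j, k] * S[k] for k in range(3))
    assert np.allclose(comm(Sigma[i], Sigma[j]), rhs_Sigma)   # Eq. (27)
    assert np.allclose(comm(S[i], S[j]), rhs_Sigma)           # Eq. (28)
    assert np.allclose(comm(Sigma[i], S[j]), rhs_S)           # Eq. (29)

# Eigenvalues of the "second spin" are +-1/2, and chi_{x,1,+} = {1,0,0,1}
# is an eigenspinor which, up to normalization, is the Bell state (36).
print(np.round(np.linalg.eigvalsh(S[0]), 6))
v = np.array([1, 0, 0, 1], complex)
print(np.allclose(S[0] @ v, 0.5 * v))
```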
### Nonrelativistic limit for the test fermions

In order to understand what extra features (compared with the four-dimensional case) appear for the test fermions living on the domain wall embedded in the five-dimensional bulk, we consider the nonrelativistic limit of the five-dimensional Dirac equation, i.e., the Pauli equation in the five-dimensional bulk spacetime. In such a case, we rewrite the Dirac equation for a flat five-dimensional spacetime and take into account the interaction with a gravitational field, which is now described by the potential energy \(m\Phi\), where \(m\) is the mass of a particle and \(\Phi\) is the Newtonian gravitational potential. In order to obtain the nonrelativistic limit of the five-dimensional Dirac equation (the Pauli equation), we proceed in the standard way and write the Dirac equation in the following form (for clarity, in this subsection we restore \(\hbar\) and \(c\) in the equations): \[\imath\hbar\partial_{t}\begin{pmatrix}\phi\\ \eta\end{pmatrix}=c\begin{pmatrix}\sigma_{i}\hat{\Pi}_{i}\eta\\ \sigma_{i}\hat{\Pi}_{i}\phi\end{pmatrix}+\imath c\hat{\Pi}_{5}\begin{pmatrix}\eta\\ -\phi\end{pmatrix}+mc^{2}\begin{pmatrix}\phi\\ -\eta\end{pmatrix}+\left(eA_{0}+m\Phi\right)\begin{pmatrix}\phi\\ \eta\end{pmatrix}, \tag{42}\] where \(\hat{\Pi}_{i}=\hat{p}_{i}-(e/c)A_{i}\), \(\hat{p}_{i}\) being the three-dimensional momentum operator, \(\hat{\Pi}_{5}=\hat{p}_{5}-(e/c)A_{5}\), and \(A_{B}=\left(A_{0},A_{i},A_{5}\right)\) is the electromagnetic five-potential. Also, as in the four-dimensional case, the four-component spinor \(\chi\) is again decomposed into two two-component spinors, \[\chi=\begin{pmatrix}\phi\\ \eta\end{pmatrix}.\] The relativistic energy of a particle described by Eq. (42) also contains its rest energy \(mc^{2}\). In arriving at the nonrelativistic approximation, this energy must be excluded by introducing a new function \(\chi\rightarrow\chi e^{-\imath mc^{2}t/\hbar}\). Next, using the same nonrelativistic approximations as those employed in the four-dimensional case, \[\left|\imath\hbar\dot{\eta}\right|,\left|eA_{0}\eta\right|,\text{ and }\left|m\Phi\eta\right|\ll\left|mc^{2}\eta\right|,\] one can obtain from the lower component of Eq. (42) the relation \[\eta=\frac{\sigma_{i}\hat{\Pi}_{i}-\imath\hat{\Pi}_{5}}{2mc}\phi.\] In this way, after substituting this into the upper component of Eq. (42), we finally arrive at the required Pauli equation for the five-dimensional spacetime: \[\imath\hbar\dot{\phi}=\left[\frac{1}{2m}\left(\hat{\vec{p}}-\frac{e}{c}\vec{A}\right)^{2}+\frac{1}{2m}\left(\hat{p}_{5}-\frac{e}{c}A_{5}\right)^{2}-\frac{e\hbar}{2mc}\vec{\sigma}\cdot\left(\vec{H}+\vec{\mathcal{H}}\right)+\left(eA_{0}+m\Phi\right)\right]\phi, \tag{43}\] where the vector \(\vec{H}\equiv H_{i}=(1/2)\epsilon_{ijk}F^{jk}\) is the magnetic field and the vector \(\vec{\mathcal{H}}\equiv\mathcal{H}_{i}=F_{i5}\) is the "second magnetic field". The physical meaning of the new terms appearing in Eq. (43) (compared with the four-dimensional Pauli equation) is as follows. Because of the presence of the second term in the square brackets, there are quantum corrections to the motion of a test fermion located on the domain wall embedded in the bulk. These corrections are not associated with the spin of the fermion. In turn, the presence of the term with \(\vec{\mathcal{H}}\) results in quantum corrections to the motion of the particle which are due to the presence of the spin. This term describes the potential energy of a particle with a spin in the "second magnetic field." Hence, if the gradient of this field is nonzero, the trajectories of particles with different projections of the spin on some direction will differ from one another in the presence of the "second magnetic field" (the Stern-Gerlach effect).
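A toy numerical illustration of this splitting may be helpful. Everything below is an assumption of the sketch: arbitrary units, a linear profile \(\mathcal{H}(x)=gx\) for the "second magnetic field," and the prefactor \(e\hbar/2mc\) of Eq. (43) lumped into a single coupling constant.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy Stern-Gerlach setup (assumptions of this sketch, arbitrary units):
# a linear "second magnetic field" H_cal(x) = g*x along a fixed direction gives
# the potential energy U = -coupling*sigma*H_cal(x) for a spin projection
# sigma = +-1, hence a constant force F = +coupling*sigma*g.
g, coupling, mass = 1.0, 1.0, 1.0

def trajectory(sigma, t_max=2.0):
    acc = coupling * sigma * g / mass
    return solve_ivp(lambda t, y: [y[1], acc], [0.0, t_max], [0.0, 0.0],
                     dense_output=True)

t = np.linspace(0.0, 2.0, 5)
split = trajectory(+1).sol(t)[0] - trajectory(-1).sol(t)[0]
print(split)   # the two spin projections separate as coupling*g*t**2/mass
```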
Thus, if the motion of a test fermion takes place on the domain wall, only the "second magnetic field" \(\vec{\mathcal{H}}\) in the five-dimensional Pauli equation (43) will lead to the corrections associated with the fact that our world is embedded as a domain wall in the five-dimensional spacetime. _This correction allows the possibility of an experimental verification of the existence of extra dimensions._

## V Conclusion

In the first part of the present paper, we have considered the self-consistent set of the Einstein-Dirac equations and shown that there exist plane symmetric solutions for one gravitating classical spinor field. These solutions can be treated as describing a thick domain wall embedded in a five-dimensional spacetime. The second part of the paper is devoted to exploring the properties and behavior of test fermions localized on _any_ domain wall. In a five-dimensional spacetime, we cannot introduce the concept of a spin vector since the procedure of dualization of the spatial part of the spin tensor results in a tensor, and not in a vector. Nevertheless, the spatial part of the five-dimensional spin tensor can be represented as some combination of two three-vectors. The first of these vectors corresponds to an ordinary spin, and we referred to the second one as the "second spin". It was shown that the eigenvalues of the operator of the "second spin" are also \(\pm\hbar/2\), and some eigenspinors are the Bell states. To study the behavior of test fermions, we have obtained a nonrelativistic approximation for the five-dimensional Dirac equation, which is a natural generalization of the Pauli equation, now containing additional terms that describe (i) the presence of the component \(A_{5}\) of the electromagnetic potential; and (ii) the interaction of a spin-1/2 particle with the components \(F_{i5}\) of the electromagnetic field five-tensor, which we called the "second magnetic field." The presence of the term with the component \(A_{5}\) of the electromagnetic potential must lead to the appearance of the Aharonov-Bohm effect; however, to register this effect, it is necessary to go beyond the domain wall, i.e., one has to have a solenoid directed along the fifth dimension. By contrast, the effects associated with the presence of the "second magnetic field" can be measured directly, since this "second magnetic field" interacts with the spin directly. Summarizing the results obtained:

* We have found regular symmetric and asymmetric thick domain wall solutions supported by one classical spinor field. In doing so, we have examined both linear and nonlinear spinor fields by choosing different values of the free parameters of the system. Note that, as the numerical calculations indicate, for the regular solutions to exist, the presence of the cosmological constant is necessary.
* We have studied the properties of these domain wall solutions, which depend on the presence or absence of the nonlinearity and mass term of the spinor field.
* We have demonstrated that there are nonvanishing components \(L_{\varphi}\) and \(\mathcal{L}_{z}\) of the angular momentum density associated with the current of the spinor field. The presence of the component \(L_{\varphi}\) allows the possibility of an experimental verification of the existence of the domain wall by using the spin-orbit interaction.
* The behavior and properties of _test_ fermions living on _any_ domain wall have been investigated:
  * It has been shown that the spatial part of the five-dimensional spin tensor can be represented as a combination of two vectors, the first of which is an ordinary spin, and the second vector can be called the "second spin."
  * The eigenvalues (\(\pm\hbar/2\)) and eigenspinors for the "second spin" have been found. It has been demonstrated that some eigenspinors are the Bell states.
  * The nonrelativistic approximation for the five-dimensional Dirac equation (the Pauli equation) has been derived.
  * It has been shown that the Pauli equation obtained contains extra terms associated with the fact that the domain wall is embedded in a five-dimensional bulk. One of these terms is related to the appearance of the component \(A_{5}\) in the Pauli equation, and the second one is due to the fact that the spin interacts with the components \(F_{i5}\) of the electromagnetic field five-tensor (one can call it the "second magnetic field"). It has been demonstrated that the interaction of the "second magnetic field" with the spin results in observational effects which enable one to verify experimentally the hypothesis of the existence of extra dimensions.

###### Acknowledgements.

This research has been funded by the Science Committee of the Ministry of Science and Higher Education of the Republic of Kazakhstan (Grant No. AP14869140, "The study of QCD effects in non-QCD theories"). We are also grateful to V. Ivashchuk for fruitful discussions.

## Appendix A The components of the energy-momentum tensor

For the metric (5) and Ansatz (6), the mixed energy-momentum tensor (3) yields the following nonzero components: \[T^{t}_{\ t}= 2\lambda A^{2}B^{2}+\frac{A^{2}}{2\chi}\left[\Omega\left(\frac{\xi}{\chi}-2\right)+M\left(\frac{\xi}{\eta}-\frac{\xi^{2}}{\eta\chi}\right)\right]+\frac{AB}{2}\left(\frac{\xi\chi^{\prime}}{\chi^{2}}-\frac{\xi^{\prime}}{\chi}\right)+\frac{B^{2}}{2\chi}\left[M\left(\frac{\xi}{\eta}+\frac{\xi^{2}}{\eta\chi}\right)-\Omega\left(\frac{\xi}{\chi}+2\right)\right], \tag{A1}\] \[T^{t}_{\ z}= \frac{A^{2}}{2\chi}\left[\Omega\frac{\eta}{\chi}-M\left(1+\frac{\xi}{\chi}\right)\right]+AB\frac{\eta}{2\chi}\left(\frac{\chi^{\prime}}{\chi}-\frac{\eta^{\prime}}{\eta}\right)-\frac{B^{2}}{2\chi}\left[\Omega\frac{\eta}{\chi}+M\left(1-\frac{\xi}{\chi}\right)\right],\] (A2) \[T^{x}_{\ x}= T^{y}_{\ y}=2\lambda A^{2}B^{2},\] (A3) \[T^{z}_{\ t}= A^{2}\frac{(\xi-\chi)^{2}}{2\eta^{2}\chi}\left[M\left(1+\frac{\xi}{\chi}\right)-\Omega\frac{\eta}{\chi}\right]+AB\frac{\xi}{2\eta\chi}\left[\eta^{\prime}\left(\frac{\chi^{2}}{\eta\xi}-\frac{\xi}{\eta}\right)-\chi^{\prime}\left(\frac{\xi}{\chi}+\frac{\chi}{\xi}\right)+2\xi^{\prime}\right]+B^{2}\frac{(\xi+\chi)^{2}}{2\eta^{2}\chi}\left[M\left(1-\frac{\xi}{\chi}\right)+\Omega\frac{\eta}{\chi}\right],\] (A4) \[T^{z}_{\ z}= 2\lambda A^{2}B^{2}+\frac{A^{2}}{2\eta}\left[M\left(\frac{\xi^{2}}{\chi^{2}}+\frac{\xi}{\chi}-2\right)-\Omega\frac{\eta\xi}{\chi^{2}}\right]+AB\frac{\xi}{2\chi}\left(\frac{\xi^{\prime}}{\xi}-\frac{\chi^{\prime}}{\chi}\right)+\frac{B^{2}}{2\eta}\left[M\left(-\frac{\xi^{2}}{\chi^{2}}+\frac{\xi}{\chi}+2\right)+\Omega\frac{\eta\xi}{\chi^{2}}\right],\] (A5) \[T^{r}_{\ r}= 2\lambda A^{2}B^{2}+BA^{\prime}-AB^{\prime}+AB\frac{\xi}{2\chi}\left(\frac{\eta^{\prime}}{\eta}-\frac{\xi^{\prime}}{\xi}\right). \tag{A6}\]
2305.06878
Estimating many properties of a quantum state via quantum reservoir processing
Estimating properties of a quantum state is an indispensable task in various applications of quantum information processing. To predict properties in the post-processing stage, it is inherent to first perceive the quantum state with a measurement protocol and store the information acquired. In this work, we propose a general framework for constructing classical approximations of arbitrary quantum states with quantum reservoirs. A key advantage of our method is that only a single local measurement setting is required for estimating arbitrary properties, while most of the previous methods need exponentially increasing number of measurement settings. To estimate $M$ properties simultaneously, the size of the classical approximation scales as $\ln M$ . Moreover, this estimation scheme is extendable to higher-dimensional systems and hybrid systems with non-identical local dimensions, which makes it exceptionally generic. We support our theoretical findings with extensive numerical simulations.
Yinfei Li, Sanjib Ghosh, Jiangwei Shang, Qihua Xiong, Xiangdong Zhang
2023-05-11T15:21:21Z
http://arxiv.org/abs/2305.06878v3
# Unified direct parameter estimation via quantum reservoirs

###### Abstract

Parameter estimation is an indispensable task in various applications of quantum information processing. To predict parameters in the post-processing stage, it is inherent to first perceive the quantum state with a measurement protocol and store the information acquired. In this work, we propose a general framework for constructing classical approximations of arbitrary quantum states with quantum reservoir networks. A key advantage of our method is that only a single local measurement setting is required for estimating arbitrary parameters, while most of the previous methods need an exponentially increasing number of measurement settings. To estimate \(M\) parameters simultaneously, the size of the classical approximation scales as \(\ln M\). Moreover, this estimation scheme is extendable to higher-dimensional systems and hybrid systems with non-identical local dimensions, which makes it exceptionally generic. Both linear and nonlinear functions can be estimated efficiently by our scheme, and we support our theoretical findings with extensive numerical simulations.

## I Introduction

Parameter estimation plays a central role in the implementation of various quantum technologies, such as quantum computing, quantum communication and quantum sensing. This highlights that extracting information from a quantum system to a classical machine lies at the heart of quantum physics. The most prominent technique for this task, quantum tomography, studies methods for reconstructing the density matrices of quantum states. The density matrix captures all the information of a quantum system and can be used to predict any of its properties. However, the curse of dimensionality has emerged with the advent of the noisy intermediate-scale quantum (NISQ) era [1], which renders it infeasible to obtain a complete description of quantum systems with a large number of constituents. Moreover, a full description is often superfluous in tasks where only key properties are relevant. As a consequence, the concept of shadow tomography was proposed to focus on predicting certain properties of a quantum system [2]. A particularly important advance in the study of shadow tomography is the development of randomized measurements [3; 4], the virtue of which is highlighted as _"Measure first, ask questions later"_[5]. The randomized measurement protocols proposed by Huang, Kueng and Preskill construct approximate representations of the quantum system, namely classical shadows, via Pauli group and Clifford group measurements [4]. The single-snapshot variance upper bound of classical shadows is determined by the so-called shadow norm, which is asymptotically optimal for global measurements. In addition, the statistical fluctuation can be further suppressed by constructing classical shadows with positive operator-valued measures (POVMs) [6; 7; 8]. The classical shadows are highly efficient for the estimation of various properties in the post-processing phase, with benefits extending to entanglement detection [9], characterization of topological order [10], machine learning for many-body problems [11], etc. However, these protocols pose a challenge in experiments due to the need for an exponentially increasing number of measurement settings to achieve arbitrary accuracy. Hence, various techniques have been introduced to tackle this problem [12; 13; 14; 15].
Moreover, the theoretical results are based on the fact that multi-qubit Clifford groups are unitary 3-designs [16], which is not the case for arbitrary qudit systems. The generalization of these results to higher-dimensional systems typically requires complex unitary ensembles that are hard to implement [17; 18; 19; 20]. Therefore, a general method for direct estimation with a single measurement setting is highly desirable. Recently, quantum neural networks [21; 22; 23] have been widely studied as promising artificial neural networks due to their enhanced information feature space supported by the exponentially large Hilbert space [24; 25]. Unlike traditional computing frameworks, neural networks learn to perform complex tasks based on training rather than predefined algorithms or strategies [26]. With the capacity to produce data that displays atypical statistical patterns, quantum neural networks have the potential to outperform their classical counterparts [22]. However, training a quantum neural network can be equally hard [27]. Indeed, it has been shown that training of quantum neural networks can be exceptionally difficult owing to barren plateaus or remote local minima in the training landscape [28, 29, 30, 31]. This is the reason that quantum neural networks are often limited to shallow circuit depths or small numbers of qubits. A trending line of research that circumvents this issue is quantum reservoir processing (QRP) [32, 33, 34], a quantum analog of recurrent networks. In this work we present a direct parameter estimation scheme via quantum neural networks, which overcomes the obstacles faced by randomized measurement protocols by harnessing the richness of QRP. In QRP, training is completely moved out of the main network to a single output layer, such that the training becomes a linear regression, eliminating the possibility of producing barren plateaus or local minima. Such a quantum neural network retains its quantum-enhanced feature space while being trainable via a fast and easy mechanism. Based on this efficiently trainable QRP, we establish a unified measurement protocol for direct quantum parameter estimation. A scheme of minimal quantum hardware comprising pair-wise connected quantum nodes is developed to estimate arbitrary parameters of a quantum state. As major advantages, our scheme requires single-qubit measurements, only in a single setting, and a logarithmic network size \(\sim\!\ln d\) with respect to the dimension \(d\) of the input state. All of these are particularly favorable for actual physical implementations. Furthermore, we establish a rigorous performance guarantee by adopting the mindset of shadow estimation. According to Born's rule, one measurement of a quantum state is analogous to sampling a probability distribution once. Thus, learning properties of a quantum state involves measuring independent and identically distributed (i.i.d.) samples of the quantum state a certain number of times. To estimate \(M\) observables of the state within an additive error \(\epsilon\) and with constant confidence, the number of i.i.d. input samples consumed scales as \(O\big{(}F_{\mathrm{res}}\ln M/\epsilon^{2}\big{)}\). The factor \(F_{\mathrm{res}}\) represents the variance upper bound of the single-sample estimator, which depends solely on the observables and the reservoir dynamics, and its magnitude is comparable to that of the shadow norm.
As a direct consequence of the pair-wise reservoir dynamics, \(F_{\mathrm{res}}\) for a \(k\)-local observable is the product of that for each single-qubit observable. We support the theoretical results with extensive numerical simulations.

## II Quantum Reservoir Parameter Estimation

In this section we introduce the scheme of quantum reservoir parameter estimation (QRPE), which evaluates physical properties of input quantum states based on a quantum estimation device. Our goal parallels that of shadow estimation: to devise a resource-efficient classical representation of complex quantum states that permits access to their properties through subsequent classical processing.

### Physical setting

We consider a system based on interacting quantum nodes which is used as a dynamical estimation device. More specifically, our considered device is a pair-wise connected network of qubits, as shown in Fig. 1, where the connections are obtained with transverse exchange interactions [35] and each qubit is excited with a continuous driving field. For \(n\)-qubit input states, there are \(n\) pairs of reservoir nodes. The corresponding Hamiltonian of the qubit network device is given by \[\hat{H}=\sum_{i=1}^{n}\left[J\big{(}\varsigma_{2i-1}^{x}\varsigma_{2i}^{x}+\varsigma_{2i-1}^{y}\varsigma_{2i}^{y}\big{)}+P_{1}\varsigma_{2i-1}^{x}+E_{1}\varsigma_{2i-1}^{z}+P_{2}\varsigma_{2i}^{x}+E_{2}\varsigma_{2i}^{z}\right]. \tag{1}\] The operators \(\varsigma_{i}^{x,y,z}\) are the Pauli operators acting on the \(i\)-th quantum node, implicitly padded with identities so that their dimension is compatible with the context. The parameter \(J\) represents the strength of the pair-wise transverse exchange interaction between the \((2i-1)\)-th and \(2i\)-th nodes. The parameters \(P_{1,2}\) and \(E_{1,2}\) represent the driving field strengths and onsite energies, respectively. Such a device can be readily realized with superconducting qubits, where the exchange interaction can be realized via a cavity quantum bus [35]. Moreover, \(\hat{H}\) has the form of a quantum spin Hamiltonian which can be realized in a variety of platforms such as NMR [36, 37], quantum dots [38], and trapped ions [39].

Figure 1: Schematic illustration of direct parameter estimation with quantum reservoirs. A source prepares an \(n\)-qubit quantum state \(\sigma\) which is taken as input by a pair-wise reservoir network with \(2n\) qubits. For the \(i\)-th input copy, the local measurement operator on the \(j\)-th node is \(\hat{n}_{j}=(\mathds{1}-\varsigma_{j}^{z})/2\), and the readout is a bit string \(s_{i}\in\{0,1\}^{2n}\), corresponding to one of the \(N_{\mathrm{read}}\) readout operators. Equipped with the training data, an unbiased estimator \(\hat{\Omega}\) is constructed for the parameter \(\mathcal{O}\).

### Measurement protocol

To estimate properties of a quantum state \(\sigma\), it is injected into the quantum reservoir via an invertible map, which for simplicity we choose to be swap operations. Thus, the initial state of the network at time \(t=0\) is given by the density matrix \(\rho(0)=\sigma\otimes[|0\rangle\langle 0|]_{\text{rest}}\), where the suffix 'rest' indicates the network nodes other than the ones connected to the input qubits via the swap gates. The initial reservoir state evolves in time as \[\rho(t)=\hat{U}(t)\rho(0)\hat{U}^{\dagger}(t)\,, \tag{2}\] where \(\rho(t)\) is the density operator at time \(t\) and \(\hat{U}(t)=\exp(-it\hat{H}/\hbar)\) is the evolution operator.
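As a concrete illustration of Eqs. (1) and (2), the following minimal sketch builds the Hamiltonian of a single reservoir pair (one input qubit) and its evolution operator. The parameter values are the ones quoted in the caption of Fig. 2; \(\hbar\) is set to \(1\), and all helper names are assumptions of this sketch.

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], complex)
Y = np.array([[0, -1j], [1j, 0]], complex)
Z = np.array([[1, 0], [0, -1]], complex)
I = np.eye(2, dtype=complex)

# One reservoir pair (nodes 1 and 2) of Eq. (1); parameter values taken from
# the setting quoted in Fig. 2: J = -0.41, P1 = 4.0, P2 = 1.3, E1 = 0.71, E2 = 0.46.
J, P1, P2, E1, E2, t = -0.41, 4.0, 1.3, 0.71, 0.46, 1.0
H = (J * (np.kron(X, X) + np.kron(Y, Y))
     + P1 * np.kron(X, I) + E1 * np.kron(Z, I)
     + P2 * np.kron(I, X) + E2 * np.kron(I, Z))

U = expm(-1j * t * H)          # evolution operator of Eq. (2), with hbar = 1

def evolve(sigma):
    """Inject a single-qubit state (node 1 <- input, node 2 in |0><0|) and evolve."""
    rho0 = np.kron(sigma, np.diag([1.0, 0.0]).astype(complex))
    return U @ rho0 @ U.conj().T
```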
After a sufficient time evolution, we perform local Pauli-\(Z\) measurements on the reservoir nodes (qubits). For each node there are two readouts, \(+1\) and \(-1\), represented by the projectors onto the positive and negative eigensubspaces of the Pauli-\(Z\) operators \(\{\varsigma_{i}^{z}\}\), respectively. The final readouts are provided by a set of commuting readout operators \[\begin{split}\{\hat{o}_{i}\}&=\prod_{j=1}^{N_{\text{node}}}\Big{\{}\frac{\openone-\varsigma_{j}^{z}}{2},\,\frac{\openone+\varsigma_{j}^{z}}{2}\Big{\}}\\ &=\big{\{}\hat{C}_{\varnothing}\big{\}}\cup\big{\{}\hat{C}_{\{i_{1}\}}\big{\}}\cup\big{\{}\hat{C}_{\{i_{1}^{\prime},i_{2}^{\prime}\}}\big{\}}\cup\dots\,,\end{split} \tag{3}\] where \(N_{\text{node}}\) is the number of reservoir nodes. \(\hat{C}_{S}\) is an element of the set \(\{\hat{o}_{i}\}\), which is related to the Pauli-\(Z\) operators as \[\hat{C}_{S}=\prod_{i\in S}\frac{\openone-\varsigma_{i}^{z}}{2}\prod_{j\notin S}\frac{\openone+\varsigma_{j}^{z}}{2}\,, \tag{4}\] so it represents the configuration of measurement outcomes in which only the nodes in the set \(S\) give \(-1\) for the local Pauli-\(Z\) measurements. Hence, the readout for the \(i\)-th input is recorded by a bit string \(s_{i}\in\{0,1\}^{2n}\). The total number of readout operators in \(\{\hat{o}_{i}\}\) is given by \[N_{\text{read}}=\sum_{k=0}^{N_{\text{node}}}\binom{N_{\text{node}}}{k}=2^{N_{\text{node}}}\,, \tag{5}\] where \(\binom{N_{\text{node}}}{k}\) is the binomial coefficient. Considering that \(\{\hat{o}_{i}\}\) forms an orthogonal measurement basis with a total of \(2^{N_{\text{node}}}\) elements, a reservoir with a minimum of \(N_{\text{node}}=2\log_{2}d\) nodes is required to estimate arbitrary quantum parameters for input states supported on a \(d\)-dimensional Hilbert space \(\mathcal{H}_{d}\). This observation agrees with our proposal of pair-wise connected reservoir networks. Moreover, the commutativity of the chosen readout operators leads to quantum resource efficiency. This is in sharp contrast to traditional quantum reservoir computing schemes, where either the required size of the quantum reservoir or the temporal resolution of the measurement tends to be exponentially large (\(\sim\!\!d^{2}\)). In either situation, these traditional schemes consume exponentially many quantum resources.

### Training

To estimate the expectation value \(\text{Tr}(\mathcal{O}\sigma)\), the QRPE scheme essentially maps the input state \(\sigma\) to a vector of probabilities for observing each readout operator \[\sigma\xrightarrow{\text{QRP}}\bar{X}=[\langle\hat{o}_{1}\rangle;\,\langle\hat{o}_{2}\rangle;\,\dots\,;\,\langle\hat{o}_{N_{\text{read}}}\rangle]\,, \tag{6}\] and the target observable \(\mathcal{O}\) to a vector of weights \[\mathcal{O}\xrightarrow{\text{QRP}}W=[w_{1},\,w_{2},\,\dots,\,w_{N_{\text{read}}}]\,, \tag{7}\] satisfying \[W\cdot\bar{X}=\text{Tr}(\mathcal{O}\sigma)\,. \tag{8}\] We note that similar maps have also been studied in the context of analog quantum simulation [14; 15]. Here each readout in experiments requires only linear classical storage with respect to the system size, owing to the tensor product structure in Eq. (3). The relation given by Eq. (6) is achieved by sampling from i.i.d. copies of the \(n\)-qubit state \(\sigma\) and processing the reservoir readouts with statistical methods, as addressed in Sec. II.4, while Eq. (7) is achieved by a training process described below.
For training, we require a one-time estimation of a known set of training states \(\{|\varphi_{k}\rangle\}\). Here we consider the training data to be accurate, and present results that account for statistical noise occurring outside of the training phase. The reservoir dynamics are initialized by setting the parameters \(J\), \(P_{1,2}\), \(E_{1,2}\) and the evolution time \(t\). Each training step starts with an initial state of the reservoir \(\rho(0)=|\varphi_{k}\rangle\langle\varphi_{k}|\otimes[|0\rangle\langle 0|]_{\text{rest}}\), which then evolves to \(\rho(t)\) at time \(t\). From sufficiently many measurement results of each input training state, we estimate the expectation value \(\langle\hat{o}_{i}(k)\rangle\). The training data is stored by arranging \(\langle\hat{o}_{i}(k)\rangle\) into a column vector \(\bar{X}_{|\varphi_{k}\rangle\langle\varphi_{k}|}\). In this way, we collect readout vectors \(\bar{X}_{|\varphi_{k}\rangle\langle\varphi_{k}|}\) corresponding to all the training states \(|\varphi_{k}\rangle\) for \(k=1,\,2,\,\dots,\,N_{\text{train}}\). Up to this step, the whole procedure is completely independent of the parameter to be estimated. The knowledge of \(\mathcal{O}\) is only required at the post-processing level, where we set the target output \[Y_{k}^{\text{tar}}=\langle\varphi_{k}|\mathcal{O}|\varphi_{k}\rangle\,. \tag{9}\] Let \(Y_{k}^{\text{out}}=W\cdot\bar{X}_{|\varphi_{k}\rangle\langle\varphi_{k}|}\), and the sum of squared deviations between \(Y_{k}^{\text{out}}\) and \(Y_{k}^{\text{tar}}\) is \[\mathcal{E}_{\text{train}}=\left|Y^{\text{out}}-Y^{\text{tar}}\right|^{2}, \tag{10}\] where \(Y^{\text{out}}\) is a row vector with elements \(Y_{k}^{\text{out}}\), \(k=1,\,2,\,\dots,\,N_{\text{train}}\), and \(Y^{\text{tar}}\) is defined similarly. For typical quantum reservoirs, a total of \(d^{2}\) training states are needed to estimate arbitrary parameters for an input state supported on \(\mathcal{H}_{d}\). The set of training states \(\{|\varphi_{k}\rangle\langle\varphi_{k}|\}\) is a set of \(d^{2}\) vectors which spans a \(d^{2}\)-dimensional Hilbert space, i.e., forms an informationally complete POVM. We choose the training states as \[|\varphi_{k}\rangle=\otimes_{m=1}^{n}|k_{m}\rangle\,,\quad k=\sum_{m=1}^{n}4^{m-1}k_{m}\,, \tag{11}\] where \(k_{m}\in\{0,1,2,3\}\), \(|0\rangle=[1;0]\), \(|1\rangle=[0;1]\), \(|2\rangle=(|0\rangle+|1\rangle)/\sqrt{2}\) and \(|3\rangle=(|0\rangle+i|1\rangle)/\sqrt{2}\). Denote \[\mathbf{\mathcal{T}}=\mathbf{X}_{\mathrm{t}}\mathbf{M}_{\mathrm{t}}^{-1}\,, \tag{12}\] where \(\mathbf{M}_{\mathrm{t}}=\big{[}|\varrho_{1}\rangle\!\rangle,\,|\varrho_{2}\rangle\!\rangle,\,\ldots,\,|\varrho_{d^{2}}\rangle\!\rangle\big{]}\), \(\varrho_{k}=|\varphi_{k}\rangle\langle\varphi_{k}|\) is the density matrix of the \(k\)-th training state, \(|\cdot\rangle\!\rangle\) is the Liouville superoperator representation, and the matrix of training data is \(\mathbf{X}_{\mathrm{t}}=[\bar{X}_{|\varphi_{1}\rangle\langle\varphi_{1}|},\,\bar{X}_{|\varphi_{2}\rangle\langle\varphi_{2}|},\,\ldots]\). Then the expectation of the reservoir readout for an input state \(\sigma\) is \[\bar{X}_{\sigma}=\mathbf{\mathcal{T}}|\sigma\rangle\!\rangle\,, \tag{13}\] as is explained in Appendix A.
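Continuing the hypothetical single-pair sketch above (it assumes the helper `evolve` and the imports defined there), the pair training data \(\mathbf{X}_{\mathrm{p}}\) and the map \(\mathbf{\mathcal{T}}_{\mathrm{p}}=\mathbf{X}_{\mathrm{p}}\mathbf{M}_{\mathrm{p}}^{-1}\) can be assembled from the four single-qubit training states of Eq. (11), and Eq. (13) checked on a state outside the training set:

```python
# Collect the pair training data X_p and build T_p = X_p @ M_p^{-1} (cf. Eq. (12)).
k0 = np.array([1, 0], complex)
k1 = np.array([0, 1], complex)
states = [k0, k1, (k0 + k1) / np.sqrt(2), (k0 + 1j * k1) / np.sqrt(2)]

def readout(sigma):
    # The readout operators (3) are diagonal in the computational basis, so the
    # outcome probabilities are the diagonal of the evolved density matrix.
    return np.real(np.diag(evolve(sigma)))

X_p = np.column_stack([readout(np.outer(v, v.conj())) for v in states])
M_p = np.column_stack([np.outer(v, v.conj()).reshape(-1, order="F") for v in states])
T_p = X_p @ np.linalg.inv(M_p)

print("rank of X_p:", np.linalg.matrix_rank(X_p))   # should be full (= 4)

# Sanity check of Eq. (13) on a state outside the training set
sigma = np.array([[0.7, 0.2 - 0.1j], [0.2 + 0.1j, 0.3]])
print(np.allclose(T_p @ sigma.reshape(-1, order="F"), readout(sigma)))
```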
If \(\mathbf{X}_{\mathrm{t}}\) is full rank, there exists a weight vector \(W\) that minimizes \(\mathcal{E}_{\mathrm{train}}\), i.e., \[W=Y^{\mathrm{tar}}\mathbf{X}_{\mathrm{t}}{}^{-1}\,, \tag{14}\] which can be written as \(W=\langle\!\langle\mathcal{O}|\mathbf{\mathcal{T}}^{-1}\). Leveraging the pair-wise reservoir dynamics in Eq. (1) eases the burden of training, since the training data inherits a tensor product structure. Once the training data of a single pair of interacting reservoir nodes has been collected as \(\mathbf{X}_{\mathrm{p}}\), we have \[\mathbf{X}_{\mathrm{t}}=\bigotimes_{i=1}^{n}\mathbf{X}_{\mathrm{p}}\,, \tag{15}\] where \(n\) is the number of node pairs. Also, \[\mathbf{M}_{\mathrm{t}}=\bigotimes_{i=1}^{n}\mathbf{M}_{\mathrm{p}}\,,\quad\mathbf{\mathcal{T}}=\bigotimes_{i=1}^{n}\mathbf{\mathcal{T}}_{\mathrm{p}}\,, \tag{16}\] where \(\mathbf{\mathcal{T}}_{\mathrm{p}}=\mathbf{X}_{\mathrm{p}}\mathbf{M}_{\mathrm{p}}^{-1}\). Thus, the training task is effectively reduced to that of a two-node reservoir, and the full-rank requirement of \(\mathbf{X}_{\mathrm{t}}\) is correspondingly reduced to that of \(\mathbf{X}_{\mathrm{p}}\). Moreover, the vector of weights requires only polynomial storage if the parameter \(\mathcal{O}\) can be decomposed into a finite sum of tensor products. See Appendix A for more details.

### Reservoir estimator

In this subsection we introduce the reservoir estimator for linear functions. With the training data at our disposal, we can analyze the sample efficiency of the reservoir estimators. Suppose the observed readout operator for the \(i\)-th input copy is \(\hat{o}_{j}\); then the so-called single snapshot \(X_{i}\) is a vector whose \(j\)-th element is \(1\) and whose other elements are \(0\). For a total of \(N_{\mathrm{sample}}\) input copies, one obtains a set of snapshots \(\{X_{i}\,|\,i=1,\,2,\,\ldots,\,N_{\mathrm{sample}}\}\). The single-snapshot estimator is \[\hat{\Omega}\equiv W\cdot\hat{X}\,, \tag{17}\] where \(\hat{X}\) is a random variable distributed according to the probability distribution underlying the \(X_{i}\). Eq. (8) indicates that \(\hat{\Omega}\) is an unbiased estimator for \(\mathrm{Tr}(\mathcal{O}\sigma)\). In data processing, we can therefore apply the median of means (MoM) method to neutralize the effect of outliers [4, 40]. After processing \(N_{\mathrm{sample}}=KN\) input copies, we divide the snapshots into \(K\) equally sized subsets \(\{X_{i}^{v}\,|\,i=1,\,2,\,\ldots,\,N\}\), and compute the mean value of the single-snapshot estimators for each subset. The corresponding estimators are \[\hat{\Omega}_{\mathrm{M}}^{v}=\frac{1}{N}\sum_{i=1}^{N}W\cdot\hat{X}_{i}^{v},\quad v=1,\,2,\,\ldots,\,K\,. \tag{18}\] Then, the MoM estimator is given by \[\hat{\Omega}_{\mathrm{MoM}}=\mathrm{Median}\big{\{}\hat{\Omega}_{\mathrm{M}}^{v}\big{\}}\,. \tag{19}\] With this, we have

**Theorem 1**.: _The number of quantum state inputs needed for estimating a set of \(M\) parameters \(\{\mathcal{O}_{i}\,|\,i=1,\,2,\,\ldots,\,M\}\) to precision \(\epsilon\) and confidence level \(1-\delta\) scales as_ \[N_{\mathrm{sample}}\sim O\bigg{(}\mathrm{ln}\Big{(}\frac{2M}{\delta}\Big{)}\frac{\max_{i}F_{\mathrm{res}}^{i}}{\epsilon^{2}}\bigg{)}\,.
\tag{20}\] _The factor \(F_{\mathrm{res}}^{i}=||\mathcal{B}_{i}||_{\infty}\) is the variance upper bound of the single-snapshot estimator of \(\mathcal{O}_{i}\) maximized over the possible quantum state inputs, where \(||\cdot||_{\infty}\) represents the spectral norm, and \(\mathcal{B}_{i}\) is defined by_ \[\langle\!\langle\mathcal{B}_{i}|\mathbf{\mathcal{T}}^{-1}=\langle\!\langle\mathcal{O}_{i}|\mathbf{\mathcal{T}}^{-1}\odot\langle\!\langle\mathcal{O}_{i}|\mathbf{\mathcal{T}}^{-1}\,. \tag{21}\]

Proof.: This efficiency scaling results from the median of means method, and we consider the worst-case scenario by maximizing \(F_{\mathrm{res}}^{i}\) over the set of observables. A more detailed proof is included in Appendix B.

The variance of a single-snapshot estimator for \(\mathcal{O}\) is invariant over the family of parameters \(\{\mathcal{O}^{\prime}\,|\,\mathcal{O}^{\prime}=\mathcal{O}+c\mathds{1},c\in\mathbb{C}\}\). Thus, one could use only the traceless part of \(\mathcal{O}\) to compute the worst-case variance upper bound. It is interesting to note that the MoM estimator does not offer a visible advantage in some tested cases [41, 42], where it could be replaced with the sample mean estimator. To further analyze the sample efficiency scaling of the QRPE scheme, we have:

**Theorem 2**.: _For a \(k\)-local parameter, e.g.,_ \[\mathcal{O}=\bigotimes_{i=1}^{k}\mathcal{O}_{i}\bigotimes_{i=k+1}^{n}\mathds{1}_{i}\,, \tag{22}\] _the worst-case variance upper bound \(||\mathcal{B}||_{\infty}\) is the product of that of each local parameter \(\mathcal{O}_{i}\), i.e.,_ \[||\mathcal{B}||_{\infty}=\prod_{i=1}^{k}||\mathcal{B}_{i}||_{\infty}\,, \tag{23}\] _where \(\mathcal{B}_{i}\) satisfies_ \[\langle\!\langle\mathcal{B}_{i}|\mathbf{\mathcal{T}}_{\mathrm{p}}{}^{-1}=\langle\!\langle\mathcal{O}_{i}|\mathbf{\mathcal{T}}_{\mathrm{p}}{}^{-1}\odot\langle\!\langle\mathcal{O}_{i}|\mathbf{\mathcal{T}}_{\mathrm{p}}{}^{-1}\,. \tag{24}\]

Proof.: This theorem results from the pair-wise reservoir dynamics. See Appendix B for the details.

Theorem 2 indicates that for \(k\)-local parameter estimation, if a reservoir setting works well in the single-qubit case, then it also works well in the multi-qubit case. Thus, our approach for evaluating the reservoir parameters \(J\), \(P_{1,2}\), and \(E_{1,2}\) is based on the performance in the single-qubit state overlap estimation task. To see why overlap estimation reflects the overall performance of observable estimation, note that the single-snapshot estimator's variance for an arbitrary parameter \(\mathcal{O}^{\prime}\) satisfying \(\mathcal{O}^{\prime}=c_{1}\sigma+c_{2}\mathds{1}\) equals \(c_{1}^{2}\) times that of \(\sigma\), where \(c_{1,2}\) are real numbers and \(\sigma\) is a density matrix. We find several settings that lead to a small average variance upper bound for random target states, and choose one of them as the reservoir setting used in this manuscript. In the task of fidelity estimation, the efficiency of the current reservoir estimator outperforms shadow estimation based on the Pauli group only for a fraction of target pure states. Also, the ratio of the average variance upper bound of shadow estimation to that of the QRPE scheme is slightly larger than unity and almost invariant with the system size. While the average efficiency of the current setting differs only marginally from that of shadow estimation, the number of settings for the standard shadow estimation is exponentially large compared to that of the present scheme. These results are shown in Fig. 2.
We note that the random measurement protocol with the Clifford group can obtain a scaling independent of the system size, but requires implementing complex Clifford compilation [44; 45]. Also, we could utilize random settings by probabilistic time multiplexing (PTM), where the pair-wise training data \(\mathbf{X}_{\text{p}}(t_{k})\) are collected at \(N_{\text{time}}\) different time points \(\{t_{k}\,|\,k=1,\,2,\,\ldots,\,N_{\text{time}}\}\), and the single-snapshot estimator is constructed by measuring reservoir nodes at time \(t_{k}\) with probability \(p_{k}\), where \(\sum_{k}p_{k}=1\). The optimization over probability distributions aims to reduce the average variance upper bound of single-qubit state overlap estimation; see Appendix B for more details.

The QRPE protocol for linear functions is as follows:

1. Perform a one-time estimation of the training states of a two-node reservoir and load the training data \(\{\mathbf{X}_{\text{p}}\}\) to classical memory.
2. Given parameters \(\{\mathcal{O}_{i}\}\), calculate weights \(\{W_{i}\}\) with the training data. Obtain the worst-case variance upper bound \(\max_{i}||\mathcal{B}_{i}||_{\infty}\). Calculate \(N_{\text{sample}}\) for the given confidence \(1-\delta\) and additive error \(\epsilon\).
3. Process i.i.d. copies of the unknown state \(\sigma\) with the quantum reservoir network. Load \(N_{\text{sample}}\) snapshots \(\{X_{i}\}\) to classical memory.
4. Calculate the estimated values \(\{\tilde{\mathcal{O}}_{i}\}\) with the weights and snapshots.

The reservoir estimators for nonlinear functions are based on U-statistics [46; 4], which generalize the sample mean estimator. It is worth noting that the reservoir snapshots are ready to be used to estimate future parameters of the input state \(\sigma\).

## III Applications

Here we provide a wide range of applications of the quantum reservoir scheme. The reservoir initialization and evolution time are fixed for all applications. See Fig. 2 for the details of the reservoir settings.

Figure 2: Comparison with shadow estimation with Pauli measurements [4]. (top) Average variance upper bound of fidelity estimation. For each system size we randomly generate 1000 pure states as the target states. The ratio of the average variance upper bounds for QRPE and shadow estimation remains almost constant as the number of qubits grows. (bottom) The average number of measurement settings invoked to reach a given precision with confidence level 0.90. For shadow estimation, we assume the variance of the single-snapshot estimator equals a tenth of the shadow norm used in the top panel. While QRPE requires only a single setting, the standard shadow protocol with Pauli measurements [4; 43] requires exponentially many settings to reach an arbitrary precision. The reservoir setting we choose for qubit systems is given by \(J=-0.41\), \(P_{1}=4.0\), \(P_{2}=1.3\), \(E_{1}=0.71\), \(E_{2}=0.46\) and \(t=1\). The unit for Planck's constant is meV\(\cdot\)ps.

### Fidelity estimation

In contrast to full quantum tomography, extracting partial information from a quantum system is more resource efficient. A particularly important quantity is the degree to which a given system differs from the target system. Accordingly, quantum fidelity is a widely used distance measure of quantum states, which is a linear function of the given state when the target state is a pure state [47; 48].
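To make the classical post-processing of protocol steps 2 through 4 concrete, the following Python sketch estimates a linear parameter, such as a pure-state fidelity, from reservoir snapshots. The shapes and names here (`T` for training-derived transfer data, `w` for the readout weights) are hypothetical assumptions for illustration, not the actual QRPE implementation.

```python
import numpy as np

def estimation_weights(T, o_coeffs):
    """Solve for weights w with w @ T ~= o_coeffs (least squares), i.e. the
    readout weights that map reservoir features onto the target observable."""
    w, *_ = np.linalg.lstsq(T.T, o_coeffs, rcond=None)
    return w

def estimate_linear(w, snapshots, n_batches=10):
    """Protocol step 4: evaluate w . x for each snapshot, then combine the
    per-snapshot values with a median of means over batches."""
    vals = snapshots @ w
    batches = np.array_split(vals, n_batches)
    return np.median([b.mean() for b in batches])

# Hypothetical dimensions: 16 reservoir features, 16 observable coefficients.
rng = np.random.default_rng(1)
T = rng.normal(size=(16, 16))            # stand-in for the learned transfer data
o_coeffs = rng.normal(size=16)           # target observable in the same basis
w = estimation_weights(T, o_coeffs)
snapshots = rng.normal(size=(5000, 16))  # stand-in for measured feature snapshots
print(estimate_linear(w, snapshots))
```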
Here we present an illustrative example of how the reservoir estimation scheme works in random pure state fidelity estimation. Numerical results indicate that the average worst-case variance upper bound for QRPE is close to that of random Pauli measurements, as shown in Fig. 2.

### Entanglement detection

Entanglement is an indispensable resource in tasks ranging from quantum computation to quantum communication [49]. Whether one can claim its existence for a given system has attracted both theoretical and experimental interest [50; 51; 52; 53; 54]. However, the difficulty of describing the convex space of separable states imposes a trade-off between the detection ability and effectiveness of entanglement criteria [55]. With its capability of estimating multiple observables simultaneously, the reservoir estimation scheme is a natural fit for the task of entanglement detection with linear entanglement criteria.

We illustrate the QRPE scheme on the detection of an intriguing and more targeted phenomenon, entanglement sudden death [56; 57]. Consider a three-qubit GHZ-type state [58]

\[\rho=\frac{1-q}{8}\openone+q\rho_{\mathrm{G}}\,, \tag{25}\]

where \(\rho_{\mathrm{G}}=(|000\rangle+|111\rangle)(\langle 000|+\langle 111|)/2\). The dephasing channel is defined by the Kraus operators

\[K_{0}=\sqrt{1-p}\openone,\quad K_{1,2}=\frac{\sqrt{p}}{2}(\openone\pm\sigma^{\mathrm{z}})\,. \tag{26}\]

Setting \(p=1-\exp{(-\kappa t)}\), dephasing of the initial state \(\rho\) amounts to multiplying its off-diagonal elements by a factor \(\exp{(-\kappa t)}\). We use the following optimal linear entanglement witnesses for detecting genuine multipartite entanglement (GME) and any multipartite entanglement (ME), respectively [59]:

\[W_{\mathrm{GME}} =\openone-2\rho_{\mathrm{G}}\,, \tag{27}\] \[W_{\mathrm{ME}} =\openone-4\rho_{\mathrm{G}}+2\rho_{\mathrm{G}-}\,,\]

where \(\rho_{\mathrm{G}-}=(|000\rangle-|111\rangle)(\langle 000|-\langle 111|)/2\). We normalize the spectral norm of the two witnesses, and estimate them simultaneously. With a total of 6000 snapshots for each \(\{q,\kappa t\}\), we illustrate the results in Fig. 3.

### Estimating expectation values of local and global observables

Estimating the values of observables is a fundamental task in various quantum information processing tasks. For local observables, we present the numerical results of estimating local Pauli observables for random four-qubit states. We observe an exponential growth of the sample complexity with the locality in Pauli observable estimation, which is in line with Theorem 2. The numerical result is shown in Fig. 4. For global observables, we consider the task of GHZ state fidelity estimation. For an input GHZ state with three, six, nine and twelve qubits, we present the results of the numerical experiment as the number of input copies versus the average error in Fig. 5.

### Estimating nonlinear functions

The reservoir measurements only describe linear functions in Eq. (8). Nevertheless, experimentally accessible nonlinear parameters are typically measured by estimating linear functions, which can be translated into the QRPE scheme. For instance, the swap trick is widely used in purity estimation. For two copies of the input state \(\sigma\), the swap operator obeys \(\mathrm{Tr}(S\sigma\otimes\sigma)=\mathrm{Tr}(\sigma^{2})\). Numerical simulation of the purity estimation of 10000 random one-qubit input states shows that the average variance upper bound is around 2.9.
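The swap-trick purity estimate can be phrased as a simple U-statistic over pairs of single-snapshot estimators: averaging \(\mathrm{Tr}(\hat{\rho}_{i}\hat{\rho}_{j})\) over distinct pairs yields an unbiased estimate of \(\mathrm{Tr}(\sigma^{2})\). The sketch below assumes the snapshots are already available as density-matrix-valued arrays; the function name is our own.

```python
import numpy as np
from itertools import combinations

def purity_u_statistic(snapshots):
    """Swap-trick purity estimate: Tr(S rho x rho) = Tr(rho^2), computed as
    the U-statistic mean of Tr(rho_i rho_j) over distinct snapshot pairs."""
    pairs = combinations(range(len(snapshots)), 2)
    vals = [np.trace(snapshots[i] @ snapshots[j]).real for i, j in pairs]
    return np.mean(vals)

# Toy check with exact copies of a pure one-qubit state (purity 1).
psi = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())
print(purity_u_statistic([rho] * 20))  # -> 1.0
```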
The Rényi entropy is another important nonlinear quantity for characterizing entanglement; the second Rényi entropy is the negative logarithm of the subsystem purity. The estimator of the second Rényi entropy is based on

\[\mathrm{Tr}(\rho_{A}^{2})=\mathrm{Tr}(S_{A}\rho\otimes\rho)\,, \tag{28}\]

where \(S_{A}\) is the swap operator acting on subsystem \(A\) of two copies of \(\rho\).

Figure 3: Observation of entanglement sudden death. (top): estimation of \(W_{\mathrm{GME}}\). (bottom): estimation of \(W_{\mathrm{ME}}\). We use 6000 measurements for each \(\{q,\kappa t\}\), and the maximal estimation errors for the GME and ME witnesses are 0.11 and 0.13, respectively.

The second Rényi entropy of small subsystems is useful in avoiding weak barren plateaus (WBP) [60]. Here we perform WBP diagnosis with the reservoir estimation scheme. For an initial state \(|0\rangle^{\otimes 14}\), each gate sequence of the variational quantum eigensolver (VQE) circuit is composed of random local rotations \(\exp(-\frac{i}{2}\theta\sigma)\), where \(\theta\in[-\pi/20,\pi/20]\) and \(\sigma\in\{\sigma_{x},\sigma_{y},\sigma_{z}\}\), and nearest-neighbor controlled-Z gates with periodic boundary conditions. We detect the emergence of WBP by estimating the second Rényi entropy of a small region consisting of \(N_{A}\) qubits via 2000 measurements at each circuit depth. The numerical experiment is shown in Fig. 6.

Figure 4: Numerical experiment for estimating Pauli observables for random one-, two-, three- and four-qubit input states. Each data point represents the average error over all the \(k\)-local Pauli observables of a random \(k\)-qubit input state. The solid marks represent the average performance of 30 independent experiments, while the hollow marks represent a single experiment. The subfigure shows the variance upper bound \(\hat{F}_{\text{res}}\), averaged over the \(k\)-local Pauli observables. Numerical results indicate that the worst-case sample complexity of estimating Pauli observables increases exponentially with the locality.

Figure 5: Numerical experiment for identifying GHZ states with fidelity estimation. The solid marks represent the average performance of 50 independent experiments, while the hollow marks represent a single experiment. The subfigure shows the variance upper bound for the \(k\)-qubit GHZ state. Numerical results indicate that the sample complexity of identifying GHZ states increases exponentially with the number of qubits.

Figure 6: Saturation of the second Rényi entropy \(S_{2}(\rho_{A})\) of a small region \(A\) consisting of \(N_{A}\) qubits. The solid curves represent \(S_{2}(\rho_{A})\) of a fourteen-qubit state generated by a VQE circuit with depth \(p\), which approaches the Page entropy represented by the dashed line. For reservoir estimation at each circuit depth we use 2000 independent snapshots. The shaded region is the fluctuation of the estimated value in 10 independent estimations.

## IV Higher-dimensional and hybrid systems

Higher-dimensional systems and hybrid systems with non-identical local dimensions are of fundamental importance as a playground to reveal intriguing quantum phenomena, such as quantum steering [61; 62] and the demonstration of the contextuality-nonlocality trade-off [63; 64]. Here we demonstrate the flexibility of our scheme beyond qubit systems. The bosonic Hamiltonian for the qudit reservoir network is given by

\[\begin{split}\hat{H}=&\sum_{\langle i,j\rangle}J\big{(}\hat{a}_{i}^{\dagger}\hat{a}_{j}+\hat{a}_{j}^{\dagger}\hat{a}_{i}\big{)}+P_{1}(\hat{a}_{i}^{\dagger}+\hat{a}_{i})\\ &+P_{2}(\hat{a}_{j}^{\dagger}+\hat{a}_{j})+E_{1}\hat{a}_{i}^{\dagger}\hat{a}_{i}+E_{2}\hat{a}_{j}^{\dagger}\hat{a}_{j}\\ &+\alpha_{1}\hat{a}_{i}^{\dagger}\hat{a}_{i}^{\dagger}\hat{a}_{i}\hat{a}_{i}+\alpha_{2}\hat{a}_{j}^{\dagger}\hat{a}_{j}^{\dagger}\hat{a}_{j}\hat{a}_{j}\,,\end{split} \tag{29}\]

where the operators \(\hat{a}\) represent lowering operators of the quantum nodes (qudits). The parameters \(J\), \(P_{1,2}\), \(E_{1,2}\) and \(\alpha_{1,2}\) represent hopping, onsite driving field, energy and nonlinear strength, respectively. We consider the unitary time evolution governed by the quantum Liouville equation. For qudit systems with local dimension \(d\), a superoperator basis consists of the generalized Gell-Mann matrices and the normalized identity [65]. The readout operators are constructed in the same way as for the qubit system; the only difference is that there are \(d\) projection operators instead of two, corresponding to the \(d\) possible population numbers.
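As a concrete illustration of Eq. (29), the sketch below builds the two-node Hamiltonian for a single pair \(\langle i,j\rangle\) from lowering operators truncated to \(d\) levels, using the qutrit parameter values quoted in the Fig. 7 caption. The helper names and the finite-dimensional truncation are assumptions of this sketch, not the paper's code.

```python
import numpy as np

def lowering(d):
    """Bosonic lowering operator truncated to a d-level node."""
    return np.diag(np.sqrt(np.arange(1, d)), k=1)

def pair_hamiltonian(d, J, P1, P2, E1, E2, a1, a2):
    """Two-node qudit reservoir Hamiltonian of Eq. (29) for one pair <i,j>."""
    a = lowering(d)
    I = np.eye(d)
    ai, aj = np.kron(a, I), np.kron(I, a)          # act on node i / node j
    num = lambda x: x.conj().T @ x                  # number operator a^dag a
    H = J * (ai.conj().T @ aj + aj.conj().T @ ai)   # hopping
    H += P1 * (ai.conj().T + ai) + P2 * (aj.conj().T + aj)  # onsite driving
    H += E1 * num(ai) + E2 * num(aj)                # onsite energies
    H += a1 * ai.conj().T @ ai.conj().T @ ai @ ai   # nonlinearity, node i
    H += a2 * aj.conj().T @ aj.conj().T @ aj @ aj   # nonlinearity, node j
    return H

# Qutrit pair with the setting quoted for qutrit systems in Fig. 7.
H = pair_hamiltonian(3, J=0.9, P1=2.1, P2=1.1, E1=1.1, E2=0.4, a1=0.6, a2=0.7)
print(np.allclose(H, H.conj().T))  # Hermiticity check -> True
```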
Due to the tensor product structure of the measurements and training states, the reservoir estimation scheme can be naturally extended to hybrid systems with non-identical local dimensions, such as qubit-qutrit systems. The pair-wise reservoir setting for a pair of qudit nodes is the same as in the purely qudit case. As an application, we apply virtual distillation (VD) [66; 67; 68] to estimate the fidelity of a noisy state \(\rho_{\epsilon}\),

\[\langle|\psi\rangle\langle\psi|\rangle_{\text{VD}}=\frac{\text{Tr}(\rho_{\epsilon}^{m}|\psi\rangle\langle\psi|)}{\text{Tr}(\rho_{\epsilon}^{m})}\,, \tag{30}\]

where

\[\rho_{\epsilon}=(1-\epsilon)|\psi\rangle\langle\psi|+\epsilon\openone\,. \tag{31}\]

The estimator for \(\rho_{\epsilon}^{m}\) is chosen as

\[\frac{1}{P(N,m)}\sum^{*}\prod_{i=1}^{m}\hat{\rho}_{s_{i}}\,, \tag{32}\]

where \(\hat{\rho}_{s_{i}}\) is the single-snapshot estimator of the noisy state \(\rho_{\epsilon}\), \(P(N,m)\) is the number of \(m\)-permutations of \(N\), and \(\sum^{*}\) denotes the summation over all distinct subscripts, i.e., \(s_{1},s_{2},\ldots,s_{m}\) is an \(m\)-tuple of indices from the set \(\{1,\ldots,N\}\) with distinct entries. The denominator is estimated similarly. The estimation results are shown in Fig. 7 for the following maximally entangled qubit-qutrit and two-qutrit pairs,

\[\begin{split}&|\psi_{1}\rangle=\frac{1}{2}(|10\rangle+|12\rangle)+\frac{1}{\sqrt{2}}|01\rangle\,,\\ &|\psi_{2}\rangle=\frac{1}{\sqrt{3}}(|00\rangle+|11\rangle+|22\rangle)\,.\end{split} \tag{33}\]

## V Conclusion

We have presented a direct parameter estimation scheme, where a classical representation of quantum states is constructed with quantum reservoir networks and used for parameter estimation in the post-processing phase. Unlike existing techniques of shadow estimation that consider complex unitary ensembles or randomized measurements, our scheme explores the versatility of the quantum reservoir platform and requires only a single measurement setting. For the reservoir network, the pair-wise interacting reservoir nodes considered in our scheme result in minimal quantum hardware and training resources. The sample complexity has rarely been addressed in previous works on quantum reservoir computing. In contrast, we have established a stringent performance guarantee regarding the number of samples to be processed by the reservoir network, which is independent of the input quantum state.
Furthermore, our scheme can be naturally extended to higher-dimensional systems and hybrid systems with non-identical local dimensions. We complement the theoretical results with diverse applications.

For future research, reservoir computing is suitable for temporal pattern recognition, classification, and generation [69; 70; 34]. While we have proposed a criterion for analyzing the statistical fluctuation of quantum reservoir outputs in parameter estimation, it is important to do so in temporal information processing tasks such as temporal quantum tomography [71] and nonlinear temporal machine learning [72]. Also, it is important to further optimize the reservoir networks regarding the sample complexity and investigate whether the efficiency lower bound for local measurements [4] can be achieved with the quantum reservoir platform. Moreover, there has been extensive research conducted on the learning abilities of quantum reservoir networks in tasks ranging from quantum to real-world problems [73; 74; 75; 76]. These studies explore various network topologies and connectivities. While the pair-wise quantum reservoir construction is beneficial for parameter estimation, a general understanding of the role of topology and connectivity in quantum reservoir computing with regard to sample complexity is still lacking. Another topic is the effect of noise. There are inspiring discussions on noisy training data [77], robustness in tomographic completeness [78] and benefits of quantum noise [79]. In our scheme, the influence of time-independent system noise on \(\mathbf{\mathcal{T}}\) cancels out, as can be seen directly from Eq. (13). However, a complete discussion of the noise effect and its mitigation in quantum reservoir processing is important for future experiments.

Figure 7: The virtual distillation of maximally entangled states mixed with depolarizing noise in \(2\times 3\) and \(3\times 3\) dimensional systems. The solid curves represent the fidelity of the distilled state with \(m=2\). The shaded region is the range of fluctuation of the estimated value in 10 independent estimations. For each reservoir estimation we use 10000 independent measurement readouts. The reservoir setting we choose for qutrit systems is \(J=0.9\), \(P_{1}=2.1\), \(P_{2}=1.1\), \(E_{1}=1.1\), \(E_{2}=0.4\), \(\alpha_{1}=0.6\), \(\alpha_{2}=0.7\).

###### Acknowledgements.

We are grateful to Yanwu Gu, Zihao Li, Zhenpeng Xu and Ye-Chao Liu for helpful discussions. This work was supported by the National Natural Science Foundation of China (Grants No. 12175014 and No. 92265115) and the National Key R&D Program of China (Grant No. 2022YFA1404900). S.G. acknowledges support from the Excellent Young Scientists Fund Program (Overseas) of China, and the National Natural Science Foundation of China (Grant No. 12274034).
2304.09862
Accurate and Fast VR Eye-Tracking using Deflectometric Information
We present two methods for fast and precise eye-tracking in VR headsets. Both methods exploit deflectometric information, i.e., the specular reflection of an extended screen over the eye surface.
Jiazhang Wang, Tianfu Wang, Bingjie Xu, Oliver Cossairt, Florian Willomitzer
2023-04-13T01:11:15Z
http://arxiv.org/abs/2304.09862v1
# Accurate and Fast VR Eye-Tracking using Deflectometric Information

###### Abstract

We present two methods for fast and precise eye-tracking in VR headsets. Both methods exploit deflectometric information, i.e., the specular reflection of an extended screen over the eye surface.

1 Department of Electrical and Computer Engineering, Northwestern University, Evanston, IL, 60208
2 Department of Computer Science, ETH Zurich, Zurich, Switzerland, 8092
3 Department of Computer Science, Northwestern University, Evanston, IL, 60208
4 Wyant College of Optical Sciences, University of Arizona, Tucson, AZ, 85721
* These two authors contributed equally
* [email protected]

## 1 Introduction

A fast and accurate solution to eye-tracking is vital for many functions in Virtual Reality (VR) headsets, including foveated rendering, virtual avatar interaction, or increasing the viewing comfort. Most state-of-the-art eye-tracking methods either exploit 2D features detected from 2D eye images ("image-based methods"), or use sparse reflections of a few point light sources at the eye surface ("reflection-based methods"). A prominent example of the latter is "glint tracking", which samples the eye surface at \(\sim\) 10-15 point source reflections. For those state-of-the-art methods, the density of measured surface points (2D features for image-based methods, point source reflections for glint tracking) is relatively low, and the acquired 3D information about the eye surface is limited [1, 2].

In [1] we introduced a novel concept for eye tracking in VR headsets that utilizes Deflectometry [3] to estimate the gaze direction: The specular reflection of an extended screen displaying a known pattern is observed over the eye surface. By observing the deformation of the pattern in the camera image, dense 3D information about the eye surface (such as surface shape and normals) can be extracted, which is then used to estimate the gaze direction. The acquired data density is significantly higher than the density of the sparse methods discussed above. Factors of 1000X and higher are easily achievable. We refer to [1] for more information. In this contribution, we present two novel methods based on our original idea described above. One method utilizes single-shot stereo Deflectometry for fast and precise eye surface measurement; the other method uses an optimization-based inverse rendering approach to estimate the gaze direction from the captured deflectometric information [2]. For both methods, we show quantitative gaze evaluation results from real-world experiments.

## 2 Methods and Results

**Single-shot stereo deflectometry approach:** This method uses a crossed fringe pattern on the screen to measure the surface normal map of the eye surface. To establish the required correspondence between screen and camera, the phase information for both fringe directions is evaluated _in single-shot_ via a 2D continuous wavelet transform approach [4]. The second camera is used to solve the Deflectometry normal-depth ambiguity problem [5]. This allows for a unique reconstruction of the shape _and_ normal map of the eye surface in single-shot. To estimate the gaze direction, the calculated surface normals are traced back towards the eye center. Due to the spherical shape of the eye's cornea and sclera, but their vastly different radii, the back-traced normals intersect at two different points inside the eye (the cornea and sclera centers). Connecting these two points delivers the optical axis of the eye and the gaze direction. We note that this approach also works if cornea and sclera are not perfectly spherical but rotationally symmetric. In this case, all back-traced normals intersect along a line that coincides with the optical axis of the eye.

Figure 1: **Eye tracking using single-shot stereo Deflectometry. (a) Prototype setup. (b,c) Captured camera images of realistic eye model and real human eye, overlaid with the calculated surface normals and estimated gaze direction.**
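The back-tracing step reduces to a classical least-squares problem: find the point closest to a bundle of 3D rays defined by the measured surface points and normals. A minimal sketch under the assumption that ray origins and directions are already available (the function name is our own):

```python
import numpy as np

def nearest_point_to_rays(origins, directions):
    """Least-squares point closest to a bundle of 3D rays: solve
    sum_i (I - d_i d_i^T) p = sum_i (I - d_i d_i^T) o_i."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        M = np.eye(3) - np.outer(d, d)  # projector orthogonal to the ray
        A += M
        b += M @ o
    return np.linalg.solve(A, b)

# Toy check: rays through random surface points, all aimed at (0, 0, 5).
rng = np.random.default_rng(2)
pts = rng.normal(size=(50, 3))
dirs = np.array([0.0, 0.0, 5.0]) - pts
print(nearest_point_to_rays(pts, dirs))  # approx. [0, 0, 5]
```

Applying this separately to the normals classified as cornea and sclera yields the two center estimates whose connecting line gives the optical axis described above.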
We note that this approach also works if cornea and sclera are not perfectly spherical but rotational Figure 1: **Eye tracking using single-shot stereo Deflectometry.(a) Prototype setup. (b,c) Captured camera images of realistic eye model and real human eye, overlayed with the calculated surface normals and estimated gaze direction.** symmetric. In this case, all back-traced normals intersect along a line that coincides with the optical axis of the eye. We evaluate this method by performing a quantitative real-world experiment on a realistic eye model mounted on a rotation stage (see setup in Fig. 0(a)). We rotate the eye model to different rotation positions \(a\) (\(-3^{\circ}\), \(0^{\circ}\), \(3^{\circ}\), and \(6^{\circ}\)) and evaluate the gaze angle \(\theta_{a}\) as described above. Since the absolute gaze angle is unknown, we calculate the _relative gaze angle_ between two rotation positions and compare the result with the angle we rotated the eye model ("ground truth"). We acquired 20 measurements for each rotation position (80 measurements in total), while the eye model was always moved/rotated before a measurement was taken. Eventually, we evaluated the mean relative error \(\epsilon_{0^{\circ}}\) at the rotation position \(0^{\circ}\) w.r.t. all other rotation positions \(a\): \(\epsilon_{0^{\circ}}=||\tilde{\theta}_{a}-\tilde{\theta_{0^{\circ}}}||-|a-0^{ \circ}||\). The results are shown in Tab.1 left. It can be seen that \(\epsilon_{0^{\circ}}\) is below \(0.35^{\circ}\) for all measurements. To show the feasibility of our method under real conditions, we also performed a first qualitative single-shot measurement on a real living human eye. The result is shown in Fig.1.(c). It can be seen that the measurement on the human eye shows the same properties as the measurement on the model eye: The evaluated surface normals intersect at two points and the evaluated gaze vector is qualitatively correct. A procedure for the quantitative evaluation of the absolute gaze direction is still current work in progress. **Optimization-based inverse rendering approach:** This method uses the known geometry of our calibrated [6] deflectometric setup (see Fig.2(a)) to develop a PyTorch3D-based differentiable rendering pipeline that simulates a virtual computer-generated (CG) eye model under screen illumination (Fig.2.(b)). Eventually, the images and screen-camera correspondence information of the _real_ eye measurement is used to optimize the CG eye's rotation, translation, and shape parameters with our renderer via gradient descent. This is done until the simulated setup with the CG eye produces images and correspondences that closely match the real measurements, and the gaze direction of the CG eye is eventually used as an estimate of the real eye's gaze direction. The method does not require a second camera. Moreover, it does not require a specific screen pattern and can even work with ordinary video frames of the main VR screen itself - also in single-shot. We refer to [2] for more information. We evaluated our optimization-based method with the same experiment described above at different rotation positions (\(-4^{\circ},-2^{\circ}\), \(0^{\circ}\), \(2^{\circ}\), and \(4^{\circ}\)), using phase shifted sinusoids as screen pattern. The evaluated mean relative error \(\epsilon_{0^{\circ}}\) at the rotation position \(0^{\circ}\) w.r.t. all other rotation positions \(a\) is given in Tab.1 right. All evaluated errors \(\epsilon_{0^{\circ}}\) are below \(0.5^{\circ}\)
2303.04339
Learning the Finer Things: Bayesian Structure Learning at the Instantiation Level
Successful machine learning methods require a trade-off between memorization and generalization. Too much memorization and the model cannot generalize to unobserved examples. Too much over-generalization and we risk under-fitting the data. While we commonly measure their performance through cross validation and accuracy metrics, how should these algorithms cope in domains that are extremely under-determined where accuracy is always unsatisfactory? We present a novel probabilistic graphical model structure learning approach that can learn, generalize and explain in these elusive domains by operating at the random variable instantiation level. Using Minimum Description Length (MDL) analysis, we propose a new decomposition of the learning problem over all training exemplars, fusing together minimal entropy inferences to construct a final knowledge base. By leveraging Bayesian Knowledge Bases (BKBs), a framework that operates at the instantiation level and inherently subsumes Bayesian Networks (BNs), we develop both a theoretical MDL score and associated structure learning algorithm that demonstrates significant improvements over learned BNs on 40 benchmark datasets. Further, our algorithm incorporates recent off-the-shelf DAG learning techniques enabling tractable results even on large problems. We then demonstrate the utility of our approach in a significantly under-determined domain by learning gene regulatory networks on breast cancer gene mutational data available from The Cancer Genome Atlas (TCGA).
Chase Yakaboski, Eugene Santos Jr
2023-03-08T02:31:49Z
http://arxiv.org/abs/2303.04339v1
# Learning the Finer Things: Bayesian Structure Learning at the Instantiation Level

###### Abstract

Successful machine learning methods require a trade-off between memorization and generalization. Too much memorization and the model cannot generalize to unobserved examples. Too much over-generalization and we risk under-fitting the data. While we commonly measure their performance through cross validation and accuracy metrics, how should these algorithms cope in domains that are extremely under-determined, where accuracy is always unsatisfactory? We present a novel probabilistic graphical model structure learning approach that can learn, generalize and explain in these elusive domains by operating at the random variable instantiation level. Using Minimum Description Length (MDL) analysis, we propose a new decomposition of the learning problem over all training exemplars, fusing together minimal entropy inferences to construct a final knowledge base. By leveraging Bayesian Knowledge Bases (BKBs), a framework that operates at the instantiation level and inherently subsumes Bayesian Networks (BNs), we develop both a theoretical MDL score and an associated structure learning algorithm that demonstrates significant improvements over learned BNs on 40 benchmark datasets. Further, our algorithm incorporates recent off-the-shelf DAG learning techniques, enabling tractable results even on large problems. We then demonstrate the utility of our approach in a significantly under-determined domain by learning gene regulatory networks on breast cancer gene mutational data available from The Cancer Genome Atlas (TCGA).

## 1 Introduction

Since its popularization by Pearl (1986), learning Bayesian Networks (BNs) has been a steadfast research area for 40 years. It has become an important paradigm for modeling and reasoning under uncertainty and has seen applications from stock market prediction (Malagrino, Roman, and Monteiro, 2018) and medical diagnosis (Shih, Choi, and Darwiche, 2018) to Gene Regulatory Networks (GRNs) (Sauta et al., 2020). Despite Bayesian Network Structure Learning (BNSL) being NP-hard (Chickering, Heckerman, and Meek, 2004) and even simpler structures like polytrees being NP-hard(er) (Dasgupta, 1999), new constraints (Gruttemeier and Komusiewicz, 2020), improvements (Trosser, de Givry, and Katsirelos, 2021), and scalings (Scanagatta et al., 2015) are presented at major AI conferences every year. This is because BNs and affiliated structures like Markov (Koller and Friedman, 2009) and Dependency Networks (DNs) (Heckerman et al., 2000) offer a quality that other methods such as deep learning do not: explainability (Dosilovic, Brcic, and Hlupic, 2018; Burkart and Huber, 2021). The optimization that occurs in Probabilistic Graphical Model (PGM) structure learning is

\[G^{*}=\operatorname*{argmax}_{G}F(G,D)\] \[\text{subject to }G\in\Omega\]

where \(D\) is a database, \(G\) is a graph structure such as a BN, \(F\) is a scoring function that yields the goodness of fit of the structure \(G\), and \(\Omega\) is the set of allowed structures for \(G\); for BNs this would be the space of all possible Directed Acyclic Graphs (DAGs). Scoring functions are essential to the structure learning problem and should have a theoretical justification in information theory or otherwise.
For instance, the most common scoring functions such as the Bayesian Information Criterion (BIC) (Schwarz, 1978), Minimum Description Length (MDL) (Rissanen, 1978), and the Akaike Information Criterion (Akaike, 1974) are all based on information-theoretic criteria or can be viewed from this perspective. While we will spend part of this paper theoretically justifying our model scoring approach, our goal is _not_ to present a better scoring function. Instead, our goal is to illustrate that no matter the scoring function or learning algorithm, an over-generalization is encountered when modeling at the Random Variable (RV) level. By operating at the RV level, models force a complete distribution, as is the case with BNs. While a complete distribution is often desired, this has an unintended over-generalization consequence, particularly in under-determined domains. This phenomenon even occurs in deep learning systems, and is generally referred to as fooling (Szegedy et al., 2014; Nguyen, Yosinski, and Clune, 2015; Kardan and Stanley, 2018). However, we will limit our scope to PGMs as our end goal is to analyze and/or hypothesize structural dependency relationships. Given this goal, such over-generalization could yield non-optimal structures, biasing analysis and derived hypotheses and leading to misguided conclusions. To illustrate this over-generalization and provide intuition for learning at the RV instantiation level, we provide a motivating example taken from real-world data.

**Motivating Example.** It is well known in cancer research that the genes TP53 and TTN have somatic mutations that affect chemotherapy responses [20]. To demonstrate a real-world effect of BN over-generalization, we learned a simple BN for this interaction over the TCGA [17] mutational dataset as seen in Figure 1(a). This BN encodes four possible worlds represented by distinctly styled arrows in Figure 1(b). For this example we have reduced the state space of each gene to just mutated or not mutated. Assume our goal is to minimize the entropy or uncertainty of each world or explanation. Then the conditional entropy of the model is the sum over each world's conditional entropy, which is inherently direction dependent. Since there exist many possible world edge configurations (RV instantiation dependencies), there may exist a better set of edges than those induced by the BN. Figure 1(c) shows this is true and illustrates the best collection of minimal entropy inferences for this example.

**Contributions.** To address the over-generalization described, we develop a structure learning algorithm leveraging the Bayesian Knowledge Base (BKB) framework as it inherently operates on the RV instantiation level. We accomplish this by detailing a necessary scoring metric to rank BKB models based on an MDL analysis and show theoretically that our MDL score takes over-generalization into account. Leveraging this theoretical result, we then develop our BKB Structure Learning (BKBSL) algorithm to minimize MDL and demonstrate empirically both competitive accuracy and better data fit compared to learned BNs. Further, we show that our algorithm can utilize existing optimization frameworks for DAG learning, bringing BKBSL into the realm of these well-studied off-the-shelf methods. Lastly, we conclude by utilizing a learned BKB to explain possible gene associations over TCGA breast cancer data.
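To make the motivating example concrete, the sketch below computes the instantiated conditional entropy of a single world under both edge directions from a binary mutation table. The column names and random data are stand-ins for the TCGA dataset, and the entropy term matches the per-world quantity used later in the paper; this is an illustration, not the authors' code.

```python
import numpy as np
import pandas as pd

def inst_cond_entropy(df, x, xv, parent, pv):
    """Instantiated conditional entropy for the world (X=xv | Pa=pv):
    h = -p(x, pa) * log2( p(x, pa) / p(pa) )."""
    p_joint = ((df[x] == xv) & (df[parent] == pv)).mean()
    p_pa = (df[parent] == pv).mean()
    if p_joint == 0 or p_pa == 0:
        return 0.0
    return -p_joint * np.log2(p_joint / p_pa)

# Hypothetical binary mutation table (1 = mutated) standing in for TCGA data.
rng = np.random.default_rng(3)
df = pd.DataFrame({"TP53": rng.integers(0, 2, 500),
                   "TTN": rng.integers(0, 2, 500)})
# Compare both edge directions for the world (TP53=1, TTN=1).
print(inst_cond_entropy(df, "TTN", 1, "TP53", 1),
      inst_cond_entropy(df, "TP53", 1, "TTN", 1))
```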
## 2 Related Work and Preliminaries

As the MDL principle will be our guiding force in both theoretical and empirical analysis, we provide a brief review of its applications to directed PGMs, e.g. Bayesian Networks, as these models are most applicable to our study of BKBs. Lam and Bacchus [19] first presented an MDL learning approach for BNs based on a heuristic search method seeking to spend equal time between simple and more complex BN structures. This was accomplished by extending Chow and Liu's [19] result on recovering polytrees to general BNs via Kullback-Leibler cross entropy minimization, allowing them to develop a weighting function over nodes. Their approach demonstrated that minimizing the MDL of BNs performs an intuitive trade-off between model accuracy and complexity. In their work, they also alluded to a potential subjectivity in choosing a model encoding strategy, leading to research on improved MDL scores for BNs [23, 24]. Hansen and Yu [20] detail a complete review of various MDL formulations.

Empirical evaluation of MDL as a scoring function for BN learning has also been well studied. Yang and Chang [20] analyzed the performance of five scoring functions: uniform prior score metric (UPSM), conditional uniform prior score metric (CUPSM), Dirichlet prior score metric (DPSM), likelihood-equivalence Bayesian Dirichlet score metric (BDeu), and MDL. They showed that MDL was able to correctly identify ground-truth network structures from a variety of possible candidates, yielding the highest discrimination ability. Liu et al. [20] also performed empirical BN learning analysis over different scoring functions, namely: MDL, Akaike's information criterion (AIC), BDeu, and factorized normalized maximum likelihood (fNML). Their approach tested the recovery accuracy of each scoring method over various gold standard networks, as compared to the random networks used by Yang and Chang. Their results confirm the utility of MDL as it performed best in recovering the optimal networks when sufficient data was given. To our knowledge there has been no work in structure learning on the RV instantiation level, likely due to the desire to learn complete distributions. Further, we have limited our comparisons to BNs as they are a predominant model in the literature and provide a point of comparison for judging empirical results.

### Bayesian Networks

A BN is a DAG \(G=(V,E)\) that represents the factorized joint probability distribution over random variables \(X=(X_{1},\ldots,X_{n})\) of the form:

\[\text{Pr}(X)=\prod_{i}^{n}P(X_{i}\mid\Pi(X_{i})) \tag{1}\]

such that \(\Pi(X_{i})\), or more concisely \(\pi_{i}\), are the structural parents of the random variable \(X_{i}\) according to \(G\), each node \(V_{i}\in V\) corresponds directly to a random variable \(X_{i}\), and \(n\) is the number of random variables in \(X\). As the BN MDL formulation is well known in the literature, we point the reader to Appendix D.1 for a review.

### Bayesian Knowledge Bases

Santos, Jr. and Santos [19] developed the BKB framework to generalize BNs to the random variable instantiation level, and to offer knowledge engineers a semantically sound and intuitive knowledge representation. BKBs unify "if-then" rules with probability theory to provide several modeling benefits compared to BNs, fuzzy logics, etc. First, BKBs do not require complete accessibility and can even leverage incomplete information to develop potentially more representative models.
Second, since BKBs operate at the instantiation level, they can handle various types of cyclic knowledge. Lastly, BKBs have both robust tuning algorithms to assist in model correction/validation [21, 22] and information fusion algorithms to incorporate knowledge efficiently and soundly from disparate knowledge sources [23, 24].

BKBs consist of two components: instantiation nodes (I-nodes), which represent instantiations of random variables of the form \(X_{i}=x_{ik}\) where \(k\) is the \(k\)-th state of \(X_{i}\), and support nodes (S-nodes), which represent the conditional probabilities between I-node relationships of the form \(X_{i}=x_{ik}\to q=0.87\to X_{j}=x_{jl}\). The collection of these (in)dependencies describes the BKB correlation graph. For a precise definition and a graphical depiction see Appendix D. While BKBs can handle incompleteness and various forms of cyclicity, it is necessary that all S-nodes in a BKB obey mutual exclusivity (mutex) and associated probability semantics. Mutex guarantees that mutually exclusive events cannot be true at the same time. Concretely, we say that two sets of I-nodes, \(I_{1}\) and \(I_{2}\), are mutex if there exists an I-node \(X_{i}=x_{ik_{1}}\in I_{1}\) and \(X_{i}=x_{ik_{2}}\in I_{2}\) such that \(k_{1}\neq k_{2}\). We say that two S-nodes are mutex if the two I-node parent sets of the S-nodes are mutex. BKBs use a weight function to map S-nodes to the conditional probabilities associated with the conditional dependence relationships they describe. This weight function is analogous to the conditional probability tables (CPTs) of BNs, except that BKBs do not require complete information. This weight function along with the correlation graph defines the BKB.

**Bayesian Knowledge Base.** A Bayesian Knowledge Base (BKB) is a tuple \(K=(G,w)\) where \(G\) is a correlation graph \(G=(I\cup S,E)\) where \(I\) and \(S\) are the sets of I- and S-nodes, \(E\subset\{I\times S\}\cup\{S\times I\}\), and \(w:S\rightarrow[0,1]\) is a weight function, such that the following properties are met:

1. \(\forall q\in S\), the set of all incident I-nodes to \(q\), denoted \(\Pi(q)\), contains at most one instantiation of each random variable.
2. For distinct S-nodes \(q_{1},q_{2}\in S\) that support the same I-node, denoted \(Head_{G}(q_{i})\), the sets \(\Pi(q_{1})\) and \(\Pi(q_{2})\) must be mutex.
3. For any \(Q\subseteq S\) such that (i) \(Head_{G}(q_{1})\) and \(Head_{G}(q_{2})\) are mutex, and (ii) \(\Pi(q_{1})\) and \(\Pi(q_{2})\) are not mutex, for all \(q_{1},q_{2}\in Q\), we have \(\sum_{q\in Q}w(q)\leq 1\).

The probability of an inference (world) in a BKB, denoted \(\tau\), is the product of all S-node weights \(w(q)\) consistent with that world (Appendix D.2). The joint probability distribution of a BKB is the sum over all inference probabilities consistent with an evidence set \(\Theta\), represented as \(I_{\Theta}\), and given by

\[P(\Theta)=\sum_{\tau\in I_{\Theta}}\prod_{q\in\tau}w(q) \tag{2}\]

## 3 The BKB Minimum Description Length

To construct a BKB structure learning algorithm, we first need to define a scoring function that can rank BKB structures as well as provide a theoretical justification for its utility. For these reasons we focus our attention on a Minimum Description Length (MDL) score as it is a well studied tenet of learning theory [10]. The idea is that the best model given the data should minimize both (1) the encoding length of the model and (2) the length of encoding the data given the model.
This is akin to applying Occam's Razor to model selection, i.e., choose the simplest model that still describes the data reasonably well.

### Encoding the BKB

The minimum length needed to encode a BKB is related directly to the number of S-nodes modeled by the BKB. The encoding of each S-node will contain a probability value and a parent set of I-nodes. For a problem with \(n\) RVs such that each RV can have \(r_{i}\) instantiations, to encode all I-nodes we would need \(\log_{2}(m)\) bits, where \(m=\prod_{i}r_{i}\). The general BKB MDL is

\[\sum_{q\in S}\left((|\Pi(q)|+1)\log_{2}(m)+\delta\right)-N\sum_{\tau}p_{\tau}\log_{2}(q_{\tau}) \tag{3}\]

where \(\delta\) is the number of bits needed to store the probability value and the first term is the BKB model encoding length. From Equation 3 it is clear that we incur a modeling cost for the BKB based on the finer granularity of the model. This cost is derived in Appendix C.1. Therefore, if we know that the distribution factorizes to a BN, there is no reason to use a BKB. However, rarely in practice do we know the true distribution of the data, and in most cases the data we learn from is incomplete. This alludes to a natural rationale for using a BKB that will be theoretically justified.

Figure 1: (a) A simple learned BN over the TCGA gene mutation dataset using GOBNILP [10] where the variable states are either mutated or not mutated. (b) A graph of all corresponding worlds represented in (a) delineated by different line styles. (c) A better orientation of intra-world dependency relationships that leads to a lower total conditional entropy. All values are conditional entropies calculated from the TCGA gene mutational dataset.

### Encoding the data with a BKB

The task of encoding dataset \(D\) is centered on learning a joint distribution over random variables \(X=(X_{1},\ldots,X_{n})\). As we are focused on the discrete case, each variable \(X_{i}\) will have \(r_{i}\) discrete states, and every unique choice of random variable instantiations defines a complete world \(\tau\) of the ground-truth distribution of the data and is assigned an associated probability value, \(p_{\tau}\). We will make several standard statistical assumptions. We assume that each data instance in \(D\) is a complete world such that each data instance specifies a value for every random variable in \(X\)¹. We assume that each data instance is a result of independent random trials and expect that each world would appear in the dataset with a frequency \(\approx Np_{\tau}\).

Footnote 1: Only for simplicity do we make the assumption that a data instance must be complete, i.e., not contain any missing values.

The main theme of the MDL metric is to efficiently compress \(N\) data instances as a binary string such that we minimize the string's encoding length. There are many different coding and compression algorithms we can use, but since we simply care about comparing different MDLs as a metric, we can limit our focus to just symbol codes [12]. Symbol codes assign a unique binary string to each world in the dataset. For instance, the world \(\{X_{1}=1,X_{2}=1,X_{3}=0\}\) might be assigned the symbol 0001. We can then encode the dataset as just the concatenation of all these world symbols and judge our compression effectiveness by calculating the length of this final binary string.
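To make this concrete, the sketch below builds a symbol code from the empirical world frequencies (using Huffman's classic construction, which the next paragraph identifies as optimal) and totals the resulting encoding length of the dataset. The data and function names are illustrative assumptions.

```python
import heapq
from collections import Counter

def huffman_code_lengths(worlds):
    """Optimal symbol-code lengths for the worlds in a dataset, built with
    Huffman's algorithm over their empirical frequencies."""
    counts = Counter(worlds)
    heap = [(c, i, {w: 0}) for i, (w, c) in enumerate(counts.items())]
    heapq.heapify(heap)
    uid = len(heap)
    while len(heap) > 1:  # repeatedly merge the two least frequent subtrees
        c1, _, m1 = heapq.heappop(heap)
        c2, _, m2 = heapq.heappop(heap)
        merged = {w: depth + 1 for w, depth in {**m1, **m2}.items()}
        heapq.heappush(heap, (c1 + c2, uid, merged))
        uid += 1
    return heap[0][2], counts

# Each data instance is a complete world (a tuple of RV instantiations).
data = [(1, 1, 0)] * 6 + [(1, 0, 0)] * 3 + [(0, 1, 1)] * 1
lengths, counts = huffman_code_lengths(data)
total_bits = sum(counts[w] * lengths[w] for w in counts)
print(lengths, total_bits)  # frequent worlds receive shorter codewords
```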
Research in information theory has proved that for symbol codes it is possible to minimize the length of this encoded data string by leveraging the probability/frequency of each world symbol occurring. Specifically, Huffman's algorithm generates optimal Huffman codes [12], i.e., optimal symbol code mappings, that yield a minimum length. The key intuition behind Huffman codes is that we should give shorter code words to more frequently occurring worlds and longer ones to less probable worlds. Lam and Bacchus (1994) proved that the encoding length of the data is a monotonically increasing function of the cross-entropy between the distribution defined by the model and the true distribution. Therefore, if we have a true distribution \(P\) and a model distribution \(Q\) over the same set of worlds, \(\tau_{1},\ldots,\tau_{t}\), where each world \(\tau_{i}\) is assigned the probability \(p_{i}\) by \(P\) and \(q_{i}\) by \(Q\), then the cross entropy between these distributions is

\[C(P,Q)=\sum_{i=1}^{t}p_{i}\log_{2}\frac{p_{i}}{q_{i}}=\sum_{i=1}^{t}p_{i}(\log_{2}p_{i}-\log_{2}q_{i}) \tag{4}\]

Calculating Equation 4 is not appealing as the number of worlds is combinatorial in the number of variables. Chow and Liu (1968) developed a famous simplification of Equation 4 as just a local computation over low-order marginals when \(Q\) has a tree factorization. Lam and Bacchus (1994) extended their result to models that have a general DAG structure. Their main result concludes that \(C(P,Q)\) is a monotonically decreasing function of the sum of each random variable's mutual information with its parents, \(I(X_{i};\Pi(X_{i}))\). Their exact result is restated in Appendix D, Theorem 3. Further generalizing these results to the instantiation level, we can now show that an optimally learned BKB can encode the distribution as well as or better than a BN.

Consider again the fundamental MDL problem of learning the dataset encoding. Equation 4 says that we need to minimize the total cross-entropy between \(P\) and \(Q\). However, in terms of data encoding, we only need to minimize the difference between each \(p_{i}\) and \(q_{i}\) for unique worlds that are actually in \(D\). In this sense, our database encoding doesn't care about the worlds that aren't in the dataset, but which a model like a BN naturally defines and generalizes over. Therefore, BKBs' handling of incompleteness gives us an opportunity to more tightly fit the data we actually know. Consider the following cross-entropy

\[C(P,Q)=\sum_{i=1}^{d}p_{i}(\log_{2}p_{i}-\log_{2}q_{i})+\sum_{i=d+1}^{t}p_{i}(\log_{2}p_{i}-\log_{2}q_{i}) \tag{5}\]

where worlds \(\tau_{1},\ldots,\tau_{d}\) are represented by the unique exemplars in \(D\) that we hope to encode, i.e., \(\{\tau_{1},\ldots,\tau_{d}\}=\{d_{1},d_{2},d_{3},\ldots\}_{\neq}\subseteq D\), and worlds \(\tau_{d+1},\ldots,\tau_{t}\) are worlds that our model induces. In terms of encoding length we can narrow our focus to only considering worlds present in \(D\). As Lam and Bacchus (1994) proved that the encoding length of the data is a monotonically increasing function of the cross-entropy, it is trivial to prove the following corollary.

**Corollary 1**.: _Define the cross-entropy \(C_{D}(P,Q)=\sum_{i=1}^{d}p_{i}(\log_{2}p_{i}-\log_{2}q_{i})\) between two distributions P and Q over the same set of worlds \(\tau_{1},\ldots,\tau_{d}\) s.t. these worlds must be included in a dataset D.
Then the encoding length of the data \(D\) is a monotonically increasing function of \(C_{D}\)._

Combining Corollary 1 with Lam and Bacchus' mutual information theorem (Appendix D, Theorem 3), we arrive at our main theoretical result.

**Theorem 1**.: \(C_{D}(P,Q)\) _is a monotonically decreasing function of_

\[\sum_{\tau\in D_{\neq}}\sum_{i=1}^{n}p(x_{i\tau},\pi_{i\tau})\log_{2}\frac{p(x_{i\tau},\pi_{i\tau})}{p(x_{i\tau})p(\pi_{i\tau})} \tag{6}\]

_where \(x_{i\tau}\) is the instantiation of random variable \(X_{i}\) determined by data instance \(\tau\), \(\pi_{i\tau}\) is the parent set instantiation of random variable \(X_{i}\) governed by \(\tau\), and \(D_{\neq}\) is the set of unique data instances (worlds) represented in \(D\). Therefore, \(C_{D}(P,Q)\) will be minimized when Equation 6 is maximized._

We leave the proof of this theorem to Appendix B.1 as the intuition is fairly straightforward from Lam and Bacchus' theorem (Appendix D, Theorem 3). We have established that the encoding length of the data is an increasing function solely of \(C_{D}\) and that maximizing Equation 6 minimizes \(C_{D}\) and thereby the encoding length. With these results, we can deduce the existence of a theoretical BKB that will have an equal or better data encoding length than that of the induced worlds of a BN given \(D\).

**Theorem 2**.: _Given a BN \(G\) learned over a dataset \(D\) so as to maximize the weight \(W_{G}=\sum_{i}I(X_{i};\Pi(X_{i}))\) given the structure \(G\), there exists a BKB \(K\) with a weight \(W_{K}\) according to Equation 6 such that \(W_{K}\geq W_{G}\)._

We defer a detailed proof of Theorem 2 to Appendix B.2 and provide a more concise proof sketch. The key insight is that any BN \(G\) can be transformed into a corresponding BKB \(K_{G}\) as BKBs subsume BNs. We are only interested in the data encoding length, which can now be calculated over \(K_{G}\) by summing over the complete worlds represented in \(G\). Consider a single random variable \(X_{i}\) and its associated parent set \(\Pi(X_{i})\). For each instantiation of \(X_{i}\) and \(\Pi(X_{i})\) there will be an associated S-node created in \(K_{G}\) with an instantiated weight according to Equation 6. Since the choice of parent set for each random variable instantiation in \(\Pi(X_{i})\cup\{X_{i}\}\) is governed by \(G\), we don't consider other S-node configurations of the same instantiated random variable set that may have greater weight. The BN structure constrains our S-node structures. Therefore, if we start with an optimal BN, transform it into a BKB \(K_{G}\), and analyze every permutation of each S-node's possible configurations, taking the permutation that maximizes the instantiated weight, we will end up with a BKB \(K\) with the same number of S-nodes that has a total weight equal to or greater than that of the BN representation \(K_{G}\). This result allows us to also state the following corollary, based on the fact that \(I(X_{i};\Pi(X_{i}))=H(X_{i})-H(X_{i}|\Pi(X_{i}))\).

**Corollary 2**.: _Since \(I(X_{i};\Pi(X_{i}))=H(X_{i})-H(X_{i}|\Pi(X_{i}))\geq 0\).
We can maximize Equation 6 by minimizing the instantiated conditional entropy \(H(x_{i\tau}|\pi_{i\tau})=-p(x_{i\tau},\pi_{i\tau})\log_{2}\frac{p(x_{i\tau},\pi_{i\tau})}{p(\pi_{i\tau})}\)._

## 4 BKB Structure Learning

Theorem 1 dictates that for every random variable instantiation \(x_{i\tau}\) in a data instance (world) \(\tau\in D_{\neq}\), where \(D_{\neq}\) is the set of unique data instances in \(D\), we should assign an instantiated parent set \(\pi_{i\tau}\) such that the instantiated conditional entropy is minimized according to Corollary 2. The key insight of our structure learning approach is that we can decompose our learning over the worlds represented in the data. In each world, we will have at most a single instantiation of each RV and our goal is to select a set of S-nodes with a structure that minimizes the instantiated conditional entropy for that world. We can view each world in the data as a separate complete inference, which forms an acyclic subgraph of its encompassing BKB. A precise definition of a BKB inference can be found in Appendix D. Our structure learning algorithm reduces to finding a directed acyclic inference graph for each world that minimizes \(\sum_{\tau}\sum_{i}H(x_{i\tau}|\pi_{i\tau})\) with \(H(x_{i\tau}|\pi_{i\tau})=-p(x_{i\tau},\pi_{i\tau})\log_{2}\frac{p(x_{i\tau},\pi_{i\tau})}{p(\pi_{i\tau})}\). Further, we can use any off-the-shelf DAG learning algorithm to accomplish this step so long as our scoring function inherently minimizes instantiated conditional entropy and BKB encoding length. There have been significant advancements in the field of BN and DAG learning and we make no attempt at covering all such procedures. Instead we will focus on the state-of-the-art exact BN (DAG) solver GOBNILP (Cussens, Haws, and Studeny, 2015; Cussens, Haws, and Studeny, 2012).

Upon learning each minimal entropy inference, we then need a method for merging this knowledge together that is semantically sound. A standard union-type operation will not generally be supported, as the unioned BKB would likely incur many mutex violations, as seen in Appendix Figure 3(a). Instead, we can employ a well-studied BKB fusion (Santos, Wilkinson, and Santos, 2011; Yakaboski and Santos, 2021) algorithm that supports the fusion of an arbitrary number of BKB Fragments (BKFs) by attaching source I-nodes to every S-node corresponding to the data instance from which the inference graph originated. A graphical example of this approach is depicted in Appendix Figure 3(b), along with additional information regarding BKB fusion in Appendix D.3. This procedure ensures that no mutual exclusion violations are present in the fused BKB, maintaining a consistent probability distribution over the data and leading to model generalization. Appendix D.3 provides a detailed explanation of generalization in fused BKBs.

Aside from forming a mutually exclusive BKB, fusion also presents us with another degree of freedom during learning. If each data instance was generated by an i.i.d. process, it is natural to assume a normalized probability over all source nodes. However, many processes do not generate truly i.i.d. or representative samples. Therefore, if we view these source S-nodes as reliabilities that can be tuned, we may be able to correct errors in higher order inference calculations that arise due to under-fitting or over-generalization. We leave such analysis to future work. Combining each of the steps presented so far, we outline our general BKB structure learning procedure in Algorithm 1.
```
1: \(K\leftarrow\emptyset\)
2: for \(\tau\in D_{\neq}\) do
3:     \(G_{\tau}\leftarrow f(\tau,R,\Theta)\)
4:     \(K\leftarrow K\cup\{G_{\tau}\}\)
5: end for
6: return BKB-Fusion\((K,R)\)
```

**Algorithm 1**: BKB Structure Learning

## 5 Empirical Results

To demonstrate the utility of both our proposed algorithm as well as our learned BKB models, we conducted 40 experiments on benchmark datasets comparing BKBSL and BNSL in terms of MDL and complexity performance. We then conducted 22 cross validation classification experiments to compare accuracy performance with learned BNs, as well as a use-case studying the under-determined bioinformatics domain of structural dependency analysis among single-nucleotide polymorphisms (SNPs) in breast cancer.

### Benchmark Analysis

When comparing MDL between our learned BKBs and BNs, we are only concerned with comparing the _data_ encoding length, as the model encoding length is only used to penalize more complex models. Our _data MDL_ results in Appendix Table 1 demonstrate that a BKB learned using our BKBSL algorithm finds a tighter data fit than the best BN in all 40 datasets. Intuitively, this is because the BN must generalize away from potentially good instantiated scores in favor of the entire random variable score. Our MDL experiments also demonstrate a practical strength of BKBSL over BNSL related to the number of calls to a joint probability calculator or estimator. In order to calculate the necessary scores for an exact DAG learning algorithm like GOBNILP, we needed to calculate empirical joint probabilities from each dataset. For all experiments we tracked the number of unique calls to this function by our BKBSL algorithm and the traditional BNSL algorithm. Since BNSL operates at the RV level, it had to calculate all joint probabilities governed by a given parent set configuration. However, BKBSL did not need to calculate the full CPTs, as it operates at the RV instantiation level and decomposes over each data instance, reducing the number of calls to this calculator. We feel that this is a more representative complexity performance metric as it is agnostic to equipment configurations. This effect is detailed in Appendix Table 1, and we can see a strong correlation between performance savings over BNs and the number of features (Pearson \(r=-0.5994\), \(p\text{-value}=4.363\times 10^{-5}\)) as well as the number of I-nodes (\(r=-0.4916\), \(p\text{-value}=0.0013\)) in the dataset. All learned BKBs and BNs are hosted on GitHub and can be viewed within a Jupyter Notebook for easier visualization.

To finalize our BKBSL benchmark analysis, we performed accuracy comparisons between our learned BKBs and traditionally learned BNs using GOBNILP and standard MDL scoring. We performed a 10-fold classification cross validation on a subset of only 22 datasets due to the increased learning and reasoning time incurred by running cross validation analysis. We can see from Appendix Table 2 that our BKBSL algorithm is very competitive with BNs in terms of precision, recall and F1-score. Further, our BKBSL models even beat BNs in \(63\%\) of cases in terms of precision, with greater degradation of performance in terms of recall and F1-score. The alternative hypothesis that either traditionally learned BNs or our learned BKBs will outperform each other in all accuracy categories (Chi\({}^{2}\) statistic \(\chi=1.0\), \(p\text{-value}=0.3173\)) is _not_ statistically significant.
Therefore, we fail to reject the null hypothesis that learned BNs or BKBs perform better in these cases, owing to approximately equal total performance. This raises the question: why does our learned BKB perform better on some datasets and not on others? While no feature of the datasets provided any statistically significant predictor of superior performance, and leaving more in-depth analysis to future work, we do hypothesize an explanation. It is a well-known problem that real-world datasets are often unfaithful to DAGs, e.g. BNs, due to the existence of multiple information-equivalent Markov Boundaries (MBs) [21, 14]. Since our BKBSL focuses on learning an optimal structure for every unique exemplar \(\tau\), we can view each learned BKF as an equivalent inference from a hypothetical BN whose dependency structure matches that of the associated BKF. As we are only concerned with the specific instantiations of \(\tau\), the hypothetical BN and BKF will yield the same probability for this world, as their parameters are governed by the same dataset. Since our prediction task is to determine the most likely state of a response variable \(Y\) given a complete set of evidence \(E\), e.g., \(y^{*}=\operatorname*{argmax}_{y}Q(Y=y\mid E)\), the closer our joint probability \(Q(Y=y,E)\) is to the true data distribution \(P(Y=y,E)\), the more accurate our classifier. This is due to the fact that when comparing all conditional probabilities \(Q(Y=y_{i}\mid E)=\frac{Q(Y=y_{i},E)}{Q(E)}\), the denominator cancels out and we are only concerned with the accuracy of \(Q(Y=y_{i},E)\). If we imagine our learned BKFs deriving from various hypothetical BNs, each with uniquely induced MBs for every RV, our fused BKB essentially incorporates a multiplicity of MB choices for each of these hypothetical BNs and selects the best performing choice for every world of interest, i.e., prediction class given evidence. We hypothesize that our BKBSL will then perform better on datasets that induce more information-equivalent MBs, since a BN must select only one and our BKBSL can incorporate multiple in its predictions. Whereas on datasets with fewer MBs, our BKBSL performance may degrade due to overfitting. We intend to study this area further as it may yield clear indications about when to use BKBSL over BNSL in practice.

### Gene Regulatory Network (GRN) Application in Breast Cancer

We applied our approach to somatic mutation profiles of breast cancer cases in TCGA [13] to study whether our learned model could still offer utility in this extremely under-determined domain. Since prediction accuracy would not be a reliable metric of success on this dataset, we focused our analysis on hypothesizing potentially significant mutational interactions in cancer. However, if we are to trust any structural hypotheses generated by our approach, we need to ensure the model captures two fundamental biological concepts: (1) we can extract two- or three-hit interactions that are supported in the literature [15, 16, 17], and (2) we can identify and (possibly) handle genomic instability [1, 18]. Given the well-regarded two- and three-hit hypotheses for understanding the role of genetic mutations in cancer development, a model attempting to describe a mutational GRN should be able to capture this concept.
The premise behind the two- or three-hit hypotheses is that, because many cancers are driven by mutations in various tumor suppressor genes and these loss-of-function mutations are recessive in nature, in general at least two mutations in these genes are needed to develop cancer. There are certain tumor suppressor genes, such as TP53 [15, 16], that are common to all cancer sub-types and are likely the first hit for non-hereditary cancers. Looking at the dependency-relationship subgraph in our learned BKB between a first-hit tumor suppressor gene such as TP53 and a second-hit tumor suppressor gene related to breast cancer, such as HER1 or HER2 [14], we should observe a directed dependency relationship from TP53 to HER1 or HER2. Figure 2(a) shows that we indeed observe this relationship, adding to the biological validity of our model.

It is well known that cancer is also driven by genomic instability [10], the increased tendency for gene mutations to accumulate, disrupting various biological pathways and in turn causing more mutations. An artifact that we discovered in our BKB network analysis, and one illustrative of this phenomenon, is that in some cases there exist cyclic relationships at the random variable level. This is due to separate inferences learning opposing dependency relationships owing to the additional context of their respective inference. This effect can be seen in Figure 2(b). Here we have a cyclic random variable relationship between TP53 and the SNP located at chromosome 17 position 179234297 of the PIK3CA gene. TP53 and PIK3CA are known drivers of genomic instability [1], affecting the mutational states of many other genes in their inference contexts. Since our inference-level parent-set limit was set to one, our algorithm cannot reliably extract a mutual dependency relationship between TP53 and PIK3CA, thereby causing different inferences to have different directionalities. This cannot be captured by a BN. Further, this result is supported by in-vitro research regarding the joint effect of PIK3CA and TP53 on genomic instability in breast cancer [1] and is expected from this model, given that both genes drive many other downstream mutations. Overall, we found 12 of these cyclic relationships in our learned BKB. Of these 12 associations, we found literature support for four of them, namely, PIK3CA and TP53 [10], OBSCN and TP53 [11, 12], MAP3K1 and PIK3CA [13], and CAD and PIK3CA [13].
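For illustration, such random-variable-level cycles can be surfaced directly from the aggregated RV-level edge set. The sketch below (using the networkx library, with a hypothetical edge list) performs the mutual-edge check just described; it is an illustration rather than our actual analysis code.

```
import networkx as nx

def rv_level_cycles(edges):
    """Enumerate mutual (two-node) cycles in an RV-level directed graph
    obtained by aggregating dependency edges over all learned fragments.
    Such cycles are the opposing directionalities discussed above,
    e.g., the TP53 <-> PIK3CA pair."""
    g = nx.DiGraph(edges)
    return sorted({tuple(sorted((u, v)))
                   for u, v in g.edges if u != v and g.has_edge(v, u)})

# Hypothetical aggregated RV-level edges:
print(rv_level_cycles([("TP53", "PIK3CA"), ("PIK3CA", "TP53"),
                       ("TP53", "HER2")]))  # -> [('PIK3CA', 'TP53')]
```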
It is a largely unanswered question as to why machine learning algorithms perform better or worse on particular datasets [15]; we have detailed a possible explanation in Section 5.1 to be explored in future work. As mentioned in Section 4, we could also address accuracy degradation by tuning the source node reliabilities in our model. Such an approach yields another degree of freedom to adjust the model and may also highlight the importance/significance of individual data instances in relation to overall model accuracy. We also leave this direction for future research.

## 7 Conclusions

We have presented a new approach for performing Bayesian structure learning at the random variable instantiation level by leveraging Bayesian Knowledge Bases. We have detailed a theoretical justification for our algorithm and learned model as being the fusion of minimal-entropy inferences or explanations over each training exemplar. We demonstrated empirically that our BKBSL algorithm finds a superior BKB to an equivalent BN (scored as a BKB) on 40 benchmark datasets using our MDL formulation. Further, we demonstrated the practical utility of our approach by presenting statistically competitive accuracy with learned BNs over 22 benchmark datasets using a 10-fold cross validation. This provides evidence that our algorithm adequately generalizes to unseen data based on known knowledge rather than over-generalizing to force a complete distribution. Lastly, we conducted a structural analysis over a gene regulatory network learned from breast cancer mutation data taken from TCGA. This analysis resulted in finding dependency relationships that matched biological intuition and also revealed associations that are well known in the bioinformatics community.

Figure 2: (a) RV-level graph of the learned BKB over TCGA breast cancer, subgraphed on TP53 SNP relationships with HER genes. (b) Genomic instability evidence from the RV-level cycle between a PIK3CA SNP and the general TP53 gene variable. To capture multiple levels of granularity, we included features related to exact positional SNPs as well as gene-level features and general variant classifications. For details about our naming conventions and feature selection process see Appendix A.

## Acknowledgements

This research was funded in part by the National Center for Advancing Translational Sciences (NCATS), National Institutes of Health, through the Biomedical Data Translator program (NIH award OT2TR003436), Air Force Office of Scientific Research Grant No. FA9550-20-1-0032 and DURIP Grant No. N00014-15-1-2514. Special thanks to Joseph Gormlley and Eugene Hinderer for their comments and encouragement.
2303.07630
Non-isomorphic graphs with common degree sequences
For all positive even integers $n$, graphs of order $n$ with degree sequence \begin{equation*} S_{n}:1,2,\dots,n/2,n/2,n/2+1,n/2+2,\dots,n-1 \end{equation*} naturally arose in the study of a labeling problem in \cite{IMO}. This fact motivated the authors of the aforementioned paper to study these sequences and as a result of this study they proved that there is a unique graph of order $n$ realizing $S_{n}$ for every even integer $n$. The main goal of this paper is to generalize this result.
Rikio Ichishima, Francesc A. Muntaner-Batle
2023-03-14T05:01:19Z
http://arxiv.org/abs/2303.07630v1
# Non-isomorphic graphs with common degree sequences

###### Abstract.

For all positive even integers \(n\), graphs of order \(n\) with degree sequence \[S_{n}:1,2,\ldots,n/2,n/2,n/2+1,n/2+2,\ldots,n-1\] naturally arose in the study of a labeling problem in [2]. This fact motivated the authors of the aforementioned paper to study these sequences, and as a result of this study they proved that there is a unique graph of order \(n\) realizing \(S_{n}\) for every even integer \(n\). The main goal of this paper is to generalize this result.

Key words and phrases: vertex degree, degree sequence, isomorphism problems in graph theory, graph operation. 2020 Mathematics Subject Classification: Primary 05C07; Secondary 05C60, 05C76.

## 1. Introduction

Unless stated otherwise, the graph-theoretical notation and terminology used here will follow Chartrand and Lesniak [1]. In particular, we assume that graphs considered in this paper are simple, that is, without loops or multiple edges. To indicate that a graph has _vertex set_ \(V\) and _edge set_ \(E\), we write \(G=\left(V,E\right)\). To emphasize that \(V\) and \(E\) are the vertex set and edge set of a graph \(G\), we will write \(V\) as \(V\left(G\right)\) and \(E\) as \(E\left(G\right)\). The _removal of a vertex_ \(v\) from a graph \(G\) results in that subgraph \(G-v\) of \(G\) consisting of all vertices of \(G\) except \(v\) and all edges not incident with \(v\). Thus, \(G-v\) is the maximal subgraph of \(G\) not containing \(v\). On the other hand, if \(v\) is not a vertex of \(G\), the _addition of the vertex_ \(v\) results in the smallest supergraph \(G+v\) of \(G\) containing the vertex \(v\) and all possible edges incident with \(v\); that is, \(v\) is joined to every vertex of \(G\). The _union_ \(G=G_{1}\cup G_{2}\) has \(V\left(G\right)=V\left(G_{1}\right)\cup V\left(G_{2}\right)\) and \(E\left(G\right)=E\left(G_{1}\right)\cup E\left(G_{2}\right)\). The _degree_ of a vertex \(v\) in a graph \(G\), denoted by \(\deg_{G}v\), is the number of edges incident with \(v\). A sequence \(s:d_{1},d_{2},\ldots,d_{n}\) of nonnegative integers is called a _degree sequence_ of a graph \(G\) of order \(n\) if the vertices of \(G\) can be labeled \(v_{1},v_{2},\ldots,v_{n}\) so that \(\deg v_{i}=d_{i}\) for \(1\leq i\leq n\). Throughout this paper, we write the degree sequence of a graph as an increasing sequence. A finite sequence \(s\) of nonnegative integers is _graphical_ if there exists some graph that realizes \(s\), that is, \(s\) is a degree sequence of some graph.

The concepts of graph isomorphism and isomorphic graphs are also crucial for the development of this paper, and although they are very basic in graph theory, we introduce them as a matter of completeness. Let \(G_{1}=\left(V_{1},E_{1}\right)\) and \(G_{2}=\left(V_{2},E_{2}\right)\) be two graphs. They are _isomorphic_ (written \(G_{1}\cong G_{2}\)) if there exists a bijective function \(\phi:V_{1}\to V_{2}\) such that \(xy\in E_{1}\) if and only if \(\phi\left(x\right)\phi\left(y\right)\in E_{2}\). In this case, the function \(\phi\) is called an _isomorphism_ from \(G_{1}\) to \(G_{2}\). The following two lemmas regarding isomorphism of graphs are very elementary and fundamental but, nevertheless, necessary for the proof of the main result of this paper. Hence, we state and prove them next.
**Lemma 1.1**.: _Let \(G_{1}=\left(V_{1},E_{1}\right)\) and \(G_{2}=\left(V_{2},E_{2}\right)\) be two graphs of order \(n\) for which there exist unique vertices \(v_{1}\in V_{1}\) and \(v_{2}\in V_{2}\) such that_ \[\deg_{G_{1}}v_{1}=\deg_{G_{2}}v_{2}=n-1.\] _Then \(G_{1}\cong G_{2}\) if and only if \(G_{1}-v_{1}\cong G_{2}-v_{2}\)._

Proof.: First, assume that \(G_{1}\cong G_{2}\). Then there exists an isomorphism \(\phi:V_{1}\to V_{2}\). Since \(v_{i}\) (\(i=1,2\)) are the only vertices of \(V_{i}\) with degree \(n-1\) and each isomorphism preserves degrees, it follows that \(\phi\left(v_{1}\right)=v_{2}\). Thus, if we consider \(G_{1}-v_{1}\) and \(G_{2}-v_{2}\), it follows that the function \(\phi^{\prime}:V_{1}\backslash\{v_{1}\}\to V_{2}\backslash\{v_{2}\}\) defined by \(\phi^{\prime}\left(a\right)=\phi\left(a\right)\) for all \(a\in V_{1}\backslash\{v_{1}\}\) is well defined and bijective. Furthermore, \(ab\in E_{1}\backslash\{v_{1}x\mid x\in V_{1}\backslash\{v_{1}\}\}\) if and only if \(\phi^{\prime}\left(a\right)\phi^{\prime}\left(b\right)\in E_{2}\backslash\{v_{2}x\mid x\in V_{2}\backslash\{v_{2}\}\}\). This implies that \(\phi^{\prime}:V_{1}\backslash\{v_{1}\}\to V_{2}\backslash\{v_{2}\}\) is an isomorphism and hence \(G_{1}-v_{1}\cong G_{2}-v_{2}\).

Next, assume that \(H_{1}=\left(V_{1}^{\prime},E_{1}^{\prime}\right)\) and \(H_{2}=\left(V_{2}^{\prime},E_{2}^{\prime}\right)\) are two isomorphic graphs with an isomorphism \(\phi:V_{1}^{\prime}\to V_{2}^{\prime}\). Also, let \(v_{1}\) and \(v_{2}\) be two new vertices, and consider the two graphs \(H_{1}+v_{1}\) and \(H_{2}+v_{2}\). We show that \(H_{1}+v_{1}\cong H_{2}+v_{2}\). To do this, consider the function \(\phi^{\prime}:V\left(H_{1}+v_{1}\right)\to V\left(H_{2}+v_{2}\right)\) defined by \[\phi^{\prime}\left(v\right)=\left\{\begin{array}{ll}\phi\left(v\right)&\text{if }v\in V_{1}^{\prime}\\ v_{2}&\text{if }v=v_{1}.\end{array}\right.\] We will show that \(\phi^{\prime}\) is an isomorphism from \(H_{1}+v_{1}\) to \(H_{2}+v_{2}\). Since \(\phi\) is an isomorphism from \(H_{1}\) to \(H_{2}\), it follows that, for \(a,b\in V_{1}^{\prime}\), \(ab\in E\left(H_{1}+v_{1}\right)\) if and only if \(\phi^{\prime}\left(a\right)\phi^{\prime}\left(b\right)\in E\left(H_{2}+v_{2}\right)\). On the other hand, since \(av_{1}\in E\left(H_{1}+v_{1}\right)\) for all \(a\in V_{1}^{\prime}\) and \(bv_{2}\in E\left(H_{2}+v_{2}\right)\) for all \(b\in V_{2}^{\prime}\), we have \(\phi^{\prime}\left(a\right)\phi^{\prime}\left(v_{1}\right)=\phi\left(a\right)v_{2}\in E\left(H_{2}+v_{2}\right)\). This implies that \(\phi^{\prime}\) is an isomorphism from \(H_{1}+v_{1}\) to \(H_{2}+v_{2}\), so that \(H_{1}+v_{1}\cong H_{2}+v_{2}\).

**Lemma 1.2**.: _Let \(G_{1}=\left(V_{1},E_{1}\right)\) and \(G_{2}=\left(V_{2},E_{2}\right)\) be two graphs. If \(v_{1}\) and \(v_{2}\) are two new vertices, then \(G_{1}\cong G_{2}\) if and only if \(G_{1}\cup v_{1}\cong G_{2}\cup v_{2}\)._

Proof.: First, assume that \(G_{1}\cong G_{2}\). Then there exists an isomorphism \(\phi:V_{1}\to V_{2}\).
Now, consider the function \(\phi^{\prime}:V_{1}\cup\{v_{1}\}\to V_{2}\cup\{v_{2}\}\) defined by \[\phi^{\prime}\left(v\right)=\left\{\begin{array}{ll}\phi\left(v\right)&\text{if }v\in V_{1}\\ v_{2}&\text{if }v=v_{1}.\end{array}\right.\] Since no edge of the form \(av_{1}\) exists in \(G_{1}\cup v_{1}\) and no edge of the form \(bv_{2}\) exists in \(G_{2}\cup v_{2}\), it follows that \(\phi^{\prime}\) is an isomorphism from \(G_{1}\cup v_{1}\) to \(G_{2}\cup v_{2}\), and hence \(G_{1}\cup v_{1}\cong G_{2}\cup v_{2}\).

Next, assume that \(G_{1}\cup v_{1}\cong G_{2}\cup v_{2}\). Then there exists an isomorphism \(\phi:V_{1}\cup\{v_{1}\}\to V_{2}\cup\{v_{2}\}\). Since the image under \(\phi\) of any isolated vertex is an isolated vertex, we may assume, without loss of generality, that \(\phi\left(v_{1}\right)=v_{2}\). This implies that the function \(\phi^{\prime}:V_{1}\to V_{2}\) defined by \(\phi^{\prime}\left(v\right)=\phi\left(v\right)\) for all \(v\in V_{1}\) is clearly well defined, bijective and an isomorphism from \(G_{1}\) to \(G_{2}\). Therefore, \(G_{1}\cong G_{2}\).

## 2. Main results

With the information provided in the introduction, we are ready to present our main results. Let \(S_{0}:0\leq a_{1}\leq a_{2}\leq\dots\leq a_{n-1}\leq a_{n}\) be a graphical sequence. If we assume that there exist exactly \(k\) (\(k\geq 1\)) graphs that realize \(S_{0}\), then we have the following result.

**Theorem 2.1**.: _The sequences_ \[S_{0}^{(1)}:1,a_{1}+1,a_{2}+1,\dots,a_{n}+1,n+1;\] \[S_{0}^{(2)}:1,2,a_{1}+2,a_{2}+2,\dots,a_{n}+2,n+2,n+3;\] \[S_{0}^{(3)}:1,2,3,a_{1}+3,a_{2}+3,\dots,a_{n}+3,n+3,n+4,n+5;\] \[\vdots\] \[S_{0}^{(i)}:1,2,3,\dots,i,a_{1}+i,a_{2}+i,\dots,a_{n}+i,n+i,n+i+1,\dots,n+2i-1\] \[\vdots\] _are all graphical. Furthermore, there exist exactly \(k\), \(k\geq 1\), connected non-isomorphic graphs that realize each one of the sequences \(S_{0}^{(1)},S_{0}^{(2)},\dots,S_{0}^{(i)},\dots\)._

Proof.: We start by showing that each sequence \(S_{0}^{(i)}\) is graphical for any positive integer \(i\). To do this, we only need to take a graph that realizes \(S_{0}\), introduce two new vertices and join one of these two new vertices with all remaining vertices. Hence, \(S_{0}^{(1)}\) is graphical. To obtain a graph that realizes \(S_{0}^{(2)}\), we just need to take a graph that realizes \(S_{0}^{(1)}\) and once again introduce two new vertices, joining one of these two new vertices with all remaining vertices. If we continue this process inductively, we obtain a graph that realizes \(S_{0}^{(i)}\) for any positive integer \(i\). Now, observe that since each graph realizing \(S_{0}^{(i)}\) (\(i\geq 1\)) has a vertex which is adjacent to all the other vertices, it follows that all these graphs are connected. Thus, it remains to show that each one of these sequences is realized by exactly \(k\) (\(k\geq 1\)) non-isomorphic graphs. To see this, let \(S_{0}=S_{0}^{(0)}\) and proceed by induction on the superscript \(i\) of \(S_{0}^{(i)}\) for \(i\geq 0\). First, observe that \(S_{0}^{(0)}\) has the property that there exist exactly \(k\) (\(k\geq 1\)) non-isomorphic graphs that realize \(S_{0}^{(0)}\) by assumption. Next, let \(i=l\) (\(l\geq 0\)) and assume that there exist exactly \(k\) (\(k\geq 1\)) non-isomorphic graphs realizing \(S_{0}^{(l)}\). Consider the sequence \[S_{0}^{(l+1)}:1,2,\dots,l,l+1,a_{1}+l+1,a_{2}+l+1,\dots,a_{n}+l+1,n+l+1,n+l+2,\dots,n+2l+1,\] and let \(G_{0}^{(l+1)}\) be any graph that realizes \(S_{0}^{(l+1)}\).
It is now clear that the vertex of degree \(n+2l+1\) is adjacent to all other vertices of \(V\left(G_{0}^{(l+1)}\right)\). It is also true that if we eliminate this vertex, then we obtain a new graph with degree sequence \(0,S_{0}^{(l)}\). By the inductive hypothesis, there exist exactly \(k\) (\(k\geq 1\)) non-isomorphic graphs with degree sequence \(S_{0}^{(l)}\). Then Lemma 1.2 yields that there are exactly \(k\) (\(k\geq 1\)) non-isomorphic graphs with degree sequence \(0,S_{0}^{(l)}\), and Lemma 1.1 implies that there are exactly \(k\) (\(k\geq 1\)) non-isomorphic graphs realizing \(S_{0}^{(l+1)}\). Therefore, the result follows.

To conclude this section, notice that it is clear that the only graph that realizes the sequence \(s:1,1\) is the complete graph \(K_{2}\) of order \(2\). From this observation together with Theorem 2.1, the next result found in [2] follows as an immediate corollary.

**Corollary 2.1**.: _For all positive even integers \(n\), there exists a unique graph of order \(n\) that realizes the sequence \(S_{n}:1,2,\ldots,n/2,n/2,n/2+1,n/2+2,\ldots,n-1\)._

In summary, what we have proved in this paper is that if a degree sequence is realized by exactly \(k\) (\(k\geq 1\)) non-isomorphic graphs of order \(n\), then there exist infinitely many degree sequences, each of which is realized by exactly \(k\) (\(k\geq 1\)) non-isomorphic graphs. Furthermore, all these graphs have the additional property that they are connected.

Acknowledgment. The authors would like to dedicate this paper to Susana Clara Lopez Masip, who passed away on December 26, 2022 after a life dedicated to graph theory. The authors are also gratefully indebted to Yukio Takahashi for his technical assistance.
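For small orders, Corollary 2.1 can also be verified by exhaustive enumeration. The following brute-force sketch (our illustration, using the networkx library) counts the isomorphism classes of graphs realizing a given degree sequence and confirms that \(S_{6}:1,2,3,3,4,5\) is realized by exactly one graph; it is practical only for small \(n\).

```
import itertools
import networkx as nx

def realizations(seq):
    """Return one representative per isomorphism class of the graphs
    on len(seq) labeled vertices realizing the degree sequence seq."""
    n = len(seq)
    target = sorted(seq)
    pairs = list(itertools.combinations(range(n), 2))
    reps = []
    for mask in itertools.product((0, 1), repeat=len(pairs)):
        g = nx.Graph()
        g.add_nodes_from(range(n))
        g.add_edges_from(e for e, bit in zip(pairs, mask) if bit)
        if sorted(d for _, d in g.degree()) == target:
            if not any(nx.is_isomorphic(g, h) for h in reps):
                reps.append(g)
    return reps

# S_6 : 1, 2, 3, 3, 4, 5 -- exactly one realization, as Corollary 2.1 asserts.
print(len(realizations([1, 2, 3, 3, 4, 5])))  # -> 1
```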
2307.04816
Q-YOLO: Efficient Inference for Real-time Object Detection
Real-time object detection plays a vital role in various computer vision applications. However, deploying real-time object detectors on resource-constrained platforms poses challenges due to high computational and memory requirements. This paper describes a low-bit quantization method to build a highly efficient one-stage detector, dubbed as Q-YOLO, which can effectively address the performance degradation problem caused by activation distribution imbalance in traditional quantized YOLO models. Q-YOLO introduces a fully end-to-end Post-Training Quantization (PTQ) pipeline with a well-designed Unilateral Histogram-based (UH) activation quantization scheme, which determines the maximum truncation values through histogram analysis by minimizing the Mean Squared Error (MSE) quantization errors. Extensive experiments on the COCO dataset demonstrate the effectiveness of Q-YOLO, outperforming other PTQ methods while achieving a more favorable balance between accuracy and computational cost. This research contributes to advancing the efficient deployment of object detection models on resource-limited edge devices, enabling real-time detection with reduced computational and memory overhead.
Mingze Wang, Huixin Sun, Jun Shi, Xuhui Liu, Baochang Zhang, Xianbin Cao
2023-07-01T03:50:32Z
http://arxiv.org/abs/2307.04816v1
# Q-YOLO: Efficient Inference for Real-time Object Detection

###### Abstract

Real-time object detection plays a vital role in various computer vision applications. However, deploying real-time object detectors on resource-constrained platforms poses challenges due to high computational and memory requirements. This paper describes a low-bit quantization method to build a highly efficient one-stage detector, dubbed as Q-YOLO, which can effectively address the performance degradation problem caused by activation distribution imbalance in traditional quantized YOLO models. Q-YOLO introduces a fully end-to-end Post-Training Quantization (PTQ) pipeline with a well-designed Unilateral Histogram-based (UH) activation quantization scheme, which determines the maximum truncation values through histogram analysis by minimizing the Mean Squared Error (MSE) quantization errors. Extensive experiments on the COCO dataset demonstrate the effectiveness of Q-YOLO, outperforming other PTQ methods while achieving a more favorable balance between accuracy and computational cost. This research contributes to advancing the efficient deployment of object detection models on resource-limited edge devices, enabling real-time detection with reduced computational and memory overhead.

Keywords: Real-time Object Detection, Post-training Quantization

## 1 Introduction

Real-time object detection is a crucial component in various computer vision applications, such as multi-object tracking [43, 42], autonomous driving [15, 7], and robotics [13, 25]. The development of real-time object detectors, particularly YOLO-based detectors, has yielded remarkable performance in terms of accuracy and speed. For example, the YOLOv7-E6 [34] object detector achieves 55.9% mAP on COCO 2017, outperforming both the transformer-based detector SwinL Cascade-Mask R-CNN [22, 4] and the convolution-based detector ConvNeXt-XL Cascade-Mask R-CNN [36, 4] in both speed and accuracy. Despite their success, the computational cost during inference remains a challenge for real-time object detectors on resource-limited edge devices, such as mobile CPUs or GPUs, limiting their practical usage. Substantial efforts on network compression have been made towards efficient online inference [39; 31; 26; 5]. Methods include enhancing network designs [10; 37; 41], conducting network search [46], network pruning [9; 8], and network quantization [17]. Quantization, in particular, has gained significant popularity for deployment on AI chips by representing a network using low-bit formats. There are two prevailing quantization methods, Quantization-Aware Training (QAT) [17; 38] and Post-Training Quantization (PTQ) [20]. Although QAT generally achieves better results than PTQ, it requires training and optimization of all model parameters during the quantization process. The need for pretraining data and significant GPU resources makes QAT challenging to execute. On the other hand, PTQ is a more efficient approach for quantizing real-time object detectors. To examine low-bit quantization for real-time object detection, we first establish a PTQ baseline using YOLOv5 [33], a state-of-the-art object detector. Through empirical analysis on the COCO 2017 dataset, we observe notable performance degradation after quantization, as indicated in Table 1. For example, a 4-bit quantized YOLOv5s employing Percentile achieves only 7.0% mAP, resulting in a performance gap of 30.4% compared to the original real-valued model.
We find the performance drop of quantized YOLOs can be attributed to the activation distribution imbalance. As shown in Fig. 1, we observe a high concentration of values close to the lower bound and a significant decrease in occurrences above zero.

Figure 1: Activation value distribution histogram (with 2048 bins) of the model.21.conv layer in YOLOv5s. The occurrence of values between 0 and -0.2785 is extremely high, while the frequency of values above zero decreases significantly, revealing an imbalanced pattern. min denotes the fixed minimum truncation value, while max represents the maximum truncation value following the min-max principle. Max Q-YOLO(8) refers to the maximum truncation value when using the Q-YOLO quantization model at 8-bit, and Max Q-YOLO(4) indicates the maximum truncation value when applying the Q-YOLO quantization model at 4-bit.

When employing fixed truncation values such as MinMax, representing activation values with extremely low probabilities would consume a considerable number of bits within the limited integer bit width, resulting in further loss of information. In light of the above issue, we introduce Q-YOLO, a fully end-to-end PTQ quantization architecture for real-time object detection, as depicted in Fig. 2. Q-YOLO quantizes the backbone, neck, and head modules of YOLO models, while employing standard MinMax quantization for weights. To tackle the problem of activation distribution imbalance, we introduce a novel approach called Unilateral Histogram-based (UH) activation quantization. UH iteratively determines the maximum truncation value that minimizes the quantization error through histograms. This technique significantly reduces calibration time and effectively addresses the discrepancy caused by quantization, optimizing the quantization process to maintain stable activation quantization. By mitigating information loss in activation quantization, our method ensures accurate object detection results, thereby enabling precise and reliable low-bit real-time object detection performance. Our contributions can be summarized as follows:

1. We introduce a fully end-to-end PTQ quantization architecture specifically designed for real-time object detection, dubbed as Q-YOLO.
2. A Unilateral Histogram-based (UH) activation quantization method is proposed to leverage histogram analysis to find the maximum truncation values, which can effectively minimize the MSE quantization error.
3. Through extensive experiments on various object detectors, we demonstrate that Q-YOLO outperforms baseline PTQ models by a significant margin. The 8-bit Q-YOLO model applied on YOLOv7 achieves a 3\(\times\) acceleration while maintaining performance comparable to its full-precision counterpart on COCO, highlighting its potential as a general solution for quantizing real-time object detectors.

## 2 Related Work

### Quantization

Quantized neural networks are based on low-bit weights and activations to accelerate model inference and save memory. The commonly used model quantization methods include quantization-aware training (QAT) and post-training quantization (PTQ). In QAT, Zhang _et al._ [40] build a binarized convolutional neural network based on a projection function and a new update rule during backpropagation. Li _et al._ [17] proposed an information rectification module and distribution-guided distillation to push the bit-width in a quantized vision transformer. TTQ [44] uses two real-valued scaling coefficients to quantize the weights to ternary values.
Zhuang _et al._ [45] present a low-bit (2-4 bit) quantization scheme using a two-stage approach to alternately quantize the weights and activations, providing an optimal trade-off among memory, efficiency, and performance. In [12], the quantization intervals are parameterized, and optimal values are obtained by directly minimizing the task loss of the network. ZeroQ [3] supports uniform and mixed-precision quantization by optimizing for a distilled dataset which is engineered to match the statistics of the batch normalization across different network layers. [6] enabled accurate approximation for tensor values that have bell-shaped distributions with long tails and found the entire range by minimizing the quantization error. QAT often requires high-level expert knowledge and huge GPU resources for training or fine-tuning, especially for large-scale pre-trained models. To reduce these costs of quantization, PTQ, which is training-free, has received more widespread attention, and many excellent works have arisen. MinMax and EMA [11] methods are commonly used to compress or reduce the weights of the PTQ model. MinMax normalizes the weights and bias values in the model to a predefined range, such as [-1, 1], to reduce the storage space and increase the inference speed. MSE quantization involves evaluating and adjusting the quantized activation values to minimize the impact of quantization on model performance.

### Real-time Object Detection

Deep learning based object detectors can be generally classified into two categories: two-stage and single-stage object detectors. Two-stage detectors, such as Faster R-CNN [30], RPN [18], and Cascade R-CNN [4], first generate region proposals and then refine them in a second stage. On the other hand, single-stage object detectors have gained significant popularity in real-time object detection due to their efficiency and effectiveness. These detectors aim to predict object bounding boxes and class labels in a single pass of the neural network, eliminating the need for time-consuming region proposal generation.

Figure 2: Architecture of Q-YOLO.

One of the pioneering single-shot detectors is YOLO [27], which divides the input image into a grid and assigns bounding boxes and class probabilities to predefined anchor boxes. The subsequent versions, YOLOv2 [28] and YOLOv3 [29], introduced improvements in terms of network architecture and feature extraction, achieving better accuracy without compromising real-time performance. Another influential single-shot detector is SSD [21], which employs a series of convolutional layers at different scales to detect objects of various sizes. By using feature maps at multiple resolutions, SSD achieves high accuracy while maintaining real-time performance. Variants of SSD, such as MobileNet-SSD [10] and Pelee [35], further optimize the architecture to achieve faster inference on resource-constrained devices. Efficiency is a critical aspect of real-time object detection, especially for deployment on computationally limited platforms. MobileNet [10] and its subsequent variants, such as MobileNetV2 [32] and MobileNetV3 [14], have received significant attention for their lightweight architectures. These networks utilize depth-wise separable convolutions and other techniques to reduce the number of parameters and operations without significant accuracy degradation. ShuffleNet [41] introduces channel shuffling operations to exploit group convolutions, enabling a trade-off between model size and computational cost.
ShuffleNetV2 [23] further improves the efficiency by introducing a more efficient block design and exploring different network scales.

## 3 Methodology

### Preliminaries

#### 3.1.1 Network Quantization Process.

We first review the main steps of the Post-Training Quantization (PTQ) process and supply the necessary details. Firstly, the network is either trained or provided as a pre-trained model using full precision and floating-point arithmetic for weights and activations. Subsequently, numerical representations of weights and activations are suitably transformed for quantization. Finally, the fully-quantized network is deployed either on integer arithmetic hardware or simulated on GPUs, enabling efficient inference with reduced memory storage and computational requirements while maintaining reasonable accuracy levels.

#### 3.1.2 Uniform Quantization.

Assuming the quantization bit-width is \(b\), the quantizer \(\mathrm{Q}(\mathbf{x}|b)\) can be formulated as a function that maps a floating-point number \(\mathbf{x}\in\mathbb{R}\) to the nearest quantization bin: \[\mathrm{Q}(\mathbf{x}|b):\mathbb{R}\rightarrow\hat{\mathbf{x}}, \tag{1}\] \[\hat{\mathbf{x}}=\begin{cases}\{-2^{b-1},\cdots,2^{b-1}-1\}&\text{Signed},\\ \{0,\cdots,2^{b}-1\}&\text{Unsigned}.\end{cases} \tag{2}\] There are various choices of quantizer \(\mathrm{Q}(\mathbf{x}|b)\), of which the uniform quantizer [11] is typically used. Uniform quantization is well supported on most hardware platforms. Its unsigned quantizer \(\mathrm{Q}(\mathbf{x}|b)\) can be defined as: \[\mathrm{Q}(\mathbf{x}|b)=\mathrm{clip}(\lfloor\frac{\mathbf{x}}{s_{\mathbf{x}}}\rceil+zp_{\mathbf{x}},0,2^{b}-1), \tag{3}\] where \(s_{\mathbf{x}}\) (scale) and \(zp_{\mathbf{x}}\) (zero-point) are quantization parameters. In Eq. 4, \(u\) (upper) and \(l\) (lower) define the quantization grid limits. \[s_{\mathbf{x}}=\frac{u-l}{2^{b}-1},\quad zp_{\mathbf{x}}=\mathrm{clip}(\lfloor-\frac{l}{s}\rceil,0,2^{b}-1). \tag{4}\] The dequantization process can be formulated as: \[\tilde{\mathbf{x}}=(\hat{\mathbf{x}}-zp_{\mathbf{x}})\times s_{\mathbf{x}}. \tag{5}\]

### Quantization Range Setting

Quantization range setting is the process of establishing the upper and lower clipping thresholds, denoted as \(u\) and \(l\) respectively, of the quantization grid. The crucial trade-off in range setting lies in the balance between two types of errors: clipping error and rounding error. Clipping error arises when data is truncated to fit within the predefined grid limits, as described in Eq. 4. Such truncation leads to information loss and a decrease in precision in the resulting quantized representation. On the other hand, rounding error occurs due to the imprecision introduced during the rounding operation, as described in Eq. 3. This error can accumulate and have an impact on the overall accuracy of the quantized representation. The following methods provide different trade-offs between the two quantities.

#### 3.2.1 MinMax.

In the experiments, we use the MinMax method for weight quantization, where the clipping thresholds \(l_{\mathbf{x}}\) and \(u_{\mathbf{x}}\) are formulated as: \[l_{\mathbf{x}}=\min(\mathbf{x}),\quad u_{\mathbf{x}}=\max(\mathbf{x}). \tag{6}\] This leads to no clipping error. However, this approach is sensitive to outliers, as strong outliers may cause excessive rounding errors.

#### 3.2.2 Mean Squared Error (MSE).

One way to mitigate the problem of large outliers is by employing MSE-based range setting.
In this method, we determine \(l_{\mathbf{x}}\) and \(u_{\mathbf{x}}\) that minimize the mean squared error (MSE) between the original and the quantized tensor: \[\underset{l_{\mathbf{x}},u_{\mathbf{x}}}{\mathrm{arg\ min}}\ \mathrm{MSE}(\mathbf{x},\mathbf{Q}_{l_{\mathbf{x}},u_{\mathbf{x}}}), \tag{7}\] where \(\mathbf{x}\) represents the original tensor and \(\mathbf{Q}_{l_{\mathbf{x}},u_{\mathbf{x}}}\) denotes the quantized tensor produced using the determined clipping thresholds \(l_{\mathbf{x}}\) and \(u_{\mathbf{x}}\). The optimization problem is commonly solved using grid search, the golden-section method, or analytical approximations with a closed-form solution.

### Unilateral Histogram-based (UH) Activation Quantization

To address the issue of activation value imbalance, we propose a new approach called Unilateral Histogram-based (UH) activation quantization. We first provide an empirical study of the activation values after forward propagation through the calibration dataset. As depicted in Figure 1, we observe a concentrated distribution of values near the lower bound, accompanied by a noticeable decrease in occurrences above zero. Further analysis of the activation values reveals that the empirical value of -0.2785 serves as the lower bound. This phenomenon can be attributed to the frequent utilization of the Swish (SiLU) activation function in the YOLO series.

**Algorithm 1** Unilateral Histogram-based (UH) Activation Quantization

```
1: Input: FP32 histogram \(H\) with 2048 bins
2: for \(i\) in range(128, 2048) do
3:   Reference distribution \(P\gets H[0:i]\)
4:   Outliers count \(c\leftarrow\sum_{j=i}^{2047}H[j]\)
5:   \(P[i-1]\leftarrow P[i-1]+c\)
6:   \(P\leftarrow\frac{P}{\sum_{j}(P[j])}\)
7:   Candidate distribution \(C\leftarrow\) Quantize \(H[0:i]\) into 128 levels
8:   Expand \(C\) to have \(i\) bins
9:   \(Q\leftarrow\frac{C}{\sum_{j}(C[j])}\)
10:  \(MSE[i]\leftarrow\) Mean Squared Error\((P,Q)\)
11: end for
12: Output: Index \(m\) for which \(MSE[m]\) is minimal.
```

Based on the empirical evidence, we introduce an asymmetric quantization approach called Unilateral Histogram-based (UH) activation quantization. In UH, we iteratively determine the maximum truncation value that minimizes the quantization error, while keeping the minimum truncation value fixed at -0.2785, as illustrated in the following: \[u_{\mathbf{x}}=\underset{u_{\mathbf{x}}}{\text{arg min}}\;\text{MSE}(\mathbf{x},\mathbf{Q}_{l_{\mathbf{x}},u_{\mathbf{x}}}),\quad l_{\mathbf{x}}=-0.2785. \tag{8}\] To evaluate the quantization error during the search for the maximum truncation value, we utilize the fp32 floating-point numbers derived from the center values of the gathered 2048 bins, as introduced in Algorithm 1. These numbers are successively quantized under each candidate maximum truncation value. Through this iterative process, we identify the optimal truncation range. The UH activation quantization method offers two key advantages. Firstly, it significantly reduces calibration time. Secondly, it ensures stable activation quantization by allowing a larger set of integers to represent the frequently occurring activation values between -0.2785 and 0, thereby improving quantization accuracy.

## 4 Experiments

In order to assess the performance of the proposed Q-YOLO detectors, we conducted a comprehensive series of experiments on the widely recognized COCO 2017 [19] detection benchmark.
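Before turning to those experiments, a minimal sketch of the UH search is given below, assuming SiLU-style activations (so the range is \([-0.2785,\infty)\)). It approximates the MSE of Eq. (8) directly on the 2048-bin histogram with count-weighted per-bin reconstruction errors, which simplifies the distribution comparison of Algorithm 1; the function name and defaults are ours.

```
import numpy as np

def uh_upper_threshold(acts, bits=8, num_bins=2048, l=-0.2785):
    """Search the upper clipping threshold u_x minimizing a histogram
    approximation of Eq. (8), with l_x fixed at -0.2785 (the minimum
    of SiLU). Bin counts weight the per-bin reconstruction error."""
    hist, edges = np.histogram(acts, bins=num_bins, range=(l, float(np.max(acts))))
    centers = 0.5 * (edges[:-1] + edges[1:])
    levels = 2 ** bits
    best_u, best_mse = edges[-1], np.inf
    for i in range(128, num_bins + 1):      # candidate cut points, as in Algorithm 1
        u = edges[i]
        s = (u - l) / (levels - 1)          # scale, Eq. (4)
        zp = round(-l / s)
        x = np.clip(centers, l, u)          # clipping error for bins above u
        q = np.clip(np.round(x / s) + zp, 0, levels - 1)
        deq = (q - zp) * s                  # dequantize, Eq. (5)
        mse = float(np.sum(hist * (deq - centers) ** 2) / hist.sum())
        if mse < best_mse:
            best_mse, best_u = mse, u
    return best_u
```

In a full pipeline, the returned \(u_{\mathbf{x}}\), together with \(l_{\mathbf{x}}=-0.2785\), would be plugged into Eq. (4) to obtain the activation scale and zero-point.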
As one of the most popular object detection datasets, COCO 2017 [19] has become instrumental in benchmarking state-of-the-art object detectors, thanks to its rich annotations and challenging scenarios. Throughout our experimental analysis, we employed standard COCO metrics on the bounding box detection task to evaluate the efficacy of our approach. ### Implementation Details We randomly selected 1500 training images from the COCO train2017 dataset [19] as the calibration data, which served as the foundation for optimizing the model parameters. Additionally, the performance evaluation took place on the COCO val2017 dataset [19], comprising 5000 images. The image size is set to 640x640. \begin{table} \begin{tabular}{c c c c c c c c c c c} \hline Models & Method & Bits & Size\({}_{(\text{MB})}\) & OPs\({}_{(\text{G})}\) & AP & AP\({}_{50}\) & AP\({}_{75}\) & AP\({}_{s}\) & AP\({}_{m}\) & AP\({}_{l}\) \\ \hline \multirow{8}{*}{YOLOv5s [33]} & Real-valued & 32-32 & 57.6 & 16.5 & 37.4 & 57.1 & 40.1 & 21.6 & 42.3 & 48.9 \\ \cline{2-11} & MinMax & & & & 37.2 & 56.9 & 39.8 & 21.4 & 42.2 & 48.5 \\ & Percentile [16] & 8-8 & 14.4 & 4.23 & 36.9 & 56.4 & 39.6 & 21.3 & 42.4 & 48.1 \\ & **Q-YOLO** & & & & & **37.4** & **56.9** & **39.8** & **21.4** & **42.4** & **48.8** \\ \cline{2-11} & Percentile [16] & \multirow{2}{*}{4-4} & \multirow{2}{*}{7.7} & \multirow{2}{*}{2.16} & 7.0 & 14.2 & 6.3 & 4.1 & 10.7 & 7.9 \\ & **Q-YOLO** & & & & **14.0** & **26.2** & **13.5** & **7.9** & **17.6** & **19.0** \\ \hline \multirow{8}{*}{YOLOv5m [33]} & Real-valued & 32-32 & 169.6 & 49.0 & 45.1 & 64.1 & 49 & 28.1 & 50.6 & 57.8 \\ \cline{2-11} & MinMax & & & & 44.9 & 64 & 48.9 & 27.8 & 50.5 & 57.4 \\ \cline{1-1} & Percentile [16] & 8-8 & 42.4 & 12.4 & 44.6 & 63.5 & 48.4 & 28.4 & 50.4 & 57.8 \\ \cline{1-1} & **Q-YOLO** & & & & & **45.1** & **64.1** & **48.9** & **28** & **50.6** & **57.7** \\ \cline{1-1} \cline{2-11} & Percentile [16] & \multirow{2}{*}{4-4} & \multirow{2}{*}{21.2} & 6.33 & 19.4 & 35.6 & 19.1 & 14.6 & 28.3 & 17.2 \\ \cline{1-1} & **Q-YOLO** & & & & **28.8** & **46** & **30.5** & **15.4** & **33.8** & **38.7** \\ \hline \multirow{8}{*}{YOLOv7 [34]} & Real-valued & 32-32 & 295.2 & 104.7 & 50.8 & 69.6 & 54.9 & 34.9 & 55.6 & 66.3 \\ \cline{1-1} & MinMax & & & & 50.6 & 69.5 & 54.8 & 34.1 & 55.5 & 65.9 \\ \cline{1-1} & Percentile [16] & 8-8 & 73.8 & 27.2 & 50.5 & 69.3 & 54.6 & 34.5 & 55.4 & 66.2 \\ \cline{1-1} & **Q-YOLO** & & & & & **50.7** & **69.5** & **54.8** & **34.8** & **55.5** & **66.2** \\ \cline{1-1} \cline{2-11} & Percentile [16] & \multirow{2}{*}{4-4} & \multirow{2}{*}{36.9} & \multirow{2}{*}{14.1} & 16.7 & 26.9 & 17.8 & 10.3 & 20.1 & 20.2 \\ \cline{1-1} & **Q-YOLO** & & & & **37.3** & **55.0** & **40.9** & **21.5** & **41.4** & **53.0** \\ \hline \multirow{8}{*}{YOLOv7x [34]} & Real-valued & 32-32 & 25.5 & 189.9 & 52.5 & 71.0 & 56.6 & 36.6 & 57.3 & 68.0 \\ \cline{1-1} & MinMax & & & & 52.3 & 70.9 & 56.7 & 36.6 & 57.1 & 67.7 \\ \cline{1-1} & Percentile [16] & \multirow{2}{*}{8-8} & \multirow{2}{*}{142.6} & \multirow{2}{*}{49.5} & 52.0 & 70.5 & 56.1 & 36.0 & 56.8 & 67.9 \\ \cline{1-1} & **Q-YOLO** & & & & **52.4** & **70.9** & **56.5** & **36.2** & **57.2** & **67.8** \\ \cline{1-1} \cline{2-11} & Percentile [16] & \multirow{2}{*}{4-4} & \multirow{2}{*}{71.3} & \multirow{2}{*}{25.6} & 36.8 & 55.3 & 40.5 & 21.2 & 41.7 & 49.3 \\ \cline{1-1} & **Q-YOLO** & & & & **37.6** & **57.8** & **42.1** & **23.7** & **43.8** & **49.1** \\ \hline \end{tabular} \end{table} Table 1: A comparison of various quantization 
methods applied to YOLOv5s [33], YOLOv5m [33], YOLOv7 [34] and YOLOv7x [34], which have an increasing number of parameters, on the COCO val2017 dataset [19]. The term Bits (W-A) represents the bit-width of weights and activations. The best results are displayed in bold.

In our experiments, unless otherwise noted, we employed symmetric channel-wise quantization for weights and asymmetric layer-wise quantization for activations. To ensure a fair and unbiased comparison, we consistently applied the MinMax approach for quantizing weights. The input and output layers of a model are more sensitive to accuracy loss; in order to maintain the overall performance of the model, these layers are usually kept at their original precision, and we also follow this practice.

### Main results

We apply our proposed Q-YOLO to quantize YOLOv5s [33], YOLOv5m [33], YOLOv7 [34] and YOLOv7x [34], which have an increasing number of parameters. The results of the full-precision models, as well as the 8-bit and 4-bit quantized models using the MinMax, Percentile, and Q-YOLO methods, are all presented in Table 1, which also compares the quantization approaches and detection methods in terms of computational complexity and storage cost. Our Q-YOLO significantly accelerates computation and reduces storage requirements for various YOLO detectors. Similarly, in terms of detection accuracy, when using Q-YOLO to quantize the YOLOv5 series models to 8 bits, there is virtually no decline in the average precision (AP) value compared to the full-precision model. Even though the number of model parameters increases dramatically, quantizing the YOLOv7 series models to 8 bits results in only an extremely slight decrease in accuracy. When quantizing models to 4 bits, the accuracy experiences a significant loss due to the reduced expressiveness of the 4-bit integer representation. In particular, when using the MinMax quantization method, the model loses all its accuracy, and the Percentile method, which truncates extreme values at roughly the 99.99th percentile, fails to bring notable improvement. In contrast, Q-YOLO successfully identifies a more appropriate scale for quantization, resulting in a considerable enhancement compared to conventional Post-Training Quantization (PTQ) methods.

\begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline models & Bits & Symmetry & AP & AP\({}_{50}\) & AP\({}_{75}\) & AP\({}_{s}\) & AP\({}_{m}\) & AP\({}_{l}\) \\ \hline \multirow{4}{*}{YOLOv5s [33]} & Real-valued & - & 37.4 & 57.1 & 40.1 & 21.6 & 42.3 & 48.9 \\ \cline{2-10} & \multirow{2}{*}{6-6} & _Asymmetric_ & 35.9 & 55.7 & 38.3 & 20.4 & 41.0 & 47.6 \\ & & _Symmetric_ & 34.4 & 53.9 & 37.0 & 19.3 & 39.8 & 45.0 \\ \cline{2-10} & \multirow{2}{*}{4-4} & _Asymmetric_ & 14.0 & 26.2 & 13.5 & 7.9 & 17.6 & 19.0 \\ & & _Symmetric_ & 2.7 & 5.9 & 2.2 & 1.3 & 4.2 & 4.6 \\ \hline \multirow{4}{*}{YOLOv5m [33]} & Real-valued & - & 45.1 & 64.1 & 49.0 & 28.1 & 50.6 & 57.8 \\ \cline{2-10} & \multirow{2}{*}{6-6} & _Asymmetric_ & 44.0 & 63.1 & 47.7 & 28 & 49.9 & 56.8 \\ & & _Symmetric_ & 42.4 & 61.1 & 46.0 & 25.3 & 48.3 & 55.9 \\ \cline{2-10} & \multirow{2}{*}{4-4} & _Asymmetric_ & 28.8 & 46.0 & 30.5 & 15.4 & 33.8 & 38.7 \\ & & _Symmetric_ & 11.3 & 24.8 & 8.6 & 7.5 & 15.2 & 14.5 \\ \hline \hline \end{tabular} \end{table} Table 2: A comparison of symmetric and asymmetric activation value quantization. _Asymmetric_ indicates the use of an asymmetric activation quantization scheme, while _Symmetric_ refers to the symmetric quantization of activation values.
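For reference, the two schemes compared in Table 2 differ only in how the parameters of Eq. (4) are specialized. The sketch below contrasts them; the asymmetric branch follows Eq. (4) directly, while the symmetric formulas reflect standard practice rather than anything specified in this paper.

```
import numpy as np

def asymmetric_params(l, u, bits=8):
    """Affine parameters as in Eq. (4): the full [l, u] range is mapped
    onto the unsigned integer grid {0, ..., 2^b - 1}."""
    s = (u - l) / (2 ** bits - 1)
    zp = int(np.clip(round(-l / s), 0, 2 ** bits - 1))
    return s, zp

def symmetric_params(l, u, bits=8):
    """Symmetric variant: the zero-point is pinned to 0 on a signed grid,
    so both signs receive the same number of integer levels regardless of
    how skewed [l, u] is (standard practice; the formulas are ours)."""
    s = max(abs(l), abs(u)) / (2 ** (bits - 1) - 1)
    return s, 0

# With a SiLU-style range, the asymmetric grid covers [l, u] exactly,
# while the symmetric grid wastes every level below -0.2785.
print(asymmetric_params(-0.2785, 6.0))   # (~0.0246, 11)
print(symmetric_params(-0.2785, 6.0))    # (~0.0472, 0)
```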
### Ablation Study

#### 4.3.1 Symmetry in Activation Quantization.

Nowadays, quantization schemes are often subject to hardware limitations; for instance, NVIDIA [24] only supports symmetric quantization, as it is more inference-speed friendly. Therefore, discussing the symmetry of activation value quantization is meaningful. Table 2 presents a comparison of results using Q-YOLO for symmetric and asymmetric quantization, with the latter exhibiting higher accuracy. The negative activation values lie between -0.2785 and 0, while the range of positive activation values extends much further. If we force both the positive and negative sides to be represented with an equal number of integer levels, the accuracy naturally decreases. Moreover, this decline becomes more pronounced as the quantization bit-width decreases.

#### 4.3.2 Quantization Type.

In Table 3, we analyze the impact of different quantization types on the performance of the YOLOv5s and YOLOv5m models, considering three cases: quantizing only the weights (_only weights_), quantizing only the activation values (_only activation_), and quantizing both weights and activation values (_weights+activation_). The results demonstrate that, compared to quantizing the activation values, quantizing the weights consistently induces larger performance degradation. Additionally, the lower the number of bits, the greater the loss incurred by quantization. In YOLO, the weights learned by a neural network essentially represent the knowledge acquired by the network, making the precision of the weights crucial for model performance. In contrast, activation values serve as intermediate representations of input data propagating through the network, and can tolerate some degree of quantization error to a certain extent.

\begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline models & Bits & Quantization type & AP & AP\({}_{50}\) & AP\({}_{75}\) & AP\({}_{s}\) & AP\({}_{m}\) & AP\({}_{l}\) \\ \hline \multirow{8}{*}{YOLOv5s[33]} & Real-valued & - & 37.4 & 57.1 & 40.1 & 21.6 & 42.3 & 48.9 \\ \cline{2-9} & 6-32 & _only weights_ & 36.7(-0.7) & 56.6 & 39.3 & 20.9 & 41.4 & 48.4 \\ & 32-6 & _only activation_ & 36.6(-0.8) & 56.2 & 39.3 & 21.0 & 42.0 & 47.9 \\ & 6-6 & _weights+activation_ & 35.9 & 55.7 & 38.3 & 20.4 & 41.0 & 47.6 \\ \cline{2-9} & 4-32 & _only weights_ & 19.6(-16.3) & 35.6 & 19.3 & 11.3 & 22.5 & 25.7 \\ & 32-4 & _only activation_ & 30.6(-5.3) & 49.1 & 32.6 & 17.0 & 36.7 & 40.7 \\ & 4-4 & _weights+activation_ & 14.0 & 26.2 & 13.5 & 7.9 & 17.6 & 19 \\ \hline \multirow{8}{*}{YOLOv5m[33]} & Real-valued & - & 45.1 & 64.1 & 49.0 & 28.1 & 50.6 & 57.8 \\ \cline{2-9} & 6-32 & _only weights_ & 44.7(-0.4) & 63.9 & 48.6 & 28.0 & 50.3 & 57.3 \\ \cline{1-1} & 32-6 & _only activation_ & 44.3(-0.8) & 63.4 & 48.1 & 28.4 & 50.3 & 57.2 \\ \cline{1-1} & 6-6 & _weights+activation_ & 44 & 63.1 & 47.7 & 28.0 & 49.9 & 56.8 \\ \cline{1-1} \cline{2-9} & 4-32 & _only weights_ & 34.6(-9.4) & 54.0 & 37.3 & 20.0 & 39.2 & 45.3 \\ \cline{1-1} & 32-4 & _only activation_ & 37.7(-6.3) & 57.3 & 41.8 & 23.7 & 44.1 & 51.0 \\ \cline{1-1} & 4-4 & _weights+activation_ & 28.8 & 46.0 & 30.5 & 15.4 & 33.8 & 38.7 \\ \hline \hline \end{tabular} \end{table} Table 3: A comparison of quantization types. The term _only weights_ signifies that only the weights are quantized, _only activation_ indicates that only the activation values are quantized, and _weights+activation_ represents the quantization of both activation values and weights.
### Inference speed To practically verify the acceleration benefits brought about by our quantization scheme, we conducted inference speed tests on both GPU and CPU platforms. For the GPU, we selected the commonly used desktop GPU NVIDIA RTX 4090 [24] and the NVIDIA Tesla T4 [24], often used in computing centers for inference tasks. Due to our limited CPU resources, we only tested Intel products, the i7-12700H and i9-10900, both of which have x86 architecture. For deployment tools, we chose TensorRT [1] and OpenVINO [2]. The entire process involved converting the weights from the torch framework into an ONNX model with QDQ nodes and then deploying them onto specific inference frameworks. The inference mode was set to single-image serial inference, with an image size of 640x640. As most current inference frameworks only support symmetric quantization and 8-bit quantization, we had to choose a symmetric 8-bit quantization scheme, which resulted in an extremely small decrease in accuracy compared to asymmetric schemes. As shown in Table. 4, the acceleration is extremely significant, especially for the larger YOLOv7 model, wherein the speedup ratio when using a GPU even exceeded \(\mathbf{3}\times\) compared to the full-precision model. This demonstrates that applying quantization in real-time detectors can bring about a remarkable acceleration. ## 5 Conclusions Real-time object detection is crucial in various computer vision applications. However, deploying object detectors on resource-constrained platforms poses challenges due to high computational and memory requirements. This paper introduces Q-YOLO, a highly efficient one-stage detector built using a low-bit quantization method to address the performance degradation caused by activation distribution imbalance in traditional quantized YOLO models. Q-YOLO employs a fully end-to-end Post-Training Quantization (PTQ) pipeline with a \begin{table} \begin{tabular}{c c c c c c} \hline \hline \multirow{2}{*}{models} & \multirow{2}{*}{Bits} & \multirow{2}{*}{AP} & \multicolumn{2}{c}{GPU speed / _ms_} & \multicolumn{2}{c}{Intel CPU speed / _ms_} \\ \cline{3-6} & & & RTX 4090 & Tesla & T4 i7-12700H(x86) i9-10900(x86) \\ \hline \multirow{2}{*}{YOLOv5s} & 32-32 & 37.4 & 4.9 & 7.1 & 48.7 & 38.7 \\ \cline{2-6} & 8-8 & 37.3 & 3.0 & 4.5 & 33.6 & 23.4 \\ \hline \multirow{2}{*}{YOLOv7} & 32-32 & 50.8 & 16.8 & 22.4 & 269.8 & 307.8 \\ \cline{2-6} & 8-8 & 50.6 & 5.4 & 7.8 & 120.4 & 145.2 \\ \hline \hline \end{tabular} \end{table} Table 4: The inference speed of the quantized model is essential. The quantization scheme adopts uniform quantization, with single-image inference mode and an image size of 640*640. TensorRT [1]is selected as the GPU inference library, while OpenVINO [2] is chosen for the CPU inference library well-designed Unilateral Histogram-based (UH) activation quantization scheme. Extensive experiments conducted on the COCO dataset demonstrate the effectiveness of Q-YOLO. It outperforms other PTQ methods while achieving a favorable balance between accuracy and computational cost. This research significantly contributes to advancing the efficient deployment of object detection models on resource-limited edge devices, enabling real-time detection with reduced computational and memory requirements.
2310.10274
No Compromise in Solution Quality: Speeding Up Belief-dependent Continuous POMDPs via Adaptive Multilevel Simplification
Continuous POMDPs with general belief-dependent rewards are notoriously difficult to solve online. In this paper, we present a complete provable theory of adaptive multilevel simplification for the setting of a given externally constructed belief tree and MCTS that constructs the belief tree on the fly using an exploration technique. Our theory allows to accelerate POMDP planning with belief-dependent rewards without any sacrifice in the quality of the obtained solution. We rigorously prove each theoretical claim in the proposed unified theory. Using the general theoretical results, we present three algorithms to accelerate continuous POMDP online planning with belief-dependent rewards. Our two algorithms, SITH-BSP and LAZY-SITH-BSP, can be utilized on top of any method that constructs a belief tree externally. The third algorithm, SITH-PFT, is an anytime MCTS method that permits to plug-in any exploration technique. All our methods are guaranteed to return exactly the same optimal action as their unsimplified equivalents. We replace the costly computation of information-theoretic rewards with novel adaptive upper and lower bounds which we derive in this paper, and are of independent interest. We show that they are easy to calculate and can be tightened by the demand of our algorithms. Our approach is general; namely, any bounds that monotonically converge to the reward can be utilized to achieve significant speedup without any loss in performance. Our theory and algorithms support the challenging setting of continuous states, actions, and observations. The beliefs can be parametric or general and represented by weighted particles. We demonstrate in simulation a significant speedup in planning compared to baseline approaches with guaranteed identical performance.
Andrey Zhitnikov, Ori Sztyglic, Vadim Indelman
2023-10-16T10:59:22Z
http://arxiv.org/abs/2310.10274v2
No Compromise in Solution Quality: Speeding Up Belief-dependent Continuous POMDPs via Adaptive Multilevel Simplification

###### Abstract

Continuous POMDPs with general belief-dependent rewards are notoriously difficult to solve online. In this paper, we present a complete provable theory of adaptive multilevel simplification for the setting of a given externally constructed belief tree and MCTS that constructs the belief tree on the fly using an exploration technique. Our theory allows one to accelerate POMDP planning with belief-dependent rewards without any sacrifice in the quality of the obtained solution. We rigorously prove each theoretical claim in the proposed unified theory. Using the general theoretical results, we present three algorithms to accelerate continuous POMDP online planning with belief-dependent rewards. Our two algorithms, SITH-BSP and LAZY-SITH-BSP, can be utilized on top of any method that constructs a belief tree externally. The third algorithm, SITH-PFT, is an anytime MCTS method that permits plugging in any exploration technique. All our methods are guaranteed to return exactly the same optimal action as their unsimplified equivalents. We replace the costly computation of information-theoretic rewards with novel adaptive upper and lower bounds which we derive in this paper and which are of independent interest. We show that they are easy to calculate and can be tightened on demand by our algorithms. Our approach is general; namely, any bounds that monotonically converge to the reward can be easily plugged in to achieve significant speedup without any loss in performance. Our theory and algorithms support the challenging setting of continuous states, actions, and observations. The beliefs can be parametric or general and represented by weighted particles. We demonstrate in simulation a significant speedup in planning compared to baseline approaches with guaranteed identical performance.

Decision-making under Uncertainty, Belief Space Planning, POMDP, Belief-dependent Rewards, Planning with Imperfect Information

## 1 Introduction

Efficiently solving Partially Observable Markov Decision Processes (POMDPs) implies enabling autonomous agents and robots to plan under uncertainty (Smith and Simmons, 2004; Kurniawati et al., 2008; Silver and Veness, 2010; Ye et al., 2017; Sunberg and Kochenderfer, 2018; Garg et al., 2019). Typical sources of uncertainty are imprecise actions, the sensor type and its noise, imprecise models, and the agent's unknown surroundings. However, solving a POMDP is notoriously hard. Specifically, it was proven to be PSPACE-complete (Papadimitriou and Tsitsiklis, 1987). The actual POMDP state is hidden. Instead, at each time step, the robot must decide which action to take based on the distribution over the state, given the corresponding history of performed actions and observations received so far. Such a distribution is called a belief. In a planning session, the robot has to take into account all possible future actions interleaved with possible observations. Each such future history, of the length of a predefined horizon, defines a lace of future beliefs (blue lace in Fig. 1) and a corresponding cumulative reward named the return. Solving a POMDP in the most common sense means finding a mapping from beliefs to actions, called a policy, which maximizes the expected return. Earlier _offline_ solvers such as (Smith and Simmons, 2004; Kurniawati et al., 2008) are applicable to small or moderately sized discrete POMDPs.
These methods require passage over all possible states and observations (Kochenderfer et al., 2022), since they are built on value iteration of \(\alpha\)-vectors, so-called full-width methods (Silver and Veness, 2010). More recent _online_ solvers are suitable for POMDPs with large but discrete action, state, and observation spaces (Ye et al., 2017; Silver and Veness, 2010). Still, continuous state, action, and observation spaces remain an open problem (Sunberg and Kochenderfer, 2018). Another challenging aspect of solving POMDPs, and the subject of interest in this paper, is general belief distributions represented by weighted particles. Further in the manuscript, we will regard the combination of both nonparametric beliefs and a fully continuous POMDP as the **nonparametric fully continuous** setting. In a fully continuous setting with parametric or general beliefs, one must resort to sampling of possible future actions and observations. In a sampled form, this abundance of possible realizations of action-observation pairs constitutes a _belief tree_. Building the full belief tree is intractable, since each node in the tree repeatedly branches with all possible actions and all possible observations, as illustrated in Fig. 1. The number of nodes grows exponentially with the horizon. This problem is known as the _curse of history_.

The reward function in a classical POMDP is assumed to have a specific structure, namely, to be the expectation with respect to the belief of a state-dependent reward function. While this simplifies the solution, it does not support more general, belief-dependent reward functions, such as information-theoretic rewards. However, POMDP planning with belief-dependent rewards is essential for various problems in robotics and Artificial Intelligence (AI), such as informative planning (Hollinger and Sukhatme, 2014), active localization (Burgard et al., 1997), active Simultaneous Localization and Mapping (SLAM) (Stachniss et al., 2005), and Belief Space Planning (BSP) (Indelman et al., 2015; Van Den Berg et al., 2012; Platt et al., 2010). Yet, POMDP planning with general belief-dependent rewards, in particular when the beliefs are represented by particles, exacerbates the computational challenge even further. For example, information-theoretic rewards such as differential entropy are computationally expensive. Let us focus for the moment on differential entropy. Even if the belief is parametric but not Gaussian, calculating the exact value of differential entropy involves intractable integrals. This fact further motivates the use of a weighted-particle representation of the belief. In this case, differential entropy can be estimated, for instance, by Kernel Density Estimation (KDE) (Fischer and Tas, 2020) or a model-based estimator (Boers et al., 2010). However, these estimators have quadratic cost in the number of samples and are usually the bottleneck of planning algorithms. The reason is that this increased computational burden is incurred at every node of the belief tree. Importantly, the estimation errors of these estimators with respect to the differential entropy of the theoretical belief are out of reach, due to the unavailability of both the theoretical belief and its entropy. Yet, common estimators do converge, owing to the (almost sure) convergence of the particle-represented belief to the theoretical belief (Crisan and Doucet, 2002).
This prompts us to use **as many belief particles as possible** to get closer to the theoretical belief. Increasing the number of belief particles, however, severely impacts the planning time. In this paper, we accelerate online decision making in the setting of nonparametric fully continuous POMDPs with general belief-dependent rewards. Crucially, the planning performance of our accelerated approach is the same as that of the baseline approaches without our acceleration. Before stating our contributions, we review the most relevant works in this context. ### Related Work Allowing general belief-dependent rewards in a POMDP while solving such a problem efficiently is a long-standing effort. Some previous seminal works, such as \(\rho\)-POMDP (Araya et al., 2010; Fehr et al., 2018) as well as (Dressel and Kochenderfer, 2017), have focused on discrete domains and small-sized spaces and have tackled offline solvers. Furthermore, these approaches are limited to piecewise linear and convex or Lipschitz-continuous rewards. Another work, named POMDP-IR (Spaan et al., 2015), suggests an interesting framework for a specific form of information rewards, involving manipulations of the action space. Still, in (Araya et al., 2010; Fehr et al., 2018; Dressel and Kochenderfer, 2017) the state, action, and observation spaces are discrete and small sized. Another line of works is Belief Space Planning (BSP) (Platt et al., 2010; Van Den Berg et al., 2012; Indelman et al., 2015). These approaches are designed for fully continuous POMDPs, but are limited to Gaussian beliefs. In striking contrast, our approach is centered in the more challenging fully continuous domain with nonparametric general beliefs represented by particles. One way to tackle a nonparametric fully continuous setting with a belief-dependent reward is to reformulate the POMDP as a Belief-MDP (BMDP). On top of this reformulation, one can utilize MDP sampling-based methods such as Sparse Sampling (SS), proposed by Kearns et al. (2002). However, this algorithm still suffers from the curse of history, such that increasing the horizon is still problematic. Monte Carlo Tree Search (MCTS) made a significant breakthrough in overcoming the curse of history by building the belief tree incrementally and exploring only the "promising" parts of the tree using an exploration strategy. An inherent part of MCTS-based algorithms is the exploration strategy, designed to balance exploration and exploitation while building the belief tree. The most widely used exploration technique is the Upper Confidence Bound (UCB) (Kocsis and Szepesvari, 2006). MCTS algorithms assume that calculating the reward over a belief node does not pose any computational difficulty. Information-theoretic rewards violate this assumption. When the reward is a general function of the belief, the origin of the computational burden shifts towards the reward calculation. Moreover, belief-dependent rewards require the complete set of belief particles at each node in the belief tree. Therefore, algorithms such as POMCP (Silver and Veness, 2010) and its numerous successors are inapplicable, since each time they simulate a single particle down the tree when expanding it. DESPOT-based algorithms behave similarly (Ye et al., 2017), with DESPOT-\(\alpha\) as an exception (Garg et al., 2019). DESPOT-\(\alpha\) simulates a complete set of particles. However, the DESPOT-\(\alpha\) tree is built using \(\alpha\)-vectors, such that they are an indispensable part of the algorithm.
The standard \(\alpha\)-vectors technique requires the reward to be state-dependent, with the reward over the belief merely being an expectation over the state reward. In other words, DESPOT-\(\alpha\) does not support belief-dependent rewards, since they contradict the application of \(\alpha\)-vectors. The only approach posing no restrictions on the structure of the belief-dependent reward and not suffering from limiting assumptions is the Particle Filter Tree (PFT). The idea behind PFT is to apply MCTS over the Belief-MDP (BMDP). The authors of (Sunberg and Kochenderfer, 2018) augmented PFT with Double Progressive Widening (DPW) to support continuous spaces in terms of actions, states, and observations, and coined the name PFT-DPW. PFT-DPW utilizes the UCB strategy and maintains a complete belief particle set at each belief tree node. Recently, Fischer and Tas (2020) presented the Information Particle Filter Tree (IPFT), a method to incorporate information-theoretic rewards into PFT. IPFT simulates small subsets of particles sampled from the root of the belief tree and averages entropies calculated over these subsets, enjoying a fast runtime. However, differential entropy estimated from a small-sized particle set can be significantly biased. This bias is unpredictable and unbounded and, therefore, severely impairs the performance of the algorithm. In other words, celerity comes at the expense of quality. Oftentimes, the policy defined by this algorithm is very far from optimal given a time budget. Fischer and Tas (2020) provide guarantees solely for the asymptotic case, i.e., when the number of state samples (particles) tends to infinity. Asymptotically, their algorithm behaves precisely as PFT-DPW in terms of running speed and performance. Yet, in practice, the performance of IPFT in terms of optimality can degrade severely compared to PFT-DPW. Moreover, (Fischer and Tas, 2020) does not provide any comparison of IPFT against PFT-DPW with an information-theoretic reward. Prompted by this insight, we chose PFT-DPW as our _baseline_ approach, which we aim to accelerate. In contrast to IPFT, designed specifically for differential entropy, our approach is suitable for any belief-dependent reward and explicitly guarantees an _identical_ solution to PFT-DPW with an information-theoretic reward, for _any_ size of the particle set representing the belief and serving as input to PFT-DPW. The computational burden incurred by the complexity of POMDP planning inspired many research works to focus on approximations of the problem on top of existing solvers, e.g., multilevel successive approximation of a motion model (Hoerger et al., 2019), lazy belief extraction on top of a particle-based representation (Hoerger and Kurniawati, 2021), linearity-based solvers (Hoerger et al., 2020), and averaging differential entropy estimated from tiny subsets of particles (Fischer and Tas, 2020). Typically, these works provide only asymptotic guarantees (Hoerger et al., 2019; Fischer and Tas, 2020), or no guarantees at all. In addition, many of these approximations leverage the assumption that the belief-dependent reward is an averaged state-dependent reward, e.g., (Hoerger et al., 2019; Hoerger and Kurniawati, 2021), and therefore cannot accommodate belief-dependent rewards with a general structure (e.g., they do not support information-theoretic rewards such as differential entropy).
Recently, the novel paradigm of _simplification_ has appeared in the literature (Zhitnikov and Indelman, 2022; Sztyglic and Indelman, 2022; Elimelech and Indelman, 2022; Shienman and Indelman, 2022). Simplification is concerned with carefully replacing nonessential elements of the decision-making problem and quantifying the impact of this relaxation. Specifically, simplification methods are accompanied by stringent guarantees. A prominent aspect of the simplification paradigm is the usage of bounds over the reward or the objective function. As opposed to approximations, the simplification framework always keeps some sort of connection to the original unsimplified problem and thereby provides deterministic guarantees relative to the given solver. Although various objective-function bounds have been practiced in (Ye et al., 2017; Smith and Simmons, 2004; Walsh et al., 2010; Kochenderfer et al., 2022), these techniques are not applicable in the realm of belief-dependent rewards and a fully continuous setting. In addition, these approaches commonly assume that the state-dependent reward is trivially bounded by some constant.

Figure 1: Schematic visualization of the belief tree and the in-place simplification. The superscript in this visualization denotes the index in the belief tree. By \(b^{s}\) we denote the simplified version of the belief \(b\).

### Contributions This work is about accelerating online decision making while obtaining exactly the same solution as without acceleration. Specifically, we contribute an adaptive multilevel simplification framework considering belief-dependent rewards, possibly nonparametric beliefs, and continuous state, observation, and action spaces. We further develop particular simplification adaptation mechanisms for two main settings: a given belief tree, or one whose construction is not coupled with the solution (e.g., SS), and an anytime setting (MCTS) where the belief tree construction is coupled with the solution through an exploration strategy (e.g., UCB). In both cases, we guarantee no sacrifice in planning performance alongside a significant planning speedup. This paper is an extension of the work presented in (Sztyglic and Indelman, 2022), which proposed novel adaptive bounds on the differential entropy estimator of (Boers et al., 2010) and introduced the simplification paradigm in the context of a given belief tree. For clarity, we list the contributions of this work in the order they are presented in the manuscript. 1. Building on **any** adaptive, monotonically convergent bounds over a belief-dependent reward, we present in this paper a **provable** general theory of adaptive multilevel simplification with deterministic performance guarantees. 2. For the case of a given belief tree, as in Sparse Sampling, we develop two algorithms, Simplified Information Theoretic Belief Space Planning (SITH-BSP) and a faster variant, LAZY-SITH-BSP. Both are complementary to any POMDP solver that does not couple belief tree construction with an objective estimation, while exhibiting a significant speedup in planning with guaranteed identical planning performance. 3. In the context of MCTS, we embed the theory of simplification into the PFT-DPW algorithm and introduce SITH-PFT. We provide stringent guarantees that exactly the same belief tree is constructed by SITH-PFT and PFT-DPW. We focus on the UCB exploration technique, but with minor adjustments, an MCTS with any exploration method will be suitable for acceleration. 4.
We derive novel lightweight adaptive bounds on the differential entropy estimator of (Boers et al., 2010) and prove that the presented bounds are monotonic and convergent. Moreover, these bounds can be incrementally tightened. We believe these bounds are of interest on their own. The bounds are calculated using the simplified belief (see Fig. 1). 5. We present extensive simulations that exhibit a significant improvement in planning time without any sacrifice in planning performance. ### Paper Organization The remainder of this paper is structured as follows. Section 2 provides background in terms of POMDPs, the theoretical objective, and commonly used objective estimators. We devote Section 3 to our general adaptive multilevel simplification framework. In Section 4 we consider the given belief tree setting, in which the belief tree construction is not coupled with the solution. In Section 5 we delve into the MCTS approach in the context of our multilevel simplification. In Section 6 we consider a specific simplification and develop novel bounds on an information-theoretic reward function. Finally, Section 7 presents simulations and results corroborating our ideas. In order not to disrupt the flow of the presentation, proofs are deferred to the appropriate Appendices. ## 2 Background In this section we present the necessary background. ### POMDPs with Belief-dependent Rewards A POMDP is a tuple \[\langle\mathcal{X},\mathcal{A},\mathcal{Z},T,\mathcal{O},\rho,\gamma,b_{0}\rangle \tag{1}\] where \(\mathcal{X},\mathcal{A},\mathcal{Z}\) are the state, action, and observation spaces, respectively. In this paper we consider continuous state, observation, and action spaces. \(T(x,a,x^{\prime})=\mathbb{P}_{T}(x^{\prime}|x,a)\) is the stochastic transition model from the past state \(x\) to the subsequent state \(x^{\prime}\) through action \(a\), \(\mathcal{O}(z,x)=\mathbb{P}_{Z}(z|x)\) is the stochastic observation model, \(\gamma\in(0,1]\) is the discount factor, \(b_{0}\) is the belief over the initial state (prior), and \(\rho\) is the reward function. Let \(h_{k}=\{b_{0},a_{0},z_{1},\ldots,a_{k-1},z_{k}\}\) denote the _history_ of actions and observations obtained by the agent up to time instance \(k\), together with the prior belief. The posterior belief at time instant \(k\) is given by \(b_{k}(x_{k})=\mathbb{P}(x_{k}|h_{k})\). In our generalized formulation, the reward is a function of two subsequent beliefs, an action, and an observation: \[\rho(b_{k},a_{k},z_{k+1},b_{k+1})=(1-\lambda)r^{x}(b_{k},a_{k},b_{k+1})+\lambda r^{I}(b_{k},a_{k},z_{k+1},b_{k+1}), \tag{2}\] where \(\lambda\geq 0\). The first reward component \(r^{x}(b_{k},a_{k},b_{k+1})\) is the expectation over the state- and action-dependent reward \(r(x_{k},a_{k})\) or \(r(a_{k},x_{k+1})\). Correspondingly, these two possibilities yield \[r^{x}(b_{k},a_{k})=\underset{x_{k}\sim b_{k}}{\mathbb{E}}[r(x_{k},a_{k})]\approx\frac{1}{n_{x}}\sum_{\xi=1}^{n_{x}}r(x_{k}^{\xi},a_{k}), \tag{4}\] or \[r^{x}(a_{k},b_{k+1})=\underset{x_{k+1}\sim b_{k+1}}{\mathbb{E}}[r(a_{k},x_{k+1})]\approx\frac{1}{n_{x}}\sum_{\xi=1}^{n_{x}}r(a_{k},x_{k+1}^{\xi}), \tag{5}\] where the expectation is commonly approximated by a sample mean using \(n_{x}\) samples of the belief. The second reward component \(r^{I}(b_{k},a_{k},z_{k+1},b_{k+1})\) is an information-theoretic reward weighted by \(\lambda\), which in general can depend on consecutive beliefs and the elements relating them, e.g., information gain or specific estimators such as (Boers et al., 2010) for nonparametric beliefs represented by particles.
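As a small illustration of the sample-mean approximation in (4), the following sketch evaluates the state-dependent reward component over a weighted particle belief. The helper name `r_x` and the quadratic toy reward are ours; with uniform weights the weighted mean reduces to the plain average of (4).

```
import numpy as np

def r_x(particles, weights, action, r):
    """Sample-based approximation of r^x(b_k, a_k) = E_{x~b_k}[ r(x, a_k) ],
    cf. (4): a weighted mean of the state-dependent reward over particles."""
    return float(np.dot(weights, [r(x, action) for x in particles]))

# Toy usage: a hypothetical quadratic reward penalizing distance to a goal.
rng = np.random.default_rng(7)
xs = rng.normal(size=(200, 2))
ws = np.full(200, 1.0 / 200)
goal = np.array([1.0, 1.0])
print(r_x(xs, ws, None, lambda x, a: -float(np.sum((x - goal) ** 2))))
```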
For instance, in Section 6.1 we consider the entropy estimator introduced by Boers et al. (2010). As will be seen in the sequel, although the theoretical entropy is only a function of a single belief \(b_{k+1}\), the mentioned estimator utilizes \(b_{k}\), \(a_{k}\), \(z_{k+1}\) and \(b_{k+1}\); hence the second reward component, \(r^{I}(b_{k},a_{k},z_{k+1},b_{k+1})\), depends on these quantities. The _policy_ is a mapping from the belief space to the action space, \(a_{k}=\pi_{k}(b_{k})\). Let \(\pi_{\ell+}\) be a shorthand for the policy for \(k+L-\ell\) consecutive steps ahead starting at index \(\ell\), namely \(\pi_{\ell:k+L-1}\) for \(\ell\geq k\). ### Theoretical Objective The decision-making goal is to find an optimal policy \(\pi_{k+}\) maximizing the value function \[V(b_{k},\pi_{k+})=\underset{z_{k+1:k+L}}{\mathbb{E}}\Big{[}\sum_{\ell=k}^{k+L-1}\gamma^{\ell-k}\rho(b_{\ell},\pi_{\ell}(b_{\ell}),z_{\ell+1},b_{\ell+1})\Big{]}, \tag{6}\] which admits the Bellman form \[V(b_{\ell},\pi_{\ell+})=\underset{z_{\ell+1}}{\mathbb{E}}\big{[}\rho(b_{\ell},\pi_{\ell}(b_{\ell}),z_{\ell+1},b_{\ell+1})+\gamma V(b_{\ell+1},\pi_{(\ell+1)+})\big{]}. \tag{7}\] The corresponding action-value function under a future policy \(\pi_{(\ell+1)+}\) is \[Q(b_{\ell},\{a_{\ell},\pi_{(\ell+1)+}\})=\underset{z_{\ell+1}}{\mathbb{E}}\big{[}\rho(b_{\ell},a_{\ell},z_{\ell+1},b_{\ell+1})+\gamma V(b_{\ell+1},\pi_{(\ell+1)+})\big{]}. \tag{8}\] Due to the reward structure (2), the value and action-value functions decompose into a state-dependent component and an information component weighted by \(\lambda\), e.g., \[Q(\cdot)=(1-\lambda)Q^{x}(\cdot)+\lambda Q^{I}(\cdot). \tag{10}\] ### Objective Estimators In practice, the expectations above are approximated from sampled observations. The decomposition carries over to the estimated quantities defined below, \[\hat{Q}(\cdot)=(1-\lambda)\hat{Q}^{x}(\cdot)+\lambda\hat{Q}^{I}(\cdot). \tag{12}\] With \(n_{z}\) sampled observations per belief-action node, as in Sparse Sampling, the estimators read \[\hat{V}(b_{\ell},\pi_{\ell+})=\frac{1}{n_{z}}\sum_{i=1}^{n_{z}}\big{[}\rho(b_{\ell},\pi_{\ell}(b_{\ell}),b^{i}_{\ell+1})+\gamma\hat{V}(b^{i}_{\ell+1},\pi_{(\ell+1)+})\big{]}, \tag{13}\] \[\hat{Q}(b_{\ell},\{a_{\ell},\pi_{(\ell+1)+}\})=\frac{1}{n_{z}}\sum_{i=1}^{n_{z}}\big{[}\rho(b_{\ell},a_{\ell},b^{i}_{\ell+1})+\gamma\hat{V}(b^{i}_{\ell+1},\pi_{(\ell+1)+})\big{]}. \tag{14}\] ### Monte Carlo Tree Search MCTS incrementally constructs a policy tree by executing multiple simulations. Each simulation adds a single belief node to the belief tree or terminates at a terminal state or action. To steer towards deeper and more beneficial simulations, MCTS selects an action \(a^{*}\) at each belief node according to the rule \(a^{*}=\underset{a\in\mathcal{A}}{\arg\max}\) UCB\((ha)\), where \[\text{UCB}(ha)=\hat{Q}(ha)+c\cdot\sqrt{\nicefrac{{\log(N(h))}}{{N(ha)}}}, \tag{15}\] \(N(h)\) is the visitation count of the belief node defined by history \(h\), \(N(ha)\) is the visitation count of the belief-action node, \(c\) is the exploration parameter, and \(\hat{Q}(ha)\) is the estimator of the action-value function \(Q\) for node \(ha\) obtained from simulations. When an action is selected, a question arises whether to open a new branch in terms of an observation and a posterior belief, or to continue through one of the existing branches. In continuous action and observation spaces, this can be resolved by the Double Progressive Widening (DPW) technique (Sunberg and Kochenderfer, 2018; Auger et al., 2013). If a new branch is expanded, an observation \(o\) is created from a state \(x\) drawn from the belief \(b\). Let the return corresponding to lace \(i\), starting from some belief \(b_{\ell}^{i}\) at depth \(\ell-k\), be \(g(b_{\ell}^{i})\) for \(\ell\in[k:k+L-1]\). More specifically, suppose the new posterior belief was expanded at depth \(d^{i}\) of the belief tree, such that \(d^{i}>\ell\).
We have that \[g(b_{\ell}^{i},a_{\ell})=\underbrace{\rho(b_{\ell}^{i},a_{\ell},b_{\ell+1}^{i})+\sum_{l=\ell+1}^{k+d^{i}}\gamma^{l-\ell}\rho(b_{l}^{i},\pi_{l}^{*,i}(b_{l}^{i}),b_{l+1}^{i})}_{\text{belief tree}}+ \tag{16}\] \[+\underbrace{\sum_{l=k+d^{i}}^{k+L-1}\gamma^{l-\ell}\rho(b_{l}^{i},\mu(b_{l}^{i}),b_{l+1}^{i})}_{\text{rollout}}, \tag{17}\] where \(L\) is the horizon (tree depth), \(\pi^{*,i}\) is the optimal tree policy, depending on the number of the simulation \(i\) through \(\hat{Q}\) and the visitation counts in (15), and \(\mu\) is the rollout policy. Importantly, in a rollout the observations are drawn randomly, and since we are in continuous spaces, the beliefs in the rollouts are unique. A new belief node is added for \(l=k+d^{i}\). If, due to DPW, no new belief node was added to the belief tree, the rollout depicted by (17) is absent and the return sample takes the form \[g(b_{\ell}^{i},a_{\ell})=\rho(b_{\ell}^{i},a_{\ell},b_{\ell+1}^{i})+\sum_{l=\ell+1}^{k+L-1}\gamma^{l-\ell}\rho(b_{l}^{i},\pi_{l}^{*,i}(b_{l}^{i}),b_{l+1}^{i}). \tag{18}\] The estimate for (8) under the optimal future policy is assembled from laces according to \[\hat{Q}(h_{\ell}a_{\ell})=\frac{1}{N(h_{\ell}a_{\ell})}\sum_{i=1}^{N(h_{\ell}a_{\ell})}g(b_{\ell}^{i},a_{\ell}), \tag{19}\] where each reward \(\rho(b,a,b^{\prime})\) in the belief tree appears a number of times equal to the visitation count of the node \(b^{\prime}\), namely \(N(h^{\prime})\). We note that for both estimators (14) and (19), the formulation in (12) holds. Now we move to the details of our general approach. ## 3 Approach This section is the core of our general approach. We first describe bounds over the theoretical and the estimated objectives. We then endow the reward bounds with discrete simplification levels. Finally, instead of calculating the rewards, we calculate the bounds over them and, if they are not tight enough, we tighten them; this way we can make faster decisions with bounds over the objectives instead of the objectives themselves. ### Theoretical Simplification Formulation Simplification is any kind of relaxation of the POMDP tuple (1) elements, accompanied by guarantees that quantify the (worst-case or potential) impact of a particular simplification technique on the planning performance. In this paper, we consider a specific manifestation of this general simplification framework, which will become clear shortly. As mentioned, we aim to simplify the calculation of the belief-dependent reward \(\rho(b_{\ell},a_{\ell},z_{\ell+1},b_{\ell+1})\). Namely, the original reward \(\rho\) is bounded using the simplified belief \(b^{s}\) instead of the original belief \(b\). This operation materializes in the form of the following inequality: \[\underline{\rho}(b_{\ell}^{s},b_{\ell},a_{\ell},z_{\ell+1},b_{\ell+1},b_{\ell+1}^{s})\leq\rho(b_{\ell},a_{\ell},z_{\ell+1},b_{\ell+1})\leq\overline{\rho}(b_{\ell}^{s},b_{\ell},a_{\ell},z_{\ell+1},b_{\ell+1},b_{\ell+1}^{s}), \tag{20}\] where \(\underline{\rho}\) and \(\overline{\rho}\) are the corresponding lower and upper bounds, respectively. The superscript \(s\) denotes the fact that the corresponding belief was simplified, as we depict in Fig. 1. Notice that in (20) the pair of consecutive beliefs, \(b_{\ell}\) and \(b_{\ell+1}\), can be simplified differently.
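Before introducing the compact notation of the next paragraph, the following sketch illustrates how (20) is meant to be used computationally: the expensive reward over the original beliefs is never evaluated, only the bounds over the simplified beliefs are. The sub-sampling operator and all function names here are our illustrative assumptions; the concrete bounds are derived in Section 6.

```
def simplify_belief(particles, weights, level, n_max):
    """One possible simplification operator (assumed here): keep a prefix
    of the particle set that grows with the level; level == n_max returns
    the complete belief."""
    m = max(1, (level * len(particles)) // n_max)
    w = weights[:m] / weights[:m].sum()
    return particles[:m], w

def reward_bounds(b, a, b_next, level, n_max, lower_fn, upper_fn):
    """Interval over the reward as in (20): both bounds are evaluated on
    the simplified beliefs only; lower_fn/upper_fn are placeholders for
    bounds such as those derived in Section 6."""
    bs = simplify_belief(*b, level, n_max)
    bs_next = simplify_belief(*b_next, level, n_max)
    return lower_fn(bs, a, bs_next), upper_fn(bs, a, bs_next)
```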
Henceforth, in order to avoid unnecessary clutter, we will omit the dependence on the observation and denote the bounds over the reward using simplified beliefs as follows: \[\underline{\rho}^{s}(b,a,b^{\prime})\leq\rho(b,a,b^{\prime})\leq\overline{\rho}^{s}(b,a,b^{\prime}). \tag{21}\] It should be stressed that, since in the belief tree \(b^{\prime}\) always has a single parent \(b\), the reader should think about such a reward as the one corresponding to \(b^{\prime}\). Since the reward depends on a pair of consecutive beliefs, the same belief is simplified differently for the successive rewards \(\rho(b,a,b^{\prime})\) and \(\rho(b^{\prime},a^{\prime},b^{\prime\prime})\). A key requirement is a reduced computational complexity of these bounds compared to the complexity of the original reward. Instead of calculating the expensive reward \(\rho(b,a,b^{\prime})\) for each pair of beliefs \(b,b^{\prime}\), we first obtain the corresponding simplified beliefs \(b^{s}\) and \(b^{\prime s}\), as illustrated in Fig. 1, and then formulate the bounds \(\underline{\rho}^{s}\) and \(\overline{\rho}^{s}\) from (21). However, we note that the form (21) is actually more general and not limited to belief simplification. We further formulate bounds over the value function and the action-value function, both under the optimal policy. In fact, our bounds hold under an arbitrary policy; we narrow the discussion to optimal policies solely for clarity of the explanation, and this is not a limitation of our approach. Suppose inequality (21) holds for any possible pair of consecutive beliefs, e.g., these are analytical bounds, as opposed to (Zhitnikov and Indelman, 2022). A direct consequence of this fact, alongside the structure of (6), is that \[\underline{V}(b_{\ell},\pi_{\ell+}^{*})\leq V(b_{\ell},\pi_{\ell+}^{*})\leq\overline{V}(b_{\ell},\pi_{\ell+}^{*}) \tag{22}\] holds for any belief \(b_{\ell}\) and \(\ell\in[k,k+L-1]\). Using the Bellman representation as in (7), the bounds (22) take the form \[\overline{V}(b_{\ell},\pi^{*}_{\ell+})=\underset{z_{\ell+1}}{\mathbb{E}}\big{[}\overline{\rho}^{s}(b_{\ell},\pi^{*}_{\ell}(b_{\ell}),b_{\ell+1})+\gamma\overline{V}(b_{\ell+1},\pi^{*}_{(\ell+1)+})\big{]}, \tag{23}\] \[\underline{V}(b_{\ell},\pi^{*}_{\ell+})=\underset{z_{\ell+1}}{\mathbb{E}}\big{[}\underline{\rho}^{s}(b_{\ell},\pi^{*}_{\ell}(b_{\ell}),b_{\ell+1})+\gamma\underline{V}(b_{\ell+1},\pi^{*}_{(\ell+1)+})\big{]}, \tag{24}\] and the action-value function is bounded analogously, \[\underline{Q}(b_{\ell},a_{\ell})\leq Q(b_{\ell},a_{\ell})\leq\overline{Q}(b_{\ell},a_{\ell}). \tag{25}\] If only the information component of the reward is bounded, the theoretical bounds decompose as \[\overline{Q}(\cdot)=(1-\lambda)Q^{x}(\cdot)+\lambda\overline{Q}^{I}(\cdot), \tag{26}\] \[\underline{Q}(\cdot)=(1-\lambda)Q^{x}(\cdot)+\lambda\underline{Q}^{I}(\cdot). \tag{27}\] In practice, we bound the sample approximations instead. Replacing each reward in the estimators of Section 2 with its bounds from (21) yields bounds over the estimated value function, \[\underline{\hat{V}}(b_{\ell},\pi^{*}_{\ell+})\leq\hat{V}(b_{\ell},\pi^{*}_{\ell+})\leq\overline{\hat{V}}(b_{\ell},\pi^{*}_{\ell+}), \tag{28}\] and over the estimated action-value function, \[\underline{\hat{Q}}(b_{\ell},\{a_{\ell},\pi^{*}_{(\ell+1)+}\})\leq\hat{Q}(b_{\ell},\{a_{\ell},\pi^{*}_{(\ell+1)+}\})\leq\overline{\hat{Q}}(b_{\ell},\{a_{\ell},\pi^{*}_{(\ell+1)+}\}). \tag{29}\] For the estimators (13) and (14), these bounds read \[\overline{\hat{V}}(b_{\ell},\pi^{*}_{\ell+})=\frac{1}{n_{z}}\sum_{i=1}^{n_{z}}\big{[}\overline{\rho}^{s}(b_{\ell},\pi^{*}_{\ell}(b_{\ell}),b^{i}_{\ell+1})+\gamma\overline{\hat{V}}(b^{i}_{\ell+1},\pi^{*}_{(\ell+1)+})\big{]}, \tag{30}\] \[\overline{\hat{Q}}(b_{\ell},\{a_{\ell},\pi^{*}_{(\ell+1)+}\})=\frac{1}{n_{z}}\sum_{i=1}^{n_{z}}\overline{\rho}^{s}(b_{\ell},a_{\ell},b^{i}_{\ell+1})+\gamma\frac{1}{n_{z}}\sum_{i=1}^{n_{z}}\overline{\hat{V}}(b^{i}_{\ell+1},\pi^{*}_{(\ell+1)+}), \tag{31}\] with the lower bounds obtained by substituting \(\underline{\rho}^{s}\) and \(\underline{\hat{V}}\). The bounds for the MCTS estimator (19) are assembled analogously from the bounded returns of the simulation laces, \[\overline{\hat{Q}}(h_{\ell}a_{\ell})=\frac{1}{N(h_{\ell}a_{\ell})}\sum_{i=1}^{N(h_{\ell}a_{\ell})}\overline{g}(b_{\ell}^{i},a_{\ell}), \tag{32}\] where \(\overline{g}\) replaces each reward in (16)-(18) with its bound from (21). Moreover, if only the information component of the reward is bounded, it follows that \[\overline{\hat{Q}}(b_{k},a_{k})=(1-\lambda)\hat{Q}^{x}(b_{k},a_{k})+\lambda\overline{\hat{Q}}^{I}(b_{k},a_{k}), \tag{33}\] \[\underline{\hat{Q}}(b_{k},a_{k})=(1-\lambda)\hat{Q}^{x}(b_{k},a_{k})+\lambda\underline{\hat{Q}}^{I}(b_{k},a_{k}). \tag{34}\] **Impact of the Information Weight \(\lambda\).** Let us linger on the \(\lambda\) from eqs. (10) and (12). It is hard to predict how the objective will behave for various values of \(\lambda\).
Nevertheless, if the bounds are over the belief-dependent element of the reward, by subtracting (34) from (33) we arrive at \[\overline{\hat{Q}}(b_{k},a_{k})-\underline{\hat{Q}}(b_{k},a_{k})=\lambda\Big{(}\overline{\hat{Q}}^{I}(b_{k},a_{k})-\underline{\hat{Q}}^{I}(b_{k},a_{k})\Big{)}. \tag{35}\] The width of the bounds is monotonically increasing with \(\lambda\). Naturally, the same happens to the theoretical analogs of these bounds, displayed in eqs. (26) and (27). We can therefore expect more speedup for lower values of \(\lambda\), and we will see this in the simulations. Further on, we will consider the estimated action-value or value functions and therefore omit the word "estimated". We will also omit mentioning each time that our bounds are under the optimal policy. ### Multi-Level Simplification We now extend the definition of simplification, as we envision it to be an _adaptive paradigm_. The _level of simplification_ denotes how "aggressive" the suggested simplification is; see the illustration in Fig. 2. With this setting, we can naturally define many discrete levels, such that \(s\in\{1,2,\ldots,n_{\max}\}\) represents the simplification level, where \(1\) and \(n_{\max}\) correspond to the coarsest and finest simplification levels, respectively. For instance, suppose the belief is represented by a set of samples (particles), as in Section 6. Taking a small subset of particles to represent the simplified belief corresponds to a _coarse_ simplification. If one takes many particles, this corresponds to a _fine_ simplification. **Remark:** From now on, the superscript \(s\) denotes the discrete simplification level. Importantly, we always have a **finite** number of simplification levels, denoted by \(n_{\max}\). Further, we assume the bounds monotonically become tighter as the simplification level is increased, and that the bounds for the finest simplification level \(n_{\max}\) converge to the original reward without simplification. More formally, denote \(\overline{\Delta}^{s}(b,a,b^{\prime})\triangleq\overline{\rho}^{s}(b,a,b^{\prime})-\rho(b,a,b^{\prime})\) and \(\underline{\Delta}^{s}(b,a,b^{\prime})\triangleq\rho(b,a,b^{\prime})-\underline{\rho}^{s}(b,a,b^{\prime})\). **Assumption 1**.: Monotonicity. Let \(n_{\max}\geq 2\). For all \(s\in[1,n_{\max}-1]\) we have \(\overline{\Delta}^{s}(b,a,b^{\prime})\geq\overline{\Delta}^{s+1}(b,a,b^{\prime})\) and \(\underline{\Delta}^{s}(b,a,b^{\prime})\geq\underline{\Delta}^{s+1}(b,a,b^{\prime})\). **Assumption 2**.: Convergence. For all \(b,a,b^{\prime}\) we have \(\overline{\rho}^{s=n_{\max}}(b,a,b^{\prime})=\underline{\rho}^{s=n_{\max}}(b,a,b^{\prime})=\rho(b,a,b^{\prime})\). In Section 6, we derive novel bounds on top of a particular simplification that takes a subset of belief samples instead of the complete set. We prove that these bounds indeed satisfy both assumptions. The simplification levels of the reward bounds for the different posterior belief nodes in the belief tree determine how tight the bounds over the value or action-value function are. To tighten the bounds over the objective, we have the freedom to select any rewards in the belief tree and tighten the bounds over these selected rewards by increasing their simplification levels; this, in turn, contracts the bounds over the objective. We call a particular algorithmic scheme for selecting the rewards a **resimplification strategy**. A general valid resimplification strategy is defined as follows. **Definition 1**.: Resimplification strategy.
Given a pair of lower \(\underline{\hat{V}}(b_{\ell},\pi_{\ell+})\) (\(\underline{\hat{Q}}(b_{\ell},\{a_{\ell},\pi^{*}_{(\ell+1)+}\})\)) and upper \(\overline{\hat{V}}(b_{\ell},\pi_{\ell+})\) (\(\overline{\hat{Q}}(b_{\ell},\{a_{\ell},\pi^{*}_{(\ell+1)+}\})\)) bounds over the estimated objective, a resimplification strategy is a rule to promote one or more simplification levels of the rewards in the subtree rooted at \(b_{\ell}\) and defined by the above-mentioned estimated objective. Note that all the rewards within a subtree defined by \(\overline{\hat{Q}}(b_{\ell},\{a_{\ell},\pi^{*}_{(\ell+1)+}\}),\underline{\hat{Q}}(b_{\ell},\{a_{\ell},\pi^{*}_{(\ell+1)+}\})\) being at the maximal simplification level implies \(\overline{\hat{Q}}(b_{\ell},\{a_{\ell},\pi^{*}_{(\ell+1)+}\})-\underline{\hat{Q}}(b_{\ell},\{a_{\ell},\pi^{*}_{(\ell+1)+}\})=0\), but the inverse implication is not necessarily true. Once initiated, a **valid** strategy can select no reward for simplification-level promotion only if \(\overline{\hat{Q}}(b_{\ell},\{a_{\ell},\pi^{*}_{(\ell+1)+}\})-\underline{\hat{Q}}(b_{\ell},\{a_{\ell},\pi^{*}_{(\ell+1)+}\})=0\). **Theorem 1**.: Monotonicity and Convergence of Estimated Objective Function Bounds. _If the bounds over the reward are monotonic (Assumption 1) and convergent (Assumption 2), then, for both estimators (31) and (32), the bounds on the sample approximation (29) are monotonic as a function of the number of resimplifications and convergent after at most \(n_{\max}\cdot M\) resimplifications, for_ **any** _valid resimplification strategy. Here \(M\) is the number of posterior beliefs in (31) or (32). Namely, once all the rewards are at the maximal simplification level \(n_{\max}\), we reach_ \[\underline{\hat{Q}}(\cdot)=\hat{Q}(\cdot)=\overline{\hat{Q}}(\cdot). \tag{36}\] _Similarly, for the optimal value function, the equality \(\underline{\hat{V}}(\cdot)=\hat{V}(\cdot)=\overline{\hat{V}}(\cdot)\) holds._ The reader can find the proof in Appendix 10.1. Theorem 1 ensures that, if the resimplification strategy is valid (Definition 1), we do not get stuck in an infinite loop of resimplifications when we use the bounds instead of \(\hat{Q}(\cdot)\). In particular, if (36) is reached, there is no reason to activate the resimplification routine. Importantly, as we discuss next and corroborate in simulations, in many cases we can identify the optimal action before reaching the maximal number of resimplifications. ### Adaptive Simplification Mechanics Our adaptive simplification approach is based on two key observations. The _first key observation_ is that we can compare bounds over (29) constituted by rewards at different levels of simplification. Our _second key observation_ is that we can reuse calculations between different simplification levels, avoiding recalculation of the simplification from scratch. Naturally, we do not want to reach (36). Let us begin by explaining how we determine an optimal action by using bounds over the action-value function instead of its explicit calculation, and thereby obtain a significant speedup in planning time. If there is no overlap between the intervals originating from the upper and lower bounds (29) of each candidate action, we can determine the optimal action, and therefore there is no reason to call the resimplification routine.
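To make Assumptions 1 and 2, and the convergence statement of Theorem 1, concrete, here is a toy family of reward bounds whose one-sided gaps shrink linearly with the level \(s\) and vanish at \(s=n_{\max}\), followed by a check of both assumptions. This family is purely illustrative and is not the entropy bounds derived in Section 6.

```
def toy_bounds(rho, s, n_max, width=1.0):
    """A toy family of reward bounds satisfying Assumptions 1 and 2: the
    gaps shrink linearly in the level s and vanish at the finest level."""
    slack = width * (n_max - s) / (n_max - 1)
    return rho - slack, rho + slack

n_max, rho = 4, 2.5
gaps = []
for s in range(1, n_max + 1):
    lo, hi = toy_bounds(rho, s, n_max)
    assert lo <= rho <= hi
    gaps.append(hi - lo)
assert all(g >= g_next for g, g_next in zip(gaps, gaps[1:]))  # Assumption 1
assert gaps[-1] == 0.0                                        # Assumption 2
```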
Contemplate some belief \(b_{\ell}\) in the belief tree. We annotate by the superscript \(j\) the candidate actions emanating from \(b_{\ell}\), such that the index \(j\) corresponds to the \(j\)th candidate action. We first select a candidate action using the lower bound (29) over \(\hat{Q}(b_{\ell},a_{\ell}^{j})\) as \[j^{\dagger}(b_{\ell})=\operatorname*{arg\,max}_{j}\Big{\{}\underline{\hat{Q}}(b_{\ell},\{a_{\ell}^{j},\pi_{(\ell+1)+}^{*}\})+c^{j}\Big{\}}, \tag{37}\] where \(c^{j}\) is an action-dependent constant. In the case of a given belief tree, \(c^{j}=0\) \(\forall j\), whereas in the case of MCTS it is a constant originating from UCB, as in (15). We then ask whether an overlap with another candidate action exists, \[\underline{\hat{Q}}(b_{\ell},\{a_{\ell}^{j^{\dagger}},\pi_{(\ell+1)+}^{*}\})+c^{j^{\dagger}}\overset{?}{\geq}\max_{j\in\{1,\ldots,n_{a}\}\setminus\{j^{\dagger}\}}\Big{\{}\overline{\hat{Q}}(b_{\ell},\{a_{\ell}^{j},\pi_{(\ell+1)+}^{*}\})+c^{j}\Big{\}}. \tag{38}\] See a visualization in Fig. 3a. If the condition displayed by equation (38) is not fulfilled, as depicted in Fig. 3a, we shall tighten the bounds (29) by calling a **resimplification strategy**. Importantly, in the case of a given belief tree, even if an overlap is present, similarly to the branch-and-bound technique (Kochenderfer et al., 2022), we can prune any subtree obtained with an action \(j\) satisfying \[\underline{\hat{Q}}(b_{\ell},\{a_{\ell}^{j^{\dagger}},\pi_{(\ell+1)+}^{*}\})+c^{j^{\dagger}}\geq\overline{\hat{Q}}(b_{\ell},\{a_{\ell}^{j},\pi_{(\ell+1)+}^{*}\})+c^{j}. \tag{39}\] We illustrate this aspect in Fig. 3b. If the belief tree is constructed gradually, as in MCTS-based methods and the anytime setting, then instead of pruning we can still use (39) to dismiss actions that are suboptimal at the current simulation of MCTS. Once no overlap is present (condition (38) is fulfilled), we can declare that the selected action is optimal (\(\pi_{\ell}^{*}(b_{\ell})=a_{\ell}^{j^{\dagger}(b_{\ell})}\)). Utilizing the optimal action, we can bound the _optimal_ value function \(\hat{V}(b_{\ell},\pi_{\ell+}^{*})\) as \[\overline{\hat{V}}(b_{\ell},\{\pi_{\ell}^{*},\pi_{(\ell+1)+}^{*}\})\triangleq\overline{\hat{Q}}(b_{\ell},\{a_{\ell}^{j^{\dagger}(b_{\ell})},\pi_{(\ell+1)+}^{*}\}), \tag{40}\] \[\underline{\hat{V}}(b_{\ell},\{\pi_{\ell}^{*},\pi_{(\ell+1)+}^{*}\})\triangleq\underline{\hat{Q}}(b_{\ell},\{a_{\ell}^{j^{\dagger}(b_{\ell})},\pi_{(\ell+1)+}^{*}\}). \tag{41}\] Let us reiterate that the bounds (40) and (41) are conditioned on the fact that there is no overlap between the bound intervals corresponding to different candidate actions, namely, that condition (38) is met for _each_ belief \(b_{\ell}\) in the belief tree. This situation is visualized in Fig. 3b. On the other hand, to identify the optimal immediate action \(a_{k}^{*}\), we require no overlap between the bounds of different actions only at the root of the belief tree (where the belief is \(b_{k}\)). This means that at each belief node \(b_{\ell}\) in the tree besides the root, we only want to bound the value function for the optimal action (and under the optimal future policy).
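The following minimal sketch expresses the candidate selection and pruning logic of (37)-(39) over scalar bound values. The function name and the toy numbers are ours; resimplification itself is abstracted away, i.e., the caller is expected to tighten the bounds of the returned surviving actions and call the routine again.

```
def select_or_resimplify(q_lo, q_hi, c):
    """q_lo/q_hi are per-action bounds on the estimated action-value, c the
    action-dependent constants (zeros for a given belief tree). Returns
    (j_dagger, None) when (38) holds, otherwise (j_dagger, survivors),
    where survivors are the actions not pruned by (39)."""
    vals_lo = [lo + ci for lo, ci in zip(q_lo, c)]
    j_dag = max(range(len(q_lo)), key=lambda j: vals_lo[j])          # (37)
    overlap = {j for j in range(len(q_lo))
               if j != j_dag and q_hi[j] + c[j] > vals_lo[j_dag]}    # not pruned by (39)
    if not overlap:                                                  # (38) holds
        return j_dag, None
    return j_dag, overlap | {j_dag}

# Toy usage with three candidate actions and c = 0: action 1 is pruned by
# (39), while actions 0 and 2 still overlap and require resimplification.
print(select_or_resimplify([1.0, 0.2, 0.7], [1.5, 0.8, 1.6], [0.0, 0.0, 0.0]))
```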
While it is possible to do so by first determining the optimal action, as in (40) and (41), we can bypass this step and directly bound the value function over the optimal action as follows, \[\overline{\hat{V}}(b_{\ell},\{\pi_{\ell}^{*},\pi_{(\ell+1)+}^{*}\})\triangleq\max_{j}\overline{\hat{Q}}(b_{\ell},\{a^{j},\pi_{(\ell+1)+}^{*}\}), \tag{42}\] \[\underline{\hat{V}}(b_{\ell},\{\pi_{\ell}^{*},\pi_{(\ell+1)+}^{*}\})\triangleq\max_{j}\underline{\hat{Q}}(b_{\ell},\{a^{j},\pi_{(\ell+1)+}^{*}\}), \tag{43}\] i.e., relaxing the requirement of no overlap between bounds for different actions at any node \(b_{\ell}\) besides \(b_{k}\). See the illustration of (42) and (43) in Fig. 3c.

Figure 3: In this illustration we have three candidate actions \(\{a^{1},a^{2},a^{3}\}\) that can possibly be taken by the robot from the belief node \(b_{\ell}\). **(a)** We observe that \(\underline{\hat{Q}}(b_{\ell},a_{\ell}^{1})>\overline{\hat{Q}}(b_{\ell},a_{\ell}^{2})\) and prune action \(a^{2}\). **(b)** After the resimplification there is no overlap, and we can safely decide that \(a_{\ell}^{j^{\dagger}}\) is optimal. Moreover, we prune the withered interval corresponding to \(a_{\ell}^{2}\). **(c)** Another situation, where we are not concerned with the optimal action; we solely want to send up the tree the bounds over the optimal value function.

In turn, eliminating a single overlap at the root results in lower reward simplification levels in the tree, although such value bounds may be looser. As we shall see, this approach typically yields more speedup. Nevertheless, when we need a policy tree, we still have to obtain an optimal action at each belief node within the tree. This requires no bound overlap at each node, as in the former setting. This situation arises, for example, when the action and observation spaces are large but discrete. In this case, the robot sometimes does not replan at each time step. Instead, the robot uses the policy tree as a representation of the policy and selects the optimal action corresponding to the received observation. In addition, such a strategy accommodates a possible reuse of calculations in such a solved belief tree (Farhi and Indelman, 2019, 2021). To conclude this section, let us summarize. As discussed, we have the following two variants: * The resimplification is initiated at each nonterminal posterior belief node \(b_{\ell}\) until no overlap between candidate actions is present and the optimal action \(\pi^{*}_{\ell}(b_{\ell})\) is selected. This way, we bound the optimal value function of the nodes descendant to \(b_{k}\) using an optimal action, according to (40) and (41). We name this approach Policy Tree (PT). * The resimplification is commenced solely at the root \(b_{k}\) of the whole belief tree. We eliminate the overlap and obtain an optimal action only at \(b_{k}\). This way, we use (42) and (43) to bound the optimal value function of the nodes descendant to \(b_{k}\). We shall refer to this variant of our approach as LAZY. ### Specific Resimplification Strategies In this paper we consider two specific resimplification strategies, elaborated in the next sections: Simplification Level (SL) and Gap. We note that additional valid resimplification strategies exist and can be plugged into the above-proposed general theory. **Simplification level:** The resimplification strategy can be directly tied to the simplification level.
In this situation, the resimplification strategy promotes the simplification level of the rewards inside the belief tree corresponding to the bounds in (28) or (29), based on the simplification level itself. We provide further details in the setting of a given belief tree, considering a PT variant, in Section 4.1. **Gap:** Another possibility is to tie the resimplification to the gap \(\overline{\rho}^{s}-\underline{\rho}^{s}\). Such a resimplification promotes the simplification level if the reward-bound gap satisfies a certain condition. We describe this resimplification flavor thoroughly in the setting of a given belief tree, considering the LAZY variant, in Section 4.2, and in the MCTS setting, considering a PT variant, in Section 5.4. Each of these strategies can be used in conjunction with any of the variants PT and LAZY. In the sequel, we denote these combinations explicitly, e.g., PT-SL, LAZY-Gap and PT-Gap. The preceding discussion raises the question of how we actually incorporate the proposed bounds into online decision making. This brings us to the next section. We first consider a given belief tree, and then a belief tree whose construction is coupled with the solution, as in MCTS methods. It shall be noted that the resimplification strategies presented further are also suitable for static candidate action sequences, with minor modifications. ## 4 Adaptive Simplification in the Setting of a Given Belief Tree We start with the assumption that the belief tree was generated in some way and is given, e.g., by the Sparse Sampling (SS) algorithm introduced by Kearns et al. (2002). In other words, the belief tree construction is not coupled with the reward calculation and the estimation of the objective. In this setting, we contribute two resimplification strategies. The first strategy is described in Section 4.1. The general idea is to recursively break down a given belief tree \(\mathbb{T}\) into its sub-problems (subtrees), denoted as \(\{\mathbb{T}^{j}\}_{j=1}^{|\mathcal{A}|}\) (each subtree \(j\) has a single action \(j\) at the root belief), and solve each sub-problem with its own simplification level of the corresponding belief subtree. Ultimately, this leads to the solution of the entire problem via the action-value function bounds (31). This strategy is based on the Simplification Level, and it is a PT strategy. The action-value bounds should not overlap **at each node** in the given belief tree. The second strategy is described in Section 4.2. This resimplification strategy is based on the Gap, and it is a LAZY strategy. Here, the general idea is to first substitute all the rewards in the given belief tree by bounds at the coarsest simplification level. We then eliminate the overlap between candidate actions only at the root belief node \(b_{k}\), by repeatedly descending into the belief tree, promoting the simplification levels along a single lace chosen according to the largest gap, and ascending back. We emphasize that in this setting, the action-value bounds should not overlap **only at the root node** of the given belief tree. As mentioned in the beginning of Section 2.3.1, only for simplicity do we consider a symmetric setting in terms of sampled actions and observations; the approach is applicable without any limitations to any given belief tree. Figure 4: Pruning the subtrees by adaptively promoting the simplification levels of the rewards inside. Here the simplification levels of the subtrees are not equal; it is possible that \(s^{i}\neq s^{i+1}\).
Note that here the superscripts are relative to \(b_{\ell}\), as opposed to Fig. 1 and Fig. 5. ### Resimplification Strategy: PT-SL This section presents our first resimplification strategy; we now turn to its thorough description. Not to be confused with the **policy tree** represented by (13) or (14), the **given belief tree** (\(\mathbb{T}\)) has more than a single action emanating from each belief node besides the leaves. We now assign a simplification level to the bounds on the value and action-value functions. Consider again some belief node \(b_{\ell}\) in the belief tree, and assume recursively that for _each_ of its children belief nodes \(b_{\ell+1}\) we have already calculated the optimal policy \(\pi^{*}_{(\ell+1)+}(b_{\ell+1})\) and the corresponding upper and lower bounds \(\underline{\hat{V}}^{s}\big{(}b_{\ell+1},\pi^{*}_{(\ell+1)+}\big{)}\) and \(\overline{\hat{V}}^{s}\big{(}b_{\ell+1},\pi^{*}_{(\ell+1)+}\big{)}\). In general, these bounds for each child sub-policy tree of \(b_{\ell}\) can correspond to different simplification levels. From now on, let the superscript \(s\) over the action-value and value function bounds from (31) and (30) denote the simplification level stemming from the pertaining reward bounds. The bounds previously described by Eqs. (31) for belief node \(b_{\ell}\), now incorporating the simplification level, are modified to \[\begin{split}\overline{\hat{Q}}^{s^{j}}(b_{\ell},\{a^{j}_{\ell},\pi^{*}_{(\ell+1)+}\})=\frac{1}{n_{z}}\sum_{i=1}^{n_{z}}\overline{\rho}^{s}(b_{\ell},a^{j}_{\ell},b^{i}_{\ell+1})+\gamma\frac{1}{n_{z}}\sum_{i=1}^{n_{z}}\overline{\hat{V}}^{s^{i}}(b^{i}_{\ell+1},\pi^{*}_{(\ell+1)+}),\\ \underline{\hat{Q}}^{s^{j}}(b_{\ell},\{a^{j}_{\ell},\pi^{*}_{(\ell+1)+}\})=\frac{1}{n_{z}}\sum_{i=1}^{n_{z}}\underline{\rho}^{s}(b_{\ell},a^{j}_{\ell},b^{i}_{\ell+1})+\gamma\frac{1}{n_{z}}\sum_{i=1}^{n_{z}}\underline{\hat{V}}^{s^{i}}(b^{i}_{\ell+1},\pi^{*}_{(\ell+1)+}),\end{split} \tag{44}\] as illustrated in Fig. 4. We shall pinpoint an abuse of notation here: in contrast to (31), the superscript \(s\) over the immediate-reward bounds denotes a specific simplification level instead of indicating a general simplification. Note that equation (44) applies for each \(a^{j}_{\ell}\in\mathcal{A}\), and, as mentioned, each belief node \(b^{i}_{\ell+1}\) (one for each observation \(z^{i}_{\ell+1}\)) has, in general, its own simplification level \(s^{i}\). In other words, for each \(b^{i}_{\ell+1}\), \(s^{i}\) is the simplification level that was sufficient for calculating the bounds \(\left\{\overline{\hat{V}}^{s^{i}}(b^{i}_{\ell+1},\pi^{*}_{(\ell+1)+}),\underline{\hat{V}}^{s^{i}}(b^{i}_{\ell+1},\pi^{*}_{(\ell+1)+})\right\}\) and the corresponding optimal policy \(\pi^{*}_{(\ell+1)+}\). Thus, when addressing belief node \(b_{\ell}\) in (44), for each belief node \(b^{i}_{\ell+1}\) and its corresponding simplification level \(s^{i}\), these bounds are already available. Further, as seen in (44), the immediate reward and the corresponding bounds \(\overline{\rho}\) and \(\underline{\rho}\) can, in general, be calculated at their own simplification level \(s\). In particular, when starting calculations, \(s\) could correspond to a default coarse simplification level, e.g., the coarsest level \(s=1\). Another possibility is to set \(s=s^{i}\), the corresponding simplification level of the value function bounds of the \(i\)-th child belief.
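As a small sketch of how (44) is assembled in practice (the helper name and data layout are ours, and the returned level anticipates the min rule defined next in (45)):

```
import numpy as np

def q_bounds_with_level(rho_lo, rho_hi, s_rho, children, gamma):
    """Assemble (44) for one candidate action a^j: average the immediate
    reward bounds and the children's optimal-value bounds over the n_z
    sampled observations. `children` holds (v_lo, v_hi, s_i) per child."""
    q_lo = np.mean(rho_lo) + gamma * np.mean([c[0] for c in children])
    q_hi = np.mean(rho_hi) + gamma * np.mean([c[1] for c in children])
    s_j = min(min(s_rho), min(c[2] for c in children))   # min rule, cf. (45)
    return q_lo, q_hi, s_j

# Toy usage with n_z = 2 observations.
print(q_bounds_with_level([0.9, 1.1], [1.4, 1.6], [2, 3],
                          [(4.0, 5.0, 2), (3.5, 4.5, 4)], gamma=0.95))
```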
To define the simplification level \(s^{j}\) of the bounds (44), we leverage the recursive nature of the Bellman update and define \[s^{j}\triangleq\min\{\underbrace{s}_{\overline{\rho}^{s},\underline{\rho}^{s}},\underbrace{s^{i=1},s^{i=2},\ldots,s^{i=n_{z}}}_{\overline{\hat{V}}^{s^{i}}(b^{i}_{\ell+1},\pi^{*}_{(\ell+1)+})}\}, \tag{45}\] where \(\{s^{i=1},s^{i=2},\ldots,s^{i=n_{z}}\}\) represent the (generally different) simplification levels of the optimal value functions of the belief nodes \(b^{i}_{\ell+1}\) considered in the expectation approximation in (44). We now wish to decide which action \(a^{j^{\dagger}(b_{\ell})}_{\ell}\in\mathcal{A}\) is optimal from belief node \(b_{\ell}\); the corresponding optimal policy would then be \(\pi^{*}_{\ell+}=\{a^{*}_{\ell},\pi^{*}_{(\ell+1)+}\}\), where \(\pi^{*}_{(\ell+1)+}\) is the already-calculated optimal policy for the belief nodes \(\{b^{i}_{\ell+1}\}_{i=1}^{n_{z}}\) that \(a^{*}_{\ell}\) leads to. See the illustration in Fig. 4. Let us now utilize the general simplification approach described in Section 3.3. Overall, at each belief node we have \(n_{a}\) candidate actions, indexed by the superscript \(j\) in (44). **At each belief node**, we first select an optimal action candidate according to (37) with a nullified action-dependent constant (\(\forall j,\ c^{j}=0\)). Further, in any PT resimplification strategy, there are three possible scenarios: * No overlap is present ((38) is satisfied) and we are at the root, i.e., \(b_{\ell}=b_{k}\). In this case, the optimal action shall be returned. * No overlap is present ((38) is satisfied) and we are not at the root \(b_{k}\). In this case, using the optimal action, we bound the optimal value function via (40) and (41). * Eq. (38) is not satisfied, meaning an overlap is present. In the presence of an overlap, we shall prune actions according to (39) and commence the resimplification routine based on the resimplification strategy.

Figure 5: An example of the simplification paradigm. The superscript here denotes the **global** index of the belief, observation, or action in the belief tree, as opposed to equation (44) and Fig. 4.

We now discuss how the simplification level is updated recursively from the simplification levels of the pertaining reward bounds, and revisit the process of calculating the optimal policy and the corresponding bounds. For some belief node \(b_{\ell}\) in the belief tree, consider the bounds \(\overline{\hat{Q}}^{s^{j}}(b_{\ell},\{a_{\ell}^{j},\pi_{(\ell+1)+}^{*}\})\) and \(\underline{\hat{Q}}^{s^{j}}(b_{\ell},\{a_{\ell}^{j},\pi_{(\ell+1)+}^{*}\})\) from (44) for different actions \(a_{\ell}^{j}\in\mathcal{A}\) that partially overlap and therefore could not be pruned. Each policy tree corresponding to action \(a_{\ell}^{j}\) can generally have its own simplification level \(s^{j}\). We now iteratively increase the simplification level by \(1\). This can be done for each of the branches, if \(s^{j}\) is identical for all branches, or only for the branch with the coarsest simplification level. Consider now any such branch whose simplification level needs to be adapted from \(s^{j}\) to \(s^{j}+1\). Recall that, at this point, the mentioned bounds were already calculated; thus their ingredients, in terms of \(\overline{\rho}^{s}(b_{\ell},a_{\ell}^{j},b_{\ell+1}^{i})\), \(\underline{\rho}^{s}(b_{\ell},a_{\ell}^{j},b_{\ell+1}^{i})\) and \(\{\overline{\hat{V}}^{s^{i}}(b_{\ell+1}^{i},\pi_{(\ell+1)+}^{*}),\underline{\hat{V}}^{s^{i}}(b_{\ell+1}^{i},\pi_{(\ell+1)+}^{*})\}_{i=1}^{n_{z}}\), involved in approximating the expectation in (44), are available.
Recall also (45): each element in \(\{s,s^{i=1},s^{i=2},\ldots,s^{i=n_{z}}\}\) is either equal to or larger than \(s^{j}\). We now discuss both cases, starting from the latter. Since we assumed the bounds improve monotonically as the simplification level increases (see Assumption 1), for any \(s^{i}>s^{j}+1\) we already have readily available bounds \(\overline{\hat{V}}^{s^{i}}(b_{\ell+1}^{i},\pi_{(\ell+1)+}^{*}),\underline{\hat{V}}^{s^{i}}(b_{\ell+1}^{i},\pi_{(\ell+1)+}^{*})\), which are tighter than those that would be obtained for simplification level \(s^{j}+1\). Thus, we can _safely skip_ the calculation of the latter and use the existing bounds from level \(s^{i}\) as is. For the former case, i.e., \(s^{i}=s^{j}\), we now have to adapt the simplification level of child tree \(i\) to \(s^{j}+1\) by calculating the bounds \(\overline{\hat{V}}^{s^{i}+1}(b_{\ell+1}^{i},\pi_{(\ell+1)+}^{*}),\underline{\hat{V}}^{s^{i}+1}(b_{\ell+1}^{i},\pi_{(\ell+1)+}^{*})\). Here, our _key insight_ is that, instead of calculating these bounds from scratch, we can reuse calculations between different simplification levels, in this case from level \(s^{i}\). As the bounds from that level are available, we can identify only the incremental part that is "missing" to get from simplification level \(s^{i}\) to \(s^{i}+1\), and update the existing bounds \(\overline{\hat{V}}^{s^{i}}(b_{\ell+1}^{i},\pi_{(\ell+1)+}^{*}),\underline{\hat{V}}^{s^{i}}(b_{\ell+1}^{i},\pi_{(\ell+1)+}^{*})\) to recover \(\overline{\hat{V}}^{s^{i}+1}(b_{\ell+1}^{i},\pi_{(\ell+1)+}^{*}),\underline{\hat{V}}^{s^{i}+1}(b_{\ell+1}^{i},\pi_{(\ell+1)+}^{*})\) exactly. The same argument also applies to the bounds over the momentary rewards. In Section 6.2.3 we apply this approach to a specific simplification and reward function. We can iteratively repeat the above process of increasing the simplification level until we can prune all branches but one. This means each subtree will be solved at most once per simplification level. Since we assumed the reward bounds converge monotonically to the original reward for the finest level \(s=n_{\max}\) (see Fig. 2), from Theorem 1 we are guaranteed to eventually disqualify all suboptimal branches. Our described approach is summarized in Algs. 1 and 2. #### 4.1.1 Illustrative Example We now illustrate the resimplification strategy described above in a toy example. Before we start this section, let us clarify that in this example the superscripts are global over the belief tree, in contrast to the previous section. Consider Fig. 5 and assume the subtrees of \(b_{\ell}^{1}\) were solved using simplification levels that satisfy \(s^{2}=s^{1}+1\) and \(s^{2}<s^{3},s^{4}\). Further assume the immediate-reward simplification level is \(s=s^{1}\). According to the definitions above, this means that for the subtree starting at \(b_{\ell}^{1}\) with action \(a_{\ell}^{1}\) the simplification level is \(\min\{s^{1},s^{2}\}\), and for action \(a_{\ell}^{2}\) the simplification level is \(\min\{s^{3},s^{4}\}\). Now, considering the case where the existing bounds of the subtrees were not tight enough to prune, we adapt the simplification level starting from \(b_{\ell}^{1}\) and promote \(s\gets s^{1}+1\). Since \(s^{1}<s^{1}+1\), we resimplify the subtree corresponding to simplification level \(s^{1}\) to simplification level \(s^{1}+1\), i.e., to a finer simplification.
However, we do not need to resimplify the subtrees corresponding to \(s^{2},s^{3},s^{4}\): the tree corresponding to \(s^{2}\) is already simplified to the currently desired level; thus we can use its existing bounds. For the two other trees, their current simplification levels, \(s^{3}\) and \(s^{4}\), are higher (finer) than the desired \(s^{1}+1\) level, and since the bounds get tighter as the simplification level increases, we can use their existing tighter bounds without the need to "go back" to a coarser level of simplification. If we can now prune one of the actions, we keep pruning up the tree. If pruning is still not possible, we need to adapt the simplification again, with simplification level \(s^{1}+2\). #### 4.1.2 A Detailed Algorithm Description Let us thoroughly describe Alg. 1. We are given a belief tree \(\mathbb{T}\). First, in line 10, Alg. 1 recursively descends to the leaves. When line 11 is hit for the first time, the corresponding rewards are set to the initial simplification level; alternatively, the minimal level among the child optimal-value bounds can be used. In our simulations we used the minimal reward level. Further, the algorithm calculates the bounds over the action-value function represented by (44). This happens in line 15 of Alg. 1. The next step is to try to prune all subtrees but one, utilizing Alg. 2. Note that at this point all the subtrees \(\mathbb{T}^{j}\) are already policy trees, namely, only a single action emanates from each posterior belief. If there is more than a single action left after pruning, in line 20 Algorithm 1 calls the routine ResimplifyTree to initiate **resimplification** for the selected subtree corresponding to action \(a^{j}\). The simplification level of the single-step-ahead reward always has to be promoted, as we do in line 27. Further, Alg. 1 treats subtrees similarly, if they are present. ### Resimplification Strategy: LAZY-Gap The PT resimplification strategy from the previous section assures that no overlap is present (Fig. 3b) at each non-leaf posterior belief, so we know the optimal action to take. However, it can inflict a redundant computational burden. We can instead handle the overlap only at the root of the belief tree and use the bounds over the optimal value function according to (42) and (43). Since we have already presented a resimplification strategy based on the simplification levels, our second resimplification strategy will be based on the distance between the reward bounds. The bounds (42) and (43) could, however, also be utilized with a resimplification strategy based directly on simplification levels; yet, this is out of the scope of this paper. In this section we present a lazy variant of the resimplification strategy. We reiterate that a lazy variant does not necessarily have to be tied to the gap. In the LAZY variant, the overlap is checked solely at the root \(b_{k}\) of the whole belief tree. In this approach, three scenarios can be encountered at each belief node: * The belief node is not the root. We bound the optimal value according to (42) and (43). * At the root \(b_{k}\), we check for an overlap. If no overlap is present ((38) is satisfied), we prune all suboptimal actions according to Alg. 2 and return an optimal action, as described in Section 3.3. * In the presence of an overlap at the root \(b_{k}\) (Eq. (38) is not satisfied), we prune actions according to (39) and Alg. 2, and commence the resimplification routine for the non-pruned actions based on the resimplification strategy.
```
1: procedure Prune
2:   Input: (belief-tree root, \(b\); bounds of root's children, \(\{\underline{Q}^{j},\overline{Q}^{j}\}_{j=1}^{n_{a}}\)) // \(n_{a}\) is the number of child branches (candidate actions) going out of \(b\)
3:   \(\underline{Q}^{*}\leftarrow\max_{j}\{\underline{Q}^{j}\}_{j=1}^{n_{a}}\)
4:   for \(j\in 1:n_{a}\) do
5:     if \(\underline{Q}^{*}>\overline{Q}^{j}\) then
6:       prune child \(j\) from the belief tree
7:     end if
8:   end for
9: end procedure
```
**Algorithm 2** Pruning of trees

Having presented the general steps of any LAZY variant of a resimplification strategy, we are ready to delve into the specific gap-driven resimplification strategy. Let us introduce the following notation: \[G(ha)\triangleq\overline{Q}(ha)-\underline{Q}(ha). \tag{46}\] We remind the reader that sometimes, for simplicity of explanation, we make the gap dependent on a belief and an action, and denote it \(G(ba)\). We use this gap to steer the resimplification procedure towards the more promising lace. The lace whose actions induce the largest gap (46) at each belief-action node along the way is selected for resimplification. In fact, we use a similar gap over the value function to select observations along the lace. Now let us proceed to the detailed algorithm description. #### 4.2.1 A Detailed Algorithm Description This approach is summarized in Alg. 5. When we apply this resimplification strategy, we first use the lowest simplification level for each pair of consecutive beliefs in the given belief tree. In other words, Alg. 5 first descends to the leaves of the given belief tree. It then bounds each optimal value function at the initial simplification level using (42) and (43). This initial passage over the given belief tree is enclosed by the routine BoundOptimalValue. In the procedure ActionSelection, we increase the simplification level of the reward bounds in the given tree until there is no overlap at the **root**, as in Fig. 3b. In this way, we can prune entire given subtrees at the root, corresponding to candidate actions. The procedure LazyResimplify descends back to some leaf through the lace with the largest gaps on the way. It selects an action in line 15. It then selects an observation/belief according to the largest gap of the single-step-ahead rewards, if these rewards are at the leaves (line 17), or according to the largest gap of the optimal value function bounds (line 19).
```
1: procedure Plan(belief: \(b\), belief-tree: \(\mathbb{T}\))
2:   BoundOptimalValue(belief: \(b\), belief-tree: \(\mathbb{T}\))
3:   \(a^{*}\leftarrow\) ActionSelection(\(b\), \(L\)) // Alg. 6
4:   return \(a^{*}\)
5: end procedure
6: procedure BoundOptimalValue(belief-tree: \(\mathbb{T}\))
7:   if \(\mathbb{T}\) is a leaf then
8:     // Corresponds to a single belief node.
9: return \(0,0\)
10: end if
11: for all subtrees \(\mathbb{T}^{j}\in\{\mathbb{T}^{j}\}_{j=1}^{|\mathcal{A}|}\) do
12: for all subtrees \(\mathbb{T}^{j,i}\in\{\mathbb{T}^{j,i}\}_{i=1}^{n}\) do
13: \(\underline{\hat{V}}(b^{\prime}),\overline{\hat{V}}(b^{\prime})\leftarrow\textsc{BoundOptimalValue}(b^{\prime},\,\mathbb{T}^{j,i})\)
14: Set the simplification level of \(\underline{\rho}^{s}(b,a^{j},b^{\prime i})\) and \(\overline{\rho}^{s}(b,a^{j},b^{\prime i})\) to the coarsest possible
15: end for
16: Calculate \(\underline{\hat{Q}}^{j},\overline{\hat{Q}}^{j}\)
17: end for
18: \(\underline{\hat{V}}(b)\leftarrow\max\limits_{j}\{\underline{\hat{Q}}^{j}\}\)
19: \(\overline{\hat{V}}(b)\leftarrow\max\limits_{j}\{\overline{\hat{Q}}^{j}\}\)
20: return \(\underline{\hat{V}}(b),\overline{\hat{V}}(b)\)
21: end procedure
```
**Algorithm 5** Lazy Simplified Information Theoretic Belief Space Planning

## 5 Adaptive Simplification in the Setting of MCTS

In the previous sections, we described the application of the adaptive simplification paradigm when the belief tree is given or its construction is not coupled with the solution. We now turn to the anytime setting, where the belief tree is not given. Instead, the belief tree construction is coupled with the estimation of the action-value function (19) at each belief-action node. Such an approach is commonly used in Monte Carlo tree search (MCTS) methods based on an exploration strategy, e.g., the Upper Confidence Bound (UCB) as in (15). Our goal is to suggest a resimplification strategy such that exactly the same belief tree is constructed as without simplification, and the same optimal action is identified with and without simplification. To support general belief-dependent rewards, we select PFT-DPW as the baseline, as mentioned in Section 1.1. Common exploration strategies conform to the structure presented in (37). Without losing generality, we focus on the most advanced exploration strategy to our knowledge, namely UCB, portrayed by (15).

### UCB bounds

With this in mind, we now introduce bounds over (15):

\[\overline{\mathrm{UCB}}(ha) \triangleq\overline{\hat{Q}}(ha)+c\cdot\sqrt{\nicefrac{{\log(N(h))}}{{N(ha)}}}, \tag{47}\]
\[\underline{\mathrm{UCB}}(ha) \triangleq\underline{\hat{Q}}(ha)+c\cdot\sqrt{\nicefrac{{\log(N(h))}}{{N(ha)}}}. \tag{48}\]

Similarly to the given-belief-tree setting, we now explain how the reward bounds (21) yield (47) and (48).

### Guaranteed Belief Tree Consistency

Since the simplification paradigm substitutes UCB (15) by the bounds (47) and (48), and, as opposed to the situation with the given belief tree, the belief tree construction is coupled with these quantities, we shall now address this issue. If there is an overlap between the UCB bounds of different actions, we can no longer guarantee that the same belief tree will be constructed with and without simplification. In this and the following sections we address this key issue. Specifically, we define the notion of Tree Consistency and prove the equivalence of our algorithm to our baseline PFT-DPW.

**Definition 2**.: Tree consistent algorithms. Consider two algorithms constructing a belief tree. Assume every common sampling operation for the two algorithms uses the same seed. The two algorithms are _tree consistent_ if the two belief trees constructed by the algorithms are identical in terms of actions, observations, and visitation counts.

Our approach relies on a specific procedure for selecting actions within the tree.
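As a small illustration of (47) and (48), both bounds simply shift the adaptive bounds over \(\hat{Q}(ha)\) by the usual exploration bonus; a Julia sketch with illustrative names follows.

```julia
# Sketch of the UCB bounds (47)-(48); Q_hi/Q_lo bound the estimate of Q(ha),
# N_h and N_ha are visitation counts, and c is the exploration constant.
ucb_upper(Q_hi, N_h, N_ha; c = 1.0) = Q_hi + c * sqrt(log(N_h) / N_ha)
ucb_lower(Q_lo, N_h, N_ha; c = 1.0) = Q_lo + c * sqrt(log(N_h) / N_ha)
```

Note that the two bounds share the same exploration bonus, so their gap equals the gap of the underlying Q-bounds; this observation reappears as Eq. (65) in the appendix.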
Since in each simulation the MCTS descends down the tree along a single return lace as in (19), and on the way down it requires the action maximizing UCB (15), we shall eliminate the overlap at each belief node as described in Section 3.4. Further, we restate the action selection procedure described in Section 3.4 with the particular action-dependent constant from Eqs. (37) and (38), rendering the UCB bounds from (47) and (48). Our action selection is encapsulated by Alg. 8, which is different from the procedure used in PFT-DPW. On top of DPW as in (Sunberg and Kochenderfer, 2018) with parameters \(k_{a}\) and \(\alpha_{a}\), instead of selecting an action maximizing the UCB (15), at every belief node we mark as a candidate action the one that maximizes the lower bound \(\underline{\mathrm{UCB}}\), as follows:

\[\tilde{a}=\operatorname*{arg\,max}_{a\in C(h)}\underline{\mathrm{UCB}}(ha). \tag{49}\]

If \(\forall a\neq\tilde{a},\underline{\mathrm{UCB}}(h\tilde{a})\geq\overline{\mathrm{UCB}}(ha)\), there is no overlap (Fig. 6 **(c)**) and we can declare that \(\tilde{a}\) is identical to \(a^{*}\), i.e., the action that would be returned by PFT using (15), and the tree consistency has not been affected. Otherwise, the bounds must be tightened to ensure tree consistency. We examine the \(ha\) siblings of \(h\tilde{a}\) which satisfy \(a\neq\tilde{a}:\underline{\mathrm{UCB}}(h\tilde{a})<\overline{\mathrm{UCB}}(ha)\) (Fig. 6 **(a)**). Our next step is to tighten the bounds by resimplification (Fig. 6 **(b)**) until there is no overlap, using a valid resimplification strategy according to Definition 1. Note that here we cannot use the "lazy variant" from Section 4.2, due to the fact that the MCTS requires selecting an action while going down the tree; see line \(12\) of Algorithm 7.

### A Detailed Algorithm Description

Now we introduce our efficient variant of the Particle Filter Tree (PFT) presented in (Sunberg and Kochenderfer, 2018).

Figure 6: Illustration of our approach. The circles denote the belief nodes, and the rectangles represent the belief-action nodes. Rollouts, emanating from each belief node, are indicated by dashed lines finalized with triangles. **(a)** The simulation starts from the root of the tree, but at node \(b_{1}^{3}\) it cannot continue due to an overlap of the bounds of the child nodes (colored red). **(b)** One of the red-colored belief-action nodes is chosen, and resimplification is triggered from it down the tree to the leaves (shaded green area in the tree). The beliefs and rollouts inside the green area (colored light brown) undergo resimplification if so decided. This procedure results in tighter bounds. **(c)** After the bounds become tighter, nothing prevents SITH-PFT from continuing down from node \(b_{1}^{3}\), guaranteeing the Tree Consistency. If needed, additional resimplifications can be commenced.

We call our approach Simplified Information-Theoretic Particle Filter Tree (SITH-PFT). SITH-PFT (Alg. 7) incorporates the adaptive simplification into PFT-DPW. We adhere to the conventional notations as in (Sunberg and Kochenderfer, 2018) and denote by \(G_{\text{PF($m$)}}(bao)\) a generative model receiving as input the belief \(b\), an action \(a\) and an observation \(o\), and producing the posterior belief \(b^{\prime}\). For the belief update, we use a particle filter based on \(n_{x}\) state samples. A remarkable property of our efficient variant is the consistency of the belief tree.
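The candidate choice (49) and the accompanying overlap check can be sketched in a few lines of Julia; the vector inputs are the per-action UCB bounds of the siblings of node \(h\), and the names are ours.

```julia
# Sketch of the consistency-preserving action choice around (49).
function select_candidate(ucb_lo::Vector{Float64}, ucb_hi::Vector{Float64})
    cand = argmax(ucb_lo)                         # the candidate action of Eq. (49)
    # siblings whose upper UCB overlaps the candidate's lower UCB still need resimplification
    overlapping = [j for j in eachindex(ucb_hi) if j != cand && ucb_hi[j] > ucb_lo[cand]]
    return cand, overlapping                      # empty `overlapping` means cand equals a*
end
```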
In other words, PFT and SITH-PFT have the same belief tree, constructed as with (15), while SITH-PFT enjoys substantial acceleration. By \(C(ha)\) we denote the set of children (posterior beliefs corresponding to the myopic observations) of the belief-action node uniquely indexed by the history \(h\) with concatenated action \(a\). Line 13 in Alg. 7 is the DPW technique from (Sunberg and Kochenderfer, 2018) with parameters \(k_{o}\) and \(\alpha_{o}\). \(N(\cdot)\) is the visitation count of belief or belief-action nodes. In MCTS, the \(Q\) estimate is assembled by averaging the return laces over simulations; see Eq. (19). Each simulation yields a sum of discounted cumulative rewards. Therefore, by replacing the reward with the adaptive lightweight bounds (21), we get corresponding discounted cumulative upper and lower bounds over the returns. Averaging over the simulations (Alg. 7 lines 28-29) yields the bounds over the action-value function and the UCB bounds used in the routine ActionSelection(), explained in the next paragraph.

```
1: procedure ActionSelection(\(b\), \(h\))
2: if \(|C(h)|\leq k_{a}N(h)^{\alpha_{a}}\) then // action Prog. Widening
3: \(a\leftarrow\) NextAction(\(h\))
4: \(C(h)\gets C(h)\cup\{a\}\)
5: end if
6: while true do
7: Status, \(a\leftarrow\) SelectBest(\(b\), \(h\))
8: if Status then
9: break
10: else
11: for all \(b^{\prime},o\in C(ha)\) do
12: Resimplify(\(b^{\prime}\), \(hao\))
13: end for
14: reconstruct \(\overline{Q}(ha),\underline{Q}(ha)\)
15: end if
16: end while
17: return \(a\)
18: end procedure
19: procedure SelectBest(\(b\), \(h\))
20: Status \(\leftarrow\) true
21: \(\tilde{a}\leftarrow\operatorname*{arg\,max}_{a}\{\underline{\mathrm{UCB}}(ha)\}\)
22: \(\mathrm{gap}\gets 0\)
23: child-to-resimplify \(\leftarrow\tilde{a}\)
24: for all \(ha\) children of \(b\) do
25: if \(\underline{\mathrm{UCB}}(h\tilde{a})<\overline{\mathrm{UCB}}(ha)\wedge a\neq\tilde{a}\) then
26: Status \(\leftarrow\) false
27: if \(\overline{Q}(ha)-\underline{Q}(ha)>\mathrm{gap}\) then
28: \(\mathrm{gap}\leftarrow\overline{Q}(ha)-\underline{Q}(ha)\)
29: child-to-resimplify \(\gets a\)
30: end if
31: end if
32: end for
33: return Status, child-to-resimplify
34: end procedure
```
**Algorithm 8** Action selection within SITH-PFT

Consider a belief-action node \(ha\) at level \(d\) with \(\overline{Q}(ha)\), \(\underline{Q}(ha)\). Suppose the algorithm selects it for bounds narrowing, as described in Section 5.2 and Alg. 8 line 7. All tree nodes of which \(ha\) is an ancestor contribute their immediate bounds \(\overline{\rho}^{s},\underline{\rho}^{s}\) to the computation of \(\overline{Q}(ha)\), \(\underline{Q}(ha)\). Thus, to tighten \(\overline{Q}(ha),\underline{Q}(ha)\), we can potentially choose any candidate node(s) in the subtree of \(ha\). Each child belief node of \(ha\) is sent to the resimplification routine (Alg. 8 lines \(11-13\)), which performs the following tasks. First, it selects the action (Alg. 9 line 7) that will participate in the subsequent resimplification call and sends all its child belief nodes to the recursive call further down the tree (Alg. 9 lines 8-10). Secondly, it refines the belief node's \(\overline{\rho},\underline{\rho}\) according to the specific _resimplification strategy_ (Alg. 9 lines \(3,4,12,18\)). Thirdly, it reconstructs \(\overline{Q}(ha)\), \(\underline{Q}(ha)\) once all the child belief nodes of \(ha\) have returned from the resimplification routine (Alg. 9 line 11), as we thoroughly explain in the next section.
Fourthly, it engages the rollout resimplification routine according to the specific _resimplification strategy_ (Alg. 9 lines 4, 13). Upon completion of this resimplification call initiated at \(ha\), we obtain tighter immediate bounds for some of the descendant belief nodes of \(ha\) (including rollout nodes). Accordingly, the bounds (\(\overline{Q},\underline{Q}\)) of the appropriate descendant belief-action nodes of \(ha\) shall be updated. Many resimplification strategies are possible; below we present our approach. In Section 4.2 we presented a resimplification strategy based on the gap. Now we adapt it to the MCTS setting.

### Specific Resimplification Strategy: PT-Gap

In this section, we explain the resimplification procedure in more detail. In particular, we present a specific resimplification strategy and further show that this strategy is valid according to Definition 1. When some sibling belief-action nodes have overlapping bounds (Fig. 3a, Fig. 6), we strive to avoid tightening them all at once, since fewer resimplifications lead to greater acceleration (speedup). Thus, we choose the single \(ha\)-node that exhibits the largest "gap", denoted by \(G\), between its bounds (see Alg. 8 lines 24-30), where \(G\) is defined by (46). Further, we tighten the bounds down the branch of the chosen node (see Alg. 8 lines 11-13) for each member of \(C(ha)\), the set of children of \(ha\). Since the bounds converge to the actual reward (Assumption 2), we can guarantee that Alg. 8 will pick a single action after a finite number of resimplifications; thus, tree consistency is assured. Specifically, we decide to refine \(\overline{\rho}^{s},\underline{\rho}^{s}\) of a belief node indexed by \(h^{\prime}\) at depth \(d^{\prime}\), within the subtree starting from a belief-action node indexed by \(ha\) at depth \(d\), when

\[\gamma^{d-d^{\prime}}\cdot(\overline{\rho}^{s}-\underline{\rho}^{s})\geq\frac{1}{d}G(ha), \tag{50}\]

where \(G(ha)\) corresponds to the gap (46) of the belief-action node \(ha\) that initially triggered resimplification in Alg. 8 line 24. The explanation of the resimplification strategy based on (50) is rather simple. The right-hand side of (50) is the mean gap per depth/level in the subtree with \(ha\) as its root, spreading downwards to the leaves. Naturally, some of the nodes in this subtree have \(\overline{\rho}^{s}-\underline{\rho}^{s}\) above or equal to the mean gap and some below; we want to locate and refine all those above or equal to it. As for the left-hand side of (50), the rewards are accumulated and discounted according to their depth; thus, we must account for the relative discount factor. Note that the depth identified with the root is \(d_{\max}\), as seen in Alg. 7 line 4, and the leaves are distinguished by depth \(d=0\). For each rollout originating from a tree belief node, we find the rollout node with the largest \(\overline{\rho}-\underline{\rho}\) term satisfying (50) locally in the rollout and resimplify it (Alg. 9 lines 4, 13). To choose the action along which to continue resimplification down the tree, we take the action corresponding to the belief-action node with the largest gap, weighted by its visitation count (Alg. 9 line 7). With this strategy, we aim to keep the belief tree at the lowest possible simplification level while maintaining belief tree consistency. If the action selection procedure triggers resimplification, it modifies the bounds throughout the tree.
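A one-line Julia rendering of the refinement test (50) may be helpful; \(\gamma\) and the argument names are illustrative.

```julia
# Sketch of the refinement test (50). Depths follow the paper's convention:
# the root has depth d_max and the leaves depth 0, so d >= d_prime inside the subtree of ha.
needs_refinement(rho_hi, rho_lo, d, d_prime, G_ha; gamma = 0.95) =
    gamma^(d - d_prime) * (rho_hi - rho_lo) >= G_ha / d
```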
Since the resimplification works recursively, it reconstructs the belief-action node bounds on the way back from the recursion (Alg. 9 line 11). Similarly, the action dismissal procedure reconstructs \(\overline{Q}\) and \(\underline{Q}\) of the belief-action node at which the action dismissal is performed (Alg. 8 line 14). Moreover, on the way back from the simulation, we shall update the ancestral belief-action nodes of the tree. Specifically, we need to reconstruct each \(\overline{Q}\) and \(\underline{Q}\) that is higher than the deepest starting point of the resimplification (Alg. 7 lines 23-25). The reconstruction is essentially a double loop. To reconstruct \(\overline{Q}(ha),\underline{Q}(ha)\), we first query all child belief nodes \(hao\). We then query all belief-action nodes that are children of \(hao\), i.e., \(haoa^{\prime}\). The possibly modified immediate bounds \(\underline{\rho}\) and \(\overline{\rho}\) are taken from the \(hao\) nodes, and the \(\overline{Q}(\cdot)\), \(\underline{Q}(\cdot)\) bounds are taken from the \(haoa^{\prime}\) nodes. Importantly, each of the bounds is weighted according to the proper visitation count.

### Guarantees

In this section we first show that the resimplification strategy suggested in the previous section is valid.

**Lemma 1**.: Validity of the suggested resimplification strategy. _The resimplification strategy presented in Section 5.4 promotes the simplification level of at least one reward in the rollout or belief tree. Alternatively, all the rewards are at the maximal simplification level \(n_{\max}\). In other words, the suggested resimplification strategy is valid._

We provide the complete proof in Appendix 10.2. Having proved the validity of the suggested resimplification strategy, we proceed to the monotonicity and convergence of the UCB bounds from (47) and (48).

**Lemma 2**.: Monotonicity and convergence of UCB bounds. _The UCB bounds are monotonic as a function of the number of resimplifications, and after at most \(n_{\max}\cdot M\) resimplifications we have that_

\[\overline{\mathrm{UCB}}(ha)=\underline{\mathrm{UCB}}(ha)=\mathrm{UCB}(ha). \tag{51}\]

The proof is given in Appendix 10.3. Now, using Lemma 2, we prove that SITH-PFT (Alg. 7) yields the same belief tree and the same best action as PFT.

**Theorem 2**.: _SITH-PFT and PFT are Tree Consistent Algorithms for_ **any** _valid resimplification strategy._

**Theorem 3**.: _SITH-PFT provides the same solution as PFT for_ **any** _valid resimplification strategy._

We provide full proofs of Theorems 2 and 3 in Appendices 10.4 and 10.5, respectively. We have shown that for any valid resimplification strategy, SITH-PFT is guaranteed to construct the same belief tree as PFT and to select the same best action at the root. By Lemma 1, our resimplification strategy is valid. Thus, we achieve the desired result.

## 6 Specific Simplification and Information-theoretic Bounds

In this section we focus on a specific simplification in the context of a continuous state space and nonparametric beliefs represented by \(n_{x}\) weighted particles,

\[b\triangleq\{w^{i},x^{i}\}_{i=1}^{n_{x}}. \tag{52}\]

_Suggested Simplification:_ Given the belief representation (52), the simplified belief is a subset of \(n_{x}^{s}\) particles, sampled from the original belief, where \(n_{x}^{s}\leq n_{x}\).
More formally:

\[b_{k}^{s}\triangleq\left\{(x_{k}^{i},w_{k}^{i})\;\middle|\;i\in A_{k}^{s}\subseteq\{1,2,\ldots,n_{x}\},\;|A_{k}^{s}|=n_{x}^{s}\right\}, \tag{53}\]

where \(A_{k}^{s}\) is the set of particle indices comprising the simplified belief \(b_{k}^{s}\) for time \(k\). Increasing the level of simplification is done _incrementally_. Specifically, when resimplification is carried out, new indices are drawn from the set \(\{1,2,\ldots,n_{x}\}\setminus A_{k}^{s}\) and included in the set \(A_{k}^{s}\). This operation promotes the simplification level to \(s+1\) and defines \(A_{k}^{s+1}\).

### Novel Bounds Over Differential Entropy Estimator

As one of our key contributions, we now derive novel analytical bounds for the differential entropy estimator from (Boers et al., 2010). These bounds can then be used within our general simplification framework presented in the previous sections. To calculate the differential entropy

\[\mathcal{H}(b(x_{k}))=-\int b(x_{k})\cdot\log\left(b(x_{k})\right)\mathrm{d}x_{k},\]

one must have access to the manifold representing the belief. In the nonparametric setting this manifold is out of reach, and we have to resort to approximations. Several approaches exist. One of them is using Kernel Density Estimation (KDE), as done, e.g., by Fischer and Tas (2020). Here, however, we consider the method proposed by Boers et al. (2010). This method builds on top of the motion and observation models, such that

\[\hat{\mathcal{H}}(b_{k},a_{k},z_{k+1},b_{k+1})\triangleq\log\left[\sum_{i=1}^{n_{x}}\mathbb{P}_{Z}(z_{k+1}|x_{k+1}^{i})w_{k}^{i}\right]-\sum_{i=1}^{n_{x}}w_{k+1}^{i}\cdot\log\left[\mathbb{P}_{Z}(z_{k+1}|x_{k+1}^{i})\sum_{j=1}^{n_{x}}\mathbb{P}_{T}(x_{k+1}^{i}|x_{k}^{j},a_{k})w_{k}^{j}\right]. \tag{54}\]

One can observe that this method requires access to samples representing both \(b_{k}\) and \(b_{k+1}\); thus, it corresponds to an information-theoretic reward of the form \(r^{I}(b_{k},a_{k},z_{k+1},b_{k+1})\). Note that, as explained in Section 3, such a reward is tied to \(b_{k+1}\). For the sake of clarity and to remove unnecessary clutter, we apply the identical simplification described by (53) to both beliefs \(b_{k}\) and \(b_{k+1}\); the simplification indices for both beliefs are defined by \(A_{k+1}^{s}\). However, this is not an inherent limitation: one can easily maintain two sets of indices, and the theory presented below is developed for this more general setting. Moreover, as mentioned in Section 3, the same belief \(b_{k+1}\) also participates in \(r^{I}(b_{k+1},a_{k+1},z_{k+2},b_{k+2})\). In that reward, the simplification indices for \(b_{k+1}\) will be according to \(A_{k+2}^{s}\) (and not according to \(A_{k+1}^{s}\)). Utilizing the chosen simplification (53), we now introduce the following upper and lower bounds on (54).

**Theorem 4**.: Adaptive bounds on differential entropy estimator.
_The estimator (54) can be bounded by_

\[\ell(b_{k},a_{k},z_{k+1},b_{k+1};A_{k}^{s},A_{k+1}^{s})\leq-\hat{\mathcal{H}}(b_{k},a_{k},z_{k+1},b_{k+1})\leq u(b_{k},a_{k},z_{k+1},b_{k+1};A_{k}^{s},A_{k+1}^{s}), \tag{55}\]

_where_

\[u\triangleq-\log\left[\sum_{i=1}^{n_{x}}\mathbb{P}_{Z}(z_{k+1}|x_{k+1}^{i})w_{k}^{i}\right]+\sum_{i\notin A_{k+1}^{s}}w_{k+1}^{i}\cdot\log\left[m\cdot\mathbb{P}_{Z}(z_{k+1}|x_{k+1}^{i})\right]+\sum_{i\in A_{k+1}^{s}}w_{k+1}^{i}\cdot\log\left[\mathbb{P}_{Z}(z_{k+1}|x_{k+1}^{i})\sum_{j=1}^{n_{x}}\mathbb{P}_{T}(x_{k+1}^{i}|x_{k}^{j},a_{k})w_{k}^{j}\right], \tag{56}\]

\[\ell\triangleq-\log\left[\sum_{i=1}^{n_{x}}\mathbb{P}_{Z}(z_{k+1}|x_{k+1}^{i})w_{k}^{i}\right]+\sum_{i=1}^{n_{x}}w_{k+1}^{i}\cdot\log\left[\mathbb{P}_{Z}(z_{k+1}|x_{k+1}^{i})\sum_{j\in A_{k}^{s}}\mathbb{P}_{T}(x_{k+1}^{i}|x_{k}^{j},a_{k})w_{k}^{j}\right], \tag{57}\]

_and where the superscript \(s\) is the discrete level of simplification, \(s\in\{1,2,\ldots,n_{\max}\}\), \(m\triangleq\max_{x^{\prime},x,a}\mathbb{P}_{T}(x^{\prime}|x,a)\), and \(A_{k}^{s}\), \(A_{k+1}^{s}\subseteq\{1,2,\ldots,n_{x}\}\)._

See the proof in Sec. 10.6. Theorem 4 accommodates different sets \(A_{k}^{s}\neq A_{k+1}^{s}\). These sets, as explained above, denote the sets of particle indices from \(b_{k}\) and \(b_{k+1}\) for simplification level \(s\). In general, each of these sets can have its own simplification level; however, this is out of the scope of this paper. Here, both sets \(A_{k}^{s}\), \(A_{k+1}^{s}\) have the same simplification level, as well as the same number of levels. Yet, the number of particles at each level can vary between \(A_{k}^{s}\) and \(A_{k+1}^{s}\). Each subsequent level (low to high) defines a larger set of indices, such that higher levels of simplification (i.e., more samples) correspond to tighter bounds and lower levels of simplification correspond to looser bounds. Note that the bounds (56) and (57) actually use both the original and the simplified beliefs, in line with Eqs. (20) and (21). Importantly, by caching the calculations shared by the two bounds at the same time instance, we never repeat the calculation of these values and obtain maximal speedup. Without compromising the solution's quality, we accelerate the online decision-making process.

### Bounds Properties and Analysis

We now turn to the analysis of the bounds and the investigation of their properties. We start with computational complexity, and then examine monotonicity and convergence of the bounds, as well as reuse of calculations.

#### 6.2.1 Computational complexity

Eqs. (56) and (57) suggest that the bounds are cheaper to calculate than \(\hat{\mathcal{H}}\) from (54), with a complexity of \(O(n_{x}^{s}\cdot n_{x})\) instead of \(O(n_{x}^{2})\), where \(n_{x}^{s}\triangleq|A_{k}^{s}|\equiv|A_{k+1}^{s}|\). Altogether, the time saved for all belief nodes in the tree results in the total speedup of our approach.

#### 6.2.2 Monotonicity and Convergence

**Theorem 5**.: Monotonicity and convergence. _The bounds from (55) are monotonic (Assumption 1) and convergent (Assumption 2) to (54)._

See the proof in Sec. 10.7. Finally, bounding (54) using Theorem 4 corresponds, in our general framework from Section 3, to (20).

#### 6.2.3 Re-use of Calculations

The bounds can be tightened on demand, incrementally and without overhead. Moving from simplification level \(s\) to level \(s+1\) corresponds to adding a number of additional particles to \(b^{s}\) to get \(b^{s+1}\).
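A minimal Julia sketch of this incremental promotion, under the index-set view of (53), could look as follows; the level-size policy `n_new` is our assumption.

```julia
using Random

# Hedged sketch of the incremental level promotion behind (53) and this subsection:
# level s+1 keeps all indices of level s and adds freshly drawn ones, so all work
# already done for the level-s index set is reused as-is.
function promote_level!(A::Set{Int}, n_x::Int, n_new::Int)
    remaining = collect(setdiff(1:n_x, A))        # particles not yet in the simplified belief
    shuffle!(remaining)                           # draw new indices without replacement
    for i in remaining[1:min(n_new, length(remaining))]
        push!(A, i)
    end
    return A                                      # now plays the role of the level s+1 set
end
```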
For the bounds calculation, we store the highlighted elements of the matrix in Fig. 7. This allows us to reuse the calculations when promoting the simplification level, and **between the lower and the upper bounds** at a particular time index. Namely, in the worst case, after a few bound-contracting iterations the bounds coincide with the reward itself, and the entire calculation is roughly time-equivalent to calculating the original reward. We provide the theoretical time complexity analysis using the specific bounds (from Section 6.1) in Appendix 10.8. We now present our simulations.

## 7 Simulations and Results

We evaluate our proposed framework and approaches in simulation, considering the setting of a nonparametric, fully continuous POMDP. Our implementation is built upon the JuliaPOMDP package collection (Egorov et al., 2017). For our simulations, we used a 16-core 11th Gen Intel(R) Core(TM) i9-11900K running at 3.50 GHz with 64 GB of RAM. First, we study empirically the specific simplification and bounds from Section 6 and show that they become tighter as the number of particles increases. We then benchmark our algorithms for planning in the setting of a given belief tree (Section 4) and in an anytime MCTS setting (Section 5). In the former setting, we compare SITH-BSP and LAZY-BSP against Sparse Sampling (Kearns et al., 2002). In the anytime MCTS setting, we compare SITH-PFT with PFT-DPW (Sunberg and Kochenderfer, 2018). This performance evaluation is conducted considering two problems, as discussed next.

### Problems under Consideration

We proceed to the description of the evaluated problems. In both problems the immediate reward for \(b^{\prime}\) is

\[\rho(b,a,z^{\prime},b^{\prime})=-(1-\lambda)\underset{x^{\prime}\sim b^{\prime}}{\mathbb{E}}\Big{[}r(x^{\prime})\Big{]}-\lambda\hat{\mathcal{H}}(b,a,z^{\prime},b^{\prime}). \tag{58}\]

#### 7.1.1 Continuous Light Dark

Our first problem is the 2D _continuous Light-Dark_ problem. The robot starts at some unknown point \(x_{0}\in\mathbb{R}^{2}\). In this world, there are spatially scattered beacons with known locations. Near the beacons, the obtained observations are less "noisy". The robot's mission is to get to the goal located at the upper right corner of the world. The state-dependent reward in this problem is \(r(x)=-\|x-x^{\text{goal}}\|_{2}^{2}\). The initial belief is \(b_{0}=\mathcal{N}(\mu_{0},I\cdot\sigma_{0})\), where we select \(x_{0}=\mu_{0}\) as the actual initial robot state. The motion and observation models are

\[\mathbb{P}_{T}(x^{\prime}|x,a)=\mathcal{N}(x+a,I\cdot\sigma_{T}),\]

and

\[\mathcal{O}=\mathbb{P}_{Z}(z|x)=\mathcal{N}(x-x^{b},I\cdot\sigma_{\mathcal{O}}\cdot\max\{d(x),d_{\min}\}), \tag{59}\]

respectively, where \(d(x)\) is the \(\ell^{2}\) distance from the robot's state \(x\) to the nearest beacon, whose known location is denoted by \(x^{b}\), and \(d_{\min}\) is a tunable parameter.

Figure 7: Schematic visualization of the calculation-reuse principle in the bounds. We select **columns** using indices from the set \(A_{k}^{s}\) and rows by \(A_{k+1}^{s}\). The resulting constituents of the bounds are marked in olive.

#### 7.1.2 Target Tracking

Our second problem is 2D _continuous Target Tracking_. Here, in addition to the agent, we have a moving target, and the belief is maintained over both positions, of the agent and of the target. The state-dependent reward in this problem is \(r(x)=-\|x^{\text{agent}}-x^{\text{target}}\|_{2}^{2}\).
The motion model of the target and the agent follows

\[\mathbb{P}_{T}(\cdot|x,a)=\mathcal{N}(x^{\text{agent}}+a^{\text{agent}},\Sigma_{T})\cdot\mathcal{N}(x^{\text{target}}+a^{\text{target}},\Sigma_{T}),\]

where by \(x\) we denote the concatenated \(\{x^{\text{agent}},x^{\text{target}}\}\). For the target actions, we use a circular buffer with the action sequence \(\{\uparrow,\uparrow,\leftarrow\}\) of unit-length motion primitives. For simplicity, we assume that in inference, as well as in the planning session, the agent knows the target's action sequence. The observation model is the product of the observation model from the previous section and an additional observation model due to the moving target. Thus, the overall observation model is

\[\mathbb{P}_{Z}(\cdot|x;\{x^{b,i}\}_{i=1})=\mathcal{N}(x^{\text{agent}},\Sigma_{\mathcal{O}}(x^{\text{agent}};\{x^{b,i}\}_{i=1}))\cdot\mathcal{N}(x^{\text{agent}}-x^{\text{target}},\Sigma_{\mathcal{O}}(x^{\text{agent}},x^{\text{target}})),\]

where \(\Sigma_{\mathcal{O}}(x^{\text{agent}};\{x^{b,i}\}_{i=1})\) conforms to the observation model covariance described in Section 7.1.1 and

\[\Sigma_{\mathcal{O}}(x^{\text{agent}},x^{\text{target}})=\begin{cases}\sigma_{T}^{2}I\|x^{\text{agent}}-x^{\text{target}}\|_{2},&\text{if }\|x^{\text{agent}}-x^{\text{target}}\|_{2}\geq d_{\text{min}}\\ \sigma_{\mathcal{O}}^{2}I,&\text{else}\end{cases}. \tag{60}\]

Before the planning experiments, we study the entropy estimators and the bounds presented in Theorem 4.

### Entropy Estimators and Bounds Study

In this section, we experiment with a passive case of the continuous 2D Light Dark problem from Section 7.1.1. Our goal is to study the various entropy estimators, and our bounds from Section 6 derived over the estimator developed in Boers et al. (2010). In this study, we manually supply the robot with an action sequence to execute. This results in a single lace of beliefs corresponding to the observations that the robot actually obtained by executing the externally given action sequence. Over this sequence of beliefs, at each time instance we calculate the minus differential entropy (information) estimate in four ways. The first is the Boers estimator (Boers et al., 2010), together with our bounds from Theorem 4. The second is the KDE approximation, as done by Fischer and Tas (2020). The third is the naive calculation of the discrete entropy over the particle weights: \(\hat{\mathcal{H}}(b)=-\sum_{i}w^{i}\cdot\log w^{i}\). The fourth estimator is analytical, and it requires additional explanation. If we make the unrealistic assumption that the robot's ground-truth state, from which the observation has been taken, is known, plug it into the covariance matrix of (59), and set the prior belief to be Gaussian, then the motion and observation models meet all the requirements for an exact update by the Kalman Filter (linear models with additive Gaussian noise); for the proof see Thrun et al. (2005). In this case the belief stays Gaussian, and the differential entropy has a closed-form solution. We have two scenarios. In the first scenario, the robot moves diagonally to the goal using a unit-length action \(\nearrow\) (Fig. 8(a)) fifteen times. Along the way, it passes close by two beacons; consequently, the robot's information about its state peaks twice. In our second scenario, the robot moves five times to the right \(\rightarrow\), followed by ten times up \(\uparrow\), and again five times to the right \(\rightarrow\) (Fig. 8(b)).
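Before turning to the parameters of this study, a minimal Julia sketch of the first estimator, (54), may be useful; `pz` and `pt` are assumed callables wrapping the densities \(\mathbb{P}_{Z}(z_{k+1}|\cdot)\) and \(\mathbb{P}_{T}(\cdot|\cdot,a_{k})\) for the fixed observation and action.

```julia
# Hedged sketch of the entropy estimator (54) (Boers et al., 2010) for weighted particles.
# xs_k, ws_k: particles/weights of b_k; xs_k1, ws_k1: particles/weights of b_{k+1}.
# The bounds (56)-(57) restrict one of the sums below to the index set A^s.
function boers_entropy_estimate(pz, pt, xs_k, ws_k, xs_k1, ws_k1)
    n = length(ws_k)
    term_a = log(sum(pz(xs_k1[i]) * ws_k[i] for i in 1:n))
    term_b = sum(ws_k1[i] *
                 log(pz(xs_k1[i]) * sum(pt(xs_k1[i], xs_k[j]) * ws_k[j] for j in 1:n))
                 for i in 1:n)
    return term_a - term_b   # the estimate of Eq. (54); O(n^2), as stated in Section 6.2.1
end
```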
The prior belief in this setting follows a Gaussian distribution \(b_{0}=\mathcal{N}\left(\begin{pmatrix}0.0\\ 0.0\end{pmatrix},\begin{pmatrix}2.0&0.0\\ 0.0&2.0\end{pmatrix}\right)\), and the motion and observation model parameters are \(\sigma_{\mathcal{O}}=\sigma_{T}=0.075\), \(d_{\text{min}}=0.0001\). The number of weighted particles of the unsimplified belief is \(n_{x}=300\). For creating the initial weighted particles we use the following proposal:

\[q=0.25\cdot\mathcal{N}\left(\begin{pmatrix}0.0\\ 1.0\end{pmatrix},\begin{pmatrix}2.0&0.0\\ 0.0&0.2\end{pmatrix}\right)+0.25\cdot\mathcal{N}\left(\begin{pmatrix}1.0\\ 0.0\end{pmatrix},\begin{pmatrix}2.0&0.0\\ 0.0&0.2\end{pmatrix}\right)+0.25\cdot\mathcal{N}\left(\begin{pmatrix}-1.0\\ 0.0\end{pmatrix},\begin{pmatrix}2.0&0.0\\ 0.0&0.2\end{pmatrix}\right)+0.25\cdot\mathcal{N}\left(\begin{pmatrix}1.0\\ -1.0\end{pmatrix},\begin{pmatrix}2.0&0.0\\ 0.0&0.2\end{pmatrix}\right).\]

The initial weights are the ratio \(w(x)=\frac{b_{0}(x)}{q(x)}\).

Figure 8: The plot shows the evolution of the belief, in terms of sets of particles, along the actual trajectory of the robot. The color of the particles, from yellow to red, illustrates the evolution of the belief over time. The green ellipses represent the covariances of the parametric Gaussian beliefs obtained from the Kalman filter update. The canvas color corresponds to \(\sigma_{\mathcal{O}}=\sigma_{T}=0.075\). **(a)** Our first scenario. **(b)** Our second scenario.

To examine the monotonic convergence of the bounds with a growing number of simplified-belief particles, we plot the bounds (56) and (57) on the minus entropy estimator (54), alongside the estimators described above, for the entire belief trajectory of the robot. The results for the first and second scenarios are provided in Figs. 9 and 10, respectively. For both scenarios we observe that the bounds become tighter as the number of particles of the simplified belief, \(n_{x}^{s}\), increases. We also observe that all the estimators vary, but the overall trend is similar, apart from the discrete entropy over the weights, which fails to adequately represent the uncertainty of the belief. This is an anticipated result. Let us proceed to the planning experiments.

### Planning

In this section we study and benchmark our efficient planning algorithms. In our Algorithms 1 and 5, the tree is built by SS (Kearns et al., 2002), such that the given belief tree is obtained when the algorithm descends to the leaves. We first compare Algorithms 1 and 5 versus SS, and then proceed to simulations in an anytime MCTS setting. For all further experiments, the belief is approximated by a set of \(n_{x}\) weighted samples as in (52). The robot replans after each executed action.

#### 7.3.1 Acceleration measures

Let us begin this section by describing our measures of acceleration. We report the planning-time speedup in terms of saved accesses to particles. The following speedup is based on the final number of simplified-belief particles required for planning:

\[\frac{\sum_{i}\left(n_{i,x}^{2}-n_{i,x}^{s}n_{i,x}\right)}{\sum_{i}n_{i,x}^{2}}\cdot 100=\frac{\sum_{i}\left(n_{i,x}-n_{i,x}^{s}\right)}{\sum_{i}n_{i,x}}\cdot 100, \tag{61}\]

where the summation is over the future posterior beliefs in all the belief trees in a number of consecutive planning sessions in a particular scenario. Eq. (61) measures the relative speedup without the time spent on resimplifications. It is calculated at the end of several consecutive planning sessions.
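A one-line Julia rendering of the simplified form of (61), with illustrative vector names, reads:

```julia
# Sketch of the particle-based speedup measure (61): `n_x` and `n_x_s` hold the original
# and the final simplified particle counts per belief node, over all trees in the scenario.
speedup_particles(n_x::Vector{Int}, n_x_s::Vector{Int}) =
    100 * sum(n_x .- n_x_s) / sum(n_x)
```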
To calculate the speedup according to (61), one shall take the **final** number of particles \(n_{i,x}^{s}\) of the simplified belief used for the simplified reward at each belief node \(i\), sum over all the nodes of the belief trees (given or constructed on the fly) from the planning sessions, and perform the calculation portrayed by (61). Importantly, the acceleration measure (61) assumes that the time of evaluating the motion and observation models does not vary from one evaluation to another. To calculate the planning-time speedup we use the following metric:

\[\frac{t_{\mathrm{baseline}}-t_{\mathrm{our}}}{t_{\mathrm{baseline}}}\cdot 100. \tag{62}\]

If the quantities (61) and (62) are identical, we can conclude that there is no overhead from resimplifications and adapting the bounds. Note also that it is not clear in the first place how many particles \(n_{x}\) to take for the belief representation.

Figure 10: Bounds convergence for our second scenario, \(n_{x}=300\). **(a)** \(n_{x}^{s}=30\) particles **(b)** \(n_{x}^{s}=150\) particles **(c)** \(n_{x}^{s}=270\) particles.

The number of particles \(n_{x}\) shall be as large as possible, due to the fact that we do not know when the belief represented by weighted particles converges to the corresponding theoretical belief. In other words, we can always make the overhead from resimplifications and adaptations negligible by increasing \(n_{x}\). To thoroughly study the acceleration yielded by our simplification paradigm, we calculate the total speedup over a number of consecutive planning sessions, in terms of particles according to (61) and in terms of time according to (62). Executing the terminal action inside the goal region yields a large positive reward, while executing it outside the radius yields a negative reward of \(-200\). For all other actions, the multi-objective reward function is \(\rho(b,a,z,b^{\prime})=-\mathop{\mathbb{E}}_{x\sim b^{\prime}}[\|x\|_{2}]-\hat{\mathcal{H}}(b,a,z,b^{\prime})\). The \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline BSP Alg. & \(n_{x}\) & \(\lambda\) & particles speedup (61) & time speedup (62) & resimpl. calls (recursive) & motion model calls & obs.
model calls & return (\(\hat{V}\)) \\ \hline Alg 1 STH & & \(78.76\pm 0.20\) & \(64.44\pm 1.51\) & \(2.05\cdot 10^{5}\pm 0.05\cdot 10^{5}\) & \(3.13\cdot 10^{5}\pm 0.02\cdot 10^{5}\) & \(9.62\cdot 10^{6}\pm 0.0\) & \(-115.49\pm 16.58\) \\ \hline Alg 5 LAZY & \(100\) & \(0.1\) & \(85.46\pm 1.22\) & \(71.59\pm 1.52\) & \(10.71\cdot 10^{5}\pm 4.61\cdot 10^{5}\) & \(2.38\cdot 10^{5}\pm 0.13\cdot 10^{5}\) & \(9.62\cdot 10^{6}\pm 0.0\) & \(-115.49\pm 16.58\) \\ \hline SS & & \multicolumn{5}{c|}{} & \multicolumn{5}{c|}{} & \multicolumn{5}{c|}{} & \multicolumn{5}{c|}{} \\ \hline Alg 1 STH & & \(68.82\pm 0.32\) & \(53.59\pm 2.05\) & \(3.36\cdot 10^{5}\pm 0.04\cdot 10^{5}\) & \(4.22\cdot 10^{5}\pm 0.03\cdot 10^{5}\) & \(9.62\cdot 10^{6}\pm 0.0\) & \(-103.51\pm 14.91\) \\ \hline Alg 5 LAZY & \(100\) & \(0.2\) & \(80.09\pm 1.52\) & \(65.01\pm 1.88\) & \(25.65\cdot 10^{5}\pm 6.17\cdot 10^{5}\) & \(3.01\cdot 10^{5}\pm 0.18\cdot 10^{5}\) & \(9.62\cdot 10^{6}\pm 0.0\) & \(-103.51\pm 14.91\) \\ \hline SS & & \multicolumn{5}{c|}{} & \multicolumn{5}{c|}{} & \multicolumn{5}{c|}{} & \multicolumn{5}{c|}{} \\ \hline Alg 1 STH & & \(58.33\pm 0.52\) & \(42.76\pm 2.96\) & \(4.13\cdot 10^{5}\pm 0.05\cdot 10^{5}\) & \(5.40\cdot 10^{5}\pm 0.01\cdot 10^{5}\) & \(9.62\cdot 10^{6}\pm 0.0\) & \(-91.86\pm 13.88\) \\ \hline Alg 5 LAZY & \(100\) & \(0.3\) & \(74.85\pm 2.63\) & \(58.94\pm 3.04\) & \(42.66\cdot 10^{5}\pm 9.80\cdot 10^{5}\) & \(3.59\cdot 10^{5}\pm 0.29\cdot 10^{5}\) & \(9.62\cdot 10^{6}\pm 0.0\) & \(-91.86\pm 13.88\) \\ \hline SS & & \multicolumn{5}{c|}{} & \multicolumn{5}{c|}{} & \multicolumn{5}{c|}{} & \multicolumn{5}{c|}{} \\ \hline Alg 1 STH & & \(45.66\pm 0.83\) & \(29.33\pm 4.78\) & \(4.70\cdot 10^{5}\pm 0.04\cdot 10^{5}\) & \(6.84\cdot 10^{5}\pm 0.08\cdot 10^{5}\) & \(9.62\cdot 10^{6}\pm 0.0\) & \(-94.84\pm 11.77\) \\ \hline Alg 5 LAZY & \(100\) & \(0.4\) & \(69.94\pm 1.89\) & \(53.85\pm 2.56\) & \(59.05\cdot 10^{5}\pm 8.76\cdot 10^{5}\) & \(4.16\cdot 10^{5}\pm 0.22\cdot 10^{5}\) & \(9.62\cdot 10^{6}\pm 0.0\) & \(-80.44\pm 11.77\) \\ \hline SS & & \multicolumn{5}{c|}{} & \multicolumn{5}{c|}{} & \multicolumn{5}{c|}{} & \multicolumn{5}{c|}{} & \multicolumn{5}{c|}{} \\ \hline Alg 1 STH & & \(34.46\pm 0.79\) & \(18.98\pm 4.16\) & \(5.27\cdot 10^{5}\pm 0.05\cdot 10^{5}\) & \(7.92\cdot 10^{5}\pm 0.01\cdot 10^{5}\) & \(9.62\cdot 10^{6}\pm 0.0\) & \(-66.3\pm 8.0\) \\ \hline Alg 5 LAZY & \(100\) & \(0.5\) & \(63.6\pm 2.23\) & \(46.67\pm 2.81\) & \(81.48\cdot 10^{5}\pm 5.82\cdot 10^{5}\) & \(4.87\cdot 10^{5}\pm 0.24\cdot 10^{5}\) & \(9.62\cdot 10^{6}\pm 0.0\) & \(-66.3\pm 8.0\) \\ \hline SS & & \multicolumn{5}{c|}{} & \multicolumn{5}{c|}{} & \multicolumn{5}{c|}{} & \multicolumn{5}{c|}{} & \multicolumn{5}{c|}{} \\ \hline Alg 1 STH & & \(25.09\pm 0.89\) & \(12.05\pm 4.83\) & \(5.85\cdot 10^{5}\pm 0.05\cdot 10^{5}\) & \(8.64\cdot 10^{5}\pm 0.05\cdot 10^{5}\) & \(9.62\cdot 10^{6}\pm 0.0\) & \(-55.36\pm 6.93\) \\ \hline Alg 5 LAZY & \(100\) & \(0.6\) & \(56.32\pm 2.72\) & \(38.45\pm 3.65\) & \(113.26\cdot 10^{5}\pm 11.45\cdot 10^{5}\) & \(5.71\cdot 10^{5}\pm 0.28\cdot 10^{5}\) & \(9.62\cdot 10^{6}\pm 0.0\) & \(-55.36\pm 6.93\) \\ \hline SS & & \multicolumn{5}{c|}{} & \multicolumn{5}{c|}{} & \multicolumn{5}{c|}{} & \multicolumn{5}{c|}{} & \multicolumn{5}{c|}{} \multicolumn{5}{c|}{} & \multicolumn{5}{c|}{} \\ \hline SS & & \multicolumn{5}{c|}{} & \multicolumn{5}{c|}{} & \multicolumn{5}{c|}{} & \multicolumn{5}{c|}{} & \multicolumn{5}{c|}{} \\ \hline \end{tabular} \end{table} Table 1: This table shows cumulative results of 
\(20\) consecutive alternating planning and execution sessions, averaged over \(15\) trials of the Continuous Light Dark problem. The given belief tree in a single planning session has \(4809\) belief nodes. Overall, in \(20\) planning sessions, we have \(96180\) belief nodes. The horizon in each planning session is \(L=3\). The number of observations sampled from each belief-action node is \(n_{z}^{1}=1\), \(n_{z}^{2}=3\), \(n_{z}^{3}=3\) at the depths \(1,2,3\) corresponding to the superscripts. In this table we examine the influence of a varying number of belief particles. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline BSP Alg. & \(n_{x}\) & \(\lambda\) & particles speedup (61) & time speedup (62) & resimpl. calls (recursive) & motion model calls & obs. model calls & return (\(\hat{V}\)) \\ \hline Alg 1 SITH & & \(34.1\pm 0.8\) & \(25.01\pm 5.11\) & \(5.30\cdot 10^{5}\pm 0.04\cdot 10^{5}\) & \(31.80\cdot 10^{5}\pm 0.25\cdot 10^{5}\) & \(19.24\cdot 10^{5}\pm 0.0\) & \(-69.36\pm 7.95\) \\ \hline Alg 5 LAZY & \(200\) & \(0.5\) & \(64.0\pm 2.98\) & \(51.71\pm 4.9\) & \(83.95\cdot\) agent can move in eight evenly spread directions, \(\mathcal{A}=\{\rightarrow,\nearrow,\uparrow,\nwarrow,\leftarrow,\swarrow,\downarrow,\searrow,Null\}\). The motion and observation models and the initial belief are \(\mathbb{P}_{T}(\cdot|x,a)=\mathcal{N}(x+a,\Sigma_{T}),\ \mathbb{P}_{Z}(z|x)=\mathcal{N}(x,\min\{1,\|x-x^{b}\|_{2}^{2}\}\cdot\Sigma_{\mathcal{O}}),\ b_{0}=\mathcal{N}(x_{0},\Sigma_{0})\), respectively. \(x^{b}\) is the 2D location of the beacon, and all covariance matrices are diagonal (i.e., \(\Sigma=I\cdot\sigma^{2}\)). We selected the following parameters: \(x_{0}=\begin{pmatrix}-5.5\\ 0.0\end{pmatrix},\Sigma_{0}=\begin{pmatrix}0.2&0.0\\ 0.0&0.2\end{pmatrix},\sigma_{T}=\sigma_{\mathcal{O}}=0.075\). We experiment with \(10\) different configurations (rows of Table 7) that differ in \(n_{x}\) (number of particles), \(L\) (MCTS simulation depth), and \(\#\)iter (number of MCTS simulation iterations per planning session). Each scenario comprises \(10\) planning sessions, i.e., the agent performs up to \(10\) planning action-executing iterations. The scenario stops if the best action determined in planning is \(Null\). We repeat each experiment 25 times. In each such repetition we run PFT-DPW and SITH-PFT

Figure 13: In this illustration we show the second trial of Table 5, configuration \(n_{x}=250\). The canvas color here corresponds to \(\sigma_{\mathcal{O}}=\sigma_{T}=0.1\). (a) Agent particles (b) Target particles.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{BSP Alg.} & \multirow{2}{*}{\(n_{x}\)} & \multirow{2}{*}{\(n_{z}^{1}\)} & \multirow{2}{*}{\(n_{z}^{2}\)} & \multirow{2}{*}{\(n_{z}^{3}\)} & \multirow{2}{*}{\(\lambda\)} & \multirow{2}{*}{\(L\)} & & & & & & & & \\ & & & & & & & & & & & & & & \\ \hline \multirow{2}{*}{Alg 5 LAZY} & \multirow{2}{*}{\(250\)} & \multirow{2}{*}{\(1\)} & \multirow{2}{*}{\(3\)} & \multirow{2}{*}{\(3\)} &
model calls} & \multicolumn{2}{c|}{return (\(\mathrm{\SIUnitSymbolMicro}\))} \\ \cline{6-13} \cline{8-13} \cline{13-13} Alg 1 STH & & & \(77.43\pm 0.26\) & \(60.3\pm 2.21\) & \(1.69\cdot 10^{2}\pm 0.04\cdot 10^{-1}\) & \(3.48\cdot 10^{4}\pm 0.03^{-1}\) & \(10.22\cdot 10^{6}\pm 0.0\) & \(-79.87\pm 9.69\) \\ \hline Alg 5 LAZY & \(100\) & \(0.1\) & \(86.97\pm 1.28\) & \(71.18\pm 2.42\) & \(7.44\cdot 10^{-1}\) & \(3.09\cdot 10^{4}\) & \(2.32\cdot 10^{6}\pm 0.16^{-10}\) & \(10.22\cdot 10^{6}\pm 0.0\) & \(-79.87\pm 9.69\) \\ \hline \multicolumn{13}{|c|}{SS} & \multicolumn{13}{c|}{} & \multicolumn{13}{c|}{} & \multicolumn{13}{c|}{} & \multicolumn{13}{c|}{} & \multicolumn{13}{c|}{} & \multicolumn{13}{c|}{} & \multicolumn{13}{c|}{} & \multicolumn{13}{c|}{} & \multicolumn{13}{c|}{} \\ \cline{6-13} \cline{8-13} \cline{13-13} \cline{13-13} Alg 1 STH & & & \(64.64\pm 0.57\) & \(46.39\pm 2.27\) & \(2.60\cdot 10^{5}\pm 0.04\cdot 10^{-1}\) & \(5.03\cdot 10^{4}\pm 0.07^{-1}\) & \(10.22\cdot 10^{6}\pm 0.0\) & \(-73.38\pm 9.8\) \\ \hline Alg 5 LAZY & \(100\) & \(0.2\) & \(83.52\pm 1.7\) & \(67.24\pm 2.62\) & \(16.52\cdot 10^{5}\pm 5.56\cdot 10^{3}\) & \(2.75\cdot 10^{4}\pm 0.022\cdot 10^{6}\pm 0.0\) & \(-73.38\pm 9.8\) \\ \hline \multicolumn{13}{|c|}{SS} & \multicolumn{13}{c|}{} & \multicolumn{13}{c|}{} & \multicolumn{13}{c|}{} & \multicolumn{13}{c|}{} & \multicolumn{13}{c|}{} & \multicolumn{13}{c|}{} & \multicolumn{13}{c|}{} & \multicolumn{13}{c|}{} \\ \cline{6-13} \cline{8-13} \cline{13-13} \cline{13-13} Alg 1 STH & & & \(49.57\pm 0.93\) & \(29.25\pm 2.39\) & \(3.14\cdot 10^{5}\pm 0.05\cdot 10^{5}\) & \(6.86\cdot 10^{5}\pm 0.10\cdot 10^{8}\) & \(10.44\cdot 10^{6}\pm 0.0\) & \(-66.29\pm 9.3\) \\ \hline Alg 5 LAZY & \(100\) & \(0.3\) & \(79.83\pm 2.55\) & \(63.34\pm 3.45\) & \(26.61\cdot 10^{5}\pm 8.41\cdot 10^{5}\) & \(3.21\cdot 10^{4}\pm 0.30\cdot 10^{8}\) & \(10.44\cdot 10^{6}\pm 0.0\) & \(-66.29\pm 9.3\) \\ \hline \multicolumn{13}{|c|}{SS} & \multicolumn{13}{c|}{} & \multicolumn{13}{c|}{} & \multicolumn{13}{c|}{} & \multicolumn{13}{c|}{} & \multicolumn{13}{c|}{} & \multicolumn{13}{c|}{} & \multicolumn{13}{c|}{} & \multicolumn{13}{c|}{} & \multicolumn{13}{c|}{} \\ \cline{6-13} \cline{8-13} \cline{13-13} \cline{13-13} \cline{13-13} \cline{13-13} \cline{13-13} \cline{13-13} \cline{13-13} \cline{13-13} \cline{13-13} \cline{13-13} \cline{13-13} \cline{13-13} \cline{13-13} \cline{13-13} \cline{13-13} \cline{13-13} \cline{13-13} \cline{13-13-13} \cline{13-13} \cline{13-13} \cline{13-13} \cline{13-13} \cline{13-13} \cline{13-13-13} \cline{13-13} \cline{13-13-13} \cline{13-13} \cline{13-13} \cline{13-13-13} \cline{13-13} \cline{13-13-13} \cline{13-13-13} \cline{13-13-13} \cline{13-13-13} \cline{13-13-13} \cline{13-13-13} \cline{13-13-13} \cline{13-13-13} \cline{13-13-13} \cline{13-13-13-13} \cline{13-13-13-13} \cline{13-13-13-13} \cline{13-13-13-13} \cline{13-13-13-13} \cline{13-13-13-13-13} \cline{13-13-13-13-13} \cline{13-13-13-13-13-13} \cline{13-13 \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{BSP Alg.} & \multirow{2}{*}{\(n_{x}\)} & \multirow{2}{*}{\(n_{z}^{1}\)} & \multirow{2}{*}{\(n_{z}^{2}\)} & \multirow{2}{*}{\(n_{z}^{3}\)} & \multirow{2}{*}{\(\lambda\)} & \multirow{2}{*}{\(\lambda\)} & \multirow{2}{*}{\(L\)} & & & & & & & & & & & & & & & & & & & & & & & \\ & & & & & & & & & & & & & & & & & & & & & & & \\ \hline \multirow{2}{*}{Alg 5 LAZY} & \multirow{2}{*}{\(250\)} & \multirow{2}{*}{\(1\)} & \multirow{2}{*}{\(3\)} & \multirow{2}{*}{\(3\)} & 
\multirow{2}{*}{\(0.5\)} & \multirow{2}{*}{\(3\)} & \multirow{2}{*}{\(3487\)} & \multirow{2}{*}{\(949\)} & \multirow{2}{*}{\(776\)} & \multirow{2}{*}{\(884\)} & \multirow{2}{*}{\(379\)} & \multirow{2}{*}{\(106\)} & \multirow{2}{*}{\(48\)} & \multirow{2}{*}{\(34\)} & \multirow{2}{*}{\(11\)} & \multirow{2}{*}{\(139\)} \\ & & & & & & & & & & & & & & \\ \hline \multirow{2}{*}{Alg 1 SITH} & \multirow{2}{*}{\(250\)} & \multirow{2}{*}{\(1\)} & \multirow{2}{*}{\(3\)} & \multirow{2}{*}{\(3\)} & \multirow{2}{*}{\(0.5\)} & \multirow{2}{*}{\(3\)} & \multirow{2}{*}{\(10\)} & \multirow{2}{*}{\(19\)} & \multirow{2}{*}{\(46\)} & \multirow{2}{*}{\(144\)} & \multirow{2}{*}{\(538\)} & \multirow{2}{*}{\(966\)} & \multirow{2}{*}{\(1266\)} & \multirow{2}{*}{\(1208\)} & \multirow{2}{*}{\(1164\)} & \multirow{2}{*}{\(1452\)} \\ \hline \end{tabular} \end{table} Table 6: This table displays the number of beliefs at each simplification level in the given tree after the identification of the optimal action at the root \(b_{k}\). Here we investigate the Target Tracking problem and the belief tree as in Fig. 14. The size of the given belief tree is \(6814\) belief nodes.

Figure 14: Simplification levels at each depth of the given belief tree of the Target Tracking problem (Section 7.1.2) after determining the best action for one of the planning sessions. Here we present planning session 6 of the first trial of configuration \(n_{x}=250\) of Table 5. The radius of the circles represents the fraction of all nodes at a particular depth that have a particular simplification level. This figure is associated with Table 6. **(a)** LAZY-SITH-BSP Alg 5 **(b)** SITH-BSP Alg 1.

Figure 15: 2D Continuous Light Dark. The agent starts from an unknown initial location and is given an initial belief. The goal is to get to location \((0,0)\) (circled in red) and execute the terminal action. Near the beacon (white light) the observations are less noisy. We consider a multi-objective function, accounting for the distance to the goal and the differential entropy approximation (with a minus sign in the reward notation). Executing the terminal action inside the red circle gives the agent a large positive reward, but executing it outside yields a large negative reward.

with the same seed and calculate the relative time speedup in percent according to (62), where \(t_{\mathrm{baseline}}\) and \(t_{\mathrm{our}}\) are the running times of the baseline (PFT-DPW) and our method, respectively. In all configurations, we obtained a significant time speedup of approximately \(20\%\) while achieving the exact same solution as PFT. In Table 7 we report the mean and standard error of (62), as well as the maximum and minimum values. Remarkably, we observe that SITH-PFT never slows down PFT-DPW. We also present the total running times of \(25\) repetitions of at most \(10\) planning sessions (the simulation stops if the best identified action is \(Null\)) in Table 8. Note that we divided the total planning time by the number of planning sessions in each repetition. An illustration of the evaluated scenario can be found in Fig. 15. Note that SITH-PFT yields a solution identical to that of PFT, while IPFT demonstrates severely degraded behavior. We remind the reader that the purpose of our work is to speed up the PFT approach when coupled with an information-theoretic reward.
Since the two algorithms produce identical belief trees and actions at the end of each planning session, there is no point in reporting the algorithms' _identical_ performance (apart from planning time).

### Discussion

Although the speedup was significant and steady in all simulations, we did not observe the speedup growing with the number of belief particles in any simulation. This can be explained by the fact that the estimators are far from convergence, and increasing the number of particles just randomly changes the estimator itself due to sampling-induced variance. A limitation of our approach is that it leans on converging bounds, which are not trivial to derive and are specific to a particular reward function. In addition, it requires slightly more caching than the baseline. Our simplification approach may still be ill-timed, since the resimplifications take an additional toll in terms of running time.

## 8 Conclusions

We contributed a rigorous, provable theory of adaptive multilevel simplification that accelerates the solution of fully continuous POMDPs with belief-dependent rewards. Our theory always identifies the same optimal action or policy as the unsimplified analog. Our theoretical approach receives as input adaptive bounds over the belief-dependent reward. Using the suggested theory and these bounds, we formulated three algorithms, considering a given belief tree and an anytime MCTS setting. We also contributed a specific simplification for nonparametric beliefs represented by weighted particles, and derived novel bounds over a differential entropy estimator; these bounds utilize only a subset of the particles and are computationally cheaper than the estimator itself. Our experiments demonstrate that our algorithms are superior in terms of computation time while guaranteed to have the same performance as the baselines. In the setting of a given belief tree we achieved a speedup of up to \(70\%\). In an anytime MCTS setting, our algorithm enjoyed a speedup of \(20\%\).

## 9 Funding

This research was supported by the Israel Science Foundation (ISF) and by a donation from the Zuckerman Fund to the Technion Artificial Intelligence Hub (Tech.AI).

## 10 Appendix

### Proof for Theorem 1

To shorten the notation, we prove the theorem for the value function under an arbitrary policy. Note that by substituting the policy \(\pi_{(\ell)+}\) by \(\{\pi_{\ell}(b_{\ell}),\pi^{*}_{(\ell+1)+}\}\), where \(a_{\ell}=\pi_{\ell}(b_{\ell})\), we can always obtain the action-value function. Without loss of generality, assume the resimplification hits an arbitrary belief \begin{table} \begin{tabular}{|c|c|c|c|} \hline \((n_{x},\,L,\,\#\)iter.) & mean \(\pm\) std & max. & min. \\ \hline (50, 30, 200) & \(19.35\pm 6.34\) & \(30.17\) & \(7.99\) \\ (50, 50, 500) & \(17.43\pm 5.4\) & \(33.49\) & \(10.72\) \\ (100, 30, 200) & \(21.97\pm 8.74\) & \(49.24\) & \(7.36\) \\ (100, 50, 500) & \(22.54\pm 6.33\) & \(36.09\) & \(13.65\) \\ (200, 30, 200) & \(26.27\pm 9.36\) & \(42.43\) & \(11.17\) \\ (200, 50, 500) & \(26.17\pm 7.64\) & \(44.31\) & \(14.43\) \\ (400, 30, 200) & \(21.88\pm 8.47\) & \(37.04\) & \(10.34\) \\ (400, 50, 500) & \(21.71\pm 6.01\) & \(32.69\) & \(9.67\) \\ (600, 30, 200) & \(20.27\pm 7.38\) & \(32.95\) & \(8.77\) \\ (600, 50, 500) & \(19.93\pm 6.48\) & \(31.26\) & \(6.49\) \\ \hline \end{tabular} \end{table} Table 7: Time speedup (62) obtained by SITH-PFT versus PFT-DPW. The rows are different configurations of the number of belief particles \(n_{x}\), maximal tree depth \(L\), and the number of iterations per planning session.
In all simulations SITH-PFT and PFT-DPW declared _identical_ actions as optimal and exhibited _identical_ belief trees in terms of connectivity and visitation counts.

\begin{table} \begin{tabular}{|c|c|c|} \hline \((n_{x},\,L,\,\#\)iter.) & Algorithm & tot. plan. time [sec] \\ \hline \multirow{2}{*}{(50, 30, 200)} & PFT-DPW & \(49.7\) \\ & SITH-PFT & \(40.25\) \\ \hline \multirow{2}{*}{(50, 50, 500)} & PFT-DPW & \(125.05\) \\ & SITH-PFT & \(103.71\) \\ \hline \multirow{2}{*}{(100, 30, 200)} & PFT-DPW & \(185.47\) \\ & SITH-PFT & \(145.08\) \\ \hline \multirow{2}{*}{(100, 50, 500)} & PFT-DPW & \(460.29\) \\ & SITH-PFT & \(357.57\) \\ \hline \multirow{2}{*}{(200, 30, 200)} & PFT-DPW & \(709.66\) \\ & SITH-PFT & \(526.18\) \\ \hline \multirow{2}{*}{(200, 50, 500)} & PFT-DPW & \(1755.08\) \\ & SITH-PFT & \(1298.86\) \\ \hline \multirow{2}{*}{(400, 30, 200)} & PFT-DPW & \(2672.56\) \\ & SITH-PFT & \(2099.0\) \\ \hline \multirow{2}{*}{(400, 50, 500)} & PFT-DPW & \(6877.24\) \\ & SITH-PFT & \(5403.91\) \\ \hline \multirow{2}{*}{(600, 30, 200)} & PFT-DPW & \(6335.09\) \\ & SITH-PFT & \(5056.96\) \\ \hline \multirow{2}{*}{(600, 50, 500)} & PFT-DPW & \(15682.47\) \\ & SITH-PFT & \(12602.09\) \\ \hline \end{tabular} \end{table} Table 8: Total runtime of \(25\) repetitions of the two algorithms.

action node. The new upper bound will be

\[\overline{\hat{V}}(b_{\ell},\pi_{\ell+})+\frac{1}{M}\left(\underbrace{\overline{\Delta}^{s+1}(b,a,b^{\prime})-\overline{\Delta}^{s}(b,a,b^{\prime})}_{\leq 0}\right)\leq\overline{\hat{V}}(b_{\ell},\pi_{\ell+}). \tag{63}\]

The new lower bound will be

\[\underline{\hat{V}}(b_{\ell},\pi_{\ell+})-\frac{1}{M}\left(\underbrace{\underline{\Delta}^{s+1}(b,a,b^{\prime})-\underline{\Delta}^{s}(b,a,b^{\prime})}_{\leq 0}\right)\geq\underline{\hat{V}}(b_{\ell},\pi_{\ell+}), \tag{64}\]

where \(M=n_{z}^{d}\), depending on the depth \(d\) of the resimplified reward bound. Moreover, if the inequalities involving the increments are strict, i.e., \(\overline{\Delta}^{s}(b,a,b^{\prime})>\overline{\Delta}^{s+1}(b,a,b^{\prime})\) and \(\underline{\Delta}^{s}(b,a,b^{\prime})>\underline{\Delta}^{s+1}(b,a,b^{\prime})\), then the inequalities describing the contraction of the bounds over the value function are also strict. In the case of MCTS, we have \(M=\frac{N(ha)}{N(h^{\prime})}\), where the history \(ha\) corresponds to \(b_{\ell}\) and action \(a\), and \(h^{\prime}\) corresponds to \(b^{\prime}\). \(\blacksquare\)

### Proof of Lemma 1

Recall that the bounds \(\overline{\rho},\underline{\rho}\) of belief nodes and of "weakest link" rollout nodes are refined when the inequality (50) is satisfied. Assume by contradiction that the resimplification strategy does not promote any reward level and \(G(ha)>0\). This means that \(G(ha)/d>0\) and that for all reward bounds the inequality \(\gamma^{d-d^{\prime}}\cdot(\overline{\rho}-\underline{\rho})<\frac{1}{d}G(ha)\) holds. This is not possible, since \(G(ha)/d\) is the mean gap, with respect to the simulations of the MCTS and the depth of the belief tree, over all the nodes that are descendants of \(ha\), each weighted by the appropriate discount factor. See equation (32). \(\blacksquare\)

### Proof of Lemma 2

Observe that

\[\overline{\mathrm{UCB}}(ha)-\underline{\mathrm{UCB}}(ha)=\overline{\hat{Q}}(ha)-\underline{\hat{Q}}(ha). \tag{65}\]

We have already proved the desired properties for \(\overline{\hat{Q}}(ha),\underline{\hat{Q}}(ha)\) in Theorem 1.
Using the convergence \(\underline{\hat{Q}}(\cdot)=\hat{Q}(\cdot)=\overline{\hat{Q}}(\cdot)\), we obtain

\[\underline{\hat{Q}}(\cdot)+c\cdot\sqrt{\nicefrac{{\log(N(h))}}{{N(ha)}}}=\hat{Q}(\cdot)+c\cdot\sqrt{\nicefrac{{\log(N(h))}}{{N(ha)}}}=\overline{\hat{Q}}(\cdot)+c\cdot\sqrt{\nicefrac{{\log(N(h))}}{{N(ha)}}}. \tag{66}\]

This completes the proof. \(\blacksquare\)

### Proof of Theorem 2

We provide a proof by induction on the belief tree structure.

**Base:** Consider an initial given belief node \(b_{0}\). No actions have been taken and no observations have been made. Thus, both the PFT tree and the SITH-PFT tree contain a single identical belief node, and the claim holds.

**Induction hypothesis:** Assume we are given two identical trees with \(n\) nodes, generated by PFT and SITH-PFT. The trees uphold the terms of **Definition 2**.

**Induction step:** Assume by contradiction that different nodes were added to the trees in the next simulation (which, by definition, expands the belief tree by one belief node). Thus, we obtain different trees. Two scenarios are possible:

**Case 1.** The same action-observation sequence \(a_{0},z_{1},a_{1},z_{2}\ldots a_{m}\) was chosen in both trees, but different nodes were added.

**Case 2.** Different action-observation sequences were chosen for the two trees, and thus we obtained different tree structures.

Case 1 is impossible: since the induction hypothesis holds, the last action \(a_{m}\) was taken from the same node, denoted \(h^{\prime}\), shared by and identical in both trees. Next, the same observation model is sampled for a new observation, and a new belief node is added with a rollout emanating from it. The new belief nodes and the rollout are identical in both trees, since both algorithms use the same randomization seed and the same observation and motion models. Hence, Case 2 must be true. There are two possible scenarios in which different action-observation sequences were chosen:

**Case 2.1.** At some point in the action-observation sequence, different observations \(z_{i},z^{\prime}_{i}\) were chosen.

**Case 2.2.** At some point in the action-observation sequence, PFT chose action \(a^{\dagger}\) while SITH-PFT chose a different action, \(\tilde{a}\), or got stuck without picking any action.

Case 2.1 is not possible: if new observations were made, they are the same by the reasoning that rules out Case 1; if we draw existing observations (choose some observation branch down the tree), the same observations are drawn, since they are drawn with the same random seed and from the same observation "pool", and the "pool" is the same since the induction hypothesis holds. Hence, Case 2.2 must be true, i.e., when both algorithms are at the identical node denoted as \(h\), PFT chooses action \(a^{\dagger}\), while SITH-PFT chooses a different action, \(\tilde{a}\), or even gets stuck without picking any action. Specifically, PFT chooses action \(a^{\dagger}=\arg\max\mathrm{UCB}\), and SITH-PFT's candidate action is \(\tilde{a}=\operatorname*{arg\,max}_{a\in A}\underline{\mathrm{UCB}}(ha)\). Two different scenarios are possible:

**Case 2.2.1.** The \(\overline{\mathrm{UCB}},\underline{\mathrm{UCB}}\) bounds over \(h\tilde{a}\) were tight enough, and \(\tilde{a}\) was chosen such that \(a^{\dagger}\neq\tilde{a}\).

**Case 2.2.2.** SITH-PFT is stuck in an infinite loop.
This can happen if the \(\overline{\mathrm{UCB}},\underline{\mathrm{UCB}}\) bounds over \(h\tilde{a}\), and over at least one of its sibling nodes \(ha\), are not tight enough, while all tree nodes are already at the maximal simplification level. Hence, resimplification is triggered over and over without changing anything. Case 2.2.1 is not possible, as the bounds are analytical (always true) and converge to the actual reward (\(\underline{\mathrm{UCB}}=\mathrm{UCB}=\overline{\mathrm{UCB}}\)) at the maximal simplification level. Case 2.2.2 is not possible. If the bounds are not close enough to make a decision, resimplification is triggered. Each time, some \(ha\) node (a sibling of \(h\tilde{a}\), and possibly even \(h\tilde{a}\) itself) is chosen in _SelectBest_ to undergo resimplification. According to Lemmas 1 and 2, after some finite number of iterations it holds for all of the sibling \(ha\) nodes (including \(h\tilde{a}\)) that \(\underline{\mathrm{UCB}}(ha)=\mathrm{UCB}(ha)=\overline{\mathrm{UCB}}(ha)\), and some action can be picked. If different actions have identical values, we choose one by the same rule by which UCB picks among actions with identical values (e.g., lower index/random). Since Case 2.2.2 is false, after some finite number of resimplification iterations SITH-PFT will stop with bounds sufficient to make a decision; since Case 2.2.1 is false, it holds that \(a^{\dagger}=\tilde{a}\). Thus we get a contradiction and the proof is complete. \(\blacksquare\) ### Proof of Theorem 3 Since the same tree is built, according to Theorem 2, the only modification is the final criterion at the end of the planning session at the root of the tree: \(a^{*}=\underset{a}{\arg\max}\ Q(ha)\). Note that we can set the exploration constant of UCB to \(c=0\), in which case UCB is just the \(Q\) function. Thus, if the bounds are not tight enough at the root to decide on an action, resimplification will be called repeatedly until SITH-PFT can make a decision. The action will be identical to the one chosen by UCB in PFT, by arguments similar to those in the proof of Theorem 2. Note that additional final criteria for action selection could be introduced, but this would not matter, as tree consistency is kept according to Theorem 2 and the bounds converge to the immediate rewards and \(Q\) estimations. \(\blacksquare\) ### Proof for Theorem 4 Let us first prove that \(u+\hat{\mathcal{H}}\geq 0\). It holds that \[u+\hat{\mathcal{H}}=\sum_{i\notin A_{k+1}^{s}}w_{k+1}^{i}\cdot\log\left[m\cdot\mathbb{P}_{Z}(z_{k+1}|x_{k+1}^{i})\right]+\sum_{i\in A_{k+1}^{s}}w_{k+1}^{i}\cdot\log\Biggl[\mathbb{P}_{Z}(z_{k+1}|x_{k+1}^{i})\sum_{j=1}^{n_{x}}\mathbb{P}_{T}(x_{k+1}^{i}|x_{k}^{j},a_{k})w_{k}^{j}\Biggr]-\sum_{i=1}^{n_{x}}w_{k+1}^{i}\cdot\log\Biggl[\mathbb{P}_{Z}(z_{k+1}|x_{k+1}^{i})\sum_{j=1}^{n_{x}}\mathbb{P}_{T}(x_{k+1}^{i}|x_{k}^{j},a_{k})w_{k}^{j}\Biggr] \tag{67}\] Eq. (67) therefore equals \[\sum_{i\notin A_{k+1}^{s}}w_{k+1}^{i}\cdot\log\left[m\cdot\mathbb{P}_{Z}(z_{k+1}|x_{k+1}^{i})\right]-\sum_{i\notin A_{k+1}^{s}}w_{k+1}^{i}\cdot\log\Biggl[\mathbb{P}_{Z}(z_{k+1}|x_{k+1}^{i})\sum_{j=1}^{n_{x}}\mathbb{P}_{T}(x_{k+1}^{i}|x_{k}^{j},a_{k})w_{k}^{j}\Biggr]\] Fix an arbitrary index \(i\notin A_{k+1}^{s}\).
The \(\log\) is a monotonically increasing function, so it remains to prove that \[m\,\mathbb{P}_{Z}(z_{k+1}|x_{k+1}^{i})\geq\mathbb{P}_{Z}(z_{k+1}|x_{k+1}^{i})\sum_{j=1}^{n_{x}}\mathbb{P}_{T}(x_{k+1}^{i}|x_{k}^{j},a_{k})w_{k}^{j}\] If \(\mathbb{P}_{Z}(z_{k+1}|x_{k+1}^{i})=0\), we are done. Assume \(\mathbb{P}_{Z}(z_{k+1}|x_{k+1}^{i})\neq 0\). It holds that \[\mathbb{P}_{Z}(z_{k+1}|x_{k+1}^{i})\sum_{j=1}^{n_{x}}\max_{x_{k+1}^{i}}\mathbb{P}_{T}(x_{k+1}^{i}|x_{k}^{j},a_{k})w_{k}^{j}\geq \tag{68}\] \[\mathbb{P}_{Z}(z_{k+1}|x_{k+1}^{i})\sum_{j=1}^{n_{x}}\mathbb{P}_{T}(x_{k+1}^{i}|x_{k}^{j},a_{k})w_{k}^{j} \tag{69}\] We have reached the desired result. Now let us show the second part, \(\ell+\hat{\mathcal{H}}\leq 0\). Observe that \[\ell+\hat{\mathcal{H}}= \tag{70}\] \[\sum_{i=1}^{n_{x}}w_{k+1}^{i}\log\Biggl[\mathbb{P}_{Z}(z_{k+1}|x_{k+1}^{i})\sum_{j\in A_{k}^{s}}\mathbb{P}_{T}(x_{k+1}^{i}|x_{k}^{j},a_{k})w_{k}^{j}\Biggr]-\sum_{i=1}^{n_{x}}w_{k+1}^{i}\log\Biggl[\mathbb{P}_{Z}(z_{k+1}|x_{k+1}^{i})\sum_{j=1}^{n_{x}}\mathbb{P}_{T}(x_{k+1}^{i}|x_{k}^{j},a_{k})w_{k}^{j}\Biggr]\] Select an arbitrary index \(i\). We shall prove that \[\log\left[\mathbb{P}_{Z}(z_{k+1}|x_{k+1}^{i})\sum_{j\in A_{k}^{s}}\mathbb{P}_{T}(x_{k+1}^{i}|x_{k}^{j},a_{k})w_{k}^{j}\right]-\log\left[\mathbb{P}_{Z}(z_{k+1}|x_{k+1}^{i})\sum_{j=1}^{n_{x}}\mathbb{P}_{T}(x_{k+1}^{i}|x_{k}^{j},a_{k})w_{k}^{j}\right]\leq 0\] Again use that \(\log\) is monotonically increasing and assume that \(\mathbb{P}_{Z}(z_{k+1}|x_{k+1}^{i})\neq 0\). We have that \[\sum_{j\in A_{k}^{s}}\mathbb{P}_{T}(x_{k+1}^{i}|x_{k}^{j},a_{k})w_{k}^{j}-\sum_{j=1}^{n_{x}}\mathbb{P}_{T}(x_{k+1}^{i}|x_{k}^{j},a_{k})w_{k}^{j}=-\sum_{j\notin A_{k}^{s}}\mathbb{P}_{T}(x_{k+1}^{i}|x_{k}^{j},a_{k})w_{k}^{j}\leq 0 \tag{71}\] \(\blacksquare\) ### Proof for Theorem 5 We first prove that \[\overline{\Delta}^{s}(b,a,b^{\prime})\geq\overline{\Delta}^{s+1}(b,a,b^{\prime})\geq 0 \tag{72}\] Recall from equation (67) in the previous proof that \[\overline{\Delta}^{s}(b,a,b^{\prime})=\sum_{i\notin A_{k+1}^{s}}w_{k+1}^{i}\log\bigl[m\cdot\mathbb{P}_{Z}(z_{k+1}|x_{k+1}^{i})\bigr]-\sum_{i\notin A_{k+1}^{s}}w_{k+1}^{i}\log\Biggl[\mathbb{P}_{Z}(z_{k+1}|x_{k+1}^{i})\sum_{j=1}^{n_{x}}\mathbb{P}_{T}(x_{k+1}^{i}|x_{k}^{j},a_{k})w_{k}^{j}\Biggr] \tag{73}\] Suppose we promote the simplification level. Without loss of generality, assume that \(A_{k+1}^{s+1}=A_{k+1}^{s}\cup\{q\}\). From the above we conclude that \(q\notin A_{k+1}^{s}\), so \[\overline{\Delta}^{s+1}(b,a,b^{\prime})=\overline{\Delta}^{s}(b,a,b^{\prime})-w_{k+1}^{q}\Bigg(\log\left[m\cdot\mathbb{P}_{Z}(z_{k+1}|x_{k+1}^{q})\right]-\log\left[\mathbb{P}_{Z}(z_{k+1}|x_{k+1}^{q})\sum_{j=1}^{n_{x}}\mathbb{P}_{T}(x_{k+1}^{q}|x_{k}^{j},a_{k})w_{k}^{j}\right]\Bigg) \tag{74}\] It remains to prove that \[m\cdot\mathbb{P}_{Z}(z_{k+1}|x_{k+1}^{q})\geq\mathbb{P}_{Z}(z_{k+1}|x_{k+1}^{q})\sum_{j=1}^{n_{x}}\mathbb{P}_{T}(x_{k+1}^{q}|x_{k}^{j},a_{k})w_{k}^{j} \tag{75}\] We have already done this in the previous theorem.
Now we prove the second part, \[\underline{\Delta}^{s}(b,a,b^{\prime})\geq\underline{\Delta}^{s+1}(b,a,b^{\prime})\geq 0 \tag{76}\] The next equation is the negative of equation (70): \[\underline{\Delta}^{s}(b,a,b^{\prime})=\sum_{i=1}^{n_{x}}w_{k+1}^{i}\log\Biggl[\mathbb{P}_{Z}(z_{k+1}|x_{k+1}^{i})\sum_{j=1}^{n_{x}}\mathbb{P}_{T}(x_{k+1}^{i}|x_{k}^{j},a_{k})w_{k}^{j}\Biggr]-\sum_{i=1}^{n_{x}}w_{k+1}^{i}\log\Biggl[\mathbb{P}_{Z}(z_{k+1}|x_{k+1}^{i})\sum_{j\in A_{k}^{s}}\mathbb{P}_{T}(x_{k+1}^{i}|x_{k}^{j},a_{k})w_{k}^{j}\Biggr] \tag{77}\] Assume again, without loss of generality, that \(A_{k}^{s+1}=A_{k}^{s}\cup\{q\}\). In that case \[\underline{\Delta}^{s}(b,a,b^{\prime})-\underline{\Delta}^{s+1}(b,a,b^{\prime})= \tag{78}\] \[-\sum_{i=1}^{n_{x}}w_{k+1}^{i}\log\Biggl[\mathbb{P}_{Z}(z_{k+1}|x_{k+1}^{i})\sum_{j\in A_{k}^{s}}\mathbb{P}_{T}(x_{k+1}^{i}|x_{k}^{j},a_{k})w_{k}^{j}\Biggr] \tag{79}\] \[+\sum_{i=1}^{n_{x}}w_{k+1}^{i}\log\Biggl[\mathbb{P}_{Z}(z_{k+1}|x_{k+1}^{i})\sum_{j\in A_{k}^{s+1}}\mathbb{P}_{T}(x_{k+1}^{i}|x_{k}^{j},a_{k})w_{k}^{j}\Biggr] \tag{80}\] Select an arbitrary index \(i\); this reduces to the argument at the end of the previous theorem. Note that by definition the bounds are convergent, since at the maximal simplification level we use all the particles. To see this explicitly, suppose that \(\{i\notin A_{k+1}^{s}\}=\emptyset\) and \(\{i\notin A_{k}^{s}\}=\emptyset\). We have that \[\overline{\Delta}^{s}(b,a,b^{\prime})=\underline{\Delta}^{s}(b,a,b^{\prime})=0. \tag{81}\] This concludes the proof. ### Bounds time complexity analysis We turn to analyze the time complexity of our method using the chosen bounds (56) and (57). We assume the significant bottleneck is querying the motion model \(\mathbb{P}_{T}(x^{\prime}|x,a)\) and the observation model \(\mathbb{P}_{Z}(z|x)\). Assume the belief is approximated by a set of \(n_{x}\) weighted particles, \[b=\{x^{i},w^{i}\}_{i=1}^{n_{x}}. \tag{82}\] Consider the Boers et al. (2010) differential entropy approximation for the belief at time \(k+1\), \[\hat{\mathcal{H}}(b_{k},a_{k},z_{k+1},b_{k+1})\triangleq\underbrace{\log\left[\sum_{i}\mathbb{P}_{Z}(z_{k+1}|x_{k+1}^{i})w_{k}^{i}\right]}_{(a)}+ \tag{83}\] \[\underbrace{\sum_{i}w_{k+1}^{i}\cdot\log\left[\mathbb{P}_{Z}(z_{k+1}|x_{k+1}^{i})\sum_{j}\mathbb{P}_{T}(x_{k+1}^{i}|x_{k}^{j},a_{k})w_{k}^{j}\right]}_{(b)} \tag{84}\] Denote the time to query the observation and motion models a single time as \(t_{obs}\) and \(t_{mot}\), respectively. It is clear from (82), (83) (term (a)) and (84) (term (b)) that \[\forall b\text{ as in (82)},\quad\Theta(\hat{\mathcal{H}}(b))=\Theta(n_{x}\cdot t_{obs}+n_{x}^{2}\cdot t_{mot}). \tag{85}\] Since we share calculations between the bounds, the bounds' time complexity, for some level of simplification \(s\), is \[\Theta(\ell^{s}+u^{s})=\Theta(n_{x}\cdot t_{obs}+n_{x}^{s}\cdot n_{x}\cdot t_{mot}), \tag{86}\] where \(n_{x}^{s}\) is the size of the particle subset that is currently used for the bound calculations, e.g., \(n_{x}^{s}=|A^{s}|\) (\(A^{s}\) is as in (56) and (57)), and \(\ell^{s},u^{s}\) denote the immediate lower and upper bounds using simplification level \(s\). Further, we recall that the simplification levels are discrete, finite, and satisfy \[s\in\{1,2,\ldots,n_{\max}\},\ \ \ell^{s=n_{\max}}=-\hat{\mathcal{H}}=u^{s=n_{\max}}. \tag{87}\] Now, assume we wish to tighten \(\ell^{s},u^{s}\) and move from simplification level \(s\) to \(s+1\).
Since the bounds are updated incrementally (as introduced by Sztyglic and Indelman (2022)), when moving from simplification level \(s\) to \(s+1\) the only additional data we are missing are the new values of the observation and motion models for the newly added particles. Thus, we get that the time complexity of moving from one simplification level to another is \[\Theta(\ell^{s}+u^{s}\rightarrow\ell^{s+1}+u^{s+1})=\Theta((n_{x}^{s+1}-n_{x}^{s})\cdot n_{x}\cdot t_{mot}), \tag{88}\] where \(\Theta(\ell^{s}+u^{s}\rightarrow\ell^{s+1}+u^{s+1})\) denotes the time complexity of updating the bounds from one simplification level to the following one. Note that the first term from (86), \(n_{x}\cdot t_{obs}\), is not present in (88). This term has nothing to do with the simplification level \(s\), and it is calculated linearly over all \(n_{x}\) particles. Thus, it is calculated once at the beginning (initial/lowest simplification level). We can now deduce, using (86) and (88), that \[\Theta(\ell^{s+1}+u^{s+1})=\Theta(\ell^{s}+u^{s})+\Theta(\ell^{s}+u^{s}\rightarrow\ell^{s+1}+u^{s+1}). \tag{89}\] Finally, using (85), (86), (87), (88), and (89), we come to the conclusion that if, at the end of a planning session, a node \(b\)'s simplification level was \(1\leq s\leq n_{\max}\), then the time complexity saved for that node is \[\Theta((n_{x}-n_{x}^{s})\cdot n_{x}\cdot t_{mot}). \tag{90}\] This makes perfect sense: had we resimplified all the way to the maximal level, we would have \(s=n_{\max}\Rightarrow n_{x}^{s=n_{\max}}=n_{x}\), and by substituting \(n_{x}^{s}=n_{x}\) in (90) we would have saved no time at all. To conclude, the total speedup of the algorithm depends on how many belief nodes' bounds were not resimplified to the maximal level. The more nodes with lower simplification levels we have at the end of a planning session, the more speedup we get, according to (90).
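To make the bookkeeping in (85)-(90) concrete, the following is a minimal sketch of the incremental computation of term (b) across simplification levels; the Gaussian motion and observation models, the particle-promotion order, and all numerical values are illustrative assumptions, not the implementation evaluated above.

```python
import numpy as np

# Minimal sketch of Eqs. (86)-(90): promoting the simplification level from s
# to s+1 only queries the motion model for the newly added particles, i.e.
# (n_x^{s+1} - n_x^s) * n_x extra queries.
rng = np.random.default_rng(0)
n_x = 200
xs = rng.normal(size=n_x)                        # particles x_k^i
ws = np.full(n_x, 1.0 / n_x)                     # weights w_k^i
xs_next = xs + 1.0 + 0.1 * rng.normal(size=n_x)  # particles x_{k+1}^i
ws_next = np.full(n_x, 1.0 / n_x)                # weights w_{k+1}^i
z = 1.0                                          # observation z_{k+1}

def p_obs(z, x):   # observation model P_Z(z | x); illustrative Gaussian
    return np.exp(-0.5 * (z - x) ** 2) / np.sqrt(2.0 * np.pi)

def p_mot(xp, x):  # motion model P_T(x' | x, a) for a fixed action; illustrative
    return np.exp(-0.5 * (xp - (x + 1.0)) ** 2 / 0.01) / np.sqrt(0.02 * np.pi)

# Term (a) of Eq. (83): n_x observation queries, done once at the lowest level.
obs_vals = p_obs(z, xs_next)

# Term (b) of Eq. (84), accumulated over a growing subset A^s.
partial = np.zeros(n_x)      # running sum_{j in A^s} P_T(x_{k+1}^i | x_k^j) w_k^j
order = np.argsort(-ws)      # promotion order of particles (a heuristic choice)
for s, batch in enumerate(np.array_split(order, 4), start=1):
    for j in batch:          # |batch| * n_x motion-model queries, cf. Eq. (88)
        partial += p_mot(xs_next, xs[j]) * ws[j]
    term_b = np.sum(ws_next * np.log(np.maximum(obs_vals * partial, 1e-300)))
    print(f"simplification level {s}: term (b) = {term_b:.4f}")
```

Stopping the loop early corresponds exactly to the saving of Eq. (90): the remaining \((n_{x}-n_{x}^{s})\cdot n_{x}\) motion-model queries are never issued.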
2302.00276
Coherence Transfer and Destructive Interference in Two-Dimensional Coherence Maps
Coherence maps (CMs) in multidimensional spectroscopy report total interference of all quantum coherent pathways. Detailed understanding of how this interference manifests spectroscopically is vital for deciphering mechanistic origins of impulsively generated wavepackets, but currently lacking. Here we explain the origin of recently reported diagonal node-like features in CMs of bacteriochlorophyll monomers and photosynthetic reaction centers (RCs), where the apparent resemblance in the two disparate systems was reportedly perplexing. We show that both spectroscopic signatures have distinct physical origins. Node-like lineshapes in monomers arise from unique phase twists caused by destructive interference between ground and excited state vibrational coherences. In contrast, nodal lines in RCs are explained by coherence transfer of vibrational wavepackets which do not participate in the ultrafast energy transfer and their destructive interference with ground state pathways. Our results resolve recent spectroscopic observations and illustrate new mechanistic insights gained from understanding interference effects in multidimensional spectroscopy.
Amitav Sahu, Vivek Tiwari
2023-02-01T06:47:44Z
http://arxiv.org/abs/2302.00276v1
# Coherence Transfer and Destructive Interference in Two-Dimensional Coherence Maps ###### Abstract Coherence maps (CMs) in multidimensional spectroscopy report total interference of all quantum coherent pathways. Detailed understanding of how this interference manifests spectroscopically is vital for deciphering mechanistic origins of impulsively generated wavepackets, but currently lacking. Here we explain the origin of recently reported diagonal node-like features in CMs of _bacteriochlorophyll_ monomers and photosynthetic reaction centers (RCs), where the apparent resemblance in the two disparate systems was reportedly perplexing. We show that both spectroscopic signatures have distinct physical origins. Node-like lineshapes in monomers arise from unique phase twists caused by destructive interference between ground and excited state vibrational coherences. In contrast, nodal lines in RCs are explained by coherence transfer of vibrational wavepackets which do not participate in the ultrafast energy transfer and their destructive interference with ground state pathways. Our results resolve recent spectroscopic observations and illustrate new mechanistic insights gained from understanding interference effects in multidimensional spectroscopy. Two-dimensional electronic spectroscopy [1] (2DES) resolves ultrafast dynamics ensuing from femtosecond excitation as 2D contour map snapshots along initially correlated excitation and detection frequency axes, evolving along the pump-probe waiting time \(T\). Analysis of coherent signal contributions, often accompanying impulsive excitations, along the corresponding frequency axis \(\omega_{T}\) leads to coherence maps (CMs). At a given \(\omega_{T}\), CMs report the total interference of all quantum coherent Feynman pathways which may arise from distinct origins, for instance, from purely vibrational versus mixed vibrational-electronic wavepackets, or from wavepackets on ground versus excited electronic states. CM analysis of peak positions [2] can disentangle overlapping signal contributions with varying degrees of success, and despite spectral decongestion along three dimensions, ambiguous spectroscopic signatures can still arise. For example, recent 2DES experiments on photosynthetic reaction centers (RCs) have reported [3; 4] diagonal nodes in CMs for all reported intramolecular vibrational frequencies. However, similar diagonal node-like interference effects were later reported [5] in CMs of bacteriochlorophyll (_BChl a_) monomers. These similarities in spectroscopic signatures from multichromophoric RCs and _BChl a_ monomers in solution were reportedly perplexing. Here we show that these apparently similar spectroscopic signatures arise from distinct physical mechanisms, and illustrate how CM lineshapes serve as subtle reporters of the underlying physics of vibrational coherence transfer [6; 7] and destructive interference between signal pathways contributing to 2D spectra. To address the above questions, we start by deriving analytic expressions for CM lineshapes to show that uniquely different phase-twists, as opposed to those arising from imbalanced rephasing and non-rephasing signals, can arise due to interference between Feynman pathways for ground and excited vibrational wavepackets. We will show that the reported [5] diagonal node-like features in 2DES CMs of _BChl a_ monomers are manifestations of this interference.
We consider the simplest model of a three electronic level system with a Franck-Condon (FC) active intramolecular vibration identically coupled to all three electronic states. Note that, unlike the transition strengths derived from 2-electrons in a 2D box model for D\({}_{4h}\) symmetric monomers, only one-electron transition strengths are expected [8] in _BChl a_ due to the large electronic splitting between the Q\({}_{x}\) and Q\({}_{y}\) bands. The ground state bleach (GSB), excited state emission (ESE) and absorption (ESA) Feynman pathways for vibrational quantum coherences that contribute to the 2DES diagonal peak (DP) can be written as a product of an orientational factor, arising from four transition dipole factors interacting with the pump and probe electric fields, and a Green's function time-propagator \(\mathscr{G}(t)\) for each time interval between light-matter interactions. The GSB, ESE and ESA rephasing Feynman pathways for vibrational coherences on the 2D diagonal are given by Eqns. S1-S3 and represented as wave-mixing diagrams in Fig. 1B and Fig. S1. Only dominant \(0-1\) vibrational coherences and Bloch dephasing have been considered in Eqns. S1-S3 for the purpose of deriving analytic expressions. In the Bloch limit, \(\mathscr{G}_{mn}(t)=\theta(t)\exp[-\gamma_{mn}t]\exp[-i\omega_{mn}t]\), where \(\theta(t)\) is the Heaviside step function, \(\omega_{mn}=(E_{m}-E_{n})/\hbar\), and \(\gamma_{mn}\) is the dephasing rate. It is reasonable to expect ground and excited state vibrational coherences to dephase with different rates along \(T\), denoted by \(\gamma_{g}\) and \(\gamma_{e}\), respectively. Optical Bloch dephasing rates have been assumed to be equal for simplicity, and denoted by \(\gamma\). Note that the 2D lineshape is determined by the product of Green's functions, while the transition dipoles impart an overall strength and sign. All such individual lineshapes interfere to result in the total 2D signal strength and lineshape. Crucially, the sign of the coherence frequency along \(T\) is opposite for GSB versus ESE and ESA pathways (Fig. S1), and dictates the diagonal node-like interference feature as explained below. Inverse Fourier transformation of Eqns. S1-S3 along the first and third time intervals yields 2D lineshapes along the excitation and detection frequencies, \(-\omega_{\tau}\) and \(\omega_{t}\), for each non-zero value of the waiting time \(T\). For the case of GSB pathways, the resulting frequency domain complex 2D signal \(\tilde{S}_{3}\) is given by \[\tilde{S}_{3}(-\omega_{\tau},\omega_{t};T)=\mathscr{F}^{-1}[\mathscr{G}_{g_{0}e_{0}}(\tau)]\mathscr{G}_{g_{0}g_{1}}(T)\mathscr{F}^{-1}[\mathscr{G}_{e_{1}g_{1}}(t)]=[\alpha(-\omega_{\tau},\omega_{t})-i\beta(-\omega_{\tau},\omega_{t})]e^{-i\omega_{v}T}e^{-\gamma_{g}T}, \tag{1}\] where the vibrational coherence frequency along \(T\) has been written as the vibrational frequency \(\omega_{v}\).
\(a_{mn}(\omega)\) and \(d_{mn}(\omega)\) are absorptive and dispersive 2D Lorentzian lineshapes, respectively, with \[\alpha(-\omega_{\tau},\omega_{t}) = [a_{g_{0}e_{0}}(-\omega_{\tau})a_{e_{1}g_{1}}(\omega_{t})-d_{g_{0}e_{0}}(-\omega_{\tau})d_{e_{1}g_{1}}(\omega_{t})]\] \[\beta(-\omega_{\tau},\omega_{t}) = [a_{g_{0}e_{0}}(-\omega_{\tau})d_{e_{1}g_{1}}(\omega_{t})+a_{e_{1}g_{1}}(\omega_{t})d_{g_{0}e_{0}}(-\omega_{\tau})] \tag{2}\] In the same fashion, including the ESE and ESA contributions, the total complex 2D signal becomes \[\tilde{S}_{tot}(-\omega_{\tau},\omega_{t};T)=[\alpha(-\omega_{\tau},\omega_{t})-i\beta(-\omega_{\tau},\omega_{t})](e^{i\omega_{v}T}e^{-\gamma_{e}T}(1-\kappa)+e^{-i\omega_{v}T}e^{-\gamma_{g}T}), \tag{3}\] where an ESA strength factor \(\kappa\) has been included to account for any differences in transition strengths of doubly-excited states. See Section S1 for details of the derivation. To derive CM lineshapes from a real rephasing 2D spectrum \(S_{tot}^{R}(-\omega_{\tau},\omega_{t};T)\), we will consider special cases of Eqn. 3. Experimental absorptive 2D spectra of _BChl a_ monomers have reported [5] very weak off-diagonal contributions from excited state absorption (ESA), with negligible contribution on the 2D diagonal. Accordingly, we simplify Eqn. 3 to first consider only excited state emission (ESE) and ground state bleach (GSB) pathways, with \(\kappa\) set to zero. Assuming dephasing rates of excited and ground state vibrational coherences such that \(\gamma_{e}\approx\gamma_{g}=\gamma_{g,e}\), and Fourier transforming along \(T\), yields the corresponding CM lineshape contributing on the diagonal with frequency \(\omega_{T}=\omega_{v}\), \(CM_{tot}(-\omega_{\tau},\omega_{t};\omega_{T})=\text{abs}[\alpha(-\omega_{\tau},\omega_{t})a(\omega_{T}=\omega_{v})]\), where the absolute value is consistent with how 2DES CMs are typically reported. Inspection of the CM lineshape \(CM_{tot}(-\omega_{\tau},\omega_{t};\omega_{T})\) suggests a reduction in diagonal amplitude due to cancellations between the \(a(-\omega_{\tau})a(\omega_{t})\) and \(d(-\omega_{\tau})d(\omega_{t})\) terms in \(\alpha(-\omega_{\tau},\omega_{t})\) (Eqn. 2). Although the latter term vanishes at the peak center, it removes amplitude from off-center locations above and below the diagonal. This destructive interference between ESE and GSB coherence pathways on the DP is distinct from the phase-twist quantum beats arising [9] from imbalance between rephasing and non-rephasing pathways. The GSB/ESE destructive interference discussed here only arises at the DP and not at other 2D CM locations (see Fig. S2), because the opposite phases of vibrational quantum coherences only overlap on the diagonal (Eqn. 2). 2DES simulations with Bloch lineshapes confirm the narrowed diagonal lineshapes in CMs arising from destructive interference between GSB/ESE vibrational coherence pathways. These are shown in Section S1. We extend the above analytic reasoning to simulate the recently reported 2D CMs of _BChl a_ monomers at 77 K. The model parameters are described in Section S2. Briefly, the reported FC active intramolecular vibrations are modeled as underdamped Brownian oscillators with stabilization energies, damping and frequencies similar to those reported for _BChl a_ monomers [5, 10, 11]. Energetic disorder of 230 cm\({}^{-1}\) in the optical energy gap is also included in the model to approximately match the diagonal linewidth reported [5] for early \(T\) 2DES spectra of _BChl a_ monomers.
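As a quick numerical check of the lineshape algebra above, the following minimal sketch evaluates \(\alpha(-\omega_{\tau},\omega_{t})\) along the 2D diagonal; the Lorentzian parameters are illustrative assumptions (not the Table S2 values). It confirms both the narrowing relative to a purely absorptive profile and, for these Bloch lineshapes, the zero crossing of the diagonal cut at detuning \(\approx\pm\gamma\) produced by the \(d\cdot d\) cross term.

```python
import numpy as np

# Minimal sketch of the GSB/ESE interference lineshape of Eqns. (1)-(3);
# w0 and gamma below are illustrative assumptions.
w0, gamma = 12000.0, 100.0       # line center and optical dephasing (cm^-1)

def a(w):                         # absorptive Lorentzian factor a_{mn}(w)
    return gamma / (gamma**2 + (w - w0)**2)

def d(w):                         # dispersive Lorentzian factor d_{mn}(w)
    return (w - w0) / (gamma**2 + (w - w0)**2)

w = np.linspace(w0 - 500.0, w0 + 500.0, 401)
delta = w - w0

# Along the diagonal (-w_tau = w_t = w):
# alpha(w, w) = a(w)^2 - d(w)^2 = (gamma^2 - delta^2) / (gamma^2 + delta^2)^2,
# versus the purely absorptive reference a(w)^2.
cm_diag = a(w) ** 2 - d(w) ** 2
ref_diag = a(w) ** 2

def fwhm(y):
    return np.ptp(delta[y >= 0.5 * y.max()])

pos = delta > 0
print("FWHM with d*d cross term :", fwhm(np.abs(cm_diag)))
print("FWHM, absorptive profile :", fwhm(ref_diag))
print("zero crossing near gamma :", delta[pos][np.argmin(np.abs(cm_diag[pos]))])
```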
All the simulation parameters are summarized in Table S2. Figure 1A shows the \(T=50\) fs absorptive 2DES spectrum. A Frobenius spectrum of vibrational coherences corresponding to all intramolecular vibrations in the model is shown in Fig. S3. Fig. 1B shows the GSB and ESE wave-mixing diagrams which interfere on the 2D diagonal. The real rephasing CMs for two of the intramolecular vibrations are shown in the left column of Fig. 1C; the rest of the CMs are shown in Fig. S4.

Figure 1: (A) Real absorptive \(T=50\) fs 2D spectrum of a two electronic level system with five underdamped FC vibrations, modeled as Brownian oscillators, used to simulate coherence maps (CMs) for the dominant vibrational coherences recently reported [5] in the _Bacteriochlorophyll a_ monomer. The spectrum is calculated at 77 K. The model parameters are described in Table S2. Contours are drawn at 5%, and from 10% to 90% in 10% intervals, for both positive and negative contours. (B) Wave-mixing diagrams corresponding to 2DES signal contributions at the diagonal peak arising from ground (GSB) and excited state (ESE) vibrational coherences. (C) Real rephasing (left), \(-\omega_{T}\) (center) and \(+\omega_{T}\) (right) complex rephasing CMs for vibrational frequencies 339 cm\({}^{-1}\) and 554 cm\({}^{-1}\). CMs at other vibrational frequencies are shown in Fig. S4. The narrowing of the diagonal CM lineshape in the real rephasing CMs, due to the destructive interference between the overlapping GSB and ESE coherent pathways of panel B, is evident for all vibrational coherences in the model.

As expected, for all vibrations, the \(d_{g_{0}e_{0}}(-\omega_{\tau})d_{e_{1}g_{1}}(\omega_{t})\) term in \(CM_{tot}(-\omega_{\tau},\omega_{t};\omega_{T})\), which results from the interference of GSB and ESE vibrational coherence pathways, leads to narrow node-like features on the 2D diagonal. Compared to simulations with Bloch lineshapes (Fig. S2), these features are further accentuated by energetic disorder in the optical energy gap in the reported [5] 2D spectra at 80 K. It can be easily verified that when one starts from the complex rephasing 2D spectrum \(\tilde{S}_{tot}(-\omega_{\tau},\omega_{t};T)\) (Eqn. 3), the absolute value CM lineshape for the case of \(\omega_{T}=\pm\omega_{v}\) is instead given by \(\mathscr{R}(-\omega_{\tau},\omega_{t})=\sqrt{\alpha^{2}(-\omega_{\tau},\omega_{t})+\beta^{2}(-\omega_{\tau},\omega_{t})}\), which does not predict a diagonal narrowing but instead an approximately absorptive lineshape. It is therefore no surprise that when the CMs are resolved according to the quantum beat phase \(\pm\omega_{T}\) (Fig. 1C, middle and right panels), DP narrowing due to GSB/ESE destructive interference does not occur. Note that the above reasoning suggests that the diagonal node-like feature is not specific to _BChl a_. Recent real rephasing CMs for Oxazine 720 monomers [12] are consistent with this expectation. In deriving the \(CM_{tot}(-\omega_{\tau},\omega_{t};\omega_{T})\) lineshape, it is assumed that the dephasing timescales of excited and ground state vibrational coherences are comparable, that is, \(\gamma_{g}\sim\gamma_{e}\). However, ultrafast electronic relaxation channels can cause large anharmonicities on the excited state potentials, such that some excited state vibrational wavepackets may not survive electronic relaxation [13, 14].
For example, vibrational quantum beats from 'promoter' modes [15] that _tune_ the relative energy gaps and electronic couplings between excited state potentials do not survive relaxation through a conical intersection and dephase [16] within \(\sim\)100 fs. In contrast, the ground state beats survive for picoseconds. The above assumption will also be invalidated in systems exhibiting ultrafast singlet exciton fission, such as pentacene thin films, due to rapid internal conversion [17] of excited state population into correlated triplet states. In all such cases, Eqn. 3 predicts the absence of the diagonal node-like GSB/ESE interference feature in the CMs. Interestingly, the CM lineshape then also becomes a useful spectroscopic reporter of 'promoter' vibrational modes and excited state dynamics. For example, Policht et al. reported [5] no change in the diagonal node-like feature for _BChl a_ in penta- or hexa-coordinating solvents, suggesting no substantial effect of solvent coordination on the dephasing rates of ground and excited vibrational wavepackets. Recent broadband pump-probe measurements from Scholes and coworkers report that solvent tuning of electron transfer rates can dephase [18] vibrational modes parallel to the reaction coordinate up to 5x faster, whereas 'spectator' modes remain unaffected. Our analysis of GSB/ESE interference predicts that in a corresponding 2DES experiment, as opposed to 'spectator' modes, the diagonal CM lineshapes for 'promoter' modes will _not_ show the node-like diagonal lineshape. Having explained the node-like features reported for _BChl a_ monomers, we can now investigate similar spectroscopic signatures that were reported [3, 19] to accompany ultrafast energy transfer in bacterial RC proteins (BRCs). Recently, Zigmantas and co-workers reported [3] diagonal nodes in all the vibrational CMs from a chemically oxidized BRC undergoing sub-200 fs \(H\to B\) energy transfer. Interestingly, similar nodes are also reported by Ogilvie and co-workers [4, 19] in the context of \(B\to P\) energy transfer for mutant BRCs, even when charge separation is precluded [20] (\(D_{LL}\) mutant). The similarities in the diagonal nodes for the two cases are curious. Furthermore, in all the above cases, dominant ESA contributions on the upper diagonal 2D peak (\(DP_{U}\)) are also reported. Bukarte et al. have explained [21] these contributions using ESA-related "re-excitation" Feynman pathways with dispersive lineshapes caused by electrochromic shifts induced when charge separation is complete at long pump-probe waiting times (\(T>2\) ps). However, dispersive lineshapes are seen [20] as early as 250 fs, and even for the \(D_{LL}\) mutant. For example, see Fig. S2 and Fig. S6 of ref. [20]. Palecek et al. [3] have qualitatively argued for an excited to ground state coherence shift to explain the nodal feature. Recent experiments, supported by simulations, from Policht et al. [19] explain coherent ESA contributions in the upper 2D cross-peak region by incorporating a distinctly different coherence transfer mechanism between excited state vibronic eigenstates. Overall, the presence of nodal lines on the diagonal for all reported intramolecular vibrations, even for BRC mutants incapable of charge separation, and the similarity with node-like diagonal features in _BChl a_ monomers [5], has remained perplexing and begs further explanation. While destructive GSB/ESE interference (Fig.
1) already explains the diagonal node-like CM lineshapes in monomers, below we explain the distinct physical origin of the reported diagonal nodal lines. We consider an excitonic dimer model with two intramolecular FC vibrations, as a minimum model for the \(P-B\) or \(B-H\) exciton pairs studied earlier [3, 19]. The rapid energy transfer process is complete within \(\sim\)200 fs, and the reported CMs correspond to vibrational wavepackets which survive energy transfer. In order to incorporate electronic relaxation in wave-mixing pathways, we adopt the recently reported [22] approach of Engel and co-workers, which exploits symmetries between 2D diagonal and cross peaks (as they grow or decay due to electronic relaxation) to extract population transfer kinetics. In the context of coherence transfer accompanying ultrafast electronic relaxation, the multilevel Redfield simulations [6] of Jean and Fleming are quite instructive. Their results suggest that coherent vibrational motions orthogonal to the 'reaction coordinate' can undergo coherence transfer to the acceptor through dominant secular terms in the Redfield tensor, \(\mathbf{R}_{\alpha\beta,\gamma\delta}\), as long as the coherence frequency is maintained on the donor and acceptor excitons, that is, \(\omega_{\alpha\beta}=\omega_{\gamma\delta}\), respectively. This is so because orthogonal spectator motions maintain the donor-acceptor energy gap, while energy gap tuning motions lead to 'nesting' [14] of donor-acceptor electronic states and may not survive energy transfer. Witkowski and Moffitt [23], and later others [24, 25], have analyzed donor-acceptor energy transfer in terms of symmetric or correlated (\(\hat{q}_{+}\)) and anti-symmetric or anti-correlated (\(\hat{q}_{-}\)) relative motions of the intramolecular vibrational coordinates \(\hat{q}_{A,B}\) on the respective molecules. Using a surface-crossing description of electronically coherent donor-acceptor energy transfer, Cina and Fleming [7] have elucidated the interplay of vibrational coherence transfer and in-phase or symmetric vibrational motions between the donor and acceptor. Correlated vibrations play the role of spectator motions in an excitonic dimer. Since they maintain the donor-acceptor energy gap, coherence transfer is expected to be dominant along correlated modes. Fig. 2 connects wave-mixing diagrams incorporating ultrafast \(D\to A\) population relaxation and vibrational coherence transfer to the corresponding contributions on the 2D spectrum. It is well understood [22] that the 2D locations of ESE and ESA population contributions on \(DP_{U}\) and the lower cross-peak (\(CP_{L}\)), respectively, will be interchanged by population transfer. This is illustrated in the wave-mixing pathways in the first column of Figs. 2A-C. However, it is vital to recognize the corresponding evolution of coherence pathways after vibrational coherence transfer. This is shown in the wave-mixing pathways in the middle column, with corresponding 2D CM contributions in the right column. Interestingly, due to vibrational coherence transfer being dominant along spectator modes such as \(\hat{q}_{+}\) in an excitonic dimer, a concomitant shift in the positions of the coherence peaks is also expected.

Figure 2: Wave-mixing pathways with electronic relaxation through population and coherence transfer. (A) ESE wave-mixing pathway corresponding to (left) population relaxation and (middle) vibrational coherence transfer from the donor \(\beta\) to acceptor \(\alpha\) excitons.
(right) Expected 2D CM locations of 0-1 vibrational coherences arising from real rephasing ESE pathways. Red and blue squares denote the quantum beat phases \(+\omega_{T}\) and \(-\omega_{T}\), respectively. Subscripts 0, 1 denote vibrational levels separated by frequency \(\omega_{v}\). The lengths of the first and last arrows in the wave-mixing pathways determine the 2D location along the excitation and detection axes, respectively. The arrow marks the chair shift from \(DP_{U}\) to \(CP_{L}\) due to vibrational coherence transfer. The wave-mixing diagram for the peak highlighted by the dashed square is shown in the middle figure; all other pathways are shown in Fig. S6. (B) ESA wave-mixing pathways and CM contributions. The CM chair shifts opposite to ESE, from \(CP_{L}\) to \(DP_{U}\). (C) GSB wave-mixing pathways and CM contributions. From the 2D CMs, it is evident that after vibrational coherence transfer, CM contributions on \(DP_{U}\) will be a result of interfering GSB and ESA pathways.

0-1 vibrational coherences contribute as a chair pattern [26] in a 2D CM, such that the entire chair pattern of ESE/ESA coherence peaks is interchanged as well. Only the wave-mixing pathways for one of the coherence peaks are shown, marked as a dashed square in the 2D CM. All other contributions are shown in Fig. S6. Recalling the monomer CM lineshapes in Eqns. 2-3, vibrational coherence transfer in an excitonic dimer implies that ESE vibrational coherence pathways on \(DP_{U}\) are replaced by ESA, which interfere with unshifted GSB pathways to result in \(\beta(-\omega_{\tau},\omega_{t})\) nodal lineshapes, with the extent of destructive interference on the 2D diagonal dependent on the ESA strength \(\kappa\). In order to confirm the above expectations, we simulate 2DES spectra which include exact non-adiabatic couplings through numerically diagonalized eigenvectors, population relaxation and coherence transfer through phenomenological relaxation incorporated in sum-over-states response functions [27], optical decoherence through Brownian oscillators, and ensemble dephasing [13] of vibronic coherences through energetic averaging. The vibrational frequencies and weak FC displacements are based on intramolecular FC active vibrations of _BChl a_. In the study of Palecek et al. [3], the \(B-H\) exciton energy gap of 650 cm\({}^{-1}\) was in vibronic resonance [28] with a prominent intramolecular vibrational frequency of _BChl a_. Enhanced [28] GSB vibrational coherences arising from this resonance were previously reported [29] by Ryu et al. To study the effect of vibronic resonance on the expected destructive interference, the diabatic exciton energy gap is chosen to be resonant with a 650 cm\({}^{-1}\) intramolecular vibration, while the other vibrational frequency of 350 cm\({}^{-1}\) does not participate in resonant vibronic mixing. Choosing a resonant and a non-resonant vibrational mode provides a minimum model to explain the diagonal nodal lines reported [3; 19] for _all_ observed intramolecular vibrations. Note that we choose complete two-particle basis sets in our calculations, such that vibronic enhancement and multiple wave-mixing pathways [30] due to resonance are accurately captured in the simulations. Describing the multiple wave-mixing pathways arising at resonance is necessary to assess whether they appreciably perturb the destructive interference on the 2D diagonal.
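The expected \(\kappa\)-dependence can be previewed with a minimal sketch before the full simulations. Assuming Bloch lineshapes, equal dephasing rates and a single vibrational frequency (all parameter values illustrative), the real rephasing \(DP_{U}\) trace after coherence transfer carries the prefactor \(\alpha(1-\kappa)\cos(\omega_{v}T)-\beta(1+\kappa)\sin(\omega_{v}T)\), so its Fourier amplitude at \(\omega_{T}=\omega_{v}\) tends to the purely dispersive \(\beta\) lineshape, and hence a diagonal nodal line, as \(\kappa\to 1\).

```python
import numpy as np

# Minimal sketch (not the full simulation of Fig. 3) of GSB-ESA interference
# on DP_U after coherence transfer; w0, gamma and kappa are illustrative.
w0, gamma = 12000.0, 100.0      # DP_U center and linewidth (cm^-1)

def a(w):
    return gamma / (gamma**2 + (w - w0)**2)

def d(w):
    return (w - w0) / (gamma**2 + (w - w0)**2)

w = np.linspace(w0 - 400.0, w0 + 400.0, 321)
WT, Wt = np.meshgrid(w, w, indexing="ij")
alpha = a(WT) * a(Wt) - d(WT) * d(Wt)   # Eqn. (2)
beta = a(WT) * d(Wt) + a(Wt) * d(WT)

def cm_dpu(kappa):
    # Fourier amplitude at w_T = w_v of the real rephasing DP_U trace
    # alpha*(1-kappa)*cos(wv*T) - beta*(1+kappa)*sin(wv*T):
    return np.hypot(alpha * (1.0 - kappa), beta * (1.0 + kappa))

mid = len(w) // 2
for kappa in (0.0, 0.5, 1.0):
    cm = cm_dpu(kappa)
    print(f"kappa = {kappa}: diagonal-center amplitude / map maximum = "
          f"{cm[mid, mid] / cm.max():.3f}")   # -> 0 as kappa -> 1 (nodal line)
```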
The anti-correlated energetic disorder of 68 cm\({}^{-1}\) is chosen such that ensemble dephasing [13] of purely electronic coherences is complete within \(\sim\)200 fs, consistent with experiments [3]. Following previous approaches [19; 31] treating vibrational coherence transfer phenomenologically through Feynman pathways, the model here does not consider the details of the ultrafast energy transfer mechanism. Instead, a population transfer timescale of \(\sim\)50 fs [20; 32] is included phenomenologically, such that the energy transfer is nearly complete [3] by \(\sim\)200 fs. \(\mathbf{R}_{\alpha\beta,\gamma\delta}\) coherence transfer with \(\omega_{\alpha\beta}=\omega_{\gamma\delta}\) is also included phenomenologically in the analytic response functions (Section S3). The simulations are carried out in the site basis with intramolecular vibrational coordinates \(\hat{q}_{A,B}\), which allow for both correlated \(\hat{q}_{+}\) and anti-correlated \(\hat{q}_{-}\) vibrational motions on the donor and acceptor excitons. To understand whether vibronic resonance affects the diagonal nodal line, calculations along only the spectator mode \(\hat{q}_{+}\), where no resonant vibronic mixing is possible [33], are also analyzed. Vibronic coherences are not expected to survive anharmonic non-adiabatic couplings [34]. Experimentally, dephasing rates for such wavepackets are not known in the case of RCs. For a photosynthetic antenna, Thyrhaug et al. have reported [2] \(\sim\)4-5x faster dephasing of vibronic versus purely vibrational wavepackets. In our simulations, we do not include any excited state dephasing for vibronic wavepackets. This allows us to infer the effect of surviving vibronic coherences, if any, on the expected nodal line. Because symmetric vibrations in an excitonic dimer maintain a fixed donor-acceptor energy gap, they do not participate [28] in vibronic mixing, such that, in the simulations, vibronic eigenvectors with \(\hat{q}_{+}\) excitations undergo vibrational coherence transfer. In contrast, vibronic eigenvectors with only \(\hat{q}_{-}\) excitations strongly mix excitons near a vibronic resonance, resulting in energetic splittings (for example, see Fig. 1 of ref. [35]). Consequently, the corresponding vibronic wavepackets in the simulations do not undergo coherence transfer. A multiplicative factor \(\kappa\) is included in the ESA response as a parameter to account for the collective ESA transition strength. Fig. S2 and Fig. S6 of ref. [20] suggest that ESA contributions on the main diagonal peak are already significant by \(T=250\) fs. Similar to the experiments, the CMs are calculated from \(T=200\) fs onwards, after dephasing of electronic coherence and energy transfer are approximately complete. All the model parameters are described in Section S3. Fig. 3 shows the total absorptive 2D spectra at \(T=50\), \(100\) and \(200\) fs using the above model. As expected from the analysis in Fig. 2, population relaxation between excitons causes the \(DP_{U}\) and \(CP_{L}\) ESE population signals to decay and grow, respectively, with an opposite trend for the ESA signals. The total 2D signal at \(DP_{U}\) is diminished due to the cancellation of positive GSB and negative ESA signals with increasing waiting time \(T\), consistent with experiments which report [20] a dominant ESA signal on \(DP_{U}\). The corresponding 2D CMs are shown in Fig. 4 for the resonant 650 cm\({}^{-1}\) and the non-resonant 350 cm\({}^{-1}\) vibration.
The CMs are zoomed in on \(DP_{U}\) to highlight the expected destructive interference on the 2D diagonal due to vibrational coherence transfer along the spectator modes. Zoomed-out CMs are shown in Figs. S7-S8. Fig. 4A compares the \(DP_{U}\) diagonal node for a resonant versus a non-resonant vibration. The calculation assumes that excited state vibronic wavepackets along the tuning mode \(\hat{q}_{-}\) survive energy transfer. Both vibrations show a nodal line on \(DP_{U}\) due to GSB-ESA destructive interference. However, the destructive interference for the resonant vibration is strongly perturbed by unshifted ESE vibronic coherence contributions, due to the lack of coherence transfer along \(\hat{q}_{-}\). This is accentuated further by the multiple wave-mixing pathways arising at the vibronic resonance.

Figure 3: Real absorptive 2D spectra for waiting times \(T=50\), \(100\) and \(200\) fs with an ESA signal strength of \(\kappa=2\) and a temperature of \(77\) K. The simulation details and model parameters are described in Section S3.

Coherence transfer along spectator modes is expected to accompany ultrafast electronic energy transfer. Our simulations of the excitonic dimer model and the Feynman pathways account for this phenomenologically, without considering the details of energy transfer or the multiple 1- and 2-quantum electronic manifolds in the RCs.

Figure 4: (A) Zoomed \(DP_{U}\) real rephasing 2D CM for the non-resonant (350 cm\({}^{-1}\)) and resonant (650 cm\({}^{-1}\)) vibration. The calculation corresponds to the case where both the \(\hat{q}_{-}\) and \(\hat{q}_{+}\) tuning and spectator modes, respectively, are allowed. (B) 2D CM calculation with only the spectator mode \(\hat{q}_{+}\) (top), and resolved into \(-\omega_{T}\) (middle) and \(+\omega_{T}\) (bottom) components. The full CMs, with the red square around \(DP_{U}\), are shown in Figs. S7-S8. Contours are drawn at 5%, and from 10% to 90% in 10% intervals. (C) Surface plot of the ESA signal strength \(\kappa\) required for complete destructive interference between GSB-ESA coherence pathways, as a function of the relative excitonic oscillator strength (\(m\)) and the angle between the excitonic transition dipoles (\(\theta_{\alpha\beta}\)). The detailed strength calculation is shown in Section S4. The black dot shows the required \(\kappa\) for perpendicular and equal magnitude transition dipoles.

This phenomenological treatment of coherence transfer between vibronic eigenstates is consistent with that recently proposed by Policht et al. [19] to explain coherence contributions in the upper cross-peak region arising from ESA pathways. In our case, by accounting for the shifting population and coherence contributions in the 2D spectra, we have shown that the resulting GSB-ESA interference between coherent pathways contributing on the 2D diagonal produces persistent signatures, similar to those reported in BRC 2DES studies [20] for all observed intramolecular vibrations. ## 4 Conclusions Our results provide new insights on the connections between vibrational coherence transfer, interference between coherent Feynman pathways, and the resulting 2D CM lineshapes.
By establishing these connections, we resolve the distinct physical origins that lead to the reportedly [3, 4, 5] similar nodal diagonal lineshapes in the 2D CMs of two disparate systems, the _BChl a_ monomer and multichromophoric BRCs. We show that while the former arises from unique phase-twists from interfering GSB-ESE coherence pathways, the latter is fundamentally different and arises from previously overlooked vibrational coherence transfer along spectator modes accompanying ultrafast electronic relaxation. By incorporating relaxation pathways in Feynman diagrams, we show that, along with population transfer, vibrational coherence transfer leads to a concomitant shift of coherence contributions on the 2D CM spectrum. Such coherence shifts are expected to be dominant along spectator modes. The resulting destructive interference between GSB-ESA signal pathways is consistent with the reported nodal lines on the 2D diagonal. From experimental observations of dominant ESA signals, and estimations of dipole strengths and directions in BRCs, our analysis suggests that such nodal lines may be readily expected in BRCs. Our results resolve recent spectroscopic observations, highlight the rich information content of a 2D CM spectrum, and establish its usefulness as a subtle spectroscopic reporter of underlying electronic relaxation mechanisms. ## 5 Acknowledgments AS acknowledges a research fellowship from the Indian Institute of Science (IISc). This project is supported by the Science and Engineering Research Board, India, under grant sanction number CRG/2019/003691, and the Department of Biotechnology, India, under grant sanction number BT/PR38464/BRB/10/1893/2020. ## 6 Supporting Information Available Bloch model analytical calculations, monomer 2D simulation parameters, 2D simulations with population and coherence transfer, simulations for the ESA strength dependence of the nodal line, multiple sub-peaks on the upper 2D diagonal at vibronic resonance, analytical expression for the nodal line with generalized transition dipole directions and magnitudes, 2D and CM calculations with a blue-shifted ESA signal, Figures S1-S11.
2308.15146
Square-free values of polynomials on average
The number of square-free integers in $x$ consecutive values of any polynomial $f$ is conjectured to be $c_fx$, where the constant $c_f$ depends only on the polynomial $f$. This has been proven for degrees less or equal to 3. Granville was able to show conditionally on the $abc$-conjecture that this conjecture is true for polynomials of arbitrarily large degrees. In 2013 Shparlinski proved that this conjecture holds on average over all polynomials of a fixed naive height, which was improved by Browning and Shparlinski in 2023. In this paper, we improve the dependence between $x$ and the height of the polynomial. We achieve this via adapting a method introduced in a 2022 paper by Browning, Sofos, and Ter\"av\"ainen on the Bateman-Horn conjecture, the polynomial Chowla conjecture, and the Hasse principle on average.
Pascal Jelinek
2023-08-29T09:27:20Z
http://arxiv.org/abs/2308.15146v1
# Square-free values of polynomials on average ###### Abstract. The number of square-free integers in \(x\) consecutive values of any polynomial \(f\) is conjectured to be \(c_{f}x\), where the constant \(c_{f}\) depends only on the polynomial \(f\). This has been proven for degree less than or equal to \(3\). Granville was able to show, conditionally on the _abc_-conjecture, that this conjecture is true for polynomials of arbitrarily large degrees. In 2013 Shparlinski proved that this conjecture holds on average over all polynomials of a fixed naive height, which was improved by Browning and Shparlinski in 2023. In this paper, we improve the dependence between \(x\) and the height of the polynomial. We achieve this via adapting a method introduced in a 2022 paper by Browning, Sofos and Teräväinen on the Bateman-Horn conjecture, the polynomial Chowla conjecture, and the Hasse principle on average. 2010 Mathematics Subject Classification: 11N32 ###### Contents * 1 Introduction * 2 Main tool * 3 Equidistribution of square-free numbers in short intervals * 4 An analogue for \(\mu^{2}(\cdot)\) * 5 Proof of Theorem 1.4 ## 1. Introduction The number of square-free values of a given polynomial has been studied for more than a century, with the first results dating back at least to Landau [17] in 1911. He showed that infinitely many square-free values are attained by any linear univariate polynomial with square-free content. More generally, we have the following heuristic: Given a polynomial \(f\), let \(c_{p}\) be the number of solutions of \(f\) when viewed over \(\mathbb{Z}/p^{2}\mathbb{Z}\), i.e., \(c_{p}=\#\{(a_{1},\ldots,a_{n})\in(\mathbb{Z}\cap[1,p^{2}])^{n}:f(a_{1},\ldots,a_{n})\equiv 0\bmod p^{2}\}\). If we now assume that the primes are independent of each other, we get the following conjecture: **Conjecture 1.1**.: Let \(f\in\mathbb{Z}[x_{1},\ldots,x_{n}]\). We define \(S:=\{(a_{1},\ldots,a_{n})\in\mathbb{Z}^{n}:f(a_{1},\ldots,a_{n})\) is square-free\(\}\). Then for any \(\operatorname{Box}=\operatorname{Box}(N_{1},\ldots,N_{n})=\{(a_{1},\ldots,a_{n})\in\mathbb{Z}^{n}:|a_{i}|<N_{i}\text{ for all }i\}\) we have that \[\lim_{N_{1},\ldots,N_{n}\to\infty}\frac{\#(S\cap\operatorname{Box})}{\#\operatorname{Box}}=\prod_{p}\left(1-\frac{c_{p}}{p^{2n}}\right). \tag{1.1}\] A consequence of this conjecture is that the polynomial \(f\) attains infinitely many square-free values if it is not the square of another polynomial \(g\) and if it has non-zero values mod \(p^{2}\) for each prime number \(p\). **Remark 1.2**.: In the case where \(f\) is a univariate polynomial, we will often use the term \(\rho_{f}(p^{2})\) instead of \(c_{p}\). In 2002, Poonen [19] showed that this problem regarding a general polynomial \(f\in\mathbb{Z}[x_{1},\ldots,x_{n}]\) can be reduced to the analogous problem concerning univariate polynomials. He achieved this via considering \(f\) as a polynomial in \(\mathbb{Z}[x_{1},\ldots,x_{n-1}][x_{n}]\). He then combined this reduction with prior work by Granville [13], who showed that Conjecture 1.1 holds for univariate polynomials under the \(abc\)-conjecture. Poonen emphasises that this method only provides a proof of a slightly weaker form of the conjecture (he needed to impose some relations between the \(N_{i}\)'s), since the proof of Granville under the \(abc\)-conjecture is not homogeneous in the coefficients. However, the relations between the \(N_{i}\)'s could be dropped with a new proof of the univariate case. (cf.
remark after Lemma 6.2 in [19]) If we do not assume the \(abc\)-conjecture, then the conjecture is wide open, both regarding multivariate polynomials and regarding univariate polynomials. In both cases, it is solved only for specific families of polynomials, of which we now want to highlight a few. ### Results regarding multivariate polynomials If the degree is low compared to the number of variables, one can use the circle method to show that the conjectured asymptotic behaviour is true. Remarkably, work by Destagnol and Sofos [11] shows that in this case, a third of the number of variables compared to the Birch setting already suffices. Explicitly, this means that if \(n-\sigma_{f}>\tfrac{1}{3}(d-1)2^{d}\), the conjecture holds, where \(n\) is the number of variables, \(d\) is the degree of the polynomial \(f\) and \(\sigma_{f}\) is the degree of the singular locus of \(f=0\). Another family of polynomials for which it has been shown that the heuristic holds is due to work of Bhargava [2]. If \(f\) is invariant under the action of a suitably large algebraic group, then the heuristic correctly predicts the asymptotic behaviour of square-free numbers. In particular, the polynomials related to the discriminants of degree \(3\), degree \(4\), and degree \(5\) extensions of \(\mathbb{Q}\) satisfy this criterion. These results and their methods of proof were also used to prove lower bounds on the average size of Selmer groups of families of elliptic curves and also to prove that the Hasse principle fails for a positive proportion of plane cubic curves over \(\mathbb{Q}\). (see for example [1], [3], [4], [5], [6] and [7]) ### Results regarding univariate polynomials In the case of a univariate polynomial \(f\in\mathbb{Z}[t]\), we will state the conjecture in the following way: **Conjecture 1.3**.: Let \(f\in\mathbb{Z}[t]\). Then: \[\sum_{1\leqslant n\leqslant x}\mu^{2}(f(n))\sim\prod_{p}\left(1-\frac{\rho_{f}(p^{2})}{p^{2}}\right)x, \tag{1.2}\] where \(\rho_{f}\left(m\right)\) is defined to be the number of solutions to \(f\left(n\right)\equiv 0\bmod m\), where \(1\leqslant n\leqslant m\). There have been many different results related to this conjecture, and we want to highlight a few of them: Ricci [22] was able to show that the asymptotic formula holds for polynomials of degree \(\leqslant 2\). If \(f\) has degree \(1\), then an explicit error term and an explicit dependence between \(x\) and the height of the polynomial were first stated by Prachar [20]. The error term has subsequently been improved by Hooley [15], and the dependence between \(x\) and the height of the polynomial has been improved by Nunes [18]. If \(f\) has degree \(3\), the asymptotic formula was first proven by Erdős [12]. Later Hooley [16] provided the first explicit log-saving error term, which Reuss [21] improved to a power-saving error term in 2013. Erdős also conjectured that the inputs can be restricted to the prime numbers, in which case the conjecture has been shown by Helfgott [14] to hold up to degree \(3\). For univariate polynomials of degree \(>3\), it appears that no polynomial is known to attain infinitely many square-free values. ### Averaging and announcement of new results In the last few years, there has been much progress on the average behaviour of arithmetic functions over polynomial values, among others behaviour related to the von Mangoldt function \(\Lambda\), Liouville's function \(\lambda\) and the \(r\)-function.
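As an aside, Conjecture 1.3 is easy to probe numerically. The following minimal sketch (an illustration of ours, not part of the cited results; the cubic \(n^{3}+7\), the prime cutoff and the range \(x\) are arbitrary choices) compares the empirical density of square-free values with the truncated Euler product.

```python
# Minimal sketch comparing the empirical density of square-free values of f
# with the truncated product in Conjecture 1.3.
from sympy import factorint, primerange

def f(n):
    return n**3 + 7          # an illustrative cubic; any f in Z[t] works

def rho(m):
    # rho_f(m): number of n in [1, m] with f(n) = 0 mod m
    return sum(1 for n in range(1, m + 1) if f(n) % m == 0)

def c_f(prime_bound=100):
    # truncated Euler product prod_p (1 - rho_f(p^2) / p^2)
    prod = 1.0
    for p in primerange(2, prime_bound):
        prod *= 1.0 - rho(p * p) / (p * p)
    return prod

def is_squarefree(n):
    n = abs(n)
    return n != 0 and all(e == 1 for e in factorint(n).values())

x = 2000
density = sum(is_squarefree(f(n)) for n in range(1, x + 1)) / x
print(f"empirical density up to x={x}: {density:.4f}")
print(f"truncated c_f:                {c_f():.4f}")
```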
Regarding square-free numbers, we want to study the following problem: Consider a polynomial \(g=c_{0}t^{d}+\cdots+c_{d}\in\mathbb{Z}[t]\) of height at most \(H\) and degree at most \(d\). Let \(S\subset\{0,\ldots,d\}\). We associate each polynomial to its coefficient vector and define \(\mathscr{F}_{g,S}(H)\) to be, roughly, the set of coefficient vectors whose entries agree with or are close to the entries of \(g\), depending on whether the index of the coefficient is in \(S\) or not. More precisely, \(\mathscr{F}_{g,S}(H)\) is defined to be \[\left\{(a_{0},\ldots,a_{d})\in\mathbb{Z}^{d+1}:\begin{array}{ll}a_{i}=c_{i}&\text{if }i\notin S\\ |a_{i}-c_{i}|\leqslant H&\text{if }i\in S\\ \gcd(a_{0},\ldots,a_{d})=1\end{array}\right\}.\] We want to show that as \(H\) tends to infinity, almost all \(f\in\mathscr{F}_{g,S}(H)\) satisfy the conjecture. This is achieved via proving that \[\frac{1}{\#\mathscr{F}_{g,S}(H)}\sum_{f\in\mathscr{F}_{g,S}(H)}\left|\sum_{1\leqslant n\leqslant x}\mu^{2}(f(n))-c_{f}x\right|^{k}\ll x^{1-\delta} \tag{1.3}\] for some exponent \(k\) and \(c_{f}=\prod\limits_{p}\left(1-\frac{\rho_{f}\left(p^{2}\right)}{p^{2}}\right)\). All cited results below were derived using \(k=1\). Shparlinski [23] showed that, given \(g\equiv 0\) and \(S=\{0,\ldots,d\}\), the conjecture holds for almost all polynomials under the condition that \[x^{d-1+\varepsilon}\leqslant H\leqslant x^{A},\] for some constant \(A\). In recent work, Browning and Shparlinski [8] used the geometry of numbers and the determinant method to improve this result to \[x^{d-3+\varepsilon}\leqslant H\leqslant x^{A}.\] In the same paper, Browning and Shparlinski showed that if \(g=c_{0}x^{d}+\cdots+c_{d-1}x^{1}\) and only the constant coefficient is varied, i.e. \(S=\{d\}\), then the corresponding lower-bound threshold for \(H\) is \[x^{(d-1)/2+\eta(d)+\varepsilon},\] where \[\eta(d)=\begin{cases}2^{-(d+1)}&\text{if }2\leqslant d\leqslant 5\\ \frac{1}{d(d-1)}&\text{if }d\geqslant 6.\end{cases}\] This was achieved via adapting some works on the Vinogradov mean value theorem (see [8] and the references therein). In this paper, we approach this problem by considering inequality (1.3) for \(k=2\). We adapt methods developed in recent work by Browning, Sofos and Teräväinen [9] to prove the following theorem: **Theorem 1.4**.: _Let \(A,d>1\), \(1/5>\varepsilon>0\) be fixed, \(d\geqslant\ell>k\geqslant 0\). Then there is a constant \(H_{0}(A,d)\) such that for all \(H>H_{0}(A,d)\) the following statement holds: Let \(g\) be any polynomial of degree \(\leqslant d\) with \(g(0)\neq 0\) and let \(S=\{\ell,k\}\). Let \(x\) be such that_ 1. \(x^{2\ell-d}\leqslant H\)_;_ 2. \(x^{d+\ell-2k+14/5+\varepsilon}\leqslant H\leqslant x^{A}\)_._ _Then there exists some \(\delta>0\) such that for all but \(\ll H^{2-\delta}\) many degree \(d\) polynomials \(f\) in \(\mathscr{F}_{g,S}(H)\), where the implied constant only depends on fixed parameters, we have_ \[\left|\frac{1}{x}\sum_{1\leqslant n\leqslant x}\mu^{2}(f(n))-c_{f}\right|\leqslant x^{-\delta}. \tag{1.4}\] An explicit value that can be chosen for \(\delta\) will be presented in the proof in Section 5. **Remark 1.5**.: By going through the proof of Theorem 1.4 above, one can see that under the condition that \(\ell/2+k>d+7/5\), the result above still holds, even if \(x^{2\ell-d}\geqslant H\), after a slight change in the second relation between \(x\) and \(H\). **Remark 1.6**.: We can achieve the minimal error term by taking \(\ell=1\) and \(k=0\).
Using the notation from above, we have the following statement: If \(x\) is such that \[x^{d+19/5+\varepsilon}\leqslant H\leqslant x^{A},\] then for all but \[\ll H^{2-\frac{1-\varepsilon}{(3d+19)(2+d)}}\] many degree \(d\) polynomials \(f\) in \(\mathscr{F}_{g,S}(H)\), where the implied constant only depends on fixed parameters, we have \[\left|\frac{1}{x}\sum_{1\leqslant n\leqslant x}\mu^{2}(f(n))-c_{f}\right|\leqslant x^{-\frac{1-\varepsilon}{20+10d}}. \tag{1.5}\] **Remark 1.7**.: We can achieve the maximal range of values of \(x\) by choosing \(\ell=[2d/3]\). Let \(r(d)=2d/3-\ell\). Then using the notation from above, we have, after optimising for \(k\), the following statement: Let \(k=\ell-1\). If \(x\) is such that \[x^{d/3+4/5+r(d)+\varepsilon}\leqslant H\leqslant x^{A},\] then there exists some \(\delta>0\) such that for all but \(\ll H^{2-\delta}\) many degree \(d\) polynomials \(f\) in \(\mathscr{F}_{g,S}(H)\), where the implied constant only depends on fixed parameters, we have \[\left|\frac{1}{x}\sum_{1\leqslant n\leqslant x}\mu^{2}(f(n))-c_{f}\right|\leqslant x^{-\delta}. \tag{1.6}\] ### Structure of the paper In Section 2, we present the tool that allows us to prove the result on average for random polynomials via equidistribution in short intervals and in arithmetic progressions. In Section 3, we derive results on how \(\mu^{2}(\cdot)\) is distributed in the relevant cases. In Section 4, we find an approximation of \(\mu^{2}(\cdot)\) in short intervals and in arithmetic progressions, which enables us to apply Corollary 2.1 to the setting of Theorem 1.4 and to prove the theorem in Section 5. ### Notation \(h\), \(i\), \(j\), \(k\), \(\ell\) and \(q\) will always be integers and \(p\) will always denote a prime number. Further, \(I\) will always be an interval and \(\mu(\cdot)\) will always denote the Möbius function, \(\varphi(\cdot)\) the Euler totient function and \(\tau(\cdot)\) the divisor function. \(A\), \(d\) and \(\varepsilon\) will be constants, where \(A\) is usually very large, and \(\varepsilon\) very small. We will adopt Vinogradov's notation \(O(\cdot)\) and \(\ll\), and we will allow the implied constant to depend only on the constant factors \(A\), \(d\) and \(\varepsilon\) unless indicated in the subscript. ### Acknowledgements This paper is the result of my Master's thesis at the University of Vienna, written under the supervision of Tim Browning. I am greatly indebted to Tim Browning, who welcomed me into his research group without hesitation and without whose insights and tips this project would not have been possible. Special thanks go also to Igor Shparlinski for his valuable feedback. ## 2. Main tool We present a corollary of Theorem 2.2 in [10]. In our setting, we expect a power-saving instead of a log-saving, so we change the second assumption accordingly. **Corollary 2.1**.: _Let \(A\geqslant 1\), \(d>1\), \(\delta>\varepsilon>0\) and \(0\leqslant k<\ell\leqslant d\) be fixed. Let \(H\geqslant H_{0}(A,d)\) and \(x^{\ell-k+\varepsilon}<H\leqslant x^{A}\). Let \(F:\mathbb{Z}\to\mathbb{C}\) be a sequence such that_ 1. \(|F(n)|\ll n^{\varepsilon}\) _for all_ \(n\in\mathbb{Z}\)_;_ 2.
_For any_ \(q\leqslant x^{\ell}\)__ \[\max_{\begin{subarray}{c}1\leqslant u\leqslant q\\ \gcd(u,q)\leqslant x\end{subarray}}\sup_{\begin{subarray}{c}I\text{ interval}\\ |I|>H^{1-\varepsilon}x^{k}\end{subarray}}\frac{q}{|I|}\left|\sum_{ \begin{subarray}{c}n\in I\\ n\equiv u\bmod q\end{subarray}}F(n)\right|\ll x^{-\delta}.\] _Then, for any polynomial \(g\in\mathbb{Z}[t]\) of degree \(\leqslant d\) with coefficients in \([-H,H]\) and \(g(0)\neq 0\), and for any coefficients \(\alpha_{n}\in\mathbb{C}\) such that \(|\alpha_{n}|\leqslant 1\), we have_ \[\sup_{x^{\prime}\in[x/2,x]}\sum_{|a|,|b|\leqslant H}\left|\sum_{1\leqslant n \leqslant x^{\prime}}\alpha_{n}F(an^{k}+bn^{\ell}+g(n))\right|^{2}\ll\frac{H^ {2}x^{2}}{x^{\delta^{\prime}}}, \tag{2.1}\] _with_ \[\delta^{\prime}=\min\left(\frac{\delta-\varepsilon}{1+\ell+d(\ell-k)},\frac{1- \varepsilon}{1+3\ell-2k+d}\right).\] To prove this corollary, we proceed along the same lines as in the proof of Theorem 2.2 in [10]. We first state some lemmata which we use in the proof. An analogous version of each lemma can be found in [10]. **Lemma 2.2**.: _Let \(q\in\mathbb{N}\) and \(z\geqslant 1\). If \(q\leqslant z^{A}\), then_ \[\sum_{\begin{subarray}{c}d|q\\ d\geqslant z\end{subarray}}\frac{1}{d}\ll\frac{1}{z^{1-\varepsilon}}. \tag{2.2}\] Proof.: By the standard estimate for the divisor function and by the assumption \(q\leqslant z^{A}\), we obtain that \[\sum_{\begin{subarray}{c}d|q\\ d\geqslant z\end{subarray}}\frac{1}{d}\ll\frac{1}{z}\sum_{d|q}1\ll\frac{1}{z^{ 1-\varepsilon}}.\] **Lemma 2.3**.: _Let \(A\geqslant 1\), \(a,c\in\mathbb{Z}\backslash\{0\}\), \(b\in\mathbb{Z}\) and \(x_{1},x_{2}\geqslant 1\), such that \(|c|<x_{2}^{A}\). Then we have_ \[\#\{n\in\mathbb{Z}\cap[-x_{1},x_{1}]:\gcd(an+b,c)>x_{2}\}\ll\frac{x_{1}\gcd( a,c)}{x_{2}^{1-\varepsilon}}+\tau(c).\] The proof follows the proof in [10], now using (2.2) as an estimate. **Lemma 2.4**.: _Let \(\varepsilon>0\), \(d\in\mathbb{N}\), \(d>\kappa>0\) be fixed and let \(x>1\). Let \(g\in\mathbb{Z}[t]\) be a polynomial of degree \(\leqslant d\) such that \(g(0)\neq 0\). Define_ \[\mathscr{M}_{\kappa,d}:=\left\{\begin{array}{rl}n_{1},n_{2}>x^{1-\kappa/d} &\text{and }|n_{1}-n_{2}|>x^{1-\kappa/d}\\ \mathbf{n}\in(\mathbb{N}\cap[1,x])^{2}:&\gcd(n_{1},n_{2})<x^{\kappa/d}\\ &\gcd(n_{1}^{d},g(n_{1}))<x^{\kappa+\varepsilon}\end{array}\right\}. \tag{2.3}\] _Then \(\#((\mathbb{N}\cap[1,x])^{2}\setminus\mathscr{M}_{\kappa,d})\ll x^{2-\kappa/d}\)._ We show Lemma 2.4 completely analogously to the corresponding statement in [10]. In both cases, the proof is done by considering the number of pairs failing each property individually and then applying the union bound. Now we have stated all the necessary lemmata to prove the corollary. **Remark 2.5**.: Due to the assumption that \(x^{\ell-k+\varepsilon}<H\leqslant x^{A}\), we may interchange \((Hx^{d})^{\varepsilon}\), \(H^{\varepsilon}\) and \(x^{\varepsilon}\) throughout the proof. ### Proof of Corollary 2.1 In the proof, we will expand the left-hand side of (2.1), and we will show that the inequality holds for all \(x^{\prime}\in[x/2,x]\) and \(g\in\mathbb{Z}\left[t\right]\), hence also for the maximal value that any such combination of \(x^{\prime}\) and \(g\) can reach. The left-hand side now looks like: \[\sum_{1\leqslant n_{1},n_{2}\leqslant x^{\prime}}\alpha_{n_{1}}\overline{ \alpha}_{n_{2}}\sum_{|a|,|b|\leqslant H}F(an_{1}^{k}+bn_{1}^{\ell}+g(n_{1})) \overline{F}(an_{2}^{k}+bn_{2}^{\ell}+g(n_{2})). 
\tag{2.4}\] Using the notation of Lemma 2.4, we define the sets \(\mathscr{M}:=\mathscr{M}_{\kappa,d}\) and \(\mathscr{M}^{c}:=\left\{(n_{1},n_{2})\in\mathbb{N}^{2}:n_{1},n_{2}\leqslant x \right\}\setminus\mathscr{M}\), where \(\kappa\) is a parameter that will be defined later. In order to consider their contributions individually, we define the following two cases: 1. \((n_{1},n_{2})\in\mathscr{M}^{c}\); 2. \((n_{1},n_{2})\in\mathscr{M}\). Lemma 2.4 gives us that \[\#\mathscr{M}^{c}\ll x^{2-\kappa/d}. \tag{2.5}\] #### 2.1.1. Contribution of \((n_{1},n_{2})\in\mathscr{M}^{c}\) We can use the estimate above to bound the contribution of \(\mathscr{M}^{c}\). By the first assumption of Corollary 2.1, we have that \[\sum_{n_{1},n_{2}\in\mathscr{M}^{c}}\alpha_{n_{1}}\overline{ \alpha}_{n_{2}}\sum_{|a|,|b|\leqslant H}F(an_{1}^{k}+bn_{1}^{\ell}+g(n_{1})) \overline{F}(an_{2}^{k}+bn_{2}^{\ell}+g(n_{2}))\] \[\ll\#\mathscr{M}^{c}H^{2}H^{\varepsilon}\] \[\ll H^{2}x^{2-\kappa/d+\varepsilon}.\] We now turn to Case (2) and follow the argument in [10]. #### 2.1.2. Contribution of \((n_{1},n_{2})\in\mathscr{M}\) We first take absolute values and use the triangle inequality to bound the contribution of \(\mathscr{M}\) to (2.4). Hence, the contribution is \[\ll\sum_{\mathbf{n}\in\mathscr{M}}\left|\alpha_{n_{1}}\overline{ \alpha}_{n_{2}}\right|\left|\sum_{|a||b|\leqslant H}F(an_{1}^{k}+bn_{1}^{\ell} +g(n_{1}))\overline{F}(an_{2}^{k}+bn_{2}^{\ell}+g(n_{2}))\right|. \tag{2.6}\] Since \(\left|\alpha_{n_{1}}\right|,\left|\alpha_{n_{2}}\right|\leqslant 1\), and after using the triangle inequality again, (2.6) can be written as \[\ll\sum_{\mathbf{n}\in\mathscr{M}}\sum_{m_{2}\in\mathbb{Z}}F(n_{ 2}^{k}m_{2}+g(n_{2}))\left|\sum_{m_{1}\in\mathbb{Z}}F(n_{1}^{k}m_{1}+g(n_{1}) )\gamma(\mathbf{m})\right|, \tag{2.7}\] where we take \(m_{i}=a+bn_{i}^{\ell-k}\) for \(i=1,2\) and let \[\gamma(\mathbf{m}):=\#\left\{(a,b)\in(\mathbb{Z}\cap[-H,H])^{2}:m_ {i}=a+bn_{i}^{\ell-k}\ \forall i=1,2\right\}.\] From now on, we will use the following shorthand: \[\Delta:=n_{1}^{\ell-k}-n_{2}^{\ell-k}.\] This can be bounded below using the first property of \(\mathscr{M}\): \[|\Delta|\geqslant|n_{1}-n_{2}|n_{1}^{\ell-k-1}\gg x^{(\ell-k)(1- \kappa/d)}. \tag{2.8}\] We have by the definition of \(m_{2}\) that \[|m_{2}|\leqslant H+n_{2}^{\ell-k}H\leqslant 2n_{2}^{\ell-k}H.\] Now we rewrite \(\gamma(\mathbf{m})\) as an indicator function of the following simultaneous statements: \[\Delta\mid(m_{1}-m_{2}),\quad|m_{1}-m_{2}|\leqslant|\Delta|H,\quad|m_{2}n_{1} ^{\ell-k}-m_{1}n_{2}^{\ell-k}|\leqslant|\Delta|H. \tag{2.9}\] The latter two conditions can be merged as \(m_{1}\in J(\mathbf{n},m_{2})\) for some interval \(J(\mathbf{n},m_{2})\) whose length can be bounded above using the first property of \(\mathscr{M}\). This yields \[|J(\mathbf{n},m_{2})|\leqslant 2|\Delta|H/n_{2}^{\ell-k}\leqslant 2Hx^{\ell-k} \big{/}n_{2}^{\ell-k}\leqslant 2Hx^{(\ell-k)\kappa/d}. \tag{2.10}\] Hence, equation (2.7) can be rewritten again as \[\sum_{\mathbf{n}\in\mathscr{M}}\sum_{|m_{2}|\leqslant 2n_{2}^{\ell-k}H}(Hx^{d}) ^{\varepsilon}\left|S_{1}\right|, \tag{2.11}\] where \[S_{1}=\sum_{\begin{subarray}{c}m_{1}\in I_{1}(\mathbf{n})\cap J(\mathbf{n},m_ {2})\\ m_{1}\equiv m_{2}\bmod\Delta\end{subarray}}F(n_{1}^{k}m_{1}+g(n_{1})).\] We now need to distinguish if \(\gcd(n_{1}^{k}m_{2}+g(n_{1}),\Delta)\) is larger than \(x^{1-\kappa-\varepsilon}\) or not. First we consider the case where \(\gcd(n_{1}^{k}m_{2}+g(n_{1}),\Delta)>x^{1-\kappa-\varepsilon}\). 
By assumption (1) of Corollary 2.1, we have that the expression in (2.11) is \[\ll H^{\varepsilon}\sum_{\mathbf{n}\in\mathscr{M}}\sum_{\begin{subarray}{c}|m_{2}|\leqslant 2n_{2}^{\ell-k}H\\ \gcd(n_{1}^{k}m_{2}+g(n_{1}),\Delta)>x^{1-\kappa-\varepsilon}\end{subarray}}\left|\sum_{\begin{subarray}{c}m_{1}\in I_{1}(\mathbf{n})\cap J(\mathbf{n},m_{2})\\ m_{1}\equiv m_{2}\bmod\Delta\end{subarray}}(Hx^{d})^{\varepsilon}\right|. \tag{2.12}\] By the estimate of the sizes of \(J(\mathbf{n},m_{2})\) and \(|\Delta|\), we have that the inner sum is \[\ll H^{1+\varepsilon}x^{(\ell-k)(2\kappa/d-1)}+1\ll H^{1+\varepsilon}x^{(\ell-k)(2\kappa/d-1)},\] since \(H>x^{\ell-k}\). Hence, we get that the expression is \[\ll H^{1+\varepsilon}x^{(\ell-k)(2\kappa/d-1)}\sum_{\mathbf{n}\in\mathscr{M}}\sum_{\begin{subarray}{c}|m_{2}|\leqslant 2n_{2}^{\ell-k}H\\ \gcd(n_{1}^{k}m_{2}+g(n_{1}),\Delta)>x^{1-\kappa-\varepsilon}\end{subarray}}1. \tag{2.13}\] Now by Lemma 2.3, we get that the sum over \(m_{2}\) is \[\ll\frac{x^{\ell-k}Hx^{\kappa\ell/d}}{x^{1-\kappa-\varepsilon}}+\tau(\Delta)\ll\frac{x^{\ell-k}Hx^{\kappa\ell/d}}{x^{1-\kappa-\varepsilon}},\] since \[\gcd(n_{1}^{k},\Delta)=\gcd(n_{1}^{k},n_{1}^{\ell-k}-n_{2}^{\ell-k})\ll\gcd(n_{1},n_{2})^{\ell}\ll x^{\kappa\ell/d}.\] Combining everything, we get that (2.11) is \[\ll x^{2}H^{1+\varepsilon}x^{(\ell-k)(2\kappa/d-1)}\frac{x^{\ell-k}Hx^{\kappa\ell/d}}{x^{1-\kappa-\varepsilon}}\ll\frac{x^{2}H^{2}}{x^{1-\varepsilon-\kappa((3\ell-2k)/d+1)}}.\] Now we consider the case where \(\gcd(n_{1}^{k}m_{2}+g(n_{1}),\Delta)\leqslant x^{1-\kappa-\varepsilon}\). Making the change of variables \(m=n_{1}^{k}m_{1}+g(n_{1})\) in (2.11), we see that \[S_{1}=\sum_{\begin{subarray}{c}m\in J^{\prime}(\mathbf{n},m_{2})\\ m\equiv u\bmod q\end{subarray}}F(m),\] where * \(u\) is the unique solution \(\bmod\,q\) to \(u\equiv n_{1}^{k}m_{2}+g(n_{1})\bmod\Delta\) and \(u\equiv g(n_{1})\bmod n_{1}^{k}\), where \(q=\operatorname{lcm}(n_{1}^{k},|\Delta|)\leqslant x^{\ell}\); * \(J^{\prime}(\mathbf{n},m_{2})=n_{1}^{k}J(\mathbf{n},m_{2})+g(n_{1})\) is an interval with \[|J^{\prime}(\mathbf{n},m_{2})|=n_{1}^{k}|J(\mathbf{n},m_{2})|\leqslant 2n_{1}^{k}|\Delta|H/n_{2}^{\ell-k}, \tag{2.14}\] where the inequality follows from (2.10). Also, using the second property of (2.3), we get that \[q\geqslant\frac{n_{1}^{k}\,|\Delta|}{\gcd(n_{1}^{k},\Delta)}\geqslant\frac{n_{1}^{k}\,|\Delta|}{x^{\kappa\ell/d}}.\] Furthermore, we have \[\gcd(u,q)\leqslant\gcd(g(n_{1}),n_{1}^{k})\gcd(n_{1}^{k}m_{2}+g(n_{1}),|\Delta|)\leqslant x^{\kappa+\varepsilon}x^{1-\kappa-\varepsilon}\leqslant x\] by the third property in (2.3). We can now estimate the size of \(S_{1}\) using assumption (2) of Corollary 2.1. Hence, we get that \[|S_{1}|\ll\frac{|J^{\prime}(\mathbf{n},m_{2})|}{qx^{\delta}}\ll\frac{n_{1}^{k}|\Delta|H/n_{2}^{\ell-k}}{qx^{\delta}}\ll\frac{Hx^{\kappa\ell/d}/n_{2}^{\ell-k}}{x^{\delta}}\ll\frac{H}{x^{\delta-\kappa\ell/d+(\ell-k)(1-\kappa)}},\] except when \(|J^{\prime}(\mathbf{n},m_{2})|\leqslant H^{1-\varepsilon}x^{k}\); that case will be considered below.
Therefore, we can see by assumption (1) that the left-hand side of (2.11) is \[\ll\sum_{n_{1}\leqslant x}\sum_{n_{2}\leqslant x}\sum_{|m_{2}|\leqslant 2n_{2}^{\ell-k}H}(Hx^{d})^{\varepsilon}\frac{H}{x^{\delta-\kappa\ell/d+(\ell-k)(1-\kappa)}}\] \[\ll\sum_{n_{1}\leqslant x}\sum_{n_{2}\leqslant x}\frac{H^{2+\varepsilon}x^{\ell-k}}{x^{\delta-\kappa\ell/d+(\ell-k)(1-\kappa)}}\] \[\ll\frac{x^{2}H^{2}}{x^{\delta-(\ell/d+\ell-k)\kappa-\varepsilon}}.\] This is an upper bound in this case. In the case where \(|J^{\prime}(\mathbf{n},m_{2})|\leqslant H^{1-\varepsilon}x^{k}\), we use \(|F(n)|\ll(Hx^{d})^{\varepsilon}\), which gives us the estimate \[|S_{1}|\ll(Hx^{d})^{\varepsilon}\left(\frac{|J^{\prime}(\mathbf{n},m_{2})|}{q}+1\right)\ll(Hx^{d})^{\varepsilon}\left(\frac{H^{1-\varepsilon}x^{k+\kappa\ell/d}}{x^{\ell-k}}\right),\] since \(q\leqslant x^{\ell}<H^{1-\varepsilon}x^{k}\). It remains to investigate for which \(m_{2}\) the case \(|J^{\prime}(\mathbf{n},m_{2})|\leqslant H^{1-\varepsilon}x^{k}\) occurs. First, we assume \(n_{1}>n_{2}\). By the definition of \(J^{\prime}(\mathbf{n},m_{2})\), we have that \(J^{\prime}(\mathbf{n},m_{2})\) is the intersection of the following two intervals: \[n_{1}^{k}\cdot[m_{2}-|\Delta|H,m_{2}+|\Delta|H]\] \[n_{1}^{k}\cdot[m_{2}\frac{n_{1}^{\ell-k}}{n_{2}^{\ell-k}}-\frac{|\Delta|H}{n_{2}^{\ell-k}},m_{2}\frac{n_{1}^{\ell-k}}{n_{2}^{\ell-k}}+\frac{|\Delta|H}{n_{2}^{\ell-k}}]\] We know by the assumption \(n_{1}>n_{2}\) that if the intersection has length smaller than \(H^{1-\varepsilon}x^{k}\), then the following inequality is satisfied: \[|m_{2}|+|\Delta|H-\frac{H^{1-\varepsilon}x^{k}}{n_{1}^{k}}<|m_{2}|\frac{n_{1}^{\ell-k}}{n_{2}^{\ell-k}}-\frac{|\Delta|H}{n_{2}^{\ell-k}}.\] We rearrange and recall the definition of \(\Delta\). This gives us that \[|m_{2}|>H(n_{2}^{\ell-k}+1)-\frac{H^{1-\varepsilon}x^{k}n_{2}^{\ell-k}}{|\Delta|n_{1}^{k}}.\] In the case \(n_{1}<n_{2}\), we start with the analogous inequalities and get the same result. By the definition of \(m_{2}\), we also know that \(|m_{2}|<H(n_{2}^{\ell-k}+1)\), hence we have that \(|m_{2}|\in U(n_{2})\), where \(U(n_{2}):=[H(n_{2}^{\ell-k}+1)-\frac{H^{1-\varepsilon}x^{k}n_{2}^{\ell-k}}{|\Delta|n_{1}^{k}},H(n_{2}^{\ell-k}+1)]\). Plugging everything into equation (2.11) and using the size estimate of \(|\Delta|\), we get \[\ll\sum_{n_{1}\leqslant x}\sum_{n_{2}\leqslant x}\sum_{|m_{2}|\in U(n_{2})}(Hx^{d})^{\varepsilon}(Hx^{d})^{\varepsilon}\left(\frac{H^{1-\varepsilon}x^{k+\kappa\ell/d}}{x^{\ell-k}n_{1}^{k}}\right)\] \[\ll H^{2+\varepsilon}\sum_{n_{1}\leqslant x}\sum_{n_{2}\leqslant x}\frac{n_{2}^{\ell-k}x^{2k+\kappa\ell/d}}{|\Delta|x^{\ell-k}n_{1}^{2k}}\] \[\ll x^{2+\varepsilon}H^{2}\frac{x^{\kappa\ell/d}}{|\Delta|}\] \[\ll x^{2}H^{2}\frac{1}{x^{(\ell-k)(1-\kappa/d)-\kappa\ell/d-\varepsilon}}.\] This provides an upper bound in this case. #### 2.1.3. Optimising \(\kappa\) Now we have in total four different upper bounds for the desired quantity. Their denominators are \(x^{\kappa/d+\varepsilon}\), \(x^{1-\varepsilon-\kappa((3\ell-2k)/d+1)}\), \(x^{\delta-(\ell/d+\ell-k)\kappa-\varepsilon}\) and \(x^{(\ell-k)(1-\kappa/d)-\kappa\ell/d-\varepsilon}\). We now want to find the optimal \(\kappa\) to maximise the exponent.
By the assumption that \(d>1\), we have the following inequality: \[(\ell-k)(1-\kappa/d)-\kappa\ell/d-\varepsilon>1-\varepsilon-\kappa(\ell/d+2( \ell-k)+1).\] Now \(\delta-(\ell/d+l-k)\kappa-\varepsilon\) and \(1-\varepsilon-\kappa(\ell/d+2(\ell-k)+1)\) are decreasing in \(\kappa\), while \(\kappa/d+\varepsilon\) is increasing in \(\kappa\). Since neither of the decreasing functions is strictly smaller than the other for all relevant values of \(\delta\) and \(\kappa\), the optimal \(\kappa\) is the minimal solution to the following two equations: * \(\delta-(\ell/d+l-k)\kappa-\varepsilon=\kappa/d+\varepsilon\); * \(1-\varepsilon-\kappa((3\ell-2k)/d+1)=\kappa/d+\varepsilon\). The first equation gives us that \(\kappa=\frac{\delta-\varepsilon}{(1+\ell)/d+(\ell-k)}\), the second one gives us that \(\kappa=\frac{1-\varepsilon}{(1+3\ell-2k)/d+1}\). Therefore, we have that \(\kappa=\min\left(\frac{\delta-\varepsilon}{(1+\ell)/d+(\ell-k)},\frac{1- \varepsilon}{(1+3\ell-2k)/d+1}\right)\) is the optimal choice, which completes the proof. ## 3. Equidistribution of square-free numbers in short intervals In this section, we will prove the following result, which will help us to verify that the second assumption of Corollary 2.1 is satisfied. **Proposition 3.1** (Square-free numbers in short intervals and arithmetic progressions).: _Let \(q\geqslant 1,a\geqslant 0\), let \(h=\gcd\left(a,q\right)\) and \(q^{\prime}h=q\). Further, let \(0<y<x\) be any real numbers, then_ \[\sum_{\begin{subarray}{c}x-y\leqslant n\leqslant x\\ n\equiv a\bmod q\end{subarray}}\mu^{2}\left(n\right) =\frac{6}{\pi^{2}}\frac{\mu^{2}\left(h\right)y}{q}\prod_{p|q^{ \prime}}\left(1-\frac{1}{p}\right)^{-1}\prod_{p|q}\left(1+\frac{1}{p}\right)^ {-1}\] \[+O\left(h\left(\left(\frac{x}{q}\right)^{1/2}+q^{1/2+\varepsilon} \right)\right). \tag{3.1}\] We can rewrite the right-hand side above in terms of the solutions of the polynomial \(f\bmod p^{2}\). Therefore, we get \[\frac{6\mu^{2}(h)y}{\pi^{2}q}\prod_{p|q^{\prime}}\left(1-\frac{1}{p}\right)^{ -1}\prod_{p|q}\left(1+\frac{1}{p}\right)^{-1}=\frac{y}{q}\prod_{p}\left(1- \frac{\rho_{f}(p^{2})}{p^{2}}\right),\] where \(f\left(n\right)=qn+a\) and \(\rho_{f}(m)\) is defined to be the number of solutions of a polynomial \(f(n)\bmod m\) for \(1\leqslant n\leqslant m\), hence it agrees with the conjectured asymptotic behaviour. Proof.: We want to estimate \[S:=\sum_{\begin{subarray}{c}x-y\leqslant n\leqslant x\\ n\equiv a\bmod q\end{subarray}}\mu^{2}\left(n\right).\] Our aim is to reduce the problem to the case where \(\gcd(a,q)=1\) and then apply a theorem by Hooley. Let \(h=\gcd(a,q)\) and \(n^{\prime}h=n\), \(a^{\prime}h=a\) and \(q^{\prime}h=q\), hence \(\gcd(a^{\prime},q^{\prime})=1\). We have \[S=\sum_{\begin{subarray}{c}x-y\leqslant n\leqslant x\\ n\equiv a\bmod q\end{subarray}}\mu^{2}\left(n\right)=\sum_{\begin{subarray}{ c}\frac{x-y}{h}\leqslant n^{\prime}\leqslant\frac{x}{h}\\ n^{\prime}\equiv a^{\prime}\bmod q^{\prime}\end{subarray}}\mu^{2}\left(hn^{ \prime}\right).\] If \(\gcd(h,n^{\prime})\neq 1\), we have \(\mu^{2}(hn^{\prime})=0\). Therefore, we can impose that \(\gcd(h,n^{\prime})=1\). 
We get that \[S=\sum_{\begin{subarray}{c}\frac{x-y}{h}\leqslant n^{\prime}\leqslant\frac{x}{h}\\ n^{\prime}\equiv a^{\prime}\bmod q^{\prime}\\ \gcd(h,n^{\prime})=1\end{subarray}}\mu^{2}(h)\mu^{2}(n^{\prime})=\sum_{\begin{subarray}{c}\frac{x-y}{h}\leqslant n^{\prime}\leqslant\frac{x}{h}\\ n^{\prime}\equiv a^{\prime}\bmod q^{\prime}\\ \gcd(\tilde{h},n^{\prime})=1\end{subarray}}\mu^{2}(h)\mu^{2}(n^{\prime}),\] where \[\tilde{h}=\prod_{\begin{subarray}{c}p\mid h\\ p\nmid q^{\prime}\end{subarray}}p,\] since the congruence condition already ensures that \(\gcd(n^{\prime},q^{\prime})=1\). Now we rewrite the condition \(\gcd(\tilde{h},n^{\prime})=1\) as a sum over congruence classes and apply the Chinese Remainder Theorem to the inner sum, since \(\gcd(\tilde{h},q^{\prime})=1\) by the construction of \(\tilde{h}\). Therefore, \[S=\mu^{2}(h)\sum_{\begin{subarray}{c}1\leqslant b\leqslant\tilde{h}\\ \gcd(b,\tilde{h})=1\end{subarray}}\sum_{\begin{subarray}{c}\frac{x-y}{h}\leqslant n^{\prime}\leqslant\frac{x}{h}\\ n^{\prime}\equiv a^{\prime}\bmod q^{\prime}\\ n^{\prime}\equiv b\bmod\tilde{h}\end{subarray}}\mu^{2}(n^{\prime})=\mu^{2}(h)\sum_{\begin{subarray}{c}1\leqslant b\leqslant\tilde{h}\\ \gcd(b,\tilde{h})=1\end{subarray}}\sum_{\begin{subarray}{c}\frac{x-y}{h}\leqslant n^{\prime}\leqslant\frac{x}{h}\\ n^{\prime}\equiv\tilde{a}\bmod q^{\prime}\tilde{h}\end{subarray}}\mu^{2}(n^{\prime}),\] where \(\tilde{a}\) is the unique solution mod \(q^{\prime}\tilde{h}\) to \(\tilde{a}\equiv a^{\prime}\bmod q^{\prime}\) and \(\tilde{a}\equiv b\bmod\tilde{h}\). The inner sum now has a form where we can apply Theorem 3 from Hooley [15]. Also, we notice that the remaining sum over \(b\) equals \(\varphi(\tilde{h})\) by definition. This gives that \[S=\mu^{2}(h)\frac{6}{\pi^{2}}\frac{y}{hq^{\prime}\tilde{h}}\prod_{p\mid q^{\prime}\tilde{h}}\left(1-\frac{1}{p^{2}}\right)^{-1}\sum_{\begin{subarray}{c}1\leqslant b\leqslant\tilde{h}\\ \gcd(b,\tilde{h})=1\end{subarray}}1+O\left(\varphi(\tilde{h})\left(\left(\frac{x}{q^{\prime}\tilde{h}}\right)^{1/2}+(q^{\prime}\tilde{h})^{1/2+\varepsilon}\right)\right)\] \[=\mu^{2}(h)\frac{6}{\pi^{2}}\frac{y}{hq^{\prime}}\frac{\varphi(\tilde{h})}{\tilde{h}}\prod_{p\mid q}\left(1-\frac{1}{p^{2}}\right)^{-1}+O\left(\tilde{h}\left(\left(\frac{xh/\tilde{h}}{q}\right)^{1/2}+q^{1/2+\varepsilon}\right)\right)\] \[=\mu^{2}(h)\frac{6}{\pi^{2}}\frac{y}{q}\prod_{\begin{subarray}{c}p\mid h\\ p\nmid q^{\prime}\end{subarray}}\left(1-\frac{1}{p}\right)\prod_{p\mid q}\left(1-\frac{1}{p^{2}}\right)^{-1}+O\left(h\left(\left(\frac{x}{q}\right)^{1/2}+q^{1/2+\varepsilon}\right)\right)\] \[=\frac{6}{\pi^{2}}\frac{\mu^{2}\left(h\right)y}{q}\prod_{p\mid q^{\prime}}\left(1-\frac{1}{p}\right)^{-1}\prod_{p\mid q}\left(1+\frac{1}{p}\right)^{-1}+O\left(h\left(\left(\frac{x}{q}\right)^{1/2}+q^{1/2+\varepsilon}\right)\right).\] ## 4. An analogue for \(\mu^{2}(\cdot)\) This section aims to find an analogue for \(\mu^{2}(\cdot)\). We define \[k_{D}(n):=\sum_{\begin{subarray}{c}d^{2}\mid n\\ d\leqslant D\end{subarray}}\mu\left(d\right).\] We will need to understand the values of \(k_{D}(n)\) in arithmetic progressions in short intervals to verify assumption (2) of Corollary 2.1. Further, we also need to understand these values for general polynomials. The first one will be calculated via explicit manipulations to arrive at the expression given by the asymptotics of \(\mu^{2}\left(n\right)\) in arithmetic progressions in short intervals.
The latter one will be derived via a general approach for the first \(x\) values of an arbitrary polynomial whose coefficients have no common divisor. ### Preliminary Before proving any statements regarding \(k_{D}(\cdot)\), we will investigate a bound on the average number of solutions \(\bmod\,d^{2}\), which is stated and proven in [23]. We will repeat the proof along the same lines. **Lemma 4.1** (Shparlinski).: _Let \(f\) be a square-free polynomial such that the coefficients have no common factor. Then, if \(D\leqslant H^{A}\) for some \(A\), the following two statements hold:_ \[\sum_{\begin{subarray}{c}d\leqslant D\\ \mu^{2}(d)=1\end{subarray}}\rho_{f}(d^{2})=O(DH^{\varepsilon});\] \[\sum_{\begin{subarray}{c}d>D\\ \mu^{2}(d)=1\end{subarray}}\frac{\rho_{f}(d^{2})}{d^{2}}=O(D^{-1}H^{\varepsilon}).\] Proof.: Let \(d_{f}\) be the degree of the polynomial \(f\). We clearly have \(\rho_{f}(p)\leqslant d_{f}\), hence by Hensel lifting, we have \[\rho_{f}(p^{2})\leqslant d_{f},\] if \(p\) does not divide the discriminant \(\Delta_{f}\) of \(f\), which is non-zero since \(f\) is square-free. Also, we have \(\Delta_{f}=H^{O(1)}\). If \(p\mid\Delta_{f}\), we have the trivial bound \(\rho_{f}(p^{2})\leqslant d_{f}p\). Since \(\rho_{f}\) is multiplicative, we have for square-free \(d\) that \[\rho_{f}(d^{2})=\prod_{p\mid d}\rho_{f}(p^{2})\leqslant d_{f}^{\omega(d)}\gcd(d,\Delta_{f}),\] where \(\omega(d)\) is the number of prime divisors of \(d\). By the standard bound \(d_{f}^{\omega(d)}\ll d^{\varepsilon}\), we get \[\rho_{f}(d^{2})\ll d^{\varepsilon}\gcd(d,\Delta_{f}).\] Now we have \[\sum_{d\leqslant D}\gcd(d,\Delta_{f})\leqslant\sum_{e\mid\Delta_{f}}e\sum_{\begin{subarray}{c}d\leqslant D\\ e\mid d\end{subarray}}1\leqslant D\tau(\Delta_{f})\ll D\Delta_{f}^{\varepsilon}.\] This gives the first estimate \[\sum_{\begin{subarray}{c}d\leqslant D\\ \mu^{2}(d)=1\end{subarray}}\rho_{f}(d^{2})\ll D^{\varepsilon}\sum_{d\leqslant D}\gcd(d,\Delta_{f})=O(DH^{\varepsilon}).\] Further, we have \[\sum_{d>D}\frac{\gcd(d,\Delta_{f})}{d^{2}}\leqslant\sum_{e\mid\Delta_{f}}e\sum_{\begin{subarray}{c}d>D\\ e\mid d\end{subarray}}\frac{1}{d^{2}}\leqslant\sum_{e\mid\Delta_{f}}\frac{1}{e}\sum_{\begin{subarray}{c}d>D\\ e\mid d\end{subarray}}\frac{1}{(d/e)^{2}}\leqslant\sum_{e\mid\Delta_{f}}\frac{1}{e}\min\{(e/D),1\}.\] Considering the two cases \(e>D\) and \(e\leqslant D\) separately, we see that each of the \(\tau(\Delta_{f})\) summands is \(\ll D^{-1}\). Hence, using the estimates for \(\Delta_{f}\) and \(\tau\), we get the desired result \[\sum_{d>D}\frac{\rho_{f}(d^{2})}{d^{2}}=O(D^{-1}H^{\varepsilon}).\] ### \(k_{D}(\cdot)\) in short arithmetic progressions **Lemma 4.2**.: _Let \(q\geqslant 1\), \(a\geqslant 0\) and let \(h=h_{1}h_{2}=\gcd(a,q)\) be such that \(h_{1}\) is square-free and \(h_{2}\) is square-full. We assume further that each prime factor of \(h_{2}\) is less than \(D^{2}\). Then we have that_ \[\sum_{\begin{subarray}{c}x-y\leqslant n\leqslant x\\ n\equiv a\bmod q\end{subarray}}k_{D}\left(n\right)=\frac{6}{\pi^{2}}\frac{\mu^{2}\left(h\right)y}{q}\prod_{p\mid q^{\prime}}\left(1-\frac{1}{p}\right)^{-1}\prod_{p\mid q}\left(1+\frac{1}{p}\right)^{-1}+O\left(\frac{y}{q}\frac{\sqrt{h_{2}}}{D}H^{\varepsilon}+DH^{\varepsilon}\right). \tag{4.1}\] Proof.: As before, we start by exchanging summation.
Thus, we have \[S:=\sum_{\begin{subarray}{c}x-y\leqslant n\leqslant x\\ n\equiv a\bmod q\end{subarray}}k_{D}\left(n\right)=\sum_{\begin{subarray}{c}x-y\leqslant n\leqslant x\\ n\equiv a\bmod q\end{subarray}}\sum_{\begin{subarray}{c}d^{2}\mid n\\ d\leqslant D\end{subarray}}\mu\left(d\right)=\sum_{d\leqslant D}\mu\left(d\right)\sum_{\begin{subarray}{c}x-y\leqslant n\leqslant x\\ n\equiv a\bmod q\\ d^{2}\mid n\end{subarray}}1.\] Let \(h=\gcd(a,q)\). Then there exist \(n^{\prime},a^{\prime},q^{\prime}\in\mathbb{N}\) such that \(n^{\prime}h=n\), \(a^{\prime}h=a\) and \(q^{\prime}h=q\), with \(\gcd(a^{\prime},q^{\prime})=1\). We have \[S=\sum_{d\leqslant D}\mu\left(d\right)\sum_{\begin{subarray}{c}\frac{x-y}{h}\leqslant n^{\prime}\leqslant\frac{x}{h}\\ n^{\prime}\equiv a^{\prime}\bmod q^{\prime}\\ d^{2}\mid hn^{\prime}\end{subarray}}1=\sum_{k\mid h}\sum_{\begin{subarray}{c}d\leqslant D\\ \gcd(d^{2},h)=k\end{subarray}}\mu(d)\sum_{\begin{subarray}{c}\frac{x-y}{h}\leqslant n^{\prime}\leqslant\frac{x}{h}\\ n^{\prime}\equiv a^{\prime}\bmod q^{\prime}\\ \frac{d^{2}}{k}\mid n^{\prime}\end{subarray}}1.\] Let \(h=h_{1}h_{2}\) and \(k=k_{1}k_{2}\), where \(h_{1}\) and \(k_{1}\) are square-free, and \(h_{2}\) and \(k_{2}\) are square-full. The decomposition into a square-free and a square-full part gives us that \(\gcd(h_{1},h_{2})=\gcd(k_{1},k_{2})=1\). Then we have by \(\gcd(h_{1},h_{2})=1\) that \[\gcd(d^{2},h_{1})\cdot\gcd(d^{2},h_{2})=\gcd(d^{2},h_{1}h_{2})=k_{1}k_{2}.\] In particular, we have that \(\gcd(d^{2},h_{1})=k_{1}\), and \(\gcd(d^{2},h_{2})=k_{2}\), since every prime divisor of the first factor has multiplicity \(1\), and every prime divisor of the second factor has multiplicity at least \(2\). Therefore, we have \[S=\sum_{k_{2}\mid h_{2}}\sum_{k_{1}\mid h_{1}}\sum_{\begin{subarray}{c}d\leqslant D\\ \gcd(d,h_{1})=k_{1}\\ \gcd(d^{2},h_{2})=k_{2}\end{subarray}}\mu(d)\sum_{\begin{subarray}{c}\frac{x-y}{h}\leqslant n^{\prime}\leqslant\frac{x}{h}\\ n^{\prime}\equiv a^{\prime}\bmod q^{\prime}\\ \frac{d^{2}}{k_{1}k_{2}}\mid n^{\prime}\end{subarray}}1.\] We can assume that \(d\) is square-free since otherwise we have \(\mu^{2}(d)=0\). Hence, \(k_{2}\) is a perfect square since it contains no cubic factors, say \(\left(\tilde{k_{2}}\right)^{2}=k_{2}\). We have \(\gcd(d,h_{2})=\tilde{k_{2}}\) and \(\tilde{k_{2}}\mid\operatorname{rad}(h_{2})\) since \(d\) is square-free. Hence, we can rewrite the equation above as \[S=\sum_{\tilde{k_{2}}\mid\operatorname{rad}(h_{2})}\sum_{k_{1}\mid h_{1}}\sum_{\begin{subarray}{c}d\leqslant D\\ \gcd(d,h_{1})=k_{1}\\ \gcd(d,h_{2})=\tilde{k_{2}}\end{subarray}}\mu(d)\sum_{\begin{subarray}{c}\frac{x-y}{h}\leqslant n^{\prime}\leqslant\frac{x}{h}\\ n^{\prime}\equiv a^{\prime}\bmod q^{\prime}\\ \frac{d^{2}}{k_{1}(\tilde{k_{2}})^{2}}\mid n^{\prime}\end{subarray}}1.\] In order for the congruence to have any solutions, we need \(\gcd(\frac{d^{2}}{k_{1}(\tilde{k_{2}})^{2}},q^{\prime})=1\). Therefore, there exist \(n^{\prime\prime},a^{\prime\prime}\in\mathbb{N}\) such that \(n^{\prime}=\frac{d^{2}}{k_{1}\left(\tilde{k_{2}}\right)^{2}}n^{\prime\prime}\) and \(a^{\prime}\equiv\frac{d^{2}}{k_{1}\left(\tilde{k_{2}}\right)^{2}}a^{\prime\prime}\bmod q^{\prime}\).
Then \[S=\sum_{\tilde{k_{2}}\mid\operatorname{rad}(h_{2})}\sum_{k_{1}|h_{1}}\sum_{ \begin{subarray}{c}d\leq D\\ \gcd(d,h_{1})=k_{1}\\ \gcd(d,h_{2})=\tilde{k_{2}}\\ \gcd(\frac{d^{2}}{k_{1}(\tilde{k_{2}})^{2}},q^{\prime})=1\end{subarray}}\mu(d) \sum_{\begin{subarray}{c}\frac{x-y}{h}\frac{k_{1}\left(\tilde{k_{2}}\right)^{2 }}{d^{2}}\leqslant n^{\prime\prime}\leqslant\frac{x}{h}\frac{k_{1}\left( \tilde{k_{2}}\right)^{2}}{d^{2}}\\ n^{\prime\prime}\equiv a^{\prime\prime}\bmod q^{\prime}\end{subarray}}1.\] Using the trivial estimate for the inner sum, we get \[\frac{y}{q^{\prime}h}\frac{k_{1}\left(\tilde{k_{2}}\right)^{2}}{d^{2}}+O(1)= \frac{y}{q}\frac{k_{1}\left(\tilde{k_{2}}\right)^{2}}{d^{2}}+O(1).\] Therefore, we have \[S=\frac{y}{q}\sum_{\tilde{k_{2}}\mid\operatorname{rad}(h_{2})}\left(\tilde{k_{ 2}}\right)^{2}\sum_{k_{1}|h_{1}}k_{1}\sum_{\begin{subarray}{c}d\leq D\\ \gcd(d,h_{1})=k_{1}\\ \gcd(d,h_{2})=\tilde{k_{2}}\\ \gcd(\frac{d^{2}}{k_{1}(\tilde{k_{2}})^{2}},q^{\prime})=1\end{subarray}}\frac {\mu(d)}{d^{2}}+O(DH^{\varepsilon}).\] Let \(d^{\prime}\) be such that \(d=k_{1}\tilde{k_{2}}d^{\prime}\). Then \[S=\frac{y}{q}\sum_{\tilde{k_{2}}\mid\operatorname{rad}(h_{2})}\mu(\tilde{k_{2} })\sum_{k_{1}|h_{1}}\frac{\mu(k_{1})}{k_{1}}\sum_{\begin{subarray}{c}d^{ \prime}\leqslant\frac{D}{k_{1}k_{2}}\\ \gcd(d^{\prime},h)=1\\ \gcd(d^{\prime}k_{1},q^{\prime})=1\end{subarray}}\frac{\mu(d^{\prime})}{(d^{ \prime})^{2}}+O(DH^{\varepsilon}),\] since \(k_{1}\), \(\tilde{k_{2}}\) and \(d^{\prime}\) are pairwise coprime by the assumption that \(d\) is square-free. Extending the inner sum to an infinite series, we get by the standard estimate an error term of \[O\left(\frac{y}{q}\sum_{\tilde{k_{2}}|\text{rad}(h_{2})}\sum_{k_{1}|h_{1}}\frac{1 }{k_{1}}\sum_{\begin{subarray}{c}d^{\prime}>\frac{D}{k_{1}k_{2}}\\ \gcd(d^{\prime},h)=1\\ \gcd(d^{\prime}k_{1},q^{\prime})=1\end{subarray}}\frac{|\mu(d^{\prime})|}{ \left(d^{\prime}\right)^{2}}\right)=O\left(\frac{y}{q}\frac{\text{rad}(h_{2})}{ D}H^{\varepsilon}\right).\] Hence, the sum is \[S=\frac{y}{q}\sum_{\tilde{k_{2}}|\text{rad}(h_{2})}\mu(\tilde{k_{2}})\sum_{k_{1 }|h_{1}}\frac{\mu(k_{1})}{k_{1}}\sum_{\begin{subarray}{c}d^{\prime}\\ \gcd(d^{\prime},h)=1\\ \gcd(d^{\prime}k_{1},q^{\prime})=1\end{subarray}}\frac{\mu(d^{\prime})}{ \left(d^{\prime}\right)^{2}}+O\left(\frac{y}{q}\frac{\text{rad}(h_{2})}{D}H^{ \varepsilon}+DH^{\varepsilon}\right).\] Since \(\gcd(k_{1},d^{\prime})=1\), we have \[S=\frac{y}{q}\sum_{\tilde{k_{2}}|\text{rad}(h_{2})}\mu(\tilde{k_{2}})\sum_{ \begin{subarray}{c}k_{1}|h_{1}\\ \gcd(k_{1},q^{\prime})=1\end{subarray}}\frac{\mu(k_{1})}{k_{1}}\sum_{ \begin{subarray}{c}d^{\prime}\\ \gcd(d^{\prime},h)=1\\ \gcd(d^{\prime},q^{\prime})=1\end{subarray}}\frac{\mu(d^{\prime})}{\left(d^{ \prime}\right)^{2}}+O\left(\frac{y}{q}\frac{\sqrt{h_{2}}}{D}H^{\varepsilon}+DH^ {\varepsilon}\right),\] using that \(h_{2}\) is square-full. Since all three sums are now independent of each other, we can evaluate each of them separately. First, we obtain \[\sum_{\tilde{k_{2}}|\text{rad}(h_{2})}\mu(\tilde{k_{2}})=\mu^{2}(h)=\begin{cases} 1&\text{if }\operatorname{rad}(h_{2})=1\\ 0&\text{if }\operatorname{rad}(h_{2})\neq 1,\end{cases}\] by recalling the definition of \(h_{2}\) and the fact that the sum is \(1\) if \(h_{2}=1\) and \(0\) otherwise. 
The middle sum is \[\sum_{\begin{subarray}{c}k_{1}\mid h_{1}\\ \gcd(k_{1},q^{\prime})=1\end{subarray}}\frac{\mu(k_{1})}{k_{1}}=\prod_{\begin{subarray}{c}p\mid h_{1}\\ p\nmid q^{\prime}\end{subarray}}\left(1-\frac{1}{p}\right).\] The innermost sum is \[\sum_{\begin{subarray}{c}d^{\prime}\\ \gcd(d^{\prime},h)=1\\ \gcd(d^{\prime},q^{\prime})=1\end{subarray}}\frac{\mu(d^{\prime})}{\left(d^{\prime}\right)^{2}}=\frac{6}{\pi^{2}}\prod_{p\mid q}\left(1-\frac{1}{p^{2}}\right)^{-1}.\] Combining the values of these three sums, we have \[S=\mu^{2}(h)\frac{6}{\pi^{2}}\frac{y}{q}\prod_{\begin{subarray}{c}p\mid h_{1}\\ p\nmid q^{\prime}\end{subarray}}\left(1-\frac{1}{p}\right)\prod_{p\mid q}\left(1-\frac{1}{p^{2}}\right)^{-1}+O\left(\frac{y}{q}\frac{\sqrt{h_{2}}}{D}H^{\varepsilon}+DH^{\varepsilon}\right).\] In the case \(\mu^{2}(h)=1\), this is the desired result since \(h\) is square-free in this case, i.e. \(h_{1}=h\). In the case \(\mu^{2}(h)=0\), we have \(\operatorname{rad}(h_{2})\neq 1\), so the main term vanishes and the claim holds trivially. ### Summing \(k_{D}(\cdot)\) over polynomial values First, let us recall \(\rho_{f}\left(m\right)\). It is defined to be the number of solutions to \(f\left(n\right)\equiv 0\bmod m\), where \(1\leqslant n\leqslant m\). Furthermore, we denote by \(H\) the upper bound for the size of the coefficients of \(f\). Additionally, we assume that the gcd over all the coefficients is \(1\) and that \(f\) is of degree at most \(d\). **Lemma 4.3**.: _Using the assumptions and notation above, if \(f\) is square-free, we have_ \[\sum_{n\leqslant x}k_{D}\left(f\left(n\right)\right)=x\prod_{p}\left(1-\frac{\rho_{f}\left(p^{2}\right)}{p^{2}}\right)+O\left(DH^{\varepsilon}+\frac{x}{D}H^{\varepsilon}\right). \tag{4.2}\] _If \(f\) is not square-free, then we have_ \[\sum_{n\leqslant x}k_{D}\left(f\left(n\right)\right)=O\left(x^{1+\varepsilon}H^{\varepsilon}+D\right). \tag{4.3}\] Proof.: First, we assume that \(f\) is square-free, i.e. its discriminant \(\Delta_{f}\) is non-zero. Again, our first step in calculating the sum is to interchange the order of summation. Afterwards, we reformulate the divisibility condition in the sum to a congruence condition and use the definition and multiplicativity of \(\rho_{f}\left(m\right)\) to get the following result: \[\sum_{n\leqslant x}k_{D}\left(f\left(n\right)\right)=\sum_{n\leqslant x}\sum_{\begin{subarray}{c}d^{2}\mid f\left(n\right)\\ d\leqslant D\end{subarray}}\mu\left(d\right)=\sum_{d\leqslant D}\mu\left(d\right)\sum_{\begin{subarray}{c}n\leqslant x\\ d^{2}\mid f\left(n\right)\end{subarray}}1=x\sum_{d\leqslant D}\frac{\mu\left(d\right)}{d^{2}}\rho_{f}\left(d^{2}\right)+O\left(DH^{\varepsilon}\right)=x\prod_{p}\left(1-\frac{\rho_{f}\left(p^{2}\right)}{p^{2}}\right)+O\left(DH^{\varepsilon}+\frac{x}{D}H^{\varepsilon}\right).\] The error terms were first calculated in [23], and can also be seen in Lemma 4.1 above. We now assume that \(f\) is not square-free and obtain that \[\sum_{n\leqslant x}k_{D}(f(n))=\sum_{n\leqslant x}\sum_{\begin{subarray}{c}d^{2}\mid f\left(n\right)\\ d\leqslant D\end{subarray}}\mu(d)\leqslant\sum_{\begin{subarray}{c}n\leqslant x\\ f\left(n\right)\neq 0\end{subarray}}\sum_{d\mid f\left(n\right)}1+\sum_{\begin{subarray}{c}n\leqslant x\\ f\left(n\right)=0\end{subarray}}D=O(x^{1+\varepsilon}H^{\varepsilon}+D).\] By the standard estimate for the divisor function and since \(f\) can have at most \(d\) roots, the proof is complete. ## 5. Proof of Theorem 1.4 Proof of Theorem 1.4.: We want to use Corollary 2.1.
We need to show that all the necessary conditions for Corollary 2.1 hold with our choices of \(F\left(n\right)=\mu^{2}\left(n\right)-k_{D}\left(n\right)\) and \(\delta=2/5-\varepsilon\). (The latter is an arbitrary choice, and one can take \(\delta\) to be any value as long as \(\varepsilon<\delta<1/2\).) To verify assumption (1) of Corollary 2.1, let us compare \(F(n)\) to the divisor function. If \(n=1\), we have \(F(1)=0\). For \(n>1\), we have \[\left|k_{D}\left(n\right)\right|=\left|\sum_{\begin{subarray}{c}d^{2}\mid n\\ d\leqslant D\end{subarray}}\mu\left(d\right)\right|\leqslant\sum_{d\mid n}\left|\mu\left(d\right)\right|\leqslant\tau\left(n\right)\ll n^{\varepsilon} \tag{5.1}\] by the standard estimate of the divisor function. Hence, also \(F(n)\ll n^{\varepsilon}\). Now we check assumption (2). By Proposition 3.1 and Lemma 4.2 we get \[\sum_{\begin{subarray}{c}n\in I\\ n\equiv u\bmod q\end{subarray}}F(n)=\sum_{\begin{subarray}{c}n\in I\\ n\equiv u\bmod q\end{subarray}}\left(\mu^{2}(n)-k_{D}(n)\right)=\frac{6}{\pi^{2}}\frac{\mu^{2}\left(h\right)\left|I\right|}{q}\prod_{p\mid q^{\prime}}\left(1-\frac{1}{p}\right)^{-1}\prod_{p\mid q}\left(1+\frac{1}{p}\right)^{-1}+O\left(h\left(\left(\frac{x^{d}H}{q}\right)^{1/2}+q^{1/2+\varepsilon}\right)\right)\] \[-\frac{6}{\pi^{2}}\frac{\mu^{2}\left(h\right)\left|I\right|}{q}\prod_{p\mid q^{\prime}}\left(1-\frac{1}{p}\right)^{-1}\prod_{p\mid q}\left(1+\frac{1}{p}\right)^{-1}+O\left(\frac{|I|}{q}\frac{\sqrt{h_{2}}}{D}H^{\varepsilon}+DH^{\varepsilon}\right)\] \[=O\left(h\left(\frac{Hx^{d}}{q}\right)^{1/2}+\frac{|I|}{q}\frac{\sqrt{h_{2}}}{D}H^{\varepsilon}+DH^{\varepsilon}\right),\] by recalling the assumption on \(q\) and \(H\), i.e. \(q^{2}<Hx^{d}\). Plugging this into the expression in assumption (2), we get \[\max_{\begin{subarray}{c}1\leqslant u\leqslant q\\ \gcd(u,q)\leqslant x\end{subarray}}\sup_{\begin{subarray}{c}I\text{ interval}\\ |I|>H^{1-\varepsilon}x^{k}\\ I\subset[-2Hx^{d},2Hx^{d}]\end{subarray}}\frac{q}{\left|I\right|}\left|\sum_{\begin{subarray}{c}n\in I\\ n\equiv u\bmod q\end{subarray}}F(n)\right|\] \[\ll\max_{\begin{subarray}{c}1\leqslant u\leqslant q\\ \gcd(u,q)\leqslant x\end{subarray}}\sup_{\begin{subarray}{c}I\text{ interval}\\ |I|>H^{1-\varepsilon}x^{k}\\ I\subset[-2Hx^{d},2Hx^{d}]\end{subarray}}\frac{q}{\left|I\right|}\left(h(u)\left(\frac{Hx^{d}}{q}\right)^{1/2}+\frac{|I|}{q}\frac{\sqrt{h_{2}(u)}}{D}H^{\varepsilon}+DH^{\varepsilon}\right)\] \[\ll\max_{\begin{subarray}{c}1\leqslant u\leqslant q\\ \gcd(u,q)\leqslant x\end{subarray}}\left(\frac{h(u)x^{\frac{d+\ell}{2}}}{H^{\frac{1}{2}}x^{k}}+\frac{\sqrt{h_{2}(u)}}{D}+\frac{Dx^{\ell}}{Hx^{k}}\right)H^{\varepsilon}.\] To finish the proof that condition (2) holds, we need to verify that the term above is \(\ll x^{-2/5+\varepsilon}\). The first summand is \(\ll x^{-2/5+\varepsilon}\) by the assumptions that \(x^{d+\ell-2k+14/5+\varepsilon}\leqslant H\) and \(\gcd(u,q)=h\leqslant x\). By the assumption \(\gcd(u,q)\leqslant x\), we see that \(\sqrt{h_{2}}\leqslant\sqrt{x}\), hence, by choosing \(D=x^{9/10+\varepsilon}\), we get \(\frac{\sqrt{h_{2}}}{D}H^{\varepsilon}\ll x^{-2/5+\varepsilon}\). The last summand is \(\ll x^{-2/5+\varepsilon}\) by the definitions of \(H\) and \(D\). Hence, we conclude that both assumptions are verified.
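As a quick sanity check on the exponent bookkeeping above (not part of the proof), the following sympy sketch confirms that, assuming \(h\leqslant x\), \(\sqrt{h_{2}}\leqslant\sqrt{x}\), \(H=x^{d+\ell-2k+14/5}\) and \(D=x^{9/10}\), each of the three summands is at most \(x^{-2/5}\) up to \(x^{\varepsilon}\) factors.

```python
# Symbolic check of the three exponents of x appearing in the verification
# of assumption (2); each should be <= -2/5 (ignoring epsilon factors).
import sympy as sp

d, l, k = sp.symbols('d ell k', positive=True)

# exponents of x in the three summands, with h = x and sqrt(h_2) = sqrt(x)
first = 1 + sp.Rational(1, 2) * (d + l) - k - sp.Rational(1, 2) * (d + l - 2 * k + sp.Rational(14, 5))
second = sp.Rational(1, 2) - sp.Rational(9, 10)
third = sp.Rational(9, 10) + l - k - (d + l - 2 * k + sp.Rational(14, 5))

print(sp.simplify(first))   # -2/5
print(sp.simplify(second))  # -2/5
print(sp.simplify(third))   # k - d - 19/10, which is <= -19/10 since k <= d
```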
Therefore, we get the following: \[\sup_{x^{\prime}\in[x/2,x]}\sum_{|a|,|b|\leqslant H}\left|\sum_{1\leqslant n\leqslant x^{\prime}}\mu^{2}(an^{k}+bn^{\ell}+g(n))-k_{D}(an^{k}+bn^{\ell}+g(n))\right|^{2}\ll H^{2}x^{2-\frac{2-\varepsilon}{5+5\ell+5d(\ell-k)}},\] since under the choices \(k<\ell\leqslant d\) and \(\delta=2/5-\varepsilon\), we have that \[\min\left(\frac{2/5-\varepsilon}{1+\ell+d(\ell-k)},\frac{1-\varepsilon}{1+3\ell-2k+d}\right)=\frac{2/5-\varepsilon}{1+\ell+d(\ell-k)}.\] By Lemma 4.3, we have for square-free polynomials that \[\left|\sum_{1\leqslant n\leqslant x}k_{D}(f(n))-c_{f}x\right|\ll DH^{\varepsilon}+\frac{x}{D}H^{\varepsilon}.\] We denote by \(\mathscr{G}_{g}(H)\) the family of polynomials \(an^{k}+bn^{\ell}+g(n)\) with \(|a|,|b|\leqslant H\) appearing above, and by \(\mathscr{G}_{g}^{\#}(H)\) the subset of \(\mathscr{G}_{g}(H)\) that contains only the square-free polynomials. Therefore, we can bound the contribution of square-free polynomials to be at most \[\sup_{x^{\prime}\in[x/2,x]}\sum_{f\in\mathscr{G}_{g}^{\#}(H)}\left|\sum_{1\leqslant n\leqslant x^{\prime}}\mu^{2}(f(n))-c_{f}x^{\prime}\right|^{2}\] \[\ll\sup_{x^{\prime}\in[x/2,x]}\sum_{f\in\mathscr{G}_{g}^{\#}(H)}\left|\sum_{1\leqslant n\leqslant x^{\prime}}\mu^{2}(f(n))-k_{D}(f(n))+k_{D}(f(n))-c_{f}x^{\prime}\right|^{2}\] \[\ll\sup_{x^{\prime}\in[x/2,x]}\sum_{f\in\mathscr{G}_{g}^{\#}(H)}\left|\sum_{1\leqslant n\leqslant x^{\prime}}\mu^{2}(f(n))-k_{D}(f(n))\right|^{2}+\sup_{x^{\prime}\in[x/2,x]}\sum_{f\in\mathscr{G}_{g}^{\#}(H)}\left|\sum_{1\leqslant n\leqslant x^{\prime}}k_{D}(f(n))-c_{f}x^{\prime}\right|^{2}\] \[\ll H^{2}x^{2-\frac{2-\varepsilon}{5+5\ell+5d(\ell-k)}},\] by recalling that \(D=x^{9/10+\varepsilon}\). If we can show that there are at most \(O(H)\) non-square-free polynomials in \(\mathscr{G}_{g}(H)\), then we can extend the sum above to the whole set \(\mathscr{G}_{g}(H)\). We consider the \(O(H)\) polynomials \(g(n,b)=g(n)+bn^{\ell}=c_{e}(b)n^{e}+\ldots+c_{0}(b)\), and we can assume that \(c_{e}(b)\neq 0\) and \(c_{0}(b)\neq 0\) by the assumption that \(g(0)\neq 0\) and \(\ell\geqslant 1\). Also we note that \(e\leqslant d\). We now use that a polynomial \(f\) of degree \(e\) with leading coefficient \(c_{e}\) is not square-free if and only if \(\Delta_{f}=0\), where \(\Delta_{f}\) is defined as \(\Delta_{f}=\frac{(-1)^{(e-1)e/2}}{c_{e}}\text{Res}(f,f^{\prime})\) and \(\text{Res}(f,f^{\prime})\) is the resultant of \(f\) and its derivative \(f^{\prime}\). Viewing the resultant as a polynomial in the \(c_{i}\)'s, we see that the coefficient of \(c_{e}^{e-1}c_{0}^{e-1}\) is \(\pm e^{e}\), and the coefficient of \(c_{i}^{e}c_{e}^{e-1-i}c_{0}^{i-1}\) is \(\pm i^{i}(e-i)^{e-i}\). Now for each of the \(O(H)\) many \(b\)'s, we consider the discriminant of the polynomial \(g(n,b)+an^{k}\) as a function in \(a\), the coefficient of \(n^{k}\). If \(k\geqslant e\), there is at most one polynomial with leading coefficient \(0\), hence we are done in this case. Therefore, we can assume from now on that the leading coefficient is non-zero. We know from above that the resultant of this polynomial is not constantly zero as a function of \(a\), hence it has at most \(2d-2\) zeros. Therefore, in total, there are at most \(O(H)\) non-square-free polynomials, each contributing at most \(O(x^{1+\varepsilon}+D)=O(x^{1+\varepsilon})\). Therefore, the contribution of all non-square-free polynomials together is negligible.
Hence, \[\sup_{x^{\prime}\in[x/2,x]}\sum_{f\in\mathscr{G}_{g}(H)}\left|\sum_{1\leqslant n\leqslant x^{\prime}}\mu^{2}(f(n))-c_{f}x^{\prime}\right|^{2}\ll H^{2}x^{2-\frac{2-\varepsilon}{5+5\ell+5d(\ell-k)}}.\] Now we want to find an upper bound for the number of polynomials that fail to satisfy the predicted asymptotic behaviour. We achieve this by estimating the size of the following set: \[E_{\eta}(x,H):=\#\left\{f\in\mathscr{G}_{g}(H):\left|\sum_{1\leqslant n\leqslant x}\mu^{2}(f(n))-c_{f}x\right|>\eta x\right\}.\] By equation (2.1), we have \[E_{\eta}(x,H)\leqslant\frac{1}{\eta^{2}x^{2}}\sum_{f\in\mathscr{G}_{g}(H)}\left|\sum_{1\leqslant n\leqslant x}\mu^{2}(f(n))-c_{f}x\right|^{2}\ll\frac{H^{2}}{\eta^{2}x^{\frac{2-\varepsilon}{5+5\ell+5d(\ell-k)}}}.\] We take \(\eta=x^{-\frac{1-\varepsilon}{10+10\ell+10d(\ell-k)}}\) and recall that \(x\leqslant H^{(1-\varepsilon)/(d+\ell-2k+14/5)}\) to complete the proof.
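To illustrate the approximant \(k_{D}(\cdot)\) that drives the proof, the following sketch checks numerically that \(F(n)=\mu^{2}(n)-k_{D}(n)\) vanishes for most arguments. The choices \(f(t)=t^{3}+2\), \(x=2000\) and \(D=50\) are illustrative assumptions only.

```python
# Illustration of the approximant k_D(n) = sum of mu(d) over d <= D with d^2 | n:
# it agrees with mu^2(n) unless n has a square divisor d^2 with d > D.
from sympy import factorint

def mu(n):
    fac = factorint(n)
    return 0 if any(e > 1 for e in fac.values()) else (-1) ** len(fac)

def k_D(n, D):
    return sum(mu(d) for d in range(1, D + 1) if n % (d * d) == 0)

def mu2(n):
    return int(all(e == 1 for e in factorint(n).values()))

f = lambda t: t**3 + 2
x, D = 2000, 50

mismatches = sum(1 for n in range(1, x + 1) if k_D(f(n), D) != mu2(f(n)))
print(mismatches)  # F(n) = mu^2(f(n)) - k_D(f(n)) is non-zero only rarely
```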
2305.03900
Rethinking Class Imbalance in Machine Learning
Imbalance learning is a subfield of machine learning that focuses on learning tasks in the presence of class imbalance. Nearly all existing studies refer to class imbalance as a proportion imbalance, where the proportion of training samples in each class is not balanced. The ignorance of the proportion imbalance will result in unfairness between/among classes and poor generalization capability. Previous literature has presented numerous methods for either theoretical/empirical analysis or new methods for imbalance learning. This study presents a new taxonomy of class imbalance in machine learning with a broader scope. Four other types of imbalance, namely, variance, distance, neighborhood, and quality imbalances between/among classes, which may exist in machine learning tasks, are summarized. Two different levels of imbalance including global and local are also presented. Theoretical analysis is used to illustrate the significant impact of the new imbalance types on learning fairness. Moreover, our taxonomy and theoretical conclusions are used to analyze the shortcomings of several classical methods. As an example, we propose a new logit perturbation-based imbalance learning loss when proportion, variance, and distance imbalances exist simultaneously. Several classical losses become the special case of our proposed method. Meta learning is utilized to infer the hyper-parameters related to the three types of imbalance. Experimental results on several benchmark corpora validate the effectiveness of the proposed method.
Ou Wu
2023-05-06T02:36:39Z
http://arxiv.org/abs/2305.03900v1
# Rethinking Class Imbalance in Machine Learning ###### Abstract Imbalance learning is a subfield of machine learning that focuses on learning tasks in the presence of class imbalance. Nearly all existing studies refer to class imbalance as a proportion imbalance, where the proportion of training samples in each class is not balanced. The ignorance of the proportion imbalance will result in unfairness between/among classes and poor generalization capability. Previous literature has presented numerous methods for either theoretical/empirical analysis or new methods for imbalance learning. This study presents a new taxonomy of class imbalance in machine learning with a broader scope. Four other types of imbalance, namely, variance, distance, neighborhood, and quality imbalances between/among classes, which may exist in machine learning tasks, are summarized. Two different levels of imbalance including global and local are also presented. Theoretical analysis is used to illustrate the significant impact of the new imbalance types on learning fairness. Moreover, our taxonomy and theoretical conclusions are used to analyze the shortcomings of several classical methods. As an example, we propose a new logit perturbation-based imbalance learning loss when proportion, variance, and distance imbalances exist simultaneously. Several classical losses become the special case of our proposed method. Meta learning is utilized to infer the hyper-parameters related to the three types of imbalance. Experimental results on several benchmark corpora validate the effectiveness of the proposed method. Class imbalance, variance imbalance, logit perturbation, local imbalance, fairness. ## I Introduction Data imbalance exists ubiquitously in real machine learning tasks. For instance, in object classification, the number of training samples for common objects like cups and buildings is often much greater than that of rare objects. The classes that dominate the training set are referred to as majority classes, whereas those that occupy only a small share of it are called minority classes. In tasks with extreme class imbalance, also known as long-tailed classification [1], the majority classes are referred to as "head", while the minority classes are referred to as "tail". Ignoring the imbalance among classes results in unfairness and even poor generalization capability. To enhance fairness among classes and increase generalization capability, a number of studies address learning under class imbalance, constituting an independent research area of machine learning, namely, imbalance learning. Various classical methods have been proposed in the literature, such as logit adjustment [2], BBN [3], MetaWeight [4], LDAM [5], and ResLT [6]. Several benchmark datasets have been compiled for evaluation. Despite the progress made in imbalance learning, addressing class imbalance encounters the following issues: * Previous research on imbalance learning has mainly focused on the imbalance in class proportions. However, there are other types of imbalances that have received little attention in the literature. Our theoretical investigation reveals that ignoring these other types of imbalances can impede our ability to effectively tackle machine learning tasks and utilize existing imbalance learning algorithms. * The current approaches to imbalance learning solely focus on global imbalance, which considers the imbalance between/among entire classes.
However, there is a notable imbalance within the local regions of classes that has scarcely been considered in previous literature. It is imperative not to overlook imbalance within local regions, as neglecting it can lead to unfairness and suboptimal generalization capability. This study provides a comprehensive exploration of imbalance learning that goes beyond the scope of existing studies. First, four other types of class imbalance, namely, variance, distance, neighborhood, and quality, are introduced and formulated. The first three types of imbalance have not been referred to in previous literature. Although the fourth type is usually considered in noisy-label learning, it has not been explicitly recognized as a type of class imbalance1. Furthermore, this study introduces the concept of imbalance from a local perspective. Several research studies that propose intra-class imbalance can be considered examples of local imbalance. A series of theoretical analyses is then performed to quantify the influence of variance and distance imbalances as well as mixed imbalance. Our results demonstrate that even when there is no proportion imbalance, variance or distance imbalance can lead to an equivalent degree of unfairness. Based on our findings, we design a novel logit perturbation-based imbalance learning approach that improves upon existing classical methods. Our proposed method encompasses several classical methods as special cases. The effectiveness of our approach is validated by experiments carried out on benchmark data sets. Footnote 1: As quality imbalance is actually explored in noisy-label learning, it is not the focus of this study. In addition, some recent studies (e.g., [7]) highlight that the different classes may contain different proportions of hard samples, which is also a form of quality imbalance. Our contributions can be summarized as follows: * The scope of imbalance learning is expanded, and a more comprehensive taxonomy is developed for it. As far as we know, this study is the first to introduce the concepts of variance, distance, neighborhood, quality imbalance, and global/local imbalance. * Theoretical analysis is conducted to quantify how variance and distance imbalances negatively affect model fairness. The case when more than one type of imbalance
2304.05080
Investigating Imbalances Between SAR and Optical Utilization for Multi-Modal Urban Mapping
Accurate urban maps provide essential information to support sustainable urban development. Recent urban mapping methods use multi-modal deep neural networks to fuse Synthetic Aperture Radar (SAR) and optical data. However, multi-modal networks may rely on just one modality due to the greedy nature of learning. In turn, the imbalanced utilization of modalities can negatively affect the generalization ability of a network. In this paper, we investigate the utilization of SAR and optical data for urban mapping. To that end, a dual-branch network architecture using intermediate fusion modules to share information between the uni-modal branches is utilized. A cut-off mechanism in the fusion modules enables the stopping of information flow between the branches, which is used to estimate the network's dependence on SAR and optical data. While our experiments on the SEN12 Global Urban Mapping dataset show that good performance can be achieved with conventional SAR-optical data fusion (F1 score = 0.682 $\pm$ 0.014), we also observed a clear under-utilization of optical data. Therefore, future work is required to investigate whether a more balanced utilization of SAR and optical data can lead to performance improvements.
Sebastian Hafner, Yifang Ban, Andrea Nascetti
2023-04-11T09:22:51Z
http://arxiv.org/abs/2304.05080v1
# Investigating Imbalances Between SAR and Optical Utilization for Multi-Modal Urban Mapping ###### Abstract Accurate urban maps provide essential information to support sustainable urban development. Recent urban mapping methods use multi-modal deep neural networks to fuse Synthetic Aperture Radar (SAR) and optical data. However, multi-modal networks may rely on just one modality due to the greedy nature of learning. In turn, the imbalanced utilization of modalities can negatively affect the generalization ability of a network. In this paper, we investigate the utilization of SAR and optical data for urban mapping. To that end, a dual-branch network architecture using intermediate fusion modules to share information between the uni-modal branches is utilized. A cut-off mechanism in the fusion modules enables the stopping of information flow between the branches, which is used to estimate the network's dependence on SAR and optical data. While our experiments on the SEN12 Global Urban Mapping dataset show that good performance can be achieved with conventional SAR-optical data fusion (F1 score = 0.682 \(\pm\) 0.014), we also observed a clear under-utilization of optical data. Therefore, future work is required to investigate whether a more balanced utilization of SAR and optical data can lead to performance improvements. Remote sensing, deep learning, data fusion ## I Introduction The extent of urban areas is an important indicator of urbanization. The development of accurate and robust urban mapping methods is, therefore, essential to support sustainable urban development. Urban mapping methods are typically based on remotely sensed data due to their capability to consistently cover large geographical areas. In particular, satellite data from Synthetic Aperture Radar (SAR) sensors have been proven to be an invaluable data source for urban mapping (e.g., [1]). Build-up areas in SAR imagery are characterized by high backscattering due to the double bounce effect of buildings. Apart from SAR data, the success of computer vision techniques such as convolutional neural networks has led to the development of promising mapping methods using optical satellite imagery (e.g., [2, 3]). Recently, several studies investigated multi-modal learning with SAR-optical data fusion for urban mapping [4, 5, 6] and urban change detection [7, 8, 9]. These works performed multi-modal learning by combining Sentinel-1 SAR and Sentinel-2 MultiSpectral Instrument (MSI) data with either input-level or decision-level fusion using dual branch architectures. The aforementioned works, therefore, rely on multi-modal networks to effectively use the additional information in the form of other modalities to improve model performance upon uni-modal networks. However, some studies reported unsatisfactory performance of SAR-optical data fusion [6, 8]. At the root of the problem could be the under-utilization of either modality. In fact, Wu _et al._[10] recently demonstrated that multi-modal learning processes for many domains are predominately relying on the modality that is the faster to learn from. They refer to this as the greedy nature of multi-modal learning, which can hamper the generalization ability of models [10]. For urban mapping with SAR-optical data fusion, this may be a crucial issue, since the dependence on a single modality has been demonstrated to be insufficient for urban mapping at a global scale [5]. In this paper, we aim to uncover imbalances between SAR and optical modality utilization for urban mapping. 
Specifically, we apply recently published concepts to characterize the greedy nature of multi-modal learning (i.e., [10]) to an urban mapping dataset featuring Sentinel-1 SAR and Sentinel-2 data. Our investigation of a multi-modal network's dependence on SAR and optical data may contribute to the development of models with better generalization ability in the future. ## II Dataset We consider the urban mapping problem with multi-modal satellite data posed by the SEN12 Global Urban Mapping (SEN12_GUM) dataset1[5]. The SEN12_GUM dataset, denoted by \(\mathcal{D}\), consists of multiple instances of mean Sentinel-1 SAR images, \(x_{\text{sar}}\), and median Sentinel-2 MSI images, \(x_{\text{opt}}\), acquired over the same area. In addition, corresponding urban label images, \(y\in\{0,1\}\), are provided. The dataset is partitioned into a training, validation and test set, denoted by \(\mathcal{D}^{\text{train}}\), \(\mathcal{D}^{\text{val}}\) and \(\mathcal{D}^{\text{test}}\), respectively. The 26 training and 4 validation sites are located in the United States, Canada and Australia. On the other hand, the 60 test sites are globally distributed and cover unique geographies to test the generalization ability of networks. An overview of the locations of the sites is given in Figure 1. Footnote 1: [https://doi.org/10.5281/zenodo.6914898](https://doi.org/10.5281/zenodo.6914898) ## III Methodology ### _Multi-Modal Deep Learning Model_ We design a multi-modal network, \(f(.)\), consisting of two identical uni-modal branches, \(f_{\rm sar}(.)\) and \(f_{\rm opt}(.)\), each following the architecture of U-Net [11] (Figure 2). To enable the flow of modality-specific information between convolution layers of the uni-modal U-Net branches, four feature fusion modules, specifically Multi-Modal Transfer Modules (MMTMs) [12], are incorporated into the network. MMTMs first squeeze spatial information from the uni-modal feature tensors, \(F_{\rm sar}\) and \(F_{\rm opt}\), into respective vectors, \(h_{\rm sar}\) and \(h_{\rm opt}\), using global average pooling operations. A joint representation, \(Z\), is then predicted from the concatenated vectors using a fully-connected layer. The joint representation is converted to two excitation signals, \(E_{\rm sar}\) and \(E_{\rm opt}\), using two additional fully-connected layers. Finally, the resulting excitation signals are used to recalibrate the uni-modal feature tensors using a simple gating mechanism. Following that, the MMTM cross-modal recalibration generates two outputs, \(\tilde{F}_{sar}\) and \(\tilde{F}_{opt}\), that are passed back to the uni-modal branches. In the end, each uni-modal branch produces a separate prediction for an input \((x_{\rm sar},x_{\rm opt})\): \[p_{\rm sar}=f_{\rm sar}(x_{\rm sar},x_{\rm opt}),\ p_{\rm opt}=f_{\rm opt}(x_{\rm sar},x_{\rm opt}), \tag{1}\] Finally, the predictions of the branches are combined to form the fusion output of the multi-modal network: \[p=\frac{1}{2}(p_{\rm sar}+p_{\rm opt}). \tag{2}\] The network is trained with two modality-specific losses that are directly applied to the predictions of the uni-modal branches, i.e., the loss is \(\mathcal{L}(y,p_{\rm sar})+\mathcal{L}(y,p_{\rm opt})\), where \(y\) denotes the label.
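The following is a minimal PyTorch-style sketch of the MMTM recalibration described above (squeeze by global average pooling, joint representation \(Z\), excitation signals, channel-wise gating). The channel sizes, the bottleneck ratio and the \(2\cdot\mathrm{sigmoid}\) gating are illustrative assumptions, not necessarily the exact configuration used in this paper.

```python
# Minimal sketch of an MMTM-style cross-modal recalibration module.
import torch
import torch.nn as nn

class MMTMSketch(nn.Module):
    def __init__(self, c_sar: int, c_opt: int, ratio: int = 4):
        super().__init__()
        joint = (c_sar + c_opt) // ratio
        self.fc_z = nn.Linear(c_sar + c_opt, joint)  # joint representation Z
        self.fc_sar = nn.Linear(joint, c_sar)        # excitation signal E_sar
        self.fc_opt = nn.Linear(joint, c_opt)        # excitation signal E_opt

    def forward(self, f_sar, f_opt):                 # (B, C, H, W) feature tensors
        h_sar = f_sar.mean(dim=(2, 3))               # global average pooling
        h_opt = f_opt.mean(dim=(2, 3))
        z = torch.relu(self.fc_z(torch.cat([h_sar, h_opt], dim=1)))
        e_sar = 2 * torch.sigmoid(self.fc_sar(z))    # gating signals in (0, 2)
        e_opt = 2 * torch.sigmoid(self.fc_opt(z))
        # recalibrate each modality's features channel-wise
        return f_sar * e_sar[:, :, None, None], f_opt * e_opt[:, :, None, None]

# usage sketch:
# f_sar2, f_opt2 = MMTMSketch(64, 64)(torch.rand(2, 64, 32, 32), torch.rand(2, 64, 32, 32))
```

Note that the cut-off mechanism described next corresponds to replacing `h_sar` or `h_opt` in this sketch with its average over the training set.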
An important characteristic of the multi-modal network is that the connection to share information between the uni-modal branches can be cut off to produce predictions that rely on only one of the two modalities:

\[p^{\prime}_{\rm sar}=f^{\prime}_{\rm sar}(x_{\rm sar}),\ p^{\prime}_{\rm opt}=f^{\prime}_{\rm opt}(x_{\rm opt}). \tag{3}\]

The cut-off mechanism is implemented in the MMTM by approximating the vectors \(h_{\rm sar}\) and \(h_{\rm opt}\) based on the training set. Specifically, \(p^{\prime}_{\rm sar}\) is obtained by replacing the information coming from the optical branch, i.e., \(h_{\rm opt}\), with the average of \(h_{\rm opt}\) over \(\mathcal{D}^{\rm train}\); vice versa, \(h_{\rm sar}\) is replaced with the average of \(h_{\rm sar}\) over \(\mathcal{D}^{\rm train}\) for \(p^{\prime}_{\rm opt}\). The average values of \(h_{\rm sar}\) and \(h_{\rm opt}\) are defined as follows:

\[\overline{h}_{\rm sar}=\frac{1}{n}\sum_{i=1}^{n}h_{\rm sar}(x^{i}),\ \overline{h}_{\rm opt}=\frac{1}{n}\sum_{i=1}^{n}h_{\rm opt}(x^{i}), \tag{4}\]

where \(n\) is the number of samples in \(\mathcal{D}^{\rm train}\), and \(h_{\rm sar}(x^{i})\) and \(h_{\rm opt}(x^{i})\) represent the values of \(h_{\rm sar}\) and \(h_{\rm opt}\) with the \(i^{\rm th}\) sample, \(x^{i}\), as input.

Fig. 1: Locations of the training, validation, and test sites of the SEN12 Global Urban Mapping dataset [5].

Fig. 2: Overview of the architecture consisting of two uni-modal U-Nets [11] that are connected through four Multi-Modal Transfer Modules (MMTMs) [12] for feature modality fusion in convolution layers.

### _Measuring Imbalance between Modality Utilization_

The concept of Conditional Utilization Rates (CURs) was recently introduced by Wu _et al._ [10] with the goal of analysing a multi-modal network's ability to utilize both modalities. In more detail, a multi-modal network's CUR for a given modality is the relative difference in accuracy between two sub-networks derived from the network, one using both modalities and the other using only one modality. The CURs for SAR and optical data are defined as follows:

\[u(sar|opt)=\frac{A(y,p_{\mathrm{opt}})-A(y,p^{\prime}_{\mathrm{opt}})}{A(y,p_{\mathrm{opt}})}, \tag{5}\]

and

\[u(opt|sar)=\frac{A(y,p_{\mathrm{sar}})-A(y,p^{\prime}_{\mathrm{sar}})}{A(y,p_{\mathrm{sar}})}, \tag{6}\]

where \(A(y,p)\) denotes the accuracy value obtained from prediction \(p\) with label \(y\). Consequently, \(A(y,p)\) and \(A(y,p^{\prime})\) correspond to the accuracy values obtained from networks using both modalities and only one modality, respectively. The CUR of SAR given optical, denoted by \(u(sar|opt)\), measures, for example, how important it is for the model to use SAR data in order to reach accurate predictions, given the presence of optical data. In other words, \(u(sar|opt)\) is the marginal contribution of \(x_{\mathrm{sar}}\) to increasing the accuracy of \(p_{\mathrm{opt}}\). For this case, it is generally assumed that \(A(y,p_{\mathrm{opt}})>A(y,p^{\prime}_{\mathrm{opt}})\), since \(p_{\mathrm{opt}}\) was obtained from the combination of \(x_{\mathrm{sar}}\) and \(x_{\mathrm{opt}}\), while \(p^{\prime}_{\mathrm{opt}}\) was obtained from \(x_{\mathrm{opt}}\) alone.

The difference between CURs is used to measure the imbalance between the utilization of SAR and optical data:

\[d_{\mathrm{util}}=u(sar|opt)-u(opt|sar). \tag{7}\]

Since CURs are assumed to be positive, \(d_{\mathrm{util}}\) is bound by the range \([-1,1]\).
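As a minimal sketch, Eqs. (5)-(7) can be computed directly from the four accuracy values; the function below assumes these accuracies (e.g., F1 scores of the four sub-network outputs) have been evaluated elsewhere.

```python
# Minimal sketch of Eqs. (5)-(7): conditional utilization rates and their difference.
def conditional_utilization_rates(a_opt, a_opt_cut, a_sar, a_sar_cut):
    """a_opt: A(y, p_opt), a_opt_cut: A(y, p'_opt),
    a_sar: A(y, p_sar), a_sar_cut: A(y, p'_sar)."""
    u_sar_given_opt = (a_opt - a_opt_cut) / a_opt   # Eq. (5)
    u_opt_given_sar = (a_sar - a_sar_cut) / a_sar   # Eq. (6)
    d_util = u_sar_given_opt - u_opt_given_sar      # Eq. (7)
    return u_sar_given_opt, u_opt_given_sar, d_util
```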
When \(d_{\mathrm{util}}\) is close to either bound, the network's ability to accurately predict \(p_{\mathrm{sar}}\) and \(p_{\mathrm{opt}}\) comes from only one of the modalities. Thus, high \(|d_{\mathrm{util}}|\) values imply an imbalance between the utilization of modalities.

### _Experimental Setup_

We train 5 networks with different seeds for weight initialization, dataset shuffling, and data augmentation. Networks are trained for 15 epochs with a batch size of 8, and AdamW [13] is used as the optimizer with an initial learning rate of \(10^{-4}\). Flips (horizontal and vertical) and rotations (\(k*90^{\circ}\), where \(k\in\{0,1,2,3\}\)) are used as data augmentations. As loss function, a Jaccard-like loss, specifically the Power Jaccard loss [14], is used. As mentioned earlier, the loss is separately applied to the predictions of the uni-modal branches of the multi-modal network.

To measure the accuracy of predictions, the commonly used accuracy metrics F1 score, precision (P), and recall (R) are employed. They are defined as follows:

\[P=\frac{TP}{TP+FP},\ R=\frac{TP}{TP+FN},\ F1=2*\frac{P*R}{P+R}, \tag{8}\]

where TP, FP, and FN represent the number of true positive, false positive, and false negative pixels, respectively.

## IV Results

Table I lists network performance on the test set for the three network outputs, namely SAR (\(p_{\mathrm{sar}}\)), optical (\(p_{\mathrm{opt}}\)) and fusion (\(p\)). For all three accuracy metrics, the fusion output achieved the highest mean value. With a mean F1 score of 0.682, the fusion prediction of the network is, however, only slightly more accurate than the optical prediction (0.673). Among the uni-modal branches, the optical branch \(f_{\mathrm{opt}}(.)\) performed notably better than the SAR branch \(f_{\mathrm{sar}}(.)\). In particular, the detection rate of urban areas, indicated by mean recall, is considerably higher for optical (0.621) in comparison to SAR (0.557).

In terms of modality utilization, CURs of 0.37 (\(\pm\) 0.12) and 0.17 (\(\pm\) 0.04) were recorded for \(u(sar|opt)\) and \(u(opt|sar)\), respectively. Consequently, \(|d_{\mathrm{util}}|\), the absolute difference between CURs, is 0.20 (\(\pm\) 0.14). These results indicate that the addition of SAR data given optical data considerably increased the accuracy of the prediction of the optical branch \(f_{\mathrm{opt}}\); in contrast, the increase in accuracy from the addition of optical data given SAR data is smaller. Thus, an imbalance between the utilization of SAR and optical data exists, as also indicated by \(|d_{\mathrm{util}}|\).

Figure 3 shows the qualitative test results for two sites (rows). The results were produced with the network yielding the median accuracy in terms of F1 score. Generally, it is apparent that the SAR predictions (c & d) are similar, independent of whether the flow of information between the uni-modal branches is on (c) or off (d). This indicates that enabling the flow of optical data to the SAR branch has little effect on the results, which is supported by the network's low CUR of optical data given SAR data (0.19). In contrast, stark differences are apparent between the predictions of the optical branch (e & f). In particular, the recalibration of the optical features with SAR information helped to reduce false negatives (magenta) for all sites. These observations are in line with the CUR of SAR given optical (0.41). However, the recalibration can also have negative effects, as exemplified by the green areas indicating false positives for the site on the second row.
The fusion output (g), obtained from the outputs of the uni-modal branches (d & f), generally delineates urban areas most accurately among the network outputs.

## V Discussion

We find MMTMs to be an effective tool for intermediate data fusion as part of the multi-modal network architecture. In fact, with a mean F1 score of 0.682, the network outperforms not only the baseline uni-modal SAR and optical approaches on the SEN12_GUM dataset (0.574 and 0.580, respectively), but also the input-level fusion one (0.651) [5]. The network was only outperformed by the unsupervised domain adaptation approach in [5] (F1 score = 0.697); however, the domain adaptation approach exploited additional unlabeled training data to adapt the model to the test set, while the network in this paper relies solely on labeled data.

In terms of modality utilization, we observed an imbalance between the utilization of SAR and optical data for urban mapping. Specifically, while SAR data contributed to increasing the accuracy of the predictions of the optical branch (mean \(u(sar|opt)\) = 0.37), optical data was strongly under-utilized by the SAR branch (mean \(u(opt|sar)\) = 0.17). Wu _et al._ [10] hypothesized that multi-modal networks primarily rely on the modality that is fastest to learn from. Since our networks showed a strong dependence on SAR data, we assume that the simple representation of built-up areas in SAR imagery (i.e., high backscattering) facilitates learning from SAR data compared to optical data, where a complex spectral heterogeneity exists for urban landscapes. Our findings, therefore, indicate that urban mapping from SAR and optical data requires an incentive to utilize optical data in order to balance the multi-modal learning process.

## VI Conclusion

In this paper, we employed a dual-branch architecture with MMTM modules to investigate imbalances between SAR and optical utilization for multi-modal urban mapping. Our experiments on the SEN12_GUM dataset demonstrated that the architecture is effective for urban mapping. However, this work also demonstrated that multi-modal networks trained on SAR and optical data rely more on the former than on the latter modality. Consequently, optical data is under-utilized. Future work will investigate methods to balance the learning from SAR and optical data. In particular, the balanced multi-modal learning algorithm proposed in [10] will be investigated for urban mapping from SAR and optical data.
2310.01379
EXTRACTER: Efficient Texture Matching with Attention and Gradient Enhancing for Large Scale Image Super Resolution
Recent Reference-Based image super-resolution (RefSR) has improved SOTA deep methods by introducing attention mechanisms that enhance low-resolution images by transferring high-resolution textures from a reference high-resolution image. The main idea is to search for matches between patches of the LR and Reference image pair in a feature space and merge them using deep architectures. However, existing methods lack an accurate search for textures: they divide images into as many patches as possible, which results in inefficient memory usage, and cannot manage large images. Herein, we propose a deep search with more efficient memory usage that significantly reduces the number of image patches and finds the $k$ most relevant texture matches for each low-resolution patch over the high-resolution reference patches, resulting in an accurate texture match. We enhance the Super-Resolution result by adding gradient density information using a simple residual architecture, showing competitive results on the PSNR and SSIM metrics.
Esteban Reyes-Saldana, Mariano Rivera
2023-10-02T17:41:56Z
http://arxiv.org/abs/2310.01379v1
EXTRACTER: Efficient Texture Matching with Attention and Gradient Enhancing for Large Scale Image Super Resolution

###### Abstract

Recent Reference-Based image super-resolution (RefSR) has improved SOTA deep methods by introducing attention mechanisms that enhance low-resolution images by transferring high-resolution textures from a reference high-resolution image. The main idea is to search for matches between patches of the LR and Reference image pair in a feature space and merge them using deep architectures. However, existing methods lack an accurate search for textures: they divide images into as many patches as possible, which results in inefficient memory usage, and cannot manage large images. Herein, we propose a deep search with more efficient memory usage that significantly reduces the number of image patches and finds the \(k\) most relevant texture matches for each low-resolution patch over the high-resolution reference patches, resulting in an accurate texture match. We enhance the Super-Resolution result by adding gradient density information using a simple residual architecture, showing competitive results on the PSNR and SSIM metrics.

Esteban Reyes-Saldana, Mariano Rivera

Centro de Investigacion en Matematicas A.C., Guanajuato, Gto., 36023 Mexico {esteban.reyes, mrivera}@cimat.mx

Reference based super-resolution, Texture transfer, Transformer, Cross-attention, Gradient density features

## 1 Introduction

The Reference-based Image Super-Resolution paradigm aims to recover high-resolution images by transferring accurate textures from a reference image (with a certain degree of similarity), reducing blurring and artifacts. In recent years, vision transformers have improved super-resolution results. For example, TTSR [1] introduces attention to Ref-Super Resolution by successfully transferring textures from the Ref image. They use a learnable VGG pre-trained feature extractor to obtain attention matrices (\(Q,K,V\)) and perform a cross-attention mechanism to find the best features for the SR reconstruction. Lin et al. [2] proposed a novel low-resolution backbone capable of extracting a better feature representation and added a branch to refine the low-resolution and reference features. Some other works [3, 4] claim that a better texture search is required in order to obtain less blurred images, and use multiple reference images for a more accurate pattern search. Gou et al. [5] enhance memory efficiency by using low-resolution dimensions to find correlations and filter patch matches to enhance the final result, adding gradient information using a pre-existing SISR model.

To address the above problems, we propose a search strategy that efficiently splits the images into patches, finds the \(top_{k}\) HR matches for each LR patch, and adds structural information to enhance the Super-Resolution result. Specifically, we first extract deep features from a VGG19-based architecture. Different from [1] and most of the recent methods, we split images into patches using a \(6\times 6\) window (instead of \(3\times 3\)) for the deepest feature level, resulting in more memory-efficient usage that allows us to process large-scale images. Second, we propose a re-search strategy; different from [5], we use the \(top_{k}\) matches between the low-resolution and ref patches instead of the single maximum match for each low-resolution patch. Finally, we merge textures at different scales and add gradient density information to form a better spatial reconstruction using a simple residual network.
The primary contributions of this paper are as follows. First, we introduce a Search and Transfer module to identify correlations between low-resolution and reference patches; we use a larger window than state-of-the-art (SOTA) methods. This significantly reduces the dimensionality of the correlation matrix and allows us to use the top-\(k\) matches to enhance texture transfer. Second, we introduce a Gradient Density-Enhancing Module (GDE) to improve the merging of textures from different deep levels while considering gradient density information. This module is implemented by a straightforward residual network. And third, we conduct extensive experiments on benchmark datasets that provide strong evidence that the proposal outperforms SOTA methods.

## 2 Related Work

In recent years, Single Image Super Resolution (SISR) improved super-resolution methods by using residual blocks [6] and designing deeper networks. These methods use \(\mathcal{L}_{1}\) and \(\mathcal{L}_{2}\) losses as the training objective functions, which have been shown not to reflect human perception accurately [1]. To solve this, novel methods use a GAN strategy [7], resulting in more satisfying results, or adopt classic computer vision transformations such as gradient mapping [8]. Since the appearance of vision transformers, vision tasks have improved. For example, TTSR [1] introduces cross-attention to Ref-Super Resolution for transferring textures: a patch-matching-based technique robust to misalignment problems [9, 10]. Based on TTSR, Lin et al. [2] add channel-wise attention, and [3, 4] use multiple image patches for transferring textures, resulting in better results. In this direction, cross-attention mechanisms are used and better memory usage is required. Gou et al. [5] enhance memory efficiency by using low-resolution dimensions to find correlations and use classical vision transformations for structural reconstruction, such as gradient density flow.

## 3 Method

In this section, we propose Efficient Texture Matching with Attention and Gradient Enhancing for Image Super Resolution (EXTRACTER). It consists of four modules: Deep Feature Extractor (DFE), Search and Transfer Module (STM), Cross-Scale Feature Integration (CSFI), and Gradient Density Enhancing Module (GDE). The main scheme is shown in Fig. 1. The model produces a \(4\times\) super-resolution image. It takes \((Lr_{u},Ref_{du},Ref)\) as input, where \(Lr_{u}\) represents the bicubically up-sampled low-resolution image and \(Ref_{du}\) represents a bicubic down-sampling followed by up-sampling applied to the \(Ref\) image. We produce \(Q,K,V\) feature maps and find the correlation matrix \(R\) using the normalized inner product between \(Q\) and \(K\) patches. Then we filter the best patches based on the correlation matrix \(R\) and take the \(top_{k}\) matches for each patch. We integrate the obtained features at three different scales using Cross-Scale Feature Integration [1] and, finally, we add gradient density from the LR image to improve structural information and create the Super-Resolution image.

### Deep Feature Extractor

We transform the data into a new representation with more evident complex characteristics at different resolutions. For this, we use the VGG19 [11] backbone (previously trained on ImageNet [12]). Let \((Lr_{u},Ref_{du},Ref)\) be the input to our Deep Feature Extractor (DFE). The output of the DFE can be formulated as

\[Q_{i} = DFE_{i}(Lr_{u}) \tag{1}\]
\[K_{i} = DFE_{i}(Ref_{du}) \tag{2}\]
\[V_{i} = DFE_{i}(Ref) \tag{3}\]

where \(i\) denotes the feature level of the \(DFE\).
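The following is a minimal PyTorch sketch of such a three-level extractor built from a pre-trained VGG19. The exact layer cut points are our assumption for illustration; the text only specifies three levels with 64, 128, and 256 output channels, each halving the spatial resolution.

```python
# Minimal sketch of a VGG19-based three-level Deep Feature Extractor (Eqs. 1-3).
import torch
import torch.nn as nn
from torchvision.models import vgg19, VGG19_Weights


class DFE(nn.Module):
    def __init__(self):
        super().__init__()
        feats = vgg19(weights=VGG19_Weights.IMAGENET1K_V1).features
        self.level1 = feats[:4]    # 64 channels, full resolution (assumed cut)
        self.level2 = feats[4:9]   # 128 channels, 1/2 resolution (assumed cut)
        self.level3 = feats[9:18]  # 256 channels, 1/4 resolution (assumed cut)

    def forward(self, x: torch.Tensor):
        f1 = self.level1(x)
        f2 = self.level2(f1)
        f3 = self.level3(f2)
        return f1, f2, f3


# Q_i, K_i, V_i as in Eqs. (1)-(3) would then be:
# dfe = DFE(); Q = dfe(lr_up); K = dfe(ref_du); V = dfe(ref)
```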
We take three scales of features from VGG19 with output channels \([64,128,256]\), halving the spatial resolution relative to the original scale at each level.

Figure 1: Efficient Texture Matching with Attention Scheme: We input the low-resolution image, reference down-upsampled, and the reference image, pass them through the Deep Feature Extractor (DFE), and obtain the \(k\) relevance texture and score matrices at multiple levels with the SearchAndTransfer (ST). Then we merge the simple features \(F\) with the attention textures using a Cross Scale Feature Integration (CSFI). Finally, we refine the partial super-resolution result \(x_{TT}\) adding gradient features \(F_{g}\) extracted from the Gradient Density map \(g\) to obtain the final Super-Resolution image.

Figure 2: Feature extraction and texture search: The model inputs \(Lr_{u},Ref_{du},Ref\), passes them through a Deep Feature Extractor (DFE) to perform patch Correlation Search. We use the result as index to select the best \(k\)-textures by Transfer. Finally, with textures \(T\) and soft-attention matrices \(S\), we merge them with simple features \(F\) from \(Lr\) to create \(T_{out}\).

### Search and Transfer Module

Let us omit the \(i\) index from (1) for notational simplicity. The following calculations are made for a single level of the DFE, as depicted in Fig. 2. We infer correlations between \(LR_{u}\) and \(Ref_{du}\) using attention via \(Q\) and \(K\) in two stages. First, we divide \(Q,K\) into overlapping patches \(q_{i}:i\in[1,2,\ldots,H_{LR}\times W_{LR}/s^{2}]\) and \(k_{j}:j\in[1,2,\ldots,H_{Ref}\times W_{Ref}/s^{2}]\), respectively, where \(s\) is the stride step. In our experiments, we use a window of \(6\) and stride \(s=2\). The correlation matrix is computed as the normalized inner product

\[c^{\prime}_{i,j}=\left\langle\frac{q_{i}}{||q_{i}||},\frac{k_{j}}{||k_{j}||}\right\rangle. \tag{4}\]

Next, we keep the best-scoring index among the \(k_{j}\) patches for each \(q_{i}\): \(H^{\prime}=\arg\max_{j}(C^{\prime})\). Using the \(H^{\prime}\) matrix as index, we extract the most relevant patches of \(K\) as \(K^{\prime}=K_{H^{\prime}}\). Then, we apply a re-search strategy, keeping for each \(q_{i}\) the indices of the \(top_{u}\) largest matches among the normalized patches \(k^{\prime}_{j}\):

\[H,S=top_{u}(C)\text{ with }c_{i,j}=\left\langle\frac{q_{i}}{||q_{i}||},\frac{k^{\prime}_{j}}{||k^{\prime}_{j}||}\right\rangle, \tag{5}\]

with \(S,H\) tensors containing the \(u\) maximum scores and indices of \(C\); i.e.,

\[H_{0}=\arg\max_{j}C_{ij}\text{, \ }S_{0}=\max_{j}C_{ij}, \tag{6}\]

and \(H_{1},S_{1}\) contain the second-largest indices and scores, etc. Now, we select the best textures from \(V\) using the \(H_{i}\), \(i=1,\ldots,u\) matrices: \(T_{i}=V_{H_{i}}\). That is, we extract the best matches using the hard-attention matrix as index. Finally, let \(F=IFE(x)\) denote the output of the Initial Feature Extractor (IFE) applied to the LR image. We integrate the found features \(T_{i}\):

\[F_{TT}=F+\sum_{i=1}^{u}Conv_{i}(Concat(F,T_{i}\otimes S_{i}))\otimes S_{i}, \tag{7}\]

where \(\otimes\), \(Conv_{i}(\cdot)\) and \(Concat(\cdot)\) denote element-wise multiplication, convolutional \(3\times 3\) and concatenation blocks, respectively.
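A minimal PyTorch sketch of the search step follows. For brevity it folds the two-stage re-search of Eqs. (4)-(6) into a single top-\(u\) selection over all reference patches; tensor names mirror the text, and the window, stride, and padding values are those stated above.

```python
# Minimal sketch of the patch search with a 6x6 window and top-u matching.
import torch
import torch.nn.functional as F


def search(Q: torch.Tensor, K: torch.Tensor, window: int = 6,
           stride: int = 2, padding: int = 2, u: int = 3):
    # (B, C, H, W) -> (B, C*window*window, n_patches)
    q = F.unfold(Q, kernel_size=window, padding=padding, stride=stride)
    k = F.unfold(K, kernel_size=window, padding=padding, stride=stride)
    q = F.normalize(q, dim=1)          # q_i / ||q_i||
    k = F.normalize(k, dim=1)          # k_j / ||k_j||
    # Correlation matrix c_{i,j} = <q_i, k_j>, shape (B, n_q, n_k)
    corr = torch.bmm(q.transpose(1, 2), k)
    # S: the u largest scores, H: their indices (Eqs. 5-6)
    S, H = corr.topk(u, dim=2)
    return S, H
```

The larger \(6\times 6\) window with stride 2 is what keeps `corr` small enough for large images; a \(3\times 3\) window at stride 1 would produce roughly \(4\times\) more patches per side of the correlation matrix.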
### Cross-Scale Feature Integration

Inspired by SOTA methods for style/texture transfer [13, 14, 1], we integrate the previous attention results at different scales following [1]; this can be modeled as

\[x_{TT},T_{1},T_{2},T_{3}=CSFI(\{F_{TT}^{(i)}\}_{i=1,2,3}),\]

where \(x_{TT}\) is the merged super-resolution texture and \(T_{1}\), \(T_{2}\), \(T_{3}\) are the synthesized textures.

### Gradient Density Enhancing Module

Prior work has added structural information from the low-resolution image [8, 6]. We incorporate a Gradient Enhancing module for adding structural and edge information to the partial output of the \(CSFI(\cdot)\). First, to extract the gradient density for each of the RGB image channels, we convolve the image with \(3\times 3\) Sobel filter kernels [15] for the \(x\) and \(y\) derivative directions, \(K_{x}\) and \(K_{y}\), respectively, and calculate the gradient density as

\[GD(I)=\sqrt{(K_{x}*I)^{2}+(K_{y}*I)^{2}}.\]

Now, we pass the image gradient density \(g\) through a residual feature extractor: \(F_{g}=GFE(g)\). Finally, using the output from \(CSFI(\cdot)\), i.e., \(x_{TT},T_{1},T_{2},T_{3}\), the SR image is formulated as

\[x_{1g} = RB_{1}(Conv(Concat(F_{g},T_{3})))\]
\[x_{2g} = RB_{2}(Conv(Concat(x_{1g}\uparrow,T_{2})))\]
\[x_{3g} = RB_{3}(Conv(Concat(x_{2g}\uparrow,T_{1})))\]
\[SR = Conv(Concat(x_{3g},x_{TT}))\]

where \(RB(\cdot)\) represents a residual scheme and \(\uparrow\) is \(2\times\) bicubic upsampling.

### Loss Function

The overall loss is

\[\mathcal{L}_{total}=\lambda_{1}\mathcal{L}_{rec}+\lambda_{2}\mathcal{L}_{perc}+\lambda_{3}\mathcal{L}_{grad}+\lambda_{4}\mathcal{L}_{adv}, \tag{8}\]

where

\[\mathcal{L}_{rec}=(chw)^{-1}||SR-HR||_{1},\]

with \(c,h,w\) the channels, height, and width of the \(HR\) image. With the aim of enhancing the similarity between the feature-space representations of the generated image and the \(HR\) image in the \(vgg19\) feature space [16, 17], we use

\[\mathcal{L}_{perc}=(c_{i}h_{i}w_{i})^{-1}||vgg19_{i}(SR)-vgg19_{i}(HR)||_{1},\]

with \(c_{i},h_{i},w_{i}\) the channels, height, and width at the corresponding level \(i\). For structural similarity enhancement, we introduce a Gradient Density Loss using (3.4):

\[\mathcal{L}_{grad}=(chw)^{-1}||GD(SR)-GD(HR)||_{1},\]

with \(c,h,w\) the channels, height, and width of the \(HR\) image. Similar to [1, 18], we use a WGAN-GP for more stable training. This loss is described as

\[\mathcal{L}_{D} = \mathbb{E}_{\tilde{x}\sim\mathbb{P}_{g}}\left[D(\tilde{x})\right]-\mathbb{E}_{x\sim\mathbb{P}_{r}}\left[D(x)\right]+\lambda\mathbb{E}_{\hat{x}\sim\mathbb{P}_{\hat{x}}}\left[(\|\nabla_{\hat{x}}D(\hat{x})\|_{2}-1)^{2}\right],\]
\[\mathcal{L}_{G} = -\mathbb{E}_{\tilde{x}\sim\mathbb{P}_{g}}\left[D(\tilde{x})\right].\]

### Implementation Details

The window size for extracting patches is set to \(k=6\) with padding \(p=2\) and a stride of \(s=2\). In the experiments, we explore other configurations. The architecture for the CSFI model is \([16,8,4]\), \([9,9,9]\) for the GDE, and \(4\) residual blocks for the IFEs. For the correlation matrix, we use only the deepest feature-extractor level to perform the matrix multiplication. We use data augmentation for training by randomly flipping up-down and left-right followed by a random rotation of \(90^{\circ},180^{\circ},270^{\circ}\), with the batch size fixed to \(9\). The weights of the loss coefficients are \(1,1e^{-2},1e^{-3},1e^{-3}\), in the same order as in equation (8).
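As a minimal sketch, the gradient-density map of Sec. 3.4 and the weighted total loss (8) with the coefficients just stated can be written as follows; the perceptual and adversarial terms are assumed to be computed elsewhere, and the kernel values are the standard Sobel filters.

```python
# Minimal sketch of GD(I) via Sobel filtering and the total loss of Eq. (8).
import torch
import torch.nn.functional as F


def gradient_density(img: torch.Tensor) -> torch.Tensor:
    # 3x3 Sobel kernels K_x and K_y, applied per RGB channel (groups=3)
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
    ky = kx.t()
    weight = torch.stack([kx, ky]).unsqueeze(1).repeat(3, 1, 1, 1).to(img)
    grads = F.conv2d(img, weight, padding=1, groups=3)   # (B, 6, H, W)
    gx, gy = grads[:, 0::2], grads[:, 1::2]
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-12)         # GD(I)


def total_loss(sr, hr, l_perc, l_adv):
    l_rec = F.l1_loss(sr, hr)                            # L_rec
    l_grad = F.l1_loss(gradient_density(sr), gradient_density(hr))
    # Coefficients 1, 1e-2, 1e-3, 1e-3 as stated above
    return 1.0 * l_rec + 1e-2 * l_perc + 1e-3 * l_grad + 1e-3 * l_adv
```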
We use an Adam optimizer with \(lr=1e^{-4}\), \(\beta_{1}=0.9\), \(\beta_{2}=0.999\) and the default \(\epsilon=1e^{-8}\). All the experiments were performed on a single NVIDIA GeForce RTX 3090 GPU using the PyTorch framework.

## 4 Experiments and Results

Following recent work, we use two metrics to evaluate the results: Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM) [19]. We conduct the training using the CUFED5 dataset [20]. It contains 11,871 pairs consisting of an input and a reference image. There are 126 testing images, each having 4 reference images with different similarity levels. We also evaluate our method using different test sets such as Sun80 [21], Urban100 [22], and Set14 [23]. Sun80 contains 80 natural images, each of them paired with several reference images. Urban100 and Set14 do not have reference images, so we select references randomly from the same dataset. All the SR results are evaluated in terms of PSNR and SSIM on the Y channel of the YCbCr space.

Following the SOTA methods, we train our model using the train set from CUFED5 and test it on the CUFED5 test set, Sun80, Urban100, and Set14. Two versions of our model were trained: the first using only the reconstruction loss and the second using all losses. EXTRACTER-rec outperforms recent methods despite using a bigger window size, as we can see in Table 1. We observe better visual results when all losses are used; Fig. 1 illustrates some visual results alongside other novel models.

We study different configurations for our model. Table 2 shows the number of parameters and the correlation-matrix shape during the training phase for the CUFED5 dataset. We found that our method reduces the size of the attention correlation matrix by \(4\times\). Table 3 shows the effectiveness of changing the kernel size for the test phase using large-image datasets such as Sun80 and Urban100.
2310.19536
Adversarial Batch Inverse Reinforcement Learning: Learn to Reward from Imperfect Demonstration for Interactive Recommendation
Rewards serve as a measure of user satisfaction and act as a limiting factor in interactive recommender systems. In this research, we focus on the problem of learning to reward (LTR), which is fundamental to reinforcement learning. Previous approaches either introduce additional procedures for learning to reward, thereby increasing the complexity of optimization, or assume that user-agent interactions provide perfect demonstrations, which is not feasible in practice. Ideally, we aim to employ a unified approach that optimizes both the reward and policy using compositional demonstrations. However, this requirement presents a challenge since rewards inherently quantify user feedback on-policy, while recommender agents approximate off-policy future cumulative valuation. To tackle this challenge, we propose a novel batch inverse reinforcement learning paradigm that achieves the desired properties. Our method utilizes discounted stationary distribution correction to combine LTR and recommender agent evaluation. To fulfill the compositional requirement, we incorporate the concept of pessimism through conservation. Specifically, we modify the vanilla correction using Bellman transformation and enforce KL regularization to constrain consecutive policy updates. We use two real-world datasets, which represent two types of compositional coverage, to conduct empirical studies; the results also show that the proposed method relatively improves both effectiveness (2.3\%) and efficiency (11.53\%).
Jialin Liu, Xinyan Su, Zeyu He, Xiangyu Zhao, Jun Li
2023-10-30T13:43:20Z
http://arxiv.org/abs/2310.19536v1
Adversarial Batch Inverse Reinforcement Learning: Learn to Reward from Imperfect Demonstration for Interactive Recommendation

###### Abstract

Rewards serve as a measure of user satisfaction and act as a limiting factor in interactive recommender systems. In this research, we focus on the problem of learning to reward (LTR), which is fundamental to reinforcement learning. Previous approaches either introduce additional procedures for learning to reward, thereby increasing the complexity of optimization, or assume that user-agent interactions provide perfect demonstrations, which is not feasible in practice. Ideally, we aim to employ a unified approach that optimizes both the reward and policy using compositional demonstrations. However, this requirement presents a challenge since rewards inherently quantify user feedback on-policy, while recommender agents approximate off-policy future cumulative valuation. To tackle this challenge, we propose a novel batch inverse reinforcement learning paradigm that achieves the desired properties. Our method utilizes discounted stationary distribution correction to combine LTR and recommender agent evaluation. To fulfill the compositional requirement, we incorporate the concept of pessimism through conservation. Specifically, we modify the vanilla correction using Bellman transformation and enforce KL regularization to constrain consecutive policy updates. We use two real-world datasets, which represent two types of compositional coverage, to conduct empirical studies; the results also show that the proposed method relatively improves both effectiveness (2.3%) and efficiency (11.53%).

Inverse Reinforcement Learning, Agent Planning, Interactive Recommendation

## I Introduction

Modern recommendation technology changes human-machine collaboration from machine-centric searching to human-oriented mining [1], thus widely accelerating applications like e-commerce [2], etc. From a system perspective, a recommender agent mines personalization from user-agent interactions. As these interactions accumulate over time, the agent gradually learns to imitate user preferences and narrows recommendations down to relevant choices that maximize user satisfaction [3]. Recently, advancements in reinforcement learning (RL) have offered new toolkits to model this maximization procedure as an interactive decision-making process, known as an interactive recommender system (IRS), as both on-the-spot rewards from previous behavior demonstrations and off-the-spot rewards from future long-term accumulation are valuable [4].

The reward function quantifies user satisfaction in RL; thus, learning to reward (LTR) is fundamental [5]. Philosophically, LTR reflects the ability of introspection, a human-level intelligence researchers have pursued. Computationally, rewards transform user satisfaction maximization into discounted future reward accumulation, making LTR a bottleneck for IRS. However, LTR is challenging: (i) **Reward equivalence**: multiple reward settings can interpret the optimal recommending policy from the demonstration dataset, making LTR underdetermined [6]; (ii) **Exploration-efficiency** [7]: common RL approaches require online interaction for policy evaluation; such on-policy planning is constrained in recommendation, since under-optimized agents may hurt user satisfaction [8], thus a more sample-efficient approach is required.
Previous works primarily address reward equivalence by employing a separate procedure to approximate rewards based on multiple feedback signals (_e.g.,_ click, purchase, _etc._): non-adversarial approximations [9, 10] learn a heuristic reward with neural architectures representing inductive bias. Comparatively, adversarial approximation methods learn a discriminating score between demonstration data and recommender agents; when proceeding to RL planning, this discriminating reward encourages actions similar to behavior patterns. Recently, observing that high-quality demonstrations collected via unknown yet non-random behavior agents are available, several batch RL methods leverage off-policy correction for exploration-efficiency [11, 12]. However, existing RL methods for reward equivalence and exploration-efficiency do not mutually benefit from each other; recent works [12, 13] aim to bridge the gap, while either still requiring an individual procedure to optimize or assuming data coverage.

Learning to reward is challenging. First, joint optimization is desired yet contradictory in existing methods: whether learnt adversarially or by supervision, the reward is by nature an immediate credit quantification of user feedback and requires on-policy updates. Second, it is relatively straightforward to define cumulative rewards episodically [14] rather than immediately [12]; immediate credit assignment is more burdensome [15]. Additionally, there are two commonly adopted disciplines for offline environments: imitation learning [13], which converges to an implicit reward with expert demonstrations (perfect coverage), and vanilla batch RL [16], which generally approximates an explicit reward from more random demonstrations (uniform coverage) [17]. However, compositional demonstration sets (imperfect coverage) between these two extremes are more practical; their quality is determined by unknown prior behavior agents and thus cannot be verified.

To address the aforementioned challenges, we propose a novel adversarial batch reinforcement learning method for IRS. We utilize discounted stationary distribution correction to combine LTR and policy REINFORCE without requiring additional pipelines. The Bellman transformation on immediate rewards turns the on-policy objective into off-policy procedures. For imperfect demonstrations, we leverage KL conservation as a form of pessimism [17] to balance exploitation and exploration. Our main contributions are as follows:

* For the first time, we propose a batch inverse RL method for the interactive recommender system with imperfect demonstrations taken into account. It removes the additional learning pipeline for LTR and adapts to different compound demonstrations.
* Our conservative learning objective relatively improves by 2.3% over the second-best comparison with an 11.53% reduction in demonstration consumption.
* Empirical studies on two real-world recommendation datasets that represent two types of compositional coverage also demonstrate the effectiveness of our method.

## II Problem Statement

Interactions between the recommender agent and users can be modeled as a Markov Decision Process \((\mathcal{S},\mathcal{A},\mathcal{P},\mathcal{R},\gamma)\):

* **State space \(\mathcal{S}\in\mathbb{R}^{d_{s}}\)**: State \(\mathbf{s}\) represents the browsing history so far, with each item in the browsing window sorted chronologically to learn the state representation.
* **Action space \(\mathcal{A}\in\mathbb{R}^{N}\)**: The action at time \(t\) represents an item recommended back to a user.
Without loss of generality, we assume that the agent returns one item at each time step; the list-wise extension is straightforward.

* **Reward \(\mathcal{R}:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}\)**: The user at state \(\mathbf{s}_{t}\) browses the received recommendation \(\mathbf{a}_{t}\); at this time they can skip, click, or purchase the recommendation. Then the agent receives an immediate reward \(r(\mathbf{s}_{t},\mathbf{a}_{t})\) quantifying the user feedback.
* **Transition probability \(\mathcal{P}:\mathcal{S}\times\mathcal{A}\times\mathcal{S}\rightarrow\mathbb{R}\)**: The probability \(p(\mathbf{s}_{t+1}|\mathbf{s}_{t},\mathbf{a}_{t})\) defines the user state transition at state \(\mathbf{s}_{t}\) after receiving recommendation \(\mathbf{a}_{t}\). We assume this transition satisfies the first-order Markov property \(p\left(\mathbf{s}_{t+1}\mid\mathbf{s}_{t},\mathbf{a}_{t},\ldots,\mathbf{s}_{1},\mathbf{a}_{1}\right)=p\left(\mathbf{s}_{t+1}\mid\mathbf{s}_{t},\mathbf{a}_{t}\right)\).
* **Discount factor \(\gamma\in[0,1]\)**: \(\gamma\) characterizes the importance of different timestamps. Specifically, \(\gamma=0\) values only immediate feedback, while \(\gamma=1\) weights all future rewards in the interaction equally.

Given the interactions collected so far, the recommender agent \(\pi_{\theta}(\mathbf{a}\mid\mathbf{s})\) uses the constructed demonstrations \(\mathcal{D}=\{(\mathbf{s}_{k},\mathbf{a}_{k},\mathbf{s}_{k+1})\}_{k=1}^{N}\) to maximize the following user satisfaction:

\[\max_{\pi_{\theta}}\mathbb{E}_{\tau\sim\pi_{\theta}}\left[\sum_{t=0}^{|\tau|}\gamma^{t}r\left(\mathbf{s}_{t},\mathbf{a}_{t}\right)\right], \tag{1}\]

where \(\tau=\left(\mathbf{s}_{0},\mathbf{a}_{0},\ldots,\mathbf{s}_{|\tau|-1},\mathbf{a}_{|\tau|-1}\right)\) represents an episodic interaction between the agent and users.

## III Methodology

Current methods either require divided optimization or assume coverage of demonstration sets, both of which are impractical. In this section, we introduce a novel inverse reinforcement learning paradigm based on discounted stationary distribution correction. We implement a Bellman transformation on the state-action valuation so that the vanilla learning objective becomes off-policy, and we utilize KL conservation as a form of pessimism to handle compositional coverage. Finally, we introduce an extensible neural architecture for optimization.

### _Adversarial Inverse Reinforcement Learning_

When learning from demonstrations \(\mathcal{D}\), the reward \(r(\mathbf{s}_{t},\mathbf{a}_{t})\) guides the recommender agent \(\pi_{\theta}(\mathbf{a}\mid\mathbf{s})\) towards the unknown behavior agents which collected \(\mathcal{D}\):

\[r\left(\mathbf{s}_{t},\mathbf{a}_{t}\right)=\log\frac{d^{\mathcal{D}}\left(\mathbf{s}_{t},\mathbf{a}_{t}\right)}{d^{\pi_{\theta}}\left(\mathbf{s}_{t},\mathbf{a}_{t}\right)}, \tag{2}\]

where \(d^{\pi_{\theta}}(\mathbf{s}_{t},\mathbf{a}_{t})\propto(1-\gamma)\sum_{t=0}\gamma^{t}\exp\left(h_{\theta}(\mathbf{s}_{t},\mathbf{a}_{t})\right)\) is the parameterized discounted stationary distribution [13], which introduces the discount factor \(\gamma\) to tackle distribution shift [18].
The vanilla RL objective (1) then transforms into:

\[\max_{\pi_{\theta}}(1-\gamma)\cdot\mathbb{E}_{(\mathbf{s}_{t},\mathbf{a}_{t})\sim\mathcal{D}}\left[\sum_{t=0}^{\infty}\gamma^{t}\log\frac{d^{\mathcal{D}}\left(\mathbf{s}_{t},\mathbf{a}_{t}\right)}{d^{\pi_{\theta}}\left(\mathbf{s}_{t},\mathbf{a}_{t}\right)}\right], \tag{3}\]

which can be further expanded as [13]:

\[\max_{\pi_{\theta}}\log\mathbb{E}_{(\mathbf{s},\mathbf{a})\sim d^{\mathcal{D}}}\left[e^{r_{\phi}(\mathbf{s},\mathbf{a})}\right]-(1-\gamma)\mathbb{E}_{(\mathbf{s},\mathbf{a})\sim d^{\pi_{\theta}}}\left[r_{\phi}(\mathbf{s},\mathbf{a})\right]. \tag{4}\]

Although the quality of \(\mathcal{D}\) is unknown a priori, the reward offers valuation information that the agent \(\pi_{\theta}\) can later use to reformulate new transitions which have not yet been observed in \(\mathcal{D}\). To learn \(r_{\phi}(\mathbf{s},\mathbf{a})\), we imitate the behavior cumulative valuation in a min-max game which converges to (2):

\[\begin{array}{c}\max_{\pi_{\theta}}\min_{r_{\phi}}\log\mathbb{E}_{(\mathbf{s},\mathbf{a})\sim d^{\mathcal{D}}}\left[e^{r_{\phi}(\mathbf{s},\mathbf{a})}\right]\\ -(1-\gamma)\mathbb{E}_{(\mathbf{s},\mathbf{a})\sim d^{\pi_{\theta}}}\left[r_{\phi}(\mathbf{s},\mathbf{a})\right].\end{array} \tag{5}\]

For the minimization part, the parameterized reward \(r_{\phi}(s,a)\) tries to imitate the valuation pattern in the demonstrations \(\mathcal{D}\); for the maximization part, the agent \(\pi_{\theta}\) reformulates new sub-patterns from existing interaction episodes.

### _Bellman Transformation for Efficiency_

The on-policy evaluation part over \(d^{\pi_{\theta}}\) in (5) leads to low efficiency. We utilize the Bellman operator

\[\mathcal{B}^{\pi}v(\mathbf{s},\mathbf{a})=\gamma\mathbb{E}_{\mathbf{s}^{\prime}\sim p(\cdot|\mathbf{s},\mathbf{a}),\mathbf{a}^{\prime}\sim\pi(\cdot|\mathbf{s}^{\prime})}\left[v\left(\mathbf{s}^{\prime},\mathbf{a}^{\prime}\right)\right], \tag{6}\]

where state \(\mathbf{s}^{\prime}\) and action \(\mathbf{a}^{\prime}\) are at the timestamp following state \(\mathbf{s}\); here we temporarily drop time subscripts to represent a general triple \((\mathbf{s},\mathbf{a},\mathbf{s}^{\prime})\). The reward in (5) is then computed as a temporal difference between consecutive tuples:

\[r_{\phi}(\mathbf{s},\mathbf{a})=v_{\phi}(\mathbf{s},\mathbf{a})-\mathcal{B}^{\pi}v_{\phi}(\mathbf{s},\mathbf{a}), \tag{7}\]

where \(v_{\phi}(\mathbf{s},\mathbf{a})\) is a cumulative state-action valuation approximation (parameterized by \(\phi\)). Combined with the Bellman transformation (7), the on-policy part of (5) reduces to a linear form, which leads to an off-policy version:

\[\begin{split}\max_{\pi_{\theta}}\min_{v_{\phi}}&\log\mathbb{E}_{(\mathbf{s},\mathbf{a},\mathbf{s}^{\prime})\sim d^{\mathcal{D}}}\left[e^{r_{\phi}(\mathbf{s},\mathbf{a})}\right]\\ &-(1-\gamma)\mathbb{E}_{(\mathbf{s}_{0},\mathbf{a}_{0})\sim d^{\mathcal{D}}}\left[v_{\phi}(\mathbf{s}_{0},\mathbf{a}_{0})\right].\end{split} \tag{8}\]

This objective exhibits two characteristics which are absent in previous work: first, it does not require a separate training pipeline for LTR, thereby avoiding additional complexity; second, it does not require on-policy interactions from users, and thus improves efficiency.

### _KL Conservation for Effectiveness_

One problem of the vanilla objective (8) is that it relies purely on the demonstration set.
In practice, the quality of \(\mathcal{D}\) can be compositional, lying between perfect coverage and uniform coverage, which goes beyond the original data assumptions. Furthermore, the demonstration sets may lack diversity. Inspired by the concept of pessimism as an inductive bias in risky complex environments [17], we restrain consecutive updates within a divergence measure via:

\[\begin{split}\max_{\pi_{\theta}}\min_{v_{\phi}}&\log\mathbb{E}_{(\mathbf{s},\mathbf{a},\mathbf{s}^{\prime})\sim d^{\mathcal{D}}}\left[e^{r_{\phi}(\mathbf{s},\mathbf{a})}\right]\\ &-(1-\gamma)\mathbb{E}_{(\mathbf{s}_{0},\mathbf{a}_{0})\sim d^{\mathcal{D}}}\left[v_{\phi}(\mathbf{s}_{0},\mathbf{a}_{0})\right]\\ &-\mathbb{KL}\left[\pi_{\theta}(\cdot|\mathbf{s})||\pi_{\theta^{\prime}}(\cdot|\mathbf{s})\right].\end{split} \tag{9}\]

The KL penalty can be approximated via the Fisher information matrix \(G(\cdot;\theta)\) [19] with a second-order Taylor expansion. Thus we obtain the overall adversarial batch inverse reinforcement learning objective. From an optimization perspective, the vanilla objective (8) unifies LTR in the minimization and policy REINFORCE [11] in the maximization; the KL regularization removes uncertainty by taking conservative gradient steps. This is different from mixture regularization, which still requires online interaction for diversity [13].

### _Neural Implementation_

In order to estimate the conservative objective (9), we adopt an extensible Actor-Critic architecture, which consists of two components: (i) the **critic** \(v_{\phi}(\mathbf{s},\mathbf{a})\), which calculates the reward implicitly from off-policy demonstrations; (ii) the **actor** \(\pi_{\theta}(\mathbf{a}\mid\mathbf{s})\), which generates recommendations based on its policy. Both components share the same state-encoding backbone, which forms a simplified mixture of experts.

**Encoder** Given a demonstration \(\{i_{0},i_{1},\dots,i_{t-1}\}\), the encoder first projects the recorded item \(i_{t-1}\) into an embedding vector \(\mathbf{e}_{t-1}\in\mathbb{R}^{d_{e}}\). We then use autoregressive neural networks to model the transition probability \(p(\mathbf{s}_{t}|\mathbf{s}_{t-1},\mathbf{a}_{t-1})\), and the state \(\mathbf{s}_{t}\) can be formalized as follows:

\[\mathbf{s}_{t}=h_{\theta_{e}}(\mathbf{s}_{t-1},\mathbf{e}_{t-1}), \tag{10}\]

where \(\theta_{e}\) denotes learnable parameters, and the autoregressive model \(h_{\theta_{e}}(\cdot,\cdot)\) can be a recurrent neural network, _e.g.,_ GRU [20], or a feedforward neural network, _e.g.,_ CNN [21]. For both architectures to capture the temporal dynamics of transitions, we use a \(w\)-length window and concatenate the recent interactions \([i_{t-w+2},i_{t-w+3},\dots,i_{t-1}]\) sampled from \(\mathcal{D}\), truncating (if \(t>w\)) and padding at the rightmost position (if \(t<w\)).

**Actor** Based on the current state \(\mathbf{s}_{t}\), the actor agent generates a list of candidates from the entire item space \(|\mathcal{A}|\) as its next action \(\mathbf{a}_{t}\). A straightforward representation of item \(i\) being involved under the current user state \(\mathbf{s}_{t}\) is thus:

\[\pi\left(i\in\mathbf{a}_{t}\mid\mathbf{s}_{t}\right)=\frac{\exp\left(\mathbf{W}_{i}\mathbf{s}_{t}+\mathbf{b}_{i}\right)}{\sum_{j=1}^{|\mathcal{A}|}\exp\left(\mathbf{W}_{j}\mathbf{s}_{t}+\mathbf{b}_{j}\right)}, \tag{11}\]

where \(\mathbf{W}_{i}\) is the \(i\)-th row of the parametric matrix \(\mathbf{W}_{(a)}\in\mathbb{R}^{|\mathcal{A}|\times d_{s}}\), and \(\mathbf{b}_{(a)}\in\mathbb{R}^{|\mathcal{A}|}\) is the corresponding bias.
Due to the large action space in recommendation (\(|\mathcal{A}|\gg 1\)), the vanilla policy (11) is expensive to enumerate. We utilize the Gumbel-Softmax trick to provide a differentiable approximation:

\[\pi\left(i\in\mathbf{a}_{t}\mid\mathbf{s}_{t}\right)=\frac{\exp\left(\left(\log\left(f_{\theta_{a}}\left(\mathbf{s}_{t}\right)\left[\mathbf{e}_{i}\right]\right)+g_{i}\right)/\gamma_{g}\right)}{\sum_{j=1}^{|\mathcal{A}|}\exp\left(\left(\log\left(f_{\theta_{a}}\left(\mathbf{s}_{t}\right)\left[\mathbf{e}_{j}\right]\right)+g_{j}\right)/\gamma_{g}\right)}, \tag{12}\]

where \(\{g_{j}\}_{j=1}^{|\mathcal{A}|}\) are i.i.d. samples from the Gumbel distribution, \(\gamma_{g}\) is the scalar temperature, and \(f_{\theta_{a}}\) is a multi-layer perceptron which maps the user's current state into action preferences. Eq. (12) replaces the argmax with a discrete softmax; this replacement avoids distribution mismatch [4].

**Critic** To implicitly learn the reward from the compositional demonstrations, we take the concatenation of the current state \(\mathbf{s}_{t}\) and a potential action (item) \(\mathbf{e}_{t}^{(\mathbf{a})}\) as the input of the critic \(v_{\phi}(\mathbf{s},\mathbf{a})\), which measures discounted cumulative rewards as:

\[v_{\phi}(\mathbf{s},\mathbf{a})=\mathbf{w}_{(c)}^{T}\sigma\left(\mathbf{W}_{(c)}\left[(\mathbf{s}_{t})^{T},(\mathbf{e}_{t}^{(\mathbf{a})})^{T}\right]^{T}+\mathbf{b}_{(c)}\right), \tag{13}\]

where \(\mathbf{W}_{(c)}\in\mathbb{R}^{l\times(d_{s}+d_{e})}\) is the weight matrix, \(\mathbf{b}_{(c)}\in\mathbb{R}^{l}\) denotes the bias term, and \(\mathbf{w}_{(c)}\in\mathbb{R}^{l}\) is the regression parameter. \(\sigma(\cdot)\) is a nonlinear activation such as ReLU. For notational simplicity, we denote by \(\phi=\{\mathbf{w}_{(c)},\mathbf{W}_{(c)},\mathbf{b}_{(c)}\}\) the learnable parameters of the critic.

### _Overall Optimization_

Incorporating the parameterized recommender actor (12) and the valuation critic (13), we can now reformulate the enhanced KL-conservative objective (9) as:

\[\max_{\pi_{\theta}}\min_{v_{\phi}}\log\underset{(\mathbf{s},\mathbf{a},\mathbf{s}^{\prime})\sim d^{D}}{\mathbb{E}}\left[e^{v_{\phi}(\mathbf{s},\mathbf{a})-\gamma v_{\phi}(\mathbf{s}^{\prime},\mathbf{a}^{\prime})}\right]-(1-\gamma)\underset{(\mathbf{s}_{0},\mathbf{a}_{0})\sim d^{D}}{\mathbb{E}}\left[v_{\phi}(\mathbf{s}_{0},\mathbf{a}_{0})\right]-\mathbb{KL}\left[\pi_{\theta}(\cdot|\mathbf{s})||\pi_{\theta^{\prime}}(\cdot|\mathbf{s})\right], \tag{14}\]

where \(\phi\) denotes the parameters to be optimized in the critic network, and \(\theta=\{\theta_{e},\theta_{a}\}\) are the parameters of the recommender agent. Since we have no prior knowledge about the demonstration set \(\mathcal{D}\), sharing the bottom encoding of the user state not only helps reduce parameters (otherwise an additional encoder for the critic would be needed), but also forms an adversarial competition that emphasizes different aspects of state encoding: on the one hand, we would like the recommender agent to imitate what is evaluated highly by the critic (the minimization subroutine in (14)); on the other hand, we would like the recommender agent to reinforce highly evaluated sub-transitions in existing interactive trajectories (the maximization subroutine in (14)), but with conservation in mind. Such adversarial knowledge has been demonstrated to be useful in previous works [17, 22].
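As a minimal PyTorch sketch, one alternating update of objective (14) could look as follows. The interfaces `actor.sample`, `actor.kl_to_previous`, and `critic(s, a)` returning a batch of scalar valuations are hypothetical placeholders for the components described above, and the plain KL penalty stands in for the Fisher-matrix approximation used in Algorithm 1 below.

```python
# Minimal sketch of one alternating min-max update of objective (14).
import torch


def training_step(batch, actor, critic, opt_actor, opt_critic, gamma=0.99):
    s0, s, a, s_next = batch  # (initial state, state, action, next state)
    log_n = torch.log(torch.tensor(float(s.shape[0])))

    def objective():
        a0 = actor.sample(s0)           # Gumbel-Softmax sampling, Eq. (12)
        a_next = actor.sample(s_next)
        # r_phi(s, a) = v_phi(s, a) - gamma * v_phi(s', a'), Eq. (7)
        r = critic(s, a) - gamma * critic(s_next, a_next)
        j_log = torch.logsumexp(r, dim=0) - log_n      # log E_D[e^r]
        j_linear = (1 - gamma) * critic(s0, a0).mean()
        return j_log - j_linear

    # Critic step: minimize (J_log - J_linear)
    opt_critic.zero_grad()
    objective().backward()
    opt_critic.step()

    # Actor step: maximize (J_log - J_linear) under the KL conservation
    opt_actor.zero_grad()
    actor_loss = -objective() + actor.kl_to_previous(s)  # hypothetical KL helper
    actor_loss.backward()
    opt_actor.step()
```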
Algorithm 1 shows the training details, where we use a second-order Taylor expansion to approximate the KL conservation term \(\mathcal{R}_{\text{AI}}\).

```
Require: compositional demonstration set \(\mathcal{D}\).
Ensure: agent \(\pi_{\theta}(\mathbf{a}\mid\mathbf{s})\) and critic \(v_{\phi}(\mathbf{s},\mathbf{a})\).
1: Initialize parameters \(\theta,\phi\).
2: for \(i=1,\dots,I\) do
3:   Sample \(\left\{\left(\mathbf{s}_{0}^{(b)},\mathbf{s}^{(b)},\mathbf{a}^{(b)},\mathbf{s}^{\prime(b)}\right)\right\}_{b=1}^{B}\sim\mathcal{D}\)
4:   Compute Fisher information matrix \(G(\mathbf{s},\mathbf{a};\theta)\) on \(\mathcal{D}\)
5:   for iteration \(j=1,\dots,B\) do
6:     \(\mathbf{a}_{0}^{(j)}\sim\pi_{\theta}\left(\cdot\mid\mathbf{s}_{0}^{(j)}\right)\)  \(\triangleright\) (12)
7:     \(\mathbf{a}^{\prime(j)}\sim\pi_{\theta}\left(\cdot\mid\mathbf{s}^{\prime(j)}\right)\)  \(\triangleright\) (12)
8:   end for
9:   \(\hat{J}_{log}=\log\left(\frac{1}{B}\sum_{j=1}^{B}e^{v_{\phi}(\mathbf{s}^{(j)},\mathbf{a}^{(j)})-\gamma v_{\phi}(\mathbf{s}^{\prime(j)},\mathbf{a}^{\prime(j)})}\right)\)
10:  \(\hat{J}_{linear}=\frac{1}{B}\sum_{j=1}^{B}(1-\gamma)\,v_{\phi}(\mathbf{s}_{0}^{(j)},\mathbf{a}_{0}^{(j)})\)
11:  \(\mathcal{R}_{\text{AI}}\approx\frac{1}{B}\sum_{j=1}^{B}\delta\theta^{T}G(\mathbf{s}^{(j)},\mathbf{a}^{(j)};\theta)\delta\theta\)
12:  Update \(\phi\leftarrow\phi-\eta_{v}\nabla_{\phi}(\hat{J}_{log}-\hat{J}_{linear})\)
13:  Update \(\theta\leftarrow\theta+\eta_{a}\nabla_{\theta}\left(\hat{J}_{log}-\hat{J}_{linear}-\mathcal{R}_{\text{AI}}(\theta)\right)\)
14: end for
```
**Algorithm 1** Adversarial Batch Conservative IRL

## IV Experiments

In this section, we empirically examine and compare our proposed learning algorithm. We perform experiments on two publicly available real-world datasets, aiming to address the following research questions: (i) **Effectiveness.** Does the adversarial discounted distribution correction (14) offer more effectiveness compared with other existing methods for interactive recommendation? (ii) **Efficiency.** Does the off-policy evaluation induced by the Bellman transformation (6) reach the same performance with less demonstration consumption? (iii) **Adaptivity.** Do other architecture implementations share the same benefit from incorporating the learning objective and conservation designs?

### _Experimental Setup_

**Datasets.** We conduct experiments on two real-world interactive recommendation datasets, _i.e._, _Kaggle_1 and _RecSys15_2.

Footnote 1: [https://www.kaggle.com/retailrocket/commerce-dataset](https://www.kaggle.com/retailrocket/commerce-dataset)

Footnote 2: [https://recsys.acm.org/recsys15/challenge](https://recsys.acm.org/recsys15/challenge)

* **Kaggle** This dataset is released by a real-world e-commerce platform and provides more uniform coverage over interactions; it is thus suitable for comparison with reinforcement learning baselines designed for this setting. To align with _RecSys15_, we consider views as clicks and adding items to the cart as purchases. We remove items with fewer than 3 interactions, as well as sessions shorter than 3 (see the sketch after this list); details in Table I.
* **RecSys15** This dataset is released by the RecSys Challenge 2015 and provides more compound coverage over interactions, which offers a setting to compare with imitation learning baselines developed for expert demonstrations. We eliminate sessions shorter than 3 and subsequently sample 200,000 interactions; details in Table I.
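A minimal pandas sketch of the filtering described for the _Kaggle_ dataset is given below; the column names `item_id` and `session_id` are hypothetical, and the thresholds are those stated above.

```python
# Minimal sketch of the dataset filtering (hypothetical column names).
import pandas as pd


def preprocess(df: pd.DataFrame) -> pd.DataFrame:
    # Drop items that were interacted with fewer than 3 times
    counts = df["item_id"].value_counts()
    df = df[df["item_id"].isin(counts[counts >= 3].index)]
    # Drop sessions shorter than 3 interactions
    sizes = df.groupby("session_id")["item_id"].transform("size")
    return df[sizes >= 3]
```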
TABLE I: Data Statistics.

|               | _Kaggle_  | _RecSys15_ |
|---------------|-----------|------------|
| #interactions | 195,523   | 200,000    |
| #items        | 70,852    | 26,702     |
| #clicks       | 1,176,680 | 1,110,965  |
| #purchases    | 57,269    | 43,946     |

**Metrics.** For offline evaluation, we measure the top-k \((k=\{5,10\})\) Hit Ratio (H@k) [15] and Normalized Discounted Cumulative Gain (N@k) [23], which are widely adopted as measurements of recall and ranking performance in recent works [4, 15]. To ensure that the dataset is divided into non-overlapping subsets for different purposes, we randomly select 80% as the training set, 10% as the validation set, and the remaining interactions as the test set.

**Baselines.** We consider the following learning algorithms for comparison: Behavior Cloning (BC) [18] and Supervised Learning (SL) [20], policy gradient for the actor with supervised learning to reward (SL+PG) [9], off-policy Actor-Critic (SL+AC) [4], adversarial policy learning (AL+PG) [14], and adversarial Deep Q-Learning (AL+DQN) [12]. Specifically, we adopt the original settings [15] for the fixed-reward baselines: 0.2 for click, 1.0 for purchase, and 0.0 for passing. A 2-layer GRU with 64 hidden units is used as the backbone for all baselines. We use the 10 most recent interactions as the input length (\(w=10\)), with mini-batch size \(B=256\) and state dimension \(d_{s}=64\). Item embeddings are initialized from a Gaussian distribution (\(d_{e}=50\)). For the recommender agent (12), we adopt a 2-layer MLP with ReLU as the nonlinear activation; the scalar temperature \(\gamma_{g}\) for Gumbel-Softmax is 0.2 and the conservative scalar \(\delta\) is \(1e-2\), as in SL+AC [4]. We utilize the same MLP for the critic network (13), both with 512 hidden units, and the regression parameter is 3-dimensional (\(l=3\)) as a representation of pass, click and purchase feedback. The learning rates \(\eta_{v},\eta_{a}\) are \(5e-3\), and we train for 50 epochs.

### _Experimental Results_

**Overall Performance (i).** Table II shows click performance among the compared baselines, and Table III gives the results on purchase feedback. Both experiments are conducted on the GRU backbone. A similar tendency can be observed in both tables. First, we observe that BC works worst among these baselines on both datasets, which demonstrates that vanilla imitation learning does not suit compositional demonstrations in interactive recommendation, and that RL-based IRS can reveal new valuable patterns even in offline environments, as previous work also reports [17]. Second, we observe that off-policy methods (SL+AC and AL+DQN) work better than on-policy methods (SL+PG and AL+PG) in both the model-based and model-free groups, because on-policy methods generally require online interactions to evaluate the current agent, which are not available in offline environments. Third, we observe that the model-based method (AL+PG) works better than the model-free approach (SL+PG) on more compositional demonstrations, _i.e., RecSys_, and vice versa for more uniform demonstrations, _i.e., Kaggle_; this is consistent with existing works [17]. Finally, our proposed method outperforms all compared learning algorithms, which results from the combination of learning to reward (the minimization in (14)) and policy reinforcement (the maximization in (14)).

**Efficiency Study (ii).** Table IV shows the efficiency comparison among RL algorithms on _RecSys_ with the GRU backbone.
We use the SL performance in Table II as the threshold, and count the iterations needed for the agent to exceed SL continuously 5 times as a measurement of exploration-efficiency. Epochs averaged over 10 experiments are reported as results. First, we observe that both off-policy approaches exceed the on-policy methods, whether model-based or model-free; this verifies the motivation to develop an off-policy version of the learning objective (8). Next, we observe that adversarial learning (AL+PG and AL+DQN) requires more epochs than supervised learning (SL+AC and SL+PG), since the dynamic equilibrium of the former generally requires more time to fit. Our approach requires the fewest epochs (a relative 11.53% reduction), because the Bellman transformation (6) results in off-policy evaluation, and the objective (14) unifies reinforcement learning and auxiliary learning (AL or SL) to reduce complexity.

**Adaptivity analysis (iii).** Figure 1 shows the ablation study on _RecSys_ with two kinds of backbones: GRU and CNN [4][15]. For the latter, we concatenate the input interactions to form a 2D feature map and then conduct convolution upon it. We also implement support constraints [4] as a simplified version of conservation. Figures 1(a) and 1(c) show results on H@10; Figures 1(b) and 1(d) show results on N@10. The vanilla objective (\(w/o\) \(\mathcal{R}(\theta)\)) performs close to SL, since offline demonstrations do not cover all items for the agent to explore. Uncertainty necessitates regularization. SC performs a simplified conservation from a supervised-learnt behavior agent. Since behavior agent estimation has inaccuracies, direct conservation (\(w\) \(\mathcal{R}(\theta)\)) achieves the best improvement.

## V Related Works

Classic recommendation algorithms, _e.g.,_ [24], assume that similar users have similar preferences and propose collaborative filtering algorithms based on matrix factorization. However, classic methods cannot effectively model high-order user-agent interaction dynamics. To address this issue, deep sequential recommendation approaches, _e.g.,_ [20], treat interaction procedures as temporal sequences and use latent state vectors to capture the high-order temporal dynamics of user preferences. But in interactive recommendation tasks, there are multiple types of feedback signals with different valuations for the RA, _e.g.,_ user click behavior may better reflect their true interests than purchase behavior. Deep sequential models do not capture this difference when modeling user behavior.
To further address this, RL-based recommender agents aim to optimize a cumulative reward function built from the various feedback signals; existing works can be grouped as follows: (i) **policy-based

TABLE II: Effectiveness on click feedback: H@5, N@5, H@10 and N@10 on _RecSys_ and _Kaggle_, for BC, SL, SL+PG, SL+AC, AL+PG, AL+DQN and our method. Best is bold, and the next best is underlined. "\(*\)" indicates statistically significant improvements (two-sided t-test with \(p<0.05\)) over the best baseline.

TABLE III: Effectiveness on purchase feedback, with the same layout as Table II. Best is bold, and the next best is underlined. "\(*\)" indicates statistically significant improvements (two-sided t-test with \(p<0.05\)) over the best baseline.
2310.06416
Stochastic representation of processes with resetting
In this paper we introduce a general stochastic representation for an important class of processes with resetting. It allows one to describe any stochastic process that is intermittently terminated and restarted from a predefined random or non-random point. Our approach is based on stochastic differential equations called jump-diffusion models. It allows one to analyze processes with resetting both analytically and using Monte Carlo simulation methods. To depict the strength of our approach, we derive a number of fundamental properties of Brownian motion with Poissonian resetting, such as: the Itô lemma, the moment-generating function, the characteristic function, the explicit form of the probability density function, moments of all orders, various forms of the Fokker-Planck equation, the infinitesimal generator of the process and its adjoint operator. Additionally, we extend the above results to the case of time-nonhomogeneous Poissonian resetting. This way we build a general framework for the analysis of any stochastic process with intermittent random resetting.
Marcin Magdziarz, Kacper Taźbierski
2023-10-10T08:37:25Z
http://arxiv.org/abs/2310.06416v1
# Stochastic representation of processes with resetting

###### Abstract

In this paper we introduce a general stochastic representation for an important class of processes with resetting. It allows one to describe any stochastic process that is intermittently terminated and restarted from a predefined random or non-random point. Our approach is based on stochastic differential equations called jump-diffusion models. It allows one to analyze processes with resetting both analytically and using Monte Carlo simulation methods. To depict the strength of our approach, we derive a number of fundamental properties of Brownian motion with Poissonian resetting, such as: the Ito lemma, the moment-generating function, the characteristic function, the explicit form of the probability density function, moments of all orders, various forms of the Fokker-Planck equation, the infinitesimal generator of the process and its adjoint operator. Additionally, we extend the above results to the case of time-nonhomogeneous Poissonian resetting. This way we build a general framework for the analysis of any stochastic process with intermittent random resetting.

pacs: 05.40.Fb, 02.50-r

## I Introduction

In recent years we have been observing great interest in the field of intermittent stochastic processes, in which the motion of a particle is interrupted by random resetting to an initial state. Random motion with resetting is typically observed in various searching strategies and foraging patterns, where a living organism, after an unsuccessful excursion, returns to the initial position and starts the search again [1]. The justification of such a strategy is that if one does not succeed in finding the target within a certain, not very long, time period, it is sometimes more beneficial and safe to move back to the origin and start another excursion, see [2; 3]. On the other hand, in the field of population dynamics resetting can be interpreted as an event reducing a population to its natural size according to the environmental capacity [4]. What also makes the idea interesting is the way random resetting completely changes the properties of the diffusion process. In particular, the mean first passage time of diffusion with resetting is finite, which is in sharp contrast to the standard diffusion case [5; 6]. This fact demonstrates the efficiency of searching strategies with a resetting mechanism [7]. However, if the resetting timer has an infinite mean, then the mean first passage time may be infinite. Intermittent processes have found important applications in other fields, such as: phenotypic diversity, population growth and information in fluctuating environments [8], ecology [9], enzymatic reactions [10], and computer science and optimization problems [11]. The historical background of processes with resetting can be found in [12]. The interested reader is referred to a recent comprehensive review by Evans et al. [6]. Some of the latest results related to non-Markovian resetting models can be found in [13; 14; 15]. We also refer the reader to the following relevant works related to restarted processes, published well before the terms "restart" and "resetting" became popular [16; 17; 18]. To characterize a process with resetting, one needs to deal with two sources of randomness: the first accounts for the particle motion between resetting times; the second is the point process determining the moments of resetting.
The usual description of motion with resetting is via continuous-time random walks (CTRW) with an additional resetting mechanism, or via related deterministic partial differential equations of the Fokker-Planck type [6]. Here we use another approach, founded on the theory of Levy processes and the corresponding stochastic differential equations (SDEs) - the so-called jump-diffusion models. There is a rich history of research on Levy processes. The most well-known is obviously Brownian motion (or the Wiener process). Other important examples include \(\alpha\)-stable, Linnik, Mittag-Leffler, Gamma, or Laplace processes [19]. Their first applications arose quickly in the physical [20; 21] and financial [22; 23; 24] sciences. They have naturally spread to many other fields of science, e.g. as models in biology, chemistry, data mining, statistics, etc. [25]. In the 1940s, thanks to Kiyoshi Ito, the theory of stochastic differential equations was developed [26], [27]. The stochastic Ito integral and the celebrated Ito lemma made it possible to integrate functions or processes with respect to the Wiener process. Also, other types of stochastic integrals with an important role in physics, such as the Stratonovich integral, were developed. Jump-diffusion models are defined as SDEs in which the Brownian noise term is replaced or complemented by arbitrary Levy noise [28]. Since general Levy processes have discontinuous trajectories [19], solutions of such Levy-driven SDEs display jumps. Jump-diffusion models have found applications in various fields, such as: finance, statistical physics, pattern theory and computational vision, and many more, see [28; 29] and references therein. As we will show in the following, these processes are also tailor-made to describe and analyze processes with resetting. In this paper we introduce a general stochastic framework for processes with resetting in the language of stochastic differential equations (SDEs) with jumps (jump-diffusion models). Using the stochastic approach, a version of the Ito lemma for processes with resetting will be derived. Next, applying the latter result, we will derive the moment generating function (MGF) and the Fourier transform for diffusion with resetting. Using the inverse Fourier transform we will then determine the probability density function (PDF) of the process in explicit form. The corresponding Fokker-Planck equation will also be derived in various forms. We will discuss the equivalence of our Fokker-Planck equation with the one given in [6]. Moreover, we will find the infinitesimal generator and the adjoint operator for the Markovian resetting process. In addition, we will extend the above results to the case of time-nonhomogeneous Poissonian resetting. We will show that, depending on the intensity function of the resetting events, we can observe convergence to a stationary distribution, convergence to a point, or subdiffusive or diffusive behaviour at long times. It should be underlined that the approach presented here can be easily generalized to the case of non-Markovian resetting times as well as other, not necessarily Brownian, driving processes (Levy processes, fractional processes, etc.).

## II Stochastic representation of processes with resetting

Let us consider a diffusing particle with initial position \(x_{0}\) and Poissonian resetting with rate \(r\) to position \(x_{R}\).
In general the initial position \(x_{0}\) and the resetting position \(x_{R}\) can be distinct; they can even be random (in this paper we assume that \(x_{0}\) and \(x_{R}\) are constant; however, the methodology we use can also be applied to the random case, which will lead to different results and distributions). The position of such a particle at time \(t\) is usually defined in the following way [6]: \[\begin{array}{ll}x(t+\mathrm{d}t)=x_{R}&\mbox{with probability }r\,\mathrm{d}t\\ &\\ =x(t)+\xi(t)(\mathrm{d}t)^{\frac{1}{2}}&\mbox{with probability }(1-r\,\mathrm{d}t),\end{array} \tag{1}\] where \(\xi(t)\) is Gaussian white noise with mean \(0\) and variance \(2D\). The typical trajectory of a diffusion process with resetting is depicted in Fig. 1. The Fokker-Planck equation corresponding to (1) has the form [6] \[\partial_{t}p(x,t)=D\partial_{xx}p(x,t)+r\delta(x-x_{R})-rp(x,t), \tag{2}\] with initial condition \(p(x,0)=\delta(x-x_{0})\). Here \(p(x,t)\) is the PDF of the diffusing particle with resetting and \(\delta(\cdot)\) is the Dirac delta. In what follows we will put \(D=1/2\) for simplicity. Using the above formalism one can examine the interesting phenomenon of non-equilibrium steady states [6]. Even though the stationary state is independent of time, there is a driving force, in the form of the resetting, that creates a probability flow. In purely mathematical terms this means that the gradient of the stationary state is non-zero. This corresponds to the physical concept of a non-equilibrium steady state. A nice review of this topic can be found in [30]. Also, in [6] absorption of the process by a trap was studied. For standard diffusion, the time to absorption follows the Levy distribution, which has an infinite mean (being an \(\alpha\)-stable distribution with \(\alpha=1/2\)). When resetting is introduced, the mean time to absorption becomes finite and we are able to find an optimal resetting rate [6]. This fact clearly shows that resetting can be beneficial in searching strategies. In this paper we introduce a different approach to define and analyze processes with resetting. Namely, we will use jump-diffusion processes to define the stochastic dynamics with resetting. A jump-diffusion process is defined as the solution of the following SDE [28]: \[dX_{t}=\mu_{t}dt+\sigma_{t}dW_{t}+\nu_{t}dL_{t}\;,\;\;\;X_{0}=x_{0}. \tag{3}\] Here \(W_{t}\) is the standard Brownian motion and \(L_{t}\) is a Levy process; the latter is usually the Poisson process and introduces jumps into the observed dynamics. The coefficients \(\mu_{t}\), \(\sigma_{t}\) and \(\nu_{t}\) are the parameters of the model, which can in general be space- and time-dependent. It is also assumed that \(W_{t}\) and \(L_{t}\) are independent. Recall that a Levy process \(L_{t}\) is a stochastic process satisfying [28]: 1. \(L_{0}=0\), 2. \(L_{t}\) has independent increments, 3. \(L_{t}\) has stationary increments, 4. \(\mathop{\forall}\limits_{\varepsilon,t>0}\lim\limits_{h\to 0}P\left(|L_{t+h}-L_{t}|>\varepsilon\right)=0\), i.e. \(L_{t}\) is continuous in probability. Since every Levy process is a Markov process, solutions of the SDE (3) are Markovian. Typical examples of Levy processes are: Brownian motion, the Poisson process, and the \(\alpha\)-stable Levy process. Brownian motion is the only Levy process (up to a constant drift) with continuous trajectories. All other Levy processes have jumps. The Poisson process is a non-decreasing Levy process with jumps of size 1 and flat periods between jumps.
Times between consecutive jumps of the Poisson process are independent and drawn from the exponential distribution with rate \(r>0\) (i.e. with mean \(1/r\)). The constant \(r\) is called the intensity of the Poisson process. Now let us consider the following particular case of a jump-diffusion process: \[\mathrm{d}X_{t}=\mathrm{d}W_{t}+\left(x_{R}-X_{t}\right)\mathrm{d}N_{t},\quad X_{0}=x_{0}, \tag{4}\] where \(N_{t}\) is the Poisson process with intensity \(r\), and \(x_{0}\) and \(x_{R}\) are constants. We argue that (4) is the continuous-time analogue of diffusion with Poissonian resetting defined in (1). Indeed, whenever \(N_{t}\) has a jump, we get \(dN_{t}=1\) and therefore the term \(\left(x_{R}-X_{t}\right)\mathrm{d}N_{t}\) sends the process back to the resetting position \(x_{R}\). Between the resetting times, i.e. when \(N_{t}\) is constant, we get \(dN_{t}=0\), so in this case \(X_{t}\) evolves as \(W_{t}\). Thus between consecutive resetting times the particle performs Brownian motion. A typical trajectory obtained using (4) can be seen in Fig. 1. Clearly, we observe a diffusing particle with resetting to \(x_{R}=2\). We can also write down the integrated form of (4): \[X_{t}=x_{0}+W_{t}+\int_{0}^{t}(x_{R}-X_{s})dN_{s}=x_{0}+W_{t}+\sum_{n=1}^{N_{t}}\left(x_{R}-X_{\tau_{n}}\right). \tag{5}\] Here \(\tau_{n}=\inf\{t>0:N_{t}\geq n\}\) are the consecutive resetting times. In what follows we will show that the stochastic representation (4) derived here constitutes a general framework for the analysis of processes with resetting. It allows one to study many key properties of intermittent processes both analytically and using Monte Carlo simulation methods. Moreover, it should be underlined that the introduced approach can be easily generalized to the case of non-Poissonian resetting times as well as other arbitrary driving processes. More precisely, suppose that we want to define a stochastic process \(\tilde{X}_{t}\) describing arbitrary dynamics with resetting. Let \(T_{1},T_{2},...\) be an arbitrary sequence of positive random variables describing the times between consecutive resetting events. Moreover, let us assume that \(\tilde{W}_{t}\) is an arbitrary stochastic process describing the particle motion between resetting events. Then \(\tilde{X}_{t}\) can be defined as the solution of the following SDE: \[\mathrm{d}\tilde{X}_{t}=\mathrm{d}\tilde{W}_{t}+\left(x_{R}-\tilde{X}_{t}\right)\mathrm{d}\tilde{N}_{t},\quad\tilde{X}_{0}=x_{0}, \tag{6}\] which is a straightforward generalization of (4). Here \(\tilde{N}_{t}=\max\{n\in\mathbb{N}:\sum_{i=1}^{n}T_{i}\leq t\}\) counts the number of resetting events up to time \(t\). In particular, \(\tilde{N}_{t}\) can be a renewal process. Consequently, any process with resetting can be written in the form of (6).

Figure 1: An exemplary plot of a resetting process. The process starts at \(x_{0}=0\) and then diffuses normally until it moves instantaneously to \(x_{R}=2\) at every consecutive resetting event.
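Representation (4) is also straightforward to simulate. The following minimal Python sketch (our illustration, not code from the paper) discretizes (4) with time step \(\mathrm{d}t\): between resets the particle gains Gaussian increments, and with probability \(\approx r\,\mathrm{d}t\) per step it is sent back to \(x_{R}\), producing trajectories like the one in Fig. 1.

```python
import numpy as np

def simulate_resetting(T=10.0, dt=1e-3, r=1.0, x0=0.0, xR=2.0, seed=None):
    """Grid simulation of dX = dW + (xR - X) dN, X_0 = x0."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        if rng.random() < r * dt:          # dN = 1 with probability ~ r*dt
            x[k + 1] = xR                  # the term (xR - X) dN resets to xR
        else:                              # dN = 0: free Brownian motion
            x[k + 1] = x[k] + rng.normal(0.0, np.sqrt(dt))
    return x

path = simulate_resetting(seed=0)          # one trajectory on [0, T]
```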
### Ito lemma

The Ito lemma is the main tool in the analysis of SDEs. It allows one to find their solutions as well as many of their key properties. Let us derive a version of the Ito lemma corresponding to the resetting process (4). For completeness we will derive it in a more general setting. Let \(Z_{t}=f(X_{t})\), where \(f\) is appropriately smooth and \(X_{t}\) is governed by the following SDE \[dX_{t}=\mu_{t}dt+\sigma_{t}dW_{t}+(x_{R}-X_{t})dN_{t}\;,\;\;\;X_{0}=x_{0}. \tag{7}\] Note that for \(\mu_{t}=0\) and \(\sigma_{t}=1\) we recover the resetting process (4). Using the Taylor expansion we get \[\mathrm{d}f(X_{t})=\sum_{i=1}^{\infty}\frac{f^{(i)}(X_{t})}{i!}\left(\mathrm{d}X_{t}\right)^{i}.\] According to classical Ito calculus we have \((\mathrm{d}W_{t})^{2}=\mathrm{d}t\) and \((\mathrm{d}W_{t})^{n}=0\) for \(n>2\). Moreover, \((\mathrm{d}N_{t})^{n}=\mathrm{d}N_{t}\) for any \(n\in\mathbb{N}\), since \(N_{t}\) is the Poisson process. Therefore \[(\mathrm{d}X_{t})^{2}=\sigma_{t}^{2}\,\mathrm{d}t+\left(x_{R}-X_{t}\right)^{2}\mathrm{d}N_{t}\] and \[(\mathrm{d}X_{t})^{i}=\left(x_{R}-X_{t}\right)^{i}\mathrm{d}N_{t}\] for \(i>2\). Consequently \[\begin{split}\mathrm{d}f(X_{t})&=f^{\prime}(X_{t})\left(\mu_{t}\,\mathrm{d}t+\sigma_{t}\,\mathrm{d}W_{t}+\left(x_{R}-X_{t}\right)\mathrm{d}N_{t}\right)+\frac{1}{2}f^{\prime\prime}(X_{t})\left(\sigma_{t}^{2}\,\mathrm{d}t+\left(x_{R}-X_{t}\right)^{2}\mathrm{d}N_{t}\right)+\sum_{i=3}^{\infty}\frac{f^{(i)}(X_{t})}{i!}\left(x_{R}-X_{t}\right)^{i}\mathrm{d}N_{t}\\ &=f^{\prime}(X_{t})\sigma_{t}\,\mathrm{d}W_{t}+\left(f^{\prime}(X_{t})\mu_{t}+f^{\prime\prime}(X_{t})\frac{\sigma_{t}^{2}}{2}\right)\mathrm{d}t+\left(f^{\prime}(X_{t})\left(x_{R}-X_{t}\right)+\frac{1}{2}f^{\prime\prime}(X_{t})\left(x_{R}-X_{t}\right)^{2}+\sum_{i=3}^{\infty}\frac{f^{(i)}(X_{t})}{i!}\left(x_{R}-X_{t}\right)^{i}\right)\mathrm{d}N_{t}\\ &=f^{\prime}(X_{t})\sigma_{t}\,\mathrm{d}W_{t}+\left(f^{\prime}(X_{t})\mu_{t}+f^{\prime\prime}(X_{t})\frac{\sigma_{t}^{2}}{2}\right)\mathrm{d}t+\left(\sum_{i=1}^{\infty}\frac{f^{(i)}(X_{t})}{i!}\left(x_{R}-X_{t}\right)^{i}+f(X_{t})-f(X_{t})\right)\mathrm{d}N_{t}.\end{split}\] Finally, using the Taylor formula we get the generalized Ito formula \[\mathrm{d}Z_{t}=f^{\prime}(X_{t})\sigma_{t}\,\mathrm{d}W_{t}+\left(f^{\prime}(X_{t})\mu_{t}+f^{\prime\prime}(X_{t})\frac{\sigma_{t}^{2}}{2}\right)\mathrm{d}t+\left(f(x_{R})-f(X_{t})\right)\mathrm{d}N_{t}.\] For \(\mu_{t}=0\) and \(\sigma_{t}=1\) we recover the Ito lemma for the resetting process (4) \[\mathrm{d}f(X_{t})=\frac{1}{2}f^{\prime\prime}(X_{t})\,\mathrm{d}t+f^{\prime}(X_{t})\,\mathrm{d}W_{t}+\left(f(x_{R})-f(X_{t})\right)\mathrm{d}N_{t}. \tag{8}\]

### Moment generating function and Fourier transform

Now, using the above Ito lemma, let us calculate the MGF \(M_{t}(s)=\mathbb{E}\left(e^{sX_{t}}\right)\) of the resetting process \(X_{t}\) given by (4). Here \(\mathbb{E}\) is the expected value. Applying (8) to the function \(f_{s}(x)=e^{sx}\) and putting \(Z_{t}=f_{s}(X_{t})\) we arrive at the following SDE: \[\mathrm{d}Z_{t}=\mathrm{d}f_{s}(X_{t})=sZ_{t}\,\mathrm{d}W_{t}+\frac{1}{2}s^{2}Z_{t}\,\mathrm{d}t+\left(f_{s}(x_{R})-Z_{t}\right)\mathrm{d}N_{t}. \tag{9}\] Note that \(\mathbb{E}\left(Z_{t}\right)\) is equal to the MGF of \(X_{t}\).
Taking the expected value of both sides of (9) we get \[\begin{split}\mathbb{E}\left(\mathrm{d}Z_{t}\right)&=s\mathbb{E}\left(Z_{t}\,\mathrm{d}W_{t}\right)+\frac{1}{2}s^{2}\,\mathrm{d}t\,\mathbb{E}(Z_{t})+\mathbb{E}\left(\left(e^{sx_{R}}-f_{s}(X_{t})\right)\mathrm{d}N_{t}\right)\\ &=\frac{1}{2}s^{2}\,\mathrm{d}t\,\mathbb{E}(Z_{t})+\sum_{i=0}^{1}\mathbb{E}\left(\left(e^{sx_{R}}-f_{s}(X_{t})\right)\mathrm{d}N_{t}\,\Big{|}\,\mathrm{d}N_{t}=i\right)\mathbb{P}\left(\mathrm{d}N_{t}=i\right)\\ &=\frac{1}{2}s^{2}\,\mathrm{d}t\,\mathbb{E}(Z_{t})+\mathbb{E}\left(e^{sx_{R}}-f_{s}(X_{t})\,\Big{|}\,\mathrm{d}N_{t}=1\right)\mathbb{P}\left(\mathrm{d}N_{t}=1\right)\\ &=\frac{1}{2}s^{2}\,\mathrm{d}t\,\mathbb{E}(Z_{t})+r\,\mathrm{d}t\,e^{sx_{R}}-r\,\mathrm{d}t\,\mathbb{E}\left(f_{s}(X_{t})\right).\end{split}\] The sum in the second line of the above equalities is a consequence of the total probability formula. Dividing both sides by \(\mathrm{d}t\), interchanging the order of integration and differentiation and using the dominated convergence theorem (the process clearly has lighter tails than ordinary diffusion, so the theorem can be applied), substituting \(\mathbb{E}(Z_{t})=M_{t}(s)\) and rearranging the equation, we get a simple ordinary differential equation \[\partial_{t}M_{t}(s)+\left(r-\frac{1}{2}s^{2}\right)M_{t}(s)=re^{sx_{R}}.\] This yields the solution \[M_{t}(s)=\frac{re^{sx_{R}}}{r-\frac{1}{2}s^{2}}+c(s)e^{-\left(r-\frac{1}{2}s^{2}\right)t}.\] Using the conditions \(M_{0}(s)=e^{sX_{0}}\) and \(M_{t}(0)=1\) for all \(t\), we get that the unique constant \(c(s)\) equals \[c(s)=e^{sX_{0}}-\frac{r}{r-\frac{1}{2}s^{2}}e^{sx_{R}}.\] Finally, our MGF has the form \[M_{t}(s)=\frac{re^{sx_{R}}}{r-\frac{1}{2}s^{2}}+\left(e^{sX_{0}}-\frac{re^{sx_{R}}}{r-\frac{1}{2}s^{2}}\right)e^{-\left(r-\frac{1}{2}s^{2}\right)t}. \tag{10}\] We can also notice that in the case of no resetting (\(r=0\)) the MGF simplifies to the MGF of Brownian motion starting at \(X_{0}\): \[\frac{0\cdot e^{sx_{R}}}{0-\frac{1}{2}s^{2}}+\left(e^{sX_{0}}-\frac{0}{0-\frac{1}{2}s^{2}}e^{sx_{R}}\right)e^{-\left(0-\frac{1}{2}s^{2}\right)t}=e^{sX_{0}+\frac{1}{2}s^{2}t}.\] Additionally, for large time \(t\rightarrow\infty\) the MGF of the process with resetting converges to \[M(s)=\frac{e^{sx_{R}}}{1-\frac{1}{2r}s^{2}},\] which is the MGF of the Laplace distribution with mean \(x_{R}\) and scale parameter \((2r)^{-\frac{1}{2}}\). The Fourier transform of the process with resetting, \(\varphi_{t}(s)=\mathbb{E}\left(e^{isX_{t}}\right)\), can be derived analogously or using the fact that \(\varphi_{t}(s)=M_{t}(is)\), and equals \[\varphi_{t}(s)=\frac{re^{isx_{R}}}{r+\frac{1}{2}s^{2}}+\left(e^{isX_{0}}-\frac{re^{isx_{R}}}{r+\frac{1}{2}s^{2}}\right)e^{-\left(r+\frac{1}{2}s^{2}\right)t}. \tag{11}\] Plots of \(M_{t}(s)\) and \(\varphi_{t}(s)\) are presented in Fig. 2. We can immediately notice that the MGF is well defined only for \(|s|<\sqrt{2r}\).

Figure 2: A plot of the derived mean of the resetting process (top left), the MGF (top right), the real part of the Fourier transform (bottom left) and the imaginary part of the Fourier transform (bottom right) at different time points (0.1, 0.5, 1.5), shown as blue, red and yellow lines, respectively. The process starts at 0 and resets with rate 1 to the point 5. We can also see the unusual behavior of the MGF around the point \(\sqrt{2}\), due to the divergence of the function.
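Formula (11) is easy to validate by simulation. The sketch below (an illustrative assumption on our part, reusing the discretization sketched earlier with \(x_{0}=x_{R}=0\)) compares the empirical characteristic function \(\frac{1}{N}\sum_{j}e^{isX_{t}^{(j)}}\) with (11).

```python
import numpy as np

def phi_theory(s, t, r=1.0, x0=0.0, xR=0.0):
    """Characteristic function (11) of the resetting process."""
    lap = r * np.exp(1j * s * xR) / (r + 0.5 * s**2)
    return lap + (np.exp(1j * s * x0) - lap) * np.exp(-(r + 0.5 * s**2) * t)

rng = np.random.default_rng(0)
t, r, n_paths, dt = 1.5, 1.0, 20_000, 1e-3
x = np.zeros(n_paths)                       # X_0 = 0 for all paths
for _ in range(int(t / dt)):
    x += rng.normal(0.0, np.sqrt(dt), n_paths)   # Brownian increments
    x[rng.random(n_paths) < r * dt] = 0.0        # Poissonian resets to xR = 0

for s in (0.5, 1.0, 2.0):
    emp = np.exp(1j * s * x).mean()              # empirical E exp(isX_t)
    print(s, emp, phi_theory(s, t, r))           # the two should nearly agree
```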
The moment generating function and the Fourier transform are closely related via the formula \(\varphi_{t}(s)=M_{t}(is)\). However, the advantage of \(\varphi_{t}(s)\) is that it is well defined for any \(s\in\mathbb{R}\); the moment generating function does not have this property. On the other hand, the advantage of \(M_{t}(s)\) is that it is real-valued rather than complex-valued. Both functions \(M_{t}(s)\) (where well defined) and \(\varphi_{t}(s)\) uniquely determine the underlying distribution.

### Explicit PDF

Now, using the inverse Fourier transform, we are able to invert \(\varphi_{t}(s)\). This way we arrive at an explicit formula for the PDF \(p(x,t)\) of the resetting process \(X_{t}\). An exemplary plot of \(p(x,t)\) is presented in Fig. 3. Let us split the Fourier transform (11) into three parts in the following way \[\varphi_{t}(s)=\varphi_{1}+\varphi_{2}+\varphi_{3},\] where \(\varphi_{1}=\frac{r}{r+\frac{1}{2}s^{2}}e^{isx_{R}}\), \(\varphi_{2}=e^{isX_{0}}e^{-\left(r+\frac{1}{2}s^{2}\right)t}\), \(\varphi_{3}=-\frac{r}{r+\frac{1}{2}s^{2}}e^{isx_{R}}e^{-\left(r+\frac{1}{2}s^{2}\right)t}\). Using the well-known properties of the Fourier transform, we can already see that the first term is just the Fourier transform of the Laplace distribution with the corresponding PDF [31] \[\mathcal{F}^{-1}\left\{\varphi_{1}\right\}=f_{\mathcal{L}}(x)=\sqrt{\frac{r}{2}}\exp\left(-\frac{|x-x_{R}|}{(2r)^{-1/2}}\right), \tag{12}\] see also [32; 33] for the relationship between the Laplace distribution and stopping times of random motions. We calculate the second term directly from the definition of the inverse transform \[\mathcal{F}^{-1}\left\{\varphi_{2}\right\}=\frac{1}{2\pi}\int\limits_{\mathbb{R}}e^{-isx}e^{isX_{0}-rt-\frac{t}{2}s^{2}}\,\mathrm{d}s=\frac{e^{-rt}}{2\pi}\int\limits_{\mathbb{R}}e^{i(X_{0}-x)s-\frac{t}{2}s^{2}}\,\mathrm{d}s=\frac{e^{-rt-\frac{(x-X_{0})^{2}}{2t}}}{\sqrt{2\pi t}}=e^{-rt}f_{W_{X_{0}}}(x),\] where \(f_{W_{X_{0}}}(x)=\frac{e^{-\frac{(x-X_{0})^{2}}{2t}}}{\sqrt{2\pi t}}\) is the PDF of standard Brownian motion starting at \(X_{0}\). Let us now write \(\varphi_{3}\) in the following way \[\varphi_{3}=-e^{-rt}\cdot\frac{r}{r+\frac{1}{2}s^{2}}e^{isx_{R}}\cdot e^{-\frac{t}{2}s^{2}}.\] The first factor of the product is independent of \(s\), so it is treated as a scalar in the inverse Fourier transform. The second one is the Fourier transform of the Laplace distribution and the third is simply the Fourier transform of the normal distribution \(\mathcal{N}(0,t)\). Having all this and using the well-known fact that \(\mathcal{F}\{f\}\,\mathcal{F}\{g\}=\mathcal{F}\{f*g\}\), where \(f*g\) is the convolution of functions, we finally get that the explicit PDF \(p(x,t)\) of the resetting process (4) equals \[p(x,t)=f_{\mathcal{L}}(x)+e^{-rt}\left(f_{W_{X_{0}}}(x)-f_{\mathcal{N}(0,t)}*f_{\mathcal{L}}(x)\right). \tag{13}\] Here \(f_{\mathcal{N}(0,t)}(x)\) is the PDF of \(\mathcal{N}(0,t)\). Note that if \(f\) and \(g\) are the PDFs of two independent random variables, say \(V\) and \(W\), then \(f*g\) is the PDF of \(V+W\). We underline that the convolution of the Laplace and normal distributions can be calculated explicitly, see [34].

Figure 3: A histogram of \(X_{t}\) obtained using Monte Carlo simulations compared with the derived analytical PDF of the process at \(t=0.1\). We observe perfect agreement between both results. The process starts at \(0\) and resets to the point \(3\) with rate \(1\).
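The agreement shown in Fig. 3 can be reproduced along the following lines. This sketch (our illustrative assumption, using SciPy densities and a simple quadrature for the convolution in (13)) evaluates \(p(x,t)\) numerically for the parameters of Fig. 3.

```python
import numpy as np
from scipy import stats

def pdf_resetting(x, t, r=1.0, x0=0.0, xR=3.0):
    """PDF (13): f_L + e^{-rt} (f_{W_{x0}} - f_{N(0,t)} * f_L)."""
    lap = stats.laplace(loc=xR, scale=1.0 / np.sqrt(2 * r))
    gauss = stats.norm(loc=x0, scale=np.sqrt(t))
    # Convolution (f_{N(0,t)} * f_L)(x) by trapezoidal quadrature over z.
    z = np.linspace(xR - 10, xR + 10, 4001)
    conv = np.trapz(stats.norm(0, np.sqrt(t)).pdf(x[:, None] - z) * lap.pdf(z),
                    z, axis=1)
    return lap.pdf(x) + np.exp(-r * t) * (gauss.pdf(x) - conv)

x = np.linspace(-3, 6, 400)
p = pdf_resetting(x, t=0.1)   # compare against a simulated histogram (Fig. 3)
print(np.trapz(p, x))         # should be close to 1
```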
### Moments

Applying the above formula for the PDF we are also able to calculate the mean of the resetting process \[\mathbb{E}(X_{t})=\int_{\mathbb{R}}xp(x,t)\,\mathrm{d}x=\int_{\mathbb{R}}xf_{\mathcal{L}}(x)\,\mathrm{d}x+e^{-rt}\left(\int_{\mathbb{R}}x\left(f_{W_{X_{0}}}(x)-f_{\mathcal{N}(0,t)}*f_{\mathcal{L}}(x)\right)\mathrm{d}x\right).\] Using the fact that the convolution of PDFs is the PDF of the sum of the corresponding independent random variables, we get that \[\mathbb{E}(X_{t})=x_{R}+e^{-rt}\left(X_{0}-x_{R}\right).\] Let us now derive the formula for the \(n\)-th moment of the resetting process. For simplicity of notation we focus on the case \(x_{R}=0\). We have \[\mathbb{E}(X_{t}^{n})=\int\limits_{\mathbb{R}}x^{n}p(x,t)\,\mathrm{d}x=\int\limits_{\mathbb{R}}x^{n}f_{\mathcal{L}}(x)\,\mathrm{d}x+e^{-rt}\left(\int\limits_{\mathbb{R}}x^{n}f_{W_{X_{0}}}(x)\,\mathrm{d}x-\int\limits_{\mathbb{R}}x^{n}f_{\mathcal{N}(0,t)}*f_{\mathcal{L}}(x)\,\mathrm{d}x\right).\] The first integral is the \(n\)-th moment of the Laplace random variable \(L\) with mean \(0\) and scale parameter \((2r)^{-\nicefrac{{1}}{{2}}}\). Using the scaling property it can be rewritten as \(L=(2r)^{-\nicefrac{{1}}{{2}}}Y\), where \(Y\sim\mathcal{L}(0,1)\). It is well known that the \(n\)-th moment of \(Y\) equals \(n!\) for even \(n\) and is zero otherwise. Therefore \[\mathbb{E}(L^{n})=\mathbb{E}\left(\left((2r)^{-1/2}Y\right)^{n}\right)=(2r)^{-n/2}\,\mathbb{E}\,Y^{n}=(2r)^{-n/2}n! \tag{14}\] for even \(n\) and zero otherwise. The second integral, being the \(n\)-th moment of the normal distribution \(\mathcal{N}(X_{0},t)\), can be stated as follows [35] \[\mathbb{E}(W_{X_{0}}^{n})=\begin{cases}(\sqrt{2t})^{n}\frac{\Gamma((n+1)/2)}{\sqrt{\pi}}\Phi\left(-\frac{n}{2},\frac{1}{2};-\frac{X_{0}^{2}}{2t}\right)&\text{for even }n\\ X_{0}(\sqrt{t})^{n-1}2^{(n+1)/2}\frac{\Gamma(n/2+1)}{\sqrt{\pi}}\Phi\left(\frac{1-n}{2},\frac{3}{2};-\frac{X_{0}^{2}}{2t}\right)&\text{for odd }n.\end{cases} \tag{15}\] Here \(\Phi(a,b;c)\) is Kummer's confluent hypergeometric function [35]. The case \(X_{0}=0\) simplifies the results even further: \[\mathbb{E}(W_{0}^{n})=\mathbb{E}(W^{n})=t^{\frac{n}{2}}(n-1)!! \tag{16}\] for even \(n\) and zero otherwise. The third integral is the most interesting. As we can see, it represents the \(n\)-th moment of the sum of the normal random variable \(W\sim\mathcal{N}(0,t)\) and the Laplace random variable \(L\sim\mathcal{L}(0,(2r)^{-\nicefrac{{1}}{{2}}})\). Note that \(W\) and \(L\) are independent. Let us calculate the \(n\)-th moment of their sum \[\int\limits_{\mathbb{R}}x^{n}f_{\mathcal{N}(0,t)}*f_{\mathcal{L}}\,\mathrm{d}x=\mathbb{E}((W+L)^{n}).\] Utilizing the binomial theorem we get the following result \[\mathbb{E}((W+L)^{n})=\mathbb{E}\left(\sum_{k=0}^{n}\binom{n}{k}W^{k}L^{n-k}\right)=\sum_{k=0}^{n}\binom{n}{k}\left(\mathbb{E}(W^{k})\right)\left(\mathbb{E}(L^{n-k})\right)=\sum_{\begin{subarray}{c}k=0,\\ k\text{ even}\end{subarray}}^{n}\binom{n}{k}\left(\mathbb{E}(W^{k})\right)\left(\mathbb{E}(L^{n-k})\right), \tag{17}\] for even \(n\) and zero otherwise. This is due to the fact that both \(W\) and \(L\) are independent and symmetric with mean \(0\). Gathering the terms (14), (15) and (17) we get the final expression for the \(n\)-th moment of \(X_{t}\) \[\mathbb{E}(X_{t}^{n})=\mathbb{E}(L^{n})+e^{-rt}\,\mathbb{E}(W_{X_{0}}^{n})-e^{-rt}\,\mathbb{E}\left((W+L)^{n}\right).\] We note that another way of deriving the moments of the resetting process is by calculating derivatives of the MGF \(M_{t}(s)\) in eq. (10).
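These moment formulas can be cross-checked against simulation. Below is a minimal sketch (our illustrative assumption, for \(x_{R}=X_{0}=0\)) that evaluates the closed-form \(n\)-th moment via (14), (16) and (17) and compares it with a Monte Carlo estimate; for \(n=2\) it reproduces the MSD \((1-e^{-rt})/r\).

```python
import numpy as np
from math import comb, factorial

def dfact(n):                  # double factorial, with dfact(-1) = dfact(0) = 1
    return 1 if n <= 0 else n * dfact(n - 2)

def moment(n, t, r=1.0):
    """n-th moment of X_t for x_R = X_0 = 0 via (14), (16), (17); zero for odd n."""
    if n % 2:
        return 0.0
    EL = lambda m: (2 * r) ** (-m / 2) * factorial(m) if m % 2 == 0 else 0.0
    EW = lambda m: t ** (m / 2) * dfact(m - 1) if m % 2 == 0 else 0.0
    EWL = sum(comb(n, k) * EW(k) * EL(n - k) for k in range(0, n + 1, 2))
    return EL(n) + np.exp(-r * t) * (EW(n) - EWL)

# Monte Carlo cross-check, reusing the grid simulation sketched earlier:
rng = np.random.default_rng(1)
t, r, dt, N = 2.0, 1.0, 1e-3, 50_000
x = np.zeros(N)
for _ in range(int(t / dt)):
    x += rng.normal(0, np.sqrt(dt), N)
    x[rng.random(N) < r * dt] = 0.0
for n in (2, 4):
    print(n, (x ** n).mean(), moment(n, t, r))   # the pairs should nearly agree
```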
### Fokker-Planck equations

Having the PDF (13) of the resetting process we can directly derive the corresponding Fokker-Planck equation. Let us calculate the necessary derivatives of (13): \[\begin{split}\partial_{t}p(x,t)&=\partial_{t}\left(f_{\mathcal{L}}+e^{-rt}\left(f_{W_{X_{0}}}-f_{\mathcal{N}(0,t)}*f_{\mathcal{L}}\right)\right)=\partial_{t}\left(e^{-rt}\left(f_{W_{X_{0}}}-f_{\mathcal{N}(0,t)}*f_{\mathcal{L}}\right)\right)\\ &=-re^{-rt}\left(f_{W_{X_{0}}}-f_{\mathcal{N}(0,t)}*f_{\mathcal{L}}\right)+e^{-rt}\partial_{t}\left(f_{W_{X_{0}}}-f_{\mathcal{N}(0,t)}*f_{\mathcal{L}}\right)\\ &=-rp(x,t)+rf_{\mathcal{L}}+e^{-rt}\left(\partial_{t}f_{W_{X_{0}}}-\left(\partial_{t}f_{\mathcal{N}(0,t)}\right)*f_{\mathcal{L}}\right).\end{split}\] The second spatial derivative equals \[\partial_{xx}p(x,t)=\partial_{xx}\left(f_{\mathcal{L}}+e^{-rt}\left(f_{W_{X_{0}}}-f_{\mathcal{N}(0,t)}*f_{\mathcal{L}}\right)\right)=\partial_{xx}f_{\mathcal{L}}+e^{-rt}\left(\partial_{xx}f_{W_{X_{0}}}-\left(\partial_{xx}f_{\mathcal{N}(0,t)}\right)*f_{\mathcal{L}}\right).\] We can now note that the PDF of the Wiener process fulfills the standard diffusion equation \[\partial_{t}f_{W_{X_{0}}}=\frac{1}{2}\partial_{xx}f_{W_{X_{0}}},\] so the time derivative can be rewritten as \[\partial_{t}p(x,t)=-rp(x,t)+rf_{\mathcal{L}}+\frac{1}{2}\left(\partial_{xx}f_{\mathcal{L}}+e^{-rt}\left(\partial_{xx}f_{W_{X_{0}}}-\left(\partial_{xx}f_{\mathcal{N}(0,t)}\right)*f_{\mathcal{L}}\right)-\partial_{xx}f_{\mathcal{L}}\right)=\frac{1}{2}\partial_{xx}p(x,t)+\left(r-\frac{1}{2}\partial_{xx}\right)f_{\mathcal{L}}-rp(x,t).\] The derivative (in the weak sense) of the density of the Laplace distribution is \[\partial_{x}f_{\mathcal{L}}=\partial_{x}\sqrt{\frac{r}{2}}e^{-\sqrt{2r}|x-x_{R}|}=-re^{-\sqrt{2r}|x-x_{R}|}\partial_{x}\left(|x-x_{R}|\right)=-re^{-\sqrt{2r}|x-x_{R}|}\operatorname{sgn}(x-x_{R}),\] while the second spatial derivative is \[\partial_{xx}f_{\mathcal{L}}=\sqrt{2r^{3}}e^{-\sqrt{2r}|x-x_{R}|}\operatorname{sgn}^{2}(x-x_{R})-re^{-\sqrt{2r}|x-x_{R}|}2\delta(x-x_{R})=e^{-\sqrt{2r}|x-x_{R}|}\left(\sqrt{2r^{3}}\operatorname{sgn}^{2}(x-x_{R})-2r\delta(x-x_{R})\right)\overset{\text{a.e.}}{=}f_{\mathcal{L}}\left(2r-2\sqrt{2r}\delta(x-x_{R})\right).\] Plugging this result into the expression for the time derivative we arrive at the final Fokker-Planck equation for the resetting process \[\partial_{t}p(x,t)=\frac{1}{2}\partial_{x}^{2}p(x,t)+\sqrt{2r}\delta(x-x_{R})f_{\mathcal{L}}-rp(x,t).\] The above equation differs from the one given in [6], see (2); however, we can check the equivalence of both equations by looking at them in the frequency domain via the Fourier transform. We only need to look at the component \(\sqrt{2r}\delta(x-x_{R})f_{\mathcal{L}}\). We have \[\mathcal{F}\left\{\sqrt{2r}\delta(x-x_{R})f_{\mathcal{L}}\right\}=\sqrt{2r}\int\limits_{\mathbb{R}}\sqrt{\frac{r}{2}}e^{-\sqrt{2r}|x-x_{R}|}\delta(x-x_{R})e^{isx}\,\mathrm{d}x=re^{-\sqrt{2r}|x_{R}-x_{R}|}e^{isx_{R}}=re^{isx_{R}}=\mathcal{F}\left\{r\delta(x-x_{R})\right\}.\] Both equations are equal in the frequency domain, implying that their solutions coincide.
One should also underline that the difference between the two equations lies only in the Dirac delta term \(\delta(x-x_{R})\), which affects the solutions only on a set of points of zero Lebesgue measure. Since the random variable \(X_{t}\) is continuous, modification of its PDF on a set of zero Lebesgue measure does not change the distribution. Let us note that the above Fokker-Planck equation can be used to derive the stationary PDF of the resetting process simply by putting \(\partial_{t}p(x,t)=0\) and calculating the corresponding \(p\).

### Infinitesimal generator

The infinitesimal generator is a key operator in the theory of Markov processes [36]. It contains a great deal of information about the Markov process. In particular, it can be used to find the corresponding evolution equations. Applying the general formula for jump-diffusion processes [28], we get that the infinitesimal generator \(\mathcal{A}\) of the resetting process (4) equals \[\mathcal{A}g(x)=\frac{1}{2}\partial_{xx}g(x)+r\left(g(x_{R})-g(x)\right), \tag{18}\] for an appropriately smooth function \(g\). The first component, \(\frac{1}{2}\partial_{xx}g(x)\), is the diffusive part of the generator. The second component (the jump part) can be identified as the generator of a compound Poisson process with jump sizes \(x_{R}-X_{t}\), which corresponds to the jumps of the process to the resetting point \(x_{R}\) with intensity \(r\). Let us now derive the adjoint generator of our resetting process. Denoting by \(\mathcal{O}\) the operator \(\mathcal{O}g(x)=g(x_{R})\) appearing in the jump part of (18), it follows from the calculations below \[\int f(z)\mathcal{O}g(z)\,\mathrm{d}z=\int g(x_{R})f(z)\,\mathrm{d}z=\int g(u)\delta(u-x_{R})\,\mathrm{d}u\int f(z)\,\mathrm{d}z=\iint g(u)\delta(u-x_{R})f(z)\,\mathrm{d}z\,\mathrm{d}u=\int g(z)\delta(z-x_{R})\int f(u)\,\mathrm{d}u\,\mathrm{d}z=\int g(z)\mathcal{O}^{*}f(z)\,\mathrm{d}z,\] that the adjoint generator \(\mathcal{A}^{*}\) equals \[\mathcal{A}^{*}g(x)=\frac{1}{2}\partial_{xx}g(x)+r\left(\delta(x-x_{R})\int\limits_{\mathbb{R}}g(z)\,\mathrm{d}z-g(x)\right). \tag{19}\] From the general theory of Markov processes we know that the PDF of a Markov process with adjoint generator \(\mathcal{A}^{*}\) satisfies the following Fokker-Planck equation [37] \[\partial_{t}p=\mathcal{A}^{*}p.\] Therefore, applying (19) we obtain the following Fokker-Planck formula for the resetting process \[\partial_{t}p(x,t)=\frac{1}{2}\partial_{xx}p(x,t)+r\left(\delta(x-x_{R})\int\limits_{\mathbb{R}}p(z,t)\,\mathrm{d}z-p(x,t)\right).\] Since \(\int_{\mathbb{R}}p(z,t)\,\mathrm{d}z=1\), we obtain the same Fokker-Planck equation as the one derived in [6] for resetting processes, cf. equation (2).

### Nonhomogeneous in time Poissonian resetting

The non-homogeneous Poisson process (NPP) is a natural generalization of the standard Poisson process to the case of time-dependent intensity. It is defined as a counting process with the following properties [38]:

* \(\tilde{N}_{0}=0\),
* \(\tilde{N}_{t}\) has independent increments,
* the increments \(\tilde{N}_{t+h}-\tilde{N}_{t}\) are Poisson distributed with mean \(\int_{t}^{t+h}r(s)ds\).

Here, the non-negative function \(r(t)\) is called the intensity function of \(\tilde{N}_{t}\). For higher values of \(r(t)\) we observe more jumps of \(\tilde{N}_{t}\), whereas small \(r(t)\) gives fewer jumps on average. For \(r(t)=r=const\) we recover the standard Poisson process.
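Jump times of an NPP with intensity \(r(t)\) on a finite horizon can be sampled, for instance, by thinning a homogeneous Poisson process whose rate dominates \(r(t)\). The sketch below (our illustrative assumption, written for the power-law intensity used in the next subsection) implements this.

```python
import numpy as np

def npp_times(T, r, p, seed=None):
    """Jump times of an NPP with r(t) = r*(t+1)**p on [0, T], via thinning."""
    rng = np.random.default_rng(seed)
    r_max = r * max(1.0, (T + 1.0) ** p)   # upper bound on r(t) over [0, T]
    t, times = 0.0, []
    while True:
        t += rng.exponential(1.0 / r_max)  # candidate jump of a rate-r_max PP
        if t > T:
            return np.array(times)
        if rng.random() < r * (t + 1.0) ** p / r_max:  # keep with prob r(t)/r_max
            times.append(t)

# Sanity check: mean count should be R(T) = (r/(p+1))((T+1)^{p+1} - 1).
print(len(npp_times(100.0, 1.0, -0.5, seed=0)))  # R(100) = 2(101**0.5 - 1) ~ 18.1
```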
Now, we introduce a modified SDE defining our resetting process \[\mathrm{d}X_{t}=\mathrm{d}W_{t}+(x_{R}-X_{t})\,\mathrm{d}\tilde{N}_{t}. \tag{20}\] We put \(X_{0}=x_{R}=0\) for simplicity. Here \(\tilde{N}_{t}\) is the NPP with intensity function \(r(t)\), and \(W_{t}\) is the standard Brownian motion independent of \(\tilde{N}_{t}\). In our further analysis we will focus on a power-law intensity function and the impact it has on both the mean square displacement (MSD) and the overall distribution. We will thus assume that the intensity function has the form \(r(t)=r\cdot(t+1)^{p}\). The previously studied homogeneous resetting is now the special case \(p=0\), which makes it a natural starting point. With \(p=0\) the process displayed a nondegenerate stationary distribution, so any increase in the resetting frequency should have a profound impact on the probability law, most likely creating a degenerate distribution. On the other hand, decreasing the resetting intensity should intuitively recover classical diffusion in the long-time limit. This will be verified in detail. We will show that diffusive, subdiffusive, stationary and deterministic behavior can be observed at long times, depending on the parameter \(p\). Now let us define \(R(t)=\int_{0}^{t}r(\omega)\,\mathrm{d}\omega\) as the mean number of resets up to time \(t\). Using exactly the same methods and calculations as in the homogeneous case, based on the Ito lemma, we conclude that the Fourier transform of the process must be a solution of the following differential equation \[\partial_{t}\varphi_{t}(s)+\left(r(t)+\frac{1}{2}s^{2}\right)\varphi_{t}(s)=r(t).\] Integration yields the Fourier transform \[\varphi_{t}(s)=e^{-R(t)-\frac{1}{2}ts^{2}}\left(1+\int_{0}^{t}r(w)e^{R(w)+\frac{1}{2}ws^{2}}\,\mathrm{d}w\right), \tag{21}\] which after simple inspection can be inverted to obtain the density of the process given in (20) \[f_{X_{t}}(x,t)=e^{-R(t)}f_{\mathcal{N}(0,t)}(x)+e^{-R(t)}\int\limits_{0}^{t}r(\omega)e^{R(\omega)}f_{\mathcal{N}(0,t-\omega)}(x)\,\mathrm{d}\omega. \tag{22}\] We can easily extract the MSD simply by multiplying by \(x^{2}\) and integrating. We get \[MSD(X_{t})=e^{-R(t)}MSD(W_{t})+e^{-R(t)}\int\limits_{0}^{t}r(\omega)e^{R(\omega)}MSD(W_{t-\omega})\,\mathrm{d}\omega,\] which after simple calculations turns into \[MSD(X_{t})=e^{-R(t)}\int\limits_{0}^{t}e^{R(\omega)}\,\mathrm{d}\omega. \tag{23}\] Now, let us assume that the intensity function has the form \(r(t)=r\cdot(t+1)^{p}\), where \(p\) can be any real number and \(r\) is a positive constant. Then we get \(R(t)=\int_{0}^{t}r(\omega)\,\mathrm{d}\omega=\frac{r}{p+1}\left((t+1)^{p+1}-1\right)\) if \(p\neq-1\) and \(R(t)=\ln{(t+1)^{r}}\) if \(p=-1\). We already know the asymptotic behavior of \(X_{t}\) for \(p=0\) from the previous sections. Let us now check the case \(p=-1\). We have \[MSD(X_{t})=e^{-\ln{(t+1)^{r}}}\int\limits_{0}^{t}e^{\ln{(\omega+1)^{r}}}\,\mathrm{d}\omega=\frac{1}{(t+1)^{r}}\int\limits_{0}^{t}(\omega+1)^{r}\,\mathrm{d}\omega.\] The MSD is now equal to \(\frac{t+1}{r+1}-\frac{1}{(t+1)^{r}(r+1)}\), which for large times gives us linear diffusive scaling. Next, if \(p\in(-1,0)\) then \(r(t)\) is decreasing but \(R(t)\) is increasing, giving us the most interesting case to investigate.
We obtain \[MSD(X_{t})=e^{-\frac{r}{p+1}\left(\left(t+1\right)^{p+1}-1\right)}\int\limits_{0}^{t}e^{\frac{r}{p+1}\left(\left(\omega+1\right)^{p+1}-1\right)}\,\mathrm{d}\omega=e^{-\frac{r}{p+1}(t+1)^{p+1}}\int\limits_{0}^{t}e^{\frac{r}{p+1}(\omega+1)^{p+1}}\,\mathrm{d}\omega.\] Applying de l'Hospital's rule we get for large times \[MSD(X_{t})=\frac{\int_{0}^{t}e^{\frac{r}{p+1}(\omega+1)^{p+1}}\,\mathrm{d}\omega}{e^{\frac{r}{p+1}(t+1)^{p+1}}}\approx\frac{e^{\frac{r}{p+1}(t+1)^{p+1}}}{r(t+1)^{p}e^{\frac{r}{p+1}(t+1)^{p+1}}}=\frac{1}{r}(t+1)^{-p}.\] The process now displays subdiffusive behavior with exponent \(-p\in(0,1)\). Consequently, for \(p<-1\) we obtain for large times \[MSD(X_{t})=e^{-\frac{r}{p+1}(t+1)^{p+1}}\int\limits_{0}^{t}e^{\frac{r}{p+1}(\omega+1)^{p+1}}\,\mathrm{d}\omega\approx\int\limits_{0}^{t}e^{\frac{r}{p+1}(\omega+1)^{p+1}}\,\mathrm{d}\omega.\] Again, applying de l'Hospital's rule we get \[MSD(X_{t})\approx t\] for large times. As expected, the process displays normal diffusive scaling of the MSD. Examining the case \(p>0\) in an analogous way, we find that \(MSD(X_{t})\approx\frac{1}{r}(t+1)^{-p}\), which converges to \(0\) for large times. Let us now focus on the asymptotic distributions. Again, we expect different results for different \(p\). The case \(p=0\) corresponds to the homogeneous Poisson process and was analyzed previously, giving rise to the Laplace stationary distribution. For \(p>0\) we get from (21) and de l'Hospital's rule that the Fourier transform satisfies \[\varphi_{t}(s)\to 1\] for large times. This implies that \(X_{t}\overset{d}{\to}x_{R}=0\) as \(t\to\infty\). Here \(\overset{d}{\to}\) denotes convergence in distribution. This result agrees with our intuition: for \(p>0\) the number of resetting events increases with time, therefore \(X_{t}\) keeps coming back to \(x_{R}\) more and more often. For \(p\in(-1,0)\) we get from (21) and de l'Hospital's rule that the Fourier transform satisfies \[\varphi_{t}(s)\approx\frac{1}{1+\frac{s^{2}}{2r(t+1)^{p}}}\] for large times. This means that the distribution of \(X_{t}\) is asymptotically Laplace with subdiffusive \(MSD(X_{t})\approx t^{-p}\) at large times. An analogous result is obtained for \(p=-1\): \[\varphi_{t}(s)\approx\frac{1}{1+\left(\frac{t+1}{2r}\right)s^{2}}\] for large times, with asymptotically linear-in-time MSD. Looking at \(p<-1\), we get that for large times the corresponding characteristic function satisfies \[\varphi_{t}(s)\approx e^{-R(t)}e^{-ts^{2}/2}+\frac{r(t)(t+1)}{p}\frac{1}{1+\frac{t+1}{2p}s^{2}},\] which is a combination of Gaussian and Laplace distributions. The corresponding MSD is asymptotically linear in time. Note that for \(p\to-\infty\), i.e. when the resetting events vanish, only the diffusive Brownian part is left in the above formula. Summarizing the above findings for large times:

* for \(p>0\) we observe a degenerate distribution concentrated at \(x_{R}\) with \(MSD(X_{t})\approx const\);
* for \(p=0\) a stationary Laplace distribution with \(MSD(X_{t})\approx const\) is observed;
* for \(p\in(-1,0)\) we get a non-stationary Laplace distribution with subdiffusive \(MSD(X_{t})\approx t^{-p}\);
* for \(p=-1\) we get a non-stationary Laplace distribution with diffusive \(MSD(X_{t})\approx t\);
* for \(p<-1\) we observe a combination of normal and Laplace distributions with diffusive \(MSD(X_{t})\approx t\). As \(p\rightarrow-\infty\), standard Brownian diffusion is recovered.

These asymptotics can also be checked numerically; see the simulation sketch below.
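As a sketch of such a numerical check (our illustration, not the paper's simulation code), one can estimate the MSD of (20) on a grid and compare it with a direct quadrature of (23):

```python
import numpy as np

def msd_simulated(T, r, p, n_paths=2000, dt=1e-2, seed=0):
    """Monte Carlo MSD of (20) with r(t) = r*(t+1)**p, x0 = xR = 0."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_paths)
    for k in range(int(T / dt)):
        t = k * dt
        x += rng.normal(0.0, np.sqrt(dt), n_paths)
        x[rng.random(n_paths) < r * (t + 1.0) ** p * dt] = 0.0  # reset step
    return (x ** 2).mean()

def msd_theory(T, r, p, m=20_000):
    """Quadrature of (23): exp(-R(T)) * integral of exp(R(w)) over [0, T]."""
    w = np.linspace(0.0, T, m)
    R = r / (p + 1) * ((w + 1) ** (p + 1) - 1) if p != -1 else r * np.log(w + 1)
    return np.trapz(np.exp(R - R[-1]), w)       # stable form exp(R(w)-R(T))

for p in (-0.5, 0.0, 0.5):
    print(p, msd_simulated(10.0, 1.0, p), msd_theory(10.0, 1.0, p))
```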
In Figs. 4 and 5 we can observe the estimated MSD with a fitted (anomalous) diffusion exponent. We observe perfect agreement between simulation and theory.

Figure 4: Monte Carlo simulation results (solid lines) compared with the theoretical asymptotics (dotted lines) of the MSD of Brownian motion under nonhomogeneous Poissonian resetting with \(r(t)=r\cdot(t+1)^{p}\), calculated for different \(p\leq 0\). The parameter \(\mu\) here is the corresponding power-law exponent of the theoretical asymptotic MSD. The time horizon of the simulations was of order \(10^{2}\), while the number of samples was equal to \(10^{4}\). We can observe the convergence of the anomalous diffusion exponents to the theoretical ones.

## III Conclusions

Summarizing, we have introduced a general stochastic representation for processes with resetting. It allows one to analyze such processes using stochastic differential equations, the Ito lemma and related tools. We have shown the robustness of our approach for the cases of homogeneous and nonhomogeneous Poissonian resetting. We would like to underline that the results derived in this paper build a general framework for the analysis of stochastic processes with intermittent random resetting. It allows one to analyze processes with resetting both analytically and using Monte Carlo simulation methods. Moreover, the presented approach can be easily generalized to the case of non-Markovian resetting times as well as other, not necessarily Brownian, driving processes (Levy flights, Levy processes, fractional processes, etc.).

Figure 5: A Monte Carlo simulation (solid lines) of the MSD of the process with nonhomogeneous Poissonian resetting with intensity function \(r(t)=r\cdot(t+1)^{p}\), calculated for different \(p>0\). Asterisks denote the corresponding theoretical values of the MSD obtained via numerical evaluation of (23). We can see the convergence of the MSD to \(0\), indicating convergence to the degenerate distribution concentrated at \(x_{R}=0\). The convergence rate increases with increasing \(p\).

## Acknowledgments

This research was partially supported by NCN Sonata Bis-9 grant no. 2019/34/E/ST1/00360.
2310.09725
KGQuiz: Evaluating the Generalization of Encoded Knowledge in Large Language Models
Large language models (LLMs) demonstrate remarkable performance on knowledge-intensive tasks, suggesting that real-world knowledge is encoded in their model parameters. However, besides explorations on a few probing tasks in limited knowledge domains, it is not well understood how to evaluate LLMs' knowledge systematically and how well their knowledge abilities generalize, across a spectrum of knowledge domains and progressively complex task formats. To this end, we propose KGQuiz, a knowledge-intensive benchmark to comprehensively investigate the knowledge generalization abilities of LLMs. KGQuiz is a scalable framework constructed from triplet-based knowledge, which covers three knowledge domains and consists of five tasks with increasing complexity: true-or-false, multiple-choice QA, blank filling, factual editing, and open-ended knowledge generation. To gain a better understanding of LLMs' knowledge abilities and their generalization, we evaluate 10 open-source and black-box LLMs on the KGQuiz benchmark across the five knowledge-intensive tasks and knowledge domains. Extensive experiments demonstrate that LLMs achieve impressive performance in straightforward knowledge QA tasks, while settings and contexts requiring more complex reasoning or employing domain-specific facts still present significant challenges. We envision KGQuiz as a testbed to analyze such nuanced variations in performance across domains and task formats, and ultimately to understand, evaluate, and improve LLMs' knowledge abilities across a wide spectrum of knowledge domains and tasks.
Yuyang Bai, Shangbin Feng, Vidhisha Balachandran, Zhaoxuan Tan, Shiqi Lou, Tianxing He, Yulia Tsvetkov
2023-10-15T04:00:36Z
http://arxiv.org/abs/2310.09725v3
# KGQuiz: Evaluating the Generalization of Encoded Knowledge in Large Language Models

###### Abstract.

Large language models (LLMs) demonstrate remarkable performance on knowledge-intensive tasks, suggesting that real-world knowledge is encoded in their model parameters. However, besides explorations on a few probing tasks in limited knowledge domains, it is not well understood how to evaluate LLMs' knowledge systematically and how well their knowledge abilities generalize across a spectrum of knowledge domains and progressively complex task formats. To this end, we propose KGQuiz1, a knowledge-intensive benchmark to comprehensively investigate the knowledge generalization abilities of LLMs. KGQuiz is a scalable framework constructed from triplet-based knowledge, which covers three knowledge domains and consists of five tasks with increasing complexity: true-or-false, multiple-choice QA, blank filling, factual editing, and open-ended knowledge generation. To gain a better understanding of LLMs' knowledge abilities and their generalization, we evaluate 10 open-source and black-box LLMs on the KGQuiz benchmark across the five knowledge-intensive tasks and knowledge domains. Extensive experiments demonstrate that LLMs achieve impressive performance in straightforward knowledge QA tasks, while settings and contexts requiring more complex reasoning or employing domain-specific facts still present significant challenges. We envision KGQuiz as a testbed to analyze such nuanced variations in performance across domains and task formats, and ultimately to understand, evaluate, and improve LLMs' knowledge abilities across a wide spectrum of knowledge domains and tasks.

Footnote 1: The KGQuiz benchmark and code are available at [https://github.com/leopdwhite/KgQuiz](https://github.com/leopdwhite/KgQuiz).

To this end, we propose KGQuiz, a comprehensive benchmark designed to evaluate the knowledge abilities of LLMs across multiple knowledge utilization patterns in diverse knowledge domains. Specifically, KGQuiz is constructed with structured information from knowledge graphs (KGs) from three varying domains, representing commonsense, encyclopedic, and domain-specific (biomedical) knowledge. For each knowledge graph, KGQuiz presents a collection of 41,000 knowledge-intensive questions, covering five tasks of increasing complexity: _true-or-false_, _multiple choice_, _blank-filling_, _multi-hop factual editing_, and _open-ended text generation_. These progressively difficult tasks represent the multitudes of LLM knowledge and reasoning abilities, providing a comprehensive and comparative setting to assess LLMs' abilities: they respectively test LLMs' abilities to _judge factual correctness_, _select facts based on model confidence_, _retrieve entities_, _perform factual editing_, and _generate long-form knowledge documents_, presenting a holistic probe of LLM knowledge abilities in different application scenarios. We evaluate 10 open-source and black-box LLMs on the KGQuiz benchmark to better understand which LLM covers which knowledge domain better, and under which utilization contexts. Our experiments demonstrate that: 1) **LLM performance varies greatly across knowledge domains.** For instance, on _Task 5: Open-Ended Text Generation_, ChatGPT [45], ChatGLM [14], and text-davinci-003 [45] perform best on YAGO, ConceptNet, and UMLS, respectively, three knowledge graphs representing varying knowledge domains.
2) **Knowledge utilization greatly impacts LLMs' ability to retrieve and employ factual knowledge.** For instance, ChatGPT's performance on biomedical knowledge drops by 30% from the fill-in-the-blank task to the factual editing task, suggesting that the additional multi-hop context in factual editing poses new challenges to LLM knowledge abilities. Together, our extensive experiments demonstrate that probing the knowledge abilities of LLMs is nuanced and multi-faceted, with the largest LLMs excelling in simple knowledge utilization tasks on general knowledge domains, while advanced knowledge contexts and domain-specific information remain open challenges. We envision KGQuiz as a valuable testbed to understand, evaluate, and improve LLM knowledge abilities across varying knowledge domains and utilization contexts.

## 2. The KGQuiz benchmark

KGQuiz employs knowledge graphs from diverse domains to construct five knowledge-intensive tasks with increasing complexity. We denote a knowledge graph as a set of triples \(\mathcal{T}\), where the \(k\)-th triple is \(\mathcal{T}_{k}=(h_{k},r_{k},t_{k})\), and \(h_{k},r_{k}\) and \(t_{k}\) represent the head entity, relation, and tail entity, respectively. We use \(\mathcal{E}\) and \(\mathcal{R}\) to denote the sets of all entities and relations in the knowledge graph.

### _Task 1: True-or-False_

As a base assessment of knowledge abilities, True-or-False questions ask whether a given statement is factually correct or not. In a way, this task tests the LLMs' ability to verify the factuality of KG-based information, which is the most fundamental ability to distinguish between true and false knowledge [10].

**Task Formulation** We construct two sets of KG triples to represent positive and negative samples (\(\mathcal{T}_{pos}\) and \(\mathcal{T}_{neg}\)). For a positive triple \((h,r,t)\in\mathcal{T}_{pos}\), we replace the tail entity \(t\) with another entity \(t^{\prime}\) to generate a negative sample and add it to \(\mathcal{T}_{neg}\). We then use the prompt for the positive or negative triple \((h,r,t)\): "_Is the statement \(h\) \(r\) \(t\) True or False?_". We expect LLMs to answer with _True_ or _False_, indicating their judgment of the knowledge statement based on their parametric knowledge.

**Negative Sampling** We propose four approaches to sample negative entities \(t^{\prime}\) in the knowledge graph to obtain increasingly challenging negative samples.

* **Random** We randomly sample an entity from the set of entities not connected to the head entity \(h\) as \(t^{\prime}\), formally \(t^{\prime}\in\mathcal{E}-\mathcal{E}(h)\), where \(\mathcal{E}(h)\) denotes the set of entities connected to \(h\).
* **Semantic Similarity** We hypothesize that semantically similar entities could provide a more challenging setting with harder negative examples. We first use the **Random** method to sample \(m\) negative entities. These sampled entities form the set \(\mathcal{E}_{m}\). Then, we employ an encoder-based language model, denoted as \(\text{enc}(\cdot)\), to encode the names of these entities. Finally, we use cosine similarity \(\text{sim}(\cdot,\cdot)\) to select the entity \(t^{\prime}\) that is most similar to \(t\) in the embedding space. Formally, \(t^{\prime}=\operatorname{argmax}_{e\in\mathcal{E}_{m}}\text{sim}(\text{enc}(e),\text{enc}(t))\).
* **Relation Sharing** We hypothesize that using entities sharing the same relation, \(r\), as the selected negative sample would provide a challenging adversarial setting.
We first obtain the set of entities that are also associated with relation \(r\) as \(\mathcal{E}^{(r)}\), then randomly sample one entity from \(\mathcal{E}^{(r)}\) as the negative sample \(t^{\prime}\).
* **Network Proximity** We hypothesize that entities close to \(h\) in the KG could also present hard negative examples. We obtain the set of entities connected to \(h\) and randomly sample one entity from it as the negative sample \(t^{\prime}\).

**Evaluation** We use accuracy as the evaluation metric for the binary output of _True_ or _False_.

### _Task 2: Multiple-Choice_

Building up from the True-or-False task, the multiple-choice task introduces distractors [22, 50, 56]. This task tests not only the ability of LLMs to determine what is factually correct, but also their ability to discern the false options from the true option. Therefore, the multiple-choice task presents a higher degree of complexity, as LLMs need to evaluate the plausibility of different answer options based on their parametric knowledge.

**Task Formulation** We randomly sample a subset of the knowledge graph, denoted as \(\mathcal{T}_{s}\). For \((h,r,t)\in\mathcal{T}_{s}\), we replace the tail entity \(t\) with _[MASK]_ and provide \(m\) answer options, including the correct entity \(t\) and \(m-1\) distractors. We follow the same negative sampling strategies as in _Task 1: True-or-False_ to obtain the distractors.

**Evaluation** We similarly use accuracy as the evaluation metric.

### _Task 3: Blank-Filling_

The blank-filling task requires LLMs to directly generate the missing information for a given statement [48], in contrast to the two previous tasks where the correct answer already appeared somewhere in the prompt context. While in tasks 1 and 2 models might just take guesses, as they can simply choose one of the available options without knowing the actual answer, in _Task 3: Blank-Filling_ LLMs are required to retrieve the correct answer without any hints or options.

**Task Formulation** We randomly sample one subset of the knowledge graph, denoted as \(\mathcal{T}_{\text{s}}\). For \((h,r,t)\in\mathcal{T}_{\text{s}}\), we replace the tail entity \(t\) with _[MASK]_. The model is asked to generate the correct answer to replace _[MASK]_.

**Evaluation** We denote the model output as \(t_{o}\) and use the following metrics for evaluation:

* **LCS**: We denote the longest common subsequence of \(t_{o}\) and \(t\) as \(\mathbf{s}\), and LCS is defined as: \(\text{LCS}=\frac{\text{Len}(\mathbf{s})}{\max\{\text{Len}(t_{o}),\text{Len}(t)\}}\)
* **F1-score**: We denote the set of common tokens in both \(t_{o}\) and \(t\) as \(C\). We denote the F1-score of \(t_{o}\) and \(t\) as \(\text{F1}=\frac{2PR}{P+R}\), where \(P=\frac{|C|}{|t_{o}|}\), \(R=\frac{|C|}{|t|}\).
* **Semantic Match**: We measure semantic similarity between the model's output and the correct answer using cosine similarity on embeddings obtained via the InstructGPT Ada LLM \(\text{enc}(\cdot)\). This gives us \(\text{AdaScore}(t_{o},t)=\text{sim}(\text{enc}(t_{o}),\text{enc}(t))\). A threshold \(\theta\) on the AdaScore is tuned on a held-out validation set (detailed in Appendix D) to determine whether the model-generated answer and the ground truth are a semantically exact match. Concretely, we define the semantic match metric as \(\text{SM}(t_{o},t)=1\) if \(\text{AdaScore}(t_{o},t)\geq\theta\), else 0.
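For concreteness, a possible implementation of these three metrics is sketched below (our illustration, not the benchmark code): the LCS here is computed at the character level, `enc` stands in for the InstructGPT Ada embedding function, and the threshold value \(\theta=0.9\) is a placeholder for the validation-tuned one.

```python
from collections import Counter
import numpy as np

def lcs_score(out: str, gold: str) -> float:
    """LCS = len(longest common subsequence) / max(len(out), len(gold))."""
    m, n = len(out), len(gold)
    dp = [[0] * (n + 1) for _ in range(m + 1)]      # classic LCS DP table
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = dp[i][j] + 1 if out[i] == gold[j] \
                else max(dp[i][j + 1], dp[i + 1][j])
    return dp[m][n] / max(m, n) if max(m, n) else 1.0

def f1_score(out: str, gold: str) -> float:
    """Token-level F1 over the multiset of common tokens C."""
    common = Counter(out.split()) & Counter(gold.split())
    c = sum(common.values())
    if c == 0:
        return 0.0
    p, r = c / len(out.split()), c / len(gold.split())
    return 2 * p * r / (p + r)

def semantic_match(out: str, gold: str, enc, theta: float = 0.9) -> int:
    """SM = 1 iff cosine similarity of embeddings reaches the threshold."""
    a, b = enc(out), enc(gold)
    return int(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)) >= theta)
```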
### _Task 4: Factual Editing_

The Factual Editing task presents enhanced challenges compared to task 3 by moving from a single knowledge statement to a multi-hop knowledge statement. Task 4 requires LLMs to not only memorize and recall the facts, but also to identify which part of multi-hop knowledge is inconsistent and revise accordingly. While previous works have also explored LLMs' potential in factual editing [2, 6], we uniquely focus on a multi-hop format where one of the hops features inconsistent factual information. This task tests LLMs' abilities to handle multi-hop information, localize errors, edit factual inconsistencies, and more.

**Task Formulation** Given a knowledge graph, we first sample a \(k\)-hop path, and we use a structured format to present the multi-hop knowledge path as \(\mathbf{d}=(h_{1},r_{1},e_{1},r_{2},\ldots,t_{k})\).2 We then randomly replace one of the entities in the path (denoted as \(e_{s}\)) with \(e^{\prime}\) sampled with the negative sampling strategies described in Section 2.1 to obtain \(\mathbf{d}^{\prime}\). We concatenate the names of original entities and relations to form a multi-hop knowledge statement denoted as \(\mathbf{d}\) and swap one entity with its negative sample to obtain \(\mathbf{d}^{\prime}\). This task prompts LLMs to correct the factual inconsistency in \(\mathbf{d}^{\prime}\).

Footnote 2: To avoid confusion, we denote \(e_{m}\) as the tail entity \(t_{m}\) of the \(m\)-th triple in the knowledge path. At the same time, it also serves as the head entity \(h_{m+1}\) of the \((m+1)\)-th triple in the knowledge path.

**Evaluation** We denote the left part of \(\mathbf{d}\) (tokens before \(e_{s}\)) as \(\mathbf{L}\), and the right part of \(\mathbf{d}\) (tokens after \(e_{s}\)) as \(\mathbf{R}\). We first perform the longest common substring match between the output \(\mathbf{d}^{(o)}\) of the model and \(\mathbf{L}\), \(\mathbf{R}\) in turn, and delete the obtained common substring from \(\mathbf{d}^{(o)}\) to retrieve the revised entity given by LLMs. Then, we adopt the same set of evaluation metrics as task 3, namely LCS, F1-score, and Semantic Match, to compare the ground truth entity \(e_{s}\) and the revised entity given by LLMs.

Figure 1. Overview of the KGQuiz Benchmark, featuring five knowledge-intensive tasks with increasing complexity. We illustrate the diverse tasks employed in KGQuiz to test large language models, highlighting the examples and corresponding natural language prompts used to examine their knowledge abilities across domains and contexts.

### _Task 5: Open-Ended Text Generation_

The Open-Ended Text Generation task moves from handling isolated facts (as in the previous tasks) to generating multiple factual associations about a given entity. We evaluate whether the generated factual associations are aligned with the information in existing knowledge graphs. This comparison aims to measure the ability of LLMs to generate accurate and comprehensive factual knowledge of a particular entity. In addition, while tasks in previous works mostly focus on a single factual association (Zhou et al., 2017; Wang et al., 2018), we propose the Open-Ended Text Generation task to encourage the knowledge abilities of LLMs in multi-fact and knowledge synthesis settings.

**Task Formulation** We randomly sample one subset of KG, denoted as \(\mathcal{T}_{\text{s}}\).
For \((h,r,t)\in\mathcal{T}_{\text{s}}\), we ask the model to _"Tell me some facts about \(h\)"_. We denote all triplets containing \(h\) in the knowledge graph as \(\mathcal{G}=\{(h,r,t)\in\mathcal{T}\}\).

**Evaluation** We evaluate Open-Ended Text Generation by comparing the model outputs with the information about entity \(h\) in the original knowledge graph, denoted as \(\mathcal{G}\). Concretely, we first prompt a GPT-3.5 LLM to turn the given model output in natural language into a list of fact triplets \(\mathcal{O}=\{(h,r_{o},t_{o})\}\), inspired by previous works (Wang et al., 2018; Wang et al., 2018); we further evaluate this approach in Appendix D. We then employ the semantic match metric SM from task 3 and define the Precision and Recall between model predictions \(\mathcal{O}\) and ground truth \(\mathcal{G}\) as: \(\text{Precision}=\frac{|\mathcal{O}\cap\mathcal{G}|}{|\mathcal{O}|}\), \(\text{Recall}=\frac{|\mathcal{O}\cap\mathcal{G}|}{|\mathcal{G}|}\), where \(\mathcal{O}\cap\mathcal{G}\) denotes the set of triples that are both in model predictions and the knowledge graph with SM = 1.

## 3. Experiment Settings

**Knowledge Domains** In our experiments, we posit that the performance of LLMs in knowledge-intensive tasks is greatly influenced by diverse knowledge domains. Thus, we consider knowledge graphs from three distinct domains in our experiments: commonsense, encyclopedic, and domain-specific. For commonsense knowledge, we leverage the ConceptNet knowledge graph (Wang et al., 2018) with 1,103,036 entities, 47 relations, and 3,098,674 triples. For encyclopedic knowledge, we adopt the YAGO knowledge graph (Wang et al., 2018) with 123,182 entities, 37 relations, and 1,089,040 triples. For domain-specific knowledge, we mainly consider the biomedical domain and adopt the UMLS knowledge graph (Beng et al., 2018) with 297,554 entities, 98 relations, and 1,212,586 triples. By conducting our evaluations across knowledge graphs that span varying domains, we aim to provide a comprehensive assessment of how the knowledge abilities of LLMs vary across diverse knowledge domains.

**Models and Settings** We evaluate both black-box and open-source LLMs on the KGQuiz benchmark. For black-box LLMs, we adopt InstructGPT (Wang et al., 2018) (text-ada-001, text-babbage-001, text-curie-001, and text-davinci-003) and ChatGPT (gpt-3.5-turbo) through the OpenAI API. For open-source LLMs, we adopt GPT-J (Wang et al., 2018), OPT (6.7B) (Wang et al., 2018), ChatGLM (Chen et al., 2018), LLAMA (7B) (Wang et al., 2018), and Alpaca (Wang et al., 2018) in the experiments. We use a temperature of \(\tau=0\) to reduce randomness.

**Task Settings** For _Task 1: True-or-False_, we construct 10k examples for each knowledge graph and adopt semantic similarity as the default negative sampling method. In our experiments, we noticed that some LLMs could not answer true-or-false questions based on zero-shot instructions, thus we added one in-context example to demonstrate the QA format. For _Task 2: Multiple-Choice_, we use four answer options as the default setting and construct 10k examples for each knowledge graph. Here, too, we incorporate a single in-context example for clarification. For _Task 3: Blank-Filling_, we randomly sample 10k triplets for each knowledge graph to generate the blank-filling questions. Moving on to _Task 4: Factual Editing_, we construct 10k knowledge walks for each knowledge graph with the default walk length \(k=3\).
Given that some LLMs struggled with this task, an in-context example is provided. Lastly, for _Task 5: Open-Ended Text Generation_, we select 1k entities in each knowledge graph and ask LLMs to perform open-ended generation3. We use _Semantic Similarity_ to sample negative examples in our subsequent experiments.4

Footnote 3: For some tasks, we use in-context examples. More details in Appendix D.

Footnote 4: The specific effect of these four strategies and our choice for _Semantic Similarity_ is detailed in Section 5.1.

## 4. Results

We first present the average ranking across the five knowledge reasoning tasks and the three knowledge domains in Table 1. In terms of knowledge domains, we observe a considerable discrepancy in the performances across different domains for the same LLM. This finding highlights that LLM knowledge abilities are greatly impacted by knowledge domain, supporting the need for multi-domain knowledge probing benchmarks such as KGQuiz. Regarding knowledge utilization, the format in which knowledge is presented and required to be utilized by LLMs also significantly impacts their overall performance, as the best model across the five tasks could be quite different. We further analyze each individual task in the following.

\begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{5}{c}{**Task**} & \multicolumn{3}{c}{**Domain**} & \multirow{2}{*}{**Avg.**} \\ \cline{2-9} & **T1** & **T2** & **T3** & **T4** & **T5** & **YAGO** & **CPNet** & **UMLS** & \\ \hline Ada & 8.3 & 9.7 & 6.1 & 5.1 & 4.8 & \(\uparrow\)6.5 & 6.8 & 7.1 & 6.5 \\ Babbage & 7.0 & 6.0 & 5.0 & 5.0 & 3.8 & 5.7 & 5.5 & \(\uparrow\)4.8 & 5.7 \\ Curie & 8.7 & 9.3 & 2.8 & 4.0 & 2.7 & \(\uparrow\)5.2 & 6.1 & 5.2 & 5.2 \\ Davinci & 2.0 & **2.0** & **1.7** & 1.6 & 3.0 & **\(\uparrow\)1.9** & **2.0** & **2.3** & **1.9** \\ Turbo & **1.0** & **1.0** & 3.0 & 3.9 & 2.8 & \(\uparrow\)2.3 & 2.4 & 2.3 & 2.3 \\ GPT-J & 7.0 & 7.3 & 8.7 & 7.7 & 9.0 & 8.0 & \(\uparrow\)7.6 & 8.1 & 8.0 \\ OPT & 9.0 & 7.0 & 8.0 & 7.8 & 9.8 & \(\uparrow\)8.2 & 8.5 & 8.3 & 8.2 \\ ChatGLM & 4.7 & 3.0 & 4.0 & 7.1 & 3.8 & 4.3 & \(\uparrow\)4.0 & 5.3 & 4.3 \\ LLAMA & 4.0 & 5.7 & 8.9 & 8.1 & 7.3 & 7.2 & 7.1 & \(\uparrow\)6.1 & 7.2 \\ Alpaca & 3.3 & 4.0 & 6.9 & 4.8 & 7.8 & 5.6 & \(\uparrow\)4.9 & 5.6 & 5.6 \\ \hline \hline \end{tabular} \end{table} Table 1. Overall average rankings of ten LLMs on KGQuiz across five tasks and three knowledge domains. Bold and underline represent the highest and the second highest ranking on each task (or knowledge domain). \(\uparrow\) denotes the knowledge domain on which each model has its best ranking.

Figure 2. Model performance on _Task 1: True-or-False_. Larger LMs are better at judging factual correctness, while the same LM performs differently across varying knowledge domains.

### _Task 1: True-or-False_

As depicted in Figure 2, among the assessed LLMs, four of them (including text-davinci-003, gpt-3.5-turbo, and ChatGLM) performed substantially better than random chance (50%) on all KGs. Notably, gpt-3.5-turbo achieved the best overall performance, showcasing its ability to discern correct from incorrect knowledge statements. The observed improvement in performance with larger model sizes suggests that models with more parameters can encode more knowledge and leverage the stored knowledge to accurately identify the veracity of knowledge statements.
Additionally, even in the simple binary task, many LLMs show accuracy close to 50%, indicating difficulty in distinguishing true and false statements. This suggests a need for further improvement in LLMs' knowledge abilities, particularly for smaller language models.

### _Task 2: Multiple-Choice_

Figure 3 showcases that text-davinci-003 and gpt-3.5-turbo consistently outperform other LLMs in understanding and applying knowledge across all KGs and domains. A comparison across tasks reveals that text-davinci-003 and gpt-3.5-turbo improve in _Task 2: Multiple-Choice_ compared to _Task 1: True-or-False_. However, Alpaca's relative performance dwindled in Task 2, suggesting that the specific knowledge utilization format significantly influences an LLM's ability to retrieve potentially correct answers.

### _Task 3: Blank-Filling_

Compared to true-or-false and multiple-choice questions, blank filling requires LLMs to retrieve the correct answer from their parametric knowledge without relying on any options. In Table 2, the overall low LCS scores reflect that LLMs' generated answers struggle to match the exact target answer. Moreover, the models' abilities differ significantly, with text-davinci-003 excelling in two domains (YAGO and ConceptNet) but gpt-3.5-turbo performing better in the biomedical domain (UMLS). Additionally, we observe a noticeable decrease in performance in the biomedical domain, suggesting that the models may not be as proficient in handling domain-specific knowledge.

### _Task 4: Factual Editing_

Compared to blank-filling, _Task 4: Factual Editing_ involves identifying and rectifying factual inconsistencies within given knowledge statements. According to the results in Table 3, the additional context indeed aids certain models in generating fact-checked responses on certain KGs (YAGO and ConceptNet), with text-davinci-003 and gpt-3.5-turbo scoring well for YAGO and ConceptNet respectively, and ChatGLM excelling on UMLS. This highlights that tasks such as dialogue generation and summarization, which usually come with relevant context, may work better with LLMs. However, when provided only with a short question, QA models may get confused easily. The task-wise change in top-performing models indicates that the form of knowledge utilization impacts an LLM's knowledge abilities significantly.

### _Task 5: Open-Ended Text Generation_

Open-ended generation tasks present a more complex challenge to LLMs as they require not just specific factual associations, but also the generation of a consistent paragraph about a certain entity encapsulating assorted facts and knowledge. As observed in Table 4, text-davinci-003 tops the chart with the highest AdaScore across all three KGs, denoting its proficient ability to produce well-structured and factually accurate knowledge paragraphs. text-curie-001 stands out with the highest Precision score, indicating its tendency to generate knowledge closely in line with the respective knowledge graph. From a Recall perspective, the best performances are achieved by gpt-3.5-turbo, ChatGLM, and text-davinci-003 on the three respective KGs. These findings emphasize that the knowledge domain significantly affects the performance of LLMs in knowledge-intensive tasks, underscoring the need for comprehensive evaluations of LLMs' knowledge abilities that consider varying knowledge domains.
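The Task 5 numbers above are computed with the set-based metric defined in Section 2.5. A minimal Python sketch of that scoring is given below; treating the intersection \(\mathcal{O}\cap\mathcal{G}\) as requiring a semantic match (SM = 1) between the corresponding heads, relations, and tails is our reading of the definition, with `semantic_match` assumed to behave like the SM metric from Task 3.

```python
from typing import Callable, List, Tuple

Triple = Tuple[str, str, str]

def triples_match(pred: Triple, gold: Triple,
                  semantic_match: Callable[[str, str], int]) -> bool:
    # A predicted (h, r, t) counts as "in the intersection" when every
    # element is a semantic match (SM = 1) of the corresponding gold element.
    return all(semantic_match(p, g) == 1 for p, g in zip(pred, gold))

def precision_recall(predicted: List[Triple], gold: List[Triple],
                     semantic_match: Callable[[str, str], int]) -> Tuple[float, float]:
    """Precision = |O ∩ G| / |O| and Recall = |O ∩ G| / |G| under semantic matching."""
    hits = sum(any(triples_match(p, g, semantic_match) for g in gold) for p in predicted)
    covered = sum(any(triples_match(p, g, semantic_match) for p in predicted) for g in gold)
    precision = hits / len(predicted) if predicted else 0.0
    recall = covered / len(gold) if gold else 0.0
    return precision, recall
```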
## 5. Analysis

### Negative Sampling Strategy

In Section 2.1, we propose and formalize four negative sampling methods to generate questions in the KGQuiz benchmark. In order to investigate their impact on the difficulty of the task, we use the four negative sampling strategies, _Random_ (RA), _Semantic Similarity_ (SS), _Relation Sharing_ (RS), and _Network Proximity_ (NP), to generate questions for _Task 1: True-or-False_ based on the YAGO knowledge graph. We evaluate text-davinci-003 and gpt-3.5-turbo as shown in Figure 4. These results show that different negative sampling methods _do_ impact the difficulty of the task, ranging from easy to difficult in the following order: _Random_, _Semantic Similarity_, _Relation Sharing_, and _Network Proximity_. It is also demonstrated that whether LLMs can select the correct answer is impacted by the plausibility of negative examples. In particular, we employed _Semantic Similarity_ as an intermediate strategy presenting reasonable complexity. This strategy, while challenging, does not make the task excessively difficult. Furthermore, while we propose this specific strategy, the KGQuiz benchmark supports the flexibility of adopting other negative sampling settings.

Figure 3: LLM performance on _Task 2: Multiple-Choice_. Davinci and Turbo consistently outperform other models, indicating their superior knowledge abilities under the multiple-choice knowledge utilization format.

### Consistency Study

In this study, we investigate the robustness of LLMs towards minor changes in prompts and knowledge statements. We select 100 questions from the YAGO knowledge graph in _Task 1: True-or-False_ and evaluate with five different prompts and instructions (more details in Appendix E.3). We measure response consistency of the five black-box LLMs using the Fleiss Kappa measure [17]. The experiment results show that LLMs have varying robustness towards prompt formats: Turbo (0.645) has the highest score, suggesting a moderate level of agreement. Davinci (0.285) exhibits a lower but still positive value. However, Ada (-0.187), Babbage (-0.057), and Curie (-0.168) show negative Fleiss Kappa values, indicating poor agreement and suggesting that model responses are less consistent towards minor changes in knowledge probing instructions. This study highlights that the robustness to minor changes in knowledge-intensive prompts is in itself part of LLM's knowledge abilities.

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{2}{c}{**YAGO**} & \multicolumn{2}{c}{**ConceptNet**} & \multicolumn{2}{c}{**UMLS**} \\ \cline{2-7} & **Precision** & **Recall** & **Precision** & **Recall** & **Precision** & **Recall** \\ \hline Ada & 75.84 & 34.89 & 90.93 & 24.90 & 59.45 & 19.47 \\ Babbage & 84.66 & 35.34 & 95.01 & 18.84 & 81.52 & 22.93 \\ Curie & **85.69** & 38.64 & **96.59** & 22.46 & **83.43** & 26.80 \\ Davinci & 76.39 & 53.96 & 88.12 & 41.55 & 77.48 & **46.06** \\ Turbo & 77.28 & **57.63** & 89.39 & 40.53 & 75.94 & 43.89 \\ GPT-J & 11.97 & 8.78 & 24.11 & 12.07 & 10.72 & 5.96 \\ OPT & 14.06 & 7.72 & 16.89 & 5.26 & 10.35 & 5.43 \\ ChatGLM & 71.00 & 54.54 & 88.05 & **46.49** & 63.59 & 39.72 \\ LLAMA & 39.17 & 29.29 & 36.78 & 11.78 & 26.14 & 11.85 \\ Alpaca & 22.96 & 17.77 & 28.63 & 13.94 & 12.69 & 7.53 \\ \hline \hline \end{tabular} \end{table} Table 4: Model performance on _Task 5: Open-Ended Text Generation_. Different from previous tasks, generating long and open-ended statements about entities poses new challenges to LLMs.

\begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{3}{c}{**YAGO**} & \multicolumn{3}{c}{**ConceptNet**} & \multicolumn{3}{c}{**UMLS**} \\ \cline{2-10} & **F1-score** & **LCS** & **Sem. Match** & **F1-score** & **LCS** & **Sem. Match** & **F1-score** & **LCS** & **Sem. Match** \\ \hline Ada & 2.26 & 18.24 & 61.67 & 1.24 & 11.76 & 45.43 & 5.72 & 19.43 & 55.52 \\ Babbage & 2.60 & 17.63 & 60.48 & 2.07 & 12.06 & 64.67 & 10.37 & 21.68 & 71.43 \\ Curie & 5.38 & 19.63 & 71.54 & 3.32 & 15.11 & 78.68 & 10.90 & 26.04 & 84.70 \\ Davinci & **14.02** & **28.65** & **73.00** & **6.27** & **27.40** & **91.19** & 8.28 & 23.81 & 87.88 \\ Turbo & 4.47 & 11.83 & 52.33 & 5.56 & 14.42 & 80.48 & **19.44** & **28.18** & **89.27** \\ GPT-J & 0.56 & 10.75 & 24.55 & 1.20 & 4.53 & 39.07 & 9.38 & 11.74 & 73.17 \\ OPT & 0.66 & 10.75 & 27.33 & 0.75 & 4.40 & 45.55 & 6.88 & 11.21 & 73.52 \\ ChatGLM & 3.53 & 21.50 & 72.27 & 2.35 & 20.15 & 88.07 & 4.04 & 19.45 & 58.71 \\ LLAMA & 1.24 & 11.43 & 35.97 & 1.03 & 3.42 & 25.96 & 7.44 & 9.31 & 76.64 \\ Alpaca & 3.16 & 10.37 & 41.52 & 1.92 & 6.25 & 56.55 & 10.63 & 13.61 & 81.88 \\ \hline \hline \end{tabular} \end{table} Table 2: LLM performance on _Task 3: Blank-Filling_. Sem. Match is short for the semantic match metric. Davinci leads on YAGO and ConceptNet, while Turbo performs best on UMLS, indicating that LLM knowledge abilities vary greatly across knowledge domains.

\begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{3}{c}{**YAGO**} & \multicolumn{3}{c}{**ConceptNet**} & \multicolumn{3}{c}{**UMLS**} \\ \cline{2-10} & **F1-score** & **LCS** & **Sem. Match** & **F1-score** & **LCS** & **Sem. Match** & **F1-score** & **LCS** & **Sem. Match** \\ \hline Ada & 2.50 & 14.51 & 86.76 & 0.12 & 14.65 & 83.84 & 2.50 & 18.11 & 59.85 \\ Babbage & 2.90 & 9.47 & 90.68 & 0.02 & 10.42 & 86.53 & 2.90 & 17.78 & 60.03 \\ Curie & 6.21 & 8.93 & 91.20 & 0.10 & 15.92 & 83.14 & **6.21** & **19.76** & 60.24 \\ Davinci & **16.99** & **20.58** & **91.77** & **5.15** & **17.31** & 93.25 & 5.44 & 7.28 & 64.19 \\ Turbo & 12.29 & 13.24 & 91.06 & 0.51 & 1.28 & **93.32** & 0.88 & 8.93 & 59.05 \\ GPT-J & 0.03 & 0.17 & 90.34 & 0.00 & 0.22 & 93.21 & 0.20 & 0.71 & 59.98 \\ OPT & 0.01 & 0.06 & 90.37 & 0.00 & 0.06 & 93.24 & 0.30 & 0.88 & 59.96 \\ ChatGLM & 4.94 & 1.32 & 89.66 & 0.14 & 4.57 & 90.62 & 0.42 & 2.58 & **76.26** \\ LLAMA & 0.03 & 0.04 & 90.33 & 0.00 & 0.00 & 93.20 & 0.43 & 1.81 & 59.98 \\ Alpaca & 6.80 & 12.27 & 90.20 & 0.87 & 14.84 & 93.20 & 1.46 & 8.66 & 59.93 \\ \hline \hline \end{tabular} \end{table} Table 3: LLM performance on _Task 4: Factual Editing_. Model performance is generally higher than blank-filling, indicating the helpfulness of additional context and emphasizing the influence of knowledge utilization. Models such as Turbo, Davinci, and ChatGLM show variations in performance across different knowledge graphs, highlighting the influence of knowledge domains.

### Exact Match vs. Semantic Match

We conduct qualitative analysis for _Task 3: Blank-Filling_ and present a few examples in Table 5. It is demonstrated that answers generated by LLMs do not exactly match the gold label, where the exact match (EM) metric would treat the answer as incorrect.
However, the generated responses are semantically equivalent. For instance, in the first example, the word order is different but both answers convey the same meaning. Similarly, in the third example, "Tokyo, Japan" is more general than the gold answer "Shibuya, Tokyo" but it still provides the correct location information. While the exact match metric would treat them as incorrect, under our proposed _Semantic Match_, all four answers are deemed as correct, indicating that _Semantic Match_ presents a better evaluation metric in LLM knowledge probing given the nuanced nature of entity names [31].

### Question Sampling

In KGQuiz, for each task, we generate questions by randomly sampling triplets (or head entities) from the KG, but whether the randomly sampled subsets are representative of the whole KG remains underexplored. To this end, we design two additional ways to sample a problem subset:

* **Relation Proportion**: We first calculate the proportion of relations in the KG, then sample triplets based on the relation distribution. This ensures that the proportion of relations in the sampled triples is consistent with the proportion of relations in the entire knowledge graph.
* **Entity Clustering**: First, we use the knowledge graph embedding model TransE [5] to obtain the embedding for each entity, then we use K-means to obtain 10 clusters of entities. We sample triplets based on the proportions of the number of entities in each cluster.

We generated 1,000 _Task 1: True-or-False_ questions and 1,000 _Task 2: Multiple-Choice_ questions on ConceptNet using these two methods respectively. According to Figure 5, we find that with these two sampling methods, which can theoretically better represent the features of the knowledge graph, the performance of each model does not change significantly compared to random sampling. This indicates that randomly sampled triples can also reflect the features of the entire knowledge graph and the corresponding results are representative.

### Negative Sampling Evaluation

_Validity of Negative Samples._ Regarding the four negative sampling methods we proposed, a potential issue is that the sampled data may not be genuine negative samples. Therefore, in order to investigate the effectiveness of our negative sampling methods, we manually evaluated 20 samples for each method. In our manual evaluation, all the sampled examples were indeed true negative samples, which validated the effectiveness of our negative sampling methods.

### Number of Options

Although extra answer options could serve as context information to aid LLMs (as we analyzed in Section 4.2), we hypothesize that an increasing number of distractors might sway LLMs away from the correct answer. To this end, we study the impact of the number of options on the difficulty of _Task 2: Multiple-Choice_. We follow the settings in Section 3 but change the number of options to 2, 3, 5, and 10 respectively. We present the performance of text-davinci-003 and gpt-3.5-turbo on YAGO in Figure 6. We find that, although a small number of options providing extra context can give the model hints to answer questions, as the number of options increases, the model's performance gradually declines due to the increasing number of distractors.

Figure 4. Performance on _Task 1: True-or-False_ with varying negative sampling methods. The figure illustrates the performance of text-davinci-003 and gpt-3.5-turbo on the YAGO knowledge graph when using the four negative sampling strategies, showing that the choice of negative sampling has a significant impact on the difficulty of the task.

Figure 5. Comparison of model performance across different question sampling methods. Models are evaluated on 1,000 _Task 1: True-or-False_ questions and 1,000 _Task 2: Multiple-Choice_ questions sampled via three different methods. The results show the model's performance is not significantly affected by the sampling method.

\begin{table} \begin{tabular}{l l l} \hline \hline **Question** & **Prediction** & **Gold** \\ \hline Bob Hawke graduated from \_. & Oxford University & University of Oxford \\ \hline Rosemary Sutcliff has won prize \_. & The Carnegie Medal & Carnegie Medal (literary award) \\ \hline Taito Corporation is located in \_. & Tokyo, Japan & Shibuya, Tokyo \\ \hline \hline \end{tabular} \end{table} Table 5. Qualitative analysis of _Task 3: Blank-Filling_, suggesting that our proposed _Semantic Match_ presents a more nuanced metric for knowledge probing.

### Generating Triplets vs. Text

We use text-davinci-003 and gpt-3.5-turbo to directly generate factual triplets about a certain entity (by giving an in-context example) and report the precision and recall in Table 6. It can be observed that although the precision has improved, the recall has dropped significantly. We attribute this to the model generating only a few high-confidence triplets when directly asked for triplets, which leads to the aforementioned results. However, for other smaller-scale models, directly generating factual triplets is not feasible, as they cannot adequately understand the prompt's instructions, resulting in poor performance.

## 6. Related Work

**LLM Knowledge Probing** Research into what knowledge is stored in LLMs has drawn significant interest. Pioneering work like LAMA [48], TempLAMA [12], MMLU [21] quantitatively measured the factual knowledge in these models. Other approaches have expanded these probing techniques, exploring topics like few-shot learning and 2-hop relational knowledge [20]. Furthermore, open-domain question-answering benchmarks like Natural Questions [29], and TriviaQA [25] have been used to measure the practical knowledge abilities of these models, aligning the probing tasks with real-world applications.

**Improving LLM Knowledge Abilities** Efforts to enhance LLM's knowledge abilities include augmenting language models with KGs for structured, factual knowledge [42, 49] and using retrieval-augmented methods like RAG [30], REALM [19], and REPLUG [51] to incorporate external documents as a dynamic knowledge source. Further, REMEDI [23] aims to create a finer control over knowledge in LLMs by understanding fact encodings in the model's internal representation system. In parallel, the framework CooK [15] suggests using specialized language models to provide modular and up-to-date knowledge in a collaborative process.

**Extracting Knowledge from LLMs** The extraction of knowledge from LLMs has become an emerging topic in the research community. Some works focus on constructing KGs from the LLMs [11, 59]. For example, Crawling Robots [11] uses a robot role-play setting to extract named entities and relations by encoding them into actions. Other works utilize the prompt-based paradigm, where they generate knowledge probes in the form of structured prompts [35, 65]. These tools aim to extract and organize the knowledge within an LLM in a human-readable and interpretable way.
Furthermore, other techniques involve augmenting training data with recitation tasks to express internally represented knowledge explicitly [54].

**Investigating the Limitations of LLM Knowledge Abilities** As LLMs have shown promise in knowledge-based tasks, researchers have also started examining the limitations of these models' knowledge abilities. This includes their ability to handle conflicted information [8, 61], recall abilities [39], and self-evaluating skills [27]. By investigating these limitations, researchers aim to not only devise ways to address them but also shed light on how LLMs can operate more effectively in more sophisticated tasks, particularly in professional domains [41, 55].

In summary, while considerable work has been done in probing the knowledge abilities of LLMs, improving these abilities, extracting knowledge, and investigating their limitations, two major aspects have seen less consideration: knowledge utilization and knowledge breadth. These areas are vital for understanding and evaluating the performance of LLMs in more real-world, complex scenarios. Therefore, this calls for a more comprehensive approach, which our proposed KGQuiz benchmark aims to address, making strides towards a future where LLMs exhibit robust knowledge abilities applicable to a wider range of domains and utilization contexts.

\begin{table} \begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{2}{c}{**Text**} & \multicolumn{2}{c}{**Triplets**} \\ \cline{2-5} & **Precision** & **Recall** & **Precision** & **Recall** \\ \hline Davinci & 76.39 & 53.96 & 85.21 & 37.58 \\ Turbo & 77.28 & 57.63 & 91.42 & 37.21 \\ \hline \hline \end{tabular} \end{table} Table 6. Comparison of precision and recall for open-ended text generation and direct triplet generation using text-davinci-003 and gpt-3.5-turbo. Direct triplet generation results in higher precision but lower recall than open-ended generation.

Figure 6. Impact of the number of answer options on LLM performance. The figure illustrates the performance of text-davinci-003 and gpt-3.5-turbo on _Task 2: Multiple-Choice_ using the YAGO knowledge graph, with varying numbers of answer options (2, 3, 4, 5, and 10). The results show that as the number of options increases, the model's performance declines, indicating that a higher number of distractors makes the task more challenging.

## 7. Conclusion

We propose KGQuiz, a benchmark for probing the knowledge generalization abilities of Large Language Models (LLMs). Unlike previous work, our benchmark focuses on two often-overlooked aspects: the complexity of knowledge utilization and the breadth of knowledge domains. Our benchmark uses structured information from knowledge graphs (KGs) across three diverse domains, and it consists of several tasks representing increasingly complex forms of knowledge utilization. Our experimental results illustrate varying performances of several LLMs across different domains and tasks, underscoring the multi-faceted nature of knowledge abilities in LLMs. This also demonstrates the importance of considering knowledge utilization and knowledge breadth. We envision KGQuiz as a comprehensive testbed to evaluate, understand, and improve the knowledge abilities of LLMs across varying domains and tasks.
2308.05341
Classification of Human- and AI-Generated Texts: Investigating Features for ChatGPT
Recently, generative AIs like ChatGPT have become available to the wide public. These tools can for instance be used by students to generate essays or whole theses. But how does a teacher know whether a text is written by a student or an AI? In our work, we explore traditional and new features to (1) detect text generated by AI from scratch and (2) text rephrased by AI. Since we found that classification is more difficult when the AI has been instructed to create the text in a way that a human would not recognize that it was generated by an AI, we also investigate this more advanced case. For our experiments, we produced a new text corpus covering 10 school topics. Our best systems to classify basic and advanced human-generated/AI-generated texts have F1-scores of over 96%. Our best systems for classifying basic and advanced human-generated/AI-rephrased texts have F1-scores of more than 78%. The systems use a combination of perplexity, semantic, list lookup, error-based, readability, AI feedback, and text vector features. Our results show that the new features substantially help to improve the performance of many classifiers. Our best basic text rephrasing detection system even outperforms GPTZero by 183.8% relative in F1-score.
Lorenz Mindner, Tim Schlippe, Kristina Schaaff
2023-08-10T05:09:42Z
http://arxiv.org/abs/2308.05341v1
# Classification of Human- and AI-Generated Texts: Investigating Features for ChatGPT ###### Abstract Recently, generative AIs like ChatGPT have become available to the wide public. These tools can for instance be used by students to generate essays or whole theses. But how does a teacher know whether a text is written by a student or an AI? In our work, we explore traditional and new features to (1) detect text generated by AI from scratch and (2) text rephrased by AI. Since we found that classification is more difficult when the AI has been instructed to create the text in a way that a human would not recognize that it was generated by an AI, we also investigate this more _advanced_ case. For our experiments, we produced a new text corpus covering 10 school topics. Our best systems to classify _basic_ and _advanced human-generated/AI-generated_ texts have F1-scores of over 96%. Our best systems for classifying _basic_ and _advanced human-generated/AI-rephrased_ texts have F1-scores of more than 78%. The systems use a combination of perplexity, semantic, list lookup, error-based, readability, AI feedback, and text vector features. Our results show that the new features substantially help to improve the performance of many classifiers. Our best _basic_ text rephrasing detection system even outperforms GPTZero by 183.8% relative in F1-score. **Keywords:** Prompting, ChatGPT, AI in Education, Natural Language Processing

## 1 Introduction

In recent years, chatbots have become a popular tool in everyday life [1]. These systems are capable of imitating human-like conversations with users [2], can provide assistance [3], information [4], and emotional support [5]. Among them, OpenAI's ChatGPT has become one of the most commonly utilized chatbots. The fact that ChatGPT was able to reach over one million users in only five days [6] underscores this statement. The users of ChatGPT range from children seeking assistance with their homework and individuals searching for medical advice to users who rely on it as a daily source of companionship. The more systems like ChatGPT make their way into our everyday lives, the more important it becomes to differentiate between human- and artificial intelligence (AI)-generated content. Although both can communicate information, an important difference lies in the intent of the text. _Human-generated_ content is created with the specific intention of communicating something, while _AI-generated_ content is created by algorithms designed to generate text that sounds like it was written by a human. _AI-generated_ text may contain repetitive or formulaic phrases or patterns, while _human-generated_ text is more likely to be original and creative. Moreover, texts generated by large language models (LLMs) often sound reliable even though they are based on word probabilities instead of facts. The better the algorithms of generative AI become, the more difficult it is to detect _AI-generated_ content properly. This poses serious problems in many areas, including plagiarism, the generation of fake news, and spamming. Therefore, there is a strong need for tools that can differentiate between these two kinds of texts. For this reason, in our current study, we want to gain insights into the differences between human language use and _AI-generated_ text, and how these differences can be leveraged to improve the accuracy of the detection of _AI-generated_ text.
Furthermore, this research will provide a valuable benchmark for future _AI-generated_ text classification studies. To the best of our knowledge, we are the first to evaluate features such as the degree of objectivity of a text, list lookup features like the repetitions of the title in the text, or error-based features such as the number of grammatical errors. We collected a new corpus of nearly 500 articles covering 10 topics--the _Human-AI-Generated Text Corpus_. To contribute to further research, we share the corpus with the research community1. We decided to use ChatGPT for our research as this is currently the most widely used tool to generate texts. Since it has been trained on extensive data sets and has a huge number of parameters, it is currently the best-performing system that is publicly available. Footnote 1: [https://github.com/LorenzM97/human-AI-generatedTextCorpus](https://github.com/LorenzM97/human-AI-generatedTextCorpus)

## 2 Related Work

In this chapter, we will describe the related work regarding ChatGPT and the classification of _human-_ and _AI-generated_ texts.

### ChatGPT

ChatGPT is an advanced chatbot developed by OpenAI that leverages natural language processing to generate text in response to user prompts, making it a multi-functional tool across various domains. ChatGPT has been successfully applied in domains such as education [7], medicine [8], and language translation [9]. As the name indicates, ChatGPT is built on the Generative Pretrained Transformers (GPT) language model and was fine-tuned using reinforcement learning with human feedback, enabling it to grasp the meaning and intention behind user queries and provide relevant and helpful responses. A large dataset of text data was incorporated into the training of ChatGPT to ensure safety and accuracy in the text generated. Although the exact amount of training data for ChatGPT has not been published, the previous GPT-3 model had 175 billion parameters and was trained with 499 billion crawled text tokens, which is substantially larger than other language models [10] like Bidirectional Encoder Representations from Transformers (BERT) [11], Robustly Optimized BERT Pretraining Approach (RoBERTa) [12], or Text-to-Text Transfer Transformer (T5) [13]. By learning the nuances of human language from this extensive dataset, ChatGPT is able to generate text that is hard to distinguish from text written by humans [14].

### Classification of Human- and AI-Generated Texts

The more ChatGPT is used in various contexts and its abilities improve, the more important it becomes to be able to identify _AI-generated_ and _human-generated_ texts. As the quality of _AI-generated_ texts improves, machines can already outperform humans at detecting generated texts [15]. Numerous tools like GPTZero2, AI Content Detector3, or GPT-2 Output Detector4 exist which aim to find out if a text has been _AI-generated_. These tools are based on analyzing text patterns. For instance, GPTZero--which is amongst the most popular AI-detection tools--uses perplexity and burstiness to identify _AI-generated_ texts. However, these tools still have limitations in terms of the detection accuracy [16].
Footnote 2: [https://gptzero.me](https://gptzero.me) Footnote 3: [https://writer.com/ai-content-detector](https://writer.com/ai-content-detector) Footnote 4: [https://openai-openai-detector.hf.space](https://openai-openai-detector.hf.space) In recent studies, approaches like XGBoost [17], decision trees [18], or transformer-based models [14, 19] have been evaluated to detect _AI-generated_ texts. [14] discussed several characteristics of _AI-generated_ texts from customer reviews and built a transformer-based classifier that was able to differentiate _AI-generated_ text from _human-generated_ text with an accuracy of 79%. In an analysis based on decision trees, [18] were able to achieve an overall accuracy of 100% combining several stylometric features (bigrams, positioning of commas, and the rate of function words) in differentiating _AI-generated_ from _human-generated_ texts. However, these analyses were limited to the Japanese language which has very different characteristics from the English language. [17] addressed the issue of generated essays. The proposed model was based on XGBoost and was able to achieve an accuracy of 98% using features generated by TF-IDF and a set of handcrafted features. In a comparison of text summarizations, [15] were able to achieve an accuracy of 90% for the classification of _AI-generated_ vs. _human-generated_ summaries using DistilBERT. One of the major downsides of the previously described studies is that they have only been tested on texts which have been generated with _basic_ prompts asking ChatGPT to simply generate or rephrase a text. To the best of our knowledge, more _advanced_ prompts which ask ChatGPT to generate a text in a certain way (e.g., in a way a human does not notice) have not been included in the evaluation. ## 3 Our Human-AI-Generated Text Corpus To derive our statistics from the features and train models for the classification of _human-_ and _AI-generated_ text, we leverage Wikipedia articles to generate an English text corpus--our _Human-AI-Generated Text Corpus_. Since the focus of our work is to recognize whether texts in an educational environment were written by humans or AI, we built a data corpus that covers various topics from this environment. For this purpose, we defined the following 10 text categories: _biology_, _chemistry_, _geography_, _history_, _IT_, _music_, _politics_, _religion_, _sports_, and _visual arts_. For every text category, we selected 10 topics. Moreover, we used different ways to generate text. Firstly, we generated texts in a _basic_ way without additional instructions. Secondly, texts were generated in an _advanced_ way using additional instructions for the generation. We also evaluated the rephrasing of texts in the same way. The following sections describe how we built our text corpus. ### Basic AI-Generated Texts When identifying _AI-generated_ text, we wanted to recognize (1) text that was generated entirely by an AI (_AI-generated_), and (2) text that was rephrased by an AI based on an existing text (_AI-rephrased_). To generate 100 _AI-generated_ texts for our 10 categories in a _basic_ way, we prompted ChatGPT "Generate a text on the following topic: \(<topic>\)". To obtain 100 _AI-rephrased_ texts for the respective categories in a _basic_ way, we used the following prompt: "Rephrase the following text: \(<\)text from Wikipedia article\(>\)" for every sample. Both prompts are illustrated in Figure 1. We used the original Wikipedia text excerpts to gather _human-generated_ texts. 
To ensure that we did not accidentally take text generated by ChatGPT, we only used text from Wikipedia articles created before November 2022, when ChatGPT was released to the public. The statistics of the _human-generated_ texts compared to the _basic AI-generated_ and _AI-rephrased_ texts are summarized in Table 1.

\begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline & \multicolumn{3}{c}{**Human**} & \multicolumn{3}{c}{**AI-generated**} & \multicolumn{3}{c}{**AI-rephrased**} \\ **Category** & **P** & **S** & **W** & **P** & **S** & **W** & **P** & **S** & **W** \\ \hline Biology & 44 & 188 & 3739 & 54 & 139 & 2500 & 21 & 96 & 1899 \\ Chemistry & 44 & 167 & 3590 & 56 & 140 & 2684 & 28 & 129 & 2539 \\ Geography & 35 & 167 & 3386 & 60 & 167 & 3006 & 27 & 114 & 2540 \\ History & 43 & 189 & 4578 & 61 & 148 & 3017 & 26 & 146 & 3205 \\ IT & 40 & 141 & 2916 & 51 & 129 & 2624 & 24 & 91 & 1872 \\ Music & 39 & 191 & 4177 & 53 & 154 & 2701 & 27 & 137 & 2900 \\ Politics & 43 & 172 & 4298 & 56 & 131 & 2866 & 25 & 104 & 2341 \\ Religion & 40 & 171 & 3796 & 51 & 138 & 2684 & 25 & 108 & 2409 \\ Sports & 51 & 204 & 4692 & 59 & 143 & 2904 & 30 & 128 & 2913 \\ Visual arts & 36 & 147 & 3165 & 54 & 136 & 2686 & 22 & 85 & 2024 \\ \hline \hline \end{tabular} \end{table} Table 1: _Basic AI-Generated/Rephrased_ Text (P = #paragraphs, S = #sentences, W = #words).

### Advanced AI-Generated Texts

Additionally, our intention was to investigate the more _advanced_ case where the AI was told to write or rephrase the text in a way that a human would not realize it was generated by an AI, as depicted in Figure 2. To get 100 _advanced AI-generated_ example texts, we asked ChatGPT "Generate a text on the following topic in a way a human would do it: \(<topic>\)" for the 10 topics of each category. Additionally, we collected 100 _advanced AI-rephrased_ example texts by asking ChatGPT "Rephrase the following text in a way a human would do it: \(<\)text from Wikipedia article\(>\)" for the 10 topics of each category.

Figure 1: Prompt and ChatGPT's Response: Basic Text Generation & Rephrasing.

Figure 2: Prompt and ChatGPT's Response: Advanced Text Generation & Rephrasing.

The statistics of the _human-generated_ texts compared to the _advanced AI-generated_ and _AI-rephrased_ texts are summarized in Table 2.

## 4 Our Features for the Classification of Human- and AI-Generated Texts

For the classification, we implemented the feature categories _perplexity features_, _semantic features_, _list lookup features_, _document features_, _error-based features_, _readability features_, _AI feedback features_, and _text vector features_, which performed particularly well in related work, plus new features which have not been analyzed so far. In this section, we will describe each feature category. The features from all categories are summarized in Table 3 at the end of this section.

### Perplexity-Based Features

_Perplexity_ is a measure of how well a language model is able to predict a sequence of words [20]. In other words, it measures how surprised the language model is when it encounters a new sequence of words. A lower perplexity indicates that the language model is better at predicting the next word in a sequence. When it comes to distinguishing between _human-generated_ and _AI-generated_ texts, one key difference is that _human-generated_ text tends to be more varied and unpredictable than _AI-generated_ text.
Human writers can use their creativity, knowledge, and experience to produce texts that are full of unexpected word combinations, ideas, and structures. On the other hand, _AI-generated_ text is often based on statistical patterns and rules and can be more predictable and repetitive. Therefore, researchers like [21], [14] and [19] used perplexity as a feature to distinguish between _human-generated_ and _AI-generated_ text. In our study, we investigated two perplexity features: The mean perplexity (\(PPL_{mean}\)) is calculated by taking the average perplexity across all the sentences in a corpus of text. The maximum perplexity (\(PPL_{max}\)) is the highest perplexity that the language model encounters when processing the corpus of text. It represents the most difficult sentence or sequence of words for the language model to predict.

\begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline & \multicolumn{3}{c}{**Human**} & \multicolumn{3}{c}{**AI-generated**} & \multicolumn{3}{c}{**AI-rephrased**} \\ **Category** & **P** & **S** & **W** & **P** & **S** & **W** & **P** & **S** & **W** \\ \hline Biology & 44 & 188 & 3739 & 47 & 111 & 2057 & 18 & 79 & 1487 \\ Chemistry & 44 & 167 & 3590 & 48 & 124 & 2374 & 22 & 103 & 1859 \\ Geography & 35 & 167 & 3386 & 49 & 136 & 2575 & 24 & 106 & 2028 \\ History & 43 & 189 & 4578 & 52 & 132 & 2583 & 22 & 113 & 2415 \\ IT & 40 & 141 & 2916 & 51 & 128 & 2538 & 16 & 86 & 1541 \\ Music & 39 & 191 & 4177 & 48 & 128 & 2426 & 21 & 109 & 2145 \\ Politics & 43 & 172 & 4298 & 52 & 127 & 2672 & 25 & 111 & 2300 \\ Religion & 40 & 171 & 3796 & 48 & 122 & 2623 & 28 & 120 & 2488 \\ Sports & 51 & 204 & 4692 & 53 & 143 & 2685 & 31 & 118 & 2407 \\ Visual arts & 36 & 147 & 3165 & 43 & 119 & 2238 & 23 & 95 & 1931 \\ \hline \hline \end{tabular} \end{table} Table 2: _Advanced AI-Generated/Rephrased_ Text (P = #paragraphs, S = #sentences, W = #words).

In our implementation, we used a Natural Language Toolkit (NLTK) [22] script to compute the perplexities of the texts using a GPT-2 model5, as in [19] and [14]. Footnote 5: [https://github.com/openai/gpt-2](https://github.com/openai/gpt-2)

Figure 3 compares the \(PPL_{mean}\) distributions in the _human-generated_, _AI-generated_ and _AI-rephrased_ texts. A high percentage of _AI-generated_ texts with values around 25 have lower perplexities than _human-generated_ texts with perplexities closer to 50. However, this is not the case for _AI-rephrased_ texts, suggesting that this feature will have more problems with _AI-rephrased_ than with _AI-generated_ texts.

### Semantic Features

_Semantic features_ refer to the attributes or properties of words or phrases that can be used to represent the meaning of the words or phrases. These properties can be the sentiment polarity and the degree of subjectivity or objectivity. While [14] and [19] used the sentiment polarity as a feature to distinguish between _human-_ and _AI-generated_ text--to the best of our knowledge--we are the first to analyze the degree of objectivity and subjectivity as a feature to differentiate between _human-generated_ and _AI-generated_ texts. Sentiment analysis is the process of automatically detecting a sentiment from textual information and presenting the information in classes such as _negative_, _neutral_ or _positive_ [23, 24, 25] or with a _sentiment score_. Applying sentiment analysis to a text can help to distinguish if it has been _human-generated_ or _AI-generated_ as reported in [14] and [19].
We analyzed two semantic features: First, we applied the sentiment analysis system from TextBlob6, a Python library for text processing operations, to retrieve a sentiment polarity score (\(sentiment_{polarity}\)) between -1 and +1, where -1 represents a very negative text and +1 a very positive text. Second, we used the same Python library to retrieve a subjectivity score (\(sentiment_{subjectivity}\)) between 0 and +1, where 0 represents a very objective text and +1 a very subjective text. Footnote 6: [https://textblob.readthedocs.io/en/dev/quickstart.html#sentiment-analysis](https://textblob.readthedocs.io/en/dev/quickstart.html#sentiment-analysis) Figure 4 illustrates that more _AI-generated_ and _AI-rephrased_ texts have higher \(sentiment_{subjectivity}\) scores than _human-generated_ texts. We explain this distribution by the fact that ChatGPT was fine-tuned using reinforcement learning from human feedback [26]. ### List Lookup Features _List lookup features_ provide information about the category of a word or character [27]. For example, if a word is found in a stop word list (e.g., "a", "an", "the", "of"), we know that it is a stop word. [17] and [28] could classify _human-_ and _AI-generated_ texts using the number of stop words (\(stopWord_{count}\)) and the number of special characters (\(specialChar_{count}\)) as features. Consequently, we used both features for our classification. As a new promising list lookup feature, we included the number of discourse markers (\(discourseMarker_{count}\)), such as "however", "furthermore", or "moreover". Additionally, we took the absolute and relative numbers of repetitions of the article's title (\(titleRepetition_{count}\), \(titleRepetition_{relative}\)) since we detected that _AI-generated_ text often repeats keywords from the title. Figure 5: \(specialChar_{count}\) Distribution. Figure 5 visualizes the \(specialChar_{count}\) as a representative of the _list lookup features_. The figure indicates that the \(specialChar_{count}\) is more widely distributed when the text is _human-generated_. ### Document Features Document features are defined by the content and the structure of a document [27]. Document features can go beyond single-word and multi-word expressions containing meta-information and corpus statistics (multiple occurrences, local syntax, word frequency, etc.). In our experiment, we used _document features_ related to the frequencies of words, sentences, punctuation marks, characters, and part-of-speech tags which were successful in the classification of _human-_ and _AI-generated_ texts in [19], [28] and [18]. Since in [28] the standard deviation of words and sentences performed well, we also included the standard deviation of the number of unique words per sentence (\(uniqWordsPerSentence_{stdev}\)) as a new feature. In addition, the number of quotation marks (\(quotation_{count}\)) is used, as we found that AI produces fewer quotation marks. For example, Figure 6 illustrates that _AI-generated_ and _AI-rephrased_ texts contain few to no quotation marks, with over 80% and over 60% of texts having no quotation marks, respectively. _Human-generated_ text on the other hand has one quotation mark in over 20% and four quotation marks in over 15% of our text examples. ### Error-Based Features We observed that in _AI-generated_ text fewer spelling and grammar errors occur than in _human-generated_ text. 
Therefore, we introduce _error-based features_ as a new feature category and test the number of spelling and grammar errors (\(grammarError_{count}\)) as well as the number of multiple blanks (\(multiBlank_{count}\)) as features from this category. To detect the spelling and grammar errors, we used _LanguageTool_7, an open-source grammar tool, also known as the spellchecker for OpenOffice. We detected multiple blanks using regular expressions. Footnote 7: [https://github.com/jxmorris12/language_tool_python](https://github.com/jxmorris12/language_tool_python)

Figure 7 demonstrates the distribution of \(grammarError_{count}\) in the _human-generated_, _AI-generated_ and _AI-rephrased_ texts. We observe that _LanguageTool_ detects more spelling and grammar errors in the _human-generated_ than in the _AI-generated_ and _AI-rephrased_ texts.

### Readability Features

Since _readability features_ were among the top 5 in [17], we also use them for our detection. Following [17], we implemented the _Flesch Reading Ease_ score (\(fleschReadingEase\)) and _Flesch-Kincaid Grade Level_ (\(fleschKincaidGradeLevel\)). The _Flesch Reading Ease_ measures the ease of readability of a text, with higher scores indicating greater ease of reading and lower scores indicating greater difficulty [29]. The _Flesch-Kincaid Grade Level_ formula provides a numerical rating equivalent to a U.S. grade level [30]. This allows educators, guardians, librarians, and others to assess the comprehensibility level of different texts and books with greater ease. The _Flesch Reading Ease_ and _Flesch-Kincaid Grade Level_ scores are calculated according to a formula that includes the number of words, sentences and syllables [29, 30]. Figure 8 depicts the _Flesch-Kincaid Grade Level_ score distribution in our data set. While a higher number of _AI-generated_ texts is on a higher level between 50-60, the \(fleschKincaidGradeLevel\) distributions of _human-generated_ and _AI-rephrased_ text look comparable.

Figure 8: \(fleschKincaidGradeLevel\) Distribution.

### AI Feedback Features

Another novel feature that--to the best of our knowledge--has not yet been used in the detection of _AI-generated_ text is the _AI feedback feature_. For this feature, we asked ChatGPT directly if it generated a text. If ChatGPT answers 'yes', we assign the value 2 to the feature, if it answers 'no', we assign the value 0. In case ChatGPT answers that it is not sure, we assign 1. However, looking at the distribution of the numerical values, Figure 9 shows that the \(AIFeedback\) feature does not seem to discriminate between _AI-generated_ and _AI-rephrased_ text.

Figure 9: \(AIFeedback\) Distribution.

### Text Vector Features

To classify the texts using their content, we also analyzed _text vector features_. _TF-IDF_ performed well in [17] and [31]. To take advantage of the benefits of the semantic vector space, we also experimented with _Sentence-BERT_ [32]. As we detected that _AI-generated_ text often contains repetitive phrases or patterns, we also computed the average distance of Sentence-BERT vectors (_Sentence-BERT-dist_) to detect repetitions, as the word embeddings of similar sentences are closer in the semantic vector space.

### Summary of Our Analyzed Features

In our experiments, we analyzed a total of 37 features which can be grouped into 8 different categories. Besides those features which have already been studied in related analyses, we included 10 new features. All features which were subject to our analyses are summarized in Table 3.
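To illustrate how several of the described features can be computed, the following condensed Python sketch covers one or two features from most categories. TextBlob and LanguageTool are the libraries named above; `textstat` is a substitution for the Flesch formulas, the stop-word list is an illustrative subset, and the exact definition of a special character is an assumption.

```python
import re
from textblob import TextBlob     # sentiment polarity & subjectivity
import language_tool_python        # spelling/grammar error detection
import textstat                    # stand-in for the Flesch readability formulas

tool = language_tool_python.LanguageTool("en-US")
STOP_WORDS = {"a", "an", "the", "of", "and", "to", "in", "is", "it"}  # illustrative subset

def extract_features(text: str, title: str) -> dict:
    words = text.lower().split()
    blob = TextBlob(text)
    return {
        # semantic features
        "sentiment_polarity": blob.sentiment.polarity,          # in [-1, +1]
        "sentiment_subjectivity": blob.sentiment.subjectivity,  # in [0, +1]
        # list lookup features
        "stopWord_count": sum(w in STOP_WORDS for w in words),
        "specialChar_count": sum((not c.isalnum()) and (not c.isspace()) for c in text),
        "titleRepetition_count": text.lower().count(title.lower()),
        # document features
        "quotation_count": text.count('"'),
        # error-based features
        "grammarError_count": len(tool.check(text)),
        "multiBlank_count": len(re.findall(r" {2,}", text)),
        # readability features
        "fleschReadingEase": textstat.flesch_reading_ease(text),
        "fleschKincaidGradeLevel": textstat.flesch_kincaid_grade(text),
    }
```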
## 5 Experiments and Results

In this section, we will describe our experiments with the different feature categories and three different classification approaches: the two more traditional approaches XGBoost and random forest (RF), as well as a neural network-based approach with multilayer perceptrons (MLP). As in other studies like [14, 19, 28], we evaluated our systems' classification performance with accuracy (_Acc_) and F1-score (_F1_). Tables 4 and 5 show the _Acc_ and _F1_ of detecting the _basic_ way of text generation and rephrasing, i.e., without any additional instructions. Tables 6 and 7 refer to the _advanced_ way, i.e., where the AI was told to write or rephrase the text in a way that a human would not realize it was generated by an AI.

First, we built _basic text generation detection systems_ which were trained, fine-tuned, and tested with our _human-generated_ and _basic AI-generated_ texts. Second, we implemented _basic text rephrasing detection systems_ which were trained, fine-tuned, and tested with our _human-generated_ and _basic AI-rephrased_ texts. Third, we built _advanced text generation detection systems_ which were trained, fine-tuned, and tested with our _human-generated_ and _advanced AI-generated_ texts. Finally, we built _advanced text rephrasing detection systems_ which were trained, fine-tuned, and tested with our _human-generated_ and _advanced AI-rephrased_ texts.

To provide stable results, we performed a 5-fold cross-validation, randomly dividing our corpus in each fold into 80% for training, 10% as a validation set to optimize the hyperparameters, and an unseen test set containing 10% of the texts. The numerical values in all tables are the average of the test set results. The best performances are highlighted in bold. For a comparison with the state-of-the-art technology, we additionally report GPTZero's performances on our texts.
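For illustration, the evaluation protocol above could be implemented along the following lines; this is a minimal sketch assuming a feature matrix `X` and binary labels `y` (human vs. AI) have already been extracted, and all classifier settings are placeholders rather than the tuned hyperparameters.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from xgboost import XGBClassifier

def run_fold(X, y, seed):
    # 80% train / 10% validation / 10% unseen test, as described above.
    X_rest, X_test, y_rest, y_test = train_test_split(
        X, y, test_size=0.10, stratify=y, random_state=seed)
    X_train, X_val, y_train, y_val = train_test_split(
        X_rest, y_rest, test_size=1 / 9, stratify=y_rest, random_state=seed)

    classifiers = {
        "XGBoost": XGBClassifier(n_estimators=200),       # placeholder setting
        "RF": RandomForestClassifier(n_estimators=200),   # placeholder setting
        "MLP": MLPClassifier(hidden_layer_sizes=(64,), max_iter=500),
    }
    scores = {}
    for name, clf in classifiers.items():
        clf.fit(X_train, y_train)   # (X_val, y_val) would be used to tune
        pred = clf.predict(X_test)  # the hyperparameters before this step
        scores[name] = (accuracy_score(y_test, pred), f1_score(y_test, pred))
    return scores

# Acc/F1 averaged over 5 random folds, as reported in Tables 4-7.
# fold_scores = [run_fold(X, y, seed) for seed in range(5)]
```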
Table 4 demonstrates that the best performing feature categories are the combination of traditional and our new features from the _document_ category (\(Document_{traditional+new}\)) and the features from the _text vector_ category (\(TextVector_{traditional+new}\)). With 97% _Acc_ and 97% _F1_, \(Document_{traditional+new}\) performs substantially better with MLP than with XGBoost and RF, while \(TextVector_{traditional+new}\) is most successful with RF (_Acc_=95.0%, _F1_=94.9%). Most of our systems were able to outperform GPTZero (\(Acc_{GPTZero}\) = 76.0%, \(F1_{GPTZero}\) = 78.9%). Our best system \(All_{traditional+new}\) even performs better than GPTZero by 28.9% relative in _Acc_ and 24.2% relative in _F1_.

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{2}{c}{**XGBoost**} & \multicolumn{2}{c}{**RF**} & \multicolumn{2}{c}{**MLP**} \\ **Feature Category** & **Acc** & **F1** & **Acc** & **F1** & **Acc** & **F1** \\ \hline \(Perplexity_{traditional}\) & 83.0\% & 82.2\% & **87.0\%** & **85.3\%** & 82.0\% & 82.1\% \\ \hline \(Semantic_{traditional}\) & 62.0\% & 62.3\% & 66.0\% & 63.6\% & 65.0\% & 61.6\% \\ \(Semantic_{traditional+new}\) & 72.0\% & 72.9\% & **75.0\%** & **75.6\%** & 73.0\% & 72.3\% \\ \hline \(ListLookup_{traditional}\) & 77.0\% & 78.0\% & 82.0\% & 83.3\% & **84.0\%** & **83.7\%** \\ \(ListLookup_{traditional+new}\) & 83.0\% & 82.8\% & 80.0\% & 81.1\% & 81.0\% & 82.9\% \\ \hline \(Document_{traditional}\) & 90.0\% & 90.9\% & 91.0\% & 91.4\% & 94.0\% & 94.1\% \\ \(Document_{traditional+new}\) & 90.0\% & 90.9\% & 93.0\% & 93.3\% & **97.0\%** & **97.0\%** \\ \hline \(ErrorBased_{new}\) & 55.0\% & 61.7\% & 55.0\% & 61.7\% & **56.0\%** & **63.9\%** \\ \hline \(Readability_{traditional}\) & 60.0\% & 56.3\% & **63.0\%** & **59.3\%** & 60.0\% & 56.8\% \\ \hline \(AIFeedback_{new}\) & 62.0\% & 67.1\% & 62.0\% & 67.1\% & **62.0\%** & **68.1\%** \\ \hline \(TextVector_{traditional}\) & 90.0\% & 89.9\% & **95.0\%** & 94.7\% & 86.0\% & 86.3\% \\ \(TextVector_{traditional+new}\) & 90.0\% & 89.9\% & **95.0\%** & **94.9\%** & 81.0\% & 80.6\% \\ \hline \(All_{traditional}\) & 92.0\% & 92.7\% & 97.0\% & 97.0\% & 89.0\% & 89.0\% \\ \(All_{traditional+new}\) & 90.0\% & 90.9\% & **98.0\%** & **98.0\%** & 87.0\% & 87.8\% \\ \hline \hline \end{tabular} \end{table}
Table 4: Results for Basic Text Generation: XGBoost vs. RF vs. MLP (\(Acc_{GPTZero}\) = 76.0%, \(F1_{GPTZero}\) = 78.9%).

Table 5 indicates that the results of the _basic text rephrasing detection systems_ are consistently worse than the results of the _basic text generation detection systems_. The best performing feature categories are \(ListLookup_{traditional}\) (\(Acc\)=73.0%, _F1_=74.6%), \(Document_{traditional}\) (\(Acc\)=75.0%, _F1_=73.9%), and \(TextVector_{traditional+new}\) (\(Acc\)=79.0%, _F1_=78.2%). We observe that for \(ErrorBased_{new}\) the _Acc_ and _F1_ values of our three classifiers are the same. This is due to the fact that \(ErrorBased_{new}\) has only 2 dimensions and the classifiers then decide for the same classification. This time XGBoost outperforms RF and MLP. All our systems were able to outperform GPTZero (\(Acc_{GPTZero}\)=43.0%, \(F1_{GPTZero}\)=27.8%). Our best system \(All_{traditional}\) (\(Acc\)=79.0%, \(F1\)=78.9%) performs much better than GPTZero by 83.7% relative in _Acc_ and even 183.8% relative in _F1_.

Table 6 shows that the results of our _advanced text generation detection systems_ are almost as good as those of our _basic text generation detection systems_, which demonstrates that the detection of the _advanced AI-generated_ text is not a major challenge for our features.
The best performing feature categories are \(TextVector_{traditional+new}\) (\(Acc\)=97.0%, _F1_=96.9%), \(Document_{traditional}\) (\(Acc\)=93.0%, _F1_=93.6%), \(Perplexity_{traditional}\) (\(Acc\)=85.0%, _F1_=83.8%), and \(ListLookup_{traditional+new}\) (\(Acc\)=83.0%, _F1_=84.8%). Again, among XGBoost, RF, and MLP, no classifier shows the best results across all feature categories. Some systems are better than GPTZero (\(Acc_{GPTZero}\)=79.0%, \(F1_{GPTZero}\)=82.7%). Our best system \(TextVector_{traditional+new}\) (\(Acc\)=97.0%, _F1_=96.9%) outperforms GPTZero by 22.8% relative in _Acc_ and 17.2% relative in _F1_.

Table 7 indicates that the results of the _advanced text rephrasing detection systems_ are worse than those of the _advanced text generation detection systems_, but even slightly better than the results of the _basic text rephrasing detection systems_, which demonstrates that the detection of the _advanced AI-rephrased_ text is not a major challenge for our features. The best performing feature categories are \(TextVector_{traditional}\) (\(Acc\)=80.0%, _F1_=77.6%), \(ListLookup_{traditional+new}\) (\(Acc\)=76.0%, _F1_=75.5%), and \(Document_{traditional+new}\) (\(Acc\)=77.0%, _F1_=77.5%). Among XGBoost, RF, and MLP, no classifier shows the best results across all feature categories. Again, all our systems are better than GPTZero (\(Acc_{GPTZero}\)=52.0%, \(F1_{GPTZero}\)=45.8%). Our best system \(All_{traditional+new}\) (\(Acc\)=82.0%, _F1_=81.7%) performs much better than GPTZero: 57.7% relative in _Acc_ (i.e., (82.0 - 52.0)/52.0 ≈ 57.7%) and even 78.4% relative in _F1_.

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{2}{c}{**XGBoost**} & \multicolumn{2}{c}{**RF**} & \multicolumn{2}{c}{**MLP**} \\ **Feature Category** & **Acc** & **F1** & **Acc** & **F1** & **Acc** & **F1** \\ \hline \(Perplexity_{traditional}\) & 52.0\% & 48.7\% & 55.0\% & 54.6\% & **56.0\%** & **63.2\%** \\ \hline \(Semantic_{traditional}\) & 63.0\% & 61.1\% & 66.0\% & 66.0\% & 59.0\% & 61.7\% \\ \(Semantic_{traditional+new}\) & **66.0\%** & **64.4\%** & 66.0\% & 64.3\% & 52.0\% & 54.3\% \\ \hline \(ListLookup_{traditional}\) & 72.0\% & **74.6\%** & 69.0\% & 69.5\% & **73.0\%** & 74.2\% \\ \(ListLookup_{traditional+new}\) & 72.0\% & 73.7\% & 66.0\% & 64.9\% & 64.0\% & 63.9\% \\ \hline \(Document_{traditional}\) & **75.0\%** & **73.9\%** & 73.0\% & 73.0\% & 73.0\% & 71.2\% \\ \(Document_{traditional+new}\) & 72.0\% & 70.9\% & 69.0\% & 68.2\% & 74.0\% & 73.4\% \\ \hline \(ErrorBased_{new}\) & **62.0\%** & **68.0\%** & **62.0\%** & **68.0\%** & **62.0\%** & **68.0\%** \\ \hline \(Readability_{traditional}\) & **54.0\%** & **51.1\%** & **54.0\%** & 47.8\% & 50.0\% & 50.2\% \\ \hline \(AIFeedback_{new}\) & **52.0\%** & **50.9\%** & 50.0\% & 39.8\% & 45.0\% & 30.1\% \\ \hline \(TextVector_{traditional}\) & 75.0\% & 73.2\% & 77.0\% & 72.2\% & 68.0\% & 63.7\% \\ \(TextVector_{traditional+new}\) & **79.0\%** & **78.2\%** & 75.0\% & 71.0\% & 69.0\% & 65.1\% \\ \hline \(All_{traditional}\) & **79.0\%** & **78.9\%** & 73.0\% & 71.6\% & 66.0\% & 65.6\% \\ \(All_{traditional+new}\) & 77.0\% & 77.6\% & 71.0\% & 69.8\% & 72.0\% & 71.9\% \\ \hline \hline \end{tabular} \end{table}
Table 5: Results for Basic Text Rephrasing: XGBoost vs. RF vs. MLP (\(Acc_{GPTZero}\) = 43.0%, \(F1_{GPTZero}\) = 27.8%).

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{2}{c}{**XGBoost**} & \multicolumn{2}{c}{**RF**} & \multicolumn{2}{c}{**MLP**} \\ **Feature Category** & **Acc** & **F1** & **Acc** & **F1** & **Acc** & **F1** \\ \hline \(Perplexity_{traditional}\) & 83.0\% & 82.2\% & **85.0\%** & **83.8\%** & 83.0\% & 82.6\% \\ \hline \(Semantic_{traditional}\) & 68.0\% & 65.9\% & 68.0\% & 69.0\% & 72.0\% & 70.3\% \\ \(Semantic_{traditional+new}\) & 75.0\% & 71.1\% & **76.0\%** & **75.1\%** & 73.0\% & 70.2\% \\ \hline \(ListLookup_{traditional}\) & 75.0\% & 76.7\% & 75.0\% & 75.3\% & 78.0\% & 79.0\% \\ \(ListLookup_{traditional+new}\) & **83.0\%** & **84.8\%** & 82.0\% & 82.6\% & 73.0\% & 73.2\% \\ \hline \(Document_{traditional}\) & 90.0\% & 90.7\% & **93.0\%** & **93.6\%** & 90.0\% & 89.4\% \\ \(Document_{traditional+new}\) & 90.0\% & 90.7\% & 91.0\% & 91.8\% & 92.0\% & 91.8\% \\ \hline \(ErrorBased_{new}\) & **62.0\%** & **71.7\%** & **62.0\%** & **71.7\%** & 59.0\% & 67.8\% \\ \hline \(Readability_{traditional}\) & 60.0\% & 59.7\% & 59.0\% & 56.8\% & **65.0\%** & **63.2\%** \\ \hline \(AIFeedback_{new}\) & **66.0\%** & **71.1\%** & **66.0\%** & **71.1\%** & **66.0\%** & **71.1\%** \\ \hline \(TextVector_{traditional}\) & 90.0\% & 89.1\% & 90.0\% & 89.6\% & 79.0\% & 79.2\% \\ \(TextVector_{traditional+new}\) & 90.0\% & 89.1\% & **97.0\%** & **96.9\%** & 75.0\% & 73.8\% \\ \hline \(All_{traditional}\) & 89.0\% & 90.0\% & **95.0\%** & 95.0\% & 86.0\% & 86.0\% \\ \(All_{traditional+new}\) & 93.0\% & 94.0\% & **95.0\%** & **95.9\%** & 84.0\% & 82.5\% \\ \hline \hline \end{tabular} \end{table}
Table 6: Results for Advanced Text Generation: XGBoost vs. RF vs. MLP (\(Acc_{GPTZero}\) = 79.0%, \(F1_{GPTZero}\) = 82.7%).

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{2}{c}{**XGBoost**} & \multicolumn{2}{c}{**RF**} & \multicolumn{2}{c}{**MLP**} \\ **Feature Category** & **Acc** & **F1** & **Acc** & **F1** & **Acc** & **F1** \\ \hline \(Perplexity_{traditional}\) & **66.0\%** & 65.6\% & 65.0\% & **65.8\%** & 60.0\% & 63.7\% \\ \hline \(Semantic_{traditional}\) & 58.0\% & 53.6\% & 61.0\% & 59.4\% & **64.0\%** & **67.1\%** \\ \(Semantic_{traditional+new}\) & 55.0\% & 56.3\% & 63.0\% & 61.5\% & 61.0\% & 63.6\% \\ \hline \(ListLookup_{traditional}\) & 73.0\% & 73.3\% & 75.0\% & 75.3\% & 69.0\% & 63.3\% \\ \(ListLookup_{traditional+new}\) & **76.0\%** & **75.5\%** & 75.0\% & 75.3\% & 72.0\% & 70.4\% \\ \hline \(Document_{traditional}\) & **77.0\%** & 76.7\% & 76.0\% & 77.0\% & 76.0\% & 75.4\% \\ \(Document_{traditional+new}\) & 76.0\% & 74.9\% & 76.0\% & 76.2\% & **77.0\%** & **77.5\%** \\ \hline \(ErrorBased_{new}\) & **62.0\%** & **71.7\%** & **62.0\%** & **71.7\%** & 55.0\% & 62.2\% \\ \hline \(Readability_{traditional}\) & 58.0\% & 55.0\% & **67.0\%** & 66.0\% & **67.0\%** & **68.0\%** \\ \hline \(AIFeedback_{new}\) & **58.0\%** & **61.7\%** & **58.0\%** & **61.7\%** & **58.0\%** & **61.7\%** \\ \hline \(TextVector_{traditional}\) & 77.0\% & 72.6\% & **80.0\%** & **77.6\%** & 61.0\% & 56.0\% \\ \(TextVector_{traditional+new}\) & 71.0\% & 66.5\% & 78.0\% & 75.0\% & 73.0\% & 71.3\% \\ \hline \(All_{traditional}\) & 81.0\% & 80.6\% & 76.0\% & 76.2\% & 71.0\% & 69.0\% \\ \(All_{traditional+new}\) & **82.0\%** & **81.7\%** & 76.0\% & 76.3\% & 77.0\% & 75.2\% \\ \hline \hline \end{tabular} \end{table}
Table 7: Results for Advanced Text Rephrasing: XGBoost vs. RF vs. MLP (\(Acc_{GPTZero}\) = 52.0%, \(F1_{GPTZero}\) = 45.8%).
## 6 Conclusion and Future Work

In this paper, we explored traditional and new features to detect _AI-generated_ texts. We produced a new data corpus covering 10 school topics. We were able to achieve an F1-score of 98.0% for _basic human-generated_/_AI-generated_ texts and an F1-score of 78.9% for _basic human-generated_/_AI-rephrased_ texts. Furthermore, we reported an F1-score of 96.9% for _advanced human-generated_/_AI-generated_ texts and an F1-score of 81.7% for _advanced human-generated_/_AI-rephrased_ texts. Our best _basic_ text rephrasing detection system even outperforms GPTZero by 183.8% relative in F1-score. Our results show that the new features can help to improve classification performance.

As tools like ChatGPT are nowadays easy to access, generated exams or student papers have become a serious issue, as they can undermine students' learning outcomes and academic integrity. Our results can make an important contribution to the detection of _AI-generated_ texts and help teachers identify generated texts.

So far, we have investigated features to detect _AI-generated_ and _AI-rephrased_ text in English; additionally, we plan to classify text in other languages. Moreover, we demonstrated that the performances of the systems which combine all features are very close. Consequently, in future work it would be interesting to consider a system combination, which has the potential to further increase performance. While we have already analyzed two types of prompts--the _basic_ and the _advanced_ variants--our goal is to investigate further variants and their impact on classification performance.
2306.02182
FlairNLP at SemEval-2023 Task 6b: Extraction of Legal Named Entities from Legal Texts using Contextual String Embeddings
Indian court legal texts and processes are essential towards the integrity of the judicial system and towards maintaining the social and political order of the nation. Due to the increase in the number of pending court cases, there is an urgent need to develop tools to automate many of the legal processes with the knowledge of artificial intelligence. In this paper, we employ knowledge extraction techniques, especially the named entity extraction of legal entities within court case judgements. We evaluate several state-of-the-art architectures in the realm of sequence labeling using models trained on a curated dataset of legal texts. We observe that a Bi-LSTM model trained on Flair Embeddings achieves the best results, and we also publish the BIO-formatted dataset as part of this paper.
Vinay N Ramesh, Rohan Eswara
2023-06-03T19:38:04Z
http://arxiv.org/abs/2306.02182v1
FlairNLP at SemEval-2023 Task 6b: Extraction of Legal Named Entities from Legal Texts using Contextual String Embeddings

###### Abstract

Indian court legal texts and processes are essential towards the integrity of the judicial system and towards maintaining the social and political order of the nation. Due to the increase in the number of pending court cases, there is an urgent need to develop tools to automate many of the legal processes with the knowledge of artificial intelligence. In this paper, we employ knowledge extraction techniques, especially the named entity extraction of legal entities within court case judgements. We evaluate several state-of-the-art architectures in the realm of sequence labeling using models trained on a curated dataset of legal texts. We observe that a Bi-LSTM model trained on Flair Embeddings achieves the best results, and we also publish the BIO-formatted dataset as part of this paper.

## 1 Introduction

The Legal Entity Extraction task Kalamkar et al. (2022) aims at developing a tool for the identification of named entities within Indian legal texts. Much of the Indian legal text, such as court judgements, is in English; however, it assumes a very unique format. This unstructured nature of Indian court judgements leads to a difficulty in parsing using simpler techniques such as regular expressions. Moreover, the entities which we are interested in extracting are unique to the domain, and already existing baseline models prove to be ineffective.

Techniques in NLP have made tremendous leaps in the last decade. While in the past models would struggle to classify the sentiment of a sentence, the models today can classify text and generate sentences with almost no context Topal et al. (2021). Many newer language models are trained on a general domain, but further fine-tuned to be used for a specific domain (e.g., science) Jeong and Kim (2022). Indeed, these methods are achieving state-of-the-art results on Named Entity Recognition, Dependency Parsing, and Relation Classification Zhou et al. (2016) tasks.

In this paper, we propose training a deep neural language model using a labeled legal dataset for the task of Named Entity Recognition. We use a Bi-LSTM layer for token vectorization followed by a CRF layer for sequence labeling. To account for information from contexts, we use Flair embeddings Akbik et al. (2019), which are currently the state of the art in sequence labeling tasks. Moreover, we curate the dataset used for training in the IOB format Jiang et al. (2016) and release the dataset to the community. Besides the approach described above, we make the following observations from our experiments:

* Contextual string embeddings provide context to the sequence labeling tasks, improving the accuracy of identification of custom named entities.
* The Bi-LSTM layer uses the context in both the forward and the backward direction to generate a context vector for individual tokens.
* The CRF layer uses these token probabilities to obtain the best path of sequence labels.

We also make the code available on this repository1.

Footnote 1: [https://github.com/VinayMR/legaleval-2023](https://github.com/VinayMR/legaleval-2023)

## 2 Background

Named Entity Recognition (NER) Nadeau and Sekine (2007) is an important natural language task which is used in Question Answering, Information Retrieval, and Co-reference Resolution. Identification of named entities also paves the way for word sense disambiguation and summarization tasks (Aliwy et al., 2021).
Legal NER has been a topic of interest in the research community. (Dozier et al., 2010) introduces NER on legal text as well as entity linking and resolution of those named entities. They categorize US legal texts into 5 classes - judges, attorneys, companies, courts, and jurisdictions. In the context of the Indian legal system, (Kalamkar et al., 2022) introduces structuring court judgements that are segmented into topical and coherent parts. They show the application of rhetorical roles to improve performance on legal summarization and judgement prediction. (Paul et al., 2022) proposes using a graph-based model for the task of legal statute identification. They enhance their learning by using the citation networks of legal documents along with textual data. In the space of court judgement prediction, (Malik et al., 2021) establishes a baseline of 78 percent accuracy. (Chalkidis et al., 2020) introduces LegalBERT, a BERT model trained on a legal corpus for specific downstream tasks. We build on the existing knowledge of employing pre-trained models on a specific domain, along with contextual string embeddings, to train a Bi-LSTM CRF model. In the domain of legal NER, we match the state-of-the-art results seen earlier.

## 3 Model Architecture

We introduce a contextual-string-embedding-based deep neural architecture for the task of legal named entity recognition. Unlike many other language models (Devlin et al., 2018) trained on large corpora of text, we employ a character-based language model. These contextual string embeddings allow us to pre-train on large, unlabeled corpora as well as learn different embeddings for the same words depending on the context. Figure 1 shows the architecture of the model. Each input token \(X_{i}\) is passed through an embedding layer to get a vector representation. This is then provided as input to a Bi-LSTM layer which learns the contextual information of the words in a sentence. The CRF layer is then trained to learn the best path sequence from the output of the LSTM layer.

Figure 1: Model Architecture.

### Problem Statement

Formally introducing the problem, we have a set of tokens \(X=x_{1},x_{2},\ldots,x_{n}\) for which we need to identify spans of predefined entities. As per the task, we have 14 classes of entities to categorize: COURT, PETITIONER, RESPONDENT, JUDGE, LAWYER, DATE, ORG, GPE, STATUTE, PROVISION, PRECEDENT, CASENUMBER, WITNESS, and OTHERPERSON. We use the IOB-formatted dataset for training; therefore the number of classes is effectively 29 (a B- and an I- tag for each of the 14 entity classes, plus the O tag). We train a sequence labeling model to identify the named entity for a span of tokens and minimize the Viterbi loss.

### Data Preparation

The dataset consists of 11970 samples found in the Preamble and the Judgement, where each sample is labeled for named entities. The dataset also has an equal distribution of classes to avoid problems concerning Imbalanced Classification (Kaur et al., 2019). Figures 2 and 3 illustrate the class distribution in our training and validation datasets, respectively. For training, we parse each of the samples and convert it to the IOB format, where each token of a sample is on a new line identified by its corresponding tag. We remove stop words from each of the sentences and also purge all white-space characters.

### Mathematical Formulation

#### 3.3.1 Bi-LSTM networks

LSTMs are variants of Recurrent Neural Networks that have the ability to learn long-term dependencies in sequential data.
The LSTM units contains special gates to control the flow of information into and out of these LSTM units, which are eventually used to form the LSTM network. Two networks stacked form the bidirectional LSTM, which learns contexts from both directions. This output is fed to the following CRF layer to predict the label sequence. The equations to update an LSTM unit or cell at each time step \(t\) is given below : \[i_{t}=\sigma(W_{i}[x_{t},h_{t-1}]+b_{i}), \tag{1}\] \[f_{t}=\sigma(W_{f}[x_{t},h_{t-1}]+b_{f}), \tag{2}\] \[o_{t}=\sigma(W_{o}[x_{t},h_{t-1}]+b_{o}), \tag{3}\] \[\text{\textasciitif{c}}^{-}=tanh(W_{c}[x_{t},h_{t-1}]+b_{c}), \tag{4}\] \[c_{t}=f_{t}\odot c_{t-1}+i_{t}\odot\textasciitif{c}^{-}t, \tag{5}\] \[ht=o_{t}\odot tanh(c_{t}) \tag{6}\] #### 3.3.2 Conditional Random Fields Assuming that a sequence of input words **X** = \(x_{1},x_{2},x_{3}.....x_{n}\) needs to be labeled a sequence of output tags **Y** = \(y_{1},y_{2},y_{3}.....y_{n}\), then we can define Conditional Random Fields as discriminative sequence models that computes the posterior probability p(**Y** | **X**) directly, and thereby learns to differentiate between the possible tag sequences. The highest posterior probability is chosen as the best sequence. ## 4 Experimental Setup and Results We train our model on a 16GB RAM, 4-core x86 CPU on the dataset prepared during the staging step. The training details are mentioned below. ### Stacked Embeddings As many sequence labeling models often combine different types of embeddings by concatenating each embedding vector to form the final word vectors. We similarly experiment with different stacked embeddings. We add classic word embeddings such as Glove which can yield greater latent word-level semantics. ### Training The dataset consists of 9896 labeled training samples of the legal documents. We also split the dataset into validation and test sets to observe the F1 scores during training. Table 1. lists the distribution of classes in each of the sets. The dev and test data label distribution are also similar to that \begin{table} \begin{tabular}{|c|c|c|} \hline **Class** & **Training** & **Validation** \\ \hline Court & 2367 & 296 \\ \hline Petitioner & 3067 & 211 \\ \hline Respondent & 3862 & 296 \\ \hline Lawyer & 3503 & 585 \\ \hline Judge & 2324 & 174 \\ \hline Org & 1441 & 157 \\ \hline Other & 2653 & 276 \\ \hline Witness & 881 & 58 \\ \hline GPE & 1398 & 183 \\ \hline Statute & 1804 & 222 \\ \hline Date & 1880 & 218 \\ \hline Provision & 2384 & 258 \\ \hline Precedent & 1350 & 175 \\ \hline CaseNumber & 1038 & 121 \\ \hline \end{tabular} \end{table} Table 1: Class Distribution Figure 3: Validation Class Distribution Figure 2: Training Class Distribution of training data. Table 2. summarizes the hyper-parameters that were selected for the best performing model. After obtaining the optimal values for the hyper-parameters, validation set is combined with the training set and the model is trained again to evaluate the final performance of the model. We record F1 scores and accuracy of the model across the validation datasets on every epoch. We adopt early stopping of training by checking the validation accuracy scores, so to avoid over-fitting on the training set. ### Analysis of Results Our experimental results are summarized in Table 3. We find that this approach achieves 72% F1-scores in the legal entity labeling task and that the proposed contextual string embeddings for the model is indeed useful for sequence labeling. In figure 4. 
### Analysis of Results

Our experimental results are summarized in Table 3. We find that this approach achieves 72% F1-score in the legal entity labeling task and that the proposed contextual string embeddings are indeed useful for sequence labeling. In Figure 4 we plot the training and the validation loss against the number of training epochs. When we observe the validation loss beginning to rise, we save the model at that point as the best generalized model and report its scores.

## 5 Conclusion

In this paper, we developed a statistical Named Entity Recognition model for labeling legal documents for the LegalNER task. We constructed our model using two LSTM layers, one in each direction, to create a context vector for each token, and used a CRF layer to find the best label sequence. We also incorporated contextual string embeddings as the input to the LSTM layer, which proved effective for vectorizing polysemous tokens. In addition, we produce an IOB-formatted legal dataset which was used during the training stages of the model. We show that the system produces results with 75% F1-score with respect to legal NER. This is an important preprocessing step for many NLP tasks, ranging from chatbots to information extraction and entity linking. We believe this can lead to wider adoption of natural language techniques in legal domains.
2307.04802
Out-of-equilibrium dynamics of quantum many-body systems with long-range interactions
Experimental progress in atomic, molecular, and optical platforms in the last decade has stimulated strong and broad interest in the quantum coherent dynamics of many long-range interacting particles. The prominent collective character of these systems enables novel non-equilibrium phenomena with no counterpart in conventional quantum systems with local interactions. Much of the theory work in this area either focussed on the impact of variable-range interaction tails on the physics of local interactions or relied on mean-field-like descriptions based on the opposite limit of all-to-all infinite-range interactions. In this Report, we present a systematic and organic review of recent advances in the field. Working with prototypical interacting quantum spin lattices without disorder, our presentation hinges upon a versatile theoretical formalism that interpolates between the few-body mean-field physics and the many-body physics of quasi-local interactions. Such a formalism allows us to connect these two regimes, providing both a formal quantitative tool and basic physical intuition. We leverage this unifying framework to review several findings of the last decade, including the peculiar non-ballistic spreading of quantum correlations, counter-intuitive slowdown of entanglement dynamics, suppression of thermalization and equilibration, anomalous scaling of defects upon traversing criticality, dynamical phase transitions, and genuinely non-equilibrium phases stabilized by periodic driving. The style of this Report is on the pedagogical side, which makes it accessible to readers without previous experience in the subject matter.
Nicolò Defenu, Alessio Lerose, Silvia Pappalardi
2023-07-10T18:00:16Z
http://arxiv.org/abs/2307.04802v2
# Out-of-equilibrium dynamics of quantum many-body systems with long-range interactions

###### Abstract

Experimental progress in atomic, molecular, and optical platforms in the last decade has stimulated strong and broad interest in the quantum coherent dynamics of many _long-range interacting_ particles. The prominent collective character of these systems enables novel non-equilibrium phenomena with no counterpart in conventional quantum systems with local interactions. Much of the theory work in this area either focussed on the impact of variable-range interaction tails on the physics of local interactions or relied on mean-field-like descriptions based on the opposite limit of all-to-all infinite-range interactions. In this Report, we present a systematic and organic review of recent advances in the field. Working with prototypical interacting quantum spin lattices without disorder, our presentation hinges upon a versatile theoretical formalism that interpolates between the few-body mean-field physics and the many-body physics of quasi-local interactions. Such a formalism allows us to connect these two regimes, providing both a formal quantitative tool and basic physical intuition. We leverage this unifying framework to review several findings of the last decade, including the peculiar non-ballistic spreading of quantum correlations, counter-intuitive slowdown of entanglement dynamics, suppression of thermalization and equilibration, anomalous scaling of defects upon traversing criticality, dynamical phase transitions, and genuinely non-equilibrium phases stabilized by periodic driving. The style of this Report is on the pedagogical side, which makes it accessible to readers without previous experience in the subject matter.

+ Footnote †: journal: Elsevier

###### Contents

* 1 Introduction
* 2 Equilibrium properties of long-range interacting quantum spin systems
  * 2.1 Variable-range quantum XY model
  * 2.2 Equilibrium phase diagram in a nutshell
  * 2.3 Low-energy spectrum with infinite-range interactions (\(\alpha=0\))
    * 2.3.1 Mean-field theory as an exact classical limit
    * 2.3.2 Collective quantum fluctuations and excitations
    * 2.3.3 "Spin-wave" excitations
  * 2.4 Finite-range interactions (\(\alpha>0\))
    * 2.4.1 Perturbation to mean-field
    * 2.4.2 Quantum paramagnetic phase
    * 2.4.3 Quantum ferromagnetic phase
  * 2.5 Structure of the spectrum beyond linear spin-wave theory
  * Loschmidt echo
    * 3.4.1 Spherical spin-wave theory
    * 3.4.2 Step approximation
    * 3.4.3 Strong long-range regime
  * Order parameter
    * 4.1.3 Semiclassical dynamics of quantum fluctuations
    * 4.1.4 Scrambling dynamics
    * 4.1.5 Entanglement dynamics
  * 4.2 Quench dynamics of long-range interacting spin systems (\(\alpha>0\))
    * 4.2.1 Dynamics of quantum fluctuations with finite interaction range
    * 4.2.2 Prethermal freezing of spin-wave excitations
    * 4.2.3 Impact of finite-range interactions on dynamical phase transitions
    * 4.2.4 Scrambling dynamics with variable-range interactions
    * 4.2.5 Entanglement entropy dynamics: Spin-squeezing vs Quasiparticle picture
* 5 Dynamical phases induced by periodic driving
  * 5.1 Kapitza phases
    * 5.1.1 Fully-connected limit \(\alpha=0\): Non-equilibrium phases by driving
    * 5.1.2 Quantum many-body Kapitza phases for \(\alpha>0\)
    * 5.1.3 Prethermalization and heating
  * 5.2 Discrete time crystals
    * 5.2.1 Mean-field DTC
    * 5.2.2 Finite-size and finite-range effects
    * 5.2.3 Order parameter
* 6 Conclusions and perspectives
* A Semiclassical spectrum

## 1 Introduction
An increasing interest in quantum many-body physics with long-range interactions is being driven by growing experimental capabilities in controlling and manipulating atomic, molecular, and optical (AMO) systems. Currently, various platforms such as Rydberg atoms, dipolar quantum gases, polar molecules, quantum gases in optical cavities, and trapped ions have native two-body long-range interactions which can be modeled as algebraically decaying \(J/(\Delta r)^{\alpha}\) with the distance \(\Delta r\) [1; 2; 3; 4; 5; 6]. The exponent \(\alpha\) can in some cases be experimentally tuned -- e.g. through off-resonant coupling of internal levels of trapped ions to motional degrees of freedom, or trapping neutral atoms in photonic modes of a cavity. Additionally, the effective interaction range can be efficiently tuned in systems of Rydberg atoms in one- and two-dimensional arrays [7; 8] or by Rydberg dressing [9; 10]. The versatility of the aforementioned AMO platforms spurred intense theoretical and experimental explorations. These studies established that long-range interactions provide clear routes to circumventing the constraints imposed by either conventional thermalization [11] or conventional bounds on information spreading [12]. Accordingly, the prominent collective character of systems with long-range interactions can lead to a kaleidoscope of novel phenomena which cannot be observed in systems with local interactions. Major examples include: the observation of "super-luminal" correlation and entanglement spreading [3; 13; 14] (to be contrasted with the conventional light-cone behavior in the presence of local interactions [15]); dynamical phase transitions [16; 17; 18; 19; 20; 21; 22]; exotic defect scaling [23; 24]; self-organized criticality [25]; time-translation symmetry breaking [26; 27; 28]; quantum many-body chaos [29; 30; 31]. As such, control of long-range interacting assemblies stands out as a promising ingredient for future quantum-technological applications, including quantum metrology and quantum computation.

While this great diversity of platforms and research directions largely contributes to generating widespread excitement about long-range interactions, it has at the same time certain drawbacks. The backgrounds and interests of the numerous research groups active in this area span a very wide spectrum. On one hand, experimental interpretations are often based on a few-body, mean-field-like way of thinking [32; 33; 34; 35]. Albeit remarkably simple and powerful, this perspective may fail to fully capture the complexity of non-equilibrium phenomena with long-range interactions. On the other hand, theoretical investigations have often prioritized mathematically rigorous efforts aimed at characterizing the departure from known properties of locally-interacting systems [36, 37, 38, 39, 40, 41, 42, 43, 44]. Albeit sometimes in synergy with experiments [45], this perspective may obscure the construction of an intuitive physical picture applicable to the broad range of out-of-equilibrium phenomena mentioned above. Despite recent attempts to recompose the corresponding mosaic in equilibrium [46], this complementarity of perspectives on similar phenomena still struggles to come together and cement a unified research field and community. As a consequence, the current understanding of the out-of-equilibrium dynamics of long-range interacting quantum many-body systems still seems to lack a systematic organization comparable to that of locally-interacting quantum [11] or long-range interacting classical [47] systems.
The purpose of this Report is to provide a systematic and intuitive theoretical approach to non-equilibrium phenomena arising from non-random long-range interactions in quantum many-body systems. Our effort aims at bridging the various complementary views in this wide research area and creating a unifying framework. We will review a selection of significant findings in the field, emphasizing how they can be encompassed within a common basic theoretical language and formalism. The approach reviewed in this Report is suited to bridge the simple mean-field description -- which applies to infinite-range interactions, i.e. \(\alpha=0\) -- to the description of systems with quasi-local interactions, i.e. \(\alpha\gg d\), which allow a well-defined notion of locality in spite of non-local interaction tails. The _strong long-range regime_ in between will be the focus of this Report; we will frequently emphasize the _leitmotiv_ that _the physics in this regime interpolates between conventional few-body and conventional many-body physics_.

The reach of this unifying framework will be illustrated using prototypical models of interacting quantum spin lattices. This choice not only serves the purpose of directly relating our results to paradigmatic locally-interacting systems [48, 49], but it also allows us to make direct connections with the major AMO experimental platforms recalled above.

This Report is organized as follows:

* Our journey will start in Sec. 2 with a review of _equilibrium properties_ of ferromagnetic quantum spin systems exemplified by a variable-range quantum XY model (Sec. 2.1), including a discussion of the equilibrium phase diagram upon varying parameters and interaction range via \(\alpha\) (Sec. 2.2) and a critical examination of the mean-field limit (Sec. 2.3). Hence, in Sec. 2.4 we will review the low-energy description in terms of bosonic excitations (spin waves) across the phase diagram, with emphasis on spectral properties arising from a long interaction range. Finally, in Sec. 2.5 we will discuss spectral properties beyond linear spin-wave theory.

* The low-energy description reviewed in Sec. 2 can be used to investigate near-equilibrium dynamics. This setup allows us to study the peculiar properties of the spatial propagation of quantum correlations in the presence of long-range interactions [50, 51] as well as their unusual equilibration dynamics [52], reviewed in Secs. 3.1 and 3.2, respectively. Both these phenomena can be studied in quantum quenches lying within the supercritical phase and, therefore, only represent a small departure from equilibrium. More surprisingly, we are going to show that the low-energy description is also capable of addressing dynamical scaling phenomena arising after quenches across the critical point. This is the case of the universal defect formation following a quasi-static sweep across the quantum critical point [53, 54] (see Sec. 3.3) and of the rise of dynamical quantum phase transitions [55], which we will treat in Sec. 3.4 (the latter, however, will require modifying the simple low-energy description employed before).

* Section 4 is devoted to the study of _dynamics far away from equilibrium_, induced by _quantum quenches_. We will first consider in Sec. 4.1 the fully-connected model with all-to-all uniform interactions, and examine the simplest instances of dynamical phenomena in this limit, which can be understood in terms of few-body semiclassical dynamics.
For finite-range interactions, however, the motion of semiclassical collective degrees of freedom is coupled to many quantum-fluctuation modes with various wavelengths. In Sec. 4.2 we review a systematic approach to the resulting complex many-body problem, originally developed in Refs. [56; 57]. As a first implication stemming from this approach, we will review lower bounds on thermalization time scales associated with long-range interactions, establishing the genuinely non-equilibrium nature of dynamical phenomena in these systems. Hence, we will examine the impact of many-body quantum fluctuations on dynamical criticality and quantum information spreading far away from equilibrium upon tuning the interaction range. Throughout, we will highlight the role of long-range interactions in generating novel phenomena.

* In Sec. 5 we will employ the methodology of Sec. 4 to describe coherent dynamics subject to _periodic driving_. Here we will review how long-range interactions allow one to stabilize genuinely non-equilibrium phases, without an equilibrium counterpart, in low-dimensional quantum systems of the kind routinely realized in AMO experiments. This will include phases that may be viewed as quantum many-body realizations of the celebrated Kapitza pendulum (Sec. 5.1) as well as discrete-time crystals, which spontaneously break time-translation symmetry (Sec. 5.2).

* Finally, in the conclusive Section, we spell out the topics which are _not_ covered in this Report: from effects of inhomogeneities, to frustrated, random, or noisy interactions, to dissipative and monitored dynamics.

Throughout the presentation, our goal is to provide both physical intuition and systematic theoretical understanding of experimentally relevant phenomena. We kept the style of the Report on the pedagogical side, as we hope this work will also be useful to inexperienced readers who only possess a ground knowledge of quantum many-body theory and are interested in taking their first dive into the realm of quantum dynamics in the presence of long-range interactions.

## 2 Equilibrium properties of long-range interacting quantum spin systems

In this Section we summarize and discuss basic _equilibrium_ properties of quantum spin lattices with variable interaction range, which will come in useful in the rest of this Report. For definiteness we will focus on a class of ferromagnetic XY quantum spin models, introduced in Sec. 2.1, and review its equilibrium phase diagram in Sec. 2.2. We will then work out its low-energy description in terms of bosonic excitations ("spin waves") both in the fully-connected limit (Sec. 2.3) and with finite-range interactions (Sec. 2.4), with emphasis on the peculiar features of the quasiparticle spectrum such as discreteness [58; 52], divergent group velocity [59; 60; 61; 50], and dressing effects. Finally, in Sec. 2.5 we will discuss finer low-energy properties beyond spin-wave theory, including domain-wall (de)confinement [62; 63]. In this Section we will keep the model parameters fully general. In the rest of the Report we will frequently restrict the model for simplicity, but all the results can always be straightforwardly extended, drawing on the general setup introduced here and in Sec. 4.2.1 below.

### Variable-range quantum XY model

Throughout this work we will consider a prototypical model implemented in AMO platforms, a quantum XY spin model with tunable interaction range.
We take a \(d\)-dimensional square lattice of \(N=L^{d}\) quantum spins-\(s\) described by a Hamiltonian of the form

\[\hat{H}_{\alpha}=-\sum_{\mathbf{r},\mathbf{r}^{\prime}}J_{\mathbf{r},\mathbf{r}^{\prime}}(\alpha)\bigg{(}\frac{1+\gamma}{2}\hat{\sigma}_{\mathbf{r}}^{x}\hat{\sigma}_{\mathbf{r}^{\prime}}^{x}+\frac{1-\gamma}{2}\hat{\sigma}_{\mathbf{r}}^{y}\hat{\sigma}_{\mathbf{r}^{\prime}}^{y}\bigg{)}-h\sum_{\mathbf{r}}\hat{\sigma}_{\mathbf{r}}^{z}\,. \tag{1}\]

In this equation \(\hat{\sigma}_{\mathbf{r}}^{\mu}=\hat{s}_{\mathbf{r}}^{\mu}/s\) are operators corresponding to the normalized spin components in the \(\mu=x,y,z\) direction, acting on site \(\mathbf{r}=(r_{1},\dots,r_{d})\) of the lattice, \(r_{1},\dots,r_{d}=1,\dots,L\). This represents a generalization of the standard spin-\(1/2\) case, where \(\hat{\sigma}_{\mathbf{r}}^{\mu}\) reduce to the standard Pauli matrices. Such a normalization allows us to keep track of the role of quantum fluctuations, which are suppressed in the classical limit \(s\to\infty\). The quantity \(\gamma\) parametrizes the XY anisotropy. In this Report we will consider anisotropic spin systems, i.e. \(\gamma\neq 0\); for definiteness we assume \(\gamma>0\) (negative values are equivalent upon rotating the spins around the \(z\)-axis by \(\pi/2\)), and we will frequently set \(\gamma=1\) (quantum Ising model). We will occasionally comment on the isotropic limit \(\gamma\to 0\) when relevant. The quantity \(h\) represents the transverse magnetic field strength, which we assume \(h\geq 0\) (negative values are equivalent upon rotating the spins around the \(x\)-axis by \(\pi\)). Throughout this report we will always use units such that Planck's constant is \(\hbar\equiv 1\).

The ferromagnetic couplings \(J_{\mathbf{r},\mathbf{r}^{\prime}}(\alpha)\equiv J_{\Delta r}(\alpha)\) depend on the distance \(\Delta r=\|\mathbf{r}-\mathbf{r}^{\prime}\|\) between the two involved sites, and we will be interested in tuning their spatial range through the parameter \(\alpha\). Specifically, we consider interactions algebraically decaying with the distance,1

Footnote 1: Note that for spins-\(1/2\) the terms \(\mathbf{r}=\mathbf{r}^{\prime}\) produce an inconsequential additive constant \(E=\sum_{\mathbf{r}}J_{\mathbf{r},\mathbf{r}}/2\), as Pauli matrices square to \(1\). Diagonal terms may be important for higher-spin Hamiltonians. As a rule we will set \(J_{\mathbf{r},\mathbf{r}}=0\). We will occasionally comment on interesting phenomena associated with spin self-interactions further below.

\[J_{\mathbf{r},\mathbf{r}^{\prime}}(\alpha)=\frac{J}{\|\mathbf{r}-\mathbf{r}^{\prime}\|^{\alpha}}. \tag{2}\]

To impose periodic boundary conditions, various equivalent choices of distance function are possible; we take \(\|\mathbf{r}-\mathbf{r}^{\prime}\|\equiv\sqrt{\sum_{\mu=1}^{d}[\min(|r_{\mu}-r_{\mu}^{\prime}|,L-|r_{\mu}-r_{\mu}^{\prime}|)]^{2}}\). The overall constant \(J>0\) is chosen in such a way to fairly compare the models with different \(\alpha\), i.e. to make the mean-field interaction strength \(J_{0}\) independent of \(\alpha\):

\[J=\frac{J_{0}}{2\mathcal{N}_{\alpha,L}},\qquad\mathcal{N}_{\alpha,L}\equiv\frac{1}{2}\sum_{\mathbf{r}\,(\neq\mathbf{r}^{\prime})}\frac{1}{\|\mathbf{r}-\mathbf{r}^{\prime}\|^{\alpha}}\,. \tag{3}\]

This prescription -- known as Kac normalization [64] -- is necessary to make the thermodynamic limit well defined for \(\alpha\leq d\), where the divergence \(\mathcal{N}_{\alpha,L}\sim L^{d-\alpha}\) with the system size ensures that energy scales extensively. For \(\alpha>d\) the Kac rescaling factor saturates to a finite value in the thermodynamic limit, \(\mathcal{N}_{\alpha,L}\to\mathcal{N}_{\alpha}\).
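As a concrete illustration of Eqs. (2)-(3), here is a minimal sketch that constructs the Kac-normalized coupling matrix \(J_{\mathbf{r},\mathbf{r}^{\prime}}(\alpha)\) for a one-dimensional periodic chain; the function name and parameter values are our own illustrative choices.

```python
import numpy as np

def coupling_matrix(L, alpha, J0=1.0):
    """Kac-normalized couplings J_{r,r'}(alpha) on a periodic chain (d = 1)."""
    r = np.arange(L)
    # Periodic distance ||r - r'|| = min(|r - r'|, L - |r - r'|).
    diff = np.abs(r[:, None] - r[None, :])
    dist = np.minimum(diff, L - diff)
    J = np.zeros((L, L))
    off = dist > 0                        # J_{r,r} = 0 by convention
    J[off] = 1.0 / dist[off] ** alpha
    # Kac factor: N_alpha = (1/2) sum_{r (!= r')} 1 / ||r - r'||^alpha.
    N_alpha = 0.5 * J[0].sum()
    return J0 / (2 * N_alpha) * J

J = coupling_matrix(L=100, alpha=0.5)
# With this normalization, the mean-field strength sum_{r'} J_{r,r'}
# equals J0 for every alpha, even in the strong long-range regime:
print(J[0].sum())   # ~ 1.0
```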
**Summary**: We consider a \(d\)-dimensional quantum XY spin model with non-random ferromagnetic interactions that decay algebraically with exponent \(\alpha\). The interaction strength is rescaled to make the mean field independent of \(\alpha\).

### Equilibrium phase diagram in a nutshell

Quite generally, for ferromagnetic interactions \(J_{\mathbf{r},\mathbf{r}^{\prime}}\geq 0\) the system has an equilibrium zero-temperature phase transition for small enough \(|h|\), associated with the spontaneous breaking of its \(\mathbb{Z}_{2}\) spin-inversion symmetry of the \(x\)-component. The longitudinal magnetization \(\langle\hat{\sigma}_{\mathbf{r}}^{x}\rangle\) undergoes an abrupt change from \(\langle\hat{\sigma}_{\mathbf{r}}^{x}\rangle=0\) in the unique paramagnetic ground state for \(|h|>h_{\rm cr}\) to \(\langle\hat{\sigma}_{\mathbf{r}}^{x}\rangle_{\pm}=\pm m(h)\neq 0\) in the two degenerate ferromagnetic ground states for \(|h|<h_{\rm cr}\). When interactions are local, the universality class of this quantum phase transition is the same as that of the \((d+1)\)-dimensional classical Ising model [48].

The possible emergence of an ordered phase at _finite temperature_ \(T>0\), i.e. of long-range ordered excited states at finite energy density, depends on the dimensionality \(d\) and on the decay exponent \(\alpha\) of the interactions. For strictly finite range, finite-temperature order is stable only for \(d\geq 2\); in this case, the universal properties of the thermal phase transition are the same as those of the corresponding \(d\)-dimensional classical Ising model [48; 65]. Increasing the interaction range, however, enhances the effective lattice connectivity, somewhat similarly to the effect of increasing the lattice dimensionality [66; 67; 68]. While frustration prevents antiferromagnetic interactions from creating collective ordering, ferromagnetic interactions do cooperate to suppress the effect of spatial fluctuations, generally resulting in a qualitative enhancement of the system's ability to order as the interaction range increases [69].

The analogy between integer-dimension long-range systems and local systems in lower fractional dimensions has been quantitatively tested in multiple studies in recent years, both in classical [72; 73; 74; 68] and quantum [70; 75] long-range systems. Leading-order perturbation theory results support the exact correspondence between the universal behavior of long-range interacting systems with dimension \(d\) and decay exponent \(\alpha\) and locally-interacting systems with dimension \(d_{\rm eff}=2(d+z)/(\alpha-d)\), where \(z\) is the dynamical critical scaling exponent [75; 76; 77]. Advanced renormalization group studies highlighted deviations from this correspondence, which only occur beyond the leading order and hence remain small [70]. Therefore, it is possible to employ the effective-dimension relation above to get the qualitative shape of the phase diagram: For \(\alpha<\frac{5}{3}d\) the universal scaling behavior is captured by mean-field theory, while for \(\frac{5}{3}d<\alpha<\alpha_{*}\) the system displays correlated critical behavior influenced by the presence of long-range interactions. Finally, for \(\alpha>\alpha_{*}\) the interaction tails become irrelevant and the critical exponents coincide with the ones of the model with local interactions [70]. The location of \(\alpha_{*}\) was subject to multiple controversies, but the result \(\alpha_{*}=d+2-\eta_{\rm sr}\) [68; 72], with \(\eta_{\rm sr}\) the anomalous dimension of the model with local interactions, appears now to be established [70] in agreement with extensive numerical simulations in classical models [73]. Due to the dependence of \(\alpha_{*}\) on the universal equilibrium properties of the local model, this boundary _does_ depend on the particular model. For the model considered in this Report, the phase diagram of the equilibrium critical problem is displayed in Fig. 1: in Fig. 1(a) we report the universality properties in different dimensions [70], while in Fig. 1(b) we show the finite-temperature phase diagram of the one-dimensional model [71].

Figure 1: **Phase diagram of long-range interacting quantum Ising model.** (a) Mean-field theory properly describes the universal scaling behavior in the cyan shaded region. The white background with the red LR label represents the region in the phase diagram where the universal behavior is correlated and influenced by the presence of relevant long-range couplings. The boundary of the LR region is simply \(\alpha=d+2\) in mean-field theory (vertical shaded line), but gets displaced to \(\alpha_{*}=d+2-\eta_{\rm sr}\) by two-loop corrections (red line). Finally, the white area on the right of the red line signals the region of irrelevant long-range couplings where the universal behavior is controlled by the local part of the interactions. Figure reproduced from Ref. [70]. (b) Finite-temperature phase diagram of the one-dimensional Ising model, in the transverse field \(h\), interaction exponent \(\alpha\), and temperature \(T\) space, in units of \(J\) (without Kac normalization). For \(\alpha<1\), the system is in the mean-field regime (striped region), for \(\alpha<5/3\) the mean-field universality is exact, while for \(\alpha>3\) the model is in the same universality class as the short-range Ising model (dark grey). Figure adapted from Ref. [71].

Decreasing the decay exponent below the dimension of the system, i.e. \(\alpha<d\), does not have any major implications for the equilibrium critical scaling, but it does modify the thermodynamic properties. In the regime \(\alpha<d\), the system becomes non-additive and the boundary contribution to thermodynamic quantities cannot be neglected, leading to the violation of several established equilibrium properties, including the equivalence of thermodynamic ensembles [78]. Given this scenario, long-range interacting systems can be classified in the following way [46]: for \(\alpha<d\) they are in the so-called _strong long-range regime_, for \(d<\alpha<\alpha_{*}\) they are in the _weak long-range regime_, while for \(\alpha>\alpha_{*}\) one retrieves short-range properties.2 In this Report we will mainly focus on quantum spin systems around the strong long-range regime, i.e. \(0\leq\alpha\lesssim d\), and we aim at providing a cohesive picture for their distinctive dynamics.

Footnote 2: We warn the readers that the nomenclature we adopt here is far from being universally established in the vast literature on long-range interactions.
In the light of the above discussion, the role of the spatial dimension is diminished in systems with variable-range interactions, as the relevant parameter in equilibrium is the effective dimension \(d_{\rm eff}\). Yet, the one-dimensional case is particularly interesting: In the presence of local interactions, \(d=1\) systems cannot exhibit ordering at finite temperature, because isolated topological defects of a ferromagnetically ordered pattern (domain-wall-like excitations) cost a finite energy [69]; a longer range of ferromagnetic interactions induces _binding_ between domain walls, and hence a tendency to stabilize ferromagnetic order. The effect of long-range interactions is thus most dramatic for \(d=1\): The algebraically decaying interactions in Eq. (1) allow one to stabilize ferromagnetic order in the thermodynamic limit upon decreasing \(\alpha\) below 2. This happens as the interaction potential between two domain walls becomes _confining_ at large distances, such that free isolated domain walls cost an infinite energy.

\begin{tabular}{|p{341.4pt}|} \hline **Summary**: In equilibrium, the universal critical properties with \(J/(\Delta r)^{\alpha}\)-interactions are close to those of the locally interacting version of the system (\(\alpha=\infty\)) in a higher effective dimension \(d_{\rm eff}=2(d+z)/(\alpha-d)\). For \(d=1\) and \(\gamma\neq 0\), finite-temperature ordering becomes possible for \(\alpha\leq 2\). \\ \hline \end{tabular}

### Low-energy spectrum with infinite-range interactions (\(\alpha=0\))

Let us start by discussing the exactly solvable infinite-range limit \(\alpha\to 0\). This will be the starting point to analyze the behavior for \(\alpha>0\).

#### 2.3.1 Mean-field theory as an exact classical limit

Increasing the range of interactions \(\alpha\to 0\) weakens spatial fluctuations, leading the system toward its mean-field limit -- similarly to the effect of increasing the system dimensionality \(d\to\infty\). This can be seen explicitly by rewriting the Hamiltonian (1) in terms of the collective spin components

\[\hat{S}^{\mu}=\sum_{i=1}^{N}\hat{s}^{\mu}_{i}\,,\quad\mu=x,y,z\,, \tag{4}\]

which gives the expression

\[\hat{H}_{\alpha=0}=-\frac{J_{0}}{N\,s^{2}}\,\bigg{(}\frac{1+\gamma}{2}(\hat{S}^{x})^{2}+\frac{1-\gamma}{2}(\hat{S}^{y})^{2}\bigg{)}-\frac{h}{s}\,\hat{S}^{z}\,, \tag{5}\]

where we used \(J=J_{0}/(N-1)\approx J_{0}/N\). This expression highlights that the \(\alpha=0\) Hamiltonian is a function of a single degree of freedom: the collective spin. All other non-collective spin modes are frozen and do not participate in dynamics. The collective spin magnitude \(\hat{S}^{2}=(\hat{S}^{x})^{2}+(\hat{S}^{y})^{2}+(\hat{S}^{z})^{2}=S(S+1)\) with \(S=Ns,Ns-1,Ns-2,\ldots,0\) or \(1/2\) is conserved,

\[\big{[}\hat{S}^{2},\hat{H}_{\alpha=0}\big{]}=0. \tag{6}\]

The Hilbert space sector \(\mathcal{H}_{\hat{S}^{2}=S(S+1)}\) associated with the quantum number \(S\) contains \(g_{N,S}\) copies of a spin-\(S\) representation of SU(2), where \(g_{N,S}=\dim\mathcal{H}_{\hat{S}^{z}=S}-\dim\mathcal{H}_{\hat{S}^{z}=S+1}\). This combinatorial number depends implicitly on \(s\); in the simplest case \(s=1/2\) we have

\[g_{N,S}=\binom{N}{N/2-S}-\binom{N}{N/2-S-1}=\frac{2S+1}{N+1}\binom{N+1}{N/2-S}. \tag{7}\]

In each such \((2S+1)\)-dimensional space, the Hamiltonian acts as Eq. (5), thought of as the Hamiltonian of a single spin of size \(S\).
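As a quick numerical illustration of this point, the sketch below builds the collective spin matrices for a single spin of size \(S=Ns\) and diagonalizes the \(\alpha=0\) Hamiltonian of Eq. (5) in the maximal-spin sector (\(s=1/2\), \(\rho=1\)); the parameter values are placeholders.

```python
import numpy as np

def collective_spin_ops(S):
    """Spin-S matrices S^x, S^y, S^z in the |S, m> basis (m = S, ..., -S)."""
    m = np.arange(S, -S - 1, -1)
    sz = np.diag(m)
    # <S, m+1 | S^+ | S, m> = sqrt(S(S+1) - m(m+1)) on the superdiagonal.
    sp = np.diag(np.sqrt(S * (S + 1) - m[1:] * (m[1:] + 1)), k=1)
    sx = (sp + sp.T) / 2
    sy = (sp - sp.T) / 2j
    return sx, sy, sz

# Hamiltonian (5) in the maximal sector S = N s.
N, s, gamma, J0, h = 20, 0.5, 1.0, 1.0, 0.3   # placeholder parameters
S = N * s
sx, sy, sz = collective_spin_ops(S)
H = (-J0 / (N * s**2) * ((1 + gamma) / 2 * sx @ sx
                         + (1 - gamma) / 2 * sy @ sy)
     - (h / s) * sz)
E = np.linalg.eigvalsh(H)
print("ground-state energy density:", E[0] / N)
```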
For all states with large \(S\) growing with \(N\), the thermodynamic limit \(N\to\infty\) is equivalent to a semiclassical limit for the collective spin: The rescaled spin satisfies commutation relations of the form \[\bigg[\frac{\hat{S}^{\mu}}{S},\frac{\hat{S}^{\nu}}{S}\bigg]=\frac{i}{S}\ \epsilon_{\mu\nu\rho}\ \frac{\hat{S}^{\rho}}{S}\ ; \tag{8}\] and the Hamiltonian can be rewritten in terms of the rescaled spin as \[\hat{H}_{\alpha=0}=(S/s)\bigg\{-J_{0}\rho\,\bigg[\frac{1+\gamma}{2}\bigg(\frac{\hat{S}^{x}}{S}\bigg)^{2}+\frac{1-\gamma}{2}\bigg(\frac{\hat{S}^{y}}{S}\bigg)^{2}\bigg]-h\,\frac{\hat{S}^{z}}{S}\bigg\}\,, \tag{9}\] where \(\rho\equiv S/(Ns)\) is a constant depending on the collective spin sector, with \(0\leq\rho\leq 1\). Thus, the system manifestly has an effective Planck constant \(\hbar_{\text{eff}}\equiv 1/S\). Keeping in mind that a meaningful thermodynamic limit requires taking \(J_{0}\rho\) as a constant independent of \(N\), we conclude that the limit \(N\to\infty\) realizes a classical limit with a continuous spin \[\frac{\langle\hat{\vec{S}}\rangle}{S}\rightsquigarrow\vec{\mathcal{S}} \tag{10}\] of (conserved) length 1 governed by the classical Hamiltonian \(\hat{H}_{\alpha=0}/(S/s)\rightsquigarrow\mathcal{H}_{\text{cl}}\), \[\mathcal{H}_{\text{cl}}(\vec{\mathcal{S}})=-\rho J_{0}\,\bigg(\frac{1+\gamma}{2}(\mathcal{S}^{x})^{2}+\frac{1-\gamma}{2}(\mathcal{S}^{y})^{2}\bigg)-h\mathcal{S}^{z}\,. \tag{11}\] Canonical variables can be taken as, e.g., \(\mathcal{S}^{z}=\cos\theta\) and \(\arctan_{2}(\mathcal{S}^{x},\mathcal{S}^{y})=\phi\). The absolute ground state minimizes energy across all sectors; for ferromagnetic interactions the ground state is realized for maximal collective spin polarization, \(S=Ns\), i.e. for \(\rho=1\). A rigorous implication of the classical limit [79] is that, as \(N\to\infty\), the ground state expectation values \(\left\langle\vec{S}\right\rangle_{\rm GS}/S\) of the collective spin components converge to the minimum point \(\vec{\mathcal{S}^{*}}\) of the classical Hamiltonian \(\mathcal{H}_{\rm cl}\) on the unit sphere. For later purpose it is convenient to define a rotated reference frame \((\mathbf{X},\mathbf{Y},\mathbf{Z})\) adapted to the ground state polarization, i.e., such that \(\mathbf{Z}\equiv\vec{\mathcal{S}^{*}}\). Using spherical coordinates we can parametrize \[\mathbf{X}\equiv\begin{pmatrix}\cos\theta\cos\phi\\ \cos\theta\sin\phi\\ -\sin\theta\end{pmatrix},\quad\mathbf{Y}\equiv\begin{pmatrix}-\sin\phi\\ \cos\phi\\ 0\end{pmatrix},\quad\mathbf{Z}\equiv\begin{pmatrix}\sin\theta\cos\phi\\ \sin\theta\sin\phi\\ \cos\theta\end{pmatrix}. \tag{12}\] Crucially, the quantum uncertainty associated with spin fluctuations in the transverse directions \(\mathbf{X}\) and \(\mathbf{Y}\) spans a phase-space area of order \(h_{\rm eff}=2\pi/S\), which is vanishingly small as \(N\to\infty\).

**Summary**: The fully-connected Hamiltonian with \(\alpha=0\) is a function of collective spin variables only. The thermodynamic limit realizes a semiclassical limit for the collective spin with an effective Planck constant \(h_{\rm eff}\propto 1/N\).

The discussion above is valid for generic infinite-range Hamiltonians. For our model in Eq.
(5), minimization of \(\mathcal{H}_{\rm cl}\) on the unit sphere gives the ground-state polarization \(\vec{\mathcal{S}^{*}}=(\pm\sin\theta^{*},0,\cos\theta^{*})\), with \[\theta^{*}=\begin{cases}0&\text{ for }h>h_{\rm cr}\equiv\rho J_{0}(1+\gamma)\\ \arccos\left(\frac{h}{\rho J_{0}(1+\gamma)}\right)&\text{ for }0\leq h\leq h_{\rm cr}\end{cases} \tag{13}\] The ferromagnetic phase transition at \(h=h_{\rm cr}\) is associated with the bifurcation of the minimum. The ferromagnetic energy \(\mathcal{E}\equiv\mathcal{H}_{\rm cl}(\vec{\mathcal{S}^{*}})=-\rho J_{0}(1+\gamma)/2-h^{2}/[2\rho J_{0}(1+\gamma)]\) is minimized for \(\rho=1\), in agreement with the claim anticipated above and with intuition. See plots in Figs. 2(a) and 2(b).

#### 2.3.2 Collective quantum fluctuations and excitations

It is important to stress that, in spite of the exact classical limit, the ground-state wavefunction is _not_ a product state of \(N\) spins pointing in the direction \(\vec{\mathcal{S}^{*}}\): Collective interactions generate global (_multipartite_) quantum entanglement among all spins. Such quantum correlations stem from quantum fluctuations of the collective spin around the average direction \(\vec{\mathcal{S}^{*}}\). Such effects can be understood via semiclassical analysis to leading order in \(h_{\rm eff}\). Let us first compute the low-energy spectrum of the infinite-range Hamiltonian (5) thought of as a single-spin Hamiltonian, with \(S\) growing with \(N\). The collective spin moves in an energy landscape whose depth grows with \(N\). The ground state wavefunction is localized around its global minimum (or minima). Expansion of \(\mathcal{H}_{\rm cl}\) around the minimum gives access to the ground-state fluctuations and low-lying harmonic excitations. This can be conveniently done via a Holstein-Primakoff transformation [80]: Recalling \(\hat{S}^{\pm}=\hat{S}^{x}\pm i\hat{S}^{y}\), \[\begin{cases}\hat{S}^{-}=\hat{b}^{\dagger}\sqrt{2S-\hat{b}^{\dagger}\hat{b}}\,,\\ \hat{S}^{+}=\sqrt{2S-\hat{b}^{\dagger}\hat{b}}\ \ \hat{b}\,,\\ \hat{S}^{z}=S-\hat{b}^{\dagger}\hat{b}\,.\end{cases} \tag{14}\] These equations represent an exact embedding of a quantum spin into a bosonic mode. This procedure is simplest in the paramagnetic phase. For large \(h\gg h_{\rm cr}\) the ground state approaches the uncorrelated state fully polarized along \(z\), and the elementary excitations approach the tower of spin-lowering excitations. For finite \(h>h_{\rm cr}\) the collective spin fluctuates along the transverse directions -- more prominently along the "soft" direction \(x\) and more weakly along the "stiff" direction \(y\).4 Such fluctuations can be described by mapping \(\hat{S}^{x}\) and \(\hat{S}^{y}\) to canonical bosonic operators via Eq. (14), Footnote 4: This point will be further discussed at length in Sec. 4.1.3. \[\begin{cases}\hat{S}^{x}\approx\sqrt{S}\ \hat{q}\,,\\ \hat{S}^{y}\approx\sqrt{S}\ \hat{p}\,,\\ \hat{S}^{z}=S-\hat{n}_{0}=S-\frac{\hat{q}^{2}+\hat{p}^{2}-1}{2}\,.\end{cases} \tag{15}\] Using \([\hat{q},\hat{p}]=i\) one can check that for large \(S\) the spin commutation relations are satisfied by the right-hand sides of Eqs. (15) to leading order. In a classical phase-space description, the approximation given by the above truncated Holstein-Primakoff transformation corresponds to replacing the surface of the sphere by its tangent plane at the North pole. Using Eq.
(15), the Hamiltonian (5) can be approximated by neglecting terms of order \(1/S\), and hence easily diagonalized. We find: \[\hat{H}_{\alpha=0} \approx-N\rho h+\frac{h}{s}\frac{\hat{q}^{2}+\hat{p}^{2}-1}{2}-\frac{\rho J_{0}}{s}\bigg(\frac{1+\gamma}{2}\hat{q}^{2}+\frac{1-\gamma}{2}\hat{p}^{2}\bigg) \tag{16a}\] \[=-N\rho h+\frac{1}{s}\bigg(\frac{\omega_{>}-\omega_{>}^{(0)}}{2}\bigg)+\frac{1}{s}\omega_{>}\ \hat{n}\,, \tag{16b}\] where \[\omega_{>}=\sqrt{[h-\rho J_{0}(1-\gamma)][h-\rho J_{0}(1+\gamma)]},\qquad\omega_{>}^{(0)}=h. \tag{17}\] The first term in the last line of Eq. (16b) represents the classical energy, and the second one is the variation of the zero-point energy due to quantum fluctuations around the classical configuration. In the last term, \(\hat{n}\) counts the harmonic excitation quanta of energy \(\omega_{>}\) (not to be confused with the "bare" spin-lowering excitation quanta \(\hat{n}_{0}\)). For \(h>h_{\rm cr}\), the number \(\langle\hat{n}_{0}\rangle=\langle\hat{q}^{2}+\hat{p}^{2}-1\rangle/2\) of bare collective spin excitations in the ground state is finite, and it diverges as \(h\searrow h_{\rm cr}\), signaling a critical phenomenon (see Fig. 2d). Indeed the energy gap \(\omega_{>}/s\) closes at \(h=h_{\rm cr}\), with a mean-field critical exponent \(1/2\) (see Fig. 2c). For \(h<h_{\rm cr}\) the frequency \(\omega_{>}\) becomes imaginary, which signals instability of the paramagnetic state. In order to determine the ground state and the elementary excitations in the broken-symmetry phase, let us start from some general considerations. For \(h<h_{\rm cr}\) the classical landscape presents two symmetric minima, as discussed above. Below the energy \(E_{\rm dyn}\equiv\mathcal{H}_{\rm cl}(\theta=0)\) of the classical phase-space separatrix, two symmetric families of classical trajectories fill the two energy wells. In the thermodynamic limit, this corresponds to two towers of pairwise degenerate energy levels, associated with wavefunctions localized in the two wells. At finite size \(N\), however, the energy eigenstates below the critical energy are nondegenerate and alternately even and odd with respect to the \(\mathbb{Z}_{2}\) symmetry of the Hamiltonian. For large \(N\), they approach even and odd superpositions of the localized wavefunctions. The energy splitting between each pair of quasidegenerate eigenstates is proportional to the quantum tunneling amplitude across the energy barrier, which is exponentially small in the height of the barrier [81], and hence exponentially small in \(N\). Accordingly, tunneling between the two broken-symmetry sectors is practically suppressed even for moderate system sizes, and it is extremely fragile to tiny symmetry-breaking perturbations. For these reasons it makes sense to consider the two towers of symmetry-breaking states independently of each other. To compute the spectrum explicitly, it is convenient to introduce a procedure which will lend itself to powerful generalizations in the rest of this Report. We rewrite the components of the collective spin in a frame \((\mathbf{X},\mathbf{Y},\mathbf{Z})\) rotated by an angle \(\theta\) in the \(xz\)-plane, cf. Eq. (12), i.e., \[\hat{S}^{x}=\cos\theta\,\hat{S}^{X}+\sin\theta\,\hat{S}^{Z},\qquad\hat{S}^{y}=\hat{S}^{Y},\qquad\hat{S}^{z}=-\sin\theta\,\hat{S}^{X}+\cos\theta\,\hat{S}^{Z}\,.
\tag{18}\] Performing a Holstein-Primakoff transformation with rotated quantization axis \(\mathbf{Z}\) and neglecting terms of order \(1/S\), \[\begin{cases}\hat{S}^{X}\approx\sqrt{S}\ \hat{q},\\ \hat{S}^{Y}\approx\sqrt{S}\ \hat{p},\\ \hat{S}^{Z}=S-\hat{n}_{0}=S-\frac{\hat{q}^{2}+\hat{p}^{2}-1}{2},\end{cases} \tag{19}\] we get \[\hat{H}_{\alpha=0} \approx -N\left(h\rho\cos\theta+J_{0}\frac{1+\gamma}{2}\rho^{2}\sin^{2}\theta\right)-\sqrt{\frac{\rho N}{s}}\sin\theta\Big(h-\rho J_{0}(1+\gamma)\cos\theta\Big)\hat{q}+\frac{1}{s}\Big[\left(\rho J_{0}(1+\gamma)\sin^{2}\theta+h\cos\theta\right)\frac{\hat{q}^{2}+\hat{p}^{2}-1}{2}-\rho J_{0}(1+\gamma)\cos^{2}\theta\,\frac{\hat{q}^{2}}{2}-\rho J_{0}(1-\gamma)\,\frac{\hat{p}^{2}}{2}\Big]. \tag{20}\] In order for the bosonic variables to describe quantum fluctuations it is necessary to align the frame with the classical configuration, in such a way that the linear terms in the second line vanish. This condition leads to \(\theta^{*}\) as in Eq. (13). The resulting quadratic Hamiltonian can then be readily diagonalized: \[\hat{H}_{\alpha=0}\approx-\frac{N}{2}\bigg(\frac{h^{2}}{J_{0}(1+\gamma)}+\rho^{2}J_{0}(1+\gamma)\bigg)+\frac{1}{s}\bigg(\frac{\omega_{<}-\omega_{<}^{(0)}}{2}\bigg)+\frac{1}{s}\omega_{<}\,\hat{n}, \tag{21}\] where \[\omega_{<}=\sqrt{\left[\rho^{2}J_{0}^{2}(1+\gamma)^{2}-h^{2}\right]\,\frac{2\gamma}{1+\gamma}},\qquad\omega_{<}^{(0)}=\rho J_{0}(1+\gamma)\,. \tag{22}\] Analogously to Eq. (16), the first term of Eq. (21) represents the classical energy, the second one expresses the shift in the zero-point energy due to quantum fluctuations around the classical minimum configuration, while the last one [arising from diagonalization of Eq. (20)] is the energy of the harmonic excitations, with \(n=0,1,2,\dots\).

Figure 2: **Equilibrium properties of the fully-connected quantum Ising model across the phase diagram.** All quantities are computed for the Hamiltonian (5) with \(\gamma=1\), and display singularities at the phase transition \(h=2J_{0}\) with mean-field critical exponents. (The behavior of the corresponding quantities for a general anisotropic XY model \(\gamma\neq 0\) is analogous.) Panel (a): The order parameter is determined by Eq. (13). Panel (b): The ground-state energy density is the minimum of the classical energy (11) on the unit sphere. Panel (c): The dark-blue curve corresponds to \(\omega_{<}\) in Eq. (22) and \(\omega_{>}\) in Eq. (17), respectively below and above the phase transition; the light-blue curve corresponds to \(\omega_{\text{sw}}\), cf. Eq. (24). Panel (d): The number of collective spin excitations is computed by diagonalizing Eqs. (16a) and (20), respectively above and below the phase transition. Panel (e): The bipartite (half-system) entanglement entropy in fully-connected spin models is a function of \(\langle\hat{n}_{0}\rangle\) only, see Sec. 4.1.5 below; in the quantum ferromagnetic phase, the finite-size ground state approaches a symmetric superposition of two symmetry-breaking ground states for large \(N\), which yields an extra bit of entropy; the divergence at criticality is logarithmic in system size, see Refs. [82, 83].

In Fig.
2 we plotted the exact ground state energy density \(\mathcal{E}\) [panel (b)], the energy gap of collective spin excitations \(\omega_{>,<}\) (dark-blue curve) [panel (c)], and the number of "bare" collective spin excitations \(\langle\hat{n}_{0}\rangle\) [panel (d)], of the infinite-range quantum Ising model (\(\gamma=1\)) in the thermodynamic limit \(N\to\infty\), as a function of the ratio \(h/J_{0}\). The results (16) and (21) are asymptotically exact for \(n/N\to 0\), and _fully nonperturbative_ in the Hamiltonian parameters \(h,J_{0},\gamma\). Systematic improvements in powers of \(n/N\) can be worked out with a more refined analysis [84]. This is particularly relevant for understanding the finite-size scaling \(\omega\sim N^{-1/3}\) of the energy gap at criticality \(h=h_{\rm cr}\): see A for an elementary semiclassical derivation. (For completeness, Fig. 2e also reports the ground-state bipartite entanglement entropy across the phase diagram. This quantity can be computed numerically for large \(N\) [82] and compared with analytical calculations in the large-\(N\) limit based on semiclassical fluctuations [83]. This analytical procedure can be deduced as a particular case of the more general discussion on entanglement dynamics in Sec. 4.1.5 below; for this reason, we do not discuss this here.)

**Summary**: The collective spin low-energy spectrum is described by bosonic excitations, obtained by a Holstein-Primakoff expansion around the classical ground state.

#### 2.3.3 "Spin-wave" excitations

The analysis above concerns collective spin quantum fluctuations and excitations within a fixed sector with collective spin length \(S\) -- and we are ultimately interested in the ground state sector with maximal \(S=Ns\). Different families of spin excitations lower the collective spin length to \(S=Ns-n_{\rm sw}\), with \(n_{\rm sw}=0,1,2,\dots\). (For reasons that will become clear below, we will refer to the quantum number \(n_{\rm sw}\) as the _total occupation of spin-wave modes with non-vanishing momenta_.) Their spectrum can also be straightforwardly obtained from semiclassical arguments: Recalling the definition \(\rho=S/(Ns)\) above, we have \[\rho=1-\frac{n_{\rm sw}}{Ns}. \tag{23}\] Substituting into Eqs. (16) and (21) and consistently neglecting terms of higher order in \(1/N\), we obtain the complete spectrum of low-lying excitations above the ground state to leading order in \(n/N\) and \(n_{\rm sw}/N\): \[\begin{split}&\hat{H}_{\alpha=0}\approx-Nh+\frac{\omega_{>}-\omega_{>}^{(0)}}{2s}+\frac{1}{s}\big(\omega_{>}\,\hat{n}+h\,\hat{n}_{\rm sw}\big),\\ &\hat{H}_{\alpha=0}\approx-\frac{N}{2}\bigg(\frac{h^{2}}{(1+\gamma)J_{0}}+J_{0}(1+\gamma)\bigg)+\frac{\omega_{<}-\omega_{<}^{(0)}}{2s}+\frac{1}{s}\big(\omega_{<}\,\hat{n}+(1+\gamma)J_{0}\,\hat{n}_{\rm sw}\big),\end{split} \tag{24}\] valid for \(h>(1+\gamma)J_{0}\) and \(h<(1+\gamma)J_{0}\), respectively. (Here \(\omega\)'s are taken at \(\rho=1\).) In Fig. 2c we additionally reported the "spin-wave" excitation gap \(\omega_{\rm sw}=h\) or \(J_{0}(1+\gamma)\) in the two phases. Note that the Hilbert space sector dimension grows exponentially with \(n_{\rm sw}\) [cf. the exact expression in Eq. (7)]; however, because of permutational invariance, these energy levels are exactly degenerate.
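The semiclassical results above can be tested directly: in the maximal sector \(S=Ns\) (\(\rho=1\)), Eq. (5) is just a \((2S+1)\times(2S+1)\) matrix, and its lowest gap should approach \(\omega_{>}/s\) of Eqs. (16)-(17) as \(N\) grows. A sketch of such a check (ours; \(s=1/2\), \(\gamma=1\), paramagnetic side):

```python
import numpy as np

def collective_gap(N, h, J0, gamma=1.0, s=0.5):
    """Lowest gap of Eq. (5) in the maximal sector S = N*s (rho = 1)."""
    S = N * s
    dim = int(round(2 * S)) + 1
    m = S - np.arange(dim)                    # S^z eigenvalues S, S-1, ..., -S
    Sz = np.diag(m)
    # <S, m+1 | S^+ | S, m> = sqrt(S(S+1) - m(m+1)) on the superdiagonal
    Sp = np.diag(np.sqrt(S * (S + 1) - m[1:] * (m[1:] + 1)), k=1)
    Sx = (Sp + Sp.T) / 2
    Sy = (Sp - Sp.T) / 2j
    H = (-J0 / (N * s**2) * ((1 + gamma) / 2 * Sx @ Sx
                             + (1 - gamma) / 2 * Sy @ Sy)
         - (h / s) * Sz)
    E = np.linalg.eigvalsh(H)
    return E[1] - E[0]

J0, h, N = 1.0, 3.0, 1000              # paramagnetic side: h > h_cr = 2 J0
print(collective_gap(N, h, J0))        # approaches the semiclassical value below
print(2 * np.sqrt(h * (h - 2 * J0)))   # omega_> / s from Eq. (17), gamma=1, s=1/2
```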
As discussed so far, the properties of infinite-range spin Hamiltonians can be efficiently computed either analytically (via a large-\(N\) asymptotic expansion) or numerically (via exact diagonalization of the single-spin problem for \(S\leq N/2\approx 10^{5}\)). In closing this Subsection it is worth briefly mentioning that the Hamiltonian (5) is equivalent to the Lipkin-Meshkov-Glick model of nuclear physics [85; 86; 87], which is actually Bethe-ansatz solvable [88]; however, this solution is not practically useful for large \(N\), and semiclassical or numerical techniques give much easier access to the relevant information.

**Summary**: "Spin-wave" excitations -- lowering the collective spin length -- remain gapped and dispersionless throughout the phase diagram for \(\alpha=0\).

### 2.4 Finite-range interactions (\(\alpha>0\))

The tendency of long-range interactions to form collective spin alignment and to preserve it even in excited states becomes increasingly prominent as \(\alpha\) is decreased. To quantify this aspect, it is convenient to view a long-range interacting system as a "perturbation" of the infinite-range interacting system with all-to-all interactions (\(\alpha=0\)).

#### 2.4.1 Perturbation to mean-field

This viewpoint can be made explicit by rewriting the Hamiltonian in momentum space. To this aim, we Fourier transform the spin operators \(\hat{s}_{\bf r}^{\mu}\) for \(\mu=x,y,z\): \[\hat{S}_{\bf k}^{\mu}=\sum_{\bf r}e^{i{\bf k}\cdot{\bf r}}\hat{s}_{\bf r}^{\mu}\,, \tag{25}\] with \[{\bf k}\equiv{\bf k}_{\boldsymbol{\ell}}=2\pi{\boldsymbol{\ell}}/L,\quad{\boldsymbol{\ell}}=(\ell_{1},\ldots,\ell_{d}),\quad\ell_{a}=0,\pm 1,\pm 2,\ldots,\pm\lfloor L/2\rfloor \tag{26}\] (for \(L\) even \(\ell_{a}=\pm L/2\) coincide). We also define \(\hat{S}^{\pm}_{\mathbf{k}}=\hat{S}^{x}_{\mathbf{k}}\pm i\hat{S}^{y}_{\mathbf{k}}\). Note that \[\hat{\vec{S}}_{\bf k=0}\equiv\hat{\vec{S}}=\sum_{\bf r}\hat{\vec{s}}_{\bf r} \tag{27}\] is the system's collective spin. It is straightforward to separate the variable-range quantum XY Hamiltonian (1) into the \(\alpha\)-independent collective part -- given by the \(\mathbf{k}=\mathbf{0}\) terms -- and the "perturbation" controlled by \(\alpha\): \[\hat{H}_{\alpha}=\hat{H}_{\alpha=0}+\hat{V}_{\alpha} \tag{28}\] with5 Footnote 5: Note that in this expression the various \(\mathbf{k}\)-modes are _not_ dynamically decoupled, since \(\big[\hat{S}_{\mathbf{k}}^{\mu},\hat{S}_{\mathbf{q}}^{\nu}\big]=i\epsilon^{\mu\nu\lambda}\hat{S}_{\mathbf{k}+\mathbf{q}}^{\lambda}\). \[\hat{V}_{\alpha}=-\frac{J_{0}}{4s^{2}N}\sum_{\mathbf{k}\neq\mathbf{0}}f_{\mathbf{k}}(\alpha)\Big[\big(\hat{S}_{\mathbf{k}}^{+}\hat{S}_{-\mathbf{k}}^{-}+\hat{S}_{\mathbf{k}}^{-}\hat{S}_{-\mathbf{k}}^{+}\big)+\gamma\big(\hat{S}_{\mathbf{k}}^{+}\hat{S}_{-\mathbf{k}}^{+}+\hat{S}_{\mathbf{k}}^{-}\hat{S}_{-\mathbf{k}}^{-}\big)\Big]\,. \tag{29}\] In Eq. (29) we defined the function \[f_{\mathbf{k}}(\alpha)=\sum_{\mathbf{r}\neq\mathbf{0}}\frac{\cos(\mathbf{k}\cdot\mathbf{r})}{\|\mathbf{r}\|^{\alpha}}\bigg{/}\sum_{\mathbf{r}\neq\mathbf{0}}\frac{1}{\|\mathbf{r}\|^{\alpha}} \tag{30}\] which depends implicitly on the dimensionality \(d\) of the lattice. By construction, \(f_{\mathbf{k}=\mathbf{0}}(\alpha)=1\). When \(\alpha\to 0\) the couplings \(f_{\mathbf{k}\neq\mathbf{0}}(\alpha)\) switch off [Eq. (33)], and \(\hat{H}_{\alpha}\) reduces to a Hamiltonian describing a single collective degree of freedom.
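The function \(f_{\mathbf{k}}(\alpha)\), which controls everything that follows, can be tabulated by direct summation of Eq. (30). A minimal sketch for \(d=1\) (ours; we adopt nearest-image distances on a ring, a choice the text leaves implicit):

```python
import numpy as np

def f_k(alpha, L):
    """f_k(alpha) of Eq. (30) on a d=1 ring, nearest-image distances (our choice)."""
    r = np.arange(1, L)
    dist = np.minimum(r, L - r).astype(float)
    w = dist**(-alpha)                    # couplings 1/||r||^alpha
    k = 2 * np.pi * np.arange(L) / L      # k_l = 2*pi*l/L, Eq. (26)
    return (np.cos(np.outer(k, r)) @ w) / w.sum()

L = 500
for alpha in (0.3, 0.7, 1.5, 3.0):
    fk = f_k(alpha, L)
    print(alpha, round(fk[0], 3), [round(x, 3) for x in fk[1:4]])
# f_0 = 1 always; the couplings f_{l>0} die out as alpha -> 0 (squeezing of Fig. 3),
# while at fixed l they remain finite for 0 < alpha < 1, cf. Eq. (31)
```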
The effect of spatially modulated interactions \(\alpha\neq 0\) is then to couple the collective spin to all finite-wavelength modes describing spatially non-trivial spin fluctuations, resulting in complex interacting many-body dynamics. The form of this coupling is dictated by the function \(f_{\mathbf{k}}(\alpha)\), which plays a crucial role in the physics of long-range interacting systems. In Appendix C we derive the following asymptotic estimates: \[f_{\mathbf{k}_{\boldsymbol{\ell}}\neq\mathbf{0}}(\alpha)\equiv f_{\boldsymbol{\ell}\neq\mathbf{0}}(\alpha)\quad\underset{L\to\infty}{\sim}\quad\frac{A(\alpha)}{|\boldsymbol{\ell}|^{d-\alpha}}+\frac{B(\alpha)}{|\boldsymbol{\ell}|^{(d+1)/2}}\quad\text{for }0<\alpha<d\,; \tag{31}\] \[f_{\mathbf{k}\neq\mathbf{0}}(\alpha)\quad\underset{\mathbf{k}\to\mathbf{0}}{\sim}\quad 1-\tilde{A}(\alpha)|\mathbf{k}|^{\alpha-d}-\tilde{B}(\alpha)|\mathbf{k}|^{2}\quad\text{for }\alpha>d\,. \tag{32}\] The sharp changes in behavior are summarized in Fig. 3, where we plot \(f_{k}(\alpha)\) for a range of values of \(\alpha\) and \(d=1\). Its shape shrinks from \(f_{\mathbf{k}}(\alpha\to\infty)=\cos k\) to \[f_{k}(\alpha\to 0)=\delta_{k,0}\,, \tag{33}\] becoming increasingly singular at \(k=0\) as \(\alpha\) is decreased: \[f_{k_{\ell}}(\alpha)\equiv f_{\ell}(\alpha)\sim c(\alpha)|\ell|^{-(1-\alpha)}\qquad\text{for }\alpha<1; \tag{34a}\] \[f_{k}(\alpha)\sim 1-c(\alpha)|k|^{\alpha-1}\qquad\text{for }1<\alpha<3; \tag{34b}\] \[f_{k}(\alpha)\sim 1-c(\alpha)k^{2}\qquad\text{for }\alpha>3. \tag{34c}\] For long-range interactions \(0<\alpha<1\), the values of \(f_{k}(\alpha)\) progressively squeeze onto the vertical axis as \(L\to\infty\); upon zooming near \(k=0\) one finds a sequence of _discrete_ finite values, see the right panel of Fig. 3 [52; 58]. This phenomenon can be physically interpreted as follows: interactions decay so slowly with the spatial distance that the system behaves as a permutationally invariant system over finite length scales, hence observables are unable to resolve finite wavelengths. Only modes with extensive wavelengths \(k_{\ell}\propto 1/L\) may impact the physical properties.

Figure 3: Plots of the function \(f_{k}(\alpha)\) of Eq. (30) for \(d=1\). (Left panel): \(f_{k}(\alpha)\) is shown for several values of \(\alpha\), for \(N=L=500\). The function squeezes towards \(k=0\) for \(0\leq\alpha\leq 1\). For \(1<\alpha<2\), \(f_{k}(\alpha)\) becomes a finite function with a cusp behavior for small \(k\), while for \(\alpha\gg 2\) it is a cosine-like function. (Central panel): \(f_{k}(\alpha)\) is shown for \(\alpha=0.7\) and increasing values of \(N\). Qualitatively similar behavior occurs for \(0\leq\alpha\leq 1\). Squeezing towards a delta function as \(N\to\infty\) occurs with a speed \(N^{-(1-\alpha)}\) for \(\alpha<1\) and \(1/\ln N\) for \(\alpha=1\). (Right panel): a "zoom" of the plot in the central panel is shown, for larger values of \(N\). The rescaled function in the vicinity of \(k=0\) converges to a finite limiting curve as \(N\to\infty\). This _discrete_ structure approaches a continuum as \(\alpha\nearrow 1\).

As \(\alpha\) is increased to values larger than \(d\), all modes \(k\neq 0\) get eventually activated. Despite its simplicity, the result in Eq. (31) has significant physical implications: As we will show below, the low-energy spectrum of a quantum system with long-range interactions remains discrete in the thermodynamic limit.
In this precise sense we may say that _long-range interacting systems with \(0<\alpha<d\) interpolate between few-body and many-body physics._ At the same time, Eq. (32) showcases another fundamental property of long-range interacting systems: the singularity at small momenta gives rise to a divergent velocity of propagation of quantum information across the system for \(\alpha<d+1\), violating the famous Lieb-Robinson light-cone bound of short-range interacting systems [12]. This property is actually completely general, as it does not rely on any low-energy description. We will further discuss its consequences in Sec. 3.1.1.

**Summary**: For \(\alpha>0\) the Hamiltonian separates into the collective (\(\mathbf{k}=\mathbf{0}\)) part and a perturbation that couples the collective spin to all finite-wavelength modes, with amplitudes dictated by \(f_{\mathbf{k}}(\alpha)\).

#### 2.4.2 Quantum paramagnetic phase

It is possible to rewrite the variable-range spin model as a non-linear bosonic Hamiltonian by applying a Holstein-Primakoff transformation to each individual spin: the fully polarized state \[|\Uparrow\rangle=\bigotimes_{\mathbf{r}}|s^{z}_{\mathbf{r}}=s\rangle \tag{35}\] corresponds to the Fock space vacuum \(|\emptyset\rangle\), and each spin is represented by a bosonic mode, \[\begin{cases}\hat{s}^{-}_{\mathbf{r}}=\hat{b}^{\dagger}_{\mathbf{r}}\sqrt{2s-\hat{b}^{\dagger}_{\mathbf{r}}\hat{b}_{\mathbf{r}}}\,,\\ \hat{s}^{+}_{\mathbf{r}}=\sqrt{2s-\hat{b}^{\dagger}_{\mathbf{r}}\hat{b}_{\mathbf{r}}}\ \hat{b}_{\mathbf{r}}\,,\\ \hat{s}^{z}_{\mathbf{r}}=s-\hat{b}^{\dagger}_{\mathbf{r}}\hat{b}_{\mathbf{r}}\,.\end{cases} \tag{36}\] The mapping (36) should be understood as an _embedding_ of the two-dimensional Hilbert space of a spin-\(1/2\) in the infinite-dimensional Hilbert space of a bosonic mode. The states \(|\uparrow\rangle\) and \(|\downarrow\rangle\) are mapped onto \(|0\rangle\equiv|\emptyset\rangle\) and \(|1\rangle\equiv b^{\dagger}|\emptyset\rangle\). The operators on the right-hand sides of Eqs. (36) act non-trivially on the full bosonic space; however, they are block-diagonal, as their matrix elements between the physical spin subspace and its orthogonal complement are vanishing; their action on the physical spin subspace coincides with the operators on the left-hand sides. It is convenient to write the bosonic Hamiltonian directly in momentum space. To this aim, we define the Fourier-transformed bosonic modes6 Footnote 6: Note that we take a unitary Fourier transformation on the bosonic modes, while the convention for spins in Eq. (25) was such that \(\hat{S}^{x,y,z}_{\mathbf{k}=\mathbf{0}}=\hat{S}^{x,y,z}\) (collective spin projections). \[\tilde{b}^{\dagger}_{\mathbf{k}}=\frac{1}{\sqrt{N}}\sum_{\mathbf{r}}e^{i\mathbf{k}\cdot\mathbf{r}}\hat{b}^{\dagger}_{\mathbf{r}}\,. \tag{37}\] We now formally expand the Holstein-Primakoff mapping (36) in \(1/s\) and Fourier-transform term by term: \[\begin{cases}\hat{S}^{-}_{\mathbf{k}}\approx(2Ns)^{1/2}\,\tilde{b}^{\dagger}_{\mathbf{k}}-\frac{1}{2(2Ns)^{1/2}}\sum_{\mathbf{q}_{1},\mathbf{q}_{2}}\tilde{b}^{\dagger}_{\mathbf{q}_{1}}\tilde{b}^{\dagger}_{\mathbf{q}_{2}}\tilde{b}_{\mathbf{q}_{1}+\mathbf{q}_{2}-\mathbf{k}},\\ \hat{S}^{+}_{\mathbf{k}}\approx(2Ns)^{1/2}\,\tilde{b}_{-\mathbf{k}}-\frac{1}{2(2Ns)^{1/2}}\sum_{\mathbf{q}_{1},\mathbf{q}_{2}}\tilde{b}^{\dagger}_{\mathbf{q}_{1}+\mathbf{q}_{2}+\mathbf{k}}\tilde{b}_{\mathbf{q}_{1}}\tilde{b}_{\mathbf{q}_{2}},\\ \hat{S}^{z}_{\mathbf{k}}=Ns\;\delta_{\mathbf{k},0}-\sum_{\mathbf{q}}\tilde{b}^{\dagger}_{\mathbf{q}+\mathbf{k}}\tilde{b}_{\mathbf{q}}.\end{cases} \tag{38}\] It is worth stressing here the connection with the previously introduced expansion. First of all, we immediately recognize that the bosonic mode with \(\mathbf{k}=\mathbf{0}\) coincides with the previously introduced collective bosonic mode in Eq. (19). Furthermore, by expanding \(\hat{S}^{2}\) using Eqs. (38), one can check that \(\hat{n}_{\mathbf{k}=\mathbf{0}}\) cancels to leading order [80]: \[\hat{n}_{\mathrm{sw}}\equiv Ns-S=\sum_{\mathbf{k}\neq\mathbf{0}}\tilde{b}^{\dagger}_{\mathbf{k}}\tilde{b}_{\mathbf{k}}. \tag{39}\] This substantiates the naming of this quantity introduced before Eq. (23). Making the substitutions (38) into Eq.
(28) we obtain an expression of the form \[\hat{H}_{\alpha}=\frac{1}{s}\bigg[(Ns)^{1}\mathcal{E}_{0}+(Ns)^{0}\hat{H}_{2}+(Ns)^{-1}\hat{H}_{4}+\dots\bigg], \tag{40}\] where: \[\mathcal{E}_{0}=\mathcal{H}_{\rm cl}(\mathbf{z})=-h \tag{41}\] is the classical (mean-field) energy density of the paramagnetic state; \[\hat{H}_{2}=\sum_{\mathbf{k}}h\,\tilde{b}^{\dagger}_{\mathbf{k}}\tilde{b}_{\mathbf{k}}-\sum_{\mathbf{k}}J_{0}\,f_{\mathbf{k}}(\alpha)\left(\frac{\tilde{b}_{\mathbf{k}}\tilde{b}^{\dagger}_{\mathbf{k}}+\tilde{b}^{\dagger}_{-\mathbf{k}}\tilde{b}_{-\mathbf{k}}}{2}+\gamma\frac{\tilde{b}_{\mathbf{k}}\tilde{b}_{-\mathbf{k}}+\tilde{b}^{\dagger}_{-\mathbf{k}}\tilde{b}^{\dagger}_{\mathbf{k}}}{2}\right) \tag{42}\] describes semiclassical (Gaussian) spin fluctuations; \[\hat{H}_{4}=\frac{J_{0}}{2}\sum_{\mathbf{k},\mathbf{q}_{1},\mathbf{q}_{2}}f_{\mathbf{k}}(\alpha)\times\Big[\Big(\tilde{b}^{\dagger}_{-\mathbf{k}+\mathbf{q}_{1}+\mathbf{q}_{2}}\tilde{b}_{\mathbf{q}_{1}}\tilde{b}_{\mathbf{q}_{2}}\tilde{b}^{\dagger}_{\mathbf{k}}+\tilde{b}^{\dagger}_{\mathbf{q}_{1}}\tilde{b}^{\dagger}_{\mathbf{q}_{2}}\tilde{b}_{\mathbf{k}+\mathbf{q}_{1}+\mathbf{q}_{2}}\tilde{b}_{-\mathbf{k}}+\tilde{b}_{\mathbf{k}}\tilde{b}^{\dagger}_{\mathbf{q}_{1}}\tilde{b}^{\dagger}_{\mathbf{q}_{2}}\tilde{b}_{-\mathbf{k}+\mathbf{q}_{1}+\mathbf{q}_{2}}+\tilde{b}^{\dagger}_{-\mathbf{k}}\tilde{b}^{\dagger}_{\mathbf{k}+\mathbf{q}_{1}+\mathbf{q}_{2}}\tilde{b}_{\mathbf{q}_{1}}\tilde{b}_{\mathbf{q}_{2}}\Big)+\gamma\Big(\tilde{b}^{\dagger}_{-\mathbf{k}+\mathbf{q}_{1}+\mathbf{q}_{2}}\tilde{b}_{\mathbf{q}_{1}}\tilde{b}_{\mathbf{q}_{2}}\tilde{b}_{-\mathbf{k}}+\tilde{b}_{\mathbf{k}}\tilde{b}^{\dagger}_{\mathbf{k}+\mathbf{q}_{1}+\mathbf{q}_{2}}\tilde{b}_{\mathbf{q}_{1}}\tilde{b}_{\mathbf{q}_{2}}+\tilde{b}^{\dagger}_{-\mathbf{k}}\tilde{b}^{\dagger}_{\mathbf{q}_{1}}\tilde{b}^{\dagger}_{\mathbf{q}_{2}}\tilde{b}_{-\mathbf{k}+\mathbf{q}_{1}+\mathbf{q}_{2}}+\tilde{b}^{\dagger}_{\mathbf{q}_{1}}\tilde{b}^{\dagger}_{\mathbf{q}_{2}}\tilde{b}_{\mathbf{k}+\mathbf{q}_{1}+\mathbf{q}_{2}}\tilde{b}^{\dagger}_{\mathbf{k}}\Big)\Big] \tag{43}\] represents the 2-body non-linear interactions between spin fluctuations. One can similarly derive \((Ns)^{-2}\hat{H}_{6}\) etc. While the full exact bosonic representation is cumbersome, its usefulness rests on the approximability of highly polarized spin states with bosonic states. To this aim we introduce the number of bosons \[\hat{n}_{\text{tot}}\equiv\hat{n}_{0}+\hat{n}_{\text{sw}}=Ns-\hat{S}^{z} \tag{44}\] and we approximate well-polarized states with \(n_{\text{tot}}\ll Ns\) by dilute Fock states with \(n_{\text{tot}}\) bosons. In this corner of the Hilbert space, the bosonic modes turn out to provide an accurate description of spin states and operators. Intuitively, by inspecting Eqs. (38), one recognizes that the action of the non-linear terms (second on the right-hand side) on a dilute Fock state is suppressed by a density factor \(n_{\text{tot}}/N\) compared to the action of the leading terms. Thus, up to an error of order \(\mathcal{O}[(n_{\text{tot}}/N)^{2}]\), we may identify \(\hat{S}^{-}_{\mathbf{k}}\propto\tilde{b}^{\dagger}_{\mathbf{k}}\), \(\hat{S}^{+}_{\mathbf{k}}\propto\tilde{b}_{-\mathbf{k}}\). We now show that the ground state of long-range interacting spin models lives exactly in this corner of the spin space.
An approximate solution of the bosonic Hamiltonian (40) can be found by neglecting the terms with \(\hat{H}_{4}\) and higher order -- an approximation usually termed _linear spin-wave (LSW) theory_. The quality of the result heavily depends on the parameters and in particular on \(\alpha\). Our purpose is to show that the LSW description of low-energy properties becomes _exact_ for \(\alpha<d\), and to quantify its accuracy for \(\alpha>d\). The quadratic spin-wave Hamiltonian can be diagonalized via a standard Bogoliubov transformation, \(\tilde{b}_{\mathbf{k}}=\cosh\theta_{\mathbf{k}}\hat{\beta}_{\mathbf{k}}+\sinh\theta_{\mathbf{k}}\hat{\beta}^{\dagger}_{-\mathbf{k}}\), with \[\tanh(2\theta_{\mathbf{k}})\equiv\frac{\gamma J_{0}f_{\mathbf{k}}(\alpha)}{h-J_{0}f_{\mathbf{k}}(\alpha)}\,. \tag{45}\] The result is \[Ns\,\mathcal{E}_{0}+\hat{H}_{2}=Ns\,\mathcal{E}_{2}+\sum_{\mathbf{k}}\omega_{\mathbf{k},>}(\alpha)\hat{\beta}^{\dagger}_{\mathbf{k}}\hat{\beta}_{\mathbf{k}}\,, \tag{46}\] where we identify the excitation spectrum \[\omega_{\mathbf{k},>}(\alpha)=\sqrt{\left[h-J_{0}f_{\mathbf{k}}(\alpha)\right]^{2}-\gamma^{2}\big[J_{0}f_{\mathbf{k}}(\alpha)\big]^{2}}=\sqrt{\left[h-J_{0}f_{\mathbf{k}}(\alpha)(1-\gamma)\right]\left[h-J_{0}f_{\mathbf{k}}(\alpha)(1+\gamma)\right]}\,, \tag{47}\] and the ground-state energy \[Ns\,\mathcal{E}_{2}=Ns\,\mathcal{E}_{0}+\frac{1}{2}\sum_{\mathbf{k}}\left[\omega_{\mathbf{k},>}(\alpha)-\omega_{>}^{(0)}\right] \tag{48}\] where \(\omega_{>}^{(0)}=h\) [cf. Eq. (17)]. Within LSW theory, the ground-state wavefunction is given by \[|\text{GS}_{2}\rangle=\prod_{\mathbf{k}}\exp\left[\frac{\theta_{\mathbf{k}}}{2}\big(\tilde{b}_{\mathbf{k}}\tilde{b}_{-\mathbf{k}}-\tilde{b}^{\dagger}_{-\mathbf{k}}\tilde{b}^{\dagger}_{\mathbf{k}}\big)\right]|\emptyset\rangle\propto\prod_{\mathbf{k}}\exp\bigg(-\frac{\epsilon_{\mathbf{k}}}{4\gamma J_{0}f_{\mathbf{k}}(\alpha)}\tilde{b}^{\dagger}_{-\mathbf{k}}\tilde{b}^{\dagger}_{\mathbf{k}}\bigg)\,|\emptyset\rangle \tag{49}\] where \[\epsilon_{\mathbf{k}}\equiv h-J_{0}f_{\mathbf{k}}(\alpha)-\omega_{\mathbf{k},>}(\alpha)\geq 0\,. \tag{50}\] Meaningfulness of the LSW solution is determined by \(\omega_{\mathbf{k},>}\) being real. This requires \(h\geq h_{\mathrm{cr}}\), where \[h_{\mathrm{cr}}\equiv J_{0}(1+\gamma). \tag{51}\] The minimum of \(\omega_{\mathbf{k},>}\) is attained as \(\mathbf{k}\to\mathbf{0}\). To expand around this limit we write \(f_{\mathbf{k}}(\alpha)\equiv 1-\sigma_{\mathbf{k}}(\alpha)\). Calculation gives \[\omega_{\mathbf{k},>}(\alpha)\underset{\mathbf{k}\to\mathbf{0}}{\sim}2h\,\sqrt{a+b\sigma_{\mathbf{k}}(\alpha)}, \tag{52}\] with dimensionless coefficients \(a\) and \(b\).7 For \(h>h_{\mathrm{cr}}\) one has \(a>0\) and thus \(\omega_{\mathbf{k},>}\underset{\mathbf{k}\to\mathbf{0}}{\sim}2h\sqrt{a}+h(b/\sqrt{a})\sigma_{\mathbf{k}}\). For short-range interactions \(\alpha\geq d+2\) the gapped dispersion relation is parabolic, \(\sigma_{\mathbf{k}}\sim|\mathbf{k}|^{2}\); for longer range \(d<\alpha<d+2\) it behaves as \(\sigma_{\mathbf{k}}\sim|\mathbf{k}|^{\alpha-d}\); for \(\alpha<d\) the spectrum becomes discrete, \(\sigma_{\mathbf{k}_{\ell}}\equiv\sigma_{\ell}\).8 At the critical point \(h=h_{\mathrm{cr}}\), one has \(a=0\) and hence \(\omega_{\mathbf{k},>}\underset{\mathbf{k}\to\mathbf{0}}{\sim}2h\sqrt{b\sigma_{\mathbf{k}}}\), signaling closure of the spectral gap at \(\mathbf{k}=\mathbf{0}\). However, for \(\alpha<d\), the spectrum of spin-wave excitations with \(\mathbf{k}\neq\mathbf{0}\) is discrete.
Footnote 7: Explicitly, \(a=1-2J_{0}/h+(1-\gamma^{2})(J_{0}/h)^{2}\) and \(b=2(J_{0}/h)[1-(1-\gamma^{2})(J_{0}/h)]\).

To assess the accuracy of LSW theory we evaluate the depletion of spin polarization, i.e. \[\langle\hat{n}_{\mathrm{tot}}\rangle=Ns-\langle\hat{S}^{z}\rangle=\sum_{\mathbf{k}}\langle\mathrm{GS}|\tilde{b}_{\mathbf{k}}^{\dagger}\tilde{b}_{\mathbf{k}}|\mathrm{GS}\rangle. \tag{53}\] Approximating the ground state \(|\mathrm{GS}\rangle\) by the LSW theory ground state \(|\mathrm{GS}_{2}\rangle\) in Eq. (49) we obtain the explicit expression \[\langle\hat{n}_{\mathrm{tot}}\rangle=\frac{1}{2}\sum_{\mathbf{k}}\frac{\epsilon_{\mathbf{k}}}{\omega_{\mathbf{k},>}}\,. \tag{54}\] This quantity depends on \(h\), \(\alpha\) and \(\gamma\); in particular, it is suppressed as \(h\to\infty\) or \(\alpha\to 0\) or \(\gamma\to 0\). In Fig. 4 we plot the depletion per spin \(\langle\hat{n}_{\rm tot}\rangle/N\) given by Eq. (54) at fixed \(\gamma=1\) (quantum Ising model) and for \(d=1\). As is evident, the effect of spin fluctuations is enhanced as the interactions become short-ranged, i.e. \(\alpha\to\infty\), or as the critical point \(h_{\rm cr}=2J_{0}\) is approached.

Figure 4: Ground-state spin depletion density \(\langle\hat{n}_{\rm tot}\rangle/N\) in the quantum paramagnetic phase, cf. Eq. (54), for \(\gamma=1\) and \(d=1\) (variable-range quantum Ising chain).

All the qualitative aspects of this plot can be understood analytically. Specifically, for \(h>h_{\rm cr}\), we have \[\frac{\langle\hat{n}_{\rm tot}\rangle}{Ns}=\frac{1}{16}\left(\frac{h_{\rm cr}}{h}\right)^{2}\frac{1}{Ns}\sum_{\bf k}f_{\bf k}^{2}(\alpha)+\mathcal{O}\bigg(\bigg(\frac{h_{\rm cr}}{h}\bigg)^{3}\frac{1}{Ns}\sum_{\bf k}f_{\bf k}^{3}(\alpha)\bigg). \tag{55}\] The behavior of the right-hand side as \(N\to\infty\) depends qualitatively on \(\alpha\): For \(\alpha>d\) the limit is a finite number, \[\frac{1}{N}\sum_{\bf k}f_{\bf k}^{2}(\alpha)\sim\int_{-\pi}^{\pi}\ldots\int_{-\pi}^{\pi}\frac{dk_{1}\ldots dk_{d}}{(2\pi)^{d}}f_{\bf k}^{2}(\alpha) \tag{56}\] (cf. left panel of Fig. 3). As \(\alpha\searrow d\) the function \(f_{\bf k}(\alpha)\) squeezes on the vertical axis (cf. Fig. 3), suppressing the value of the integral. This means that the spin depletion becomes subextensive for \(\alpha<d\): using \(f_{\boldsymbol{\ell}}(\alpha)\sim|\boldsymbol{\ell}|^{-(d-\alpha)}\) [cf. Eq. (34)], one finds \[\langle\hat{n}_{\rm tot}\rangle\sim\sum_{|\boldsymbol{\ell}|<L/2}|f_{\boldsymbol{\ell}}(\alpha)|^{2}\sim\begin{cases}\mathcal{O}(1)&\text{for }0<\alpha<d/2,\\ \log L&\text{for }\alpha=d/2,\\ L^{2\alpha-d}&\text{for }d/2<\alpha<d.\end{cases} \tag{57}\] On the other hand, for \(h=h_{\rm cr}\), we have \[\frac{\langle\hat{n}_{\rm tot}\rangle}{Ns}=\frac{1}{4Ns}\sum_{\mathbf{k}\neq\mathbf{0}}\bigg(\frac{1+\sigma_{\bf k}(\alpha)}{\sqrt{\sigma_{\bf k}(\alpha)}}-1\bigg). \tag{58}\] Here the behavior of the right-hand side as \(L\to\infty\) depends even more strongly on \(\alpha\), in particular for one-dimensional systems \(d=1\): For \(\alpha>3\) the sum is divergent, \(\frac{1}{N}\sum_{k}\frac{1}{\sqrt{\sigma_{k}(\alpha)}}\sim\int_{-\pi}^{\pi}\frac{dk}{2\pi}\frac{1}{|k|}=\infty\). This divergence witnesses the inadequacy of LSW theory to describe the critical behavior of one-dimensional systems with short-range interactions. In contrast, for \(1<\alpha<3\) one has \(\sigma_{k}(\alpha)\sim|k|^{\alpha-1}\), and the integral is convergent.
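The depletion (54) is a one-line sum once \(f_{\mathbf{k}}(\alpha)\) is available; the sketch below (ours; \(\gamma=1\), \(d=1\), nearest-image distances on a ring) reproduces the qualitative trend of Fig. 4:

```python
import numpy as np

def depletion_per_spin(alpha, h, J0, L):
    """<n_tot>/N from Eqs. (47), (50), (54) for the Ising chain (gamma = 1, d = 1)."""
    r = np.arange(1, L)
    dist = np.minimum(r, L - r).astype(float)
    w = dist**(-alpha)
    k = 2 * np.pi * np.arange(L) / L
    fk = (np.cos(np.outer(k, r)) @ w) / w.sum()   # f_k(alpha), Eq. (30)
    om = np.sqrt(h * (h - 2 * J0 * fk))           # Eq. (47) at gamma = 1
    eps = (h - J0 * fk) - om                      # Eq. (50)
    return 0.5 * np.sum(eps / om) / L             # Eq. (54), divided by N = L

for alpha in (0.5, 1.5, 3.0, 10.0):
    print(alpha, depletion_per_spin(alpha, h=3.0, J0=1.0, L=2000))
# the depletion is suppressed as alpha -> 0 and grows towards short range, cf. Fig. 4
```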
As \(\alpha\searrow 1\) the depletion per spin is suppressed, making LSW theory increasingly accurate. Finally, for \(\alpha<1\), one finds the same subextensive scaling as in Eq. (57). Note, however, that the collective spin mode \(k=0\) yields an additional divergent (but still subextensive) contribution \(\langle\hat{n}_{0}\rangle\sim N^{1/3}\) to \(\langle\hat{n}_{\rm tot}\rangle\) at the critical point, which can be shown by semiclassical analysis (see A); such a contribution is thus dominant for \(0<\alpha<\frac{2}{3}d\) and subleading for \(\alpha>\frac{2}{3}d\). The bottom line of this Section is that the Holstein-Primakoff description of spin fluctuations is _exact_ in the thermodynamic limit for \(0<\alpha<d\), and otherwise an increasingly accurate approximation as \(\alpha\) is decreased towards \(d\). Importantly, this result is true _regardless_ of the value of \(s\), down to \(s=1/2\). Accuracy for low \(s\) may be surprising at first sight, considering Eq. (36). Its origin can be traced back to the observation that the truncated Holstein-Primakoff mapping gives exact matrix elements within the subspace with at most one boson on each site, for arbitrary \(s\). Thus, what really controls the quality of the approximation is the ground-state spin-wave _density_: For \(0<\alpha<d\) the probability of finding two or more bosons in a given site in \(|\text{GS}_{2}\rangle\) is vanishingly small in the thermodynamic limit, and it is finite but parametrically small for \(\alpha\gtrsim d\). **Summary**: The paramagnetic ground state can be determined via linear spin-wave theory, which becomes exact as the strong long-range regime is approached. #### 2.4.3 Quantum ferromagnetic phase To derive the low-energy spectrum in the quantum ferromagnetic phase for \(\alpha>0\), we promote the frame rotation in Eq. (18) from the level of the collective spin to the level of individual spins: \[\hat{s}_{\mathbf{r}}^{x}=\cos\theta\,\hat{s}_{\mathbf{r}}^{X}+\sin\theta\,\hat{s} _{\mathbf{r}}^{Z},\qquad\hat{s}_{\mathbf{r}}^{y}=\hat{s}_{\mathbf{r}}^{Y},\qquad \hat{s}_{\mathbf{r}}^{z}=-\sin\theta\,\hat{s}_{\mathbf{r}}^{X}+\cos\theta\, \hat{s}_{\mathbf{r}}^{Z}\,. \tag{59}\] Hence we perform a Holstein-Primakoff expansion of individual spins with quantization axis \(\mathbf{Z}\) and Fourier-transform, \[\begin{cases}\tilde{S}_{\mathbf{k}}^{X}\approx(Ns)^{1/2}\,\frac{\tilde{b}_{ \mathbf{k}}^{\dagger}+\tilde{b}_{-\mathbf{k}}}{\sqrt{2}}\,,\\ \tilde{S}_{\mathbf{k}}^{Y}\approx(Ns)^{1/2}\,\frac{\tilde{b}_{\mathbf{k}}^{ \dagger}-\tilde{b}_{-\mathbf{k}}}{\sqrt{2}i}\,,\\ \tilde{S}_{\mathbf{k}}^{Z}=Ns\;\delta_{\mathbf{k},0}-\sum_{\mathbf{q}}\tilde {b}_{\mathbf{q}+\mathbf{k}}^{\dagger}\tilde{b}_{\mathbf{q}}\,,\end{cases} \tag{60}\] and substitute into the Hamiltonian (28). As in Eq. 
(40) we obtain a formal series in inverse powers of \(Ns\), including the classical energy \((Ns)^{1}\mathcal{H}_{\rm cl}(\mathbf{Z})\), the quadratic bosonic Hamiltonian \[\hat{H}_{2}=\left(h\cos\theta+J_{0}(1+\gamma)\sin^{2}\theta\right)\sum_{\mathbf{k}}\tilde{b}^{\dagger}_{\mathbf{k}}\tilde{b}_{\mathbf{k}}-J_{0}\sum_{\mathbf{k}}f_{\mathbf{k}}(\alpha)\bigg[\left(\frac{1+\gamma}{2}\cos^{2}\theta+\frac{1-\gamma}{2}\right)\frac{\tilde{b}_{\mathbf{k}}\tilde{b}^{\dagger}_{\mathbf{k}}+\tilde{b}^{\dagger}_{-\mathbf{k}}\tilde{b}_{-\mathbf{k}}}{2}+\left(\frac{1+\gamma}{2}\cos^{2}\theta-\frac{1-\gamma}{2}\right)\frac{\tilde{b}_{\mathbf{k}}\tilde{b}_{-\mathbf{k}}+\tilde{b}^{\dagger}_{-\mathbf{k}}\tilde{b}^{\dagger}_{\mathbf{k}}}{2}\bigg]\,, \tag{61}\] as well as quartic and higher-order interactions involving an even number of bosons. However, unlike in Eq. (40), we also get an additional term \((Ns)^{1/2}\hat{H}_{1}\) linear in \(\hat{q}_{\mathbf{k}=\mathbf{0}}=(\hat{b}_{\mathbf{k}=\mathbf{0}}+\hat{b}^{\dagger}_{\mathbf{k}=\mathbf{0}})/\sqrt{2}\) -- cf. the first line of Eq. (20) -- as well as other odd terms \((Ns)^{-1/2}\hat{H}_{3}\) and so on. The rotation angle \(\theta^{*}\) must be determined by imposing that the expectation value of \(\hat{q}_{\mathbf{k}=\mathbf{0}}\) vanishes. To lowest order this gives the mean-field solution in Eq. (13). Paralleling the derivation in the previous Section we can then solve the LSW Hamiltonian \(\hat{H}_{2}(\theta=\theta^{*})\), which yields the spectrum \[\omega_{\mathbf{k},<}(\alpha)=\sqrt{\left[J_{0}^{2}(1+\gamma)^{2}-h^{2}f_{\mathbf{k}}(\alpha)\right]\left[1-f_{\mathbf{k}}(\alpha)\frac{1-\gamma}{1+\gamma}\right]} \tag{62}\] as well as the zero-point energy shift \(\frac{1}{2}\sum_{\mathbf{k}}\big(\omega_{\mathbf{k},<}(\alpha)-\omega_{<}^{(0)}\big)\), where \(\omega_{<}^{(0)}=J_{0}(1+\gamma)\) [cf. Eq. (22)]. The analysis of spectral properties and of the spin depletion in the quantum paramagnetic phase can be repeated for the quantum ferromagnetic phase, with qualitatively similar conclusions. The mean-field description of local observables is _exact_ for \(0<\alpha<d\) in the thermodynamic limit. For \(\alpha>d\) finite corrections to the mean-field results arise. Such corrections can be evaluated within the bosonic formalism [57, 89]. In particular, the downward shift of the quantum critical point \(h_{\mathrm{cr},\alpha}=h_{\mathrm{cr},\alpha=0}-\delta h_{\mathrm{cr},\alpha}\) due to quantum fluctuations amounts to [57, 89] \[\frac{\delta h_{\mathrm{cr},\alpha}}{h_{\mathrm{cr},\alpha=0}}\ =\ \frac{\gamma}{s}\,\frac{2+3\gamma}{4(1+\gamma)^{2}}\ \int_{-\pi}^{\pi}\ldots\int_{-\pi}^{\pi}\frac{dk_{1}\ldots dk_{d}}{(2\pi)^{d}}f_{\mathbf{k}}^{2}(\alpha)\,. \tag{63}\] The right-hand side is in fact vanishing for \(0<\alpha<d\) and becomes finite for \(\alpha>d\). Note that the effects of quantum fluctuations are suppressed as \(s\to\infty\). For completeness, let us mention that for \(\alpha<d\) the ground state shares the same basic properties as the fully-connected limit [90]. Long-range interactions can however induce unexpected entanglement properties. For instance, for \(d<\alpha<d+1\), the ground-state entanglement entropy of the long-range Dyson hierarchical model obeys an area law at criticality [91], due to its special Tree Tensor Network structure [92]. On the other hand, numerical studies for antiferromagnetic long-range systems have shown violations of area-law scaling also in the gapped phase [93; 94; 95; 96].
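For \(d=1\) the shift (63) reduces to a single Brillouin-zone integral, which can be estimated by a discrete sum; a sketch (ours; \(\gamma=1\), \(s=1/2\), nearest-image distances on a ring):

```python
import numpy as np

def delta_hcr(alpha, L=1000, gamma=1.0, s=0.5):
    """Relative downward shift of the critical field, Eq. (63), for d = 1."""
    r = np.arange(1, L)
    dist = np.minimum(r, L - r).astype(float)
    w = dist**(-alpha)
    k = 2 * np.pi * np.arange(L) / L
    fk = (np.cos(np.outer(k, r)) @ w) / w.sum()   # f_k(alpha), Eq. (30)
    pref = (gamma / s) * (2 + 3 * gamma) / (4 * (1 + gamma) ** 2)
    return pref * np.mean(fk**2)                  # (1/N) sum_k ~ BZ integral

for alpha in (0.5, 1.5, 3.0):
    print(alpha, delta_hcr(alpha))
# the shift vanishes in the strong long-range regime and becomes finite for alpha > d
```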
**Summary**: The ferromagnetic ground state can be determined via linear spin-wave theory in a rotated frame. This approach is exact in the strong long-range regime and it determines corrections to the location of the quantum critical point.

### 2.5 Structure of the spectrum beyond linear spin-wave theory

In this final Subsection we comment on the structure of the many-body low-energy spectrum beyond LSW theory. To grasp such effects we will make use of degenerate perturbation theory -- i.e., of the Schrieffer-Wolff transformation -- around points in the two phases where \(\hat{H}_{\alpha}\) becomes diagonal. Quite generally, spin waves provide a rather complete description of low-energy properties in the quantum paramagnetic phase, even beyond LSW theory. This is best understood in the regime of large external field \(h\), where the ground state approaches the fully polarized state \(|\text{GS}\rangle\approx|\Uparrow\rangle\) of Eq. (35) and corrections can be organized in powers of \(J_{0}/h\). A complementary exactly solvable reference point is the classical Ising limit \(\gamma=1\), \(h=0\), where the energy of any spin configuration can be read off directly from the Hamiltonian: each spin excitation (a unit lowering of \(s^{x}_{\mathbf{r}}\)) costs an energy \(J_{0}/s\), with corrections depending on the mutual distances between excitations; e.g., for two excitations sitting on the same site, \[E\Big(s_{\bf r}^{x}=s-2,\ s_{{\bf R}\neq{\bf r}}^{x}=s\Big)=E_{\rm GS}+2\frac{J_{0}}{s}\qquad\mbox{(for $s>1/2$ only)}\,. \tag{67}\] This implies an attractive potential between spin excitations,11 Footnote 11: Note that spin excitations on the same site (relevant for \(s>1/2\) only) do not feel any attraction or repulsion, unless the Hamiltonian features self-interaction terms. \[V_{{\bf r},{\bf r}^{\prime}}(\alpha)=-\frac{J_{{\bf r},{\bf r}^{\prime}}(\alpha)}{s^{2}}=-\frac{J_{0}}{2s^{2}\mathcal{N}_{\alpha,L}}\ \frac{1}{\|{\bf r}-{\bf r}^{\prime}\|^{\alpha}}\,. \tag{68}\] Similarly one can compute the unperturbed energy of more complex spin configurations with three or more spin excitations. Such excited states acquire a non-trivial dispersion relation upon turning on \(h\neq 0\) or \(\gamma\neq 1\). Using lowest-order degenerate perturbation theory, it is straightforward to check that processes generated by \(1-\gamma\neq 0\) induce a variable-range hopping of individual spin excitations, whereas processes generated by \(h\neq 0\) do not induce any resonant transitions to lowest order. We thus retrieve a dispersion relation \(\sim\frac{J_{0}}{s}[1-f_{\bf k}(\alpha)(1-\gamma)/2]\) for individual spin excitations, in agreement with Eq. (62) from LSW theory. The number \(N_{b}\) of stable spin-wave bound states depends on the relative magnitude of the "classical potential well depths", controlled by \(\alpha\), and of the "quantum hopping amplitudes", controlled by \(1-\gamma\) and \(h\). This number grows without bound as quantum fluctuations are reduced. Estimating \(N_{b}\) as well as the lifetime of unstable bound states, depending on the interaction range and in one or higher dimensions, is in general a challenging problem which, to the best of our knowledge, has not been discussed extensively; see however Refs. [62, 97]. All the observations above carry over to short-range interacting systems \(\alpha=\infty\), _provided_ the system dimensionality is large enough (\(d\geq 2\)). In one dimension LSW theory is still a meaningful description of the paramagnetic spectrum (asymptotically exact for large external field). In the quantum ferromagnetic phase, however, LSW theory completely misses the relevant degrees of freedom, i.e. topological domain-wall-like excitations. In the simplest case \(s=1/2\) these fractionalized spin excitations can be described as fermions, as the exact solution of the XY quantum spin chain [98] makes manifest.
The qualitative effect of longer-range interactions is then to create an effective attractive potential \(v_{\Delta r}(\alpha)\) between domain walls at a distance \(\Delta r\) (NB not to be confused with the attractive potential \(V_{\Delta r}(\alpha)\) between individual spin excitations introduced above!). Taking for simplicity the classical Ising limit \(\gamma=1\), \(h=0\) as a reference, one can straightforwardly compute the excess energy of a spin configuration with two domain walls separated by \(\Delta r\) sites: Assuming \(\alpha>1\),12

Footnote 12: For \(0<\alpha<1\) we have \(v_{\Delta r}(\alpha)=2J_{0}\Delta r\) in the thermodynamic limit, in agreement with naive LSW theory. In this case, however, the spatial configuration of the \(\Delta r\) flipped spins becomes immaterial. Thus, it is not meaningful to speak about "domain-wall confinement".

\[v_{\Delta r}(\alpha)\ \underset{\Delta r\to\infty}{\sim}\ \frac{2J_{0}}{\zeta(\alpha)}\times\left\{\begin{array}{ll}\frac{\Delta r^{2-\alpha}}{(2-\alpha)(\alpha-1)}&\mbox{for $1<\alpha<2$,}\\ \log\Delta r&\mbox{for $\alpha=2$,}\\ \zeta(\alpha-1)-\frac{\Delta r^{-(\alpha-2)}}{(\alpha-2)(\alpha-1)}&\mbox{for $\alpha>2$.}\end{array}\right. \tag{69}\] For \(\alpha>2\) finitely many bound states coexist with unbound deconfined domain walls. As anticipated above, the cost of having a deconfined domain wall blows up as \(\alpha\searrow 2\), which witnesses the stabilization of long-range order by long-range interactions in \(d=1\). Upon decreasing \(\alpha\) the lowest excitation in the spectrum -- the tightest bound state between two domain walls -- is increasingly well described by LSW theory. We finally note that while long-range interacting quantum spin chains do not naturally map to local lattice gauge theories [99], except in special cases [100], the spatial confinement of domain walls bears qualitative resemblance with quark confinement in high-energy physics [101]. This bridge led to insights on the anomalous non-equilibrium dynamics of these systems [62; 63; 102]. Furthermore, although domain-wall deconfinement prevents finite-temperature ordering for \(\alpha>2\), it has been shown that the existence of low-lying bound states is associated with a severe suppression of the dynamical melting rate of the order parameter after shallow quantum quenches [103].

**Summary**: The low-energy spectrum in the quantum ferromagnetic phase hosts a rich structure of spin-wave bound states for \(d\geq 2\), or for \(d=1\) provided \(\alpha\) is low enough. In \(d=1\) deconfined domain-wall-like excitations appear for \(\alpha>2\) along with confined spin-wave-like bound states.

## 3 Low-energy dynamics

The previous Section shows how the main impact of long-range interactions on low-energy equilibrium properties of the Hamiltonian in Eq. (1) can be traced back to the quasi-particle spectrum, in turn determined by the function \(f_{\mathbf{k}}(\alpha)\), see Fig. 3. Interestingly, the same is true for several types of non-equilibrium phenomena at low energies. In the present Section, we focus on dynamics following weak perturbations of the ground state, which is captured by the quadratic quasi-particle Hamiltonian (such as Eq. (42)). This allows us to capture the dynamics of quantum correlations at low energies [41; 50; 51; 104], see Sec. 3.1, the appearance of long-lived metastable states [42; 52], i.e. the quasi-stationary states (QSSs), see Sec.
3.2, the universal defect formation upon slowly traversing criticality [53; 105; 106; 54; 107], see Sec. 3.3, and the appearance of dynamical quantum phase transitions in the Loschmidt echo (DQPTs) [108; 109], see Sec. 3.4. Since Secs. 3.1 and 3.2 focus on super-critical quenches, the dynamics occurs in the near-equilibrium regime, where the spin-wave expansion around the equilibrium state remains applicable. On the other hand, universal defect scaling and DQPTs are observed for quenches across the critical point, making the applicability of the low-energy theory a priori questionable. Nevertheless, we are going to show how the salient features of those critical quenches actually arise from a low density of excitations above the ground state.

### 3.1 Spreading of correlations

In systems governed by local Hamiltonians, out-of-equilibrium quantum correlations are known to spread within a "light cone": The propagation of information in non-relativistic quantum lattice systems with bounded local Hilbert space obeys a speed limit given by the _Lieb-Robinson theorem_ [110]. This states that the support of an operator \(\hat{A}_{\mathbf{r}}\), initially localized in a finite region around site \(\mathbf{r}\) and evolving in the Heisenberg representation with a local Hamiltonian \(\hat{H}\), spreads in space with a finite (model-dependent) velocity \(v_{\mathrm{LR}}\). Formally, for any locally interacting lattice system there exist positive constants \(\mu\) and \(v_{\mathrm{LR}}\) such that the commutator of two operators at distance \(\Delta r=|\mathbf{r}^{\prime}-\mathbf{r}|\) satisfies \[\big\|\big[\hat{A}_{\mathbf{r}}(t),\hat{B}_{\mathbf{r}^{\prime}}(0)\big]\big\|\leq\|\hat{A}_{\mathbf{r}}\|\,\|\hat{B}_{\mathbf{r}^{\prime}}\|\,e^{-\mu\,\max(0,\Delta r-v_{\mathrm{LR}}t)}\;, \tag{71}\] where \(\|\cdot\|\) is the operator norm. Namely, the weight of the time-evolved operator outside the "light-cone" region \(t\geq\Delta r/v_{\rm LR}\) is exponentially suppressed as \(\Delta r-v_{\rm LR}t\to\infty\). In other words, it takes at least a time proportional to the distance, \(t\propto\Delta r\), to send information over a distance \(\Delta r\). Such light-cone propagation of information is by now theoretically well understood in short-range interacting systems, and it goes hand-in-hand with a linear dynamical increase of bipartite entanglement out of equilibrium [111, 112, 113, 114]. Experimental observation of linear light-cone propagation [15, 115] has been accompanied by abundant numerical confirmations [116, 117, 118, 119]. In the presence of long-range interactions, the standard behavior of locally interacting systems changes substantially: the bounds on the group velocity may not hold anymore, and the spreading of correlations and information scrambling may be drastically boosted. The impact of algebraically decaying interactions on correlation spreading has been studied as a function of the power-law exponent \(\alpha\). Part of the current understanding is based on assessing the behavior of the spatial spreading of connected correlations, e.g. \[G_{\alpha\beta}(r,t)=\langle\hat{\sigma}^{\alpha}_{i+r}(t)\hat{\sigma}^{\beta}_{i}(0)\rangle-\langle\hat{\sigma}^{\alpha}_{i+r}(t)\rangle\langle\hat{\sigma}^{\beta}_{i}(0)\rangle\;, \tag{72}\] in paradigmatic quantum spin chains or tight-binding models; the expectation value is taken over some initial state \(|\psi_{0}\rangle\) and \(\alpha,\beta=x,y,z\).
Generalized bounds have been derived for long-range systems [121, 122]; see Ref. [123] for a recent comprehensive review. The related experiments and numerical investigations have, however, led to conflicting pictures [3, 14, 50, 59, 60, 124, 125]. For instance, experiments on ion chains [3] and numerical simulations within the truncated Wigner approximation [126] for the one-dimensional long-range XY model point towards bounded, super-ballistic propagation for all values of \(\alpha\). In contrast, experiments on the long-range transverse Ising model reported ballistic propagation of correlation maxima with, however, observable leaks that increase when \(\alpha\) decreases [14]. Moreover, time-dependent density matrix renormalization group (t-DMRG) and variational Monte-Carlo (t-VMC) numerical simulations indicate the existence of three distinct regimes, namely instantaneous, sub-ballistic, and ballistic, for increasing values of the exponent \(\alpha\), see Refs. [50, 59, 60, 124, 125, 127]. In the following, we will see how these difficulties can be overcome in the restricted setting of near-equilibrium dynamics, by studying correlation spreading within linear spin-wave theory.

**Summary**: The Lieb-Robinson bound forbids super-ballistic spreading of quantum correlations in locally interacting systems. Long-range interactions make it possible to circumvent this constraint.

Figure 5: Spatial spreading of correlations in systems with power-law interactions. (a) Connected correlation function in a long-range trapped-ion platform following a global quench with \(\alpha\approx 0.64\). Image adapted from Ref. [3]. (b) Violation of the Lieb-Robinson bound in Eq. (71) for long-range interacting systems with power-law interactions for \(\alpha>d\). Image adapted from Ref. [120].

#### 3.1.1 Weak long-range regime (\(\alpha>d\))

Let us first consider the case of the Ising Hamiltonian, i.e. Eq. (1) with \(\gamma=1\), but restrict our study to the spin-wave representation in Eq. (46). In this Section, we aim to characterize the universal scaling of correlations following Ref. [51]. Let us simplify the spin-wave dispersion relation in Eq. (47) by considering its low-momentum asymptotic expression, \[\omega_{\mathbf{k}}\mapsto\omega_{\mathbf{k}}^{\text{low}}=\Delta+ck^{\zeta}, \tag{73}\] where the gap \(\Delta=\sqrt{h\left(h+2J_{0}f_{0}(\alpha)\right)}\) is finite, \(c=\sqrt{\frac{h}{h+2J_{0}f_{0}(\alpha)}}J_{0}\frac{\partial f_{0}(\alpha)}{\partial k}\), and \(\zeta=\alpha-d\). As long as \(\alpha>d\) the quasi-particle energy remains finite, while the group velocity diverges for \(d<\alpha<d+1\). The system is prepared in its ground state and the coupling is suddenly quenched from \(J_{0}^{\rm i}\to J_{0}^{\rm f}\equiv J_{0}\) at the initial time \(t=0\). When considering longitudinal spin correlations, i.e. Eq. (72) with \(\alpha=\beta=x\), one can employ the formula \[G_{xx}\left(r,t\right)=g(r)-\int_{\mathcal{B}}\frac{d^{d}\mathbf{k}}{\left(2\pi\right)^{d}}\,\mathcal{F}\left(k\right)\,\frac{e^{i\left(kr+2\omega_{\mathbf{k}}t\right)}+e^{i\left(kr-2\omega_{\mathbf{k}}t\right)}}{2}, \tag{74}\] where the integral spans the first Brillouin zone \(\mathcal{B}\).
In the following we are going to ignore the time-independent function \(g(r)\) and focus on the time evolution of the correlations. The amplitude function is readily obtained as \[\mathcal{F}(k)=\frac{2h\left(J_{0}^{\text{i}}-J_{0}^{\text{f}}\right)f_{k}\left(\alpha\right)}{\left[h+2J_{0}^{\text{f}}f_{k}\left(\alpha\right)\right]\sqrt{h\left[h+2J_{0}^{\text{i}}f_{k}\left(\alpha\right)\right]}}. \tag{75}\] The amplitude of the quench is directly proportional to the difference between the initial \(J_{0}^{\text{i}}\) and final \(J_{0}^{\text{f}}\) couplings. Both these values are chosen to maintain the system within the paramagnetic phase \(h>h_{\text{cr}}\). The time-dependent correlation function \(G_{xx}(r,t)\) of the long-range Ising model obtained by Eq. (74) is displayed in Fig. 6a for \(\alpha=1.7\). The front of the correlation is highlighted by a green line. Its scaling is not linear but algebraic, as expected for long-range interactions [121; 122]. Nevertheless, the front propagation does not saturate the

Figure 6: **Spreading of the connected spin-spin correlation function.** Panel **(a)** displays \(G_{xx}(r,t)\) for the quantum Ising chain with \(\alpha=1.7\) for a sudden quench in the paramagnetic phase from \(h/J_{0}^{\text{i}}=50\) to \(h/J_{0}=1\). The green line is the correlation front, which scales sub-ballistically (the white dashed line represents ballistic propagation). The dashed black lines represent the analytic scaling \(r^{\zeta}\) obtained in Eq. (79). Panel **(b)** reports \(G_{xx}(r,t)\) for the quantum \(XY\) model with \(\alpha=3\) in \(d=2\) for a quench starting from the fully polarized state along \(x\) and evolved with the Hamiltonian in Eq. (1) with \(\gamma=h=0\) (i.e. a quench from \(\gamma=1\) to \(\gamma=0\)). The dashed black lines show the scaling of the maxima, which is linear in the axis \(r^{\zeta_{XY}}\). Panel **(a)** is adapted from Ref. [51] and panel **(b)** from Ref. [104].

conventional super-ballistic bounds [128]; rather, it displays a sub-ballistic scaling, i.e. \(t\sim r^{\beta_{\rm front}}\) with \(\beta_{\rm front}>1\), which is represented as a solid green line. Inside the correlation front the scaling changes, and for the Ising model the correlation maxima (light yellow areas) propagate ballistically with \(t\sim r\). It is interesting to use the stationary phase approximation in order to evaluate Eq. (74) in the large-size and long-time limit. Indeed, for \(t,r\to\infty\) the integral in Eq. (74) is dominated by the configurations with \[\nabla_{k}\omega_{k}=r/t \tag{76}\] where the group velocity diverges as \(k^{\zeta-1}\) in the \(k\to 0\) limit. Thus, quasi-particles with momentum \(k_{\rm sp}=\left(2|c|\zeta t/r\right)^{1/(1-\zeta)}\) fulfil Eq. (76) at any given point \((t,r)\) in space-time. The leading contribution to the correlation front propagation comes from the low-energy divergence of the quasi-particle group velocity. In order to evaluate the leading contribution to the correlation function we assume that the amplitude function obeys \(\lim_{k\to 0}\mathcal{F}(k)\sim k^{\eta}\), leading to \[G_{xx}(r,t)\propto\frac{t^{\gamma}}{r^{\chi}}\cos\left[A_{\zeta}\left(\frac{t}{r^{\zeta}}\right)^{\frac{1}{1-\zeta}}-2\Delta t+\frac{\pi}{4}\right]\,, \tag{77}\] with \(\gamma=\frac{\eta+d/2}{1-\zeta},\chi=\frac{\eta+d(2-\zeta)/2}{1-\zeta}\), and \(A_{\zeta}=2|c|(1-\zeta)(2|c|\zeta)^{\frac{\zeta}{1-\zeta}}\). It follows from Eq.
(77) that the correlation front obeys the relation \(t^{\gamma}\approx r^{\chi}\) and \[t\propto r^{\beta_{\rm front}},\qquad\beta_{\rm front}=\chi/\gamma. \tag{78}\] Interestingly, the propagation of the wave-front does not depend only on the universal scaling exponent \(\zeta\), but also on the specific correlation function under consideration, since the exponent \(\eta\), which describes the low-energy scaling of \(\mathcal{F}(k)\) in Eq. (77), enters in the determination of the ratio \(\chi/\gamma\). As the local limit is approached, the quasi-particle velocity ceases to diverge (\(\zeta\to 1\)) and linear spreading of the wave-front is recovered, so that Eq. (77) reproduces the Lieb-Robinson expectation [12]. The relation \(\chi=\gamma+d/2\) yields \(\beta_{\rm front}>1\) and imposes sub-ballistic wave-front propagation. The quench protocol under consideration stays within the paramagnetic phase, which is characterized by a gapped dispersion relation, see Eq. (73). This, in turn, leads to \(\lim_{k\to 0}\mathcal{F}(k)\sim\mathcal{O}(1)\), and the scaling exponent of the correlation function vanishes, i.e. \(\eta=0\). Thus, only the exponent \(\zeta\) determines the front propagation scaling \(\beta_{\rm front}=2+d-\alpha\). The theoretical prediction for \(\alpha=1.7\) and \(d=1\) produces \(\beta_{\rm front}=1.3\), in perfect agreement with the one observed in the numerical computation displayed in Fig. 6a. The formula \(\beta_{\rm front}=2+d-\alpha\) also matches the exact result obtained in Ref. [60] for \(d=1\) and \(\alpha=3/2\), also confirmed by t-VMC calculations. Within the causal region delimited by the wave-front, the local maxima are determined by the maxima of the cosine function in Eq. (77). Thus, the correlation maxima occur at the time \(t_{\rm max}\), whose value does not depend on the shape of \(\mathcal{F}(k)\), but only on the value of the \(\zeta\) exponent, yielding \[t_{\rm max}\propto r^{\zeta}, \tag{79}\] at least for a gapless dispersion relation. According to this analysis, the maxima of the correlations, located at the time \(t_{\rm max}\), spread super-ballistically for weak long-range interactions. This has to be contrasted with the sub-ballistic scaling obtained in Eq. (78) for the front propagation. The result in Eq. (79) is consistent with the experimental observation on trapped ions [3] as well as with the truncated Wigner approximation analysis [126, 129]. However, for the long-range Ising model the dynamical protocol under consideration remains within the paramagnetic phase, leading to a finite gap \(\Delta\neq 0\). Therefore, the argument of the cosine function in Eq. (77) is insensitive to the non-analytic scaling of the dispersion relation in the low-energy limit, becoming constant in the large \(t\) and \(r\) limit with \(t/r\sim const\). Thus, Eq. (79) has to be substituted with \(t_{\rm max}\propto r\) for gapped dispersion relations. It follows that the local maxima are always ballistic, \(\beta_{\rm max}=1\), for quenches within gapped phases, see Fig. 6a. The ballistic motion of local maxima has also been observed with a trapped-ion quantum simulator [14]. Based on the above discussion, the scaling of correlations in long-range systems is universal, in the sense that it reflects the low-energy properties of the model.
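The stationary-phase analysis above can be checked against a direct numerical quadrature of Eqs. (74)-(75). The sketch below is illustrative: the Kac-normalized lattice Fourier sum used for \(f_{k}(\alpha)\), its real-space cutoff, and the quench parameters (taken from the caption of Fig. 6a) are all assumptions, since normalization conventions for \(f_{k}\) vary.

```python
import numpy as np

alpha = 1.7                                  # weak long-range regime, d = 1
h, J0i, J0f = 1.0, 1.0 / 50.0, 1.0           # quench h/J0: 50 -> 1, as in Fig. 6a
M, R = 2048, 2000                            # k-grid size, real-space cutoff

k = np.linspace(-np.pi, np.pi, M, endpoint=False)
r = np.arange(1, R + 1)
kac = 2.0 * np.sum(1.0 / r ** alpha)         # Kac normalization, so that f_0 = 1
f_k = (2.0 / kac) * (np.cos(np.outer(k, r)) / r ** alpha).sum(axis=1)

w_f = np.sqrt(h * (h + 2 * J0f * f_k))       # post-quench dispersion
F = (2 * h * (J0i - J0f) * f_k
     / ((h + 2 * J0f * f_k) * np.sqrt(h * (h + 2 * J0i * f_k))))  # Eq. (75)

def G_dyn(rr, t):
    """Time-dependent part of Eq. (74): -(1/2pi) int dk F(k) cos(k r) cos(2 w_k t)."""
    return -(F * np.cos(k * rr) * np.cos(2 * w_f * t)).mean()

ts = np.linspace(0.02, 40.0, 2000)
for rr in (10, 20, 40):
    g = np.abs([G_dyn(rr, t) for t in ts])
    i = next(j for j in range(1, len(ts) - 1) if g[j - 1] < g[j] > g[j + 1])
    print(f"r={rr:3d}: first local maximum of |G_xx| at t ~ {ts[i]:.2f}")
```

The printed first maxima should shift roughly linearly with \(r\), consistent with \(\beta_{\rm max}=1\) for this gapped quench.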
Then, a very different picture is obtained by studying a quantum quench within a gapless phase. In order to investigate this dependence we prepare the system in the state fully polarized along the direction \(x\) and evolve it with the Hamiltonian in Eq. (1) with \(\gamma=h=0\), i.e. the long-range XY Hamiltonian, which has \(U(1)\) symmetry. Following Ref. [104] we consider the case \(d=2\). The dispersion relation can be obtained from Eq. (62) by setting \(\gamma=h=0\): \[\omega_{k}=J_{0}\,\sqrt{1-f_{k}(\alpha)}. \tag{80}\] As expected, changing the symmetry of the final Hamiltonian modifies the low-energy dispersion relation, which now scales as \(\omega_{k}^{\rm low}\propto k^{\zeta_{\rm XY}}\) with \(\zeta_{\rm XY}=(\alpha-d)/2\), leading to a diverging quasi-particle group velocity in the \(k\to 0\) limit for \(d<\alpha<d+2\). A straightforward computation for the long-range XY model [104; 51] produces \(\eta=\zeta\), leading to \(\beta_{\rm front}=1+d(2+d-\alpha)/(2\alpha)\). On the other hand, Eq. (79) remains unchanged and yields \(\beta_{\rm max}=\zeta_{\rm XY}\), as is visible in Fig. 6b and verified by DMRG calculations in Ref. [104].

**Summary**: In the weak long-range regime, the correlation front spreads non-linearly, with exponents that depend on the details of the underlying low-energy dispersion.

#### 3.1.2 Strong long-range regime (\(0<\alpha<d\))

We now consider the Hamiltonian (1) in the strong long-range regime \(0<\alpha<d\). Following Ref. [50], in this Subsection the interactions are _not_ Kac-normalized, i.e. we set \(J\equiv J_{0}\) in Eq. (3). This leads us to discuss the effect of a divergent quasi-particle energy for \(k\to 0\) on the correlation spreading. [Note that this is different from the rest of the Report, where we focus on the discrete spectrum in Eq. (34) at low \(k\)!] Within this framework, we approximate the low-energy dispersion relation with the expression \[\omega_{k}\approx\frac{e_{0}}{k^{\gamma}}, \tag{81}\] where \(e_{0}=\sqrt{2hJ_{0}}\) and \(\gamma=\frac{d-\alpha}{2}\). Including the modified dispersion relation in Eq. (74) one gets \[G_{xx}(r,t)\sim\int d\Omega\int_{\pi/L}^{\pi}dk\,k^{d-1+\gamma}e^{ikr\cos(\theta)}\left[1-\cos\left(2e_{0}tk^{-\gamma}\right)\right], \tag{82}\] where the factor \(k^{\gamma}\) comes from the low-energy limit of the amplitude function in Eq. (75), i.e. \(\mathcal{F}(k)\sim k^{\gamma}\). Due to the divergent nature of the quasi-particle spectrum, one can introduce a low-energy cutoff \(\sim 1/L\) in the momentum integral in Eq. (82). After expanding the exponential term in Eq. (82) in powers of the distance \(r\), the integration is performed term by term [50]. Then, after discarding terms that remain finite in the system size, one finds \[G_{xx}(r,t)\sim\lim_{L\to\infty}\frac{\sin\left(L^{\gamma}\tau\right)}{\tau}\,\frac{\int d\Omega\,e^{i\frac{r}{L}\cos(\theta)}}{L^{2\gamma+d}}, \tag{83}\] where \(\tau=2e_{0}t\) is the dimensionless time variable. Due to the algebraic divergence of the quasi-particle spectrum, the time scale for signal spreading in the system vanishes in the thermodynamic limit. Accordingly, the vanishing signal-spreading time displays the same scaling exponent \(\gamma\) as the divergence of the quasi-particle energy, which for the Ising model reads \(\gamma=\frac{d-\alpha}{2}\). This analytic derivation has also been corroborated by the numerical analysis of the spin-wave dynamics in Ref. [50].
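As a quick numerical illustration of the scaling implied by Eqs. (81)-(83) (all parameter values below are arbitrary), one can check that the half-period of the fastest mode at the infrared cutoff \(k\sim 1/L\), which sets the signal-spreading time, vanishes algebraically with the system size.

```python
import numpy as np

alpha, d, h, J0 = 0.5, 1, 1.0, 1.0
gamma = (d - alpha) / 2.0            # divergence exponent of Eq. (81)
e0 = np.sqrt(2.0 * h * J0)

for L in (1e2, 1e4, 1e6):
    k_min = np.pi / L                      # infrared cutoff of Eq. (82)
    w_max = e0 * k_min ** (-gamma)         # divergent quasi-particle energy, Eq. (81)
    print(f"L = {L:8.0e}: signal time ~ {np.pi / (2 * w_max):.3e}")
# Successive ratios equal (1e2)**(-gamma) = 10**(-0.5), i.e. t_signal ∝ L^(-gamma).
```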
It is worth noting that the same scaling has been derived within a generalized Lieb-Robinson bound in long-range fermionic systems [41].

**Summary**: In the strong long-range regime without Kac normalization, the divergent quasi-particle energy leads to instantaneous correlation spreading in the thermodynamic limit.

#### 3.1.3 Other directions

The present Subsection has been devoted to the study of correlation spreading within linear spin-wave theory [50; 51; 60; 104]. This approximation proved capable of capturing the salient features of several numerical simulations [59; 124; 126; 127; 130]. In particular, in the case of the quantum Ising chain [Eq. (1) with \(d=1\)], numerical matrix-product state calculations have shown the emergence of a short-range-like light-cone behavior for \(\alpha>2\) [127], as confirmed by the study in Sec. 3.1.1. On the other hand, for \(1<\alpha<2\), the model displays a clear light cone but with an infinite propagation speed of almost all excitations. This is linked to the divergence of the maximum group velocity, which leads to a scenario of multispeed prethermalization [104], see again Sec. 3.1.1. For \(\alpha<1\), all studies report a clear nonlocal regime, with instantaneous transmission of correlations between distant sites, in agreement with the study reported in Sec. 3.1.2. Despite the successes of linear spin-wave theory, it would be interesting to reconsider these results using the time-dependent framework that we will present in Sec. 4.2.1. One may compute \(G(r,t)\) by expressing spin operators in a time-dependent frame and expanding them using Holstein-Primakoff bosons. This very analysis has been performed in a related model in Ref. [57] in connection with dynamical phase transitions; see also the analogous calculation for scrambling dynamics in Eq. (143). Finally, a very successful research direction based on rigorous mathematical investigations was pursued to generalize the Lieb-Robinson bound to power-law decaying interactions [123]. In a seminal work of 2006, Hastings and Koma [121] showed that it takes a time \(t\gtrsim\log r\) to propagate information at distance \(r\) for all \(\alpha>d\). However, this bound is far from tight, since it does not recover the linear light-cone in Eq. (71) for large \(\alpha\). Several efforts in the past years have led to a greatly improved picture [36; 38; 41; 122; 131; 132; 133; 134; 135; 136; 137; 138; 139]. Firstly, the existence of a linear light-cone \(t\gtrsim r\) was proven for \(\alpha>2d+1\) [40; 140]. Secondly, it was shown that this result becomes \(t\gtrsim r^{\min(\alpha-2d,1)}\) for \(\alpha>2d\) [120; 139]. On the other hand, in the strong long-range regime \(0<\alpha<d\), correlations between distant degrees of freedom can propagate instantaneously, since the bounds on the light-cone time-scale can vanish with the system size [39; 41; 137]. So far, the best estimate for interacting systems is \(t\gtrsim N^{\alpha/d-1}\log N\), which can be tightened to \(t\gtrsim N^{\alpha/d-1/2}\) for free models with \(\alpha<d/2\) [39]. Notably, violations of the Lieb-Robinson bound have been experimentally probed on trapped-ion quantum simulators for \(0.6\lesssim\alpha\lesssim 1.2\) in Refs. [3; 14].

**Summary**: In addition to low-energy approximations, the spreading of correlations has been tackled with various approaches, ranging from numerical simulations to mathematically rigorous bounds.
The currently established scenario is rather complete and satisfactory.

### Metastability

In the following, we are going to show that metastability in quantum strong long-range systems may be traced back to their discrete quasi-particle spectrum, which hinders the applicability of the kinematical chaos hypothesis [141].

#### 3.2.1 State of the art

Diverging equilibration times in the thermodynamic limit are a well-known feature of long-range interacting systems. This phenomenon is widespread within the classical long-range physics world [142; 143], but multiple theoretical observations have also occurred in the quantum realm [42; 144]. Long-lived pre-thermalization is also expected to occur in cold-atom clouds confined in optical resonators [145], where a semi-classical analysis of the Fokker-Planck equation directly connects to the Hamiltonian mean-field model [146; 147], the workhorse of classical long-range physics [148]. Recent studies have directly linked the absence of equilibration in strong long-range quantum systems to the discreteness of the quasi-particle spectrum, see Eq. (34a). This results in a violation of Boltzmann's H-theorem and leads to the emergence of finite Poincaré recurrence times in the thermodynamic limit [52]. This section discusses the appearance of diverging equilibration times for quantum long-range systems in the thermodynamic limit. This is consistent with the properties discussed in Section 2.4, which are common to both large long-range systems and finite local ones. Examples include the inability to completely disregard boundary effects over bulk phenomena [149; 150], the existence of concave entropy regions [151], and the presence of a macroscopic energy gap between the ground state and the first excited state [152; 153]. It is worth noting that our description mostly pertains to isolated quantum systems, while multiple theoretical and experimental observations in cavity systems evidenced a substantial role of dissipation [154; 155]. The crucial aspect is that the excitation spectrum of non-interacting systems does not become continuous in the thermodynamic limit. The eigenvalues of a long-range coupling matrix have been shown to remain discrete even in the infinite-components limit, forming a pure point spectrum [156] similar to that observed in strongly disordered systems [157; 158; 159; 160]. A discussion of the spectral discreteness of long-range couplings in the thermodynamic limit has been presented in Ref. [52] for a few quadratic models and used to explain the observation of diverging equilibration times in a long-range Ising model quenched across its quantum critical point [42]. We refer the readers to Sec. 2.4 and Appendix C.

#### 3.2.2 Quasi-stationary states and spectral properties

The first evidence of quasi-stationary states (QSS) in quantum systems was described in the prototypical example of the long-range quantum Ising chain [see Eq. (1)]. QSS were shown to appear for quenches starting well inside the paramagnetic phase in the \(h\to\infty\) limit and ending deep in the ferromagnetic phase at \(h=0\). Here, the system is prepared in the transversally polarized ground state and evolved according to the classical ferromagnetic Hamiltonian in Eq. (1) in the absence of the transverse field \(h=0\). As a result, the expectation of the global operator \(m_{z}=\langle\sum_{i}\sigma_{i}^{z}\rangle/N\) evolves from the initial value \(\lim_{t\to 0}m_{z}=1\) to the equilibrium expectation \(\lim_{t\to\infty}m_{z}=0\), if the system actually equilibrates.
These observations have been extended to any choice of the initial and final magnetic fields \(h_{i}\), \(h_{f}\) using the Jordan-Wigner representation of the Ising model. The appearance of the QSS has been frequently linked to the scaling of equilibration times of critical observables, such as the magnetization [161; 47; 143]. However, persistent time fluctuations have also been found in generic thermodynamic observables of classical systems, such as the evolution of the internal energy in systems of particles with attractive power-law pair interactions [95]. Similar phenomena can be observed in our system by considering just the leading-order low-energy theory. In order to simplify the study we restrict our analysis to the paramagnetic quantum Ising chain, whose quasi-particle dispersion is \[\omega_{k}=\sqrt{h(h-2J_{0}f_{k}(\alpha))}\,, \tag{84}\] cf. Eq. (47). It is worth noting that the present spin-wave approximation corresponds to the time-dependent Hartree-Fock approximation of the Ising and \(O(N)\) rotor models. Accordingly, several phenomena occurring in the out-of-equilibrium low-energy dynamics of the Ising Hamiltonian can also be observed in the large-\(N\) limit of \(O(N)\) models [162, 163], including prethermalization [164, 165], defect formation [166], and dynamical phase transitions [109]. In particular, the dynamics induced by a sudden quench leads to universal relaxation properties [109, 163, 167]. However, equilibration does not occur in the non-additive regime, due to the discrete quasi-particle spectrum \(f_{k}(\alpha)\). In order to demonstrate this fact, let us consider a sudden magnetic field quench \(h^{\rm i}\to h^{\rm f}\) in the Hamiltonian (46). The quench occurs within the normal phase \(h>h_{\rm cr}\), so that no magnetization occurs. Nevertheless, a finite spin-wave density will arise due to the sudden quench and will contribute to the evolution of any internal observable of the system. In order to make a direct parallel with the classical case described in Ref. [168] we consider the evolution of the spin-wave kinetic energy \[K(t)=\sum_{k}\frac{\langle\hat{p}_{k}^{2}\rangle}{2N}=-\sum_{k}\frac{\omega_{k}}{4N}\left\langle\left(\beta_{k}^{\dagger}-\beta_{k}\right)^{2}\right\rangle\, \tag{85}\] where \(\beta_{k}\) and \(\beta_{k}^{\dagger}\) diagonalize the quadratic Hamiltonian in Eq. (42). The calculation is rather straightforward, since the system is assumed to lie in the ground state before the sudden quench. After the quench, each spin-wave occupies a squeezed state, so that the system lies in the quantum state \(\Pi_{k}\hat{S}_{k}(\zeta)|0\rangle\), where \(|0\rangle\) is the vacuum and the squeezing operator \(\hat{S}_{k}(\zeta)\) reads \[\hat{S}_{k}(\zeta)=\exp\frac{\left(\zeta^{*}(\hat{\beta}_{k})^{2}-\zeta(\hat{\beta}_{k}^{\dagger})^{2}\right)}{2}. \tag{86}\] The squeezing parameter \(r\) is defined by rewriting \(\zeta\) in polar coordinates, \(\zeta=re^{i\phi}\). Then, it is rather straightforward to rewrite the squeezing parameter in terms of the effective oscillator length \(\xi(t)\), \[\tanh r_{k}=\sqrt{\frac{\left(\frac{1}{2\xi_{k}(t)^{2}}-\omega_{k}\right)^{2}+\frac{\dot{\xi}_{k}(t)^{2}}{\xi_{k}(t)^{2}}}{\left(\frac{1}{2\xi_{k}(t)^{2}}+\omega_{k}\right)^{2}+\frac{\dot{\xi}_{k}(t)^{2}}{\xi_{k}(t)^{2}}}}. \tag{87}\] To obtain the spin-wave dynamics, it is then sufficient to solve the Ermakov equation, which describes the evolution of the effective length [169]: \[\ddot{\xi}_{k}(t)+\omega_{k}^{2}(t)\,\xi_{k}(t)=\frac{1}{4\xi_{k}(t)^{3}}.
\tag{88}\] The solution of the sudden quench dynamics is readily obtained by introducing \(\omega_{k}(t)=\theta(-t)\,\omega_{k,i}+\theta(t)\,\omega_{k,f}\) in Eq. (88). The resulting dynamical evolution of the spin-wave kinetic energy is displayed in Fig. 7 for \(\alpha\in[0.15,0.35,0.65,0.95]\). In analogy with the classical case, the observable \(K(t)\) displays persistent dynamical oscillations, which do not wash out in the thermodynamic limit. The smaller the \(\alpha\), the wider the amplitude of those fluctuations. A simple explanation of this phenomenon is found in the fully connected limit (\(\alpha\to 0\)), where the function \(f_{k}(\alpha)\) separates into two distinct energy levels in the thermodynamic limit: a non-degenerate ground state with energy \(-J\) and \(N-1\) degenerate excited states with zero energy. In the presence of any given set of boundary conditions, the degeneracy is split and the system behaves at finite size as a set of harmonic oscillators with discrete energies. As the size increases, the spectrum accumulates at high energy, where the eigenvalues \(f_{k}(\alpha)\) of the coupling matrix become all identical, making the system equivalent to a single quenched harmonic oscillator. To characterize equilibration, we introduce the characteristic function of any observable \(A\), i.e. \(\chi_{A}(t)=A(t)-\tilde{A}\). This quantity captures the dynamical fluctuations around the average value of the observable. Equilibration of the observable \(A(t)\) in closed quantum systems occurs when the long-time Cesàro average of the squared fluctuation vanishes [170; 171; 172]: \[\lim_{T\to\infty}Q_{A}(T)\equiv\lim_{T\to\infty}\langle|\chi_{A}(t)|^{2}\rangle_{T}=\lim_{T\to\infty}\frac{1}{T}\int_{0}^{T}|\chi_{A}(t)|^{2}dt=0\;, \tag{89}\] while metastability shall be associated with a finite value \(\lim_{T\to\infty}Q_{A}(T)\neq 0\). The equilibration of the kinetic energy \(K\) in Eq. (85) - or of other physical observables - follows from an argument similar to the one applied to the fidelity of a quantum system in the context of the spectrum of long-range systems [52]. In _weak long-range interacting systems_ with translational invariance, the spectrum becomes absolutely continuous in the thermodynamic limit. This implies that \(\lim_{t\to\infty}\chi_{K}(t)\to 0\) even outside of the Cesàro average, due to the Riemann-Lebesgue lemma. For quantum systems with initial states having no overlap with pure point portions of the spectrum, equilibration as defined in Eq. (89) is ensured by Wiener's theorem [156]. Considering the thermodynamic limit of a _strong long-range interacting_ system, as the system size \(N\) increases the eigenmodes \(f_{k}(\alpha)\) of the Hamiltonian tend to accumulate at high energy, near \(\omega_{k}|_{h_{f}}\sim 2h_{f}\). In fact, in the case of _flat interactions_ (\(\alpha=0\)), the spectrum consists of a single infinitely degenerate eigenvalue, resulting in dynamics that precisely correspond to a single harmonic oscillator.

Figure 7: **Equilibration of the long-range spherical model.** Panel (a) displays the dynamical evolution of the kinetic energy following the sudden quench \(h_{i}\to h_{f}\). After a steady decay during the initial dynamics \(t\lesssim 10^{2}\), dynamical oscillations settle to a finite value that remains steady in the long-time regime. The amplitude of dynamical fluctuations of the kinetic energy after a time \(T\) is quantified by the quantity \(Q_{K}(T)\), see Eq. (89), displayed in panel (b).
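The persistent oscillations of Fig. 7 can be reproduced by integrating the Ermakov Eq. (88) mode by mode. The sketch below is a minimal implementation under stated assumptions: a Kac-normalized chain Fourier sum for \(f_{k}(\alpha)\), the dispersion of Eq. (84), the standard Gaussian-state identity \(\langle\hat{p}_{k}^{2}\rangle=\dot{\xi}_{k}^{2}+1/(4\xi_{k}^{2})\) for the kinetic energy, and illustrative parameter values.

```python
import numpy as np

alpha, N, J0 = 0.5, 200, 1.0
h_cr = 2.0 * J0
h_i, h_f = 2.2 * h_cr, 1.1 * h_cr             # sudden quench within the normal phase

r = np.arange(1, N // 2 + 1)
kac = 2.0 * np.sum(1.0 / r ** alpha)          # Kac normalization, f_0 = 1
k = 2.0 * np.pi * np.arange(1, N) / N         # k = 0 (classical) mode excluded
f_k = (2.0 / kac) * (np.cos(np.outer(k, r)) / r ** alpha).sum(axis=1)

w_i = np.sqrt(h_i * (h_i - 2 * J0 * f_k))     # Eq. (84), pre-quench
w_f = np.sqrt(h_f * (h_f - 2 * J0 * f_k))     # Eq. (84), post-quench

xi, v = 1.0 / np.sqrt(2.0 * w_i), np.zeros(N - 1)   # pre-quench ground state

def acc(x):                                   # Ermakov Eq. (88) after the quench
    return 1.0 / (4.0 * x ** 3) - w_f ** 2 * x

dt, steps = 0.005, 20000                      # vectorized RK4 up to T = 100
K = np.empty(steps)
for n in range(steps):
    k1x, k1v = v, acc(xi)
    k2x, k2v = v + .5*dt*k1v, acc(xi + .5*dt*k1x)
    k3x, k3v = v + .5*dt*k2v, acc(xi + .5*dt*k2x)
    k4x, k4v = v + dt*k3v, acc(xi + dt*k3x)
    xi = xi + dt * (k1x + 2*k2x + 2*k3x + k4x) / 6
    v  = v  + dt * (k1v + 2*k2v + 2*k3v + k4v) / 6
    K[n] = np.mean(v ** 2 + 1.0 / (4.0 * xi ** 2)) / 2.0  # kinetic energy per mode

chi = K - K.mean()
Q = np.cumsum(chi ** 2) / np.arange(1, steps + 1)   # Cesàro average of Eq. (89)
print("Q_K(T) at T = 25, 50, 100:", Q[steps//4 - 1], Q[steps//2 - 1], Q[-1])
```

In this strong long-range setting the printed \(Q_{K}(T)\) should saturate to a finite value rather than decay, mirroring the persistent fluctuations in panel (b) of Fig. 7.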
#### 3.2.3 Equilibration in the presence of disorder

We would like to comment on the relation between these results and the metastability that arises in disordered systems. From the perspective of spectral discreteness, the metastability observed for quantum systems in the strong long-range regime is fundamentally different. Indeed, consider the case where the flat interactions (\(\alpha=0\)) are perturbed by weak Gaussian-distributed couplings \(u_{ij}\), namely \(J^{\rm dis}_{\mathbf{r},\mathbf{r}^{\prime}}=J_{\mathbf{r},\mathbf{r}^{\prime}}+u_{ij}\) with \[P(u_{ij})\propto\exp\left(-N\,u_{ij}^{2}/2w^{2}\right) \tag{90}\] where the width \(2w\) represents the disorder strength. The disordered couplings lift the infinite (\(\sim N-1\)) degeneracy of the excited states at zero energy, and the spectrum becomes continuous apart from the single non-degenerate ground state at energy \(J_{0}\), where \(J_{0}>w\) is the strength of the flat homogeneous interactions [173; 174; 175]. The density of states of the continuous spectrum follows the celebrated Wigner semicircle law [176]. In analogy with the non-disordered case, we initialize the dynamics at equilibrium for \(h=2.2h_{\rm cr}\) at \(t\leq 0\); then, at \(t>0\) the magnetic field suddenly switches to \(h_{f}=1.1h_{\rm cr}\). The continuous nature of the spectrum leads the spin-wave kinetic energy \(K(t)\) to equilibrate exponentially, see Fig. 8a. Indeed, the amplitude of dynamical fluctuations in disordered systems decays exponentially, allowing for the introduction of the equilibration time \(\tau_{\rm eq}\): \[Q_{K}(T)\sim e^{-T/\tau_{\rm eq}}\, \tag{91}\]

Figure 8: **Equilibration of the disordered long-range Ising model within the spin-wave approximation.** Panel (a): Decay of the dynamical fluctuations of the kinetic energy \(K\), see Eq. (85), as a function of the disorder strength. As the disorder strength is decreased, the decay rate also decreases until it vanishes in the zero-disorder limit (upper blue curve), where dynamical fluctuations persist at all times and equilibration never occurs. Panel (b): the equilibration time of the kinetic energy observable obtained by fitting the curves in panel (a) via the exponential form in Eq. (91). The divergence in the clean case (\(2w\to 0\)) is evident. Figures reproduced from Ref. [52].

Numerical analysis shows that the equilibration time \(\tau_{\rm eq}\) monotonically decreases with increasing disorder strength \(w\), as expected (see Fig. 8b). The exponential decay of dynamical fluctuations and the definition of the equilibration time provide insights into the equilibration dynamics of disordered systems. Interestingly, the results obtained in Fig. 8 for quantum systems show remarkable similarities to those obtained for classical spherical models with disordered couplings, as shown in Chapter 4 of Ref. [177]. There, the Langevin dynamics of the disordered classical spherical model is shown not to exhibit metastability, as long as the initial state is not magnetized.
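The spectral mechanism invoked here is easy to visualize numerically. The self-contained sketch below (matrix size and strengths are illustrative) diagonalizes a flat coupling matrix perturbed as in Eq. (90) and exhibits the isolated eigenvalue near \(J_{0}\) coexisting with a semicircular continuum of half-width \(2w\).

```python
import numpy as np

N, J0, w = 2000, 1.0, 0.3          # J0 > w, as required in the text
rng = np.random.default_rng(7)

J = np.full((N, N), J0 / N)        # flat (alpha = 0) couplings
U = np.triu(rng.normal(0.0, w / np.sqrt(N), (N, N)), 1)
J_dis = J + U + U.T                # symmetric disorder with variance w^2/N, Eq. (90)

ev = np.sort(np.linalg.eigvalsh(J_dis))
print(f"isolated top eigenvalue: {ev[-1]:.3f}  (close to J0 = {J0})")
print(f"continuum edges        : [{ev[0]:.3f}, {ev[-2]:.3f}]  (~ semicircle of radius 2w = {2 * w})")
```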
The analysis presented here focuses on characterizing the long-time equilibration dynamics of many-body quantum systems, where the thermodynamic limit is generally taken before the long-time limit. However, similar conclusions can be obtained by considering the long-time limit of dynamical fluctuations in finite systems, which yields [170; 171; 172]: \[\lim_{T\to\infty}Q_{K}(T)\propto\frac{1}{d_{\rm eff}} \tag{92}\] Here, \(d_{\rm eff}\) roughly represents the number of modes participating in the dynamics. In the case of finite systems, where the entire spectrum is discrete and only a finite number of modes exist, \(d_{\rm eff}\) captures this finite nature. As the thermodynamic limit is approached and the spectrum becomes continuous, the eigenvalues become dense in arbitrarily small energy ranges. In the continuous limit, \(d_{\rm eff}\to\infty\) for dynamics involving initial states in the continuous spectrum. However, for long-range systems with \(\alpha<d\), where \(d\) is the dimension of the system, a continuous theory cannot be defined. This is because the only dense region in the spectrum occurs around the energy maximum, where infinitely many degenerate eigenvalues emerge. This violates the assumption of non-degenerate energy gaps underlying Eq. (92) [178].

### Kibble-Zurek mechanism

Within the realm of quantum systems, the Landau-Zener problem provides the earliest, and possibly simplest, example of defect formation during a quasi-static drive [179; 81; 180]. The problem describes a two-level system slowly driven across an avoided level crossing. Although initialized in the ground state at the initial time \(t_{i}<0\), the system gradually populates the excited state, whose energy separation slowly decreases during the dynamics. The energy gap attains its minimum at \(t=0\) and increases again until the time reaches the endpoint of the dynamics \(t_{f}>0\). The dynamical evolution is controlled by the rate parameter \(\delta\), i.e. \(H(t)\equiv H(\delta\cdot t)\), so that the quasi-static limit is reached as \(\delta\to 0\). A straightforward criterion to establish whether a quasi-static transformation remains adiabatic is to ensure that the rate of change of the instantaneous minimal gap \(\Delta=|E_{0}-E_{1}|\) remains smaller than the square gap itself, \[\dot{\Delta}(t)\ll\Delta^{2}(t). \tag{93}\] The latter criterion only involves equilibrium quantities, since \(E_{\ell}(t)\) represents the spectrum of the instantaneous Hamiltonian at the time \(t\). While Eq. (93) has been obtained by heuristic arguments, a more rigorous derivation of an adiabatic criterion and a discussion of how it compares with Eq. (93) can be found in Ref. [181]. Eq. (93) has been introduced for the Landau-Zener problem, but the argument in Ref. [181] applies to generic quantum systems. In general, as long as Eq. (93) is satisfied, the excited-state population of a quasi-statically driven quantum system decreases with the drive rate and the hypotheses of the quantum adiabatic theorem are satisfied [182]. However, as the drive approaches a quantum critical point, the correlation length of the quantum system diverges and the instantaneous gap vanishes, \(\Delta\to 0\). As a result, the dynamical scaling of the observables close to the quantum phase transition is reminiscent of the thermodynamic scaling at equilibrium. Yet, in order for such scaling to be displayed, the drive has to be slow enough that the dynamical evolution actually occurs in the vicinity of the equilibrium critical point. Let us focus on the concrete case of the Hamiltonian in Eq. (1), whose internal control parameter is defined as \(\lambda=h(t)-h_{\rm cr}\), such that the ferromagnetic quantum critical point occurs at \(\lambda_{c}=0\). For a moment, let us imagine that the system is finite, so that the spectrum remains gapped also at criticality.
Then, the hypotheses of the quantum adiabatic theorem [182] remain fulfilled, and any slow enough drive of internal parameters \(\lambda(t)\sim\delta t\) only generates adiabatic corrections \(\sim\delta^{2}\) to the observable expectations with respect to the equilibrium values, as can be deduced by simple thermodynamic arguments [183]. However, in the thermodynamic limit, crossing the equilibrium critical point breaks down the conventional adiabatic picture, and the residual energy (heat) generated by the drive displays non-analytic behaviour \(E_{\rm res}\approx\delta^{\theta}\) with \(\theta<2\) [184]. Our task within the present section is the determination of the universal scaling index \(\theta\) for quantum long-range systems.

#### 3.3.1 State of the art

The Kibble-Zurek mechanism allows to _relate the value of the non-analytic exponent \(\theta\) with the equilibrium critical exponents of the model_. This ingenious theory relies on the adiabatic-impulse approximation, which assumes that the dynamical evolution of a system starting in its ground state at \(t=-\infty\) adiabatically follows the drive until the freezing time \(-\hat{t}\). Beyond the freezing time, the equilibration rate of the system becomes too small with respect to the drive velocity, violating the adiabatic condition in Eq. (93). Then, the freezing time satisfies the condition \(\dot{\Delta}(\hat{t})/\Delta(\hat{t})^{2}=1\). Due to the critical scaling of the instantaneous gap, \(\Delta\propto\lambda^{z\nu}\), the latter relation leads to the freezing time inheriting the equilibrium critical scaling \(\hat{t}\propto\delta^{-z\nu/(1+z\nu)}\). For \(t>-\hat{t}\), the state dynamics is assumed to remain frozen for the entire interval \(t\in[-\hat{t},\hat{t}]\) until the unfreezing time \(\hat{t}\), where adiabaticity is restored (for simplicity we have assumed a symmetric gap). Once the system has unfrozen, the state evolution will resume on the opposite side of the transition, where the Hamiltonian ground state is supposed to break the Hamiltonian symmetry. Then, the dynamics will induce a transition between the symmetric and a symmetry-broken state. However, this transition will occur at finite correlation length \(\hat{\xi}\), since the process can only start at the time \(t\geq\hat{t}\), well within the symmetric phase of the model. The dynamics has thus modified the character of the continuous phase transition, making it rather similar to a first-order one, and the system will likely form topological defects, whose size would be roughly proportional to the (finite) correlation volume \(\hat{\xi}^{d}\). Therefore, the total defect density scales according to \(N_{\rm exc}\propto\hat{\xi}^{-d}\propto\delta^{d\nu/(1+z\nu)}\) [185]. Several verifications of the Kibble-Zurek scaling exist in local systems, via numerical simulations, exact theoretical studies, and experiments [186]. In particular, first studies of defect formation in quantum systems have been pursued on the Hamiltonian in Eq. (1) in the \(\alpha\to\infty\) limit, i.e. the nearest-neighbour Ising model, where finite-size scaling arguments led to the prediction \[N_{\rm exc}^{\rm fss}\approx\delta^{\frac{1}{2z}} \tag{94}\] where the superscript fss stands for finite-size scaling. Eq. (94) produces \(N_{\rm exc}\approx\sqrt{\delta}\), in agreement with the Kibble-Zurek prediction \(N_{\rm exc}\propto\delta^{d\nu/(1+z\nu)}\), since \(z=\nu=1\) in this case [187].
Soon after this seminal investigation, an exact solution to the universal slow dynamics of the Ising model has been provided by mapping it to an infinite sum of Landau-Zener problems, each representing the dynamics of a single fermionic quasi-particle [188]. This exact solution provides a different scaling theory for the defect density of the Ising model, which is given by \[\int N_{\text{exc}}(k)\,dk\approx\delta^{\frac{1}{2z_{\Delta}}} \tag{95}\] where we have defined \(z_{\Delta}\) from the scaling of the dynamical gap. The result in Eq. (95) has also been employed to prove the validity of the Kibble-Zurek argument in Kitaev chains with long-range pairing terms [189], where \(z_{\Delta}=z\), as well as in the perfectly local case \(\alpha=\infty\) [188]. Apart from the aforementioned results, which explicitly refer to quadratic Fermi systems, the application of adiabatic perturbation theory to slow quenches close to quantum critical points predicts the scaling of the defect density to be in agreement with the classical Kibble-Zurek prediction \[\int N_{\text{exc}}^{\text{KZ}}(k)\,dk\approx\delta^{\frac{d\nu}{1+z\nu}}, \tag{96}\] which also leads to the scaling exponent \(\theta=z\nu/(1+z\nu)\) for the residual energy [190]. This prediction comes from the assumption that the scaling form of the critical propagator reproduces the equilibrium critical exponents. Since for \(1d\) Fermi systems one has \(z\nu=1\), the perturbative argument yields \(d\nu/(z\nu+1)=1/2z\), in agreement with the finite-size scaling argument in Eq. (94). Interestingly, long-range anisotropic interactions with different ranges depending on the type of coupling are known to violate the perturbative assumption [191], even in the finite-range case [192; 185; 193]. In summary, the applicability of the Kibble-Zurek result in quantum systems is supported by two main arguments: the finite-size scaling argument reported in Eq. (94) and the perturbative argument, which reproduces the traditional Kibble-Zurek scaling in Eq. (96). Both arguments coincide for local quantum many-body systems, where the fermionic quasi-particle description applies. This is the case for the Ising Hamiltonian in Eq. (1) with \(\alpha\gg d+3\), as confirmed by the exact solution obtained at \(\alpha=\infty\). First indications that the scaling of the defect density in the \(\alpha=0\) Ising model did not follow the Kibble-Zurek prediction appeared in Ref. [194]. However, later investigations found the dynamics to obey the Kibble-Zurek mechanism, at least for slow ramps terminating at the critical point, i.e. \(t\in[-1/\delta,0]\) [106]. These apparent inconsistencies triggered more intensive numerical studies, which unveiled a complicated landscape where the adiabatic crossing of the equilibrium quantum critical point does not display any scaling with the ramp rate \(\delta\), but rather features a novel form of dynamical universality as a function of the scaled variable \(\Lambda=N\,\delta\) [105].

**Summary**: In locally-interacting systems the Kibble-Zurek mechanism is supported by finite-size scaling and perturbative arguments, which however break down in the presence of long-range interactions.

#### 3.3.2 Quasi-static dynamics for \(\alpha=0\)

The mosaic can be easily decomposed by studying the slow-drive dynamics within the linear spin-wave theory in Eq. (46). This strategy coincides with the one employed in Sec.
3.2, but with two important differences: first, since we are considering a quasi-static drive, the dynamical evolution remains close to the instantaneous equilibrium state, and the only relevant source of deviations from adiabaticity originates from the lowest-energy mode. Therefore, we can safely limit ourselves to the case \(\alpha=0\), where just a single spin-wave exists. Secondly, and more importantly, we are going to consider a time-dependent magnetic field of the form \(h(t)=h_{\rm cr}+\delta\,t\) for \(t\in[-h_{\rm cr}/\delta,h_{\rm cr}/\delta]\), so that the dynamics initiates in the ferromagnetic state before crossing the critical point. A full treatment of the ferromagnetic state dynamics shall also include the motion of the classical magnetization, whose coupling with the quantum modes is suppressed by a factor \(1/N\) in the thermodynamic limit. In the following, we are going to discard the contribution of the classical mode to the dynamics, since a classical variable in a bounded (singular) potential generates a correction which scales at most as \(\sim\delta^{2}\) and is, therefore, negligible with respect to the contribution of the quantum mode [81]. Within the aforementioned assumptions, the quasi-static dynamics of the \(\alpha=0\) Ising model reduces to the evolution of a single spin-wave. As for the sudden quench case, see Sec. 3.2, the dynamics initialized in the ground state remains in a squeezed state at all times [195, 196, 197]. Then, the dynamics only generates two-particle states, as follows from Eq. (86). We focus on a cyclic transformation where the system is initially in the ground state of the equilibrium Hamiltonian; thus, it is convenient to rewrite the single spin-wave state as \[\psi_{0}(x,t)=\left(\frac{1}{2\pi\xi^{2}(t)}\right)^{\frac{1}{4}}e^{-W(t)\frac{x^{2}}{2}}e^{-i\frac{\varphi(t)}{2}}, \tag{97}\] with the effective time-dependent frequency \(W(t)=-i\frac{\dot{\xi}(t)}{\xi(t)}+\frac{1}{2\xi^{2}(t)}\) and the dynamically irrelevant phase \(\varphi(t)=\int^{t}\frac{dt^{\prime}}{2\xi^{2}(t^{\prime})}\). Thus, even in the linear-ramp case the entire dynamics is described by the differential Eq. (88). In order to determine the excitation density and the ground-state fidelity with respect to the instantaneous equilibrium solution of the problem, we define the adiabatic basis \(\psi_{n}^{\rm ad}(x,t)\), which is obtained by taking the equilibrium spin-wave eigenstates and replacing the constant frequency with the time-dependent one [169]. Accordingly, one can expand the exact time-dependent state in terms of the adiabatic basis, \(\psi(x,t)=\sum c_{n}(t)\psi_{n}^{\rm ad}(x,t)\), leading to the following results

Figure 9: **Defect formation in the \(\alpha=0\) Ising model during a slow quench.** Panel (a): Residual energy as a function of the drive rate \(\delta\) for different values of the final gap, for a slow dynamics terminating exactly at the critical point.
Panel (b): Residual energy after a full ramp across the quantum critical point for two different system sizes and three different values of the universal scaling variable \(\Lambda=N\delta=15,3.75,0.94\) from top to bottom.

for the excitation density \[N_{\rm exc}(t)=\langle\hat{n}\rangle=\sum_{n\in 2\mathbb{N}}n\,|c_{n}|^{2}=\frac{\xi(t)^{2}}{2\omega(t)}\,\left[\left(\frac{1}{2\xi(t)^{2}}-\omega(t)\right)^{2}+\left(\frac{\dot{\xi}(t)}{\xi(t)}\right)^{2}\right], \tag{98}\] and the adiabatic ground-state fidelity \[f(t)=|c_{0}|^{2}=\frac{1}{\xi(t)}\,\sqrt{\frac{2\omega(t)}{\left(\frac{1}{2\xi(t)^{2}}+\omega(t)\right)^{2}+\left(\frac{\dot{\xi}(t)}{\xi(t)}\right)^{2}}}. \tag{99}\] Interestingly, one can relate the former expressions to the squeezing parameter in Eq. (87) by the simple relation \(\tanh(r)=\sqrt{N_{\rm exc}(t)f(t)^{2}}\). An analytic solution can be found for a linear ramp across the quantum critical point, with the resonant spin-wave having the dynamical frequency \[\omega(t)^{2}=4h(t)(h(t)-2J_{0})\approx 8\delta|t| \tag{100}\] where the last expression on the r.h.s. has been obtained by substituting \(h(t)=h_{\rm cr}-\delta t\) and expanding for small \(\delta t\). The linear scaling of \(\omega(t)^{2}\) is the consequence of the gap scaling \(z\nu=1/2\) of the equilibrium problem, see Eq. (47). Eq. (100) represents the perfect crossing of the quantum critical point, since the instantaneous spin-wave frequency exactly vanishes at \(t=0\). In order to effectively incorporate finite-size effects, we shall introduce a small deviation from perfect degeneracy and rewrite Eq. (100) as \[\omega(t)^{2}=8\delta|t|+\Delta_{N}^{2}, \tag{101}\] where \(\Delta_{N}\) is the minimal gap of the finite-size system. Obviously, \(\lim_{N\to\infty}\Delta_{N}\to 0\) and the system attains perfect criticality in the thermodynamic limit. Moreover, due to universality, the minimal gap displays power-law scaling of the form \(\Delta_{N}^{2}\approx N^{-1/\nu_{*}}\) with \(\nu_{*}=3/2\), as predicted by finite-size scaling theory [198] and confirmed by exact studies on the fully-connected quantum Ising model and related flat interacting models [84, 199, 200, 201]. The model in Eq. (101) describes a cyclic transformation of the single Hamiltonian mode and, in the limit \(\delta\to 0\), it can be used to describe a quasi-static cycle in quantum systems with infinitely degenerate spectrum. According to the behaviour of the fidelity and excitation density in the quasi-static limit \(\delta\to 0\), the system presents three regimes:

1. Perturbative regime (\(N<\infty\)).
2. Kibble-Zurek regime (\(N\to\infty\) and \(t\in[-h_{\rm cr}/\delta,0]\)).
3. Non-adiabatic regime (\(N\to\infty\) and \(t\in[-h_{\rm cr}/\delta,h_{\rm cr}/\delta]\)).

Regime (1) occurs for a finite minimal gap \(\Delta_{N}>0\): there, adiabatic perturbation theory is applicable and the dynamics remains adiabatic, i.e. \(N_{\rm exc}\propto\delta^{2}\) [202]. Regime (2) is realised for a thermodynamic system (\(\Delta_{N}\to 0\)) whose dynamics terminates exactly at the quantum critical point \(t=\Delta_{\infty}=0\), where non-analytic corrections of the form \(\delta^{\theta}\) appear in the residual energy. As we will see in the following, this regime is properly described by the Kibble-Zurek argument. An actual crossing of the quantum critical point only occurs in regime (3), where the system enters the non-adiabatic regime and the residual energy and the fidelity acquire dynamical corrections which do not depend on the drive rate.
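The three regimes can be explored by direct numerical integration of the Ermakov Eq. (88) with the ramp frequency of Eq. (101). The following is a minimal sketch under stated assumptions: the ramp window is set to \(t\in[-1/\delta,1/\delta]\) in units where \(h_{\rm cr}=1\), and the adiabatic initial data follow from the static solution \(\xi^{2}=1/(2\omega)\) of Eq. (88).

```python
import numpy as np

def ramp(delta, Delta_N, half=False):
    """Ermakov Eq. (88) with omega^2(t) = 8*delta*|t| + Delta_N^2, Eq. (101).
    Returns (N_exc, f) of Eqs. (98)-(99) at the end of the protocol."""
    T = 1.0 / delta                                  # ramp window t in [-T, T]
    w2 = lambda t: 8.0 * delta * abs(t) + Delta_N ** 2
    a = lambda t, x: 1.0 / (4.0 * x ** 3) - w2(t) * x
    t, dt = -T, 1.0e-3
    w = np.sqrt(w2(t))
    x = 1.0 / np.sqrt(2.0 * w)                       # adiabatic initial data
    v = 4.0 * delta / (w * (2.0 * w) ** 1.5)         # d/dt of (2w)^(-1/2) at t < 0
    t_end = 0.0 if half else T                       # half ramp stops at criticality
    while t < t_end:                                 # plain RK4 in time
        k1x, k1v = v, a(t, x)
        k2x, k2v = v + .5*dt*k1v, a(t + .5*dt, x + .5*dt*k1x)
        k3x, k3v = v + .5*dt*k2v, a(t + .5*dt, x + .5*dt*k2x)
        k4x, k4v = v + dt*k3v, a(t + dt, x + dt*k3x)
        x += dt * (k1x + 2*k2x + 2*k3x + k4x) / 6
        v += dt * (k1v + 2*k2v + 2*k3v + k4v) / 6
        t += dt
    w = np.sqrt(w2(t))
    N_exc = x**2 / (2*w) * ((1/(2*x**2) - w)**2 + (v/x)**2)   # Eq. (98)
    f = (1/x) * np.sqrt(2*w / ((1/(2*x**2) + w)**2 + (v/x)**2))  # Eq. (99)
    return N_exc, f

for delta in (0.08, 0.04, 0.02):
    print(f"delta={delta}: full ramp, Delta_N=0 ->", ramp(delta, 0.0))
# As delta -> 0 the full-ramp values should approach, up to residual oscillations,
# the rate-independent plateau N_exc = 1/3, f = sqrt(3)/2 of regime (3)
# [Eqs. (108)-(109) below]; a finite Delta_N restores regime (1).
```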
Analytically, the latter result can be shown by rephrasing Eq. (88) in a rate-independent form via the transformations \[t=\delta^{-\frac{1}{3}}\tilde{t},\quad\xi=\delta^{-\frac{1}{6}}\tilde{\xi} \tag{102}\] which reduce Eq. (88) to the \(\delta=1\) case. The expressions in Eqs. (98) and (99) are invariant under the transformations in Eq. (102), in such a way that the fidelity and excitation density at real times can be obtained from the solution of the rescaled problem, \(\tilde{\xi}_{\tilde{\Delta}}(\tilde{t})=\lim_{\delta\to 1}\xi_{\Delta}(t)\). The subscript \(\Delta\) has been introduced to make explicit the dependence of the solution of Eq. (88) on \(\Delta\). In the new variables, the only dependence of the dynamics on the rate \(\delta\) remains in \(\tilde{\Delta}_{N}=\Delta_{N}/\delta^{1/3}\). While the transformations (102) have been reported for the case of a linear quench, they can be easily generalized to any non-linear drive \(\lambda(t)=(\delta|t|)^{\tau}\), obtaining results analogous to the ones described in the present section [54]. Thus, the invariance of Eq. (88) with respect to the rescaling in Eq. (102) is enough to demonstrate that the dynamical evolution of the system only depends on the combined variable \(\Lambda=\delta N=\tilde{\Delta}_{N}^{-3}\), as first evidenced by the numerical study in Ref. [105].

**Summary**: For a quasi-static drive terminating at the critical point Kibble-Zurek scaling is observed, while for dynamical protocols crossing the critical point the amount of defects is independent of the quench rate.

#### 3.3.3 Adiabaticity breaking

However, in order to provide estimates for the defect density and fidelity in the quasi-static limit, one has to solve Eq. (88) exactly. In the following we are going to drop all the \(\sim\) superscripts over the rescaled variables, in order to ease the notation. The crucial condition of adiabatic dynamics is for the system to start in the ground state at the beginning of the dynamics, i.e. \(\lim_{t\to-\infty}\xi(t)=1/\sqrt{2\omega(t)}\), which is the static (adiabatic) solution of Eq. (88). The explicit solution of Eq. (88) with this boundary condition is detailed in Appendix D.3: in regime (1) it reproduces the adiabatic response \(N_{\rm exc}\propto\delta^{2}\), while in regime (2) the excitation density follows the Kibble-Zurek scaling, see Fig. 10a. Regime (3) is obtained by considering directly the thermodynamic-limit case \(\Delta_{N}=0\) and taking the dynamics in the \(t=h_{\rm cr}/\delta\approx\infty\) limit, which yields the \(\delta\)-independent results \[\lim_{t\to\infty}N_{\rm exc}(t)=\frac{1}{3} \tag{108}\] \[\lim_{t\to\infty}f(t)=\frac{\sqrt{3}}{2}\, \tag{109}\] which characterise the non-adiabatic dynamics, as they remain finite in the \(\delta\to 0\) limit. The analytical results in Eqs. (108) and (109) are universal in the traditional sense of the Kibble-Zurek mechanism. So, they faithfully reproduce the slow-drive limit \(\delta\to 0\) of any dynamical protocol which crosses the critical point.
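For completeness, the exponents in Eq. (102) can be checked directly. Substituting \(t=\delta^{-1/3}\tilde{t}\) and \(\xi=\delta^{-1/6}\tilde{\xi}\) into Eq. (88) with the frequency of Eq. (101) yields \[\delta^{1/2}\,\frac{d^{2}\tilde{\xi}}{d\tilde{t}^{2}}+\left(8\,\delta^{2/3}|\tilde{t}|+\Delta_{N}^{2}\right)\delta^{-1/6}\,\tilde{\xi}=\delta^{1/2}\,\frac{1}{4\tilde{\xi}^{3}}\,,\] and, dividing by \(\delta^{1/2}\), \[\frac{d^{2}\tilde{\xi}}{d\tilde{t}^{2}}+\left(8|\tilde{t}|+\tilde{\Delta}_{N}^{2}\right)\tilde{\xi}=\frac{1}{4\tilde{\xi}^{3}}\,,\qquad\tilde{\Delta}_{N}=\frac{\Delta_{N}}{\delta^{1/3}}\,,\] i.e. the \(\delta=1\) problem quoted above. Combined with the finite-size gap scaling \(\Delta_{N}^{2}\approx N^{-2/3}\) (\(\nu_{*}=3/2\)), this gives \(\tilde{\Delta}_{N}^{-3}=\delta\,\Delta_{N}^{-3}\propto\delta N=\Lambda\), consistent with the universal scaling variable.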
The universality phenomenon is analysed in detail in Ref. [53]. On a less accurate, but perhaps more quantitative level, it can be numerically verified that the analytic solution in Appendix D.3 accurately describes any drive \(\omega^{\prime}(t)\) such that \(|\omega^{\prime}(\hat{t})-\omega(\hat{t})|^{2}\ll 1\), where \(\omega(t)\) is given in Eq. (101) [54]. It is worth noting that the non-adiabatic regime described by Eq. (109) and Eq. (108) is profoundly different from the one described in Ref. [203] for low-dimensional systems. There, the spin-wave description is applied to the case \(\alpha\to\infty\). Then, differently from our case, one has to integrate over the continuous spin-wave spectrum, and non-adiabaticity may arise also for a quench to the critical point, due to the flat density of states of 2d systems. In summary, dynamical corrections to a quasi-static drive solely depend on the universal variable \(\Lambda=N\delta\). Indeed, since the instantaneous minimal gap of a finite system scales as \(\tilde{\Delta}_{N}\propto\Lambda^{-1/3}\), it follows that the thermodynamic limit (\(N\to\infty\)) and the adiabatic one (\(\delta\to 0\)) do not commute. Accordingly, the observable expectations are universal when displayed as a function of the universal variable \(\Lambda\), see Fig. 10b. This is in perfect agreement with the numerical findings of Refs. [53; 105].

Figure 10: **Kibble-Zurek mechanism in the fully-connected model.** Panel **(a)** shows the residual energy as a function of the ramp speed in the case of a half ramp \(t\in[-h_{\rm cr}/\delta,0]\) for \(\Lambda=\{10^{9},3\cdot 10^{7},10^{6},3\cdot 10^{4},10^{3},3\cdot 10,1\}\) from top to bottom. The crossover between the Kibble-Zurek scaling (black dashed line) at large \(\Lambda\) and the analytic scaling (gray dashed line) at small \(\Lambda\) is evident. Panel **(b)** displays the residual energy after a quasi-static drive obtained by spin-wave theory. The result is obtained within regime (3) and perfectly reproduces the slow-drive universality numerically found in Ref. [105]. Each colour represents a different value of \(\Lambda=N\delta\), with \(N=2^{9}\) and \(N=2^{11}\) (dashed and solid lines), i.e. the same values displayed for the exact numerical study of Ref. [105]. The curves at different sizes perfectly collapse when drawn as a function of the scaling variables. Moreover, the agreement between the spin-wave theory and the numerical study for the different values of \(\Lambda\) is rather remarkable.

#### 3.3.4 Full counting statistics of defects

Recently, interest has grown around the universality of the higher cumulants of the defect statistics following a quasi-static ramp. In general, the process of defect formation in finite local systems has been argued to follow a binomial distribution [204], making the process of defect formation across a conventional quantum phase transition akin to the classical process of a coin toss [205]. Approaching the thermodynamic limit, the probability to generate \(n\) defects becomes normal and reads \[P_{\rm local}(n)\approx\frac{1}{\sqrt{2\pi(1-p)\langle n\rangle}}\exp\left(-\frac{(n-\langle n\rangle)^{2}}{2(1-p)\langle n\rangle}\right)\, \tag{110}\] where the average number of defects follows the Kibble-Zurek scaling \(\langle n\rangle\propto\delta^{\frac{d\nu}{1+z\nu}}\) and \(p\) is the probability for the formation of a single defect.
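The coin-toss picture behind Eq. (110) is easily checked numerically: for binomially distributed defects the variance equals \((1-p)\langle n\rangle\), matching the Gaussian of Eq. (110) at large \(\langle n\rangle\). The parameters in the sketch below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
M, p = 2000, 0.15                    # M independent "tosses", defect probability p
n_avg = M * p                        # plays the role of <n> in Eq. (110)

samples = rng.binomial(M, p, size=200_000)
print("sample mean / <n>         :", samples.mean() / n_avg)
print("sample variance / (1-p)<n>:", samples.var() / ((1.0 - p) * n_avg))
# Both ratios should be ~1, confirming the normal limit quoted in Eq. (110).
```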
The above theory can be exactly verified in the nearest-neighbour transverse-field Ising model, whose full counting statistics can be calculated exactly. While for a finite quench rate \(\delta\) all moments of the distribution remain finite, in the slow-drive limit one recovers Eq. (110) with \((1-p)=3/\pi^{2}\) and \(\langle n\rangle=\frac{N}{2\pi}\sqrt{\frac{\delta}{2J_{0}}}\). These findings have also been demonstrated on different quantum computing platforms [206; 207]. The phenomenology of local systems is certainly rich, but does not present any peculiar features due to quantum fluctuations. Indeed, the same theoretical framework can be applied to describe the statistics of the defects generated across a classical and a quantum phase transition [204]. As we have already argued in the previous section, long-range interactions radically alter this picture, as they suppress the defect contribution arising from semi-classical critical dynamics and promote the single quantum mode as the leading source of non-adiabatic corrections. Interestingly, using the same methodology employed to derive Eq. (108), one can obtain the full counting statistics of the defects. Indeed, the probability to generate \(n\) defects in our problem is just given by the \(|c_{n}|^{2}\) coefficient in Eq. (98) [208]. Some simple manipulations of the \(|c_{n}|^{2}\) coefficients yield \[P_{\rm LR}(n)\approx\binom{n+k-1}{n}\text{sech}(r)^{2k}\tanh(r)^{n}, \tag{111}\] which is a negative binomial distribution. The parameter \(\tanh(r)\) is the squeezing parameter, see Eq. (87), of the single spin-wave of the system. In light of Eq. (111), the parallel between the local and long-range full counting statistics becomes rather striking. The negative binomial distribution describes the probability to obtain \(n\) failures before a given (non-deterministic) number \(k\) of successes occurs. So, the probability for a defect (actually a pair, since \(n\in 2\mathbb{Z}\)) to arise in the quantum long-range system coincides with the probability to observe \(n/2\) failures before the \(k\)-th success. However, \(k=1/2\) in the present problem. Negative binomials of fractional order are often dubbed Pólya distributions; they do not have any equivalent in classical processes, but they naturally emerge in the defect formation of quantum long-range systems due to the quantum nature of the problem. This has to be contrasted with the case of local critical theories, where defect formation is a purely classical process [205]. The implications of these findings for the quantum thermodynamics of the systems are discussed in Refs. [208; 209].

### Dynamical quantum phase transitions - Loschmidt echo

Up to this point, our discussion focused on the most traditional examples of dynamical critical phenomena, but, recently, experimental advancements in quantum simulations with cold atoms [22; 210; 211; 212; 213; 214; 215] and trapped ions [45] have raised the interest in novel forms of dynamical criticality [11]. This is the case of dynamical phase transitions. On one side, the name referred to the study of the out-of-equilibrium behavior of order parameters [216; 217; 218; 219; 220; 221], which we refer to as _dynamical phase transitions (DPT) in the order parameter_ and which will be discussed in detail in Section 4.1.2. On the other, a novel form of dynamical criticality was discussed, where _nonanalytic cusps in the Loschmidt echo_ rate function appear after a quantum quench [222; 223; 55].
We refer to the latter here as _dynamical quantum phase transitions (DQPT) in the Loschmidt echo_. During a quench, the system, initially prepared in a state \(\ket{\Psi_{0}}\), is evolved with a time-independent final Hamiltonian \(H\), i.e. \(\ket{\Psi(t)}=\exp(-\mathrm{i}Ht)\ket{\Psi_{0}}\). The Loschmidt amplitude describes the amplitude for the system to return to its initial state at time \(t\) and reads \[\mathcal{G}(t)=\bra{\Psi_{0}}\Psi(t)\rangle=\bra{\Psi_{0}}\exp(-\mathrm{i}Ht)\ket{\Psi_{0}} \tag{112}\] whose expression closely resembles the classical finite-temperature partition function \(Z(\beta)=\mathrm{Tr}\exp(-\beta H)\). The Loschmidt echo is simply obtained by squaring the amplitude in Eq. (112), yielding \[\mathcal{L}(t)=|\mathcal{G}(t)|^{2}. \tag{113}\] The Loschmidt amplitude and Loschmidt echo play central roles in the theory of DQPTs and appear in various contexts in quantum many-body theory [224; 225; 226; 227; 228; 229]. They exhibit a functional dependence on the system size \(N\), and in the limit of large \(N\) they can be described by rate functions that capture their scaling behavior [222; 230]. DQPTs are defined as nonanalytic behaviors of the Loschmidt amplitude or Loschmidt echo as a function of time. They can be considered as phase transitions in time, analogous to equilibrium phase transitions associated with nonanalytic structures of the free energy [222; 231]. A DQPT is characterized by a sudden qualitative change in the dynamics of the system, typically accompanied by a kink or nonanalyticity in the rate function of the Loschmidt amplitude or Loschmidt echo. This nonanalytic behavior can vary depending on the system and dimensionality, including power-law singularities, logarithmic singularities, and other forms [222; 232; 233]. Theoretical evidence of DQPTs in the Loschmidt echo return rate was found in numerous quantum systems [222; 223; 234; 235; 236; 237; 238; 239; 240] and connected with the behaviour of different local observables [241], including different definitions of the order parameter [242; 243].
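A minimal exact-diagonalization sketch of Eqs. (112)-(113) for a small long-range Ising chain after a quench of the transverse field (all parameters are illustrative, and the couplings are not Kac-normalized):

```python
import numpy as np

N, alpha, J0, h_i, h_f = 8, 0.5, 1.0, 5.0, 0.2
sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])

def op(single, site):
    out = np.array([[1.0]])
    for s in range(N):
        out = np.kron(out, single if s == site else np.eye(2))
    return out

SX = [op(sx, i) for i in range(N)]
SZ = [op(sz, i) for i in range(N)]

def H(h):
    """Long-range Ising chain with power-law couplings J0 / |i-j|^alpha."""
    Hm = -h * sum(SZ)
    for i in range(N):
        for j in range(i + 1, N):
            Hm = Hm - (J0 / (j - i) ** alpha) * (SX[i] @ SX[j])
    return Hm

_, v_i = np.linalg.eigh(H(h_i)); psi0 = v_i[:, 0]    # pre-quench ground state
e_f, v_f = np.linalg.eigh(H(h_f)); c = v_f.T @ psi0  # overlaps with final eigenbasis

for t in np.linspace(0.0, 4.0, 9):
    G = np.sum(c ** 2 * np.exp(-1j * e_f * t))       # Loschmidt amplitude, Eq. (112)
    rate = -np.log(abs(G) ** 2) / N                  # rate function of the echo (113)
    print(f"t = {t:4.2f}   r(t) = {rate:.4f}")
```

For such modest \(N\) the rate function is smooth; its peaks should sharpen with increasing \(N\) into the nonanalytic cusps discussed below.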
Given the many successful experimental realizations of this kind of DQPTs, especially in trapped-ion systems with long-range interactions [244; 21], it is not surprising that long-range interacting models were also a privileged tool for the theoretical characterization of DQPTs [242; 243; 245; 246; 247; 248; 249; 250; 251; 252; 253; 254; 255].

#### 3.4.1 Spherical spin-wave theory

In the following, we are going to show how the emergence of DQPTs in the long-range Ising model can be captured by the study of the harmonic Hamiltonian in Eq. (46). However, most DQPTs occur when a parameter of the Hamiltonian is quenched across an underlying equilibrium phase transition, and the equilibrium spin-wave theory is not capable of capturing both sides of the transition within the same formalism. It is worth noting that there are cases where DQPTs can arise independently of conventional phase transitions [235; 236; 251; 252].
In order to capture a sudden quench across the dynamical critical point we are going to consider a slightly modified version of the linear theory Eq. (46), namely the quantum spherical model [253]. The idea consists in modifying the dispersion of the linear spin-waves as \[\omega_{k}=\sqrt{h(\mu+2J_{0}f_{k}(\alpha))}, \tag{114}\] where once again we have set \(\gamma=1\). While in Eq. (47) the excitation spectrum only depends on the magnetic field and the coupling, the novel dispersion relation contains an additional parameter \(\mu\). This has to be calculated self-consistently by imposing a constraint on the spin-wave potential energy, namely \[\sum_{k}\frac{\left\langle\left(\beta_{k}^{\dagger}+\beta_{k}\right)^{2}\right\rangle}{2h\omega_{k}}=\frac{N}{4}. \tag{115}\] The spherical constraint in Eq. (115) may induce a quantum critical point in the quadratic model at a field value \(h_{\mathrm{cr}}^{\mathrm{sph}}\), depending on the features of the function \(f_{k}(\alpha)\). Below the critical field strength \(h_{\mathrm{cr}}^{\mathrm{sph}}\), the constraint condition in Eq. (115) causes the spin-waves to form a condensate at \(k=0\), whose formation follows mean-field universality. In its classical version, the spherical model was introduced to mimic the finite-temperature free energy of \(O(n)\)-symmetric spin systems in the \(n\to\infty\) limit [254].

**Summary**: The spherical model corresponds to the low-energy long-range Hamiltonian equipped with an additional parameter, determined by a self-consistent equation. Its critical behavior belongs to the mean-field universality.

#### 3.4.2 Step approximation

The DQPT occurs in the spherical model following a sudden quench. Thus, we have to consider the average in Eq. (115) over a time-dependent state \(|\Psi(t)\rangle=\prod_{k}|\psi_{k,0}(t)\rangle\), where \(|\psi_{k,0}(t)\rangle\) is the single spin-wave state given by Eq. (207) with \(n=0\). The explicit calculation leads to the dynamical constraint equation \[\frac{h}{N}\sum_{k}\frac{\xi_{k}^{2}(t)}{2}=1/4, \tag{116}\] where \(\xi_{k}^{2}(t)\) is the solution of the Ermakov Eq. (88). Similarly to Sec. 3.1, we are going to consider a sudden quench of the ferromagnetic coupling \(J_{0}^{i}\to J_{0}^{f}\), but now the final coupling value \(J_{0}^{f}\) shall drive the system across the quantum phase boundary. In this case, the solution of the differential Eq. (88) together with the dynamical constraint in Eq. (116) is rather complicated and, to the best of our knowledge, has not been attempted yet. On the other hand, a convenient simplification consists in assuming that, as the ferromagnetic coupling \(J_{0}\) is quenched, the parameter \(\mu\) also undergoes a discontinuous jump between two constant values, \(\mu_{0}\) at \(t<0\) and \(\mu_{f}\) for \(t\geq 0\). This procedure goes under the name of _step approximation_ and has been introduced in Refs. [163; 167]. In order for this approximation to be sensible, one should choose the final value \(\mu_{f}\) so as to reproduce its expected long-time (equilibrated) value. As long as \(\alpha>d\) the system can be safely assumed to equilibrate at long times, as shown in Refs.
[109; 167]. Within the framework of the step approximation, the frequencies suddenly change from their initial values \(\omega_{k,i}\) to final values \(\omega_{k,f}\), leading to the following sudden-quench solution for the effective length of each spin-wave \[\xi_{k}(t)=\sqrt{\frac{1+\epsilon_{k}\sin^{2}(\omega_{k,f}t)}{2\omega_{k,i}}} \tag{117}\] with the quench parameter \(\epsilon_{k}=\left(\frac{\omega_{k,i}}{\omega_{k,f}}\right)^{2}-1\). In the thermodynamic limit \(N\to\infty\) the sum in Eq. (116) can be turned into an integral. Then, taking into account the explicit solution in Eq. (117), one obtains \[\int\!\frac{dk}{2\pi}\frac{h}{2\omega_{k,i}}\left[\frac{\epsilon_{k}}{2}(1-\cos 2\omega_{k,f}t)\right]=0. \tag{118}\] The latter equation cannot be fulfilled at all times due to the oscillatory term. Yet, in the limit \(t\to\infty\) the dephasing between the different modes washes away the time dependence in Eq. (118), making the solution in Eq. (117) exact also for the constraint problem, as long as the final value of \(\mu\) is chosen so as to satisfy the following expression \[\int\frac{dk}{2\pi}\frac{\epsilon_{k}}{\omega_{k,i}}=0. \tag{119}\] This implicit equation determines the long-time asymptotic value of \(\mu_{f}\) through the \(\mu\) dependence of \(\omega_{k,f}\). The consistency of the equilibration assumption and, overall, of the step approximation can be verified by inspection of the numerical solution of the exact problem, see Ref. [109]. Eq. (119) can be used to determine the dynamical critical coupling \(J_{0}^{\rm c,dyn}\) at which the dynamical excitations become gapless and the constraint parameter \(\mu_{f}\) in Eq. (119) approaches its critical value \(\mu_{c}\). Using these definitions, Eq. (119) can be rewritten as \[\frac{1}{2}=\sqrt{h}\int\frac{dk}{2\pi}\frac{\sqrt{2\mu_{0}+2J_{0}^{i}f_{k}(\alpha)}}{2\mu_{c}+2J_{0}^{\rm c,dyn}f_{k}(\alpha)}, \tag{120}\] where \(\mu_{c}\) is the equilibrium critical value. The existence of a finite value \(J_{0}^{\rm c,dyn}\) satisfying Eq. (120) depends both on the parameters \(h,\mu_{0}\) and on the value of \(\sigma\). The dynamical phase diagram of the model is reported in Ref. [109]. In the present section, for the sake of simplicity, we are going to assume that \(J_{0}^{i}\) lies above its equilibrium critical value, \(J_{0}^{i}>J_{0}^{\rm c}\), i.e. in the condensate phase, and consider the case \(\alpha<d+2\). Given the quadratic nature of the spherical model, the overlap function can be calculated analytically \[\mathcal{G}(t)=\prod_{k}\left\{(8\omega_{k,i})^{1/4}e^{-i\varphi_{k}(t)}\left(2\omega_{k,i}\xi_{k}(t)+\frac{1}{\xi_{k}(t)}-{\rm i}2\dot{\xi}_{k}(t)\right)^{-1/2}\right\}\,, \tag{121}\] where the inessential phase \(\varphi_{k}(t)\) is defined below Eq. (88). The Loschmidt echo rate function is obtained by taking the logarithm of the squared overlap, yielding \[r(t)=-\lim_{N\to\infty}\frac{1}{N}\log\lvert\mathcal{G}(t)\rvert^{2}=-\log 2+\int\frac{dk}{2\pi}\log\lvert X_{k}(t)\rvert, \tag{122}\] where \[X_{k}(t)=\frac{1}{\sqrt{8\omega_{k,i}}}\left(2\omega_{k,i}\xi_{k}(t)+\frac{1}{\xi_{k}(t)}-\mathrm{i}2\dot{\xi}_{k}(t)\right) \tag{123}\] As long as \(\omega_{k,i}\) is gapped, the expression in Eq. (123) remains smooth and no cusp appears at finite time for the rate function defined in Eq. (122). The non-analytic cusps characterizing DQPTs will only appear for a sudden quench from the broken phase, where \(\omega_{k,i}\) is gapless.
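The structure of Eqs. (117)-(123) is easy to explore numerically. The sketch below evaluates the rate function on a momentum grid; the dispersions \(\omega_{k,i}\) (gapless, mimicking the broken phase) and \(\omega_{k,f}\) (gapped), as well as all parameter values, are illustrative assumptions chosen only to reproduce the structure of the formulas above.

```python
import numpy as np

# Assumed dispersions in d = 1 with d < alpha < d + 2 (illustrative forms)
alpha, d, A, gap = 1.5, 1.0, 1.0, 1.0
k, dk = np.linspace(1e-4, np.pi, 20000, retstep=True)
w_i = np.sqrt(A * k ** (alpha - d))               # gapless pre-quench spectrum
w_f = np.sqrt(gap ** 2 + A * k ** (alpha - d))    # gapped post-quench spectrum
eps = (w_i / w_f) ** 2 - 1.0                      # quench parameter epsilon_k

def rate(t):
    """Rate function of Eq. (122) from Eqs. (117) and (123);
    the additive -log(2) follows Eq. (122) verbatim."""
    xi = np.sqrt((1.0 + eps * np.sin(w_f * t) ** 2) / (2.0 * w_i))   # Eq. (117)
    xidot = eps * w_f * np.sin(2.0 * w_f * t) / (2.0 * w_i) / (2.0 * xi)
    X = (2.0 * w_i * xi + 1.0 / xi - 2j * xidot) / np.sqrt(8.0 * w_i)  # Eq. (123)
    return -np.log(2.0) + np.sum(np.log(np.abs(X))) * dk / (2.0 * np.pi)

for t in np.linspace(0.1, 8.0, 12):
    print(f"t = {t:6.3f}   r(t) = {rate(t):+.5f}")
# Near the critical times of Eq. (124), set by the post-quench gap, the k -> 0
# region of the integrand produces the (logarithmically) singular behavior.
```

On such a finite grid the cusps are smoothed; refining the infrared discretization sharpens them, in line with the derivative-counting argument discussed next.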
The result above demonstrates how one can observe and characterize DQPTs by just analyzing the quasi-particle spectrum. Therefore, as already mentioned, we are going to consider a sudden quench of the coupling \(J_{0}^{\mathrm{i}}\to J_{0}^{\mathrm{f}}\), with the initial coupling within the ferromagnetic phase and the final one beyond the dynamical critical threshold \(J_{0}^{\rm c,dyn}\). Since the dynamics is initiated in the broken phase, a complete treatment of the problem should also include the evolution of the classical mode representing the condensate fraction of spin-waves, as was done in Ref. [238]. However, in the present description we have discarded this contribution, as it is not necessary to observe the DQPTs. The signature of the DQPT in the Loschmidt echo dynamics is reported in Fig. 11, where the rate function is shown for a quench from \(J_{0}^{i}=2J_{0}^{c}\) to \(J_{0}^{f}=J_{0}^{c}/2\) for different values of \(\sigma\). The rate function clearly shows non-analyticities at the critical times \[t_{m}^{*}=\frac{m\pi}{\omega_{k,\mathrm{f}}}\quad m\in\mathbb{N}, \tag{124}\] which appear due to logarithmic divergences in the integrand in Eq. (122). Since the critical time scale is set by the post-quench gap, we do not expect to see nonanalytic cusps in the Loschmidt echo for a quench into the gapless phase, as previously mentioned. Upon differentiating Eq. (122) with respect to time \(n\) times, we encounter terms proportional to \(1/\omega_{k,i}^{n}\) when \(t=t_{m}^{*}\). These terms diverge as \(k^{-n(\alpha-d)/2}\). However, since the integration over \(k\) still has to be performed, the \(n\)-th derivative of the rate function only diverges if \(n(\alpha-d)/2>1\), or equivalently, \(n(\alpha-d)>2\). Thus, the smaller \(\sigma\), the higher the order of the derivative at which the cusps appear. This analysis holds true throughout the entire region \(d<\alpha<d+2\). On the other hand, it does not apply when \(\alpha\geq d+2\), since there is no gapless phase to initiate the calculation. The discussion presented above offers further evidence that the emergence of nonanalytic cusps is not solely a feature of the step approximation. Instead, it is a consequence of the initial conditions, specifically starting in the gapless phase, as well as of the specific form of the function \(\xi_{k=0}(t)\) and its time derivatives, which remain unchanged in the exact calculation. Furthermore, the structure of these cusps remains unaltered even in the long-time limit, when \(\mu(t)\) reaches equilibrium and the step approximation becomes exact.

#### 3.4.3 Strong long-range regime

The aforementioned analysis cannot be extended to the regime \(\alpha<d\), as the system does not equilibrate and the assumptions at the root of the step approximation outlined in Sec. 3.4.2 fail. Moreover, due to the gapped nature of the spectrum, the condensate motion of the system at \(\alpha<d\) is more prominent and cannot be easily discarded. Several numerical simulations and analytical arguments have been used to show the existence of the DQPT in the Loschmidt echo also for the actual Ising Hamiltonian for different values of \(\alpha\) [234; 247; 248; 255]. In particular, extensive numerical studies have been devoted to investigating the connection between the DQPTs occurring in the Loschmidt echo and the DPTs defined via the dynamical scaling of the order parameter [242; 246], see also Section 4.1.2.
Also, the relation between the two different notions and the quasi-particle properties of the model has been extensively investigated, but mostly close to the local limit [243; 249; 250]. In the next section, we are going to introduce the dynamical Holstein-Primakoff transformation, which represents the proper formalism to describe the motion of spin-waves coupled to the classical order parameter and their feedback effects. However, we are only going to use it to describe DPTs in the order parameter, leaving aside further comments on singularities of the Loschmidt echo.

**Summary**: In the strong long-range regime, the leading effect in the Loschmidt echo DQPT comes from the "high-energy" dynamics of the order parameter, and it can be related to other forms of dynamical criticality.

Figure 11: **DQPT in the spherical model.** The Loschmidt echo rate function of the spin-wave theory (in the presence of the spherical constraint) after a sudden quench of the ferromagnetic coupling from \(J_{0}^{i}=2J_{0}^{c}\) to \(J_{0}^{f}=J_{0}^{c}/2\). The cusps in the rate function are clearly visible, and the second derivative diverges since the dynamics is considered for \(\sigma=1.5\); see the discussion in the text. Figure adapted from Ref. [109].

## 4 Dynamics in highly excited states

In this Section we will discuss the treatment of out-of-equilibrium dynamics involving arbitrarily high-energy initial states, as in standard quantum quench protocols. We will begin in Sec. 4.1 by reviewing mean-field dynamical phenomena for \(\alpha=0\). In Sec. 2.3 we showed that the fully-connected limit of quantum spin systems reduces to the physics of a single collective degree of freedom; here we will discuss how this statement applies to non-equilibrium dynamics as well. Secondly, in Sec. 4.2, we will discuss how finite-range interactions with \(\alpha>0\) affect mean-field dynamical phenomena. Long-range interacting systems can be formally viewed as a perturbation of the mean-field limit, as reviewed in Sec. 2.4. The perturbation term couples the collective degree of freedom to many spin-fluctuation modes with various wavelengths, resulting in a genuine many-body problem, which can be addressed via the non-equilibrium spin-wave formalism developed in Refs. [56; 57]. The coupling strength to mode \(\mathbf{k}\) strongly depends on the interaction range governed by the exponent \(\alpha\), as encoded by the function \(f_{\mathbf{k}}(\alpha)\). As a result of this tunable decoupling, long-range interactions give rise to anomalous non-equilibrium many-body phenomena. Collective spin ordering is remarkably resilient out of equilibrium, generating long pre-thermal stages of dynamics characterized by long-lived oscillating collective spin polarization. This behaviour was observed in numerical simulations performed with a range of techniques [125; 127; 242] and theoretically understood via the aforementioned approach [56; 57]. This analysis shows that the duration of the prethermal stage increases as \(\alpha\) is decreased, and diverges with the system size when \(\alpha<d\) [58; 256].

### Quench dynamics of fully-connected spin systems (\(\alpha=0\))

This Section is devoted to the non-equilibrium dynamics of fully connected spin systems. We study the time evolution starting from ground states \(\ket{\psi_{0}}\) of a pre-quench Hamiltonian \(\hat{H}(h_{0})\) evolving with a different post-quench Hamiltonian \(\hat{H}(h_{f})\). For the sake of definiteness we will mostly consider the Ising model, Eq.
(1) with \(\gamma=1\), and quenches in the transverse field from \(h_{0}\) to \(h_{f}\). As described in Section 2.3.2, ground states of long-range Hamiltonians such as Eq. (1) can be thought of as _coherent states_ pointing along some direction. When initialized in fully polarized states, their dynamical behavior is determined by a classical mean-field description emerging in the thermodynamic limit, as described in Section 4.1.1. The resulting dynamics of collective observables can give rise to new forms of dynamical criticality, such as dynamical phase transitions, discussed in Section 4.1.2. The semiclassical framework also allows us to describe the growth of quantum fluctuations, which coincides with the flow of linearized shifts around classical trajectories and is thus related to the standard quantifiers of classical chaos, reviewed in Section 4.1.3. When the quantum fluctuations become comparable to the typical length of the phase space, this description breaks down, defining an Ehrenfest time that diverges with \(N\) for this class of systems. Remarkably, such a semiclassical framework captures crucial aspects of quantum dynamics, such as the dynamics of scrambling and entanglement, as reviewed in Sections 4.1.4 and 4.1.5, respectively.

#### 4.1.1 Mean-field classical limit

The dynamics of a system with unbroken full permutational symmetry takes place in the totally-symmetric subspace (TSS) of the many-body Hilbert space, simultaneously invariant under all permutations. Such dynamics is amenable to an exact representation in terms of a few collective degrees of freedom, characterized by an effective Planck constant \(\hbar_{\mathrm{eff}}\sim 1/N\) suppressed with system size [219; 257]. We refer to Appendix B for a general discussion. For systems of interacting quantum spins the limiting semiclassical description may be formulated more directly and intuitively in terms of states with maximal collective spin \(S=Ns\) -- the so-called _Dicke manifold_. As discussed in Sec. 2.3.1, the collective spin approaches a classical limit for large \(N\).13 Footnote 13: For \(s=1/2\), the TSS coincides with the Dicke manifold. For larger \(s\) there are more permutationally invariant states with lower \(S\) (dim TSS \(\sim N^{2s}\) for large \(N\)). However, for Hamiltonians without spin self-interactions, one may always consider dynamics within the Dicke manifold. For the infinite-range XY Hamiltonian in Eq. (5) the classical limit \(\hat{H}_{\alpha=0}/N\to\mathcal{H}_{\mathrm{cl}}\) is given by Eq. (11) in Sec. 2.3.1, where we discussed equilibrium properties. In this Section, we will use the same approach to discuss _out-of-equilibrium_ properties. For definiteness, throughout this Section we will set \(\gamma=1\) (quantum Ising model). The non-equilibrium evolution \(\langle\hat{\vec{S}}(t)\rangle/N\) generated by a sudden change ("quench") of a Hamiltonian parameter is described by a classical trajectory \(\vec{\mathcal{S}}(t)\) on the unit sphere governed by \(\mathcal{H}_{\mathrm{cl}}\), i.e., \[\dot{\vec{\mathcal{S}}}=\left\{\vec{\mathcal{S}},\mathcal{H}_{\mathrm{cl}}\right\}\,, \tag{125}\] with the canonical Poisson brackets \(\{\mathcal{S}^{\mu},\mathcal{S}^{\nu}\}=\epsilon_{\mu\nu\rho}\mathcal{S}^{\rho}\). The evolution can be recast in terms of the spherical angles \(\theta(t),\,\phi(t)\).
In the case of the Hamiltonian (5), the non-linear precession of the collective spin is described by the classical equations of motion14 \[\begin{cases}\dot{\theta}=2J_{0}\sin\theta\cos\phi\sin\phi\,,\\ \dot{\phi}=-h+2J_{0}\cos\theta\cos^{2}\phi\,.\end{cases} \tag{126}\] Footnote 14: For convenience, we rescale time by a factor \(s\). As the Hamiltonian governs a single degree of freedom, the classical limit is trivially integrable and characterized by regular periodic trajectories in phase space. Such behaviour corresponds to persistent spin oscillations after a quench, whose period depends on the initial state. For \(|h|<2J_{0}\), the phase space also features a separatrix with a diverging classical period, terminating at the saddle point \(\theta=0\) and characterized by an exponential instability rate \[\lambda=\sqrt{h(J_{0}-h)} \tag{127}\] (i.e. the eigenvalue of the stability matrix at the saddle point). While such fully-connected spin models generically exhibit periodic orbits, semiclassical chaotic behavior can occur in a number of relevant situations. A standard example comes from introducing time-dependent driving, thus breaking energy conservation: the quantum kicked top [258; 259], corresponding to a step-wise driving protocol applied to the model above, provides a paradigmatic regular-to-chaotic crossover as a function of the driving parameters. Another source of chaoticity comes from coupling the spins to other degrees of freedom, such as a cavity mode, which gives rise to the textbook Dicke model [260; 261]. Finally, chaotic behaviour can arise from self-interactions of higher spins \(s>1/2\), which are generally described by \(n=2s>1\) collective degrees of freedom: in the absence of additional symmetries, self-interactions will break classical integrability. These extended possibilities can be addressed with the method summarized in Appendix B. In the rest of this Section, we will refer to them when discussing the impact of chaos on the quantum dynamics of fully-connected systems.

**Summary**: In the thermodynamic limit the quench dynamics of fully-connected spin systems is described by classical periodic trajectories of a single collective degree of freedom.

#### 4.1.2 Dynamical phase transitions - Order parameter

The non-equilibrium evolution described above may or may not result in collective spin ordering at long times. An abrupt change of the dynamical ordering properties as a function of driving control parameters is referred to as a _dynamical phase transition_ (DPT) [218; 219; 221; 245; 262; 263; 264; 265; 266; 267; 268]. In particular, when a system is quenched from a symmetric state across the equilibrium critical point, dynamical scaling properties associated with aging or coarsening may appear [164; 167; 269; 270]. Conversely, when a system undergoes a sudden quench from a broken-symmetry state, the resulting out-of-equilibrium dynamics may display two different phases. One can define a non-equilibrium order parameter by time-averaging the corresponding equilibrium order parameter. This quantity may vanish or not, depending on whether the symmetry is dynamically restored after the quench. The associated _dynamical critical point_ is believed to have a universal character. Special interest was placed on systems that fail to rapidly approach thermal equilibrium after the quench, as their dynamical universality may have no equilibrium counterpart [257; 263]. Fully-connected spin systems provide the simplest instance of genuinely dynamical phase transitions.
To illustrate this we consider the infinite-range quantum Ising model [Eq. (5) with \(\gamma=1\)]. The nature of the non-equilibrium dynamics is encoded in the classical trajectories of the collective spin, which may have paramagnetic or ferromagnetic character. Here one studies quenches in the transverse field \(h_{0}\to h_{f}\), for which DPTs have been extensively studied [219; 245; 248; 271]. The two non-equilibrium phases are distinguished by the time average of the equilibrium order parameter, \[\overline{\mathcal{S}^{x}}=\lim_{T\to\infty}\frac{1}{T}\int_{0}^{T}dt\frac{\langle\hat{S}^{x}(t)\rangle}{Ns}, \tag{128}\] which serves as a non-equilibrium order parameter: it is finite, \(\overline{\mathcal{S}^{x}}\neq 0\), for shallow quenches in the dynamical ferromagnetic phase, and it vanishes abruptly, \(\overline{\mathcal{S}^{x}}=0\), at the dynamical critical point \(h_{\mathrm{dcr}}=(h_{0}+J_{0})/2\), associated with the critical trajectory corresponding to the phase-space separatrix; for deeper quenches \(h_{f}>h_{\mathrm{dcr}}\) the system lies in the dynamical paramagnetic phase. See Fig. 12(a-b) for an illustration.

Figure 12: Equilibrium configurations and possible instances of non-equilibrium dynamics in the fully-connected quantum Ising model. (a-c) Pictorial representation on the Bloch sphere of the collective spin for the post-quench Hamiltonian. (a-b) For \(h_{f}<|J|\), the energy possesses two minima characterized by non-vanishing, opposite magnetizations along \(x\). (c) For \(h_{f}>|J|\), the system is paramagnetic with a single equilibrium configuration in the direction of the field. Initial fully polarized states at \(t=0\) are pictured as a point on the Bloch sphere, surrounded by a small grey circle representing their transverse quantum fluctuations. Labels (a1-3)* represent possible instances of such initial conditions. (a1-a3) Semiclassical phase portrait of the ferromagnetic post-quench Hamiltonian, where the initial states move along a nontrivial nonequilibrium trajectory, corresponding to the initial conditions (a1-3)* respectively. (a4-a6) Associated dynamics of the classical magnetization. Labels (a) refer to ferromagnetic initial states \(h_{0}<|J|\). Their time evolution is characterized by ferromagnetic periodic (green) trajectories [see (a1) and (a4)] or paramagnetic (blue) ones [see (a3) and (a6)], with \(\overline{S}_{x}(t)\neq 0\) and \(\overline{S}_{x}(t)=0\), respectively. These are separated by the unstable (red) trajectory occurring at \(h_{\mathrm{dcr}}\) [see (a2) and (a5)]. Labels (b-c) refer to an initial paramagnetic state \(h_{0}=\infty\) evolved with two different Hamiltonians: (b) quench performed to a ferromagnetic Hamiltonian \(h_{f}<|J|\), where the initial state lies on the unstable trajectory; (c) quench performed to a different paramagnetic configuration \(h_{f}>|J|\). Images adapted from Refs. [57; 58].

This kind of DPT has been realized experimentally with cold atoms in optical cavities [22] or in a superconducting quantum simulator [272]. The spectral counterparts of these DPTs are given by excited-state quantum phase transitions (ESQPTs) [273; 274; 275; 276; 277; 278; 279]. These correspond to singularities of the density of states at some finite energy density, which distinguish eigenstates of ferromagnetic nature from those of paramagnetic nature; see Ref. [280] for a recent review.
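The dynamical transition can be located numerically by integrating the classical equations of motion (126) and evaluating the time average of Eq. (128). The sketch below does this for a quench starting from the \(h_{0}=0\) ground state (fully polarized along \(x\), i.e. \(\theta=\pi/2\), \(\phi=0\)); the quench values and integration times are illustrative, and the precise location of the jump depends on the normalization conventions entering Eq. (126).

```python
import numpy as np
from scipy.integrate import solve_ivp

J0 = 1.0

def eom(t, y, h):
    """Classical mean-field equations of motion, Eq. (126)."""
    theta, phi = y
    return [2 * J0 * np.sin(theta) * np.cos(phi) * np.sin(phi),
            -h + 2 * J0 * np.cos(theta) * np.cos(phi) ** 2]

def time_averaged_sx(h_f, T=400.0):
    """Non-equilibrium order parameter, Eq. (128), for a quench h0 = 0 -> h_f."""
    sol = solve_ivp(eom, (0.0, T), [np.pi / 2, 0.0], args=(h_f,),
                    dense_output=True, rtol=1e-10, atol=1e-10)
    t = np.linspace(0.0, T, 40001)
    theta, phi = sol.sol(t)
    return np.mean(np.sin(theta) * np.cos(phi))

for h_f in [0.2, 0.5, 0.8, 1.0, 1.2, 1.6]:
    print(f"h_f = {h_f:4.2f}   time-averaged S^x = {time_averaged_sx(h_f):+.3f}")
# Shallow quenches give a finite average (trapped, ferromagnetic trajectories);
# beyond the dynamical critical point the trajectory circulates and the
# time-averaged magnetization drops abruptly to ~0.
```

Because the critical trajectory is a separatrix with a diverging period, the time average converges slowly near the transition; longer averaging windows sharpen the jump.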
The notion of DPT discussed here is in general distinct from that of the DQPT discussed in Sec. 3.4, and therefore the two phenomena may not even occur concomitantly in the same model. However, a connection has been pointed out whenever both phenomena are present [242] (see also Refs. [238; 57; 239]). Below, in Section 4.2.3, we will discuss how this out-of-equilibrium phenomenon is affected by decreasing the interaction range. Let us mention that, due to the intrinsic semiclassical nature of dynamics in this class of models, observables can be efficiently simulated using phase-space numerical techniques [281; 282; 283], such as the Truncated Wigner Approximation (TWA) [284; 285; 282] or its discrete [129; 286] or clustered [287] versions. These methods have also been used intensively to explore dynamics with finite-\(\alpha\) interactions, which we discuss in the next section [288; 289; 290; 291; 292; 293; 294; 295; 296].

#### 4.1.3 Semiclassical dynamics of quantum fluctuations

The classical description of dynamics outlined above is exact in the thermodynamic limit \(N\to\infty\). In finite systems, however, it has a limited time scale \(T_{\text{Ehr}}(N)\) of validity, known as the _Ehrenfest time scale_: at long times \(t\gtrsim T_{\text{Ehr}}(N)\), quantum fluctuations around the classical limit will dominate the behavior of time-dependent local observables and entanglement quantifiers. \(T_{\text{Ehr}}\) can be estimated as the time at which the size of quantum fluctuations becomes comparable with a characteristic phase-space scale. This depends on the initial state and on the nature of the underlying classical dynamics. In this Subsection, we discuss the semiclassical dynamics of quantum fluctuations. To compute the evolution of quantum spin fluctuations it is convenient to generalize the Holstein-Primakoff approach introduced in Sec. 2.4 above to the non-equilibrium context [56; 57]. When the system is driven out of equilibrium, the direction of the collective spin configuration [parametrized by \(\theta(t)\) and \(\phi(t)\)] moves along the corresponding classical trajectory on the unit sphere. We thus let the adapted frame of reference \((\mathbf{X},\mathbf{Y},\mathbf{Z})\) in Eq. (12) vary in time, in such a way that the \(\mathbf{Z}\)-axis follows the evolution of \(\langle\hat{\mathbf{S}}(t)\rangle\propto\mathbf{Z}(t)\). This way, the collective spin components along \(\mathbf{X}\) and \(\mathbf{Y}\) are associated with quantum fluctuations and will be mapped to canonical bosonic variables. The time-dependent spin rotation described above is implemented by the time-dependent unitary operator \[\hat{V}(\theta(t),\phi(t))=e^{-i\phi(t)\,\hat{S}^{z}}\,e^{-i\theta(t)\,\hat{S}^{y}}, \tag{129}\] where the time dependence of the angles is for the moment unspecified. The Heisenberg equations for the spin components \(\hat{S}^{\mu}\) with \(\mu=X,Y,Z\) in the mobile frame then read \[\frac{d}{dt}\,\hat{S}^{\mu}=\frac{1}{i}[\hat{S}^{\mu},\tilde{H}]\,,\qquad\text{where}\quad\tilde{H}(t)\equiv\hat{V}\,\hat{H}\,\hat{V}^{\dagger}+i\dot{\hat{V}}\hat{V}^{\dagger}\,. \tag{130}\] The effective time-dependent Hamiltonian \(\tilde{H}(t)\) includes inertial forces arising from the time dependence of \(\hat{V}\). A direct calculation shows \[i\dot{\hat{V}}\hat{V}^{\dagger}=-\,\vec{\omega}(t)\cdot\hat{\vec{S}}\qquad\text{with}\quad\vec{\omega}(t)=\left(-\sin\theta\,\dot{\phi},\,\dot{\theta},\,\cos\theta\,\dot{\phi}\right). \tag{131}\] The time-dependent Hamiltonian \(\tilde{H}(t)\) is then transformed to a bosonic Hamiltonian via the Holstein-Primakoff transformation, cf. Eq.
(19). This yields an expression of the form [15] \[\begin{split}\tilde{H}(t)&\approx+\,(Ns)^{1}\;\mathcal{E}\left(\theta(t),\phi(t)\right)\\ &\quad+(Ns)^{1/2}\,\left(\tilde{h}_{Q}^{(1)}(t)\hat{q}+\tilde{h}_{P}^{(1)}(t)\hat{p}\right)\\ &\quad+(Ns)^{0}\left(\tilde{h}_{QQ}^{(2)}(t)\,\frac{\hat{q}^{2}}{2}+\tilde{h}_{PP}^{(2)}(t)\,\frac{\hat{p}^{2}}{2}+\tilde{h}_{QP}^{(2)}(t)\,\frac{\hat{q}\hat{p}+\hat{p}\hat{q}}{2}\right)\\ &\quad+\mathcal{O}\big((Ns)^{-1/2}\big).\end{split} \tag{132}\] Compared to the "static" rotated-frame Hamiltonian (obtained by just rotating the spins and mapping to bosons) [see e.g. Eq. (20)], the additional inertial Hamiltonian modifies the linear terms as \(\tilde{h}_{Q}^{(1)}(t)\equiv h_{Q}^{(1)}(\theta(t),\phi(t))+\sin\theta(t)\;\dot{\phi}(t)\) and \(\tilde{h}_{P}^{(1)}(t)\equiv h_{P}^{(1)}(\theta(t),\phi(t))-\dot{\theta}(t)\), while the quadratic ones are modified as \(\tilde{h}_{QQ,PP}^{(2)}(t)\equiv h_{QQ,PP}^{(2)}\big(\theta(t),\phi(t)\big)-\cos\theta(t)\;\dot{\phi}(t)\) and \(\tilde{h}_{QP}^{(2)}(t)\equiv h_{QP}^{(2)}\big(\theta(t),\phi(t)\big)\). The evolution of \(\theta(t)\) and \(\phi(t)\) is fixed by the vanishing of the linear terms \(\tilde{h}^{(1)}(t)\), ensuring \(\langle\hat{S}^{X}(t)\rangle=\langle\hat{S}^{Y}(t)\rangle=0\). This yields the classical mean-field equations of motion governed by \(\mathcal{H}_{\text{cl}}\), i.e. Eq. (126) for our model. On the other hand, the number of collective excitations \(\hat{n}_{0}=(\hat{q}^{2}+\hat{p}^{2}-1)/2\) [see e.g. Eq. (19)] evolves non-trivially in time. Its dynamics are governed by the time-dependent quadratic Hamiltonian parametrized by \(\tilde{h}^{(2)}(t)\) above. In order to evaluate them, one computes the Heisenberg equations of motion \[\begin{cases}\dot{\hat{q}}=+\tilde{h}_{QP}^{(2)}(t)\;\hat{q}+\tilde{h}_{PP}^{(2)}(t)\;\hat{p}\\ \dot{\hat{p}}=-\tilde{h}_{QQ}^{(2)}(t)\;\hat{q}-\tilde{h}_{QP}^{(2)}(t)\;\hat{p}\end{cases}, \tag{133}\] with solution \(\begin{pmatrix}\hat{q}(t)\\ \hat{p}(t)\end{pmatrix}=U(t)\begin{pmatrix}\hat{q}(0)\\ \hat{p}(0)\end{pmatrix}\), where the \(2\times 2\) propagator \(U(t)\) can be formally written as the time-ordered exponential of the matrix defined by the right-hand side of Eq. (133). One can collect the dynamical fluctuations (or "correlations") \(G^{QQ}(t)\equiv\langle\hat{q}^{2}(t)\rangle\), \(G^{PP}(t)\equiv\langle\hat{p}^{2}(t)\rangle\) and \(G^{QP}(t)\equiv\frac{\langle\hat{q}(t)\hat{p}(t)+\hat{p}(t)\hat{q}(t)\rangle}{2}\) in the \(2\times 2\) _correlation matrix_ \[G(t)=\begin{pmatrix}G^{QQ}(t)&G^{QP}(t)\\ G^{QP}(t)&G^{PP}(t)\end{pmatrix}=U(t)\,G(t=0)\,U^{T}(t)\,. \tag{134}\] The number of dynamically generated excitations can be expressed as \[\langle\hat{n}_{0}(t)\rangle=\frac{G^{QQ}(t)+G^{PP}(t)-1}{2}=\frac{1}{2}\text{Tr}\bigg[G(t)-\frac{\mathbb{1}}{2}\bigg]\,. \tag{135}\] Note that \(\det G(t)\equiv 1/4\), which is an exact property of _pure_ Gaussian states preserved by Hamiltonian evolution. For our fully-connected Ising model, the equations of motion for the correlation matrix read \[\begin{cases}\dot{G}^{QQ}=2J_{0}\cos\theta\sin\phi\cos\phi\,G^{QQ}+2J_{0}\left(\cos^{2}\phi-\sin^{2}\phi\right)\;G^{QP}\\ \dot{G}^{PP}=-2J_{0}\cos\theta\sin\phi\cos\phi\,G^{PP}-2J_{0}\cos^{2}\phi\sin^{2}\theta\,G^{QP}\\ \dot{G}^{QP}=-J_{0}\cos^{2}\phi\sin^{2}\theta\,G^{QQ}+J_{0}\left(\cos^{2}\phi-\sin^{2}\phi\right)\,G^{PP}\end{cases} \tag{136}\]
Crucially, because we obtained these equations by expanding the Hamiltonian in powers of \(\hbar_{\text{eff}}\), and because classical and quantum evolutions generated by quadratic Hamiltonians coincide, _the semiclassical dynamics of quantum fluctuations_ -- characterized by the time-dependent correlation matrix \(G(t)\) -- _obeys the same equation of motion as the linearized flow of displacements from the classical trajectories._ This statement actually applies to arbitrary semiclassical systems with \(n\) degrees of freedom, where the correlation matrix \(G(t)\) of the quantum fluctuations becomes a \(2n\times 2n\) matrix. We refer to Ref. [297] or Appendix B for a complete discussion. The correlation matrix \(G(t)\) is equivalent to the monodromy matrix whose eigenvalues define _the finite-time classical Lyapunov spectrum_ \(\{\lambda_{k}(t)\}\) [298]. When the classical dynamics is integrable, nearby initial conditions generically separate linearly in time, as becomes manifest via action-angle variables [58]. Thus, the temporal growth of the quantum correlations is polynomial, \(\langle\hat{n}_{0}(t)\rangle\sim t^{2}\). Isolated unstable trajectories like the separatrix discussed in Sec. 4.1.2 are characterized by exponential sensitivity, and hence \(\langle\hat{n}_{0}(t)\rangle\sim e^{2\lambda t}\), where \(\lambda\) is the largest eigenvalue of the saddle point that controls the instability. The asymptotic growth also depends on the initial conditions for systems with a mixed regular-chaotic phase space, e.g. resulting from integrability breaking within a Kolmogorov-Arnold-Moser scenario [298]. On the other hand, in systems with fully developed chaos in phase space, the Lyapunov spectrum is uniform and nonvanishing. This implies an asymptotic exponential growth of quantum fluctuations, \(\langle\hat{n}_{0}(t)\rangle\sim e^{2\lambda t}\). The classification is concluded by the case of stable equilibrium configurations, the linearized dynamics of which are equivalent to those of coupled harmonic oscillators. Accordingly, all the quantities of interest perform bounded (periodic or quasiperiodic) oscillations. This classification is summarized in the first row of Table 1.

\begin{table} \begin{tabular}{l c c c} \hline \hline Classical trajectory & Stable & Regular & Chaotic (Unstable) \\ \hline Collective fluctuations & oscillations & \(t^{2}\) & \(e^{2\lambda t}\) \\ Ehrenfest time scale & \(\mathcal{O}(\sqrt{N})\) & \(\mathcal{O}(\sqrt{N})\) & \(\mathcal{O}(\ln N)\) \\ Entanglement entropy & oscillations & \(\ln t\) & \(\Lambda_{K}\,t\) \\ Square commutator & oscillations & \(t^{2}\) & \(e^{2\lambda t}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Summary of the dynamical behaviour of entanglement and chaos quantifiers of \(N\)-particle collective systems in the semiclassical regime. The growth of the entanglement quantifiers and the square commutator depends on the nature of the limiting classical trajectory in the \(2n\)-dimensional phase space (stable configuration, regular or chaotic), up to the Ehrenfest time. Here, \(\lambda\equiv\lambda_{1}\) is the maximum finite-time Lyapunov exponent, and \(\Lambda_{K}=\sum_{k=1}^{2K}\lambda_{k}\) is the sum of the \(2K\) largest Lyapunov exponents, where \(K\) is the number of degrees of freedom associated with the considered subsystem. For \(K=n/2\), one has the classical Kolmogorov-Sinai entropy rate \(\Lambda_{\text{KS}}=\sum_{k:\lambda_{k}>0}\lambda_{k}\).

The formalism outlined in this Section is quantitatively accurate as long as the number of collective excitations remains small compared to the system size, \(\langle\hat{n}_{0}\rangle\ll N\). As shown in Sec. 2.3 this assumption is generically valid for ground states, even at the quantum critical points [84; 299]. Out of equilibrium, this condition defines the Ehrenfest time scale, given by \[\langle\hat{n}_{0}(T_{\text{Ehr}})\rangle\sim N\,. \tag{137}\] On this time scale the quadratic truncation of the bosonic representation loses accuracy. The non-linear corrections generally lead to saturation of the growth of quantum fluctuations and to revivals on much longer times. Putting everything together, we have \[\begin{cases}\text{regular trajectories}&\langle\hat{n}_{0}(t)\rangle\sim t^{2}&T_{\text{Ehr}}\sim\hbar_{\text{eff}}^{-1/2}\sim\sqrt{N}\\ \text{unstable (chaotic) trajectories}&\langle\hat{n}_{0}(t)\rangle\sim e^{2\lambda t}&T_{\text{Ehr}}\sim\ln\hbar_{\text{eff}}^{-1/2}\sim\ln N\end{cases}\;. \tag{138}\]
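The statement that \(G(t)\) follows the linearized classical flow suggests a simple numerical route: integrate Eq. (126) together with its tangent (variational) equations and build \(G(t)=U(t)G(0)U^{T}(t)\) as in Eqs. (134)-(135). The sketch below does this using the canonical pair \((\phi,z=\cos\theta)\) -- our choice of variables -- with coherent-state initial fluctuations \(G(0)=\mathbb{1}/2\) (up to an overall \(\hbar_{\text{eff}}\) normalization); parameters are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

J0, h = 1.0, 0.3   # illustrative post-quench parameters

def flow(t, y):
    """Classical trajectory in the canonical pair (phi, z = cos(theta)),
    equivalent to Eq. (126), plus the 2x2 tangent map U(t) stored in y[2:]."""
    phi, z = y[0], y[1]
    U = y[2:].reshape(2, 2)
    dphi = -h + 2 * J0 * z * np.cos(phi) ** 2
    dz = -2 * J0 * (1 - z ** 2) * np.sin(phi) * np.cos(phi)
    # Jacobian of the vector field (dphi, dz) with respect to (phi, z)
    M = np.array([
        [-2 * J0 * z * np.sin(2 * phi),            2 * J0 * np.cos(phi) ** 2],
        [-2 * J0 * (1 - z ** 2) * np.cos(2 * phi), 4 * J0 * z * np.sin(phi) * np.cos(phi)],
    ])
    return np.concatenate(([dphi, dz], (M @ U).ravel()))

y0 = np.concatenate(([0.0, 0.0], np.eye(2).ravel()))   # theta = pi/2, phi = 0
sol = solve_ivp(flow, (0.0, 20.0), y0, t_eval=np.linspace(0, 20, 11),
                rtol=1e-10, atol=1e-10)

for t, col in zip(sol.t, sol.y.T):
    U = col[2:].reshape(2, 2)
    G = U @ (0.5 * np.eye(2)) @ U.T        # Eq. (134) with G(0) = 1/2
    n0 = 0.5 * (np.trace(G) - 1.0)         # Eq. (135)
    print(f"t = {t:5.1f}   <n0> = {n0:10.4f}   det G = {np.linalg.det(G):.4f}")
# det G stays pinned at 1/4 (pure Gaussian state), while <n0> grows ~ t^2 on
# regular trajectories and exponentially on the separatrix.
```

The same routine, with a larger tangent block, applies verbatim to the \(n>1\) chaotic cases classified in Table 1.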
The dynamical growth of collective quantum fluctuations goes hand in hand with the scrambling of quantum information and the dynamics of quantum entanglement. As we will discuss in the next two sections, the approach described here allows us to derive an exact relation between \(\langle\hat{n}_{0}(t)\rangle\), scrambling and entanglement.

**Summary**: Collective quantum fluctuations evolve as the linearized flow of displacements around the classical trajectory before the Ehrenfest time scale. The latter is defined as the time at which the number of quantum fluctuations becomes comparable with the system size and, consequently, it depends on the classical phase space.

#### 4.1.4 Scrambling dynamics

Scrambling has recently been proposed as a pathway to characterize chaos in many-body dynamics. Generically identified with the delocalization of quantum information, scrambling is commonly quantified by the dynamics of the _square commutator_ \[c(t)=\langle\left|[\hat{A}(t),\hat{B}]\right|^{2}\rangle\;, \tag{139}\] or of the closely related out-of-time-order correlators (OTOC) \(\langle\hat{A}(t)\hat{B}\hat{A}(t)\hat{B}\rangle\), where the expectation value is taken in a quantum state \(\hat{\rho}\), i.e., \(\langle\cdot\rangle=\text{Tr}(\hat{\rho}\,\cdot)\). The square commutator was originally introduced by Larkin and Ovchinnikov [300] to describe semi-classically the exponential sensitivity to initial conditions.16 Footnote 16: The heuristics goes as follows: for \(\hat{A}=\hat{x}\) and \(\hat{B}=\hat{p}\) in the limit \(\hbar\to 0\), upon canonical quantization one has \[c(t)\simeq\hbar^{2}\{x(t),p(0)\}_{\text{PB}}^{2}\simeq\hbar^{2}\left|\frac{\partial x(t)}{\partial x(0)}\right|^{2}\;. \tag{140}\] Hence, \(c(t)\) encodes the square of the derivatives of the classical trajectory with respect to the initial conditions. Thus, whenever the underlying classical limit is chaotic, \(c(t)\) is expected to grow exponentially in time as \[c(t)\simeq\hbar_{\text{eff}}^{2}\;e^{\tilde{\lambda}t}\;, \tag{141}\] with a rate \(\tilde{\lambda}\) which may be related to the classical Lyapunov exponent (but is in principle distinct from it). This holds at intermediate times before the Ehrenfest scale \(t<T_{\text{Ehr}}\sim\ln\hbar_{\text{eff}}^{-1}\), in this context also referred to as the _scrambling time_. Interest in the square commutator was revived after Kitaev's proposal to use it to characterize many-body dynamics [301].
In this context, it was shown that the rate \(\tilde{\lambda}\) is upper bounded by quantum effects as \(\tilde{\lambda}\leq\frac{2\pi T}{\hbar}\) [302], as a consequence of the quantum fluctuation-dissipation theorem [303; 304]. This constraint -- now known as "the bound to chaos" -- is saturated by models of black holes, including the Sachdev-Ye-Kitaev model (SYK) [301; 305], a system of fully interacting disordered Majorana fermions whose scrambling time grows as \(\ln N\). In the present case of fully-connected systems with a classical limit (Section 4.1.1), scrambling before the Ehrenfest time thus directly probes the sensitivity of the classical trajectories to infinitesimal perturbations. One can study the square commutator in Eq. (139) by taking the expectation value in pure quasiclassical initial states and by looking at the square commutator between two collective spin projections, namely \[c_{\alpha\beta}(t)=-\left(\frac{1}{Ns}\right)^{2}\langle\psi_{0}|\left[\hat{S}^{\alpha}(t),\hat{S}^{\beta}(0)\right]^{2}|\psi_{0}\rangle\;, \tag{142}\] where \(\alpha,\beta=x,y,z\) and \(|\psi_{0}\rangle\) is a fully polarized spin-coherent initial state. Using the expansion of the quantum fluctuations elaborated in Section 4.1.3, we can compute the semiclassical evolution of the out-of-time-order square commutator. By plugging the expansion of the rotated spin operators (19) into the definition (142), one then substitutes the formal solution for the spin fluctuations at time \(t\), i.e., \(\hat{Q}(t)=U_{qq}(t)\,\hat{Q}(0)+U_{qp}(t)\,\hat{P}(0)\) and \(\hat{P}(t)=U_{pq}(t)\,\hat{Q}(0)+U_{pp}(t)\,\hat{P}(0)\). The initial fluctuations for coherent states are \(\langle\hat{Q}(0)^{2}\rangle=\langle\hat{P}^{2}(0)\rangle=1/2\) and \(\langle\hat{Q}(0)\hat{P}(0)+\hat{P}(0)\hat{Q}(0)\rangle=0\). The resulting out-of-time square commutator in Eq. (142) thus reads \[c_{\alpha\beta}(t)=\Big[X_{\alpha}(t)\big(\,U_{qq}(t)\,\,Y_{\beta}(0)-U_{qp}(t)\,\,X_{\beta}(0)\,\big)+Y_{\alpha}(t)\,\big(\,U_{pq}(t)\,Y_{\beta}(0)-U_{pp}(t)\,X_{\beta}(0)\,\big)\Big]^{2}+\mathcal{O}(\hbar_{\rm eff}). \tag{143}\] This expresses a quantitative relation between the square commutator and the formal evolution \(U(t)\) [cf. below Eq. (133)] of the quantum fluctuations, which encodes the evolution of linearised displacements and the _finite-time_ Lyapunov exponent spectrum \(\{\lambda_{k}(t)\}\), as described in Section 4.1.3. Hence, when the classical limit is integrable, the square commutator grows as \(c(t)\simeq t^{2}\) within the Ehrenfest time scale \(T_{\rm Ehr}\). On the other hand, in the presence of exponential sensitivity associated with a phase-space separatrix (cf. Sec. 4.1.1 above) or with chaos, the square commutator \(c(t)\simeq e^{2\lambda_{1}t}\) grows exponentially before \(T_{\rm Ehr}\), with \(\lambda_{1}=\lambda_{1}(t)\) the maximal finite-time Lyapunov exponent of the underlying semiclassical trajectory. The different scenarios are summarized in Table 1. Results for the fully-connected quantum Ising model are shown in Fig. 13, where we consider \(c_{zz}(t)\) and compare the analytical result (black full line) with exact-diagonalization results for finite system sizes. Parameters are specified in the caption. The plot highlights the relation between entanglement entropy (discussed below) and scrambling before the Ehrenfest time.
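The square commutator of Eq. (142) is straightforward to evaluate by exact diagonalization within the Dicke manifold, whose dimension grows only linearly with \(N\). The sketch below does so for the fully-connected Ising model; the Hamiltonian normalization \(\hat{H}=-(2J_{0}/N)\hat{S}_{x}^{2}-h\hat{S}_{z}\) and all parameter values are our illustrative assumptions (conventions for the collective coupling vary).

```python
import numpy as np

def spin_ops(S):
    """Collective spin matrices S^x, S^z in the (2S+1)-dim Dicke manifold."""
    m = np.arange(S, -S - 1, -1)                      # S_z eigenvalues, descending
    sz = np.diag(m).astype(complex)
    cp = np.sqrt(S * (S + 1) - m[1:] * (m[1:] + 1))   # raising-operator elements
    sp = np.zeros((len(m), len(m)), dtype=complex)
    sp[np.arange(len(m) - 1), np.arange(1, len(m))] = cp
    return 0.5 * (sp + sp.conj().T), sz

N = 60                       # spins-1/2; the Dicke manifold has N + 1 states
S = N / 2
J0, hf = 1.0, 0.5            # quench h0 = 0 -> hf (illustrative)
sx, sz = spin_ops(S)

_, Vx = np.linalg.eigh(sx)
psi0 = Vx[:, -1]             # coherent state fully polarized along +x (h0 = 0)

Ef, Vf = np.linalg.eigh(-(2 * J0 / N) * sx @ sx - hf * sz)

def c_zz(t):
    """Square commutator of Eq. (142) with alpha = beta = z."""
    U = Vf @ np.diag(np.exp(-1j * Ef * t)) @ Vf.conj().T
    szt = U.conj().T @ sz @ U                         # Heisenberg picture
    comm = szt @ sz - sz @ szt
    return (-(psi0.conj() @ (comm @ comm) @ psi0)).real / S ** 2

for t in np.linspace(0.0, 10.0, 6):
    print(f"t = {t:5.2f}   c_zz = {c_zz(t):.3e}")
```

Increasing \(N\) extends the window of exponential growth before the Ehrenfest time, mirroring the finite-size trends of Fig. 13.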
As the considered model has an integrable classical limit, the square commutator only grows exponentially, \(c_{zz}(t)\sim e^{2\lambda t}\), for quenches at the dynamical critical point \(h_{\rm dcr}\), associated with a classical separatrix with instability rate \(\lambda\) given in Eq. (127). The exponential sensitivity of the square commutator in the presence of the underlying separatrix has been explored in a number of fully connected mean-field models [306; 307; 308; 292] and recently probed experimentally [309]. When the initial state in Eq. (139) is a random permutationally invariant state, the growth rate of the square commutator corresponds to the average of the finite-time Lyapunov exponent over the whole phase space. In the presence of an instability, this leads to a modified exponential growth \(c(t)\sim e^{\lambda t}\) (rather than \(e^{2\lambda t}\)) [310]. As we discussed above, fully-connected spin systems may exhibit classically chaotic evolution when driven periodically (e.g. the quantum kicked top), when coupled to other degrees of freedom (e.g. Dicke models), or for larger individual spins \(s>1/2\). The analysis above predicts exponential growth of the square commutator for underlying classical chaos, as reported in the literature for the quantum kicked top [311; 312; 313; 314; 315; 316; 317], the Dicke model [318; 319; 320; 321] and other spin models [322; 314]. Recently, it has been pointed out that scrambling may become _super-exponential_ in fully connected models when the average in Eq. (139) is taken over an infinite-temperature state [323]. A similar statement also applies to other quantifiers of quantum information spreading, in particular to the entanglement entropy, which we turn to analyze in the next Subsection.

#### 4.1.5 Entanglement dynamics

It is by now well established that a large body of information about many-body dynamics, their thermalization properties, and the complexity of their numerical simulations can be inferred from the evolution of bipartite entanglement entropies. For a composite system with Hilbert space \(\mathcal{H}=\mathcal{H}_{A}\otimes\mathcal{H}_{B}\) in a pure state \(\hat{\rho}=\ket{\psi}\bra{\psi}\), the bipartite entanglement between subsystems \(A\) and \(B\) is encoded in the reduced density matrix \(\hat{\rho}_{A}=\mathrm{Tr}_{B}\hat{\rho}\).17 Footnote 17: The nonvanishing eigenvalues of \(\hat{\rho}_{B}=\mathrm{Tr}_{A}\hat{\rho}\) are equal to those of \(\hat{\rho}_{A}\). The system is entangled with respect to the bipartition \((A,B)\) if \(\hat{\rho}_{A}\) (equivalently, \(\hat{\rho}_{B}\)) is not pure. The amount of bipartite entanglement can be quantified by the Renyi entropies \[S_{A}^{\alpha}=\frac{1}{1-\alpha}\ln\mathrm{Tr}\,\hat{\rho}_{A}^{\alpha}\,, \tag{144}\] parameterized by \(\alpha>1\). The von Neumann entropy is obtained as their limit for \(\alpha\to 1\), i.e., \[S_{A}=-\mathrm{Tr}\big(\hat{\rho}_{A}\ln\hat{\rho}_{A}\big). \tag{145}\] As far as local Hamiltonians are concerned, the baseline features of the entanglement entropy growth of pure states out of equilibrium are well understood. As a broad consequence of the light-cone spreading of quantum correlations (see discussion in Sec. 3.1), in thermalizing local systems the entanglement entropy \(S_{A}(t)\) grows linearly in time before saturating to a value proportional to the subsystem volume [112; 113; 325].
The underlying mechanism is well understood in integrable systems, where it has been explained via a semiclassical picture based on the propagation of quasi-particle pairs [326]. Analytical insights in chaotic many-body systems came from the study of random unitary circuits [114]. On the other hand, the presence of localized integrals of motion (exact or approximate) causes a slowdown of entanglement growth, with a distinctive logarithmic increase for many-body localized systems [327]. Neither of these scenarios is adequate for long-range interacting systems, where numerical results exhibited slow logarithmic entanglement growth even in the absence of quenched disorder [125; 127; 292].

Figure 13: Entanglement and scrambling dynamics after a quench of the transverse field in the fully-connected Ising model, from \(h_{0}=0\) to \(h_{f}>0\). The analytical prediction (black lines) is compared with exact numerical results (colours) at finite \(N=50\), \(200\), \(800\). We study the growth of the entanglement entropy (145) (top panel) and the square commutator (142) (bottom panel), quenching above, below, and at the dynamical phase transition (DPT) at \(h_{f}=h_{\mathrm{dcr}}\), as pictorially shown in Figure 12(a). (a1-b1) Quench above the DPT: \(h_{f}=2J_{0}>h_{\mathrm{dcr}}\). (a2-b2) Quench below the DPT: \(h_{f}=0.2J_{0}<h_{\mathrm{dcr}}\). (a3-b3) Quench at the DPT: \(h_{f}=J_{0}/2=h_{\mathrm{dcr}}\). Time is measured in units of \(J_{0}\). Plots adapted from Ref. [324].

A successful picture capturing entanglement growth in fully-connected systems was only achieved more recently [58; 297]. The semiclassical growth of quantum fluctuations (described in the section above) allows us to analytically relate the dynamics of the entanglement entropy \(S_{A}(t)\) to the quantifiers of chaos, leading to a general unifying picture. This formalism yields a clean prediction of logarithmic growth in the absence of semiclassical chaos. This constitutes the origin of the slowdown of entanglement growth, which was first observed in numerical simulations of dynamics in long-range interacting quantum spin chains [125; 292; 127]: we will complete this discussion in Section 4.2.5 below. We will illustrate how \(S_{A}(t)\) _asymptotically coincides with the logarithm of the phase-space volume spanned by the quantum fluctuations of the subsystem degrees of freedom_, as originally identified in a seminal work by Zurek and Paz [328]; see Refs. [58; 297; 329; 330; 331] for more recent literature. In the case of the fully-connected \(N\)-particle systems considered here, one considers a bipartition between any two sets of spins, where the only relevant parameter is the number \(N_{A}=f_{A}N\) of particles in subsystem \(A\) (with \(N_{B}=N-N_{A}=f_{B}N\)).18 The collective spin \(\hat{\vec{S}}\) can be correspondingly decomposed as \(\hat{\vec{S}}=\hat{\vec{S}}_{A}+\hat{\vec{S}}_{B}\) (see Fig. 14). Within the semiclassical description, the bipartite system can be represented by bosonic operators \((\hat{q}_{A},\hat{p}_{A})\) and \((\hat{q}_{B},\hat{p}_{B})\), associated with the quantum fluctuations of the two spins \(\hat{\vec{S}}_{A}\) and \(\hat{\vec{S}}_{B}\), respectively, via the Holstein-Primakoff mapping. These quantum fluctuations are characterized by the correlation matrix \(G(t)\) defined in Eq. (134).
Footnote 18: Due to permutational symmetry, spatial bipartitions have no meaning. It is convenient to define the subsystem's _reduced correlation matrix_ \(G_{A}(t)\) as the \(2\times 2\) matrix of quantum fluctuations built out of the variables of subsystem \(A\) alone, i.e., \[G_{A}=\begin{pmatrix}\langle\hat{q}_{A}^{2}\rangle&\frac{\langle\hat{q}_{A}\hat{p}_{A}+\hat{p}_{A}\hat{q}_{A}\rangle}{2}\\ \frac{\langle\hat{q}_{A}\hat{p}_{A}+\hat{p}_{A}\hat{q}_{A}\rangle}{2}&\langle\hat{p}_{A}^{2}\rangle\end{pmatrix}\equiv\begin{pmatrix}G^{q_{A}q_{A}}&G^{q_{A}p_{A}}\\ G^{q_{A}p_{A}}&G^{p_{A}p_{A}}\end{pmatrix}. \tag{146}\] In the semiclassical regime of small \(\hbar_{\rm eff}\), the reduced density matrix \(\hat{\rho}_{A}(t)\) is asymptotically Gaussian to leading order, and thus fully determined by \(G_{A}(t)\). The entanglement properties can thus be computed via standard techniques [329], see also Refs. [330; 331]. The von Neumann and the second Renyi entropies of a single boson \((\hat{q}_{A},\hat{p}_{A})\) in such a Gaussian state can be expressed in terms of the determinant of \(G_{A}\) as [332] \[S_{A}=2\,\sqrt{\det G_{A}}\,\mathrm{arccoth}\left(2\,\sqrt{\det G_{A}}\right)+\frac{1}{2}\log\left(\det G_{A}-\frac{1}{4}\right)\,, \tag{147a}\] \[S_{A}^{\,(2)}(t)=\frac{1}{2}\ln\,\det\left(2G_{A}(t)\right). \tag{147b}\]

Figure 14: Entanglement dynamics in infinite-range spin chains. (a) The system is partitioned into two blocks of \(N_{A}\) and \(N_{B}\) spins-\(1/2\), initially fully polarized. (b) Collective spins of the two blocks. (c) Collective spin in the factorized initial state, represented on the Bloch sphere. The shaded area represents the quantum uncertainty of the transverse components. (d) Nonlinear interactions determine spin squeezing, which makes the two blocks increasingly correlated (entangled). The rate of squeezing is governed by the separation of nearby semiclassical trajectories, and it sets the rate of growth of the entanglement entropy. Right panels: (e) for generic (noncritical) quenches, nearby trajectories separate linearly in time, leading to polynomially fast squeezing; (f) for a critical quench, the collective spin lies on the stable manifold of an unstable fixed point in phase space. In this case, nearby trajectories separate exponentially fast in time at a rate \(\lambda\) set by the eigenvalue of the linearized flow.

On the other hand, the matrix \(G_{A}\) can be directly related to the correlation matrix \(G\) of the collective excitations \((\hat{q},\hat{p})\).19 Footnote 19: One can perform a linear canonical transformation to the collective \((\hat{q},\hat{p})\) and relative \((\delta\hat{q},\delta\hat{p})\) fluctuation modes: \[\begin{cases}\hat{q}=+\sqrt{f_{A}}\ \hat{q}_{A}+\sqrt{f_{B}}\ \hat{q}_{B}\\ \delta\hat{q}=-\sqrt{f_{B}}\ \hat{q}_{A}+\sqrt{f_{A}}\ \hat{q}_{B}\end{cases}\qquad\begin{cases}\hat{p}=+\sqrt{f_{A}}\ \hat{p}_{A}+\sqrt{f_{B}}\ \hat{p}_{B}\\ \delta\hat{p}=-\sqrt{f_{B}}\ \hat{p}_{A}+\sqrt{f_{A}}\ \hat{p}_{B}\end{cases}. \tag{148}\] Since the Hamiltonian is a function of the collective spin only, the latter bosonic mode is frozen in the vacuum. The explicit computation shows that the determinant can be expressed as \[\det G_{A}=\frac{1}{4}+f_{A}f_{B}\ \langle\hat{n}_{0}\rangle, \tag{149}\] where \(\hat{n}_{0}=(\hat{q}^{2}+\hat{p}^{2}-1)/2\) represents the number of bosonic excitations of the collective spin [cf. Eq. (135)].
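Equations (147a)-(149) translate a single scalar, \(\langle\hat{n}_{0}(t)\rangle\), into the full entanglement content of the bipartition. A minimal helper implementing these relations (together with the squeezing parameter of Eq. (154) below) is sketched here; the input values of \(\langle\hat{n}_{0}\rangle\), e.g. taken from the tangent-map sketch above, are illustrative.

```python
import numpy as np

def entanglement_from_n0(n0, f_A=0.5):
    """Von Neumann entropy (Eq. 147a) via det G_A (Eq. 149), second Renyi
    entropy (Eq. 147b), and squeezing parameter xi^2 (Eq. 154 below)."""
    detGA = 0.25 + f_A * (1.0 - f_A) * n0                 # Eq. (149)
    nu = np.sqrt(detGA)
    arccoth = 0.5 * np.log((2 * nu + 1) / (2 * nu - 1))
    S_A = 2 * nu * arccoth + 0.5 * np.log(detGA - 0.25)   # Eq. (147a)
    S_2 = 0.5 * np.log(4 * detGA)                         # Eq. (147b)
    xi2 = 1 + 2 * n0 - 2 * np.sqrt(n0 * (n0 + 1))         # Eq. (154)
    return S_A, S_2, xi2

for n0 in [0.1, 1.0, 10.0, 100.0]:
    S_A, S_2, xi2 = entanglement_from_n0(n0)
    print(f"<n0> = {n0:6.1f}   S_A = {S_A:.3f}   S_2 = {S_2:.3f}   xi^2 = {xi2:.4f}")
# For large <n0>: S_A ~ (1/2) ln <n0> (hence logarithmic growth when <n0> ~ t^2),
# while xi^2 ~ 1/(4 <n0>) signals increasingly strong spin squeezing.
```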
While the global evolution preserves the total volume, i.e., \(\det\left(2G(t)\right)\equiv 1\), the information loss generated by projecting the collective quantum fluctuations onto a subsystem yields an increase of entropy. By Eq. (147b), this increase may be visualized as an enhancement of the projected volume spanned by the reduced quantum fluctuations within the subsystem phase space, due to the progressive stretching of the phase-space volume spanned by the quantum fluctuations, see Fig. 14. The growth of entanglement entropy out of equilibrium is thus completely determined by the dynamical generation of collective excitations, \(\langle\hat{n}_{0}(t)\rangle=\frac{1}{2}\mathrm{Tr}\left[G(t)-\frac{\mathbb{1}}{2}\right]\). As discussed in Section 4.1.3, \(G(t)\) describes the flow of linearized displacements around the classical trajectories. This connection highlights that the entanglement growth in the semiclassical regime is determined by the chaoticity properties of the underlying classical phase space. In fact, the qualitative time dependence of \(\langle\hat{n}_{0}(t)\rangle\) depends on the nature of the classical trajectories. See Table 1 for a summary. The classical dynamics of fully-connected spin-\(1/2\) systems is generically integrable (as discussed in Section 4.1.1). Hence the temporal growth of the quantum correlations is at most polynomial, \(\langle\hat{n}_{0}(t)\rangle\sim t^{2}\) [see Fig. 14(e)], leading to \[S_{A}(t)\underset{t\gg 1}{\sim}S_{A}^{\,(2)}(t)\sim\ln t. \tag{150}\] In the (non-generic) case of quenches to dynamical critical points (see discussion in Section 4.1.2), the collective spin moves along an isolated unstable trajectory, the separatrix [see Fig. 14(f)]. The out-of-equilibrium generation of collective excitations is thus exponentially fast in such critical quenches, \(\langle\hat{n}_{0}(t)\rangle\sim e^{2\lambda t}\), leading to a linear growth of the entanglement entropy with a predicted slope \[S_{A}(t)\underset{t\gg 1}{\sim}S_{A}^{\,(2)}(t)\sim\lambda t. \tag{151}\] This phenomenology fully describes the entanglement entropy dynamics of the fully-connected quantum Ising model. This is shown in Fig. 13, where \(S_{A}(t)\) is studied for quenches in the transverse field from \(h_{0}=0\) to different \(h_{f}\) below, above, and at the dynamical critical point \(h_{\mathrm{dcr}}\), discussed in Sec. 4.1.2. This analysis can be extended to general fully-connected systems whose classical limit has \(n>1\) degrees of freedom [297] and may exhibit chaos. In this case, quantum fluctuations grow as \(\langle\hat{n}_{0}(t)\rangle\sim e^{2\lambda t}\), and the growth of the entanglement entropy \(S_{A}(t)\) is generically linear in time, with a rate set by the sum of the largest \(2n_{A}\) Lyapunov exponents [297; 330; 333]: \[S_{A}(t)\underset{t\gg 1}{\sim}S_{A}^{\,(2)}(t)\sim\Lambda_{A}t=\bigg(\sum_{k=1}^{2n_{A}}\lambda_{k}\bigg)t. \tag{152}\] For \(n_{A}=n/2\), this rate coincides with the classical Kolmogorov-Sinai entropy rate \(\Lambda_{KS}=\sum_{k:\lambda_{k}>0}\lambda_{k}\) [298]. A linear growth of the entanglement entropy thus occurs in chaotic fully-connected spin systems, such as the quantum kicked top [292; 334; 335; 336; 337; 338; 339; 340; 341; 342; 343; 344; 345; 346], the Dicke model [297; 347; 348; 349; 350; 351; 352; 353; 354; 355], and larger-\(s\) fully-connected systems. The classification above is concluded by the case of near-equilibrium dynamics around stable equilibrium configurations.
The classification above is concluded by the case of near-equilibrium dynamics around stable equilibrium configurations. In this case, the linearized dynamics are equivalent to those of coupled harmonic oscillators, leading to persistent oscillations in entanglement dynamics. See Table 1 for a summary. It is worth noting that the stretching of collective quantum fluctuations in phase space, which lies at the origin of bipartite entanglement growth, is very explicitly connected with important witnesses of multipartite quantum entanglement, such as _spin squeezing_ [356] and _the Quantum Fisher Information_ [357]. A popular quantifier of spin squeezing is defined by the minimal transverse variance of collective quantum spin fluctuations [358; 359]:

\[\xi^{2}\equiv\min_{|\mathbf{u}|=1,\;\mathbf{u}\perp\mathbf{Z}}\frac{\left\langle\left(\mathbf{u}\cdot\hat{\mathbf{S}}\right)^{2}\right\rangle}{N/4}. \tag{153}\]

The squeezing parameter \(\xi^{2}\) is equal to \(1\) for coherent (separable) states, while it is smaller for squeezed states, \(\xi^{2}<1\). It has long been known [360; 361] that collective spin squeezing is a witness of many-body quantum entanglement, which can be generated by fully connected interactions [358; 359]. Indeed, over the timescale \(t\ll T_{\rm Ehr}\), the squeezing \(\xi(t)\) is explicitly related to \(\langle\hat{n}_{0}(t)\rangle\) in Eq. (149) as

\[\xi^{2}(t)=1+2\langle\hat{n}_{0}(t)\rangle-2\sqrt{\langle\hat{n}_{0}(t)\rangle(\langle\hat{n}_{0}(t)\rangle+1)}\;. \tag{154}\]

Equations (147a), (149) and (154) express the quantitative link -- pictorially illustrated in Figure 14 -- between the entanglement entropy \(S_{A}\) and the spin squeezing parameter \(\xi\) in collective spin models in the semiclassical regime, in and out of equilibrium. Following this relation, we will refer to the mechanism of dynamical entanglement entropy growth outlined here as the _spin-squeezing picture_ [58]. For definiteness, in this Report we focus on bipartite entanglement entropies; we refer the readers to Ref. [297] for further details on the dynamics of multipartite entanglement. In conclusion, we recall that the analysis presented above is valid before the Ehrenfest time scale defined in Eq. (137). Over longer times entanglement entropies saturate, \(S_{A}^{\infty}\propto\log N_{A}\).
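To make the spin-squeezing picture concrete, the following sketch evaluates Eq. (153) exactly in the maximal-spin sector for the one-axis-twisting limit of the fully-connected Ising interaction (zero transverse field, twisting strength set to one -- illustrative choices, not the specific quench protocols discussed above):

```python
import numpy as np
from scipy.linalg import expm

N = 100                                   # number of spins-1/2; S = N/2
S = N / 2
m = np.arange(S, -S - 1, -1)              # Sz eigenvalues S, S-1, ..., -S
dim = len(m)

Sz = np.diag(m)
# S^+ |S,m> = sqrt(S(S+1) - m(m+1)) |S,m+1>
sp = np.diag(np.sqrt(S * (S + 1) - m[1:] * (m[1:] + 1)), k=1)
Sx = (sp + sp.T) / 2
Sy = (sp - sp.T) / (2j)

# coherent state fully polarized along +x: rotate |S, m=S> by -pi/2 about y
psi0 = expm(-1j * (np.pi / 2) * Sy) @ np.eye(dim)[:, 0]

chi = 1.0                                 # one-axis-twisting strength (arbitrary units)
for t in [0.0, 0.02, 0.05, 0.1]:
    # H = chi * Sz^2 is diagonal in this basis, so the propagator is a pure phase
    psi = np.exp(-1j * chi * t * m**2) * psi0
    ops = [Sx, Sy, Sz]
    mean = np.array([np.real(psi.conj() @ O @ psi) for O in ops])
    n = mean / np.linalg.norm(mean)       # mean spin direction Z (here along x)
    u = np.cross(n, [0.0, 0.0, 1.0]); u /= np.linalg.norm(u)   # assumes n not || z
    v = np.cross(n, u)
    Su = sum(c * O for c, O in zip(u, ops))
    Sv = sum(c * O for c, O in zip(v, ops))
    cov = np.array([[np.real(psi.conj() @ (A @ B + B @ A) @ psi) / 2
                     for B in (Su, Sv)] for A in (Su, Sv)])
    xi2 = np.min(np.linalg.eigvalsh(cov)) / (N / 4)   # Eq. (153)
    print(f"t = {t:5.2f}:  xi^2 = {xi2:.4f}")
```

The minimization over the transverse direction \(\mathbf{u}\) reduces to the smallest eigenvalue of the \(2\times 2\) symmetrized covariance of the two transverse spin components, and \(\xi^{2}\) drops below \(1\) as the twisting builds up correlations.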
**Summary**: In fully connected systems, entanglement dynamics follow a semiclassical picture (before the Ehrenfest time), which relates it to the squeezing of quantum fluctuations in phase space. This relation generically predicts a logarithmic entanglement growth in the absence of semiclassical chaos.

### Quench dynamics of long-range interacting spin systems (\(\alpha>0\))

As soon as the interaction range is finite, the full permutational symmetry of the problem is broken and, in principle, the system could thermalize. The purpose of this Section is to describe the persistent collective character of dynamics in systems with \(0<\alpha<d\). By formulating a non-equilibrium spin-wave theory [56; 57] (Sec. 4.2.1), we will be able to develop a physical picture in terms of a semiclassical collective degree of freedom coupled to excitations with finite wavelengths. Analysis of the resulting non-linear coupled equations allows us to demonstrate freezing of finite-wavelength modes for long times, resulting in lower bounds to thermalization time scales (Sec. 4.2.2). This scenario affects DPTs (Sec. 4.2.3), scrambling dynamics (Sec. 4.2.4), and the characteristic slow growth of entanglement entropy (Sec. 4.2.5).

#### 4.2.1 Dynamics of quantum fluctuations with finite interaction range

To study the impact of finite-range interactions on the mean-field dynamics, we resort to a _non-equilibrium spin-wave theory_ developed in Refs. [56; 57]. The goal is to refine the time-dependent formalism of Sec. 4.1.3 to the full momentum-space representation of the finite-range spin Hamiltonian, Eqs. (28) and (29). Similarly to our discussion of equilibrium properties in Sec. 2.4, one can single out a collective \(\alpha\)-independent part of the Hamiltonian involving \(\mathbf{k}=\mathbf{0}\) spin operators only, and an \(\alpha\)-dependent perturbation which activates spin fluctuation modes at \(\mathbf{k}\neq\mathbf{0}\). We aim at expanding the individual spins as bosonic fluctuations around a yet unspecified time-dependent quantization axis \(\mathbf{Z}\) -- which we will later self-consistently require to coincide with the instantaneous direction of the collective spin, \(\langle\hat{\vec{S}}(t)\rangle\propto\mathbf{Z}(t)\). To achieve this, one performs the time-dependent rotation generated by \(\hat{V}(t)\) in Eq. (129). The spin components in this time-dependent frame are governed by the inertial Hamiltonian \(\tilde{\hat{H}}(t)=\hat{V}\,\hat{H}\,\hat{V}^{\dagger}+i\dot{\hat{V}}\hat{V}^{\dagger}\), as in Sec. 4.1.3. One then applies Holstein-Primakoff transformations to the individual rotating spins, as in Eq. (60). The resulting transformed Hamiltonian can be organized in the usual form

\[\tilde{H}(t)=\frac{1}{s}\bigg{[}(Ns)^{1}\tilde{\mathcal{E}}_{0}(t)+(Ns)^{1/2}\tilde{H}_{1}(t)+(Ns)^{0}\tilde{H}_{2}(t)+(Ns)^{-1/2}\tilde{H}_{3}(t)+(Ns)^{-1}\tilde{H}_{4}(t)+\ldots\bigg{]}. \tag{155}\]

As we already observed in the discussion of equilibrium properties, comparison with the expansion for \(\alpha=0\) shows that \(\hat{q}\equiv\tilde{q}_{\mathbf{k}=\mathbf{0}}\), \(\hat{p}\equiv\tilde{p}_{\mathbf{k}=\mathbf{0}}\), \(\hat{n}_{0}\equiv\hat{n}_{\mathbf{k}=\mathbf{0}}\), and that the total occupation number of spin-wave excitations \(\hat{n}_{\mathrm{sw}}\) is given by the sum of bosonic occupation numbers of all the other spin-wave modes at finite wavelength, cf. Eq. (39),

\[\hat{n}_{\mathrm{sw}}=\sum_{\mathbf{k}\neq\mathbf{0}}\hat{n}_{\mathbf{k}}\,\quad\text{with}\quad\hat{n}_{\mathbf{k}}\equiv\frac{\tilde{q}_{\mathbf{k}}\tilde{q}_{-\mathbf{k}}+\tilde{p}_{\mathbf{k}}\tilde{p}_{-\mathbf{k}}-1}{2}. \tag{156}\]

The individual occupation numbers \(\hat{n}_{\mathbf{k}\neq\mathbf{0}}\) are exactly conserved by the collective part of the Hamiltonian \(\tilde{H}_{\alpha=0}(t)\), which depends on the \(\mathbf{k}\neq\mathbf{0}\) bosons only through the collective spin length [i.e., through \(\hat{n}_{\mathrm{sw}}\)]. The equations of motion of the classical angles \(\theta(t)\), \(\phi(t)\) are once again found by imposing the condition that \(\langle\hat{S}^{\,X}(t)\rangle=\langle\hat{S}^{\,Y}(t)\rangle=0\) [56; 57], namely that the collective bosonic mode describes fluctuations around the instantaneous average spin polarization. This amounts to setting the coefficients of \(\tilde{q}_{\mathbf{k}=\mathbf{0}}\) and \(\tilde{p}_{\mathbf{k}=\mathbf{0}}\) equal to zero. Taking into account the leading term \(\tilde{H}_{1}(t)\) only, one retrieves the usual classical mean-field equations of motion (126). However, as first demonstrated in Refs. [56; 57], in the presence of finite-range interactions the collective spin trajectory may get modified by quantum fluctuations.
This effect is the non-equilibrium counterpart of the corrections to the equilibrium spin polarization arising from a finite spin-wave density in the quantum ferromagnetic phase, cf. Sec. 2.4.3. The corrections arise from the terms in \(\tilde{H}_{3}(t)\) involving a bosonic operator with \(\mathbf{k}=\mathbf{0}\) (such that the remaining two operators have momenta \(\pm\mathbf{k}\)). In physical terms, these interactions describe the scattering of a collective spin excitation into a pair of spin waves with opposite finite momenta, and vice versa. Taking into account this "feedback" from quantum fluctuations, one obtains a pair of modified equations for the angles \(\theta(t)\), \(\phi(t)\):

\[\begin{split}\frac{d}{dt}\theta=&+2J_{0}(1-\epsilon)\sin\theta\cos\phi\sin\phi\\ &-2J_{0}\bigg{(}\frac{1}{Ns}\sum_{\mathbf{k}}f_{\mathbf{k}}(\alpha)\ \left\langle\tilde{q}_{\mathbf{k}}\tilde{p}_{-\mathbf{k}}\right\rangle\bigg{)}\sin\theta\cos\phi\sin\phi\\ &+2J_{0}\bigg{(}\frac{1}{Ns}\sum_{\mathbf{k}}f_{\mathbf{k}}(\alpha)\frac{\left\langle\tilde{q}_{\mathbf{k}}\tilde{p}_{-\mathbf{k}}+\tilde{p}_{\mathbf{k}}\tilde{q}_{-\mathbf{k}}\right\rangle}{2}\bigg{)}\cos\theta\sin\theta\cos^{2}\phi,\end{split} \tag{157a}\]

\[\begin{split}\frac{d}{dt}\phi=&-h+2J_{0}(1-\epsilon)\cos\theta\cos^{2}\phi\\ &-2J_{0}\bigg{(}\frac{1}{Ns}\sum_{\mathbf{k}\neq 0}f_{\mathbf{k}}(\alpha)\ \left\langle\tilde{q}_{\mathbf{k}}\tilde{q}_{-\mathbf{k}}\right\rangle\bigg{)}\cos\theta\cos^{2}\phi\\ &+2J_{0}\bigg{(}\frac{1}{Ns}\sum_{\mathbf{k}\neq 0}f_{\mathbf{k}}(\alpha)\frac{\left\langle\tilde{q}_{\mathbf{k}}\tilde{p}_{-\mathbf{k}}+\tilde{p}_{\mathbf{k}}\tilde{q}_{-\mathbf{k}}\right\rangle}{2}\bigg{)}\sin\phi\cos\phi,\end{split} \tag{157b}\]

where we introduced the time-dependent _spin-wave density_

\[\epsilon(t)\equiv\frac{\left\langle\tilde{n}_{\text{tot}}(t)\right\rangle}{Ns}=\frac{1}{Ns}\sum_{\mathbf{k}}\frac{\left\langle\tilde{q}_{\mathbf{k}}\tilde{q}_{-\mathbf{k}}\right\rangle+\left\langle\tilde{p}_{\mathbf{k}}\tilde{p}_{-\mathbf{k}}\right\rangle-1}{2}\,. \tag{158}\]

As usual, we observe that the impact of quantum fluctuations on the classical trajectory is suppressed in the classical limit \(s\to\infty\), and grows with \(\alpha\) at fixed \(s\). Similarly to what we found in equilibrium (Sec. 2.4), the properties of \(f_{\mathbf{k}}(\alpha)\) imply that all the quantum feedback terms on the right-hand sides of the above equations of motion are vanishingly small in the thermodynamic limit for \(0<\alpha<d\) and parametrically small for \(\alpha\searrow d\). In the latter case, we can make the usual replacement \((1/N)\sum_{\mathbf{k}}\mapsto\int d^{d}\mathbf{k}/(2\pi)^{d}\) in the thermodynamic limit. However, in finite systems those corrections could be expected to play a role for arbitrary \(\alpha\) at sufficiently long times. In turn, the evolution of quantum fluctuations is regulated by the spin-wave Hamiltonian \(\tilde{H}_{2}(t)\) to the same order of approximation as above. It is instructive to report its explicit expression for the considered variable-range quantum Ising model:

\[\begin{split}\tilde{H}_{2}(t)=&-2J_{0}\sum_{\mathbf{k}}f_{\mathbf{k}}(\alpha)\left(\cos^{2}\theta\cos^{2}\phi\,\frac{\tilde{q}_{\mathbf{k}}\tilde{q}_{-\mathbf{k}}}{2}+\sin^{2}\phi\,\frac{\tilde{p}_{\mathbf{k}}\tilde{p}_{-\mathbf{k}}}{2}-\cos\theta\sin\phi\cos\phi\,\frac{\tilde{p}_{\mathbf{k}}\tilde{q}_{-\mathbf{k}}+\tilde{q}_{\mathbf{k}}\tilde{p}_{-\mathbf{k}}}{2}\right)\\ &+2J_{0}\cos^{2}\phi\,\sum_{\mathbf{k}}\hat{n}_{\mathbf{k}}\,. \end{split} \tag{159}\]
This Hamiltonian is equivalent to a set of _externally driven_ quantum harmonic oscillators, labelled by the momentum \(\mathbf{k}\), where the driving is given by the motion of \(\theta(t)\) and \(\phi(t)\) and controlled by the couplings \(f_{\mathbf{k}}(\alpha)\). To close the system of equations of motion it is convenient to define the momentum-resolved correlation functions

\[G_{\mathbf{k}}^{qq}(t)\equiv\left\langle\,\tilde{q}_{\mathbf{k}}(t)\tilde{q}_{-\mathbf{k}}(t)\,\right\rangle,\quad G_{\mathbf{k}}^{pp}(t)\equiv\left\langle\,\tilde{p}_{\mathbf{k}}(t)\tilde{p}_{-\mathbf{k}}(t)\,\right\rangle, \tag{160a}\]

\[G_{\mathbf{k}}^{qp}(t)\equiv\frac{1}{2}\left\langle\,\tilde{q}_{\mathbf{k}}(t)\tilde{p}_{-\mathbf{k}}(t)+\ \tilde{p}_{\mathbf{k}}(t)\tilde{q}_{-\mathbf{k}}(t)\right\rangle\,. \tag{160b}\]

Starting from the Heisenberg equations \(\frac{d}{dt}\tilde{q}_{\mathbf{k}}=i[\tilde{H}_{2}(t),\tilde{q}_{\mathbf{k}}]\) and \(\frac{d}{dt}\tilde{p}_{\mathbf{k}}=i[\tilde{H}_{2}(t),\tilde{p}_{\mathbf{k}}]\) we compute

\[\begin{cases}\dot{G}_{\mathbf{k}}^{qq}=4J_{0}f_{\mathbf{k}}(\alpha)\,\cos\theta\cos\phi\sin\phi\,G_{\mathbf{k}}^{qq}+4J_{0}\,\left(\cos^{2}\phi-f_{\mathbf{k}}(\alpha)\,\sin^{2}\phi\right)\,G_{\mathbf{k}}^{qp}\,,\\ \dot{G}_{\mathbf{k}}^{pp}=-4J_{0}\left(\cos^{2}\phi-f_{\mathbf{k}}(\alpha)\,\cos^{2}\theta\cos^{2}\phi\right)G_{\mathbf{k}}^{qp}-4J_{0}f_{\mathbf{k}}(\alpha)\,\cos\theta\cos\phi\sin\phi\,G_{\mathbf{k}}^{pp}\,,\\ \dot{G}_{\mathbf{k}}^{qp}=-2J_{0}\left(\cos^{2}\phi-f_{\mathbf{k}}(\alpha)\,\cos^{2}\theta\cos^{2}\phi\right)G_{\mathbf{k}}^{qq}+2J_{0}\left(\cos^{2}\phi-f_{\mathbf{k}}(\alpha)\,\sin^{2}\phi\right)G_{\mathbf{k}}^{pp}\,.\end{cases} \tag{161}\]

Like Eqs. (136), these equations are not independent, due to the relation \(4(G_{\mathbf{k}}^{qp})^{2}=4\,G_{\mathbf{k}}^{qq}\,G_{\mathbf{k}}^{pp}-1\), which is an exact property of Gaussian pure states, and which is then satisfied at all times and for all \(\mathbf{k}\)'s to the considered level of approximation. The _general physical picture_ is now clear:

* To lowest order the collective spin follows the classical mean-field trajectory;
* This collective spin motion acts as an external drive for the spin-wave excitations, whereby the couplings \(f_{\mathbf{k}}(\alpha)\) control the _driving amplitude_;
* The dynamically populated non-equilibrium spin-wave "bath" may in turn back-react and modify the collective spin dynamics via the quantum feedback terms.

To quadratic order of approximation in the spin waves, the quantum many-body dynamics of the system is described by the closed set of coupled non-linear evolution equations (157) and (161), together with suitable initial conditions (which may be a ground or thermal state of a pre-quench Hamiltonian -- see the discussion on equilibrium states in Sec. 2.4). A minimal numerical sketch of this closed scheme is given below. This effective decoupling between the dominant zero mode and the suppressed finite-\(\mathbf{k}\) spin waves has recently been exploited in Refs. [362; 363]. There, the quadratic zero mode is replaced by a (fully quantum) rotor, while the finite-\(\mathbf{k}\) spin waves are kept at the quadratic level. This approach allows one to reproduce the dynamics of quantum fluctuations beyond the Ehrenfest time.
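The sketch below integrates Eqs. (157) and (161) for the chain in \(d=1\) with a plain Euler stepper, keeping only a handful of long-wavelength modes. The coupling function \(f_{\mathbf{k}}(\alpha)\) is approximated by the normalized lattice Fourier transform of the power-law couplings, which is an assumption of this sketch (the precise definition is Eq. (29)); the quench parameters are likewise illustrative:

```python
import numpy as np

J0, h, alpha = 1.0, 0.5, 0.7        # illustrative post-quench parameters
N, s, n_modes = 200, 0.5, 20        # chain length, spin size, tracked k-modes

# f_k(alpha): normalized Fourier transform of 1/r^alpha couplings on a ring
# (assumption of this sketch; cf. Eq. (29))
r = np.arange(1, N)
w = np.minimum(r, N - r).astype(float) ** (-alpha)
ks = 2 * np.pi * np.arange(1, n_modes + 1) / N
fk = np.array([np.sum(w * np.cos(k * r)) for k in ks]) / np.sum(w)

# quench from the x-polarized state: theta = pi/2, phi = 0; spin waves in vacuum
theta, phi = np.pi / 2, 0.0
Gqq = np.full(n_modes, 0.5); Gpp = np.full(n_modes, 0.5); Gqp = np.zeros(n_modes)

dt, T = 1e-4, 5.0
for _ in range(int(T / dt)):
    eps = np.sum(0.5 * (Gqq + Gpp - 1.0)) / (N * s)   # Eq. (158), tracked modes only
    ct, st, cp, sphi = np.cos(theta), np.sin(theta), np.cos(phi), np.sin(phi)
    # collective angles with quantum feedback, Eqs. (157)
    fb_qp = np.sum(fk * Gqp) / (N * s)
    fb_qq = np.sum(fk * Gqq) / (N * s)
    dtheta = 2*J0*(1-eps)*st*cp*sphi - 2*J0*fb_qp*st*cp*sphi + 2*J0*fb_qp*ct*st*cp**2
    dphi = -h + 2*J0*(1-eps)*ct*cp**2 - 2*J0*fb_qq*ct*cp**2 + 2*J0*fb_qp*sphi*cp
    # driven-oscillator equations for the correlators, Eqs. (161)
    a = 4*J0*fk*ct*cp*sphi
    b = 4*J0*(cp**2 - fk*sphi**2)
    c = 4*J0*(cp**2 - fk*ct**2*cp**2)
    dGqq = a*Gqq + b*Gqp
    dGpp = -c*Gqp - a*Gpp
    dGqp = -(c/2)*Gqq + (b/2)*Gpp
    theta += dt*dtheta; phi += dt*dphi
    Gqq += dt*dGqq; Gpp += dt*dGpp; Gqp += dt*dGqp

nk = 0.5 * (Gqq + Gpp - 1.0)
print("partial spin-wave density eps(T):", np.sum(nk) / (N * s))
print("n_k for the 3 longest-wavelength modes:", nk[:3])
```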
The formalism derived above is expected to provide an accurate description of the time-evolving many-body wave function whenever the dynamically generated spin-wave density \(\epsilon(t)\) remains small, i.e., \(\epsilon(t)\ll 1\). This diluteness condition allows us to theoretically describe the many-body dynamics as a self-consistently driven weakly-interacting bosonic gas. The neglected higher-order terms account for the non-linear scattering among spin waves, which is expected to contribute to the dynamics only over time scales parametrically long in \(1/\epsilon\). Physically, the diluteness condition above corresponds to the requirement that the time-evolving collective spin magnitude \(S\) remains close to its maximal value \(Ns\), as showcased by Eq. (39). The quality of this approximation is significantly impacted by the interaction range via \(f_{\mathbf{k}}(\alpha)\). Below we will use the formalism outlined above to review how the finite range of interactions impacts the mean-field DPT, scrambling, and entanglement dynamics.

**Summary**: Finite-wavelength fluctuations can be modeled as a set of driven bosonic modes, where the drive is given by the collective spin motion. The quantum fluctuations, in turn, may back-react on the collective spin dynamics. The full dynamics is thus described by a set of coupled non-linear evolution equations.

#### 4.2.2 Prethermal freezing of spin-wave excitations

The dynamical generation of spin-wave excitations with non-vanishing momenta for \(\alpha>0\) is responsible for modifications to the mean-field dynamics. As is manifest in Eqs. (157), the impact of the quantum feedback is controlled by the finite-range perturbation via \(f_{\mathbf{k}\neq\mathbf{0}}(\alpha)\). In turn, the same quantities \(f_{\mathbf{k}\neq\mathbf{0}}(\alpha)\) bound the rate itself of dynamical generation of spin waves: From Eq. (159) it is clear that \(\left[\hat{n}_{\mathbf{k}\neq\mathbf{0}},\tilde{H}(t)\right]\propto f_{\mathbf{k}\neq\mathbf{0}}(\alpha)\), consistently with the sum of the first two equations in (161). To formulate a more precise bound we can proceed as follows: First, we can eliminate the free spin-wave precession, i.e., the last term in (159), by switching to the "interaction picture": \(\tilde{q}_{\mathbf{k}}\mapsto\cos\Phi(t)\tilde{q}_{\mathbf{k}}+\sin\Phi(t)\tilde{p}_{\mathbf{k}}\), \(\tilde{p}_{\mathbf{k}}\mapsto-\sin\Phi(t)\tilde{q}_{\mathbf{k}}+\cos\Phi(t)\tilde{p}_{\mathbf{k}}\), where \(\Phi(t)=\int_{0}^{t}ds\,2J_{0}\cos^{2}\phi(s)\) (see Footnote 20). Such a "gauge" transformation does not change the dynamical population of spin waves. All terms on the right-hand side of the modified Eq. (161) are now proportional to \(f_{\mathbf{k}\neq\mathbf{0}}(\alpha)\). We can then apply Gronwall's lemma [364] to this linear system of differential equations to bound the growth of the \(G_{\mathbf{k}}(t)\)'s: In particular,

Footnote 20: While the axis \(\mathbf{Z}\) is fixed by a self-consistency requirement, the orientation of the transverse axes \(\mathbf{X}\), \(\mathbf{Y}\) is a "gauge freedom" within the formalism. The transformation above amounts to performing a time-dependent rotation of the transverse spin components by the angle \(\Phi(t)\). With this choice of co-moving frame, the spin-wave Hamiltonian fully vanishes for \(\alpha=0\): spin fluctuations look frozen.

\[\langle\tilde{n}_{\mathbf{k}}(t)\rangle=\frac{1}{2}\left(G_{\mathbf{k}}^{qq}(t)+G_{\mathbf{k}}^{pp}(t)-1\right)\leq\frac{1}{2}\left[\exp\left(c|f_{\mathbf{k}}(\alpha)|J_{0}t\right)-1\right]\;, \tag{162}\]

where \(c\) is a constant related to the norm of the monodromy matrix on the right-hand side of Eq. (161).
Thus, spin-wave excitations at momentum \(\mathbf{k}\) can only be dynamically generated over time scales \(J_{0}t\gg 1/f_{\mathbf{k}}(\alpha)\). Of course, the bound in Eq. (162) is only useful for \(\alpha\lesssim d\). As discussed in Sec. 2.4 and in Appendix C, the couplings \(f_{\mathbf{k}\neq\mathbf{0}}(\alpha)\) are suppressed in the thermodynamic limit \(N\to\infty\) when \(0<\alpha<d\) and are parametrically small as \(\alpha\searrow d\); see Fig. 3 for an illustration. This straightforwardly provides us with a bound on the rate of growth of the population of bosonic excitations for \(0<\alpha<d\) (see Appendix C):

\[|f_{\mathbf{k}}(\alpha)|\leq\;\frac{\text{const}}{(|\mathbf{k}|L)^{\beta}}\;,\quad\text{with}\quad\beta\equiv\min\left(d-\alpha,(d+1)/2\right)\,. \tag{163}\]

Therefore, there exists a long time scale

\[T_{\text{sw}}\sim N^{\beta/d}\;, \tag{164}\]

during which the dynamical excitation of spin waves with _finite wavelengths_ is suppressed (see Footnote 21). We can further easily bound the total density of spin-wave excitations _at finite times_:

Footnote 21: Note that this further _a posteriori_ justifies the Holstein-Primakoff approach, as the density of spin waves remains small over a long time window.

\[\epsilon(t)\leq\frac{1}{Ns}\sum_{\mathbf{k}}\frac{1}{2}\left[\exp\left(c|f_{\mathbf{k}}(\alpha)|J_{0}t\right)-1\right]\sim\frac{J_{0}t}{N}\sum_{\mathbf{k}}|f_{\mathbf{k}}(\alpha)|\,. \tag{165}\]

For \(0<\alpha<d\) we have [recall \(\mathbf{k}\equiv\mathbf{k}_{\boldsymbol{\ell}}=2\pi\boldsymbol{\ell}/L\), with \(\boldsymbol{\ell}=(\ell_{1},\ldots,\ell_{d})\) integers]

\[\epsilon(t)\leq\frac{J_{0}t}{L^{d}}\sum_{|\boldsymbol{\ell}|<L/2}\frac{1}{|\boldsymbol{\ell}|^{\beta}}\sim\frac{J_{0}t}{L^{\beta}}\,. \tag{166}\]

This bound proves that the spin-wave formalism described here is asymptotically exact in this regime, and that the mean-field description of the collective spin polarization dynamics becomes exact in the thermodynamic limit. (We stress that this conclusion requires that the limit \(N\to\infty\) be taken before \(t\to\infty\).) It must be noted that the bound (162) above is overly pessimistic, as it considers exponential instability for _all_ the bosonic modes (and with the worst possible rates). In reality, our system can be approximately viewed as a set of quantum harmonic oscillators subject to a periodic parametric drive at a frequency given by the classical collective spin precession. Such driven oscillators may or may not meet a resonance; for a given quench, resonances can be detected by performing a stability analysis of the stroboscopic (Floquet) evolution operator for each spin-wave mode [58]:

\[\begin{pmatrix}\tilde{q}_{\mathbf{k}}(t_{0}+T_{\mathrm{cl}})\\ \tilde{p}_{\mathbf{k}}(t_{0}+T_{\mathrm{cl}})\end{pmatrix}=U_{\mathbf{k}}(T_{\mathrm{cl}})\cdot\begin{pmatrix}\tilde{q}_{\mathbf{k}}(t_{0})\\ \tilde{p}_{\mathbf{k}}(t_{0})\end{pmatrix} \tag{167}\]

over the period \(T_{\mathrm{cl}}\) of the motion of the angles \(\theta(t)\), \(\phi(t)\). The eigenvalues \(e^{\pm\lambda_{\mathbf{k}}T_{\mathrm{cl}}}\) of the \(2\times 2\) matrix \(U_{\mathbf{k}}(T_{\mathrm{cl}})\) directly give information on the _Floquet quasi-frequency_ \(\lambda_{\mathbf{k}}\) (see, e.g., Refs. [365; 81]) of the driven oscillator, which may be purely real (resonance at mode \(\mathbf{k}\)) or purely imaginary, \(\lambda_{\mathbf{k}}=i\omega_{\mathbf{k}}\) (non-resonance). In the latter case, the mode population performs bounded quasiperiodic oscillations, whereas in the former case it blows up exponentially. A minimal numerical sketch of this stability analysis is given below.
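As a toy illustration, consider a single parametrically driven oscillator standing in for one spin-wave mode; the sinusoidal frequency modulation below is an assumed stand-in for the actual drive generated by the collective trajectory \(\theta(t),\phi(t)\):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy mode: q' = p, p' = -w(t)^2 q, with w(t)^2 = w0^2 + A*cos(Omega*t).
# Omega = 2*w0 sits on the principal parametric resonance.
w0, A, Omega = 1.0, 0.8, 2.0
T_cl = 2 * np.pi / Omega

def rhs(t, X):
    # X holds the 2x2 fundamental matrix, flattened
    M = X.reshape(2, 2)
    Amat = np.array([[0.0, 1.0], [-(w0**2 + A * np.cos(Omega * t)), 0.0]])
    return (Amat @ M).ravel()

sol = solve_ivp(rhs, (0.0, T_cl), np.eye(2).ravel(), rtol=1e-10, atol=1e-12)
U = sol.y[:, -1].reshape(2, 2)       # monodromy matrix U_k(T_cl), cf. Eq. (167)
mults = np.linalg.eigvals(U)         # Floquet multipliers e^{+-lambda*T_cl}
lam = np.log(mults.astype(complex)) / T_cl
print("Floquet multipliers:", mults)
print("Re(lambda):", lam.real,
      "-> resonant" if np.max(np.abs(mults)) > 1 + 1e-6 else "-> stable")
```

Since the flow is Hamiltonian, \(\det U=1\) and the multipliers come in reciprocal pairs; a multiplier of modulus larger than one signals a resonant (exponentially growing) mode.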
The real parts of the Floquet quasi-frequencies play the role of finite-time Lyapunov exponents for our system. To quantify the global stability of the system's dynamics for a given quantum quench, one can compute the sum of all _positive_ Lyapunov exponents, which gives the Kolmogorov-Sinai entropy rate \(\Lambda_{\mathrm{KS}}\). It is convenient to inspect this quantity upon varying the fully polarized initial configurations, parametrized by spherical angles:

\[\Lambda_{\mathrm{KS}}(\theta_{0},\phi_{0})=\sum_{\mathbf{k}:\lambda_{\mathbf{k}}>0}\mathrm{Re}[\lambda_{\mathbf{k}}(\theta_{0},\phi_{0})]\;. \tag{168}\]

This quantity detects whether resonances are present for the considered quench, and quantifies the actual (initial) instability. While the stability analysis described here can be done for arbitrary \(\alpha\), it is only really meaningful for \(\alpha\lesssim d\), as for larger \(\alpha\) the density of spin waves becomes finite in a finite time, and non-linear effects in the full many-body dynamics cannot be neglected. For \(0<\alpha<d\) the spin waves effectively reduce to a _discrete_ set of periodically driven quantum oscillators associated with long-wavelength modes with \(\mathbf{k}\propto 1/L\), cf. the discussion in Sec. 2.4. The stability analysis described above was performed for a long-range quantum Ising chain with \(0<\alpha<1\) in Ref. [58]. For a large set of initial conditions, spin waves are found to be _stable_ (i.e., non-resonant), and consequently their population remains bounded in time. While there is no simple rule to predict the existence of resonances, numerical exploration suggests that quenches _near_ dynamical criticality typically give rise to resonant excitation of long-wavelength spin waves. In other words, the classical separatrix of the mean-field dynamics for \(\alpha=0\) typically broadens to a _finite_ layer of instability (chaoticity) for \(\alpha>0\). In the left panel of Fig. 15 we report the \(k\)-resolved spin-wave dynamical population for \(\alpha=0.7\), obtained by numerically solving Eq. (161). The occurrence of resonances is systematically illustrated in the right panel of Fig. 15, which displays the value of \(\Lambda_{\mathrm{KS}}(\theta_{0},\phi_{0})\) as a function of the initial configuration. These results show that, at least in this case, only quenches near dynamical criticality give rise to resonant excitation of spin waves. For quenches that do exhibit resonances at certain non-vanishing momenta, the bound (162) captures the qualitative time-dependence of the population of the corresponding modes (the actual rate \(\lambda_{\mathbf{k}}\) is, however, generally lower). Thus, the rapidly growing population of excitations is expected to generate non-linear effects in the full many-body dynamics over comparatively short time scales of order

\[T_{\mathrm{Ehr}}\sim\log N\,. \tag{169}\]

As anticipated above, such resonances involve selected modes with small momenta \(\mathbf{k}\propto 1/L\), as finite-momentum modes are driven very weakly. While all the other modes remain weakly excited (at least) for much longer times \(T_{\mathrm{sw}}\), the long-wavelength resonant modes form, together with the collective spin, a full-fledged non-linear dynamical system for intermediate time scales \(T_{\mathrm{Ehr}}\ll t\ll T_{\mathrm{sw}}\), which will generally feature semiclassical chaotic behaviour.
As the target temporal window lies beyond the Ehrenfest time scale, the self-consistent approximation expounded here is not expected to provide a quantitatively accurate description of dynamics in this regime, and one must resort to numerical simulations to probe this conjectured chaotic behaviour. A closely related prethermal regime was discussed by Mori in Ref. [291] from the point of view of the quasi-conservation of permutation (swap) operators. Even though permutational symmetry is dynamically broken for \(\alpha>0\), Ref. [291] proves the following bound for long-range interacting quantum Ising chains:

\[\left|\langle\hat{P}_{ij}(t)\rangle-\langle\hat{P}_{ij}(0)\rangle\right|\leq\left|i-j\right|c_{\alpha}\frac{J_{0}t}{N^{1-\alpha}}\,, \tag{170}\]

where \(\hat{P}_{ij}\) is the swap operator between spins at sites \(i\), \(j\) of the chain and \(c_{\alpha}\) is an \(N\)-independent constant. [This result can be easily generalized to arbitrary long-range interacting quantum systems.] As a result, starting from a permutationally-invariant state with \(\langle\hat{P}_{ij}(0)\rangle=1\) (e.g., fully polarized along a given direction), one deduces that the symmetry under permuting spins _at a finite distance_ is approximately conserved, i.e., \(\langle\hat{P}_{ij}(t)\rangle\sim 1\), up to time scales \(\sim N^{1-\alpha}\). This time scale coincides with \(T_{\rm sw}\) defined above upon setting \(d=1\). The underlying physical reason is that permutations at finite distances probe finite-wavelength fluctuations, which are consistently frozen over the same time scale. On the other hand, long-wavelength fluctuations are associated with permutations of spins over distances proportional to the length \(L\) of the spin chain, for which no long-time quasi-conservation can be guaranteed. Indeed, a severe breakdown of permutational symmetry may occur over a comparatively short time \(T_{\rm Ehr}\sim\log N\) on such large length scales [cf. the discussion on resonances above]. For the quench considered in Ref. [291], it can be shown that spin-wave resonances exist [58]. This observation points to the onset of a chaotic semiclassical regime over a time scale \(T_{\rm Ehr}\sim\ln N\), in agreement with the detection of semiclassical chaotic behaviour within the numerical simulations of prethermal dynamics in Ref. [291]. Finally, it is important to remark that for finite systems, the full self-consistent system of equations (157) and (161) allows us to investigate non-linear effects arising from dynamical changes in the driving frequency triggered by the quantum feedback. Long-time numerical computations suggest that the stability diagram based on the resonance picture outlined above is robust to the inclusion of the quantum feedback. It is an open problem to characterize conditions for long-time stability in long-range interacting spin systems -- both within and beyond the spin-wave approximation discussed here. An exciting possibility would be to show Kolmogorov-Arnold-Moser-type stability for this class of systems, which would provide a kind of interpolation between scenarios in few-body and many-body physics.

**Summary**: In the strong long-range regime, there exists a long time scale during which the finite-wavelength excitations are suppressed. A stability analysis shows that they are stable for generic quenches, while instabilities for finite \(\alpha\) can appear in the proximity of dynamical phase transitions.
#### 4.2.3 Impact of finite-range interactions on dynamical phase transitions

In the last Subsection, we established that for quenches sufficiently far away from dynamical critical points the dynamical generation of spin-wave excitations is non-resonant. This observation leads to a long time window \(0<t<T_{\rm sw}\sim N^{\beta/d}\) (at least) where the spin-wave population remains low, and the collective spin evolution remains close to the unperturbed (mean-field) persistent oscillations. In this scenario, the fate of the mean-field dynamical phase transitions (DPTs, discussed above in Sec. 4.1.2) stands out as a naturally prominent question. This issue has been largely studied as a function of the interaction-range parameter \(\alpha\) using various numerical approaches [234; 239; 242; 293; 366]. For the standard long-range quantum Ising chain, a DPT is found in the thermodynamic limit for \(0\leq\alpha<2\), while it is absent for \(\alpha>2\), in qualitative agreement with the equilibrium phase diagram. This was shown using matrix-product-state dynamical simulations in Ref. [242] [see Fig. 16(a-c)], where the relation between these DPTs and the singularities in the time-dependence of the Loschmidt echo (DQPTs) has been elucidated, see also Refs. [234; 239]. This transition has been studied via the semiclassical truncated Wigner approximation in Ref. [293], where it was found that the critical exponents for \(\alpha\lesssim 0.5\) are the same as for the mean-field DPT. These DPTs have been experimentally observed with trapped-ion quantum simulators, which simulate versions of the long-range quantum Ising chain [21; 272; 367; 368]. Finite-range perturbations can also have a strong impact on the qualitative aspect of the mean-field dynamical phase diagram by inducing new exotic dynamical phases. Refs. [56; 57] studied the impact of short-range interactions on top of the fully-connected Ising model, showing the emergence of a _chaotic dynamical phase_ within which the asymptotic magnetic ordering is characterized by strong sensitivity to the parameters and initial conditions. It is found that nonequilibrium fluctuations can significantly affect the critical dynamics, resulting in a pseudo-aleatory collective evolution, reminiscent of a particle moving in a multiple-well potential with a large initial energy and subject to friction. The nonequilibrium phase diagram universally acquires the basic characteristics of a "coin toss" experiment. This result is confirmed by matrix-product-state numerical simulations away from the perturbative regime [56; 57], and a similar scenario was observed in the Dicke model [369].

**Summary**: Consistent with the prethermal scenario discussed above, dynamical phases on the two sides of a DPT persist for \(0\leq\alpha<2\), as demonstrated by numerical results. Semiclassical chaos appears in correspondence of the dynamical critical point.

#### 4.2.4 Scrambling dynamics with variable-range interactions

Let us briefly discuss how a finite value of \(\alpha>0\) impacts the scrambling of quantum information and in particular the OTOC (139) dynamics, introduced in Sec. 4.1.4. We recall that the square-commutator has been initially put forward due to its exponential growth, i.e.,

\[C(t)=\langle\big{|}[\hat{A}(t),\hat{B}(0)]\big{|}^{2}\rangle\simeq\hbar_{\rm eff}^{2}\ e^{2\lambda t}\,\]

valid before the Ehrenfest or scrambling time \(t\ll T_{\rm Ehr}\sim\ln\hbar_{\rm eff}^{-1}\sim\ln N\) for systems with a semiclassical chaotic limit.

Figure 15: (Left) Time-dependent \(k\)-resolved spin-wave population for \(\alpha=0.7\) after a quench from \(h_{0}=0\) to \(h_{f}=2J\). The blue color gradient for the spin-wave populations in Fourier modes follows the quasimomentum \(|k|\) from the darkest (\(k=\pm 2\pi/L\)) down to smaller-wavelength modes with larger \(|k|\) (only the first 20 modes out of \(N=500\) are shown). (Right) Density plot of the Kolmogorov-Sinai entropy rate \(\Lambda_{\rm KS}(\theta_{0},\phi_{0})\) for different initial conditions (\(\theta_{0},\phi_{0}\)) on the Bloch sphere for \(\alpha=0.7\), \(h=0.5J\). The picture is converged with respect to refining the \(k\)-space discretization (here \(N=100\)). Plots adapted from Ref. [58].
This kind of behaviour characterizes as well large-\(N\) all-to-all disordered interacting models [370; 371; 372; 373; 374; 375], despite the absence of an obvious semiclassical limit, including the SYK model, which saturates the bound on chaos [302]. However, such exponential growth -- also known as _fast scrambling_ [376] -- is challenged by finite-range interactions. The square-commutator \(C(t)\) was proved to grow at most polynomially in locally interacting systems [377] and in long-range interacting systems with \(\alpha>d\) [378]. This led to several proposals suggesting fully-connected interactions as a resource for fast scrambling [379; 380; 381; 382]. Finite-range interactions lead to a well-defined spatial structure, which allows investigating the space-dependent square commutator

\[C(r,t)=\langle\big{|}[\hat{A}_{\mathbf{x}}(t),\hat{B}_{\mathbf{x}_{0}}(0)]\big{|}^{2}\rangle\;, \tag{171}\]

where \(r=|\mathbf{x}-\mathbf{x}_{0}|\) is the distance between the locations of the two considered operators. For locally interacting systems the square commutator \(C(r,t)\) becomes appreciable at times \(t\sim r/v_{B}\) [383; 384], where \(v_{B}\), referred to as the _butterfly velocity_ [385], is generally smaller than the Lieb-Robinson one, \(v_{B}\leq v_{LR}\) [cf. (71)]. By contrast, long-range interactions are found to induce a non-linear light-cone effect, whereby information can spread super-ballistically. This occurrence has been studied numerically in quantum spin chains with variable-range interactions [386; 387; 388; 292] and established via effective hydrodynamic descriptions in disordered models [389; 390; 391]. All these studies indicate the absence of ballistic spreading of \(C(r,t)\) for \(\alpha\leq d\). For quantum spin systems in the strong long-range regime \(\alpha\lesssim d\), one may use the non-equilibrium spin-wave theory reviewed in Sec. 4.2.1 above to study the dynamics of the space-resolved square commutator \(C(r,t)\). In agreement with the onset of semiclassical chaos for near-critical quantum quenches, discussed above in Sec. 4.2.2, one may expect a concomitant exponential growth of the square commutator in that regime; a minimal exact-diagonalization sketch of \(C(r,t)\) is given below.

Figure 16: Dynamical phase transition with finite \(\alpha\) in the Ising Hamiltonian (1) with \(\gamma=1\). (a-c) Asymptotic value of the order parameter in Eq. (128) for different values of the post-quench field \(h_{f}/J\) for quenches from \(h_{0}=-\) with (a) \(\alpha=0.1\), (b) \(\alpha=1.5\), and (c) \(\alpha=3\). Panels adapted from Ref. [242]. (d-f) Experimental data of the spin-magnetization \(\langle\hat{S}^{x}(t)\rangle\) dynamics on a trapped-ion quantum simulator. Data with \(L=16\) and \(\alpha\simeq 0.8\) and transverse field \(h_{f}/J=B_{z}/J_{0}=0.6,0.,1.6\) in (d,e,f) respectively. Panels adapted from Ref. [21].
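A brute-force illustration of the space-resolved square commutator (171) is possible by exact diagonalization of a small long-range Ising chain; the sketch below (with arbitrary illustrative parameters and a Kac-like \(1/N\) normalization of the couplings, both assumptions of this example) is a direct numerical check, not the spin-wave computation described above:

```python
import numpy as np
from scipy.linalg import expm

N, alpha, J0, h = 8, 0.7, 1.0, 0.5      # illustrative values
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def op(single, site):
    """Embed a single-site operator at `site` in the N-spin Hilbert space."""
    out = np.eye(1, dtype=complex)
    for j in range(N):
        out = np.kron(out, single if j == site else np.eye(2))
    return out

# long-range Ising Hamiltonian, couplings ~ 1/|i-j|^alpha (Kac-like 1/N factor)
H = np.zeros((2**N, 2**N), dtype=complex)
for i in range(N):
    for j in range(i + 1, N):
        H -= (J0 / N) * abs(i - j) ** (-alpha) * op(sx, i) @ op(sx, j)
    H -= h * op(sz, i)

B0 = op(sz, 0)                                       # static operator at site 0
psi = np.zeros(2**N, dtype=complex); psi[0] = 1.0    # all spins up along z

for t in [0.5, 1.0, 2.0]:
    U = expm(-1j * H * t)
    for r in [1, N // 2, N - 1]:
        Ar_t = U.conj().T @ op(sz, r) @ U            # Heisenberg-evolved A_r(t)
        comm = Ar_t @ B0 - B0 @ Ar_t
        C = np.real(psi.conj() @ (comm.conj().T @ comm) @ psi)
        print(f"t={t:4.1f}, r={r}: C(r,t) = {C:.4e}")
```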
#### 4.2.5 Entanglement entropy dynamics: Spin-squeezing vs quasiparticle picture

Long-range interacting systems exhibit conceptually different dynamics of the entanglement entropy (147a) with respect to locally interacting systems [325; 326; 112; 113; 114]. On one side, their non-local interactions allow quantum correlations between distant degrees of freedom to build up very quickly. As discussed above, this leads to violations of the Lieb-Robinson bound (71) and nonlinear light-cone spreading of quantum correlations, see Secs. 3.1 and 4.2.4. On the other hand, the bipartite entanglement entropy growth after a quench with the Hamiltonian (1) was found to exhibit a counterintuitive dramatic slowdown as the range of interactions is increased: It becomes logarithmically slow for algebraically-decaying couplings with \(\alpha\) smaller than the spatial dimensionality \(d\) [125; 292; 58; 127], see also Fig. 17. Such numerical results can be rationalized using the semiclassical techniques introduced above, which lead to a complete picture of entanglement growth for long-range interacting systems [58]. Reviewing this framework is the goal of the present Subsection. For completeness, we mention in passing that multipartite entanglement associated with algebraically decaying interactions has been studied in depth, e.g., in the form of dynamical spin squeezing [392; 393; 394; 395; 396; 397] or via its relation to dynamical susceptibilities in equilibrium [398]. In the fully-connected limit \(\alpha=0\), the growth of entanglement is determined by the squeezing of the collective fluctuations stemming from the underlying classical trajectory, see Fig. 5. The general framework illustrated in Sec. 4.1.5 predicts logarithmic growth in the absence of semiclassical chaos (Table 1), which is generic in fully-connected spin systems (see Footnote 22).

Footnote 22: Without self-interactions if \(s>1/2\).

For finite \(\alpha\) the behavior of \(S_{A}(t)\) can be understood at intermediate times by accounting for spin-wave excitations with non-vanishing momentum \(\mathbf{k}\) [cf. Eq. (160)] on top of the entanglement dynamics arising from collective spin excitations (or spin squeezing), discussed in Sec. 4.1.5 above. As described in Sec. 4.2.1, the time-evolving state of the spin-wave excitations is encoded in the correlations in Eq. (160), i.e., \((G_{\mathbf{k}}^{qq}(t),G_{\mathbf{k}}^{qp}(t),G_{\mathbf{k}}^{pp}(t))\), defined by \(G_{\mathbf{k}}^{\alpha\beta}(t)=\frac{1}{2}\left\langle\tilde{\alpha}_{\mathbf{k}}(t)\tilde{\beta}_{-\mathbf{k}}(t)+\tilde{\beta}_{\mathbf{k}}(t)\tilde{\alpha}_{-\mathbf{k}}(t)\right\rangle\) for \(\alpha,\beta=q,p\).

Figure 17: Logarithmic growth of the entanglement entropy after a quench with the Ising Hamiltonian (1) with \(\gamma=1\), \(d=1\). (a) Exponential of \(S_{A}(t)\) as a function of dimensionless time, for different transverse fields \(h=0.7J,1J,1.3J\). For each \(h\), the results for \(L=30,40,50\) and \(\alpha=0.8,0.9,1\) are plotted. (b) \(S_{A}(t)\) in logarithmic scale for different initial states \(\left|\psi_{0}\right\rangle=\left|\uparrow\uparrow\dots\uparrow\right\rangle\) and the ones generated by applying single-site Pauli operators. Simulation with \(L=50\), \(\alpha=0.5\) and \(h=J\). Images adapted from Refs. [125; 127] for (a), (b) respectively.
Within the linear spin-wave analysis, the state of a subsystem composed of \(M=f_{A}N<N\) spins contained in a region \(A\) of the lattice is a Gaussian bosonic state determined by the instantaneous correlations

\[\left\{G_{\mathbf{r},\mathbf{r}^{\prime}}^{\alpha\beta}(t)=\left\langle\alpha_{\mathbf{r}}(t)\beta_{\mathbf{r}^{\prime}}(t)+\beta_{\mathbf{r}}(t)\alpha_{\mathbf{r}^{\prime}}(t)\right\rangle\right\}_{\begin{subarray}{c}\mathbf{r},\mathbf{r}^{\prime}\in A\\ \alpha,\beta=q,p\end{subarray}} \tag{172}\]

within \(A\), which can be expressed in terms of \(\tilde{G}_{\mathbf{k}}^{\alpha\beta}(t)\) via an inverse Fourier transform. This set of correlations uniquely identifies the reduced density matrix \(\hat{\rho}_{A}(t)\). The von Neumann entropy of this Gaussian bosonic state can be computed via standard techniques [331], namely

\[S_{A}=\sum_{i=1}^{M}\,S(\nu_{i})\,,\quad\text{with}\quad S(\nu_{i})=\frac{\nu_{i}+1}{2}\,\ln\frac{\nu_{i}+1}{2}-\frac{\nu_{i}-1}{2}\,\ln\frac{\nu_{i}-1}{2}\,, \tag{173}\]

where \(\nu_{i}\) are the symplectic eigenvalues of the correlation matrix (a minimal numerical sketch of this step is given below). For long-range interactions with \(0<\alpha<d\), the growth of \(S_{A}(t)\) turns out to be determined by the stability of the discrete set of long-wavelength excitations, expressed by the Floquet quasi-frequencies \(\lambda_{\mathbf{k}}\) with \(|\mathbf{k}|\propto 1/L\): see the dedicated discussion in Sec. 4.2.2 and Fig. 15. In particular, one can apply the general semiclassical description of entanglement discussed in Sec. 4.1.5 and summarized in Table 1. If all the modes are stable (i.e., non-resonant), then \(S_{A}(t)\sim\ln t\) exhibits a slow growth dominated by the collective spin fluctuations with \(\mathbf{k}=\mathbf{0}\) only. This is indeed the case for typical quenches away from dynamical criticality, as discussed in Sec. 4.2.2. This observation underlies and rationalizes the previous numerical findings of logarithmic growth of the von Neumann entanglement entropy [125; 292; 127], reported at the beginning of this Subsection. On the other hand, if some mode is unstable (i.e., resonant), then \(S_{A}(t)\sim\Lambda_{\text{KS}}\,t\) exhibits a fast growth dominated by the instabilities, with \(\Lambda_{\text{KS}}\) in Eq. (168). This is what may happen for quenches in the proximity of dynamical critical points, discussed in Sec. 4.1.2. The _physical picture_ for the long-range entanglement dynamics is now clear before the Ehrenfest time:

* The leading contribution comes from the semi-classical squeezing of the collective spin, which grows logarithmically in the absence of classical chaos;
* In the strong long-range regime, the suppressed long-wavelength spin waves provide a subleading contribution to the entanglement growth.

Figure 18: Entanglement dynamics after a quench from the ferromagnetic ground state \(h_{0}=0\) with a long-range Ising Hamiltonian (1) with \(\gamma=1\) and \(h_{f}=2J\). Comparison between finite-size MPS-TDVP numerical data (light-to-dark blue curves for increasing \(N\)), the spin-squeezing contribution (grey) and the full spin-wave entanglement (black), for \(\alpha=0.1\) (left panel) and \(0.7\) (right panel), for the quench \(h_{0}=0\to h_{f}=2J\), with \(N=500\). Figure adapted from Ref. [58].

The above analysis shows that slow logarithmic growth of the entanglement entropy should be generally expected in quench dynamics of spin systems with strong long-range interactions starting from a state with large spin polarization (see Footnote 23).

Footnote 23: Subject to the usual caveat of the absence of individual spin self-interactions for \(s>1/2\).
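The symplectic-eigenvalue step entering Eq. (173) can be sketched as follows; here the Gaussian pure state is generated at random (rather than taken from actual spin-wave correlations), purely to illustrate the reduction to a subsystem and the entropy formula:

```python
import numpy as np
from scipy.linalg import expm

def symplectic_form(n):
    """Omega in the (q_1..q_n, p_1..p_n) ordering."""
    O = np.zeros((2 * n, 2 * n))
    O[:n, n:] = np.eye(n)
    O[n:, :n] = -np.eye(n)
    return O

rng = np.random.default_rng(0)
n, m = 6, 3                          # total modes, modes kept in subsystem A

# random pure Gaussian state: covariance G = S S^T / 2 with S symplectic,
# obtained as S = expm(Omega K) for symmetric K
Omega = symplectic_form(n)
K = rng.normal(size=(2 * n, 2 * n)); K = (K + K.T) / 2
S = expm(Omega @ K)
G = S @ S.T / 2                      # vacuum covariance 1/2 transported by S

# reduce to subsystem A: keep (q_1..q_m, p_1..p_m)
idx = np.r_[0:m, n:n + m]
G_A = G[np.ix_(idx, idx)]
O_A = symplectic_form(m)

# symplectic eigenvalues: moduli of the eigenvalues of i*Omega_A*G_A;
# each appears twice, and the convention here gives nu = 1 for the vacuum
nus = np.abs(np.linalg.eigvals(1j * O_A @ G_A))
nus = np.sort(nus)[m:] * 2
nus = np.clip(nus, 1 + 1e-12, None)  # guard against roundoff below nu = 1
S_A = np.sum((nus + 1) / 2 * np.log((nus + 1) / 2)
             - (nus - 1) / 2 * np.log((nus - 1) / 2))   # Eq. (173)
print("symplectic eigenvalues:", nus)
print("S_A =", S_A)
```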
One can solve the spin-wave equations of motion (161) and compute the resulting time-dependent entanglement entropy via Eq. (173). The results for a typical quench in the long-range quantum Ising chain in a transverse field [cf. Eq. (1) with \(\gamma=1\), \(d=1\)] are shown in Fig. 18, where the exact numerical \(S_{A}(t)\) for finite system size is compared with the fully-connected "spin-squeezing" contribution and with the result obtained with the inclusion of spin waves. This analysis applies to a wide variety of spin models and quenches, as shown in Appendix E.1, where we study the long-range Ising Hamiltonian with transverse and longitudinal fields for a quench near the critical point. We remark that the underlying mechanism crucially relies on the _discreteness_ of the set of excitation modes (the long-wavelength spin waves), which results in a bounded, subleading contribution to entanglement growth. This property is characteristic of strong long-range interactions (\(\alpha<d\)) and generically does not occur for other types of perturbations. If, for instance, a _finite-range_ perturbation is added on top of a fully-connected model, one can still have stable excitations. However, the presence of the continuous spectrum of excitations results in light-cone spreading of quantum correlations and linear growth of entanglement according to a standard quasiparticle picture [111], see e.g. Ref. [57] and Appendix E.2 for an example. In the weak long-range regime \(d<\alpha<d+2\), the growth of entanglement has been related to the nonlinear dispersion relation of quasiparticles [104]. We finally reiterate that the picture of entanglement dynamics for long-range interacting spin systems reviewed here, based on the semiclassical dynamics of quantum spin fluctuations, covers setups not encompassed by other theoretical pictures such as quasi-particles [111], spacetime membranes [114] or local integrals of motion [399].

**Summary**: For \(0\leq\alpha<d\), a semi-classical picture predicts that the entanglement growth is dominated by the collective spin squeezing (logarithmic for generic quenches) and decorated by a discrete set of stable excitations.

## 5 Dynamical phases induced by periodic driving

In this Section we will expand our analysis to non-autonomous, coherently driven systems. We will show how the previously introduced ideas allow us to characterize nonequilibrium phases of spin systems with novel kinds of collective order dynamically stabilized by a periodic drive, which would not be possible in equilibrium [400]. Here, long-range interactions play the twofold role of protecting long-range order in highly excited states and hindering heating. We explain how this basic mechanism also protects spatiotemporal order such as time-crystalline behavior [401; 26; 402].

### Kapitza phases

As realized by Kapitza long ago, a rigid pendulum can be stabilized upside down by periodically driving its suspension point with tuned amplitude and frequency. While this dynamical stabilization is feasible in a variety of instances in systems with few degrees of freedom, it is natural to search for generalizations to multi-particle systems. In particular, a fundamental question is whether, by periodically driving a single parameter in a many-body system, one can stabilize an otherwise unstable phase of matter against all possible fluctuations of its microscopic degrees of freedom. Following Ref.
[89], we report here on such a stabilization in experimentally realizable quantum many-body systems: a periodic modulation of a transverse magnetic field can make ferromagnetic spin systems with long-range interactions stably trapped around unstable paramagnetic configurations, as well as in other unconventional dynamical phases with no equilibrium counterparts. Specifically, we will study the variable-range quantum Ising chain

\[\hat{H}_{\alpha}(t)=-\sum_{i,j}J_{i,j}(\alpha)\,\hat{\sigma}_{i}^{x}\hat{\sigma}_{j}^{x}-h(t)\sum_{i}\hat{\sigma}_{i}^{z}\,, \tag{174}\]

i.e., the Hamiltonian (1) with \(d=1\), \(\gamma=1\), \(s=1/2\) (all such unnecessary restrictions are just chosen for the sake of definiteness and connection with trapped-ion experiments). Periodic driving is implemented as a cyclic modulation of the magnetic field \(h(t)\). Starting from the fully-connected limit \(\alpha\to 0\) -- akin to the classical Kapitza pendulum -- we employ the non-equilibrium spin-wave theory reviewed above in Sec. 4.2.1 to establish conditions under which dynamical stabilization extends to the quantum many-body domain. We conclude by discussing the long (or infinite) lifetime of such quantum many-body Kapitza phases. Elucidating the nature of quantum many-body dynamics in the strong long-range regime -- where no meaningful Lieb-Robinson bounds can be formulated -- these results complement the body of work on Floquet prethermalization in short- and weak long-range interacting spin systems, for which we refer the readers to the original works, see e.g. Refs. [403; 404; 405].

#### 5.1.1 Fully-connected limit \(\alpha=0\): Non-equilibrium phases by driving

We first consider the nonequilibrium dynamics of fully-connected spin systems subject to an external periodic drive: We start from the infinite-range quantum Ising Hamiltonian

\[\hat{H}_{\alpha=0}(t)=-\frac{J_{0}}{N}\sum_{i,j=1}^{N}\hat{\sigma}_{i}^{x}\hat{\sigma}_{j}^{x}-h(t)\sum_{i=1}^{N}\hat{\sigma}_{i}^{z}\,, \tag{175}\]

subject to a monochromatic drive in the transverse field,

\[h(t)=h_{0}+\delta h\cos(\Omega t), \tag{176}\]

with amplitude \(\delta h\) and frequency \(\Omega\). As discussed in Sec. 4.1.1, in the thermodynamic limit \(N\to\infty\) the nonequilibrium dynamics are governed by the classical limit \(\mathcal{H}_{\rm cl}(t)\) of the rescaled Hamiltonian \(\hat{H}/S\),

\[\mathcal{H}_{\rm cl}(t)=-J_{0}\left(\mathcal{S}^{x}\right)^{2}-h(t)\mathcal{S}^{z}. \tag{177}\]

The quench dynamics in the presence of a static field \(h(t)\equiv h_{0}\) has been discussed in Sec. 4.1.2. For \(0\leq h_{0}<2J_{0}\) the system supports the ferromagnetic state indicated by the arrow in Fig. 19(a); \(\vec{\mathcal{S}}(t)\) follows one of the trajectories represented on the Bloch sphere in panel (a), selected by the initial condition \(\vec{\mathcal{S}}(0)\). Two families of them are characterized by a ferromagnetic-like, symmetry-breaking periodic evolution with opposite signs of the nonvanishing time-averaged order parameter \(\overline{\mathcal{S}^{x}}\). A trajectory (red) passing through the unstable paramagnetic point (red star) separates these two families from the paramagnetic-like orbits with \(\overline{\mathcal{S}^{x}}=0\). See Sec. 4.1.2 for more details. Turning on the modulation in Eq. (176), representative samples of discrete stroboscopic trajectories \(\{\vec{\mathcal{S}}(t_{n})\}\), where \(t_{n}=2\pi n/\Omega\), \(n=0,1,2,\dots\), of the collective spin are reported in Fig. 19(b), (c), and (d); a minimal integration sketch producing such stroboscopic maps is given below.
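The following sketch directly integrates the classical precession equations derived from Eq. (177) and samples the collective spin stroboscopically; the parameter values are chosen to match the regime of Fig. 19(c), and the particular initial condition is an illustrative choice:

```python
import numpy as np
from scipy.integrate import solve_ivp

J0, h0, dh, Omega = 1.0, 1.2, 3.3, 5.0    # as in Fig. 19(c)
T = 2 * np.pi / Omega

def rhs(t, S):
    """Classical precession dS/dt = grad(H_cl) x S for
    H_cl = -J0*(Sx)^2 - h(t)*Sz (unit-length collective spin)."""
    h = h0 + dh * np.cos(Omega * t)
    gradH = np.array([-2 * J0 * S[0], 0.0, -h])
    return np.cross(gradH, S)

# initial condition near the paramagnetic point (north pole)
theta0, phi0 = 0.3, 0.0
S0 = [np.sin(theta0) * np.cos(phi0), np.sin(theta0) * np.sin(phi0), np.cos(theta0)]

strobe = [np.array(S0)]
for n in range(200):                       # 200 driving periods
    sol = solve_ivp(rhs, (n * T, (n + 1) * T), strobe[-1],
                    rtol=1e-10, atol=1e-12)
    strobe.append(sol.y[:, -1])
strobe = np.array(strobe)
print("first stroboscopic points (Sx, Sy, Sz):")
print(strobe[:5].round(4))
print("|S| drift:", abs(np.linalg.norm(strobe[-1]) - 1.0))
```

In the dynamically stabilized regime the stroboscopic points remain confined near the north pole, whereas for weak driving an initial condition in the chaotic layer wanders over a large portion of the sphere.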
For small modulation \(\delta h\) [see panel (b)], the two ferromagnetic ground states give way to two periodic trajectories of the collective spin within the corresponding ferromagnetic sectors, _synchronized_ with the drive -- hence, appearing as a single point under stroboscopic observations. Conversely, initial states in a neighborhood of the unstable paramagnetic point [red star in panel (a)] display chaotic motion as soon as \(\delta h\neq 0\) [406; 407]. As \(\delta h\) increases, this chaotic region invades an increasingly large portion of the sphere [406]. This behavior can be understood on the basis of classical Kolmogorov-Arnold-Moser theory [408; 409]. Related phenomena have been experimentally observed with Bose-Einstein condensates [410]. Upon further increasing the modulation [see panel (c)], a region in the parameter space emerges where _dynamical stabilization_ of the unstable paramagnetic point occurs, thereby opening up a stability region around it. This phenomenon is analogous to the stabilization of the inverted pendulum discovered by Kapitza [411; 81]. In addition to this Kapitza-like stabilization, as \(\delta h\) increases with \(h_{0}\approx J_{0}\) [see panel (d)], another unconventional regime appears, characterized by dynamical ferromagnetic ordering in the \(yz\)-plane, orthogonal to the direction \(x\) of the actual ferromagnetic interactions. The origin of the numerical phenomenology described above may be analytically understood by studying the fast-driving limit \(\Omega\to\infty\) as a function of the rescaled amplitude

\[\zeta=\delta h/\Omega\,. \tag{178}\]

In this limit one can easily compute the effective static Hamiltonian governing the stroboscopic evolution, usually termed the _Floquet Hamiltonian_: see Appendix F. When the system is driven rapidly enough at finite driving amplitude, the effective evolution is just governed by the time-averaged Hamiltonian: In physical terms, the system has no time to react to variations of external parameters much faster than its characteristic dynamical time scales.

Figure 19: Collective spin dynamics in the infinite-range Ising ferromagnet. (a) Classical phase-space trajectories of the static Hamiltonian with \(h/J_{0}=1.2\). (b), (c), (d): Stroboscopic trajectories \(\{\vec{\mathcal{S}}(t_{n})\}\), with \(t_{n}=2\pi n/\Omega\), \(n=0,1,2,\dots\), of the collective spin subject to a driving of frequency \(\Omega/J_{0}=5\) and amplitudes \(\delta h/J_{0}=0.01\) (b), \(3.3\) (c), and \(5\) (d), with \(h_{0}/J_{0}=1.2\). Panel (b) shows the presence of a possible ferromagnetic dynamical ordering, corresponding to the evolution occurring within a single ferromagnetic sector \(S^{x}>0\), with a special synchronized trajectory (appearing as a single point under stroboscopic observations), together with the onset of chaotic behavior around the unstable paramagnetic point [406]. Panel (c) shows the appearance of a dynamically stabilized phase, akin to the well-known stabilization of the inverted driven Kapitza pendulum [411; 81]. Panel (d) shows that for larger driving frequencies, an unconventional dynamical ferromagnetic ordering appears, where the direction of the magnetization is orthogonal to the direction \(x\) of the actual ferromagnetic interactions. "Islands" with stable stroboscopic trajectories are indicated by the arrows. Figure taken from Ref. [89].
However, if the modulation amplitude \(\delta h\) is simultaneously increased with the frequency, keeping a finite ratio \(\zeta\equiv\delta h/\Omega\), the effective dynamics may become qualitatively different from those governed by the static Hamiltonian. Such qualitative changes involve a partial resummation of the high-frequency expansion (F.3) of the Floquet Hamiltonian [400], which is in general an intractable problem. Analytic solutions in closed form may be obtained in some cases by performing a convenient time-periodic canonical transformation [400]. In our case, this strategy is implemented by moving to a time-dependent frame in order to effectively eliminate the oscillating magnetic field:

\[\begin{pmatrix}\hat{\sigma}_{i}^{x}\\ \hat{\sigma}_{i}^{y}\\ \hat{\sigma}_{i}^{z}\end{pmatrix}=\begin{pmatrix}\cos\left(2\zeta\sin(\Omega t)\right)\hat{\sigma}_{i}^{\prime x}+\sin\left(2\zeta\sin(\Omega t)\right)\hat{\sigma}_{i}^{\prime y}\\ -\sin\left(2\zeta\sin(\Omega t)\right)\hat{\sigma}_{i}^{\prime x}+\cos\left(2\zeta\sin(\Omega t)\right)\hat{\sigma}_{i}^{\prime y}\\ \hat{\sigma}_{i}^{\prime z}\end{pmatrix}. \tag{179}\]

The transformation is chosen in such a way that the inertial term in the transformed generator \(\tilde{H}(t)\) exactly cancels the driving term. Thus \(\tilde{H}(t)\) is given by the static part of the Hamiltonian alone [i.e., \(h(t)\mapsto h_{0}\)] with \(\hat{\sigma}_{i}^{x}\hat{\sigma}_{j}^{x}\) replaced by

\[\begin{split}&\cos^{2}\left(2\zeta\sin(\Omega t)\right)\hat{\sigma}_{i}^{\prime x}\hat{\sigma}_{j}^{\prime x}+\sin^{2}\left(2\zeta\sin(\Omega t)\right)\hat{\sigma}_{i}^{\prime y}\hat{\sigma}_{j}^{\prime y}\\ &+\cos\left(2\zeta\sin(\Omega t)\right)\sin\left(2\zeta\sin(\Omega t)\right)\left(\hat{\sigma}_{i}^{\prime x}\hat{\sigma}_{j}^{\prime y}+\hat{\sigma}_{i}^{\prime y}\hat{\sigma}_{j}^{\prime x}\right).\end{split} \tag{180}\]

Crucially, the modulation \(\delta h\) enters \(\tilde{H}(t)\) via the finite combination \(\zeta\) only, which allows us to perform a standard high-frequency expansion for large \(\Omega\). The effective static Hamiltonian \(\hat{H}_{\text{eff}}\) to lowest order is given by time-averaging: Upon taking the classical limit, this reads

\[\mathcal{H}_{\text{eff}}=-J_{0}\,\left(\frac{1+\gamma(\zeta)}{2}(\mathcal{S}^{x})^{2}+\frac{1-\gamma(\zeta)}{2}(\mathcal{S}^{y})^{2}\right)-h_{0}\,\mathcal{S}^{z}, \tag{181}\]

i.e., a fully-connected \(XY\)-model with a "Floquet-engineered" anisotropy parameter

\[\gamma=\gamma(\zeta)=\mathcal{J}_{0}(4\zeta), \tag{182}\]

where \(\mathcal{J}_{0}\) is the standard Bessel function of the first kind. Equation (181) shows that the net effect of the driving is to redistribute the ferromagnetic coupling strength along the directions \(x\) and \(y\). The behavior of the effective anisotropy \(\gamma\) as a function of the rescaled driving strength \(\zeta\) is shown in Fig. 20. As \(\zeta\) increases from zero, the effective ferromagnetic interaction along \(x\) weakens, which makes it possible to dynamically stabilize the paramagnetic configuration. The exact boundary \(h_{0}=h_{\rm cr}(\zeta)\equiv J_{0}\left(1+\left|\mathcal{J}_{0}(4\zeta)\right|\right)\) of the Kapitza phase \(K\) is reported in Fig. 21.

Figure 20: Plot of the anisotropy \(\gamma\) in the effective fast-driving Floquet Hamiltonian \(\mathcal{H}_{\text{eff}}\), as a function of the rescaled driving amplitude \(\zeta\), given by Eq. (182).
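Since the lowest-order boundaries are explicit functions of \(\zeta\), the fast-driving phase diagram can be sketched in a few lines. The classification below uses \(h_{\rm cr}(\zeta)\) and the \(F_{\perp}\) condition quoted in the text; the separation between \(K\) and \(P\) at \(h_{0}=2J_{0}\) (which, as noted below, is not an actual transition) and the neglect of the second, coexisting Kapitza phase are simplifying assumptions of this sketch:

```python
import numpy as np
from scipy.special import j0    # Bessel function J_0 of the first kind

J0c = 1.0   # ferromagnetic coupling J_0 (named J0c to avoid clashing with scipy's j0)

def phase(h0, zeta):
    """Lowest-order high-frequency phase of H_eff, Eq. (181)."""
    gamma = j0(4 * zeta)                 # effective anisotropy, Eq. (182)
    h_cr = J0c * (1 + abs(gamma))        # Kapitza boundary h_cr(zeta)
    if h0 > 2 * J0c:
        return "P"                       # paramagnetic already without drive
    if h0 > h_cr:
        return "K"                       # dynamically stabilized paramagnet
    # below h_cr: ferromagnetic ordering, orientation set by the sign of gamma
    return "F_par" if gamma > 0 else "F_perp"

for zeta in [0.0, 0.3, 0.6014, 1.0, 1.5]:
    row = [f"h0={h0}: {phase(h0, zeta)}" for h0 in [0.5, 1.2, 1.9, 2.5]]
    print(f"zeta={zeta:6.4f}, gamma={j0(4 * zeta):+.3f} | " + ", ".join(row))
```

Reassuringly, the parameters of Fig. 19(c) (\(\zeta\approx 0.66\), \(h_{0}=1.2J_{0}\)) fall in the \(K\) region, and those of Fig. 19(d) (\(\zeta=1\), \(h_{0}=1.2J_{0}\)) in \(F_{\perp}\).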
This stability region is continuously connected with the paramagnetic one \(P\) in the phase diagram, see Fig. 21 -- similarly to the region of dynamical stabilization of the classical Kapitza pendulum, which is continuously connected with the parameter region with a reversed direction of gravity, in which stability is trivial [81]. As \(\zeta\) increases further, intervals with a negative anisotropy \(\gamma\) appear, favoring ferromagnetic ordering along the direction \(y\). This elucidates the mechanism for the occurrence of the unconventional dynamical phases with ferromagnetic ordering in the \(yz\)-plane, orthogonal to the direction \(x\) of the actual ferromagnetic interaction, which builds up whenever \(\gamma<0\) and \(h_{0}<J_{0}\left(1-\gamma\right)\), i.e., within the regions denoted by \(F_{\perp}\) in Fig. 21. The numerical simulations in Fig. 19 show that these nonequilibrium phases persist at finite driving frequencies, comparable to the characteristic energy scale \(J_{0}\) of the system. When the driving frequency \(\Omega\) is large but finite, the effective Floquet Hamiltonian (181) receives perturbative corrections in an expansion in inverse powers of \(\Omega\), which cause small quantitative modifications of the boundaries in Fig. 21. (For explicit expressions we refer to the original work [89].) A second Kapitza phase coexists with \(F_{\parallel,\perp}\) for \(h_{0}<J_{0}\left(1-\left|\mathcal{J}_{0}(4\zeta)\right|\right)\), i.e., within the shaded region in Fig. 21. In this case the effective Hamiltonian (181) has a _maximum_ at the paramagnetic point in addition to the two ferromagnetic minima in the \(xz\)- or \(yz\)-plane, depending on \(\gamma\) being positive or negative, respectively. The corresponding phase-space portraits are shown in Fig. 22. In particular, at the values \(\zeta_{1},\zeta_{2},\dots\) such that \(\gamma=0\) (related to the zeros of the Bessel function \(\mathcal{J}_{0}\)), the effective Hamiltonian has a continuous \(O(2)\) symmetry. In this case, stable trajectories exist around the direction of both the ferromagnetic minima and the paramagnetic configuration, which would be unstable in the absence of the drive.

Figure 21: Left: Fast-driving nonequilibrium phase diagram of the periodically driven infinite-range Ising model defined by Eqs. (175) and (176), taken from Ref. [89]. Upon varying the average magnetic field \(h_{0}\) and the rescaled modulation amplitude \(\zeta=\delta h/\Omega\), a dynamical paramagnetic phase \(P\), a dynamically stabilized Kapitza paramagnetic phase \(K\), a conventional dynamical ferromagnetic phase \(F_{\parallel}\) and an unconventional dynamical ferromagnetic phase \(F_{\perp}\) with orthogonal magnetization emerge. The line \(\zeta=0\) is the equilibrium phase diagram of the model. Within the shaded region on the left, a second Kapitza phase coexists with \(F_{\parallel,\perp}\). (Note that the dashed line separating \(K\) and \(P\) does not correspond to an actual phase transition.) Right: Schematic phase-space portraits of the effective high-frequency Hamiltonians governing the evolution of the collective spin, highlighting the various phases.

#### 5.1.2 Quantum many-body Kapitza phases for \(\alpha>0\)

The dynamically stabilized collective Kapitza phases discussed in Sec. 5.1.1 represent a semiclassical realization of the classical Kapitza pendulum with collectively interacting spins.
However, it is a priori unclear whether such a Kapitza dynamical stabilization may occur in general quantum many-body systems with finite-range interactions, which give rise to fluctuations at all length scales: while dynamical stabilization of a single collective degree of freedom is possible, the presence of many fluctuating degrees of freedom may be expected to leave room for heating and to destabilize ordered structures. The existence of dynamically stabilized many-body Kapitza phases was pointed out for a general class of quantum spin systems with long-range interactions in Ref. [89]; we review this phenomenon here. We turn to discuss the full interacting Hamiltonian (174) with \(\alpha>0\). As in Sec. 5.1.1, we shall consider a periodic modulation of the magnetic field, \[h(t)=h_{0}+\delta h\,\cos(\Omega t). \tag{183}\] The goal of this Subsection is to demonstrate that (most of) the dynamically stabilized phases persist at least over a parametrically large time scale for \(0<\alpha\leq 2\), where the many-body dynamics cannot be reduced to those of a single collective degree of freedom.24

Footnote 24: These phases are actually more stable in higher-dimensional [2; 8] and/or higher-spin [412] systems (without spin self-interactions), where fluctuations are less effective.

When \(\alpha>0\), both the collective spin \(\vec{\mathcal{S}}\) and the spin excitations with non-vanishing momenta participate non-trivially in the non-equilibrium dynamics. The non-equilibrium spin-wave theory introduced in Refs. [56; 57] and reviewed in Sec. 4.2.1 provides a controlled methodological approach as well as an intuitive physical picture of non-equilibrium dynamics in terms of the coupled evolution of the collective spin and dynamically generated spin waves. This formalism can be straightforwardly extended to systems subject to arbitrary driving protocols, by replacing \(h\) with \(h(t)\) in Eqs. (157). To make the Section more self-contained, we report here the expression of the variable-range Hamiltonian \(\hat{H}(t)\) (174) expanded to quadratic order in the spin-wave operators: \[\hat{H}(t)=-Nh(t)\bigg(1-\frac{\hat{n}_{0}+\hat{n}_{\rm sw}}{N}\bigg)\cos\theta(t)\\ -NJ_{0}\bigg[\bigg(1-\frac{\hat{n}_{0}+\hat{n}_{\rm sw}}{N}\bigg)\sin\theta(t)\cos\phi(t)\bigg]^{2}\\ -4J_{0}\sum_{k}f_{k}(\alpha)\bigg(\cos^{2}\theta(t)\cos^{2}\phi(t)\ \frac{\tilde{q}_{k}\tilde{q}_{-k}}{2}+\sin^{2}\phi(t)\ \frac{\tilde{p}_{k}\tilde{p}_{-k}}{2}\\ -\cos\theta(t)\cos\phi(t)\sin\phi(t)\ \frac{\tilde{q}_{k}\tilde{p}_{-k}+\tilde{p}_{k}\tilde{q}_{-k}}{2}\bigg), \tag{184}\] where we use the same notations as in the rest of the Report.

Figure 22: Schematic phase-space portraits of the effective Hamiltonian \(\mathcal{H}_{\text{eff}}\) in Eq. (181) on the sphere, with parameters belonging to the shaded region of the nonequilibrium phase diagram in Fig. 21, corresponding to the coexistence of a dynamically stabilized Kapitza phase and the ferromagnetic phase \(F_{\parallel}\) [(a), shaded blue in Fig. 21], or \(F_{\perp}\) [(b), shaded orange in Fig. 21]. We emphasize that the paramagnetic configuration is here associated with a _maximum_ of \(\mathcal{H}_{\text{eff}}\). Figure taken from Ref. [89].

A _many-body_ Kapitza phase consists of a simultaneous dynamical stabilization of the whole spectrum of quantum excitations around an unstable configuration. Intuition on this phenomenon can be obtained at the level of linear stability by expanding \(\hat{H}(t)\) to quadratic order in the quantum fluctuations, as in Eq.
(184), around the paramagnetic configuration with \(\theta=0\): \[\hat{H}(t)=\mathcal{E}_{\rm cl}(t)+2\sum_{k}\bigg[\big(h(t)-2J_{0}f_{k}(\alpha)\big)\frac{\tilde{q}_{k}\tilde{q}_{-k}}{2}+h(t)\frac{\tilde{p}_{k}\tilde{p}_{-k}}{2}\bigg]\,, \tag{185}\] where \(\mathcal{E}_{\rm cl}(t)=-2Nh(t)\). In the absence of modulation in the ferromagnetic phase [i.e., \(h(t)=h_{0}<2J_{0}\)], an extended interval \([-k^{*},k^{*}]\) around \(k=0\) in the spin-wave band is associated with unstable modes, as their corresponding frequency \(\omega_{k}=\sqrt{h_{0}\left(h_{0}-2J_{0}f_{k}(\alpha)\right)}\) becomes imaginary for small enough \(k\). However, upon introducing the modulation \(h(t)\) as in Eq. (183) with \(\delta h\neq 0\), the effective dispersion relation \(\omega_{k,\rm eff}\) is modified. For a suitable choice of the driving parameters, the frequencies \(\omega_{k}\) may become real for all values of \(k\). The occurrence of this nontrivial stabilization of an otherwise unstable phase of matter against all possible fluctuations of its degrees of freedom is illustrated in Fig. 23, and it represents a generalization of the Kapitza pendulum to a genuine many-body system.

Figure 23: Stabilization of _many-body_ Kapitza phases. In the presence of suitable periodic driving, the otherwise unstable spectrum of quantum excitations around the paramagnetic configuration gets simultaneously dynamically stabilized for all values of \(k\). Here \(\alpha=1.5\), \(N=400\), and \(h_{0}/J_{0}=1.35\). In the absence of driving (\(\delta h=0\)) the system is in the ferromagnetic phase. The red points represent the (squared) frequency spectrum \(\omega_{k}^{2}=h_{0}\left(h_{0}-2J_{0}f_{k}(\alpha)\right)\) of the spin-wave excitations, labeled by their wavevector \(k\). An extended interval of long-wavelength modes is unstable (i.e., \(\omega_{k}^{2}<0\) for \(k\) near \(0\)). As the driving is turned on with a strength \(\delta h\) in a suitable range of values, not only the collective spin mode with \(k=0\) discussed in Sec. 5.1.1, but also the whole set of modes with \(k\neq 0\) become _stable_ (i.e., \(\omega_{k}^{2}>0\) for all \(k\)). The blue points show the exact effective dispersion relation \(\omega_{k}^{2}=(h_{0}-J_{0}f_{k}(\alpha))^{2}\) in the presence of a high-frequency driving \(\Omega\to\infty\) with \(\zeta=\delta h/\Omega=0.6014\) (corresponding to \(\gamma=0\) in the effective Hamiltonian, see the text). When \(J_{0}\ll\Omega<\infty\), this effective dispersion relation receives perturbative corrections in inverse powers of \(\Omega\), and no qualitative changes occur during the prethermal stage (see Sec. 5.1.3 and references therein). Figure taken from Ref. [89].

In order to understand how all the degrees of freedom can get dynamically and simultaneously stabilized by driving a single modulated global field \(h(t)\), we consider the fast-driving limit \(\Omega\to\infty\) as a function of the rescaled driving amplitude \(\zeta\), which can be studied analytically also for \(\alpha\neq 0\). In this regime the stroboscopic evolution of the system at times \(t_{n}=2\pi n/\Omega\) with \(n=0,1,2,\dots\) is governed by an effective static Hamiltonian \(\hat{H}_{\rm eff}\) obtained via a high-frequency expansion (see Appendix F). The computation of \(\hat{H}_{\rm eff}\), discussed in Sec. 5.1.1 for the infinite-range limit, is actually independent of the interaction range.
Consequently, it can be implemented following exactly the same steps, leading to an effective long-range XY spin chain: \[\hat{H}_{\rm eff}=-\sum_{i\neq j}^{N}\frac{J}{\|i-j\|^{\alpha}}\left[\frac{1+\gamma(\zeta)}{2}\hat{\sigma}_{i}^{x}\hat{\sigma}_{j}^{x}+\frac{1-\gamma(\zeta)}{2}\hat{\sigma}_{i}^{y}\hat{\sigma}_{j}^{y}\right]-h_{0}\sum_{i=1}^{N}\hat{\sigma}_{i}^{z}, \tag{186}\] where the anisotropy parameter \(\gamma(\zeta)=\mathcal{J}_{0}(4\zeta)\) is the same as in Eq. (181) and is plotted in Fig. 20. Equation (186) allows us to discuss the modification of the nonequilibrium phase diagram in Fig. 21 for \(\alpha>0\) in the limit \(\Omega/J_{0}\to\infty\). The driven dynamics at stroboscopic times is equivalent to the _quench_ dynamics governed by the effective static Hamiltonian \(\hat{H}_{\rm eff}\). As we reviewed in Sec. 4.2.2 above concerning dynamical phase transitions, dynamically ordered phases arising from quench dynamics exist as long as the (post-quench) Hamiltonian supports long-range order at finite energy density above the ground state. In the present case, such dynamical ordering can be interpreted as dynamically stabilized non-equilibrium ordering and is dictated by the phase structure of \(\hat{H}_{\rm eff}\): if the system is initialized in a state with a well-defined average polarization close enough to that characterizing an equilibrium state of \(\hat{H}_{\rm eff}\), its dynamical (stroboscopic) order parameter will remain stable in the course of the time evolution. As we reviewed in Sec. 2, for one-dimensional systems ordering at finite energy density requires \(\alpha\leq 2\) [69; 75; 413]. The character of the dynamical magnetic ordering of the system depends upon the driving amplitude \(\zeta\): when the effective anisotropy parameter \(\gamma(\zeta)\) is sufficiently small, there appears a dynamically stabilized unconventional phase with paramagnetic character, while when \(\gamma(\zeta)\) is negative, there appears an unconventional dynamical ferromagnetic phase with magnetization in the \(yz\)-plane, orthogonal to the direction of the actual ferromagnetic interactions. The latter phase in particular has no equilibrium counterpart in the Ising model.

Figure 24: Fast-driving nonequilibrium phase diagram of the periodically driven long-range Ising chain defined by Eqs. (174) and (183), for \(\alpha>0\). Compared to Fig. 21, the shaded region of coexistence of phase \(K\) with \(F_{\parallel,\perp}\) has disappeared, and the left boundary of region \(K\) moves leftwards upon increasing \(\alpha\), as indicated by the white arrows. This displacement is vanishingly small in the thermodynamic limit for \(0<\alpha\leq 1\), and finite for \(\alpha>1\). The amount indicated by the arrows corresponds to \(\alpha=1.5\) (it is magnified by a factor \(2\) for ease of visualization). Figure taken from Ref. [89].

Upon increasing \(\alpha\) up to the value 2, quantum fluctuations modify the phase boundaries in the nonequilibrium phase diagram in Fig. 21 as shown by the white arrows in Fig. 24. The shift of the phase boundary can be quantitatively computed using (equilibrium) spin-wave theory, which is exact for \(\alpha\lesssim 1\) and approximate for \(1<\alpha\leq 2\), see Eq. (63). Quantum fluctuations have a further, dramatic effect on the nonequilibrium phase diagram. In fact, the second Kapitza phase, which coexists with the ferromagnetic phases at mean-field level (indicated by the shaded region in Fig. 21), turns out to be unstable to many-body fluctuations at finite wavelength.
Although the driving can stabilize the collective spin, there appears an extended interval of unstable spin modes in the Brillouin zone with finite wavelength \(k\neq 0\), which is expected to prevent dynamical stabilization. In fact, within a linear stability analysis, the effective spectrum of excitations is given by \[\omega_{k}^{2}=\left[h_{0}-\left(1-\gamma(\zeta)\right)J_{0}f_{k}(\alpha)\right]\left[h_{0}-\left(1+\gamma(\zeta)\right)J_{0}f_{k}(\alpha)\right], \tag{187}\] as obtained by expanding Eq. (186) in spin-wave operators around the paramagnetic configuration \(\theta=0\). The effective dispersion relation features a finite interval in the Brillouin zone characterized by imaginary frequencies within the range of parameter values \(h_{0}<J_{0}\left[1-|\gamma(\zeta)|\right]\) under consideration, see Fig. 25. The width of this interval shrinks to zero when the anisotropy \(\gamma=\mathcal{J}_{0}(4\zeta)\) approaches 0, i.e., when the driving strength \(\zeta\) equals one of the zeros \(\zeta_{n}\) with \(n=1,2,\dots\) of the Bessel function. Away from this discrete set of values, the Kapitza phase coexisting with the ferromagnetic phases turns out to be destabilized by these finite-wavelength fluctuations, at least at the level of linear stability, in spite of the stabilization of the collective \(k=0\) mode. We remark that when \(\zeta\) is tuned to an isotropic point \(\zeta_{n}\), the many-body Kapitza phase discussed above becomes stable in the high-frequency limit \(\Omega\to\infty\). The reason behind such stability may be easily traced back to the stroboscopic conservation of the collective spin projection \(\mathcal{S}^{z}\) along the field direction, due to the emergent \(O(2)\) rotational symmetry. Indeed, if the system is initialized in a fully polarized state with a small displacement \(\theta_{0}\) away from the \(z\)-axis, the collective spin has to remain trapped in a neighborhood of the fully polarized configuration \(\theta=0\), because \(\mathcal{S}^{z}(t_{n})\approx 1-\theta_{0}^{2}/2\) cannot decrease.

Figure 25: Effective spectrum of the quantum spin-wave excitations around the unstable paramagnetic configuration for \(\alpha=1.5\), \(h_{0}/J_{0}=0.35\), in the presence of a high-frequency drive with \(\delta h/\Omega=0\) (red), 0.4023 (blue) and 0.6014 (green), corresponding to effective anisotropy parameters \(\gamma=1\), 0.45, and 0, respectively, in the effective Hamiltonian \(\hat{H}_{\rm eff}\) in Eq. (186). The blue and green points correspond to parameters within the shaded region in Fig. 21, in which coexistence of Kapitza and ferromagnetic phases occurs in the infinite-range model. Although the collective \(k=0\) mode is dynamically stabilized, for \(\alpha>0\) an extended interval in the Brillouin zone appears with modes characterized by imaginary frequencies \(\omega_{k}^{2}<0\), as shown, e.g., by the blue points. As shown by the green points, this instability disappears only at isolated points \(\zeta_{1},\zeta_{2},\dots\) for which \(\gamma=0\) [corresponding to the zeros of the Bessel function, see after Eq. (181)], i.e., characterized by an emergent \(O(2)\) rotational symmetry. Figure taken from Ref. [89].

Let us finally briefly comment on what happens for \(\alpha>2\). For \(\alpha=\infty\) the long-range quantum Ising chain (174) reduces to the standard quantum Ising chain with nearest-neighbor interactions, which has been studied in Refs.
[414-440]. For \(2<\alpha<\infty\), on the other hand, the description of prethermal dynamics is more complicated due to nonlinear interactions between the collective spin and the bosonic "bath"; however, relaxation to a
quasi-stationary state is typically very slow or absent. The neglected nonlinear spin-wave interactions are eventually expected to lead to the decay of this prethermal quasi-stationary state [420, 421, 422, 423, 108, 251]. The fastest heating processes are associated with the absorption of a quantum of energy \(\sim\Omega\) from the drive through a high-order resonant transition involving \(\Omega/J_{0}\) elementary local transitions. According to by now standard theoretical arguments [403, 404, 405, 424], the associated heating time scale is expected to grow exponentially as \(\tau\sim\exp(\text{const}\times\Omega/J_{0})\). In conclusion, we note that several numerical studies of quench dynamics in long-range interacting chains with \(2<\alpha<\infty\) suggested that magnetic ordering survives for surprisingly long times in the prethermal regime [234, 245, 425, 62]. This occurrence has recently been understood analytically in terms of a suppressed rate of formation of unbound domain walls [103]. Furthermore, the expected functional form of the lifetime of such dynamical long-range ordering as a function of the driving protocol and of \(\alpha\) has been determined, predicting the possibility of having extremely long-lived order even at finite driving frequency [103]. When conditions for this phenomenon are met, signatures of dynamically stabilized ordered phases are expected to emerge even for \(\alpha>2\).

Figure 26: Persistence of the dynamically stabilized phases at finite driving frequency. Left in each panel: stroboscopic time-evolution \(\vec{\mathcal{S}}(t_{n})\) of the collective spin of the long-range Ising chain in Eq. (174) with \(\alpha\neq 0\), subject to the modulated magnetic field in Eq. (183). \(\vec{\mathcal{S}}(t_{n})\) is obtained by nonequilibrium spin-wave theory and, for simplicity of visualization, is projected onto the unit sphere. In all simulations, the static field is \(h_{0}/J_{0}=1.2\), as in Fig. 19, the driving frequency is \(\Omega/J_{0}=8\), and we used \(N=100\). The system is initialized in fully polarized states in the \(xz\) [panels (a), (b), (c)] and \(yz\) [panel (d)] planes. Right in each panel: relative departure \(\epsilon(t)\) of the total spin from its maximal length \(N/2\), due to the dynamical generation of quantum spin-wave excitations, corresponding to the largest trajectory in each panel. In particular: (a) Dynamical ferromagnetic phase, with \(\alpha=1\) and \(\delta h/J_{0}=0.05\). (b) Fast heating in the chaotic dynamical regime, with \(\alpha=0.8\), \(\delta h/J_{0}=0.2\). (c) Dynamically stabilized Kapitza phase, with \(\alpha=1\), \(\delta h/J_{0}=5.33\). (d) Dynamically stabilized ferromagnetic phase with magnetization in the \(yz\)-plane orthogonal to the direction \(x\) of the actual ferromagnetic interactions, with \(\alpha=1\), \(\delta h/J_{0}=8\). Panels (a), (c), and (d) demonstrate that the dynamical phases \(F_{\parallel}\), \(K\), and \(F_{\perp}\) (see Fig. 21), respectively, continue to exist at finite driving frequency. The amount of excitations generated remains small and the total energy remains bounded across many cycles, qualifying these phases as _prethermal_. In panel (b), instead, heating is witnessed by the growth of \(\epsilon(t)\) (notice the different vertical scale in the plot). The heating rate in this case increases upon increasing \(\alpha\). Figure taken from Ref. [89].
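As a compact numerical companion to the linear-stability analysis of this Section, the following sketch evaluates the spin-wave spectra of Eqs. (185) and (187) for the parameters of Figs. 23 and 25. The implementation of \(f_{k}(\alpha)\) as the Fourier transform of the Kac-normalized power-law couplings on a periodic chain (with \(f_{0}=1\)) is our own assumed convention, meant to match the one used earlier in the Report:

```python
import numpy as np

# Parameters of Fig. 23
N, alpha = 400, 1.5
J0, h0 = 1.0, 1.35

# f_k(alpha): Fourier transform of the Kac-normalized power-law couplings on a
# periodic chain, normalized so that f_{k=0} = 1 (our assumed convention).
r = np.arange(1, N)
d = np.minimum(r, N - r).astype(float)          # chain distance with PBC
k = 2.0 * np.pi * np.arange(N) / N              # Brillouin-zone wavevectors
w = d ** (-alpha)
f_k = (np.cos(np.outer(k, r)) @ w) / w.sum()    # f_0 = 1 by construction

# Undriven spin-wave spectrum around the paramagnet, from Eq. (185):
w2_static = h0 * (h0 - 2.0 * J0 * f_k)
# High-frequency effective spectrum at the isotropic point gamma = 0
# (zeta = 0.6014), as quoted in the caption of Fig. 23:
w2_driven = (h0 - J0 * f_k) ** 2
print("unstable modes without drive:", np.sum(w2_static < 0))  # finite interval
print("unstable modes with drive   :", np.sum(w2_driven < 0))  # none

# Eq. (187) for gamma = 0.45 (zeta = 0.4023) and h0/J0 = 0.35, as in Fig. 25:
g, h0b = 0.45, 0.35
w2_187 = (h0b - (1 - g) * J0 * f_k) * (h0b - (1 + g) * J0 * f_k)
print("unstable modes, Eq. (187)   :", np.sum(w2_187 < 0))     # finite interval
```

At the isotropic point the driven dispersion \((h_{0}-J_{0}f_{k})^{2}\) is non-negative by construction, so no unstable modes survive, while the blue-point parameters of Fig. 25 leave a finite unstable interval despite the stabilization of the \(k=0\) mode.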
### 5.2 Discrete time crystals

The concept of spontaneous breaking of (continuous) time-translational invariance in quantum many-body systems was brought to widespread attention in Ref. [426]. Soon after, these non-equilibrium phases were proven to be impossible at equilibrium [427; 428]. Yet, _discrete_ time-translational invariance, realized in periodically driven systems, can be spontaneously broken [429; 430; 431]. Thus, the term "discrete time crystals" (DTC) refers to systems where the discrete time-translation symmetry, encoded in the periodically driven Hamiltonian \(\hat{H}(t)=\hat{H}(t+T)\), is spontaneously broken: expectation values of relevant observables exhibit oscillations with a period that is an integer multiple of \(T\). Several experimental observations of DTCs have been discussed in the last decade [26; 27; 28; 214]. For a general overview of these research efforts, we refer the readers to recent reviews [432; 433; 434]. Following Ref. [401], we say that a DTC phase exists if, for a class of states \(|\Psi\rangle\) with short-ranged connected correlations [430], there always exists an observable \(\hat{O}\) whose time-evolved expectation value, in the thermodynamic limit \(N\to\infty\), satisfies the following conditions:

1. _Time-translation symmetry breaking_: \(\langle\hat{O}(t+T)\rangle\neq\langle\hat{O}(t)\rangle\), even though \(\hat{H}(t)=\hat{H}(t+T)\), so that long-range correlated Floquet eigenstates of the propagator \(\hat{U}_{F}=\hat{U}(t+T,t)\) exist [430].

2. _Rigidity_: the periodic oscillations of \(\langle\hat{O}(t)\rangle\), with a period \(\tau\), shall persist in a whole finite and connected region of the Hamiltonian parameter space.

3. _Persistence_: the periodic oscillations of \(\langle\hat{O}(t)\rangle\) remain stable at long times in the thermodynamic limit \(N\to\infty\).

These conditions cannot be fulfilled by a generic local many-body quantum system, since the external driving would lead to relaxation towards an infinite-temperature state, thereby preventing long-lived oscillations. To protect ordering against relaxation, a mechanism is required to control the impact of dynamically generated excitations. Prethermal stability can be achieved through long-range interactions, which are known to generate metastable states with lifetimes that grow as the system approaches the thermodynamic limit, see Secs. 3.2 and 4.2.1. It is then natural that the investigation of DTCs in clean systems has primarily focused on long-range interacting models, where the robustness of collective oscillations in the presence of a periodic drive is guaranteed. Accordingly, stable DTC phases can only be found for \(\alpha<d\) [401; 435; 436; 296], while for \(\alpha>d\) the lifetime of the oscillations is expected to be finite in the \(N\to\infty\) limit [405; 103; 437]. The \(\alpha=0\) Ising model is a privileged playground for time-translational symmetry breaking, since it features a kaleidoscope of different DTCs. Indeed, \(p\)-order DTC phases (with a period \(pT\), where \(p\) is an integer), first witnessed in specifically designed \(\mathbb{Z}_{p}\)-connected states [435; 401], have recently been detected in the standard \(\mathbb{Z}_{2}\)-symmetric Ising model as well [296; 438], where \(p\) can even be fractional; signatures of this behavior persist for finite \(\alpha\) and in the classical limit [439; 440; 441].
#### 5.2.1 Mean-field DTC

In the mean-field limit \(\alpha=0\) it is possible to obtain an analytic solution for the periodic dynamics and establish the simplest instance of a DTC [401]. We consider a step-wise drive of the magnetic field in Eq. (175) of the form \[h(t)=\psi\sum_{n=1}^{\infty}\delta(t-nT)\,, \tag{188}\] of amplitude \(\psi\), and we focus on the evolution of the order parameter \(m^{a}(t)=\frac{1}{N}\sum_{j}\langle\hat{\sigma}_{j}^{a}\rangle\) (where \(a=x,y,z\)), i.e. the components of the magnetization of the system. First, we observe that the Floquet operator (see Appendix F) can be expressed as the product of two distinct operators: \[\hat{U}_{F}=e^{-2i\psi\hat{S}_{z}}e^{iJ_{0}T\hat{S}_{x}^{2}/N}\,, \tag{189}\] where we used the global spin operators notation defined in Eq. (4). The first factor \(\exp(-2i\psi\hat{S}_{z})\) in Eq. (189), which acts as a rotation around the \(z\)-axis, represents the effect of the kick on the observable \(\vec{m}\). The second factor describes the evolution of \(\vec{m}\) induced by the Ising interaction over one period \(T\). The Heisenberg equations of motion corresponding to this evolution for the operators \(\hat{S}_{a}\) are: \[\frac{d}{dt}\hat{S}_{x}=0\,, \tag{190}\] \[\frac{d}{dt}\hat{S}_{y}=\frac{J_{0}}{N}\left(\hat{S}_{x}\hat{S}_{z}+\hat{S}_{z}\hat{S}_{x}\right)\,, \tag{191}\] \[\frac{d}{dt}\hat{S}_{z}=-\frac{J_{0}}{N}\left(\hat{S}_{x}\hat{S}_{y}+\hat{S}_{y}\hat{S}_{x}\right). \tag{192}\] As usual, due to the mean-field nature of the problem we can neglect the spin-spin correlations in the thermodynamic limit [442]. Taking averages on both sides of Eqs. (190)-(192) and using the decoupling relations \(\langle\hat{S}_{a}\hat{S}_{b}\rangle\simeq\langle\hat{S}_{a}\rangle\,\langle\hat{S}_{b}\rangle\) one obtains the closed set of equations for the magnetization: \[\dot{m}_{x}=0\,,\quad\dot{m}_{y}=J_{0}m_{x}m_{z}\,,\quad\dot{m}_{z}=-J_{0}m_{x}m_{y}. \tag{193}\] As a consequence, after a time interval \(T\), \(\vec{m}\) undergoes a clockwise rotation around the \(x\)-axis by an angle \(J_{0}Tm_{x}(t)\). The \(\mathbb{Z}_{2}\) symmetry of the model is encoded in the dynamical symmetry \(\psi\to\psi+\pi/2\) and \(\vec{m}_{n}\to R_{z}(\pi n)\cdot\vec{m}_{n}\) in Eq. (193). Integrating the equations of motion (193) over one period, the stroboscopic evolution of the observable \(\vec{m}\) is given by \[\vec{m}_{n+1}=f(\vec{m}_{n})\equiv R_{z}(2\psi)\cdot R_{x}(-J_{0}Tm_{x,n})\cdot\vec{m}_{n}. \tag{194}\] Due to the periodic nature of the drive, the map in Eq. (194) displays a Hamiltonian structure, and its action preserves the area of the region on the sphere \(|\vec{m}|^{2}=1\) spanned by the dynamics. Accordingly, one can employ polar coordinates along the \(z\)-axis, \(\vec{m}=(\sin\theta\cos\phi,\sin\theta\sin\phi,\cos\theta)\), in order to express an area element as \(dS=d\cos\theta\,d\phi\). Then, the coordinates \(\phi\) and \(I=\cos\theta\) serve as natural canonically conjugate variables for our system. Following the discussion in Ref. [442], the action \(I\) can be regarded as the \(z\)-component of the angular momentum and \(\phi\) as the rotation angle around the same axis. In the limit of vanishing kicking period \(T\to 0\), the interaction-induced rotation in Eq. (194) becomes negligible and the map reduces to \[I_{n+1}=I_{n}\,,\qquad\phi_{n+1}=\phi_{n}+2\psi\,, \tag{195}\] with initial conditions \(I_{0}=0\) and \(\phi_{0}=\pi/2\). This corresponds to the Poincare map obtained by taking stroboscopic sections of the integrable dynamics. In other terms, the motion of the order parameter \(\vec{m}_{n}\) at vanishing drive periods is quasi-periodic with a period \(\pi/\psi\).
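The stroboscopic map (194) is straightforward to iterate numerically. The following minimal sketch is our own illustration; the parameter values are arbitrary and not taken from the references:

```python
import numpy as np

def Rz(a):
    """Rotation matrix about the z-axis by angle a."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def Rx(a):
    """Rotation matrix about the x-axis by angle a."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def kicked_map(m, psi, J0T):
    """One period of the stroboscopic dynamics, Eq. (194): interaction-induced
    rotation about x by -J0*T*m_x, followed by the kick rotation Rz(2*psi)."""
    return Rz(2.0 * psi) @ Rx(-J0T * m[0]) @ m

# Initial condition used in the text: I_0 = cos(theta_0) = 0, phi_0 = pi/2,
# i.e. the magnetization initially points along y.
m = np.array([0.0, 1.0, 0.0])
psi, J0T = np.pi / 4 + 0.01, 2.0   # illustrative values near the p = 4 resonance

traj = []
for n in range(400):
    traj.append(m.copy())
    m = kicked_map(m, psi, J0T)
traj = np.array(traj)

# Near psi ~ pi/4 the magnetization approximately revisits its value every
# p = 4 kicks, at least over many periods (libration around the Z_4 orbit):
print(np.round(traj[:12, 0], 3))
```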
Upon slightly increasing the kicking period \(T\), the map in Eq. (195) is perturbed and the fate of the system follows from the Kolmogorov-Arnold-Moser (KAM) theorem [443; 444; 445]. The theorem states that small perturbations in the form of Eq. (188) only slightly deform the torus \(I=\mathrm{const}\), at least as long as the drive frequency is not resonant. Thus, the motion remains quasi-periodic for drive strengths \(\psi\) far enough from rational multiples of \(\pi\). However, as soon as a resonance is approached and \(\psi\approx\psi_{r}\equiv r\pi\) with \(r=q/p\), where \(p\) and \(q\) are coprime integers, pairs of elliptic and unstable fixed points emerge in the dynamics due to the Poincare-Birkhoff theorem [446]. Then, distinct regions in the phase space \((I,\phi)\) can be distinguished depending on the action of the \(p\)-iterated map \(f^{p}(\vec{m})\), which correspond to different evolutions of \(\vec{m}_{n}\). Quasi-periodic behaviour persists for initial conditions \((I_{0},\phi_{0})\) far enough from the fixed points, where a rotation dynamics occurs with \(\phi\) periodically spanning the interval \([0,2\pi]\). On the other hand, as the initial conditions \((I_{0},\phi_{0})\) approach the fixed points, a libration dynamics arises and \(\phi\) continuously oscillates around a finite value. As a result, successive map iterations do not substantially alter the magnetization, \(\vec{m}_{n+p}\approx\vec{m}_{n}\), and a DTC phase appears. Finally, the boundaries between the DTC and the quasi-periodic regimes are occupied by chaotic regions, which grow and eventually take over the regular ones at large \(T\).

**Summary**: The classical limit of the fully-connected model displays periodic motion in the \(T\to 0\) limit. Close to resonances, quasi-periodicity is broken, but the Poincare-Birkhoff theorem leads to time-crystalline behavior, characterised by time-translational symmetry breaking of the magnetization, stable to small perturbations in the kicking strength.

#### 5.2.2 Finite-size and finite-range effects

The discussion above has been based on the semiclassical analysis, which becomes exact in the thermodynamic limit. Yet, it is interesting to perform some numerical simulations at finite \(N\) to validate the large-\(N\) picture. At each finite size the modulus of the total spin \(\hat{S}\) of the system is conserved, restricting the dynamics to the subspace of constant \(\hat{S}^{2}=S(S+1)\), with \(S=N/2\). Then, it is relatively straightforward to perform exact diagonalization up to large sizes (\(N=800\)) [201; 401]. To visualize the eigenstates in this subspace, we introduce the spin coherent states [447] \[\ket{\Omega(\theta,\phi)}=e^{-i\vec{n}\cdot\vec{S}}\ket{\mathrm{f}}, \tag{197}\] where \(\ket{\mathrm{f}}\) is the eigenstate corresponding to the maximum projection of the spin along the \(z\) direction and \(\vec{n}=(\sin\theta\cos\phi,\sin\theta\sin\phi,\cos\theta)\). The overlap between different coherent states remains finite at finite \(N\) and reads \[\langle\Omega(\theta,\phi)|\Omega(\theta+\Delta\theta,\phi+\Delta\phi)\rangle=\left(\cos\frac{\Delta\theta}{2}\,e^{-i\Delta\phi}\right)^{2S}, \tag{198}\] whose modulus vanishes in the \(N\to\infty\) limit due to the exponent \(S\). However, for any finite \(N\) the states in Eq. (197) form an overcomplete basis for the Hilbert space. Then, one can characterize the dynamics by estimating the projection \(|\bra{\Omega(\theta,\phi)}\ket{\eta_{m}}|^{2}\) for the various Floquet eigenstates \(\ket{\eta_{m}}\).
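A minimal sketch of this diagnostic is reported below: it builds the collective spin operators in the Dicke sector, assembles the Floquet operator (189), and evaluates the coherent-state overlaps \(|\langle\Omega(\theta,\phi)|\eta_{m}\rangle|^{2}\). The modest size \(N=40\), the kick parameters, and the specific rotation convention used for Eq. (197) are our own simplifying assumptions:

```python
import numpy as np
from scipy.linalg import expm

# Collective spin operators in the Dicke sector S = N/2,
# basis ordered as m = S, S-1, ..., -S (index 0 <-> m = S).
N = 40
S = N / 2
mvals = np.arange(S, -S - 1, -1)
Sz = np.diag(mvals).astype(complex)
Sp = np.diag(np.sqrt(S * (S + 1) - mvals[1:] * (mvals[1:] + 1)), 1)   # S_+
Sx = (Sp + Sp.T) / 2
Sy = (Sp - Sp.T) / 2j

# Floquet operator of the kicked fully-connected Ising model, Eq. (189)
J0, T, psi = 1.0, 2.0, np.pi / 4
UF = expm(-2j * psi * Sz) @ expm(1j * J0 * T * (Sx @ Sx) / N)
evals, evecs = np.linalg.eig(UF)               # Floquet eigenstates |eta_m>

def coherent(theta, phi):
    """Spin coherent state of Eq. (197), here built by rotating the maximal-S_z
    state by theta about y and phi about z (one standard convention)."""
    top = np.zeros(N + 1, dtype=complex)
    top[0] = 1.0
    return np.exp(-1j * phi * mvals) * (expm(-1j * theta * Sy) @ top)

# Husimi-like portrait |<Omega(theta, phi)|eta_m>|^2 of one Floquet eigenstate,
# sampled on a coarse (theta, phi) grid as in Fig. 27
eta = evecs[:, 0]
grid = [(t, p) for t in np.linspace(0, np.pi, 21)
               for p in np.linspace(0, 2 * np.pi, 41)]
Q = np.array([abs(np.vdot(coherent(t, p), eta)) ** 2 for t, p in grid])
print("largest coherent-state overlap:", Q.max())
```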
The overlap \(|\bra{\Omega(\theta,\phi)}\ket{\eta_{m}}|^{2}\) for different values of \(m\) is shown in Fig. 27. The Floquet eigenstates in the \(p=4\) DTC phase appear clearly localized around four \(\mathbb{Z}_{4}\)-symmetric points, see Fig. 27a, while this is no longer the case in the quasi-periodic phase, see Fig. 27b. This behavior can be explained semi-classically: close to a resonance, the Floquet evolution can be interpreted as a hopping between \(p\) adjacent wells in the classical phase space [438], so that the Floquet eigenstates have the form of tight-binding Bloch wavefunctions. A similar behavior for the \(p=2\) case (around \(\psi=\pi/2\)) has been observed in Ref. [401].

Figure 27: **Eigenstate structure.** The eigenstate structure radically changes between the three different phases of the system: while no recognizable pattern is present in the chaotic phase (panel (b)), in the quasi-periodic phase the eigenstate is localized in a connected region of the \((I,\phi)\) space (panel (b), curves (a) and (c)), while in the \(p=4\) DTC phase it appears localized around four \(\mathbb{Z}_{4}\)-symmetric points (panel (a): curve (b)). Image adapted from Ref. [448].

Let us notice that, given the initial condition chosen in the present study, in the \(N\to\infty\) limit the only eigenstate which contributes to the dynamics is the one with a non-zero overlap with the point \(\theta=0\), \(\phi=0\), which in turn can correspond to any of the three phases. The current picture is not substantially altered by the inclusion of quantum fluctuations due to a finite value of \(\alpha\) or by additional local couplings. Indeed, the structure of the low-\(T\) DTC regions with \(p=2\) is generally resilient to quantum fluctuations. On the other hand, sufficiently high values of \(\alpha\) enhance the chaotic phase, leading to the disruption of the DTC phases with \(p>2\) for large enough values of the drive period \(T\), see Ref. [448] and Fig. 28b.

#### 5.2.3 Order parameter

The dynamical phase diagram of the different high-order DTC phases exhibits intricate self-similar and fractal structures, where the regular phases are intertwined with the chaotic and quasi-periodic regions. The characterization of the entire dynamical phase diagram has long remained a difficult challenge, but the introduction of a novel order parameter enabled a comprehensive characterization of DTC phases, irrespective of their order [448]. The key to introducing a useful order parameter is to consider two slightly different values of the kick amplitude, \(\psi\) and \(\psi+\delta\psi\), which amounts to considering two nearby initial conditions in phase space. Then, we introduce the definition \[\zeta^{2}=\frac{1}{n_{\text{max}}}\sum_{n=0}^{n_{\text{max}}}\left(m_{x,n}(\psi+\delta\psi)-m_{x,n}(\psi)\right)^{2}. \tag{199}\] Both in the DTC phase and in the quasi-periodic one the evolution is not chaotic, so that the two nearby trajectories \(m_{x,n}(\psi+\delta\psi)\) and \(m_{x,n}(\psi)\) diverge polynomially in time. Expanding the latter equation for small deviations \(\delta\psi\) yields \[\zeta^{2}=\frac{1}{n_{\rm max}}\sum_{n=0}^{n_{\rm max}}\left(m_{x,n}(\psi+\delta\psi)-m_{x,n}(\psi)\right)^{2}\sim\frac{\ell}{n_{\rm max}}\sum_{n=0}^{n_{\rm max}}\delta\psi^{2}n^{2}\sim\ell(\delta\psi n_{\rm max})^{2}\,, \tag{200}\] where \(\ell\) depends on the average distance between two randomly chosen points of the two nearby trajectories. The value of \(n_{\rm max}\) has to be large enough so that the rightmost term in Eq.
(200) remains \(O(1)\), i.e. \(n_{\rm max}\to\infty\) as \(\delta\psi\to 0\). However, the value of \(\ell\) jumps discontinuously between the libration regime (corresponding to a DTC phase) and the rotation one (corresponding to a quasi-periodic phase). Indeed, close to the fixed point of the iterated map the micro-motion becomes negligible and \(\zeta\to 0\), signalling the emergence of the pure time-crystalline regime. The value of \(\zeta\) in the two phases is not universal and depends on the value of \((n_{\rm max}\delta\psi)\sim O(1)\). The jump in the value of \(\ell\) results in a discontinuity in \(\zeta\), which may be observed in the numerical distribution of \(\zeta\). Indeed, the distribution of the order parameter in the DTC phase is sharply peaked around \(\zeta=0\) and becomes negligible for \(\zeta\gtrsim 0.2\). The quasi-periodic phase is signalled by a peak at \(\zeta\sim 0.36\), which appears disconnected from the DTC peak at \(\zeta=0\). The exponential divergence of trajectories in the chaotic phase leads to the memory loss of the initial conditions on a time scale \(n_{\rm max}\sim-\log(\delta\psi)\), making the values \(m_{x,n}(\psi)\) and \(m_{x,n}(\psi+\delta\psi)\) two identically distributed random variables with zero mean. Then, due to the central limit theorem, \(\zeta^{2}\) is distributed as a Gaussian in the chaotic phase, with mean \[\langle\zeta^{2}\rangle=2\langle m_{x}^{2}\rangle \tag{201}\] and variance \(O(n_{\rm max}^{-1})\). For an isotropic system one can easily derive the peak value of the distribution: since \(|\vec{m}|^{2}=1\), one has \[\langle m_{x}^{2}\rangle=\frac{1}{3}\langle|\vec{m}|^{2}\rangle=\frac{1}{3}\,, \tag{202}\] so that \(\langle\zeta^{2}\rangle=2/3\). Thus, the order parameter \(\zeta\) can be used to detect higher-order DTC phases in clean long-range systems, by exploiting the connection between DTCs and the Poincare-Birkhoff theorem [446, 449], which rigorously holds in the mean-field \(\alpha=0\) limit. Indeed, the phase diagram obtained by the numerical characterization of the order parameter (see Fig. 28) reproduces and expands the known properties of the DTC phases in the \(\alpha=0\) limit [401, 439]. The symmetry of the phase diagram around the \(\psi=\pi/4\) axis arises from the dynamical \(\mathbb{Z}_{2}\) symmetry, a notable feature that would have remained undetectable with a \(p\)-dependent order parameter. At low values of \(T\), the quasi-periodic phase dominates (pink area, \(\zeta\approx 0.4\)), while small islands of the DTC phases emerge around specific values of \(\psi\), corresponding to rational multiples of \(\pi\) (black areas, \(\zeta\approx 0\)). Initially, the size of these islands increases with increasing \(T\), and as they approach each other, chaos begins to emerge along their boundaries (yellow area, \(\zeta\approx\sqrt{2/3}\)). Ultimately, all islands associated with DTCs of order \(p>2\) are engulfed by the chaotic phase, with the largest (central) one, corresponding to \(p=4\), surviving the longest. Interestingly, at certain values of the driving period, we observe a revival of the higher-order DTC phases, particularly pronounced for \(p=4\) (small, arrow-shaped black area at high \(T\) for \(\psi\approx\pi/4\) in Fig. 28a). The boundary between the chaotic and DTC phases is not smooth; instead, it exhibits self-similar patterns that repeat at increasingly smaller scales.
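The order parameter is also easy to evaluate numerically. The following self-contained sketch re-implements the stroboscopic map (194) and estimates \(\zeta\) according to Eq. (199), using \(n_{\rm max}=300\) and \(\delta\psi=1.6\times 10^{-3}\) as in Fig. 28; the sampled \((\psi,T)\) points are our own illustrative choices:

```python
import numpy as np

def evolve_mx(psi, J0T, nmax):
    """m_{x,n} under the stroboscopic map (194), starting from the initial
    condition I_0 = 0, phi_0 = pi/2 used in the text (magnetization along y)."""
    m = np.array([0.0, 1.0, 0.0])
    out = np.empty(nmax + 1)
    c2, s2 = np.cos(2 * psi), np.sin(2 * psi)
    for n in range(nmax + 1):
        out[n] = m[0]
        a = -J0T * m[0]                      # rotation about x by -J0*T*m_x
        ca, sa = np.cos(a), np.sin(a)
        m = np.array([m[0], ca * m[1] - sa * m[2], sa * m[1] + ca * m[2]])
        m = np.array([c2 * m[0] - s2 * m[1], s2 * m[0] + c2 * m[1], m[2]])  # kick
    return out

def zeta_op(psi, J0T, nmax=300, dpsi=1.6e-3):
    """Order parameter of Eq. (199): rms distance between the m_x trajectories
    generated by two nearby kick amplitudes psi and psi + dpsi."""
    d = evolve_mx(psi + dpsi, J0T, nmax) - evolve_mx(psi, J0T, nmax)
    return np.sqrt(np.mean(d ** 2))

# Expected regimes: zeta ~ 0 in a DTC phase, an intermediate O(1) value in the
# quasi-periodic phase, and zeta ~ sqrt(2/3) deep in the chaotic phase.
for psi in (np.pi / 4, 0.1, 0.6):             # illustrative sample points
    print(f"psi = {psi:.4f}:  zeta = {zeta_op(psi, J0T=4.0):.3f}")
```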
The emergence of this fractal scaling in the boundaries of the time-crystalline phases draws a direct analogy with similar phenomena observed in traditional critical systems, particularly percolation, self-avoiding random walks, and the Potts model [450; 451], where a rigorous connection between conformal invariance and stochastic evolution has been established [452; 453]. As previously noted in Ref. [436], the formation of DTC islands can be understood within the framework of area-preserving maps [454], specifically linked to the existence of Arnold tongues [298; 455]. We conclude this Subsection with the remark that, similarly to the Kapitza phases discussed in Sec. 5.1, one can straightforwardly extend the analysis to the variable-range model with \(\alpha>0\) using the non-equilibrium spin-wave theory of Refs. [56; 57; 89] reviewed above in Sec. 4.2.1, see e.g. Fig. 28b. For details, we refer the reader to Refs. [439; 448].

Figure 28: **Phase diagram.** Panel (a): Color plot of the order parameter \(\zeta\) as a function of the amplitude \(\psi\) and the period \(T\) of the drive, saturated at the value \(\zeta=\sqrt{2/3}\), with \(n_{\text{max}}=300\) and \(\delta\psi=1.6\cdot 10^{-3}\). Panel (b): same as (a) but for \(\alpha=0.5\). Image adapted from Ref. [448].

## 6 Conclusions and perspectives

Let us first summarize the salient features discussed in this Report; then, we will mention aspects which have not been covered here; finally, we will point out pending problems that it would be interesting to explore.

**What we discussed.** In this Report, we provided a comprehensive pedagogical overview of non-equilibrium phenomena arising in the dynamics of non-random long-range interacting systems. For the sake of definiteness and of connection with the theoretical and experimental literature, we took as a reference the XY quantum spin model with power-law-decaying interactions with exponent \(\alpha\). Our primary focus was on the _strong long-range interactions_ regime \(0<\alpha\lesssim d\), intermediate between the mean-field limit \(\alpha=0\) and the quasilocal regime \(\alpha\gg d\). It is in this regime that the most surprising and unusual features of out-of-equilibrium dynamics appear. Our discussion was divided into three primary setups: Dynamics at low energies; Quantum quenches far away from equilibrium; Periodic driving.

Sec. 2 - _Equilibrium_: we started by providing a concise but rather exhaustive summary of equilibrium properties exhibited by variable-range ferromagnetic spin Hamiltonians. This encompassed a discussion of the equilibrium phase diagram, the mean-field solution, the expansion in quantum fluctuations, and unusual spectral properties such as discreteness and divergent propagation velocity.

Sec. 3 - _Low-energy dynamics_: we delved into near-equilibrium dynamics in a variety of setups. Here, the discrete spectrum of the strong long-range regime induces unusual equilibration dynamics, universal defect formation, and the emergence of non-analytic behavior in the fidelity.

Sec. 4 - _Quantum quench dynamics_: we introduced a formalism to treat the coupled dynamics of semiclassical collective observables and the dynamics of quantum fluctuations. This allowed us to describe dynamical criticality, quantum information scrambling, and to formulate a squeezing-induced picture for the growth of entanglement.

Sec. 5 - _Periodic driving_: finally, we described how strong long-range interactions prevent periodic driving from heating the system or inducing thermalization.
Instead, they allow dynamical stabilization of novel phases and time-crystalline behavior.

**What we _did not_ discuss.** The present memoir covers only a small portion of the varied and lively field of quantum dynamics with long-range interactions. Let us mention (in a non-exhaustive manner) some complementary studies which have not been discussed here. A large bulk of literature addresses, _mathematically and via quantum-information approaches_, the impact of long-range interactions on the spreading of correlations [37, 40, 41, 120, 121, 122, 124, 127, 128, 139, 456, 457, 458, 459, 460, 461, 462, 463]. Interestingly, several of these results pointed out how long-range interactions may not enhance correlation spreading, but remain "shielded" in the dynamical evolution [464, 465, 466, 467, 468, 469, 470], as partly discussed in Section 3.1. While our Report concerns spin systems, a lot of work aims at understanding the dynamics in the presence of _fermionic_ long-range Hamiltonians. These studies have been initiated at equilibrium [467, 468, 469, 471, 472, 473, 474, 475, 476, 477].
A significant part of the research on quantum long-range interactions focuses on _disordered_ systems. These studies range from the effect of long-range interactions on many-body localization [193, 520, 525, 526, 527, 528, 529, 530, 531, 532, 533, 534, 535, 536] (explored experimentally [537, 538, 539]) and features of glassiness in bosonic quantum systems [540], e.g. in the quantum Sherrington-Kirkpatrick (SK) model, to chaotic and holographic properties of fully-connected disordered fermionic systems, a la Sachdev-Ye-Kitaev (SYK) [541]. Another class of disordered systems are _random quantum circuits_, which make it possible to study the dynamics through exact or hydrodynamic solutions [542]. This is true also in the case of long-range interactions [543, 544], where several rigorous results have been obtained on Brownian all-to-all circuits [545, 546, 547, 548, 549, 550, 551]. Many studies have investigated long-range models in _open quantum systems_, where the interaction with the surrounding environment induces dissipative effects. For all-to-all interactions, the mean-field approach is correct for systems with collective jump operators [552, 553, 554]. In these settings, quantum correlations have been studied with methods similar to the ones discussed in Sec. 4.1.5, see Refs. [558, 565, 566, 567], or via Keldysh techniques [568, 569, 570]. The impact of finite-range interactions has been recently addressed in Refs. [571, 572, 573], while in Refs. [574, 575, 576] it was tackled via a generalization of the non-equilibrium spin-wave approach discussed in Sec. 4.2.1. _Other studies_ in the field include hydrodynamics with long-range interactions [577, 578, 579, 580, 581], or hybrid models stemming, for instance, from the interplay between short-range and long-range interactions. The interplay between long-range interactions and non-homogeneity has also been studied in quasi-periodic systems [582, 583, 584, 585], where the topological properties of the system are altered by the long-range couplings [586], as has also been shown in topological superconductors [587, 588, 589, 590]. To conclude, very recently, a lot of attention in quantum dynamics has been devoted to _measurement-induced phase transitions_, a new dynamical phenomenon which results from the interplay between unitary (entangling) evolution and (disentangling) measurements [542]. In this context, Refs. [591, 592, 593, 594, 595, 596, 597, 543, 598, 599] have discussed how long-range interactions affect the non-equilibrium phase diagram, which has been probed experimentally in a trapped-ion simulator [600].

**What shall be done.** Quantum dynamics with long-range interactions is an exciting field of research with challenging open problems that go beyond the ones addressed here. We are pleased to conclude with a brief overview of the open questions. First of all, the methods considered in this Report may be extended to address a series of pending problems about long-range interacting spin systems. As mentioned in Section 2.3.1, the spatial _spreading of correlations_ could be characterized using the non-equilibrium spin-wave theory [56, 57] illustrated in Sec. 4.2.1.
Related approaches could be used to explore quantum information _scrambling_, where it would be interesting to elucidate how the established exponential growth of the square-commutator (139) at dynamical critical points for \(\alpha=0\) is influenced by spatial fluctuations for \(\alpha>0\). A fundamental question concerns _thermalization_ in this class of systems. Even though permutational symmetry is broken for \(\alpha\neq 0\), anomalous dynamics compatible with a prethermal scenario appear over long, yet finite, time scales [58, 256] (see Sec. 4.2). It is a challenging open problem to understand whether and how some form of non-thermal behavior persists at infinite time. This difficult subject has been addressed in a few numerical explorations of either spectral properties [601, 602, 603, 604] or finite-time quench dynamics [62, 103, 242, 366]. However, due to the challenging finite-size effects, the general question, especially near the strong long-range regime, is far from settled. Finally, several broader open questions go far beyond what is discussed here. This is the case of long-range _antiferromagnetic_ interactions, which characterize several experimental platforms [488; 492; 494; 495; 496; 497; 498; 499; 500; 501; 502], as mentioned above. It is well known that, in equilibrium, the competition between anti-ferromagnetism and long-range interactions can result in frustration and in phenomena such as spin liquids. It would therefore be important to develop methods to tackle the dynamics out of equilibrium. Concurrently, many remarkable physical phenomena stem from the simultaneous presence of long-range interactions and quenched _disorder_, from glassiness in the SK model to fast scrambling in the SYK model. Although these models may be analytically solvable with a mean-field ansatz, their classical limit and the impact of quantum fluctuations remain a challenging problem in general. It would be highly desirable to have a comprehensive framework to understand the quantum dynamics with \(\alpha>0\) in this class of models. In this regard, we note that the non-equilibrium spin-wave theory of Refs. [56; 57] has been extended to disordered spin models as well, see Ref. [605]. To conclude, we remark that in nature long-range interactions always represent an instantaneous approximation for _retarded_ interactions mediated by a field (e.g. the electromagnetic field). Retardation may give rise to outstanding physical phenomena under certain conditions: for example, finite-frequency long-wavelength modes may non-trivially hybridize with the mediating-field excitations, as happens e.g. for optical phonons and light in ionic crystals (phonon-polaritons) [606; 607]. The exploration of the full range of dynamical phenomena induced by retarded long-range interactions in AMO platforms stands out as an intriguing direction.

## Acknowledgements

We thank F. Carollo, R. Fazio, J. Halimeh, A. Guo, M. Heyl, M. Knap, J. Knolle, J. Marino, L. Piroli, A. Pizzi, P. Poggi, L. Santos, M. Tran and B. Zunkovic for their feedback on the manuscript and suggestions. N.D. acknowledges funding by the Swiss National Science Foundation (SNSF) under project funding ID: 200021_207537 and by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy EXC2181/1-390900948 (the Heidelberg STRUCTURES Excellence Cluster). A.L. acknowledges support by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No.
864597) and by the Swiss National Science Foundation. S.P. acknowledges support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - Cluster of Excellence Matter and Light for Quantum Computing (ML4Q) EXC 2004/1 - 390534769.

## Appendix A Semiclassical spectrum

For a system with a single degree of freedom, the spectrum obtained via the semiclassical quantization rule consists of classical trajectories with energy \(E_{n}\) such as to enclose an area \(S_{\mathrm{cl}}(E_{n})\) in phase space equal to an integer multiple of Planck's constant \(h\), i.e., \[S_{\mathrm{cl}}(E_{n})=nh,\quad n\text{ integer}, \tag{A.1}\] where \(S_{\mathrm{cl}}(E)\) corresponds to the classical action. This quantization rule, together with the well-known relation between the action \(S_{\mathrm{cl}}\) and the classical period of a trajectory (see, e.g., Ref. [608]) \[\frac{dS_{\mathrm{cl}}(E)}{dE}=T_{\mathrm{cl}}(E)\equiv\frac{2\pi}{\Omega_{\mathrm{cl}}(E)}, \tag{A.2}\] yields the semiclassical level spacing \[E_{n+1}-E_{n}\sim\frac{dE_{n}}{dn}=\hbar\,\Omega_{\mathrm{cl}}(E_{n}). \tag{A.3}\] This equation may be seen as a generalization to nonlinear dynamics of the relation valid for the spectrum of a harmonic oscillator: here the oscillations are not isochronous and thus the quantum energy spectrum is not equispaced. The semiclassical density of states \(\rho(E)\) is given by the inverse level spacing. We can use the above semiclassical relation to elucidate the spectral properties of the fully-connected quantum Ising model, cf. Sec. 2.3. For \(h>h_{\rm cr}\), taking the low-energy limit (\(n\) finite and \(N\to\infty\)) of Eq. (A.3), one recovers the harmonic tower of excitations of Eq. (16), with \(\Omega_{\rm cl}(E_{n})\underset{N\to\infty}{\sim}\omega_{>}/s\). At the quantum critical point \(h=h_{\rm cr}\), the energy gap of the elementary excitations above the ground state vanishes in the thermodynamic limit. The classical counterpart of this phenomenon is the vanishing of the classical frequency of small oscillations, which occurs because the minimum of \(\mathcal{H}_{\rm cl}\) at criticality is quartic rather than quadratic. The critical scaling of the energy gap as \(N\to\infty\) may be extracted via semiclassical considerations: retaining the quartic terms of order \(1/N\) neglected in Eq. (16), and applying the semiclassical quantization rule in Eq. (A.1), one finds the low-energy asymptotics of the quantized energy levels as \[E_{n}-E_{0}\quad\underset{\begin{subarray}{c}n\,\mathrm{finite}\\ N\to\infty\end{subarray}}{\sim}\quad c\,\frac{n^{4/3}}{N^{1/3}}, \tag{A.4}\] which shows that the critical gap above the ground state for \(h=h_{\rm cr}\) scales as \(N^{-1/3}\) for large \(N\). Along these lines, one also finds \(\langle\hat{n}_{0}\rangle\sim N^{1/3}\). For a more systematic analysis see e.g. Ref. [83].

## Appendix B Semiclassical description of fully-connected systems

In this Appendix we review the semiclassical description of quantum dynamics, which applies beyond the example of fully-connected spin systems discussed in Sections 4.1.1 and 4.1.3.

### Semiclassical approach

We focus on quantum systems characterized by a small parameter \(\hbar_{\rm eff}\), which controls the impact of the quantum fluctuations. A system in this class is described by \(n\) degrees of freedom, identified by \(2n\) operators \(\hat{\mathbf{\xi}}=(\hat{q}_{1},\ldots,\hat{q}_{n},\hat{p}_{1},\ldots,\hat{p}_{n})\).
These satisfy the standard canonical commutation relations \([\hat{q}_{i},\hat{p}_{j}]=i\hbar_{\rm eff}\,\delta_{ij}\), or more compactly \([\hat{\mathbf{\xi}},\hat{\mathbf{\xi}}]=i\hbar_{\rm eff}\,\mathcal{J}\), where \(\mathcal{J}\) is the symplectic unit25. The system is assumed to allow a rescaling of the Hamiltonian to an intensive classical Hamiltonian \(\mathcal{H}_{\rm cl}(\hat{\mathbf{\xi}})\), such that the Heisenberg equations of motion take the form

Footnote 25: The symplectic matrix \(\mathcal{J}\) is given by the \(2n\times 2n\) antisymmetric matrix \(\mathcal{J}=\begin{pmatrix}\mathbb{0}_{n}&\mathbb{1}_{n}\\ -\mathbb{1}_{n}&\mathbb{0}_{n}\end{pmatrix}\), which satisfies \(\mathcal{J}^{2}=-\mathbb{1}_{2n}\).

\[\dot{\hat{\mathbf{\xi}}}=\mathcal{J}\,\partial\mathcal{H}_{\rm cl}(\hat{\mathbf{\xi}})\;. \tag{B.2}\] One could equivalently define a classical system described by \(2n\) classical phase-space variables \(\mathbf{\xi}_{\rm cl}=(q_{1},\ldots,q_{n},p_{1},\ldots,p_{n})\), obeying the canonical Poisson brackets \(\{q_{i},p_{j}\}=\delta_{ij}\), and whose dynamics is given by the Hamilton equations of motion \(\dot{\mathbf{\xi}}_{\rm cl}=\{\mathbf{\xi}_{\rm cl},\mathcal{H}_{\rm cl}\}=\mathcal{J}\,\partial\mathcal{H}_{\rm cl}(\mathbf{\xi}_{\rm cl})\). The full quantum evolution of the expectation value of the operator \(\hat{\mathbf{\xi}}(t)\), evaluated on a generic quantum state \(|\psi_{0}\rangle\), reads \[\frac{d}{dt}\,\langle\hat{\mathbf{\xi}}(t)\rangle=\mathcal{J}\,\left\langle\partial\mathcal{H}_{\rm cl}(\hat{\mathbf{\xi}}(t))\,\right\rangle\;. \tag{B.3}\] This is exactly what is stated by the Ehrenfest theorem [609], which describes the exact quantum evolution of operators at time \(t\), without approximations. Even if this relation is reminiscent of Hamilton's equations for the classical variable \(\mathbf{\xi}_{\text{cl}}\), in principle one has \(\langle\partial\mathcal{H}_{\text{cl}}(\hat{\mathbf{\xi}})\rangle\neq\partial\mathcal{H}_{\text{cl}}(\langle\hat{\mathbf{\xi}}\rangle)\). However, whenever quantum fluctuations are small one can consider the replacement27

Footnote 27: Notice that this is always exact in the case of quadratic Hamiltonians.

\[\langle\partial\mathcal{H}_{\text{cl}}(\hat{\mathbf{\xi}})\rangle\to\partial\mathcal{H}_{\text{cl}}(\langle\hat{\mathbf{\xi}}\rangle)\;. \tag{B.4}\] This substitution is equivalent to closing the hierarchy of cumulants at second order, namely to taking \(\langle\hat{\mathbf{\xi}}\,\hat{\mathbf{\xi}}^{\prime}\rangle=\langle\hat{\mathbf{\xi}}\rangle\,\langle\hat{\mathbf{\xi}}^{\prime}\rangle\). We consider the case in which the initial state \(|\psi_{0}\rangle\) corresponds to a narrow Gaussian wave-packet, centered around a phase-space point, with a _small variance_ of quantum fluctuations \(\Delta^{2}\sim\mathcal{O}(\hbar_{\text{eff}})\). A large number of relevant initial states lie in this class: for instance, coherent states or pure non-entangled ones, such as uncorrelated product states, routinely prepared in cold-atom experiments via standard techniques. Weakly entangled initial states may be treated on equal footing. Therefore, by virtue of Eq.
(B.4), the average \(\langle\hat{\mathbf{\xi}}(t)\rangle\) moves along the classical trajectory to leading order in \(\hbar_{\text{eff}}\), \[\frac{d}{dt}\,\langle\hat{\mathbf{\xi}}(t)\rangle=\,\mathcal{J}\;\partial\mathcal{H}_{\text{cl}}\,\big{(}\langle\hat{\mathbf{\xi}}(t)\rangle\big{)}+\mathcal{O}(\hbar_{\text{eff}})\;,\] (B.5) that is \[\langle\hat{\mathbf{\xi}}(t)\rangle=\mathbf{\xi}_{\text{cl}}(t)+O(\hbar_{\text{eff}})\;.\] (B.6) According to the standard semiclassical theory [282, 283, 610], quantum fluctuations around the classical trajectory \(\mathbf{\xi}_{\text{cl}}(t)\) remain approximately Gaussian up to a time scale that diverges as \(\hbar_{\text{eff}}\to 0\), the so-called _Ehrenfest time_ scale \(T_{\text{Ehr}}=T_{\text{Ehr}}(\hbar_{\text{eff}})\). At \(T_{\text{Ehr}}\) quantum interference effects become dominant and the semiclassical description breaks down. The Ehrenfest time can thus be defined as the time scale at which the Gaussian approximation breaks down and quantum fluctuations become of order one, i.e., \(\Delta^{2}(T_{\text{Ehr}})=O(1)\). This depends on how quantum fluctuations evolve in time, which, in turn, is determined by the regularity properties of the classical trajectories, as summarized in Table 1. This semiclassical description is not restricted to phase-space or coherent variables, but it describes the dynamics of several interesting models. In particular, Sciolla and Biroli [219] formulated a general theory for systems with full permutational invariance in states belonging to the totally-symmetric sector. ### Classical limit of permutationally invariant systems We recall how permutational symmetries allow for exactly mapping collective quantum models to systems of few degrees of freedom characterized by a vanishingly small effective Planck constant in the thermodynamic limit [219]. We consider a Hamiltonian \(\hat{H}\) characterizing a uniform all-to-all interaction of \(N\) elementary constituents, such as spins or particles. The symmetry under permutations of the degrees of freedom makes the mean-field treatment of the quantum dynamics exact for large \(N\). To show how the semiclassical description emerges, we consider an ensemble of \(N\) identical \(q\)-level quantum systems. A basis of the many-body Hilbert space can be constructed as the tensor product of identical single-unit bases \(\{|\alpha\rangle\}\) with \(\alpha=1,\ldots,q\). Binary permutation operators are unitary transformations that exchange a pair of units in the system. Their action is defined by \[\hat{P}_{ij}\,|\alpha_{1},\ldots,\alpha_{i},\ldots\alpha_{j},\ldots,\alpha_{N}\rangle=|\alpha_{1},\ldots,\alpha_{j},\ldots\alpha_{i},\ldots,\alpha_{N}\rangle\;,\] (B.7) for all pairs \(i>j\). A system has full permutational invariance if its Hamiltonian \(\hat{H}\) commutes with all permutation operators. The totally-symmetric subspace (TSS) of the many-body Hilbert space is simultaneously invariant under all permutations 28. A basis of the TSS can be obtained by symmetrizing the many-body configurations \(|\alpha_{1},\dots,\alpha_{N}\rangle\) with respect to all permutations. It can be labelled by the numbers \(N_{1},\dots,N_{q}\) of units occupying each level, with \(\sum_{\alpha=1}^{q}N_{\alpha}=N\). The dimension of the TSS, Footnote 28: Unless permutational symmetry is spontaneously broken or fragmentation phenomena take place [83].
\[\text{dim TSS}\ =\begin{pmatrix}N+q-1\\ q-1\end{pmatrix}\quad\underset{N\to\infty}{\sim}\quad\frac{N^{q-1}}{(q-1)!}\,\] (B.8) is only polynomially large in \(N\), which allows for the exact numerical analysis of large systems. Due to the symmetry of \(\hat{H}\), the time-evolution of totally symmetric initial states never leaves the TSS. Typically, such initial states may be simple products of identical single-body states, or ground states, like the ones prepared in experiments. It was shown by Sciolla and Biroli in Ref. [219] that the dynamics of symmetric observables within the TSS is classical in the thermodynamic limit. To show this, observe that the possible off-diagonal transitions governed by \(\hat{H}\) are uniquely identified by a set of integers \(m_{1},\dots,m_{q}\), \[|N_{1},\dots,N_{q}\rangle\to|N_{1}+m_{1},\dots,N_{q}+m_{q}\rangle\,.\] (B.9) For convenience, we turn the occupation numbers \(N_{\alpha}\) into fractions \(x_{\alpha}\equiv N_{\alpha}/N\), with \(0\leq x_{\alpha}\leq 1\) and \(\sum_{\alpha=1}^{q}x_{\alpha}=1\), and denote basis states by \(|\mathbf{x}\rangle\), where \(\mathbf{x}=(x_{1},\dots,x_{q})\). Hence, we write the matrix elements of \(\hat{H}\) as29 Footnote 29: For simplicity, we assume time-reversal invariance, which results in real matrix elements \(T_{\mathbf{m}}(\mathbf{x})\in\mathbb{R}\). \[H_{\mathbf{x},\mathbf{x}^{\prime}}\equiv\langle\mathbf{x}|\hat{H}|\mathbf{x}^{\prime}\rangle=V(\mathbf{x})\,\delta_{\mathbf{x},\mathbf{x}^{\prime}}-\sum_{\mathbf{m}\neq\mathbf{0}}T_{\mathbf{m}}(\mathbf{x})\delta_{\mathbf{x},\mathbf{x}^{\prime}+\mathbf{m}/N},\] (B.10) with \(\mathbf{m}=(m_{1},\dots,m_{q})\in\mathbb{Z}^{q}\). Terms in the Hamiltonian \(\hat{H}\) involving up to \(k\) bodies yield "local" transitions in the TSS basis, characterized by \(|\mathbf{m}|\equiv\sum_{\alpha}|m_{\alpha}|\leq 2k\). By the extensivity of the Hamiltonian \(\hat{H}\), both \(V(\mathbf{x})\) and \(T_{\mathbf{m}}(\mathbf{x})\) are extensive, \[V(\mathbf{x})\sim N\,v(\mathbf{x}),\qquad T_{\mathbf{m}}(\mathbf{x})\sim N\,t_{\mathbf{m}}(\mathbf{x}).\] (B.11) Crucially, the densities \(v\) and \(t_{\mathbf{m}}\) are smooth functions of \(\mathbf{x}\): they generally result from combinatoric factors of the occupation numbers, and the matrix elements of \(\hat{H}\) between two TSS states are insensitive, to leading order in the thermodynamic limit, to small changes in the occupation numbers \(N_{\alpha}\to N_{\alpha}\pm 1,\pm 2,\dots\) [219].
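The polynomial growth in Eq. (B.8) is what makes the exact numerical analysis of large systems feasible. A minimal Python illustration (standard library only; the system sizes below are arbitrary examples) tabulates the TSS dimension against the full Hilbert-space dimension \(q^{N}\):

```python
from math import comb

# dim TSS = C(N + q - 1, q - 1), Eq. (B.8): polynomial in N,
# while the full Hilbert space grows exponentially as q**N.
for N in (10, 100, 1000):
    for q in (2, 3):
        print(f"N={N:5d}, q={q}:  dim TSS = {comb(N + q - 1, q - 1):>8d}   (full space: {q}**{N})")
```

For \(q=2\) the dimension is just \(N+1\) (the Dicke ladder), and for \(q=3\) it grows as \(N^{2}/2\), in agreement with the \(N^{q-1}/(q-1)!\) asymptotics of Eq. (B.8).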
The extensivity and smoothness properties above allow one to rewrite the Schrödinger equation in the TSS as \[\frac{i}{N}\frac{\partial}{\partial t}\psi(\mathbf{x},t)=\left[v(\mathbf{x})-\sum_{0\leq|\mathbf{m}|\leq 2k}t_{\mathbf{m}}(\mathbf{x})\,\cosh\left(\frac{\mathbf{m}}{N}\frac{\partial}{\partial\mathbf{x}}\right)\,\right]\,\psi(\mathbf{x},t)\;.\] (B.12) Equation (B.12) shows that the dynamics of wave-functions in the TSS is governed by the effective Hamiltonian \[\mathcal{H}_{\text{cl}}(\mathbf{\hat{q}},\mathbf{\hat{p}})\equiv v(\mathbf{\hat{q}})-\sum_{\mathbf{m}}t_{\mathbf{m}}(\mathbf{\hat{q}})\,\cosh\left(\mathbf{m}\cdot\mathbf{\hat{p}}\right)\,,\] (B.13) expressed in terms of the conjugated canonical operators \(\hat{\mathbf{\xi}}=(\hat{\mathbf{q}},\hat{\mathbf{p}})\), \[\frac{N_{\alpha}}{N}\mapsto\hat{q}_{\alpha},\qquad-i\frac{\partial}{\partial N_{\alpha}}\mapsto\hat{p}_{\alpha}\;,\] (B.14) with an effective Planck constant \[\hbar_{\rm eff}\equiv\frac{1}{N}\qquad(\hbar=1\text{ in our units})\;,\] (B.15) that approaches zero in the thermodynamic limit. Thus, the original system of all-to-all interacting \(q\)-level units is mapped to \(n=q-1\) collective degrees of freedom.30 As outlined in the previous section, its quantum dynamics is equivalent, in the thermodynamic limit, to the one governed by the Hamilton equations generated by \(\mathcal{H}_{\rm cl}\). Footnote 30: Notice that the exact constraint \(\sum_{\alpha}x_{\alpha}=1\) can be solved explicitly, eliminating one degree of freedom. ### Beyond global permutational symmetry The semiclassical approach reviewed in the previous Subsection B.2 and in Sec. 4.1.1 applies to a much wider class of states and models than discussed therein. One natural extension consists of a _composite system of \(M\) collective subsystems_, possibly composed of different kinds of degrees of freedom. This is possible if interactions couple the various subsystems uniformly in their elementary units, i.e., via collective operators only. In that case the global system has a semiclassical description: when each subsystem is large, the global system is described by \(\sum_{m=1}^{M}(q_{m}-1)\) semiclassical collective degrees of freedom, where \(q_{m}\) is the number of levels of the \(m\)-th subsystem. For example, the Dicke model, where \(N\) spins interact collectively with a cavity mode [260], can be viewed as an example with two classical degrees of freedom, one for the collective spin and one for the cavity mode. The same holds for the two-species kicked top [335]. A second, closely related generalization is represented by _non-symmetric states which partially break the full permutational symmetry_. Such states may be obtained by bringing together a number \(M\ll N\) of initially separated subsystems. In this case, the full permutational symmetry breaks down into the product of smaller permutational symmetries acting separately on each subsystem. While the full system evolves outside of its totally symmetric subspace (TSS), the restricted symmetry allows a description of the dynamics within the product of the TSSs of the \(M\) individual subsystems. The semiclassical theory can thereby be applied in the thermodynamic limit, and one ends up with a few-body semiclassical system described by \(M\times(q-1)\) collective degrees of freedom. In this case, the Hamiltonian depends on these variables only via the \(q-1\) global collective combinations, leaving all the \((M-1)\times(q-1)\) remaining coordinates frozen in their initial values.
A simple example is given by a permutationally invariant system of \(N\) spins-\(1/2\) initially in a random product state \(|\uparrow\downarrow\downarrow\uparrow\uparrow\downarrow\cdots\rangle\) of spins pointing up or down along a given axis. Such a state is far away from the Dicke manifold of maximal total spin length \(N/2\). Grouping together the spins pointing in the same direction into two subsystems \(A\) and \(B\), with \(N_{A}\) and \(N_{B}\) spins respectively, the global system may be viewed as two interacting collective spins \(\hat{\vec{S}}_{A}\), \(\hat{\vec{S}}_{B}\), of length \(N_{A}/2\) and \(N_{B}/2\) respectively, initially pointing in opposite directions. In agreement with the above observation, the motion of the two spins is not independent: the Hamiltonian generates a nonlinear collective precession, and the angle between \(\hat{\vec{S}}_{A}\) and \(\hat{\vec{S}}_{B}\) is a constant of motion. ## Appendix C Asymptotic estimates for \(f_{\mathbf{k}}(\alpha)\) Here we review the properties of the Fourier transform of \(J/\|\mathbf{r}\|^{\alpha}\) on a periodic \(d\)-dimensional lattice of \(V=L^{d}\) sites, which we denote \(f_{\mathbf{k}}(\alpha)\): \[f_{\mathbf{k}}(\alpha)=\sum_{\mathbf{r}\neq\mathbf{0}}\frac{e^{-i\mathbf{k}\cdot\mathbf{r}}}{\|\mathbf{r}\|^{\alpha}}\bigg{/}\sum_{\mathbf{r}\neq\mathbf{0}}\frac{1}{\|\mathbf{r}\|^{\alpha}}. \tag{C.1}\] The properties derived below only rely on the asymptotic decay of interactions \(J_{\mathbf{r},\mathbf{r}^{\prime}}\sim 1/\|\mathbf{r}-\mathbf{r}^{\prime}\|^{\alpha}\) -- neither on the details of \(J_{\mathbf{r},\mathbf{r}^{\prime}}\) at short distances nor on the specific lattice. ### Strong long-range regime (\(0<\alpha<d\)) For \(0<\alpha<d\) the leading behavior is captured by approximating sums with integrals in Eq. (C.1). As we are interested in the scaling with \(L\) only, we do not keep track of prefactors. Following the standard procedure for Fourier transforming a radial function, we switch to spherical coordinates and integrate over all the angles: \[f_{\mathbf{k}\neq\mathbf{0}}(\alpha)\sim\frac{1}{L^{d-\alpha}}\int_{1}^{L}d\rho\,\rho^{d-1-\alpha}\,\frac{\mathcal{J}_{d/2-1}(|\mathbf{k}|\rho)}{(|\mathbf{k}|\rho)^{d/2-1}}, \tag{C.2}\] where \(\mathcal{J}_{\nu}(x)\) is the standard Bessel function of order \(\nu\). For finite \(|\mathbf{k}|\) the right-hand side always vanishes in the limit \(L\to\infty\). A finite value of \(f_{\mathbf{k}\neq\mathbf{0}}\) is only obtained when \(|\mathbf{k}|\propto 1/L\). Recalling the definition \(\mathbf{k}\equiv\mathbf{k}_{\mathbf{n}}\equiv 2\pi\mathbf{n}/L\), we make the substitution \(\rho=Ls\) and take \(L\to\infty\): \[f_{\mathbf{k}\neq\mathbf{0}}(\alpha)\equiv f_{\mathbf{n}\neq\mathbf{0}}(\alpha)\sim\int_{0}^{1}ds\,s^{d-1-\alpha}\,\frac{\mathcal{J}_{d/2-1}(2\pi|\mathbf{n}|s)}{(2\pi|\mathbf{n}|s)^{d/2-1}}. \tag{C.3}\] Thus, for \(0<\alpha<d\), \(f_{\mathbf{k}\neq\mathbf{0}}\) is actually a function of the discrete index \(\mathbf{n}\). For large \(|\mathbf{n}|\) we obtain the asymptotic estimate \[f_{\mathbf{n}\neq\mathbf{0}}(\alpha)\sim\frac{A(\alpha)}{|\mathbf{n}|^{d-\alpha}}+\frac{B(\alpha)}{|\mathbf{n}|^{(d+1)/2}}\,. \tag{C.4}\] The first [second] term governs the asymptotic decay of the discrete coefficients \(f_{\mathbf{n}\neq\mathbf{0}}(\alpha)\) for \((d-1)/2<\alpha<d\) [respectively \(0<\alpha<(d-1)/2\)].
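The discrete-index decay in Eq. (C.4) is straightforward to check numerically. A minimal Python sketch in \(d=1\) (the lattice size and the value of \(\alpha\) are illustrative choices, and the periodic distance \(\min(r,L-r)\) is assumed for \(\|\mathbf{r}\|\)):

```python
import numpy as np

L, alpha = 4096, 0.5                      # d = 1, strong long-range: 0 < alpha < d
r = np.arange(1, L)
dist = np.minimum(r, L - r)               # periodic lattice distance
norm = np.sum(1.0 / dist**alpha)

for n in (4, 8, 16, 32):
    k = 2 * np.pi * n / L
    f_n = np.sum(np.cos(k * r) / dist**alpha) / norm  # Eq. (C.1) at k_n = 2*pi*n/L
    print(f"n={n:3d}:  f_n * n^(d-alpha) = {f_n * n**(1 - alpha):.4f}")
```

For \(d=1\) one has \((d-1)/2=0\), so the first term of Eq. (C.4) governs the decay for all \(0<\alpha<1\), and the printed combination approaches a constant for \(1\ll|\mathbf{n}|\ll L\).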
### Weak long-range regime (\(\alpha>d\)) For \(\alpha>d\) the function \(f_{\mathbf{k}}(\alpha)\) attains a finite limit for all \(\mathbf{k}\) as \(L\to\infty\). For small \(\mathbf{k}\), this function has a singular behavior. In this "large-scale" limit it is again legitimate to replace the sum by the corresponding integral. Proceeding similarly to Eq. (C.2), we find \[f_{\mathbf{k}\neq\mathbf{0}}(\alpha)\sim\int_{1}^{\infty}d\rho\,\rho^{d-1-\alpha}\,\frac{\mathcal{J}_{d/2-1}(|\mathbf{k}|\rho)}{(|\mathbf{k}|\rho)^{d/2-1}}\bigg{/}\int_{1}^{\infty}d\rho\,\rho^{d-1-\alpha}C_{d} \tag{C.5}\] where \(C_{d}=2^{-(d/2-1)}/\Gamma(d/2)\). Here the short-distance part gives a regular contribution \(\mathcal{O}(|\mathbf{k}|^{2})\) and the long-distance part gives a singular contribution \(\mathcal{O}(|\mathbf{k}|^{\alpha-d})\): \[f_{\mathbf{k}\neq\mathbf{0}}(\alpha)\sim 1-\bar{A}(\alpha)|\mathbf{k}|^{\alpha-d}-\bar{B}(\alpha)|\mathbf{k}|^{2}. \tag{C.6}\] The first [second] term governs the asymptotic low-momentum behavior of \(f_{\mathbf{k}\neq\mathbf{0}}(\alpha)\) for \(d<\alpha<d+2\) [respectively \(\alpha>d+2\)]. ## Appendix D Exact solution of quasi-static drive for a single mode ### Fidelity and defect density The dynamics of a single spin wave corresponds to that of a single harmonic oscillator and can be solved exactly [195; 196; 197] for any time-dependent frequency. Any dynamical state \(\psi(x,t)\) in the representation of the coordinate \(x\) can be expressed as \[\psi(x,t)=\sum_{n}\alpha_{n}\psi_{n}(x,t),\] (D.1) where \(\alpha_{n}\) are time-independent constants and the dynamical eigenstates are given by \[\psi_{n}(x,t)=\frac{1}{\sqrt{2^{n}n!}}\left(\frac{1}{2\pi\xi^{2}(t)}\right)^{\frac{1}{4}}e^{-W(t)\frac{x^{2}}{2}}H_{n}\left(\frac{x}{\sqrt{2}\xi(t)}\right)e^{-i\left(n+\frac{1}{2}\right)\lambda(t)};\] (D.2) the expressions for the effective frequency \(W(t)\) and the (inconsequential) phase \(\lambda(t)\) are given in the main text. If the initial state is a pure state of the basis (D.2), specifically the ground state in our case, then one has \(\alpha_{n}=0\) for all \(n>0\), and we recover the single squeezed state generated by the operator in Eq. (86). This state describes the dynamics at all times, and thus in the exact dynamical basis (D.2) no excited states will be generated. However, at each finite time \(t>-h_{\text{cr}}/\delta\) the squeezed state \(\psi_{0}(x,t)\) will have a finite overlap with all states in the adiabatic basis \(\psi_{n}^{\text{ad}}(x,t)\), which is defined as \[\psi_{n}^{\text{ad}}(x,t)=\frac{1}{\sqrt{2^{n}n!}}\left(\frac{\Omega(t)}{\pi}\right)^{\frac{1}{4}}e^{-\Omega(t)\frac{x^{2}}{2}}H_{n}\left(x\,\sqrt{\Omega(t)}\right).\] (D.3) It is convenient to write the expression of the defect density as [169] \[n_{\text{exc}}(t)=\sum_{n\in 2\mathbb{N}}n|c_{n}(t)|^{2},\] (D.4) where the coefficients \[c_{n}(t)=\int_{-\infty}^{+\infty}dx\,\psi_{n}^{\text{ad}*}(x,t)\psi_{0}(x,t)\] (D.5) are the transition amplitudes between the dynamical state and the instantaneous equilibrium basis.
It is rather straightforward to get an exact expression for these coefficients, \[c_{n}(t)=\int_{-\infty}^{+\infty}dx\,\psi_{n}^{\text{ad}*}(x,t)\psi_{0}(x,t)=\frac{1}{\sqrt{2^{n}n!\pi}}\left(\frac{\Omega(t)}{2\xi^{2}(t)}\right)^{\frac{1}{4}}\int_{-\infty}^{+\infty}dx\,e^{-(\Omega(t)+\tilde{\Omega}(t))\frac{x^{2}}{2}}H_{n}\left(\sqrt{\Omega(t)}\,x\right).\] (D.6) Performing a change of variable, the integral can be cast into the form \[\int_{-\infty}^{+\infty}dx\,e^{-(\Omega(t)+\tilde{\Omega}(t))\frac{x^{2}}{2}}H_{n}\left(\sqrt{\Omega(t)}\,x\right)=\left(\Omega(t)\right)^{-\frac{1}{2}}\int_{-\infty}^{+\infty}e^{-\left(\frac{\tilde{\Omega}(t)}{\Omega(t)}+1\right)\frac{s^{2}}{2}}H_{n}\left(s\right)ds.\] Next we employ the generating function for Hermite polynomials in the integral, \[\int_{-\infty}^{+\infty}e^{-\left(\frac{\tilde{\Omega}(t)}{\Omega(t)}+1\right)\frac{s^{2}}{2}}H_{n}\left(s\right)ds=\lim_{z\to 0}\frac{d^{n}}{dz^{n}}\int_{-\infty}^{+\infty}e^{-\left(\frac{\tilde{\Omega}(t)}{\Omega(t)}+1\right)\frac{s^{2}}{2}}e^{2sz-z^{2}}ds=\sqrt{\frac{2\pi}{\left(\frac{\tilde{\Omega}(t)}{\Omega(t)}+1\right)}}\lim_{z\to 0}\frac{d^{n}}{dz^{n}}e^{-z^{2}\frac{\tilde{\Omega}(t)-\Omega(t)}{\tilde{\Omega}(t)+\Omega(t)}}\] (D.7) \[=\left\{\begin{array}{ll}\sqrt{\frac{2\pi}{\left(\frac{\tilde{\Omega}(t)}{\Omega(t)}+1\right)}}\,\frac{n!}{(n/2)!}\left(\frac{\Omega(t)-\tilde{\Omega}(t)}{\tilde{\Omega}(t)+\Omega(t)}\right)^{n/2}&\text{for }n\in 2\mathbb{Z},\\ 0&\text{for }n\in 2\mathbb{Z}+1.\end{array}\right.\] Thus the probability of having \(n\) excitations in the evolved state at the time \(t\) is given by \[\left|c_{n}(t)\right|^{2}=\frac{(n-1)!!}{n!!}\frac{\sqrt{2\Omega(t)}}{\xi(t)\left|\tilde{\Omega}(t)+\Omega(t)\right|}\left|\frac{\tilde{\Omega}(t)-\Omega(t)}{\tilde{\Omega}(t)+\Omega(t)}\right|^{n},\] (D.8) which leads to Eqs. (98) and (99) in the main text. ### Slow quench to the critical point Here we consider the half ramp with \(t\in[-h_{\text{cr}}/\delta,0]\). After the rescaling, this problem amounts to solving the Ermakov-Milne equation \[\ddot{\xi}(t)+\Omega(t)^{2}\xi(t)=\frac{1}{4\xi^{3}(t)},\] (D.9) with the rescaled frequency \[\Omega(t)^{2}=|t|+(N\delta)^{-2/3}\] (D.10) on the simplified extended time interval \(t\in(-\infty,0]\). The solution of Eq. (D.9) can be constructed from that of the associated classical harmonic oscillator \[\ddot{x}(t)+\Omega(t)^{2}x(t)=0.\] (D.11) This equation admits the two independent solutions \[x_{1}(t)=\text{Ai}\left(-\Omega^{2}(t)\right),\qquad x_{2}(t)=\text{Bi}\left(-\Omega^{2}(t)\right)\] (D.12) in terms of the Airy functions Ai and Bi. The two functions \(x_{1}(t)\) and \(x_{2}(t)\) have the constant and finite Wronskian \[\text{Wr}(x_{1},x_{2})=\frac{1}{\pi}.\] (D.13) It is convenient to combine the solutions of Eq. (D.11) into a pair of complex conjugate solutions \(w\) and \(w^{*}\) with \[w=ax_{1}(t)+bx_{2}(t),\] (D.14) where \(a\in\mathbb{C}\) and \(b\in\mathbb{R}\) are constants. Since Eq. (D.11) is homogeneous, one can rescale the solution by an overall phase factor and hence, without loss of generality, take \(b\) real. The function \[\xi(t)=\sqrt{ww^{*}}\] (D.15) is a solution of the Ermakov-Milne equation (88) if \[\text{Wr}(w,w^{*})=2i\,\text{Im}(a)\,b\,\text{Wr}(x_{1},x_{2})=i,\] (D.16) which fixes the product \(\text{Im}(a)\,b\).
To completely define the solution, it remains to find the appropriate value of \(\text{Re}(a)\), which has to satisfy the boundary conditions \[\lim_{t\rightarrow-\infty}\frac{1}{2\xi(t)^{2}}=\Omega(t),\qquad\lim_{t\rightarrow-\infty}\dot{\xi}(t)=0.\] (D.17) These conditions are consistent with the system being in the adiabatic ground state at large \(|t|\). In the \(t\to-\infty\) limit, \(\Omega^{2}\) diverges and one must use the asymptotic expansion of the Airy functions, \[\lim_{t\to-\infty}x_{1}(t)\sim\frac{\cos\left(\frac{2}{3}\Omega^{3}-\frac{\pi}{4}\right)}{\sqrt{\pi}\,\Omega^{1/2}},\qquad\lim_{t\to-\infty}x_{2}(t)\sim\frac{\sin\left(\frac{2}{3}\Omega^{3}-\frac{\pi}{4}\right)}{\sqrt{\pi}\,\Omega^{1/2}}.\] (D.18) In order to satisfy (D.17), the oscillatory terms in the expression for \(\xi\) must cancel at large \(|t|\), implying \[\text{Re}(a)=0,\qquad\text{Im}(a)=b.\] (D.19) Moreover one has to impose the condition \[\text{Wr}(w,w^{*})=2i\,\text{Im}(a)\,b\,\text{Wr}(x_{1},x_{2})=i,\] (D.20) which fully determines the coefficients in Eq. (D.14), \[\text{Im}(a)=b=\sqrt{\frac{\pi}{2}}.\] (D.21) The resulting expression for the scale factor is \[\xi(t)^{2}=\frac{\pi}{2}\text{Ai}\left(-\Omega(t)^{2}\right)^{2}+\frac{\pi}{2}\text{Bi}\left(-\Omega(t)^{2}\right)^{2},\] (D.22) and the number of defects is given by Eq. (98). The number of defects at the final point of the ramp (which is the critical point) is obtained by evaluating Eq. (98) at \(t=0\). At this instant the rescaled frequency is given by its finite-size correction \(\Omega(0)=\Lambda^{-1/3}=(N\delta)^{-1/3}\), and the scale factor reads \[\xi(0)^{2}=\frac{\pi}{2}\text{Ai}\left(-\Lambda^{-2/3}\right)^{2}+\frac{\pi}{2}\text{Bi}\left(-\Lambda^{-2/3}\right)^{2}.\] (D.23) Let us consider the thermodynamic limit \(\Lambda\to\infty\) first. In this case the argument of the Airy functions goes to zero and the terms in the square brackets of Eq. (98) read \[\frac{1}{4\xi(0)^{4}}=\frac{3^{8/3}\Gamma(2/3)^{4}}{16\pi^{2}},\qquad\frac{\dot{\xi}(0)}{\xi(0)}=\frac{3^{2/3}\Gamma(2/3)^{2}}{\Gamma(1/3)^{2}},\] (D.24) leading to \[n_{\text{exc}}(0)=\frac{\pi\,\Lambda^{1/3}}{3^{2/3}\Gamma(1/3)^{2}},\] (D.25) where we restricted to the leading term in the \(\Lambda\to\infty\) limit. Therefore the number of excitations diverges in the thermodynamic limit with a power \(N^{1/3}\). However, the residual heat is finite, since it is obtained by multiplying the divergent defect density with the vanishing oscillator frequency, \(E_{\text{res}}(0)=\Delta(0)\,n_{\text{exc}}(0)\), leading to \[E_{\text{res}}=\frac{\pi\,\delta^{1/3}}{3^{2/3}\Gamma(1/3)^{2}},\] (D.26) which agrees with the KZ scaling of Ref. [106]. For a finite-size system \(N<\infty\), the slow-ramp limit \(\delta\to 0\) coincides with the \(\Lambda\to 0\) limit of Eq. (98) evaluated at \(t=0\). The leading term in this case is generated by the velocity correction to the effective frequency, \[\lim_{\Lambda\to 0}\frac{\dot{\xi}(0)}{\xi(0)}=-\frac{5}{24}\Lambda^{2/3},\] (D.27) which, substituted into Eq. (98) evaluated at \(t=0\), gives \[n_{\rm exc}(0)=\frac{25}{2304}\Lambda^{2}\propto\delta^{2},\] (D.28) which leads to the expected adiabatic correction for the residual energy, \(E_{\rm res}\propto\delta^{2}\), in a finite-size system [183].
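The closed-form constants above follow from \(\mathrm{Ai}(0)=3^{-2/3}/\Gamma(2/3)\) and \(\mathrm{Bi}(0)=3^{-1/6}/\Gamma(2/3)\), and are easy to verify numerically; a minimal sketch using SciPy's Airy functions (the values of \(\Lambda\) are illustrative):

```python
import numpy as np
from math import gamma, pi
from scipy.special import airy

def xi0_sq(Lam):
    # scale factor at the end of the half ramp, Eq. (D.23)
    ai, _, bi, _ = airy(-Lam ** (-2.0 / 3.0))
    return 0.5 * pi * (ai**2 + bi**2)

# thermodynamic-limit value of 1/(4 xi(0)^4), Eq. (D.24)
exact = 3 ** (8.0 / 3.0) * gamma(2.0 / 3.0) ** 4 / (16 * pi**2)
for Lam in (1e2, 1e4, 1e6):
    print(f"Lambda={Lam:.0e}:  1/(4 xi0^4) = {1.0 / (4.0 * xi0_sq(Lam) ** 2):.6f}"
          f"   (exact limit {exact:.6f})")
```

Together with \(\Omega(0)=\Lambda^{-1/3}\), these numbers reproduce the prefactor of Eq. (D.25).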
### Full ramp In the previous subsections we obtained the analytic solution for a semi-infinite ramp with frequency \(\omega(t)^{2}=|t|\), starting at \(t=-\infty\) and terminating at \(t=0\). We now extend this treatment to the entire interval \(t\in(-\infty,+\infty)\). It is worth noting that for the full ramp we do not consider a finite \(\Delta_{N}\), since its calculation does not present any relevant difference from the half-ramp case. Taking the thermodynamic limit \(N\to\infty\) first, we consider a general solution in the form of Eq. (D.14), with boundary conditions obtained by matching to the half-ramp solution constructed above: for \(t<0\) the solution coincides with the previous one, while \(w(t)\) and \(\dot{w}(t)\) are required to be continuous as \(t\to 0^{+}\). ## Appendix E Entanglement dynamics under integrability-breaking perturbations ### Long-range Ising chain in a longitudinal field We consider the long-range quantum Ising chain of Eq. (1) supplemented by both transverse and longitudinal fields, \[\hat{H}=-\frac{J}{\mathcal{N}_{\alpha,N}}\sum_{i<j}\frac{\hat{\sigma}_{i}^{x}\hat{\sigma}_{j}^{x}}{|i-j|^{\alpha}}-h_{z}\sum_{i}\hat{\sigma}_{i}^{z}-h_{x}\sum_{i}\hat{\sigma}_{i}^{x}\;,\] (E.1) where now \(h_{z}\) and \(h_{x}\) are respectively the transverse and longitudinal field and \(\mathcal{N}_{\alpha,N}\) is the Kac normalization (3). This model has been considered by T. Mori in Ref. [291]. There, it is argued that the non-equilibrium dynamics of a long-range quantum Ising chain (with \(0<\alpha<1\) and with transverse field \(h_{z}=0.32J\) and longitudinal field \(h_{x}=0.26J\)) shows signatures of many-body chaos. The dynamics are studied by starting from the paramagnetic state with spins fully polarized along the \(z\) axis, i.e., from \(h_{z,0}=\infty\). (Note that \(x\leftrightarrow z\) have been exchanged in our conventions.) We apply here the non-equilibrium spin-wave theory and the theory of entanglement dynamics developed in the present chapter. Upon adding a longitudinal field, the classical equations of motion of the collective spin [cf. Eq. (126) of Section 4.1.1] now read \[\begin{cases}\dot{\theta}=2J_{0}\sin\theta\cos\phi\sin\phi+2h_{x}\sin\phi\\ \dot{\phi}=-2h_{z}+2J_{0}\cos\theta\cos^{2}\phi+2h_{x}\frac{\cos\theta}{\sin\theta}\cos\phi\;,\end{cases}\] (E.2) and the evolution equations for the spin-wave correlations (with \(\tilde{J}_{k}\equiv J_{0}\widetilde{f}_{\alpha,k}\)) are \[\begin{cases}\dot{G}_{k}^{qq}=&4\tilde{J}_{k}\,\cos\theta\cos\phi\sin\phi\,G_{k}^{qq}+4\left(J_{0}\cos^{2}\phi+h_{x}\frac{\cos\phi}{\sin\theta}-\tilde{J}_{k}\,\sin^{2}\phi\right)G_{k}^{qp},\\ \dot{G}_{k}^{pp}=&-4\left(J_{0}\cos^{2}\phi+h_{x}\frac{\cos\phi}{\sin\theta}-\tilde{J}_{k}\,\cos^{2}\theta\cos^{2}\phi\right)G_{k}^{qp}-4\tilde{J}_{k}\,\cos\theta\cos\phi\sin\phi\,G_{k}^{pp},\\ \dot{G}_{k}^{qp}=&-2\left(J_{0}\cos^{2}\phi-\tilde{J}_{k}\,\cos^{2}\theta\cos^{2}\phi\right)G_{k}^{qq}+2\left(J_{0}\cos^{2}\phi+h_{x}\frac{\cos\phi}{\sin\theta}-\tilde{J}_{k}\,\sin^{2}\phi\right)G_{k}^{pp}.\end{cases}\] (E.3) We first study the mean-field case \(\alpha=0\), verifying that the growth of entanglement entropy is logarithmic for the considered quench, see Figure E.29, as follows from our predictions.
However, due to the closeness to a nearby dynamical critical point, the short-time dynamics of entanglement is fast, and the universal logarithmic behavior emerges only over longer times. In agreement with our theory, larger system sizes are required to observe the asymptotic behavior, as confirmed by the ED numerical results. Because of these strong finite-size effects, for \(\alpha>0\) we did not attempt numerical investigations with MPS-TDVP, which is limited to \(N\lesssim 100\), but directly studied the limiting behavior in the thermodynamic limit via a full spin-wave calculation of entanglement dynamics. The results are shown in Figure E.30, left panel, for increasing values of \(\alpha\), and they confirm that the growth of entanglement entropy is linear for \(\alpha>0\), as suggested by the results of Ref. [291] in view of the interpretation provided by the theory presented here. To fully corroborate this picture, we perform an analysis analogous to that outlined above for the Ising chain in a transverse field. In Figure E.31, we report the time evolution of the \(k\)-resolved spin-wave population for the same quench. The dynamical production of long-wavelength spin-wave excitations is unstable, i.e., exponentially growing. This occurrence hints at the fact that the quench considered in Ref. [291] falls into a layer of instability of the many-body semiclassical dynamics, characterized by a positive Kolmogorov-Sinai entropy rate (168) and hence a linear growth of entanglement entropy in time. This is confirmed by the spherical plot in Figure E.30, right panel, of the Kolmogorov-Sinai entropy rate \(\Lambda_{\text{KS}}\) as a function of the initial configuration on the Bloch sphere (168). The considered quench falls inside the instability layer which opens up around the classical separatrix upon increasing \(\alpha>0\). However, we emphasize that a large set of initial configurations shows a stable generation of spin waves, and hence slow logarithmic growth of entanglement entropy, even for this Hamiltonian (the black region in the spherical plot). ### Short-range perturbations to collective spin models The above analysis shows that slow logarithmic growth of the entanglement entropy can be expected in the quench dynamics of spin-\(1/2\) systems with long-range interactions. The underlying mechanism involves the existence of a _discrete_ set of excitation modes (the long-wavelength spin waves) which yield a bounded, subleading contribution to entanglement when non-resonantly driven by the collective spin dynamics. However, this is an intrinsic property of slowly-decaying interactions, which generically fails for other types of perturbations. To show this explicitly, we consider _additional_ finite-range interactions as perturbations to an integrable system with collective interactions. To be specific, we consider a Hamiltonian of the form \[\hat{H}_{lr+sr}=\hat{H}_{\alpha}-\lambda\sum_{i}\hat{\sigma}_{i}^{x}\,\hat{\sigma}_{i+1}^{x}\;,\] (E.4) where \(\hat{H}_{\alpha}\) is the long-range quantum Ising chain in Eq. (1) (\(d=1\)). In Refs. [56; 57], it has been shown that the nonequilibrium spin-wave approach adequately describes the dynamics of this Hamiltonian when \(\lambda\ll J\). We show that the two kinds of perturbations, corresponding to raising \(\alpha\) or \(\lambda\) from \(0\), respectively, lead to radically different scenarios of entanglement growth, in accordance with the theory developed above. For the spin-wave analysis of the Hamiltonian \(\hat{H}_{lr+sr}\) in Eq.
(E.4), it is actually sufficient to substitute \(J_{k\neq 0}=J_{0}\widetilde{f}_{\alpha,k}+\lambda\cos k\) in Eqs. (161). In the case \(\alpha=0\), \(\lambda\neq 0\), the spin-wave Hamiltonian features two fundamental differences: firstly, it is equivalent to a system of quantum oscillators with short-range interactions, hence described by a _continuous_ dispersion relation with a finite bandwidth (apart from the singular \(k=0\) mode); secondly, all excitations with \(k\neq 0\) now live on a widely separated energy scale \(\lambda\ll J\) with respect to the classical drive. Therefore, away from fine-tuned resonances, the system typically behaves as a standard model of free bosonic excitations, where the fast, non-resonant drive amounts to modifying their effective dispersion relation. Such a system is expected to exhibit light-cone spreading of quantum correlations and linear growth of entanglement entropy, according to the standard Calabrese-Cardy quasiparticle picture [611], in stark contrast to the perturbation with \(\alpha>0\), \(\lambda=0\) discussed above. To be fair, it should be noted that the \(\lambda\)-perturbed model features a coexistence of two mechanisms, namely the spin squeezing associated with the singular \(k=0\) mode and the traveling quasiparticles associated with all the remaining \(k\neq 0\) modes. Although the second mechanism is clearly dominant [linear over logarithmic \(S(t)\)], for tiny perturbations \(\lambda\ll J\) a long time is required to appreciate this distinction. In practice, for small sizes, short times, weak quenches, and/or weak perturbations, one will always observe a crossover from an initial logarithmic growth to an asymptotic linear growth. We verified the predictions above explicitly: see the comparison between the two perturbations in Figure E.32. We conclude that, as expected based on the present analysis, the nature of the integrability-breaking perturbation is crucial, and the slow growth of entanglement analyzed here is a characteristic property of long-range interactions. Figure E.29: Comparison between entanglement entropy growth computed numerically (ED) and analytically (semiclassical formula) for \(\alpha=0\), for the quenches of Ref. [291] [Eq. (E.1) with \(h_{z}=0.32J_{0}\), \(h_{x}=0.26J_{0}\), initial state polarized along \(z\)]. The growth is logarithmic, but finite-size effects are strong due to the closeness to a mean-field dynamical critical point. Figure E.30: Left panel: Comparison between entanglement entropy growth obtained via the full spin-wave computation with \(N=500\), for increasing \(\alpha=0\), \(0.3\) and \(0.5\), for the quenches of Ref. [291] [Eq. (E.1) with \(h_{z}=0.32J_{0}\), \(h_{x}=0.26J_{0}\), initial state polarized along \(z\)]. While the growth is logarithmic in the integrable case \(\alpha=0\), the breaking of integrability induced by a finite range triggers a linear growth of \(S(t)\), due to unstable excitation of long-wavelength spin waves: see the text and Figure E.31. Right panel: Spherical plot of the Kolmogorov-Sinai entropy rate \(h_{KS}(\theta_{0},\phi_{0})\) versus the initial spin-polarized configuration, for \(\alpha=0.7\). Figure E.31: Time-dependent \(k\)-resolved spin-wave population for the quenches of Ref. [291] [Eq. (E.1) with \(h_{z}=0.32J_{0}\), \(h_{x}=0.26J_{0}\), initial state polarized along \(z\)]. Collective quantum fluctuations with \(k=0\) grow polynomially, whereas the long-wavelength modes \(k=\pm 2\pi/L\) (left, \(\alpha=0.3\)) and \(k=\pm 2\pi/L,4\pi/L,6\pi/L\) (right, \(\alpha=0.5\)) diverge exponentially fast in time. Here we have set \(N=500\). ## Appendix F Floquet Hamiltonian and high-frequency expansion Whenever the time-dependent Hamiltonian of a system has a period \(T\), i.e., \(\hat{H}(t+T)=\hat{H}(t)\), the resulting time-evolution operator \(\hat{U}(t_{2},t_{1})\) satisfies \[\hat{U}(t_{0}+nT,t_{0})=\big{[}\hat{U}(t_{0}+T,t_{0})\big{]}^{n} \tag{F.1}\] for any integer \(n\).
Accordingly, it is convenient to define an effective static Hamiltonian \(\hat{H}_{\text{eff}}\) [365; 400], \[\hat{U}_{F}\equiv\hat{U}(t_{0}+T,t_{0})=\mathcal{T}e^{-i\int_{t_{0}}^{t_{0}+T}d\tau\,\hat{H}(\tau)}\equiv e^{-iT\hat{H}_{\text{eff}}},\] (F.2) usually referred to as the _Floquet Hamiltonian_. Its spectrum is defined up to integer multiples of the frequency \(2\pi/T\) and is independent of the choice of the reference time \(t_{0}\). The state of the system at stroboscopic times \(t_{n}=t_{0}+nT\) is therefore entirely and unambiguously determined by the Floquet Hamiltonian \(\hat{H}_{\text{eff}}\). A series expansion of \(\hat{H}_{\text{eff}}\) in powers of the period \(T\), known as the _Magnus expansion_, can be written as \[\hat{H}_{\text{eff}}=\sum_{n=0}^{\infty}\hat{H}_{\text{eff}}^{(n)},\] (F.3) with \(\hat{H}_{\text{eff}}^{(n)}\) proportional to \(T^{n}\). Explicitly, the first terms read \[\hat{H}_{\text{eff}}^{(0)}=\int_{t_{0}}^{t_{0}+T}\frac{d\tau_{1}}{T}\,\hat{H}(\tau_{1}),\] (F.4) \[\hat{H}_{\text{eff}}^{(1)}=-\frac{iT}{2}\int_{t_{0}}^{t_{0}+T}\frac{d\tau_{1}}{T}\int_{t_{0}}^{\tau_{1}}\frac{d\tau_{2}}{T}\,\big{[}\hat{H}(\tau_{1}),\hat{H}(\tau_{2})\big{]},\] (F.5) with the higher-order terms involving an increasing number of nested commutators of \(\hat{H}\) at different times. This expansion is convergent when \(T\) is smaller than the inverse of the maximal extension of the spectrum of \(\hat{H}(t)\) [365].
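As a concrete illustration of Eqs. (F.2)-(F.5), the following minimal Python sketch -- a toy two-level driven system with illustrative parameters, not a model from the text -- extracts the Floquet Hamiltonian from a Trotterized one-period propagator and compares it with the zeroth-order Magnus term:

```python
import numpy as np
from scipy.linalg import expm, logm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

omega = 20.0            # drive frequency (high-frequency regime)
T = 2 * np.pi / omega   # period
H0 = 0.5 * sz

def H(t):
    return H0 + 1.0 * np.cos(omega * t) * sx

# Trotterized time-ordered exponential over one period, Eq. (F.2)
n_steps = 2000
dt = T / n_steps
U = np.eye(2, dtype=complex)
for k in range(n_steps):
    t = (k + 0.5) * dt
    U = expm(-1j * H(t) * dt) @ U

H_eff = 1j * logm(U) / T   # Floquet Hamiltonian from U_F = exp(-i T H_eff)
H0_avg = H0                # zeroth Magnus term: the period average of H(t), Eq. (F.4)
print(np.round(H_eff, 4))
print("deviation from H^(0):", np.linalg.norm(H_eff - H0_avg))
```

The deviation is \(\mathcal{O}(T)\), consistent with the first-order term (F.5), and shrinks as the drive frequency is increased.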
2303.08517
$L^{q}$ estimates for nonlocal p-Laplacian type equations with BMO kernel coefficients in divergence form
We study $s$-fractional $p$-Laplacian type equations with discontinuous kernel coefficients in divergence form to establish $W^{s+\sigma,q}$ estimates for any choice of pairs $( \sigma,q)$ with $q\in(p,\infty)$ and $\sigma\in\left(0,\min\left\{\frac{s}{p-1},1-s\right\}\right)$ under the assumption that the associated kernel coefficients have small BMO seminorms near the diagonal. As a consequence, we find in the literature an optimal fractional Sobolev regularity of such a non-homogeneous nonlocal equation when the right-hand side is presented by a suitable fractional operator. Our results are new even in the linear case.
Sun-Sig Byun, Kyeongbae Kim
2023-03-15T10:51:50Z
http://arxiv.org/abs/2303.08517v1
\(L^{q}\) estimates for nonlocal p-Laplacian type equations with BMO kernel coefficients in divergence form ###### Abstract We study \(s\)-fractional \(p\)-Laplacian type equations with discontinuous kernel coefficients in divergence form to establish \(W^{s+\sigma,q}\) estimates for any choice of pairs \((\sigma,q)\) with \(q\in(p,\infty)\) and \(\sigma\in\left(0,\min\left\{\frac{s}{p-1},1-s\right\}\right)\) under the assumption that the associated kernel coefficients have small BMO seminorms near the diagonal. As a consequence, we find in the literature an optimal fractional Sobolev regularity of such a non-homogeneous nonlocal equation when the right-hand side is presented by a suitable fractional operator. Our results are new even in the linear case. Key words and phrases: Nonlocal; Calderon-Zygmund type estimate; Nonlinear 2020 Mathematics Subject Classification: 35B65, 35D30, 35J70, 35R05 ## 1. Introduction ### Overview and main results In this paper, we study the nonhomogeneous problem for the \(s\)-fractional \(p\)-Laplacian type equation \[(-\Delta_{p})_{A}^{s}u=(-\Delta_{p})^{\frac{s}{p}}f\quad\text{in }\Omega, \tag{1.1}\] where \[(-\Delta_{p})_{A}^{s}u(x)=\,\text{p.v.}\int_{\mathbb{R}^{n}}A(x,y)\frac{|u(x)-u(y)|^{p-2}(u(x)-u(y))}{|x-y|^{s(p-1)}}\frac{dy}{|x-y|^{n+s}}\] and \[(-\Delta_{p})^{\frac{s}{p}}f(x)=\,\text{p.v.}\int_{\mathbb{R}^{n}}|f(x)-f(y)|^{p-2}(f(x)-f(y))\frac{dy}{|x-y|^{n+s}}=(-\Delta_{p})_{1}^{\frac{s}{p}}f(x).\] Here, \(0<s<1\), \(2\leq p<\infty\), \(\Omega\subset\mathbb{R}^{n}\) is an open and bounded set with \(n\geq 2\), and \(A=A(x,y):\mathbb{R}^{n}\times\mathbb{R}^{n}\to\mathbb{R}\) is a kernel coefficient with \[A(x,y)=A(y,x)\quad\text{and}\quad\Lambda^{-1}\leq A(x,y)\leq\Lambda\] for all \((x,y)\in\mathbb{R}^{n}\times\mathbb{R}^{n}\) and for some constant \(\Lambda\geq 1\). Moreover, \(f\) is a given measurable function and \(u=u(x):\mathbb{R}^{n}\to\mathbb{R}\) is the unknown. In fact, (1.1) occurs as the Euler-Lagrange equation of the functional \[v\mapsto\int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}}\left(\frac{1}{p}A(x,y)\left|\frac{v(x)-v(y)}{|x-y|^{s}}\right|^{p}-|f(x)-f(y)|^{p-2}\left(f(x)-f(y)\right)\left(\frac{v(x)-v(y)}{|x-y|^{s}}\right)\right)\frac{dx\,dy}{|x-y|^{n}}\] defined for \(v\in W^{s,p}(\mathbb{R}^{n})\) satisfying \(v=g\) on \(\mathbb{R}^{n}\setminus\Omega\) for some boundary data \(g\in W^{s,p}(\mathbb{R}^{n})\). The aim of this paper is to establish that the implication \[\left(f(x)-f(y)\right)\in L^{q}_{\text{loc}}\left(\Omega\times\Omega\,;\,\frac{dx\,dy}{|x-y|^{n+\sigma q}}\right)\Longrightarrow\left(\frac{u(x)-u(y)}{|x-y|^{s}}\right)\in L^{q}_{\text{loc}}\left(\Omega\times\Omega\,;\,\frac{dx\,dy}{|x-y|^{n+\sigma q}}\right) \tag{1.2}\] holds with the desired Calderon-Zygmund type estimate (1.8) for each choice of two numbers \(q,\sigma\) with \[q\in(p,\infty)\text{ and }\sigma\in\left(0,\min\left\{\frac{s}{p-1},1-s\right\}\right) \tag{1.3}\] under a possibly discontinuous kernel coefficient \(A(x,y)\). For the classical case \(p=q\) and \(\sigma=0\), there is a unique weak solution \(u\in W^{s,p}(\Omega)\cap L^{p-1}_{sp}(\mathbb{R}^{n})\) with the standard energy estimate (2.15), as follows from Lemma 2.7 below. We first discuss a motivation for the study of (1.1) coming from the corresponding local problem when \(s=1\).
According to the well-known elliptic theory, for a weak solution \(w\) of \[-\text{div}\left(B|Dw|^{p-2}Dw\right)=-\text{div}\left(|F|^{p-2}F\right)\quad\text{in }\Omega,\] which comes from the Euler-Lagrange equation of the functional \[v\mapsto\int_{\Omega}\frac{1}{p}B|Dv|^{p}-|F|^{p-2}F\cdot Dv\,dx\] defined for \(v\in W^{1,p}(\Omega)\) with \(v=g\) on \(\partial\Omega\), there holds \[F\in L^{q}_{\rm loc}\Longrightarrow Dw\in L^{q}_{\rm loc} \tag{1.4}\] for every \(q\in[p,\infty)\), provided that the principal coefficient function \(B:\Omega\to\mathbb{R}\) has a sufficiently small BMO seminorm; see [10, 22, 25, 9] and references therein. To find a nonlocal analogue of the Calderon-Zygmund theory (1.4), we first need to introduce a fractional gradient operator and a fractional divergence operator by adopting the notation from the very interesting work [30]. For \(t\in[0,1)\), the fractional gradient operator \(d_{t}:\mathcal{M}\left(\mathbb{R}^{n}\right)\to\mathcal{M}_{od}\left(\mathbb{R}^{n}\times\mathbb{R}^{n}\right)\) is defined by \[(d_{t}g)(x,y)=\frac{g(x)-g(y)}{|x-y|^{t}}\quad\text{if }x\neq y,\] where \(\mathcal{M}(\mathbb{R}^{n})\) is the function space consisting of all measurable real-valued functions defined on \(\mathbb{R}^{n}\) and \(\mathcal{M}_{od}(\mathbb{R}^{n}\times\mathbb{R}^{n})=\{F(x,y)\in\mathcal{M}(\mathbb{R}^{n}\times\mathbb{R}^{n})\,;\,F(x,y)=-F(y,x)\}\). Note that \(d_{t}g\) is well defined, as \(\{(x,y)\in\mathbb{R}^{n}\times\mathbb{R}^{n}\,;\,x=y\}\) is a measure zero set in \(\mathbb{R}^{n}\times\mathbb{R}^{n}\). For \(t\in(0,1)\), the fractional divergence operator \(\operatorname{div}_{t}:\mathcal{S}_{od}\left(\mathbb{R}^{n}\times\mathbb{R}^{n}\right)\to\mathcal{S}\left(\mathbb{R}^{n}\right)\) is defined by \[\operatorname{div}_{t}\left(F\right)(x)=\int_{\mathbb{R}^{n}}\frac{F(x,y)}{|x-y|^{t}}\frac{dy}{|x-y|^{n}},\] where \(\mathcal{S}\left(\mathbb{R}^{n}\right)\) is the classical Schwartz space and \(\mathcal{S}_{od}\left(\mathbb{R}^{n}\times\mathbb{R}^{n}\right)=\mathcal{S}(\mathbb{R}^{n}\times\mathbb{R}^{n})\cap\mathcal{M}_{od}(\mathbb{R}^{n}\times\mathbb{R}^{n})\) (see [41]). Then our equation (1.1) can be rewritten as \[\operatorname{div}_{s}\left(A(x,y)|d_{s}u|^{p-2}d_{s}u\right)=\operatorname{div}_{s}\left(|d_{0}f|^{p-2}d_{0}f\right),\] while the relation (1.2) can be rewritten as \[d_{0}f\in L^{q}_{\rm loc}\left(\Omega\times\Omega\,;\,\frac{dx\,dy}{|x-y|^{n+\sigma q}}\right)\Longrightarrow d_{s}u\in L^{q}_{\rm loc}\left(\Omega\times\Omega\,;\,\frac{dx\,dy}{|x-y|^{n+\sigma q}}\right)\] for each \(q\in(p,\infty)\) and \(\sigma\in\left(0,\min\left\{\frac{s}{p-1},1-s\right\}\right)\), which is the corresponding nonlocal version of our Calderon-Zygmund theory. We now mention the recent results for the Calderon-Zygmund theory of nonlocal problems. For \(p=2\), Mengesha, Schikorra and Yeepo [31] establish the Calderon-Zygmund theory for a variety of forcing terms, provided that the kernel coefficient \(A\) is Holder continuous, by obtaining suitable commutator estimates. On the other hand, in [35, 34], Nowak obtains Calderon-Zygmund-type estimates with respect to non-divergence data with the associated kernel coefficient \(A\) having a small \(BMO\) seminorm on \(\Omega\times\Omega\), via maximal function techniques. For the global estimate, Abdellaoui, Fernandez, Leonori and Younes [1] establish global regularity results for the fractional Laplacian, with the zero boundary condition and non-divergence data, by employing the Green function formula of the fractional Laplacian.
For the global estimate, Abdellaoui, Fernandez, Leonori and Younes [1] establish global regularity results for the fractional Laplacian, with the zero boundary condition and non-divergence data by employing the Green function formula of the fractional Laplacian. For the nonlinear case when \(p>2\), the Calderon-Zygmund theory for non-divergence data is recently investigated by Diening and Nowak in [17]. In particular, they accomplish this by obtaining precise pointwise bounds in terms of a certain fractional sharp maximal function. On the other hand, we here wish to find an optimal Calderon-Zygmund theory for divergence data by dealing with a suitable form of the right-hand side along with a minimal regularity assumptions on the kernel coefficients. In addition, we point out that unlike the methods used in [1, 31, 34, 35], we use the maximal function free technique introduced in [2] and later extensibly employed in the literature. Later in this paper, we show that our result covers the main result given in [34] (see Subsection 1.2 below). The main difficulty to handle such nonlinear problems is that even though \(u\) and \(v\) are weak solutions to (1.1), \(u\pm v\) is not a weak solution to (1.1), as usual. Indeed for \(p=2\), this linearity is used to prove comparison estimates of higher-order fractional gradients, which is an essential ingredient to improve the range of the fractional differentiability (see [34]). However, in this paper when dealing with nonlinear fractional problems, it seems difficult to find desired comparison estimates of higher-order fractional gradients. To overcome this, we turn to an interpolation argument along with slightly higher fractional Sobolev regularity of a solution, which turns out to be obtained regardless of the linearity, and then run a boot strap argument in order to prove comparison estimates for fractional gradients of higher order. We believe that this method is applicable for more general nonlinear nonlocal equations with nonstandard growth [6, 8, 36]. We further refer to [3, 4, 5, 18, 19, 20, 23, 24, 26, 27, 28, 29, 32, 33, 38, 39, 40] for a further discussion of various regularity results of nonlocal problems. We next give the definition of local weak solutions to (1.1). See Section 2 for related definitions and notations. **Definition**.: Let \(d_{0}f\in L^{p}_{\mathrm{loc}}\left(\frac{dx\,dy}{|x-y|^{n}};\,\Omega\times\Omega\right)\) and \(f\in L^{p-1}_{s}(\mathbb{R}^{n})\). We say that \(u\in W^{s,p}_{\mathrm{loc}}(\Omega)\cap L^{p-1}_{sp}(\mathbb{R}^{n})\) is a local weak solution to (1.1) if it satisfies for any \(\phi\in W^{s,p}_{c}(\Omega)\), \[\begin{split}&\int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}}A(x,y) \left(\frac{|u(x)-u(y)|}{|x-y|^{s}}\right)^{p-2}\left(\frac{u(x)-u(y)}{|x-y|^{s }}\right)\left(\frac{\phi(x)-\phi(y)}{|x-y|^{s}}\right)\frac{dx\,dy}{|x-y|^{n} }\\ &=\int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}}|f(x)-f(y)|^{p-2}\left( f(x)-f(y)\right)\left(\frac{\phi(x)-\phi(y)}{|x-y|^{s}}\right)\frac{dx\,dy}{|x-y|^{n} }.\end{split} \tag{1.5}\] _Remark 1_.: Since \(d_{0}f\in L^{p}_{\mathrm{loc}}\left(\frac{dx\,dy}{|x-y|^{n}};\Omega\times \Omega\right)\) and \(f\in L^{p-1}_{s}(\mathbb{R}^{n})\), Lemma 2.7 ensures that the right-hand side is well-defined. We next introduce a regularity assumption on the associated kernel coefficient \(A\), so called, \((\delta,R)\)-vanishing condition. **Definition**.: Let \(A:\mathbb{R}^{n}\times\mathbb{R}^{n}\to\mathbb{R}\) be a kernel coefficient. 
We say that, for \(\delta,R>0\), \(A\) is \((\delta,R)\)-vanishing in \(\Omega\times\Omega\) if \[\sup_{(x_{0},y_{0})\in\Omega\times\Omega}\sup_{0<r\leq R}\fint_{B_{r}(x_{0})}\fint_{B_{r}(y_{0})}|A(x,y)-(A)_{B_{r}(x_{0})\times B_{r}(y_{0})}|\,dx\,dy\leq\delta. \tag{1.6}\] In particular, we say that \(A\) is \((\delta,R)\)-vanishing in \(\Omega\times\Omega\) only at the diagonal if \[\sup_{x_{0}\in\Omega}\sup_{0<r\leq R}\fint_{B_{r}(x_{0})}\fint_{B_{r}(x_{0})}|A(x,y)-(A)_{B_{r}(x_{0})\times B_{r}(x_{0})}|\,dx\,dy\leq\delta. \tag{1.7}\] We now mention that the class of functions satisfying the \((\delta,R)\)-vanishing condition contains not only all continuous functions, but also a large class of functions with discontinuity. _Remark 2_.: We note that, given \(\delta>0\), any function belonging to the class \(VMO\) of functions with vanishing mean oscillation satisfies the \((\delta,R)\)-vanishing condition whenever \(R>0\) is sufficiently small depending on \(\delta\) (see [37]). On the other hand, there are many \((\delta,R)\)-vanishing kernel coefficients which do not belong to the \(VMO\) space. Let us assume that \(K\) is a merely measurable kernel which is not in the \(VMO\) space. Choose \[A(x,y)=\frac{\delta}{4\Lambda}K(x,y)+\frac{\Lambda}{2}\quad(x,y\in\mathbb{R}^{n})\] to see that (1.6) holds for any \(R>0\), but \(A\) does not belong to the \(VMO\) space. The next remark explains that (1.7) is a more general assumption than (1.6). _Remark 3_.: Note that if \(A\) is \((\delta,R)\)-vanishing in \(\Omega\times\Omega\), then \(A\) is also \((\delta,R)\)-vanishing in \(\Omega\times\Omega\) only at the diagonal. However, the converse is not true. For instance, if \(A(x,y)=\frac{K_{1}(x,y)\chi_{\{|x-y|>R\}}+K_{2}(x,y)}{2\Lambda}+\frac{\Lambda}{2}\), where \(K_{1}\) is merely measurable and \(K_{2}\) is \((\delta,R)\)-vanishing in \(\Omega\times\Omega\), then \(A\) is \((\delta,R)\)-vanishing in \(\Omega\times\Omega\) only at the diagonal, but not \((\delta,R)\)-vanishing in \(\Omega\times\Omega\). We clearly point out that our problem (1.1) has a scaling-invariant property, which we now state. **Lemma 1.1**.: _Let \(u\) be a weak solution to (1.1) and \(r\in(0,1)\). Assume that \(A\) is \((\delta,R)\)-vanishing in \(\Omega\times\Omega\) only at the diagonal. Define_ \[\tilde{u}(x)=\frac{u(rx)}{r^{s}},\quad\tilde{f}(x)=f(rx)\quad\text{and}\quad\tilde{A}(x,y)=A(rx,ry)\quad(x,y\in\mathbb{R}^{n}).\] _Then \(\tilde{u}\) is a weak solution to_ \[(-\Delta_{p})^{s}_{\tilde{A}}\tilde{u}=(-\Delta_{p})^{\frac{s}{p}}\tilde{f}\quad\text{in }\frac{1}{r}\Omega,\] _and \(\tilde{A}\) is \(\left(\delta,\frac{R}{r}\right)\)-vanishing in \(\frac{1}{r}\Omega\times\frac{1}{r}\Omega\) only at the diagonal._ We now introduce our main result. **Theorem 1.2**.: _Let \(q\in(p,\infty)\), \(\sigma\in\left(0,\min\left\{\frac{s}{p-1},1-s\right\}\right)\) and \(R>0\) be given. Then there exists a small positive constant \(\delta\) depending only on \(n,s,p,\Lambda,q\) and \(\sigma\) such that for each \(A\) which is \((\delta,R)\)-vanishing in \(\Omega\times\Omega\) only at the diagonal and for each \(f\in L^{p-1}_{s}(\mathbb{R}^{n})\) with \(d_{0}f\in L^{q}_{\mathrm{loc}}\left(\Omega\times\Omega;\frac{dx\,dy}{|x-y|^{n+\sigma q}}\right)\), any weak solution \(u\in W^{s,p}_{\rm loc}(\Omega)\cap L^{p-1}_{sp}(\mathbb{R}^{n})\) of (1.1) satisfies \(d_{s}u\in L^{q}_{\rm loc}\left(\Omega\times\Omega;\frac{dx\,dy}{|x-y|^{n+\sigma q}}\right)\)._
In particular, there is a positive constant \(c=c(n,s,p,\Lambda,q,\sigma)\) such that_ \[\left(\fint_{B_{r}(x_{0})}\int_{B_{r}(x_{0})}|d_{s}u|^{q}\frac{dx\,dy}{|x-y|^{n+\sigma q}}\right)^{\frac{1}{q}}\leq c\left(\left(\fint_{B_{2r}(x_{0})}\int_{B_{2r}(x_{0})}\left|\frac{d_{s}u}{(2r)^{\sigma}}\right|^{p}\frac{dx\,dy}{|x-y|^{n}}\right)^{\frac{1}{p}}+\mathrm{Tail}_{\rm s,p}\left(\frac{u-(u)_{B_{2r}(x_{0})}}{(2r)^{\sigma+s}};B_{2r}(x_{0})\right)\right)\] \[\qquad+c\left(\left(\fint_{B_{2r}(x_{0})}\int_{B_{2r}(x_{0})}|d_{0}f|^{q}\frac{dx\,dy}{|x-y|^{n+\sigma q}}\right)^{\frac{1}{q}}+\mathrm{Tail}_{\frac{s}{p},p}\left(\frac{f-(f)_{B_{2r}(x_{0})}}{(2r)^{\sigma}};B_{2r}(x_{0})\right)\right) \tag{1.8}\] _whenever \(B_{2r}(x_{0})\Subset\Omega\) and \(r\in(0,R]\)._ _Remark 4_.: We observe the equivalence \[d_{s}u\in L^{q}_{\rm loc}\left(\Omega\times\Omega;\frac{dx\,dy}{|x-y|^{n+\sigma q}}\right)\Longleftrightarrow u\in W^{s+\sigma,q}_{\rm loc}(\Omega),\] which implies that \[f\in W^{\sigma,q}_{\rm loc}(\Omega)\Longrightarrow u\in W^{s+\sigma,q}_{\rm loc}(\Omega)\] for any choice of \(q\in(p,\infty)\) and \(\sigma\in\left(0,\min\left\{\frac{s}{p-1},1-s\right\}\right)\). _Remark 5_.: Let us assume \(d_{0}f\in L^{q}_{\rm loc}\left(\Omega\times\Omega;\frac{dx\,dy}{|x-y|^{n}}\right)\) for some \(q\in(p,\infty)\). Then it is not always true that \(d_{0}f\in L^{p}_{\rm loc}\left(\Omega\times\Omega;\frac{dx\,dy}{|x-y|^{n}}\right)\), as \(\int_{B}\int_{B^{\prime}}\frac{dx\,dy}{|x-y|^{n}}<\infty\) for balls \(B,B^{\prime}\Subset\Omega\) if and only if \(\mathrm{dist}(B,B^{\prime})>0\). Thus, we do not ensure that the right-hand side of (1.5) is well-defined. If instead \(d_{0}f\in L^{q}_{\rm loc}\left(\Omega\times\Omega;\frac{dx\,dy}{|x-y|^{n+\sigma q}}\right)\) for some \(q\in(p,\infty)\) and \(\sigma>0\), then \(f\in W^{\sigma,q}_{\rm loc}(\Omega)\). By Lemma 2.4, we have \(f\in W^{\frac{\sigma}{2},p}_{\rm loc}(\Omega)\), which gives that \(d_{0}f\in L^{p}_{\rm loc}\left(\Omega\times\Omega;\frac{dx\,dy}{|x-y|^{n}}\right)\). Therefore, it is natural to take \(\sigma>0\). In addition, the regularity of solutions to the homogeneous fractional p-Laplacian equation is known only up to \(C^{\beta}\) for any \(\beta<\min\left\{\frac{sp}{p-1},1\right\}\) if \(p>2\) (see [5]). Accordingly, the upper bound of \(\sigma\) is \(\min\left\{\frac{s}{p-1},1-s\right\}\). ### Derivation of regularity results for non-divergence data \(g\) In this subsection, we show that for any weak solution \(u\in W^{s,2}_{\rm loc}(\Omega)\cap L^{1}_{2s}(\mathbb{R}^{n})\) to \[(-\Delta)^{s}_{A}u=g\quad\text{in }\Omega, \tag{1.9}\] if \(A\) has a sufficiently small BMO seminorm only at the diagonal, then the implication \[g\in L^{\frac{nq}{n+(2s-t)q}}_{\rm loc}(\Omega)\Longrightarrow u\in W^{t,q}_{\rm loc}(\Omega) \tag{1.10}\] holds for each \[q\in(2,\infty)\quad\text{and}\quad t\in(s,\min\left\{2s,1\right\}).\] Let \(g\in L^{\frac{nq}{n+(2s-t)q}}_{\rm loc}(\Omega)\) for some \(q\in(2,\infty)\) and \(t\in(s,\min\left\{2s,1\right\})\), and let \(\Omega^{\prime}\Subset\Omega\) be an open set. We first note from [35, Theorem 4.4] that there is a weak solution \(f\in W^{\frac{s}{2},2}(\mathbb{R}^{n})\) to \[(-\Delta)^{\frac{s}{2}}f=g\quad\text{in }\Omega^{\prime\prime} \tag{1.11}\] satisfying \(f\in H^{s,\frac{nq}{n+(2s-t)q}}(\Omega^{\prime})\), where \(\Omega^{\prime\prime}\) is an open set such that \(\Omega^{\prime}\Subset\Omega^{\prime\prime}\Subset\Omega\).
In light of (1.9) and (1.11), we have that \(u\in W^{s,2}_{\rm loc}(\Omega^{\prime\prime})\cap L^{1}_{2s}(\mathbb{R}^{n})\) is a weak solution to \[(-\Delta)^{s}_{A}u=(-\Delta)^{\frac{s}{2}}f\quad\text{in }\Omega^{\prime\prime}.\] Applying [35, Proposition 2.5] with \(p=\frac{nq}{n+(2s-t)q}\), \(s_{1}=t-s\) and \(p_{1}=q\) leads to \[f\in W^{t-s,q}_{\rm loc}(\Omega^{\prime\prime}).\] In addition, we recall the fact that \(W^{\frac{s}{2},2}(\mathbb{R}^{n})\subset L^{1}_{s}(\mathbb{R}^{n})\) to observe that \(f\in L^{1}_{s}(\mathbb{R}^{n})\). Therefore, our main Theorem 1.2 implies that \[u\in W^{t,q}_{\rm loc}(\Omega^{\prime\prime})\subset W^{t,q}(\Omega^{\prime}), \tag{1.12}\] provided that the kernel coefficient is \((\delta,R)\)-vanishing in \(\Omega\times\Omega\) only at the diagonal for sufficiently small \(\delta\) depending only on \(n,s,p,\Lambda,q\) and \(t\), as \(q\in(2,\infty)\) and \(t-s\in(0,\min\left\{s,1-s\right\})\). Since \(\Omega^{\prime}\) is an arbitrary open set compactly contained in \(\Omega\), (1.12) yields \(u\in W^{t,q}_{\rm loc}(\Omega)\), which is (1.10). This is the same result as given in [34, Theorem 1.1]. ### Plan of the paper This paper is organized as follows. In Section 2, we introduce some notations, function spaces, lemmas about embeddings and tail estimates, the existence result for the boundary value problem corresponding to (1.1), and a technical lemma. In Section 3, we derive comparison estimates. In Section 4, we obtain a covering lemma to deal with upper level sets of fractional gradients of weak solutions. Finally, in Section 5, we prove our main result. ## 2. Preliminaries and Notations In what follows, we write \(c\) to mean a general constant equal to or bigger than \(1\), and it possibly changes from line to line. Furthermore, we use parentheses to denote the relevant dependencies on parameters, such as \(c\equiv c(n,s,p)\), and we denote \[\mathtt{data}=\mathtt{data}(n,s,p,\Lambda,q,\sigma).\] We first introduce geometric and functional notations. 1. Let us denote by \(B_{r}(x)\) the ball in \(\mathbb{R}^{n}\) with center \(x\in\mathbb{R}^{n}\) and radius \(r>0\), and let \(B_{r}\equiv B_{r}(0)\). 2. We denote \(\mathcal{B}_{r}(x,y)=B_{r}(x)\times B_{r}(y)\) for \(x,y\in\mathbb{R}^{n}\) and \(r>0\). In particular, we write \(\mathcal{B}_{r}(x)=\mathcal{B}_{r}(x,x)\) and \(\mathcal{B}_{r}=\mathcal{B}_{r}(0)\). 3. The cube in \(\mathbb{R}^{n}\) with center \(x\) and side-length \(r\) is denoted by \(Q_{r}(x)\), and let \(Q_{r}\equiv Q_{r}(0)\). 4. We denote \(\mathcal{Q}_{r}(x,y)=Q_{r}(x)\times Q_{r}(y)\) for \(x,y\in\mathbb{R}^{n}\) and \(r>0\). Moreover, we write \(\mathcal{Q}_{r}(x)=\mathcal{Q}_{r}(x,x)\) and \(\mathcal{Q}_{r}=\mathcal{Q}_{r}(0)\). 5. Given cubes \(Q^{1}\) and \(Q^{2}\) in \(\mathbb{R}^{n}\), we denote \(P^{i}\mathcal{Q}=Q^{i}\times Q^{i}\) for \(i=1,2\), where \(\mathcal{Q}=Q^{1}\times Q^{2}\). 6. For a locally integrable function \(v:\mathbb{R}^{n}\to\mathbb{R}\) and a bounded set \(B\subset\mathbb{R}^{n}\), we denote the average of \(v\) over \(B\) by \((v)_{B}=\fint_{B}v(x)\,dx\). 7. For a locally integrable function \(F:\mathbb{R}^{n}\times\mathbb{R}^{n}\to\mathbb{R}\), \(x_{0}\in\mathbb{R}^{n}\) and \(r>0\), we define a function \(F_{r,x_{0}}:\mathbb{R}^{n}\times\mathbb{R}^{n}\to\mathbb{R}\) by \[F_{r,x_{0}}(x,y)=\begin{cases}(F)_{\mathcal{B}_{r}(x_{0})}&\text{ if }(x,y)\in\mathcal{B}_{r}(x_{0}),\\ F(x,y)&\text{ otherwise.}\end{cases}\] (2.1) We next introduce the fractional Sobolev space and the tail space.
For a measurable function \(v:\Omega\to\mathbb{R}\), we say that \(v\in W^{s,p}(\Omega)\) if \(v\in L^{p}(\Omega)\) and \[[v]_{W^{s,p}(\Omega)}=\left(\int_{\Omega}\int_{\Omega}\frac{|v(x)-v(y)|^{p}}{|x-y|^{n+sp}}\,dx\,dy\right)^{\frac{1}{p}}<\infty.\] In particular, we say \(v\in W^{s,p}_{c}(\Omega)\) if \(v\in W^{s,p}(\Omega)\) and \(v\) has a compact support embedded in \(\Omega\). For a given open set \(\Omega^{\prime}\subset\mathbb{R}^{n}\) such that \(\Omega\Subset\Omega^{\prime}\) and a measurable function \(g:\mathbb{R}^{n}\to\mathbb{R}\), we define \[X^{s,p}_{g}(\Omega,\Omega^{\prime})=\left\{v\in W^{s,p}(\Omega^{\prime})\,;\,v=g\text{ a.e. on }\mathbb{R}^{n}\setminus\Omega\right\}.\] We now introduce the associated tail space. We say that \(v\in L^{p-1}_{sp}(\mathbb{R}^{n})\) if \[\int_{\mathbb{R}^{n}}\frac{|v(y)|^{p-1}}{(1+|y|)^{n+sp}}\,dy<\infty,\] and we write \[\operatorname{Tail}_{\text{\rm s,p}}\left(v;B_{r}(x_{0})\right)=\left(r^{sp}\int_{\mathbb{R}^{n}\setminus B_{r}(x_{0})}\frac{|v(y)|^{p-1}}{|y-x_{0}|^{n+sp}}\,dy\right)^{\frac{1}{p-1}}.\] After a simple algebraic computation, we observe that for any \(r>0\), \[\operatorname{Tail}_{\text{\rm s,p}}(1;B_{1})=\operatorname{Tail}_{\text{\rm s,p}}(1;B_{r}), \tag{2.2}\] which is a positive number depending only on \(n,s\) and \(p\); indeed, \(\operatorname{Tail}_{\text{\rm s,p}}(1;B_{r})^{p-1}=r^{sp}\int_{\mathbb{R}^{n}\setminus B_{r}}\frac{dy}{|y|^{n+sp}}=\frac{|\mathbb{S}^{n-1}|}{sp}\). We now introduce dual pairs of operators and measures \((D^{\tau},\,\mu_{\tau})\) for \(\tau\in\left(0,\frac{n}{p}\right)\), which were first introduced in [28]. Define an operator \(D^{\tau}:\mathcal{M}\left(\mathbb{R}^{n}\times\mathbb{R}^{n}\right)\to\mathcal{M}\left(\mathbb{R}^{n}\times\mathbb{R}^{n}\right)\) by \[\left(D^{\tau}F\right)(x,y)=\frac{F(x,y)}{|x-y|^{\tau}}\quad\text{if }x\neq y,\] and a measure \(\mu_{\tau}\) on \(\mathbb{R}^{n}\times\mathbb{R}^{n}\) by \[\mu_{\tau}(\mathcal{A})=\int_{\mathcal{A}}\frac{dx\,dy}{|x-y|^{n-p\tau}}\quad\text{for any measurable set }\mathcal{A}\subset\mathbb{R}^{n}\times\mathbb{R}^{n}. \tag{2.3}\] In this notation, we observe that \(u\in W^{s,p}_{\mathrm{loc}}(\Omega)\) if and only if \(D^{\tau}d_{s}u\in L^{p}_{\mathrm{loc}}\left(\Omega\times\Omega\,;\mu_{\tau}\right)\), since \(|D^{\tau}d_{s}u(x,y)|^{p}\,d\mu_{\tau}=\frac{|u(x)-u(y)|^{p}}{|x-y|^{n+sp}}\,dx\,dy\). We now prove some properties of the measure \(\mu_{\tau}\) defined in (2.3).

**Lemma 2.1**.: 1. _There exists a constant_ \(\nu_{0}\) _depending only on_ \(n\) _and_ \(p\) _such that_ \[\mu_{\tau}\left(\mathcal{B}_{R}(x_{0})\right)=\nu_{0}\frac{R^{n+p\tau}}{\tau}\quad\text{for any }x_{0}\in\mathbb{R}^{n}\text{ and }R>0. \tag{2.4}\] 2. _Let_ \(\rho\) _and_ \(R\) _be any positive numbers, and let_ \(x_{0},y_{0}\in\mathbb{R}^{n}\)_. Then_ \[\frac{\mu_{\tau}\left(\mathcal{B}_{R}(x_{0},y_{0})\right)}{\mu_{\tau}\left(\mathcal{B}_{\rho}(x_{0},y_{0})\right)}=\left(\frac{R}{\rho}\right)^{n+p\tau}. \tag{2.5}\] 3. _Let_ \(\mathcal{Q}_{r}(x_{0},y_{0})\) _be any cube in_ \(\mathcal{B}_{R}\) _for_ \(r,R>0\) _and_ \(x_{0},y_{0}\in\mathbb{R}^{n}\)_. Then_ \[\frac{\mu_{\tau}\left(\mathcal{B}_{R}\right)}{\mu_{\tau}\big{(}\mathcal{Q}_{r}(x_{0},y_{0})\big{)}}\leq 2^{n}\frac{\nu_{0}}{\tau}\left(\frac{R}{r}\right)^{2n}. \tag{2.6}\]

Proof.: For (2.4) and (2.5), we refer to [28, Proposition 4.1].
A direct computation leads to \[\begin{split}\mu_{\tau}(\mathcal{B}_{R})&=\nu_{0}\frac{R^{n+p\tau}}{\tau}=\frac{\nu_{0}}{\tau}\frac{R^{n+p\tau}}{r^{2n}}\int_{Q^{1}_{r}(x_{0})}\int_{Q^{2}_{r}(y_{0})}\,dx\,dy\\&\leq\frac{\nu_{0}}{\tau}\frac{R^{n+p\tau}}{r^{2n}}\int_{Q^{1}_{r}(x_{0})}\int_{Q^{2}_{r}(y_{0})}\left(\frac{2R}{|x-y|}\right)^{n-p\tau}\,dx\,dy\leq 2^{n}\frac{\nu_{0}}{\tau}\left(\frac{R}{r}\right)^{2n}\mu_{\tau}\big{(}\mathcal{Q}_{r}\left(x_{0},y_{0}\right)\big{)},\end{split}\] where we have used the fact that \[|x-y|\leq 2R\quad\text{for any }x\in Q^{1}_{r}\text{ and }y\in Q^{2}_{r}.\] This implies (2.6).

We now provide embeddings of the fractional Sobolev spaces. The first embedding lemma is the Sobolev-Poincare inequality (see [35, Lemma 2.4] and [16, Theorem 6.7]).

**Lemma 2.2**.: _Let \(u\in W^{s,p}(B_{r})\). Then for any_ \[\gamma\in\begin{cases}\left[1,\frac{np}{n-sp}\right]&\text{if }n>ps,\\ \left[1,\infty\right)&\text{if }n\leq ps,\end{cases}\] _there holds that_ \[\left(\fint_{B_{r}}|u-(u)_{B_{r}}|^{\gamma}\,dx\right)^{\frac{1}{\gamma}}\leq cr^{s}\left(\fint_{B_{r}}\int_{B_{r}}\frac{|u(x)-u(y)|^{p}}{|x-y|^{n+sp}}\,dx\,dy\right)^{\frac{1}{p}} \tag{2.7}\] _for some constant \(c=c(n,s,p,\gamma)\). In particular, (2.7) also holds if we take \(Q_{r}\) instead of \(B_{r}\)._

We next introduce another type of the Poincare inequality.

**Lemma 2.3**.: _For any \(u\in X^{s,p}_{0}\left(B_{\rho},B_{R}\right)\) with \(0<\rho<R\), there is a positive constant \(c=c\left(n,s,p,\rho,R\right)\) such that_ \[\int_{B_{R}}|u(x)|^{p}\,dx\leq c\int_{B_{R}}\int_{B_{R}}\frac{|u\left(x\right)-u\left(y\right)|^{p}}{|x-y|^{n+sp}}\,dx\,dy.\]

Proof.: Using the fact that \(u(x)\equiv 0\) on \(B_{R}\setminus B_{\rho}\) and \(1\leq\frac{2R}{|x-y|}\) for any \(x,y\in B_{R}\), we have \[\begin{split}\int_{B_{R}}|u(x)|^{p}\,dx&=\int_{B_{\rho}}|u(x)|^{p}\,dx=\fint_{B_{R}\setminus B_{\frac{R+\rho}{2}}}\int_{B_{\rho}}|u(x)-u(y)|^{p}\,dx\,dy\\&\leq cR^{n+sp}\fint_{B_{R}\setminus B_{\frac{R+\rho}{2}}}\int_{B_{\rho}}\frac{|u(x)-u(y)|^{p}}{|x-y|^{n+sp}}\,dx\,dy\leq c\int_{B_{R}}\int_{B_{R}}\frac{|u(x)-u(y)|^{p}}{|x-y|^{n+sp}}\,dx\,dy.\end{split}\] This completes the proof.

We now give an embedding lemma between fractional Sobolev spaces.

**Lemma 2.4**.: _(See [34, Proposition 2.5]) Let \(u\in W^{s_{2},q}(B_{r})\). For \(s_{1}\in(0,s_{2})\) and \(p\in(1,q]\), we have that_ \[[u]_{W^{s_{1},p}(B_{r})}\leq cr^{\left(s_{2}-\frac{n}{q}\right)-\left(s_{1}-\frac{n}{p}\right)}[u]_{W^{s_{2},q}(B_{r})}\] _for some constant \(c=c(n,s_{1},s_{2},p,q)\)._

We introduce an interpolation lemma which will be used later in Lemma 5.2.

**Lemma 2.5**.: _(Interpolation) Let \(u\in W^{s_{1},p}(B_{r})\cap W^{s_{2},p}(B_{r})\) for \(0<s_{1}<s_{2}<1\). Then we have that for any \(t\in[0,1]\),_ \[[u]_{W^{ts_{1}+(1-t)s_{2},p}(B_{r})}\leq[u]_{W^{s_{1},p}(B_{r})}^{t}[u]_{W^{s_{2},p}(B_{r})}^{1-t}. \tag{2.8}\]

Proof.: If \(t=0\) or \(t=1\), then it follows directly. We may assume that \(t\in(0,1)\). Using Holder's inequality, we have \[[u]_{W^{ts_{1}+(1-t)s_{2},p}(B_{r})}^{p}=\int_{B_{r}}\int_{B_{r}}\frac{|u(x)-u(y)|^{tp}}{|x-y|^{t(n+s_{1}p)}}\frac{|u(x)-u(y)|^{(1-t)p}}{|x-y|^{(1-t)(n+s_{2}p)}}\,dx\,dy\leq[u]_{W^{s_{1},p}(B_{r})}^{tp}[u]_{W^{s_{2},p}(B_{r})}^{(1-t)p}.\] Raising both sides of the above inequality to the power \(\frac{1}{p}\), (2.8) follows.

We now prove two useful tail estimates which will be frequently employed later.
**Lemma 2.6**.: _Let \(u\in L^{p-1}_{sp}(\mathbb{R}^{n})\) with \(D^{\tau}d_{\alpha+t}u\in L^{p}_{\mathrm{loc}}\left(\Omega\times\Omega\,;\mu_{\tau}\right)\) for some \(t\in[0,s]\) and \(\alpha\in[0,1)\), and let \(B_{\rho}(y_{0})\Subset\Omega\)._

_(1) For any nonnegative integer \(i\) with \(B_{2^{i}\rho}(y_{0})\Subset\Omega\), there is a constant \(c=c(n,s,p)\) such that_ \[\begin{split}\mathrm{Tail}_{\rm s,p}\left(u-(u)_{B_{\rho}(y_{0})};B_{\rho}(y_{0})\right)&\leq c\frac{\rho^{\alpha+\tau+t}}{\tau^{\frac{1}{p}}}\sum_{j=1}^{i}2^{-j\left(\frac{sp}{p-1}-(\alpha+\tau+t)\right)}\left(\fint_{\mathcal{B}_{2^{j}\rho}(y_{0})}|D^{\alpha+\tau}d_{t}u|^{p}\,d\mu_{\tau}\right)^{\frac{1}{p}}\\&\quad+c2^{-i\frac{sp}{p-1}}\,\mathrm{Tail}_{\rm s,p}\left(u-(u)_{B_{2^{i}\rho}(y_{0})};B_{2^{i}\rho}(y_{0})\right). \end{split} \tag{2.9}\]

_(2) If \(B_{\rho}(y_{0})\subset B_{R}(x_{0})\Subset\Omega\), then there is a constant \(c=c(n,s,p)\) such that_ \[\begin{split}\mathrm{Tail}_{\rm s,p}\left(u-(u)_{B_{\rho}(y_{0})};B_{\rho}(y_{0})\right)&\leq c\left(\frac{R}{R-\rho}\right)^{\frac{n+sp}{p-1}}\left(\frac{\rho}{R}\right)^{\frac{sp}{p-1}}\mathrm{Tail}_{\rm s,p}\left(u-(u)_{B_{R}(x_{0})};B_{R}(x_{0})\right)\\&\quad+c\left(\frac{R}{\rho}\right)^{\frac{n}{p-1}}R^{\alpha+\tau+t}\left(\frac{1}{\tau}\fint_{\mathcal{B}_{R}(x_{0})}|D^{\tau}d_{t+\alpha}u|^{p}\,d\mu_{\tau}\right)^{\frac{1}{p}}. \end{split} \tag{2.10}\]

Proof.: We first prove (2.9). Without loss of generality we may assume \(y_{0}=0\), and we write \(B_{2^{j}\rho}\equiv B_{2^{j}\rho}(0)\). We decompose \[\mathrm{Tail}_{\rm s,p}\left(u-(u)_{B_{\rho}};B_{\rho}\right)^{p-1}=\sum_{k=0}^{i-1}\rho^{sp}\int_{B_{2^{k+1}\rho}\setminus B_{2^{k}\rho}}\frac{|u-(u)_{B_{\rho}}|^{p-1}}{|y|^{n+sp}}\,dy+\rho^{sp}\int_{\mathbb{R}^{n}\setminus B_{2^{i}\rho}}\frac{|u-(u)_{B_{\rho}}|^{p-1}}{|y|^{n+sp}}\,dy\eqqcolon\sum_{k=0}^{i-1}T_{k}+T.\] For each \(k\in\{0,\ldots,i-1\}\), we estimate \(T_{k}\) as \[T_{k}\leq c2^{-ksp}\fint_{B_{2^{k+1}\rho}}|u-(u)_{B_{\rho}}|^{p-1}\,dy\leq c2^{-ksp}\left(\sum_{j=1}^{k+1}\left(\fint_{B_{2^{j}\rho}}|u-(u)_{B_{2^{j}\rho}}|^{p}\,dy\right)^{\frac{1}{p}}\right)^{p-1}.\] For the last inequality, we have used the fact that \[\left|(u)_{B_{2^{j+1}\rho}}-(u)_{B_{2^{j}\rho}}\right|\leq c\left(\fint_{B_{2^{j+1}\rho}}|u-(u)_{B_{2^{j+1}\rho}}|^{p}\,dy\right)^{\frac{1}{p}}. \tag{2.11}\] Then using Jensen's inequality, the fact that \[1\leq\frac{2^{j+1}\rho}{|x-y|}\quad\text{for any }x,y\in B_{2^{j}\rho}\] and (2.4), we have \[\begin{split}\left(\fint_{B_{2^{j}\rho}}|u-(u)_{B_{2^{j}\rho}}|^{p}\,dy\right)^{\frac{1}{p}}&\leq c\left(\fint_{B_{2^{j}\rho}}\fint_{B_{2^{j}\rho}}|u(x)-u(y)|^{p}\,dx\,dy\right)^{\frac{1}{p}}\\&\leq c(2^{j}\rho)^{\alpha+t}\left(\fint_{B_{2^{j}\rho}}\int_{B_{2^{j}\rho}}\frac{|u(x)-u(y)|^{p}}{|x-y|^{n+(\alpha+t)p}}\,dx\,dy\right)^{\frac{1}{p}}\\&\leq c\frac{\left(2^{j}\rho\right)^{\alpha+\tau+t}}{\tau^{\frac{1}{p}}}\left(\fint_{\mathcal{B}_{2^{j}\rho}}|D^{\alpha+\tau}d_{t}u|^{p}\,d\mu_{\tau}\right)^{\frac{1}{p}}. \end{split} \tag{2.12}\] Since \(p-1\geq 1\), we observe that \[\left(\sum_{k=0}^{i-1}\left(T_{k}^{\frac{1}{p-1}}\right)^{p-1}\right)^{\frac{1}{p-1}}\leq\sum_{k=0}^{i-1}T_{k}^{\frac{1}{p-1}},\] which implies that \[\begin{split}\left(\sum_{k=0}^{i-1}T_{k}\right)^{\frac{1}{p-1}}&\leq c\sum_{k=0}^{i-1}2^{-k\frac{sp}{p-1}}\sum_{j=1}^{k+1}\frac{\left(2^{j}\rho\right)^{\alpha+\tau+t}}{\tau^{\frac{1}{p}}}\left(\fint_{\mathcal{B}_{2^{j}\rho}}|D^{\alpha+\tau}d_{t}u|^{p}\,d\mu_{\tau}\right)^{\frac{1}{p}}\\&\leq c\sum_{j=1}^{i}\sum_{k=j-1}^{i-1}2^{-k\frac{sp}{p-1}}\frac{\left(2^{j}\rho\right)^{\alpha+\tau+t}}{\tau^{\frac{1}{p}}}\left(\fint_{\mathcal{B}_{2^{j}\rho}}|D^{\alpha+\tau}d_{t}u|^{p}\,d\mu_{\tau}\right)^{\frac{1}{p}}\\&\leq c\sum_{j=1}^{i}2^{-j\frac{sp}{p-1}}\frac{\left(2^{j}\rho\right)^{\alpha+\tau+t}}{\tau^{\frac{1}{p}}}\left(\fint_{\mathcal{B}_{2^{j}\rho}}|D^{\alpha+\tau}d_{t}u|^{p}\,d\mu_{\tau}\right)^{\frac{1}{p}},\end{split}\] where we have used (2.12) for the first inequality, Fubini's theorem for the second inequality and then the fact that \[\sum_{k=j-1}^{i}2^{-k\frac{sp}{p-1}}\leq c(s,p)2^{-j\frac{sp}{p-1}}\] for the last inequality. We now estimate \(T^{\frac{1}{p-1}}\) as \[T^{\frac{1}{p-1}}\leq\left(\rho^{sp}\int_{\mathbb{R}^{n}\setminus B_{2^{i}\rho}}\frac{|u-(u)_{B_{2^{i}\rho}}|^{p-1}}{|y|^{n+sp}}\,dy\right)^{\frac{1}{p-1}}+\sum_{j=0}^{i-1}\left(\rho^{sp}\int_{\mathbb{R}^{n}\setminus B_{2^{i}\rho}}\frac{|(u)_{B_{2^{j+1}\rho}}-(u)_{B_{2^{j}\rho}}|^{p-1}}{|y|^{n+sp}}\,dy\right)^{\frac{1}{p-1}},\] where we have used Minkowski's inequality.
We further estimate the second term in the right-hand side above as \[\begin{split}\sum_{j=0}^{i-1}\left(\rho^{sp}\int_{\mathbb{R}^{n}\setminus B_{2^{i}\rho}}\frac{|(u)_{B_{2^{j+1}\rho}}-(u)_{B_{2^{j}\rho}}|^{p-1}}{|y|^{n+sp}}\,dy\right)^{\frac{1}{p-1}}&\leq c\sum_{j=0}^{i-1}2^{-i\frac{sp}{p-1}}|(u)_{B_{2^{j+1}\rho}}-(u)_{B_{2^{j}\rho}}|\\&\leq c\sum_{j=1}^{i}2^{-i\frac{sp}{p-1}}\frac{\left(2^{j}\rho\right)^{\alpha+\tau+t}}{\tau^{\frac{1}{p}}}\left(\fint_{\mathcal{B}_{2^{j}\rho}}|D^{\alpha+\tau}d_{t}u|^{p}\,d\mu_{\tau}\right)^{\frac{1}{p}},\end{split}\] where we have used (2.2) for the first inequality, and (2.11), (2.12) for the last inequality. Combine all the estimates for \(\left(\sum_{k=0}^{i-1}T_{k}\right)^{\frac{1}{p-1}}\) and \(T^{\frac{1}{p-1}}\) to see that \[\begin{split}\mathrm{Tail}_{\rm s,p}\left(u-(u)_{B_{\rho}};B_{\rho}\right)&\leq\left(\sum_{k=0}^{i-1}T_{k}+T\right)^{\frac{1}{p-1}}\leq\left(\sum_{k=0}^{i-1}T_{k}\right)^{\frac{1}{p-1}}+T^{\frac{1}{p-1}}\\&\leq c\frac{\rho^{\alpha+\tau+t}}{\tau^{\frac{1}{p}}}\sum_{j=1}^{i}2^{-j\left(\frac{sp}{p-1}-(\alpha+\tau+t)\right)}\left(\fint_{\mathcal{B}_{2^{j}\rho}}|D^{\alpha+\tau}d_{t}u|^{p}\,d\mu_{\tau}\right)^{\frac{1}{p}}\\&\quad+c2^{-i\frac{sp}{p-1}}\,\mathrm{Tail}_{\rm s,p}\left(u-(u)_{B_{2^{i}\rho}};B_{2^{i}\rho}\right). \end{split}\] We are now in the position to prove (2.10). Note that \[\mathrm{Tail}_{\rm s,p}\left(u-(u)_{B_{\rho}(y_{0})};B_{\rho}(y_{0})\right)\leq\mathrm{Tail}_{\rm s,p}\left(u-(u)_{B_{R}(x_{0})};B_{\rho}(y_{0})\right)+\mathrm{Tail}_{\rm s,p}\left((u)_{B_{R}(x_{0})}-(u)_{B_{\rho}(y_{0})};B_{\rho}(y_{0})\right)\eqqcolon I_{1}+I_{2}.\] We now estimate \(I_{1}\) and \(I_{2}\) as \[\begin{split}I_{1}&\leq c\left(\rho^{sp}\int_{B_{R}(x_{0})\setminus B_{\rho}(y_{0})}\frac{|u-(u)_{B_{R}(x_{0})}|^{p-1}}{|y-y_{0}|^{n+sp}}\,dy\right)^{\frac{1}{p-1}}+c\left(\rho^{sp}\int_{\mathbb{R}^{n}\setminus B_{R}(x_{0})}\frac{|u-(u)_{B_{R}(x_{0})}|^{p-1}}{|y-y_{0}|^{n+sp}}\,dy\right)^{\frac{1}{p-1}}\\&\leq c\left(\frac{R}{\rho}\right)^{\frac{n}{p-1}}\left(\fint_{B_{R}(x_{0})}|u-(u)_{B_{R}(x_{0})}|^{p}\,dy\right)^{\frac{1}{p}}\\&\quad+c\left(\frac{R}{R-\rho}\right)^{\frac{n+sp}{p-1}}\left(\frac{\rho}{R}\right)^{\frac{sp}{p-1}}\left(R^{sp}\int_{\mathbb{R}^{n}\setminus B_{R}(x_{0})}\frac{|u-(u)_{B_{R}(x_{0})}|^{p-1}}{|y-x_{0}|^{n+sp}}\,dy\right)^{\frac{1}{p-1}}\end{split}\] and \[I_{2}\leq c\left|(u)_{B_{R}(x_{0})}-(u)_{B_{\rho}(y_{0})}\right|\leq c\left(\frac{R}{\rho}\right)^{\frac{n}{p}}\left(\fint_{B_{R}(x_{0})}|u-(u)_{B_{R}(x_{0})}|^{p}\,dy\right)^{\frac{1}{p}},\] where we have used Holder's inequality and the fact that \[|y-y_{0}|\geq|y-x_{0}|-|x_{0}-y_{0}|\geq\frac{R-\rho}{R}|y-x_{0}|\quad\text{for any }y\in\mathbb{R}^{n}\setminus B_{R}(x_{0}).\] As in (2.12), we have \[\left(\fint_{B_{R}(x_{0})}|u-(u)_{B_{R}(x_{0})}|^{p}\,dy\right)^{\frac{1}{p}}\leq cR^{\alpha+\tau+t}\left(\frac{1}{\tau}\fint_{\mathcal{B}_{R}(x_{0})}|D^{\tau}d_{t+\alpha}u|^{p}\,d\mu_{\tau}\right)^{\frac{1}{p}} \tag{2.13}\] for some constant \(c=c(n,s,p)\). Combine the estimates for \(I_{1}\), \(I_{2}\) and (2.13) to obtain (2.10).

We now provide the existence result of the corresponding boundary value problem to (1.1) and the standard energy estimate.

**Lemma 2.7**.: _Let \(\Omega^{\prime}\) be an open and bounded set in \(\mathbb{R}^{n}\) such that \(\Omega\Subset\Omega^{\prime}\), and let \(g\in W^{s,p}(\Omega^{\prime})\cap L^{p-1}_{sp}(\mathbb{R}^{n})\)._
_If \(f\in L^{p-1}_{s}(\mathbb{R}^{n})\) with \(d_{0}f\in L^{p}\left(\Omega^{\prime}\times\Omega^{\prime};\frac{dx\,dy}{|x-y|^{n}}\right)\), then there is a unique weak solution \(u\in W^{s,p}(\Omega)\cap L^{p-1}_{sp}(\mathbb{R}^{n})\) to_ \[\begin{cases}(-\Delta_{p})^{s}_{A}u=(-\Delta_{p})^{\frac{s}{p}}f&\text{ in }\Omega,\\ u=g&\text{ on }\mathbb{R}^{n}\setminus\Omega\end{cases} \tag{2.14}\] _with the estimate_ \[\int_{\Omega}\int_{\Omega}|d_{s}u|^{p}\frac{dx\,dy}{|x-y|^{n}}\leq c\left(\int_{\Omega^{\prime}}\int_{\Omega^{\prime}}|d_{0}f|^{p}\frac{dx\,dy}{|x-y|^{n}}+\mathrm{Tail}_{\frac{s}{p},p}(f-(f)_{\Omega^{\prime}};\Omega^{\prime})^{p}+\|g\|_{W^{s,p}(\Omega^{\prime})}^{p}\right) \tag{2.15}\] _for some constant \(c=c(n,s,p,\Omega,\Omega^{\prime})\)._

Proof.: We first note that for any \(\phi\in X_{0}^{s,p}(\Omega,\Omega^{\prime})\), \[\begin{split}&\left|\int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}}|f(x)-f(y)|^{p-2}(f(x)-f(y))(\phi(x)-\phi(y))\frac{dx\,dy}{|x-y|^{n+s}}\right|\\&\leq\int_{\Omega^{\prime}}\int_{\Omega^{\prime}}|f(x)-f(y)|^{p-1}|\phi(x)-\phi(y)|\frac{dx\,dy}{|x-y|^{n+s}}+2\int_{\mathbb{R}^{n}\setminus\Omega^{\prime}}\int_{\Omega}|f(x)-f(y)|^{p-1}|\phi(x)|\frac{dx\,dy}{|x-y|^{n+s}}\eqqcolon I_{1}+I_{2}.\end{split}\] Using Holder's inequality, we estimate \(I_{1}\) as \[I_{1}\leq\left(\int_{\Omega^{\prime}}\int_{\Omega^{\prime}}|d_{0}f|^{p}\frac{dx\,dy}{|x-y|^{n}}\right)^{\frac{p-1}{p}}[\phi]_{W^{s,p}(\Omega^{\prime})}.\] In light of Jensen's inequality and the facts that \[|x-y|\geq c(1+|y|)\quad\text{for any }x\in\Omega,\,y\in\mathbb{R}^{n}\setminus\Omega^{\prime}\quad\text{and}\quad\int_{\mathbb{R}^{n}\setminus\Omega^{\prime}}\frac{dy}{|y|^{n+s}}\leq c\] for some constant \(c=c(n,s,\Omega,\Omega^{\prime})\), we find \[I_{2}\leq c\int_{\Omega}|f(x)|^{p-1}|\phi(x)|\,dx+c\int_{\mathbb{R}^{n}\setminus\Omega^{\prime}}\frac{|f(y)|^{p-1}}{(1+|y|)^{n+s}}\,dy\int_{\Omega}|\phi(x)|\,dx\eqqcolon I_{2,1}+I_{2,2}.\] We now estimate \(I_{2,1}\) as \[\begin{split}I_{2,1}&\leq c\int_{\Omega}|f(x)-(f)_{\Omega}|^{p-1}|\phi(x)|\,dx+c|(f)_{\Omega}|^{p-1}\int_{\Omega}|\phi(x)|\,dx\\&\leq c\left[\left(\int_{\Omega}|f(x)-(f)_{\Omega}|^{p}\,dx\right)^{\frac{p-1}{p}}+|(f)_{\Omega}|^{p-1}\right]\left(\int_{\Omega}|\phi(x)|^{p}\,dx\right)^{\frac{1}{p}}\\&\leq c\left[\left(\int_{\Omega}\int_{\Omega}|f(x)-f(y)|^{p}\frac{dx\,dy}{|x-y|^{n}}\right)^{\frac{p-1}{p}}+|(f)_{\Omega}|^{p-1}\right]\|\phi\|_{L^{p}(\Omega)},\end{split}\] where we have used Holder's inequality, the second inequality in (2.12) with \(u(x)\), \(B_{2^{j}\rho}\), \(\alpha\) and \(\tau\) there replaced by \(f(x)\), \(\Omega\), \(0\) and \(0\), respectively, and the fact that \(\frac{2\mathrm{diam}(\Omega)}{|x-y|}\geq 1\) for any \(x,y\in\Omega\). In addition, Holder's inequality yields \[I_{2,2}\leq c\int_{\mathbb{R}^{n}\setminus\Omega^{\prime}}\frac{|f(y)|^{p-1}}{(1+|y|)^{n+s}}\,dy\,\|\phi\|_{L^{p}(\Omega)}.\] Combining the estimates of \(I_{1}\) and \(I_{2}\), we see that the linear functional \[T_{f}:\phi\mapsto\int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}}|f(x)-f(y)|^{p-2}(f(x)-f(y))(\phi(x)-\phi(y))\frac{dx\,dy}{|x-y|^{n+s}},\quad\phi\in X_{0}^{s,p}(\Omega,\Omega^{\prime}),\] defines an element of the dual space of \(X_{0}^{s,p}(\Omega,\Omega^{\prime})\). Therefore, the existence of a unique weak solution follows from [5, Proposition 2.12]. We are going to prove (2.15).
We first note from [5, Remark A.4] that there is a constant \(c_{p}\) depending only on \(p\) such that for any \(a,b\in\mathbb{R}\), \[\left(|a|^{p-2}a-|b|^{p-2}b\right)(a-b)\geq\frac{1}{c_{p}}|a-b|^{p}. \tag{2.16}\] We now test (2.14) with \(u-g\), and use (2.16) and the fact that \(A\geq\Lambda^{-1}\), in order to see that \[J\coloneqq\int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}}|d_{s}(u-g)|^{p}\frac{dx\,dy}{|x-y|^{n}}\leq c\int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}}|d_{0}f|^{p-1}|d_{s}(u-g)|\frac{dx\,dy}{|x-y|^{n}}+c\int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}}|d_{s}g|^{p-1}|d_{s}(u-g)|\frac{dx\,dy}{|x-y|^{n}}\eqqcolon J_{1}+J_{2}.\] We first note from \(u=g\) on \(\mathbb{R}^{n}\setminus\Omega\) that \[J_{1}\leq c\int_{\Omega^{\prime}}\int_{\Omega^{\prime}}|d_{0}f|^{p-1}|d_{s}(u-g)|\frac{dx\,dy}{|x-y|^{n}}+c\int_{\mathbb{R}^{n}\setminus\Omega^{\prime}}\int_{\Omega}|d_{0}f|^{p-1}|(u-g)(x)|\frac{dx\,dy}{|x-y|^{n+s}}\eqqcolon J_{1,1}+J_{1,2}.\] Using Holder's inequality and Young's inequality, we find that \[J_{1,1}\leq\frac{1}{4}\int_{\Omega^{\prime}}\int_{\Omega^{\prime}}|d_{s}(u-g)|^{p}\frac{dx\,dy}{|x-y|^{n}}+c\int_{\Omega^{\prime}}\int_{\Omega^{\prime}}|d_{0}f|^{p}\frac{dx\,dy}{|x-y|^{n}}.\] For the estimate of \(J_{1,2}\), we follow the same arguments as in the estimate of \(I_{2,2}\) in Lemma 3.3 below, with \(B_{3}\), \(B_{4}\) and \(w(x)\) replaced by \(\Omega\), \(\Omega^{\prime}\) and \(g(x)\), respectively, to see that \[J_{1,2}\leq\frac{1}{4}\int_{\Omega^{\prime}}\int_{\Omega^{\prime}}|d_{s}(u-g)|^{p}\frac{dx\,dy}{|x-y|^{n}}+c\,\mathrm{Tail}_{\frac{s}{p},p}(f-(f)_{\Omega^{\prime}};\Omega^{\prime})^{p}+c\int_{\Omega^{\prime}}\int_{\Omega^{\prime}}|d_{0}f|^{p}\frac{dx\,dy}{|x-y|^{n}}.\] The term \(J_{2}\) is estimated in the same way; after an application of Young's inequality it is absorbed into the left-hand side up to the additional term \(c\|g\|_{W^{s,p}(\Omega^{\prime})}^{p}\), and we omit the details. Combining the estimates of \(J\), \(J_{1}\) and \(J_{2}\), we get \[\int_{\Omega^{\prime}}\int_{\Omega^{\prime}}|d_{s}(u-g)|^{p}\frac{dx\,dy}{|x-y|^{n}}\leq c\,\mathrm{Tail}_{\frac{s}{p},p}(f-(f)_{\Omega^{\prime}};\Omega^{\prime})^{p}+c\int_{\Omega^{\prime}}\int_{\Omega^{\prime}}|d_{0}f|^{p}\frac{dx\,dy}{|x-y|^{n}}+c\|g\|_{W^{s,p}(\Omega^{\prime})}^{p}.\] Thus, (2.15) follows by plugging the fact that \[\int_{\Omega^{\prime}}\int_{\Omega^{\prime}}|d_{s}u|^{p}\frac{dx\,dy}{|x-y|^{n}}-c(p)\int_{\Omega^{\prime}}\int_{\Omega^{\prime}}|d_{s}g|^{p}\frac{dx\,dy}{|x-y|^{n}}\leq c(p)\int_{\Omega^{\prime}}\int_{\Omega^{\prime}}|d_{s}(u-g)|^{p}\frac{dx\,dy}{|x-y|^{n}}\] into the right-hand side of the above inequality.

We end this section with the following technical lemma.

**Lemma 2.8**.: _(See [21, Lemma 6.1]) Let \(\phi:[1,2]\to\mathbb{R}\) be a nonnegative bounded function. For \(1\leq r_{1}<r_{2}\leq 2\), we assume that_ \[\phi(r_{1})\leq\frac{1}{2}\phi(r_{2})+\frac{\mathcal{M}}{\left(r_{2}-r_{1}\right)^{2n}},\] _where \(\mathcal{M}>0\). Then,_ \[\phi(1)\leq c\mathcal{M}\] _for some constant \(c=c(n)\)._

## 3. Comparison estimates

In this section, we prove a comparison lemma based on a freezing argument, which will be an important ingredient in Section 5. Before proving it, we need the following two lemmas. The first one is a self-improving property of a weak solution to the corresponding homogeneous problem of (1.1).
**Lemma 3.1**.: _Let \(w\in W^{s,p}(B_{3})\cap L^{p-1}_{\mathrm{sp}}(\mathbb{R}^{n})\) be a local weak solution to_ \[(-\Delta_{p})^{s}_{A}w=0\quad\text{in }B_{3}.\] _Then there are constants \(\epsilon_{0}=\epsilon_{0}(n,s,p,\Lambda)\in(0,1)\) and \(c=c(n,s,p,\Lambda)\) such that_ \[\left(\frac{1}{\tau}\fint_{\mathcal{B}_{2}}|D^{\tau}d_{s}w|^{p(1+\epsilon_{0})}\,d\mu_{\tau}\right)^{\frac{1}{p(1+\epsilon_{0})}}\leq c\left[\left(\frac{1}{\tau}\fint_{\mathcal{B}_{3}}|D^{\tau}d_{s}w|^{p}\,d\mu_{\tau}\right)^{\frac{1}{p}}+\mathrm{Tail}_{\rm s,p}\left(\frac{w-(w)_{B_{3}}}{3^{s+\tau}};B_{3}\right)\right]. \tag{3.1}\]

Proof.: In light of [7, Theorem 1.1, Remark 1], we find that there exist constants \(\tilde{\delta}\in(0,1)\) and \(c\) depending only on \(n,s,p\) and \(\Lambda\) such that \[[w]_{W^{s+\frac{n\tilde{\delta}}{p(1+\tilde{\delta})},p(1+\tilde{\delta})}\big{(}B_{\frac{1}{8}}(y_{0})\big{)}}\leq c\left[\left(\int_{\mathcal{B}_{\frac{1}{2}}(y_{0})}|D^{\tau}d_{s}w|^{p}\,d\mu_{\tau}\right)^{\frac{1}{p}}+\mathrm{Tail}_{\rm s,p}\left(w-(w)_{B_{\frac{1}{2}}(y_{0})};B_{\frac{1}{2}}(y_{0})\right)\right] \tag{3.2}\] for any \(y_{0}\in B_{2}\). Apply (2.10) with \(u=w\), \(x_{0}=0\), \(\rho=\frac{1}{2}\), \(R=3\), \(\alpha=0\) and \(t=s\) to observe that \[\mathrm{Tail}_{\rm s,p}\left(w-(w)_{B_{\frac{1}{2}}(y_{0})};B_{\frac{1}{2}}(y_{0})\right)\leq c\left[\left(\frac{1}{\tau}\fint_{\mathcal{B}_{3}}|D^{\tau}d_{s}w|^{p}\,d\mu_{\tau}\right)^{\frac{1}{p}}+\mathrm{Tail}_{\rm s,p}(w-(w)_{B_{3}};B_{3})\right]. \tag{3.3}\] On the other hand, there is a sufficiently small \(\epsilon_{0}=\epsilon_{0}(n,s,p)<\tilde{\delta}\) such that \[p(1+\epsilon_{0})<\begin{cases}\frac{np}{n-sp}&\text{if }n>sp,\\ 2p&\text{if }n\leq sp.\end{cases} \tag{3.4}\] Since \(\tau<\frac{n}{p}\), we apply Lemma 2.4 with \(u=w\), \(r=\frac{1}{8}\), \(s_{1}=s+\tau\frac{\epsilon_{0}}{1+\epsilon_{0}}\), \(p=p(1+\epsilon_{0})\), \(s_{2}=s+\frac{n\tilde{\delta}}{p(1+\tilde{\delta})}\) and \(q=p(1+\tilde{\delta})\) to see that \[\left(\int_{\mathcal{B}_{\frac{1}{8}}(y_{0})}|D^{\tau}d_{s}w|^{p(1+\epsilon_{0})}\,d\mu_{\tau}\right)^{\frac{1}{p(1+\epsilon_{0})}}=[w]_{W^{s+\tau\frac{\epsilon_{0}}{1+\epsilon_{0}},p(1+\epsilon_{0})}\big{(}B_{\frac{1}{8}}(y_{0})\big{)}}\leq c[w]_{W^{s+\frac{n\tilde{\delta}}{p(1+\tilde{\delta})},p(1+\tilde{\delta})}\big{(}B_{\frac{1}{8}}(y_{0})\big{)}}. \tag{3.5}\] We combine (3.2), (3.3) and (3.5) to get \[\left(\int_{\mathcal{B}_{\frac{1}{8}}(y_{0})}|D^{\tau}d_{s}w|^{p(1+\epsilon_{0})}\,d\mu_{\tau}\right)^{\frac{1}{p(1+\epsilon_{0})}}\leq c\left(\int_{\mathcal{B}_{3}}|D^{\tau}d_{s}w|^{p}\,d\mu_{\tau}\right)^{\frac{1}{p}}+c\,\mathrm{Tail}_{\rm s,p}\left(w-(w)_{B_{3}};B_{3}\right). \tag{3.6}\] We will find estimates similar to (3.6) for off-diagonal cubes. Let us take \(\mathcal{Q}\equiv\mathcal{Q}_{\frac{1}{8\sqrt{n}}}\left(z_{1},z_{2}\right)\Subset\mathcal{B}_{3}\) satisfying \[\mathrm{dist}\left(Q^{1},Q^{2}\right)>\frac{1}{8\sqrt{n}}, \tag{3.7}\] where \(Q^{1}=Q_{\frac{1}{8\sqrt{n}}}(z_{1})\) and \(Q^{2}=Q_{\frac{1}{8\sqrt{n}}}(z_{2})\).
We now set \[\gamma=\begin{cases}\frac{np}{n-sp}&\text{if }n>sp,\\ 2p&\text{if }n\leq sp.\end{cases} \tag{3.8}\] By (3.4) and (3.8), we use Holder's inequality and then apply Lemma 4.2 with \(\tilde{\gamma}=\gamma\), \(\tilde{q}=p\), \(\tilde{p}=p\) and \(\alpha=0\), in order to see that \[\begin{split}\left(\fint_{\mathcal{Q}}|D^{\tau}d_{s}w|^{p(1+\epsilon_{0})}\,d\mu_{\tau}\right)^{\frac{1}{p(1+\epsilon_{0})}}&\leq\left(\fint_{\mathcal{Q}}|D^{\tau}d_{s}w|^{\gamma}\,d\mu_{\tau}\right)^{\frac{1}{\gamma}}\\&\leq c\left(\fint_{\mathcal{Q}}|D^{\tau}d_{s}w|^{p}\,d\mu_{\tau}\right)^{\frac{1}{p}}+\frac{c}{\tau^{\frac{1}{p}}}\left(\frac{d(\mathcal{Q})}{\mathrm{dist}(Q^{1},Q^{2})}\right)^{s+\tau}\left[\sum_{d=1}^{2}\left(\fint_{P^{d}\mathcal{Q}}|D^{\tau}d_{s}w|^{p}\,d\mu_{\tau}\right)^{\frac{1}{p}}\right]\end{split}\] for some constant \(c=c(n,s,p)\). After a few simple algebraic calculations, we observe \[\begin{split}\left(\int_{\mathcal{Q}}|D^{\tau}d_{s}w|^{p(1+\epsilon_{0})}\,d\mu_{\tau}\right)^{\frac{1}{p(1+\epsilon_{0})}}&\leq\frac{c\,\mu_{\tau}\left(\mathcal{Q}\right)^{\frac{1}{p(1+\epsilon_{0})}}}{\mu_{\tau}\left(\mathcal{Q}\right)^{\frac{1}{p}}}\left(\int_{\mathcal{Q}}|D^{\tau}d_{s}w|^{p}\,d\mu_{\tau}\right)^{\frac{1}{p}}\\&\quad+\frac{c}{\tau^{\frac{1}{p}}}\left[\sum_{d=1}^{2}\frac{\mu_{\tau}\left(\mathcal{Q}\right)^{\frac{1}{p(1+\epsilon_{0})}}}{\mu_{\tau}\left(P^{d}\mathcal{Q}\right)^{\frac{1}{p}}}\left(\int_{P^{d}\mathcal{Q}}|D^{\tau}d_{s}w|^{p}\,d\mu_{\tau}\right)^{\frac{1}{p}}\right]. \end{split} \tag{3.9}\] Since \(z_{1},z_{2}\in B_{2}\), the sets \(P^{1}\mathcal{Q}\), \(P^{2}\mathcal{Q}\) and \(\mathcal{Q}\) are embedded in \(\mathcal{B}_{3}\), and \[\frac{1}{8\sqrt{n}}<\mathrm{dist}\left(Q^{1},Q^{2}\right)<4.\] Thus, we further estimate the right-hand side of (3.9) as \[\left(\int_{\mathcal{Q}}|D^{\tau}d_{s}w|^{p(1+\epsilon_{0})}\,d\mu_{\tau}\right)^{\frac{1}{p(1+\epsilon_{0})}}\leq c\left(\int_{\mathcal{B}_{3}}|D^{\tau}d_{s}w|^{p}\,d\mu_{\tau}\right)^{\frac{1}{p}}, \tag{3.10}\] where we have used (4.32) below and (2.4). Since \(\mathcal{B}_{2}\) is covered by finitely many diagonal balls \(\mathcal{B}_{\frac{1}{8}}\left(y_{i}\right)\) and off-diagonal cubes \(\mathcal{Q}_{\frac{1}{8\sqrt{n}}}\left(z_{1,i},z_{2,i}\right)\) satisfying (3.7) with \(y_{i},\,z_{1,i},\,z_{2,i}\in B_{2}\), the standard covering argument along with (3.6) and (3.10) yields \[\left(\int_{\mathcal{B}_{2}}|D^{\tau}d_{s}w|^{p(1+\epsilon_{0})}\,d\mu_{\tau}\right)^{\frac{1}{p(1+\epsilon_{0})}}\leq c\left[\left(\int_{\mathcal{B}_{3}}|D^{\tau}d_{s}w|^{p}\,d\mu_{\tau}\right)^{\frac{1}{p}}+\mathrm{Tail}_{\rm s,p}\left(w-(w)_{B_{3}};B_{3}\right)\right].\] After simple algebraic computations together with (2.4), (3.1) follows.

The second one is higher Holder regularity of weak solutions to nonlocal \(p\)-Laplacian type equations with locally constant kernel coefficients.

**Lemma 3.2**.: _Let \(v\in W^{s,p}(B_{2})\cap L^{p-1}_{sp}(\mathbb{R}^{n})\) be a local weak solution to_ \[(-\Delta_{p})^{s}_{A_{2}}v=0\quad\text{in }B_{2}, \tag{3.11}\] _where \(A_{2}\equiv A_{2,0}\) is defined in (2.1)._
_Then \(v\in C^{\alpha}(B_{1})\) for every \(\alpha\in\left(0,\min\left\{1,\frac{sp}{p-1}\right\}\right)\) with the estimate_ \[[v]_{C^{\alpha}(B_{1})}\leq c\left(\fint_{B_{2}}|v-(v)_{B_{2}}|^{p}\,dx\right)^{\frac{1}{p}}+c\,\mathrm{Tail}_{\rm s,p}(v-(v)_{B_{2}};B_{2}), \tag{3.12}\] _where \(c=c(n,s,p,\Lambda,\alpha)\)._

Proof.: Applying [7, Lemma 4.1] with \(q=p\) and \(b=0\), we find that \[[v-(v)_{B_{2}}]_{C^{\alpha}(B_{1})}\leq c\left(\fint_{B_{2}}|v-(v)_{B_{2}}|^{p}\,dx\right)^{\frac{1}{p}}+c\operatorname{Tail}_{\rm s,p}(v-(v)_{B_{2}};B_{2}),\] where we have used the fact that \(v-(v)_{B_{2}}\) is also a local weak solution to (3.11). Then the desired estimate (3.12) follows from the fact that \(v-(v)_{B_{2}}\) and \(v\) have the same Holder seminorm.

Let us restrict the range of \(\tau\) to \[\tau\in\left(0,\min\left\{\frac{s}{p-1},1-s\right\}\right). \tag{3.13}\] We note that \(\tau<\frac{n}{p}\), since \(n\geq 2\) and \(\frac{p}{p-1}\leq 2\). We now give the following comparison lemma.

**Lemma 3.3**.: _For any \(\epsilon>0\), there is a constant \(\delta=\delta(n,s,p,\Lambda,\epsilon)\) such that for any weak solution \(u\) to (1.1) with_ \[B_{4}\subset\Omega\] _and_ \[\begin{split}&\frac{1}{\tau}\fint_{\mathcal{B}_{4}}|D^{\tau}d_{s}u|^{p}\,d\mu_{\tau}+\operatorname{Tail}_{\rm s,p}\left(\frac{u-(u)_{B_{4}}}{4^{\tau+s}};B_{4}\right)^{p}\leq 1\quad\text{and}\\&\frac{1}{\tau}\fint_{\mathcal{B}_{4}}|D^{\tau}d_{0}f|^{p}\,d\mu_{\tau}+\operatorname{Tail}_{\frac{s}{p},p}\left(\frac{f-(f)_{B_{4}}}{4^{\tau}};B_{4}\right)^{p}+\left(\fint_{B_{2}}\fint_{B_{2}}\left|A(x,y)-(A)_{\mathcal{B}_{2}}\right|\,dx\,dy\right)^{p}\leq\delta^{p},\end{split} \tag{3.14}\] _there exists a weak solution \(v\in W^{s,p}(B_{2})\cap L^{p-1}_{sp}(\mathbb{R}^{n})\) to_ \[(-\Delta_{p})^{s}_{A_{2}}v=0\quad\text{in }B_{2}\] _such that_ \[\frac{1}{\tau}\fint_{\mathcal{B}_{2}}|D^{\tau}d_{s}(u-v)|^{p}\,d\mu_{\tau}\leq\epsilon^{p}\quad\text{and}\quad\|D^{\tau}d_{s}v\|_{L^{\infty}(\mathcal{B}_{1})}\leq c_{0} \tag{3.15}\] _for some constant \(c_{0}=c_{0}(n,s,p,\Lambda,\tau)\)._

Proof.: From [5, Proposition 2.12], there is a unique weak solution \(w\in X^{s,p}_{u}(B_{3},B_{4})\cap L^{p-1}_{sp}(\mathbb{R}^{n})\) to \[\begin{cases}(-\Delta_{p})^{s}_{A}w=0&\text{in }B_{3},\\ w=u&\text{on }\mathbb{R}^{n}\setminus B_{3}.\end{cases}\] Then we observe that \[\langle(-\Delta_{p})^{s}_{A}u-(-\Delta_{p})^{s}_{A}w,u-w\rangle=\left\langle(-\Delta_{p})^{\frac{s}{p}}f,u-w\right\rangle,\] where \[\begin{split}&\langle(-\Delta_{p})^{s}_{A}u,u-w\rangle\\&=\int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}}A(x,y)\left(\frac{|u(x)-u(y)|}{|x-y|^{s}}\right)^{p-2}\left(\frac{u(x)-u(y)}{|x-y|^{s}}\right)\left(\frac{(u-w)(x)-(u-w)(y)}{|x-y|^{s}}\right)\frac{dx\,dy}{|x-y|^{n}}.\end{split}\] We now estimate \(I_{1}\) and \(I_{2}\).
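Here \(I_{1}\) and \(I_{2}\) denote the two sides of the identity above, that is, \[I_{1}\coloneqq\langle(-\Delta_{p})^{s}_{A}u-(-\Delta_{p})^{s}_{A}w,u-w\rangle\quad\text{and}\quad I_{2}\coloneqq\left\langle(-\Delta_{p})^{\frac{s}{p}}f,u-w\right\rangle.\]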
**Estimate of \(I_{1}\).** In light of (2.4) and (2.16), we have \[I_{1}\geq\frac{1}{c_{p}\Lambda}\int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}}\frac{|(u-w)(x)-(u-w)(y)|^{p}}{|x-y|^{n+sp}}\,dx\,dy\geq\frac{\nu_{0}}{c_{p}\Lambda\tau}\fint_{\mathcal{B}_{4}}|D^{\tau}d_{s}(u-w)|^{p}\,d\mu_{\tau}.\]

**Estimate of \(I_{2}\).** By the fact that \(u=w\) on \(\mathbb{R}^{n}\setminus B_{3}\), we first estimate \(I_{2}\) as \[\begin{split}I_{2}&\leq\int_{B_{4}}\int_{B_{4}}\frac{|d_{0}f(x,y)|^{p-1}|(u-w)(x)-(u-w)(y)|}{|x-y|^{n+s}}\,dx\,dy\\&\quad+2\int_{\mathbb{R}^{n}\setminus B_{4}}\int_{B_{3}}\frac{|d_{0}f(x,y)|^{p-1}|(u-w)(x)|}{|x-y|^{n+s}}\,dx\,dy\eqqcolon I_{2,1}+2I_{2,2}.\end{split}\] Applying Young's inequality to \(I_{2,1}\), we find that there is a constant \(c=c(n,s,p,\Lambda)\) such that \[I_{2,1}\leq\frac{c}{\tau}\fint_{\mathcal{B}_{4}}|D^{\tau}d_{0}f|^{p}\,d\mu_{\tau}+\frac{1}{4}\frac{\nu_{0}}{c_{p}\Lambda\tau}\fint_{\mathcal{B}_{4}}|D^{\tau}d_{s}(u-w)|^{p}\,d\mu_{\tau}.\] To estimate \(I_{2,2}\), we first observe \[\begin{split}I_{2,2}&\leq\int_{\mathbb{R}^{n}\setminus B_{4}}\int_{B_{3}}\frac{|f(x)-(f)_{B_{4}}|^{p-1}|(u-w)(x)|}{|x-y|^{n+s}}\,dx\,dy\\&\quad+\int_{\mathbb{R}^{n}\setminus B_{4}}\int_{B_{3}}\frac{|f(y)-(f)_{B_{4}}|^{p-1}|(u-w)(x)|}{|x-y|^{n+s}}\,dx\,dy\eqqcolon I_{2,2,1}+I_{2,2,2}.\end{split}\] By the fact that \[\int_{\mathbb{R}^{n}\setminus B_{4}}\frac{1}{|x-y|^{n+s}}\,dy\leq c(n,s)\quad\text{for any }x\in B_{3}\] and Young's inequality, we show that for any \(\varsigma\in(0,1)\), \[I_{2,2,1}\leq\frac{c}{\varsigma^{\frac{1}{p-1}}}\int_{B_{4}}|f(x)-(f)_{B_{4}}|^{p}\,dx+\varsigma\int_{B_{4}}|u(x)-w(x)|^{p}\,dx. \tag{3.16}\] We next apply (2.12) and Lemma 2.3 to the first term and the second term in the right-hand side of (3.16), respectively, in order to get that \[I_{2,2,1}\leq\frac{c}{\tau}\fint_{\mathcal{B}_{4}}|D^{\tau}d_{0}f|^{p}\,d\mu_{\tau}+\frac{1}{4}\frac{\nu_{0}}{c_{p}\Lambda\tau}\fint_{\mathcal{B}_{4}}|D^{\tau}d_{s}(u-w)|^{p}\,d\mu_{\tau},\] where we take \(\varsigma\) sufficiently small depending on \(n,s,p\) and \(\Lambda\). By the fact that \(|x-y|\geq c|y|\) for any \(x\in B_{3}\) and \(y\in\mathbb{R}^{n}\setminus B_{4}\), we estimate \(I_{2,2,2}\) as \[I_{2,2,2}\leq c\int_{\mathbb{R}^{n}\setminus B_{4}}\int_{B_{3}}\frac{|f(y)-(f)_{B_{4}}|^{p-1}|(u-w)(x)|}{|y|^{n+s}}\,dx\,dy.\] As in (3.16), Young's inequality and Lemma 2.2 yield that \[I_{2,2,2}\leq c\operatorname{Tail}_{\frac{s}{p},p}\left(f-(f)_{B_{4}};B_{4}\right)^{p}+\frac{1}{4}\frac{\nu_{0}}{c_{p}\Lambda\tau}\fint_{\mathcal{B}_{4}}|D^{\tau}d_{s}(u-w)|^{p}\,d\mu_{\tau}.\] Combine all the estimates of \(I_{1}\) and \(I_{2}\) to see that \[\frac{1}{\tau}\fint_{\mathcal{B}_{4}}|D^{\tau}d_{s}(u-w)|^{p}\,d\mu_{\tau}\leq c\left(\frac{1}{\tau}\fint_{\mathcal{B}_{4}}|D^{\tau}d_{0}f|^{p}\,d\mu_{\tau}+\operatorname{Tail}_{\frac{s}{p},p}\left(f-(f)_{B_{4}};B_{4}\right)^{p}\right)\leq c\delta^{p}, \tag{3.17}\] where we have used (3.14) for the last inequality.
We next apply Lemma 3.1 to see that there exists a small number \(\epsilon_{0}\in(0,1)\) depending only on \(n,s,p\) and \(\Lambda\) such that \[\left(\frac{1}{\tau}\fint_{\mathcal{B}_{2}}|D^{\tau}d_{s}w|^{p(1+\epsilon_{0})}\,d\mu_{\tau}\right)^{\frac{1}{p(1+\epsilon_{0})}}\leq c\left[\left(\frac{1}{\tau}\fint_{\mathcal{B}_{3}}|D^{\tau}d_{s}w|^{p}\,d\mu_{\tau}\right)^{\frac{1}{p}}+\operatorname{Tail}_{\rm s,p}(w-(w)_{B_{3}};B_{3})\right]\] for some constant \(c=c(n,s,p,\Lambda)\). Note that using Minkowski's inequality, (3.17) and (3.14), we have that \[\left(\frac{1}{\tau}\fint_{\mathcal{B}_{3}}|D^{\tau}d_{s}w|^{p}\,d\mu_{\tau}\right)^{\frac{1}{p}}\leq\left(\frac{1}{\tau}\fint_{\mathcal{B}_{3}}|D^{\tau}d_{s}(w-u)|^{p}\,d\mu_{\tau}\right)^{\frac{1}{p}}+\left(\frac{1}{\tau}\fint_{\mathcal{B}_{3}}|D^{\tau}d_{s}u|^{p}\,d\mu_{\tau}\right)^{\frac{1}{p}}\leq c\] and \[\operatorname{Tail}_{\rm s,p}(w-(w)_{B_{3}};B_{3})\leq\operatorname{Tail}_{\rm s,p}(w-u-(w-u)_{B_{3}};B_{3})+\operatorname{Tail}_{\rm s,p}(u-(u)_{B_{3}};B_{3})\eqqcolon T_{1}+T_{2}.\]

**Estimate of \(T_{1}\).** Since \(u=w\) on \(\mathbb{R}^{n}\setminus B_{3}\), it follows from (2.2), Lemma 2.3, (2.4) and (3.17) that \[T_{1}=\operatorname{Tail}_{\rm s,p}((w-u)_{B_{3}};B_{3})=\nu_{1}|(w-u)_{B_{3}}|\leq\left(\frac{c}{\tau}\fint_{\mathcal{B}_{3}}|D^{\tau}d_{s}(w-u)|^{p}\,d\mu_{\tau}\right)^{\frac{1}{p}}\leq c,\] where \(\nu_{1}=\operatorname{Tail}_{\rm s,p}(1;B_{3})\) is the constant from (2.2).

**Estimate of \(T_{2}\).** Applying (2.10) with \(y_{0}=0\), \(x_{0}=0\), \(\rho=3\), \(R=4\), \(\alpha=0\) and \(t=s\) leads to \[T_{2}\leq\operatorname{Tail}_{\rm s,p}(u-(u)_{B_{4}};B_{4})+\left(\frac{c}{\tau}\fint_{\mathcal{B}_{4}}|D^{\tau}d_{s}u|^{p}\,d\mu_{\tau}\right)^{\frac{1}{p}}\leq c.\] For the last inequality, we have used (3.14). Therefore, we have \[\left(\frac{1}{\tau}\fint_{\mathcal{B}_{2}}|D^{\tau}d_{s}w|^{p(1+\epsilon_{0})}\,d\mu_{\tau}\right)^{\frac{1}{p(1+\epsilon_{0})}}\leq c \tag{3.18}\] for some constant \(c=c(n,s,p,\Lambda)\).
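Let us point out that the higher integrability (3.18) is precisely what is needed in the next step: it creates room in Holder's inequality so that the coefficient difference \(A-A_{2}\), which is small only in an integral sense by (3.14), can be absorbed when estimating \(J_{2}\) below.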
We now note from [5, Proposition 2.12] that there is a unique weak solution \(v\in X_{w}^{s,p}(B_{2},B_{3})\cap L_{sp}^{p-1}(\mathbb{R}^{n})\) to \[\begin{cases}(-\Delta_{p})_{A_{2}}^{s}v=0&\text{in }B_{2},\\ v=w&\text{on }\mathbb{R}^{n}\setminus B_{2}.\end{cases}\] Therefore, we observe that \[\left\langle(-\Delta_{p})_{A}^{s}w-(-\Delta_{p})_{A_{2}}^{s}v,w-v\right\rangle=0\] and rearrange it as \[J_{1}\coloneqq\left\langle(-\Delta_{p})_{A_{2}}^{s}w-(-\Delta_{p})_{A_{2}}^{s}v,w-v\right\rangle=\left\langle(-\Delta_{p})_{A_{2}}^{s}w-(-\Delta_{p})_{A}^{s}w,w-v\right\rangle\eqqcolon J_{2}.\] We first note from (2.16) that \[J_{1}\geq\frac{\nu_{0}}{c_{p}\Lambda\tau}\fint_{\mathcal{B}_{2}}\left|D^{\tau}d_{s}(w-v)\right|^{p}d\mu_{\tau}.\]

**Estimate of \(J_{2}\).** Using the definition of \(A_{2}\) given in (2.1), we next estimate \(J_{2}\) as \[J_{2}\leq\int_{B_{2}}\int_{B_{2}}\frac{|w(x)-w(y)|^{p-1}\,|A-A_{2}|\,|(w-v)(x)-(w-v)(y)|}{|x-y|^{n+sp}}\,dx\,dy.\] We then use Holder's inequality with the three exponents \(\frac{p(1+\epsilon_{0})}{p-1}\), \(\frac{p(1+\epsilon_{0})}{(p-1)\epsilon_{0}}\) and \(p\), together with (2.4), Young's inequality, (3.14) and (3.18), to obtain \[\begin{split}J_{2}&\leq c\left(\frac{1}{\tau}\fint_{\mathcal{B}_{2}}|D^{\tau}d_{s}w|^{p(1+\epsilon_{0})}\,d\mu_{\tau}\right)^{\frac{p-1}{p(1+\epsilon_{0})}}\left(\fint_{B_{2}}\fint_{B_{2}}|A-A_{2}|^{\frac{p(1+\epsilon_{0})}{(p-1)\epsilon_{0}}}\,dx\,dy\right)^{\frac{(p-1)\epsilon_{0}}{p(1+\epsilon_{0})}}\left(\frac{1}{\tau}\fint_{\mathcal{B}_{2}}|D^{\tau}d_{s}(w-v)|^{p}\,d\mu_{\tau}\right)^{\frac{1}{p}}\\&\leq\frac{1}{4}\frac{\nu_{0}}{c_{p}\Lambda\tau}\fint_{\mathcal{B}_{2}}\left|D^{\tau}d_{s}(w-v)\right|^{p}d\mu_{\tau}+c\delta^{\nu}\end{split}\] for some constant \(\nu=\nu(n,s,p,\Lambda)>0\), where we have also used that \(|A-A_{2}|\leq 2\Lambda\). Thus, we get that \[\frac{1}{\tau}\fint_{\mathcal{B}_{2}}\left|D^{\tau}d_{s}(w-v)\right|^{p}d\mu_{\tau}\leq c\delta^{\nu}. \tag{3.19}\] Taking \(\delta\) sufficiently small depending on \(n,s,p,\Lambda\) and \(\epsilon\), we have that \[\begin{split}\frac{1}{\tau}\fint_{\mathcal{B}_{2}}\left|D^{\tau}d_{s}(u-v)\right|^{p}d\mu_{\tau}&\leq\frac{c}{\tau}\fint_{\mathcal{B}_{2}}\left|D^{\tau}d_{s}(u-w)\right|^{p}d\mu_{\tau}+\frac{c}{\tau}\fint_{\mathcal{B}_{2}}\left|D^{\tau}d_{s}(w-v)\right|^{p}d\mu_{\tau}\\&\leq c\delta^{p}+c\delta^{\nu}\leq\epsilon^{p},\end{split} \tag{3.20}\] where we have used Jensen's inequality, (3.17) and (3.19). This proves the first inequality in (3.15). We now use Lemma 3.2 to see that \[\|D^{\tau}d_{s}v\|_{L^{\infty}(\mathcal{B}_{1})}\leq c[v]_{C^{s+\tau}(B_{1})}\leq c\left(\fint_{B_{2}}\left|v-(v)_{B_{2}}\right|^{p}dx\right)^{\frac{1}{p}}+c\operatorname{Tail}_{\rm s,p}(v-(v)_{B_{2}};B_{2}) \tag{3.21}\] for some constant \(c=c(n,s,p,\Lambda,\tau)\). As in (2.12), along with (3.18) and (3.19), we get that \[\left(\fint_{B_{2}}\left|v-(v)_{B_{2}}\right|^{p}dx\right)^{\frac{1}{p}}\leq c\left(\frac{1}{\tau}\fint_{\mathcal{B}_{2}}\left|D^{\tau}d_{s}v\right|^{p}d\mu_{\tau}\right)^{\frac{1}{p}}\leq c. \tag{3.22}\] As in the proof of the estimates of \(T_{1}\) and \(T_{2}\), we obtain \[\begin{split}\operatorname{Tail}_{\rm s,p}(v-(v)_{B_{2}};B_{2})&\leq\operatorname{Tail}_{\rm s,p}((v-w)-(v-w)_{B_{2}};B_{2})\\&\quad+\operatorname{Tail}_{\rm s,p}((w-u)-(w-u)_{B_{2}};B_{2})+\operatorname{Tail}_{\rm s,p}(u-(u)_{B_{2}};B_{2})\\&\leq c\left(|(v-w)_{B_{2}}|+|(u-w)_{B_{2}}|\right)+\operatorname{Tail}_{\rm s,p}(w-u;B_{2})+\operatorname{Tail}_{\rm s,p}(u-(u)_{B_{2}};B_{2}).\end{split}\] We note that \[|(u-w)_{B_{2}}|+|(v-w)_{B_{2}}|\leq c\left(\int_{B_{3}}|(u-w)(x)|^{p}\,dx\right)^{\frac{1}{p}}+c\left(\int_{B_{3}}|(v-w)(x)|^{p}\,dx\right)^{\frac{1}{p}}\leq c, \tag{3.23}\] where we have used Lemma 2.3, (3.17) and (3.19). In light of (3.23), (2.9) and (3.14), we find \[\operatorname{Tail}_{\rm s,p}(w-u;B_{2})+\operatorname{Tail}_{\rm s,p}(u-(u)_{B_{2}};B_{2})\leq c\left(\int_{B_{3}}|(u-w)(x)|^{p}\,dx\right)^{\frac{1}{p}}+\left(\frac{c}{\tau}\fint_{\mathcal{B}_{4}}|D^{\tau}d_{s}u|^{p}\,d\mu_{\tau}\right)^{\frac{1}{p}}+c\operatorname{Tail}_{\rm s,p}(u-(u)_{B_{4}};B_{4})\leq c.\] Therefore, combining the above three inequalities leads to \[\operatorname{Tail}_{\rm s,p}(v-(v)_{B_{2}};B_{2})\leq c. \tag{3.24}\] Taking into account (3.21), (3.22) and (3.24), we conclude the second inequality in (3.15). This completes the proof.

Using scaling and translation arguments, we obtain the following scaled version of Lemma 3.3.
**Lemma 3.4**.: _For any \(\epsilon>0\), there is a constant \(\delta=\delta(n,s,p,\Lambda,\epsilon)\) such that for any weak solution \(u\) to (1.1) with_ \[B_{20\rho_{x_{i}}}(x_{i})\subset\Omega\] _and_ \[\begin{split}&\frac{1}{\tau}\fint_{\mathcal{B}_{20\rho_{x_{i}}}(x_{i})}|D^{\tau}d_{s}u|^{p}\,d\mu_{\tau}+\operatorname{Tail}_{\rm s,p}\left(\frac{u-(u)_{B_{20\rho_{x_{i}}}(x_{i})}}{(20\rho_{x_{i}})^{\tau+s}};B_{20\rho_{x_{i}}}(x_{i})\right)^{p}\leq\lambda^{p},\\&\frac{1}{\tau}\fint_{\mathcal{B}_{20\rho_{x_{i}}}(x_{i})}|D^{\tau}d_{0}f|^{p}\,d\mu_{\tau}+\operatorname{Tail}_{\frac{s}{p},p}\left(\frac{f-(f)_{B_{20\rho_{x_{i}}}(x_{i})}}{(20\rho_{x_{i}})^{\tau}};B_{20\rho_{x_{i}}}(x_{i})\right)^{p}\\&\quad+\left(\lambda\fint_{B_{10\rho_{x_{i}}}(x_{i})}\fint_{B_{10\rho_{x_{i}}}(x_{i})}\left|A(x,y)-(A)_{\mathcal{B}_{10\rho_{x_{i}}}(x_{i})}\right|\,dx\,dy\right)^{p}\leq(\delta\lambda)^{p},\end{split} \tag{3.25}\] _there exists a weak solution \(v\in W^{s,p}(B_{10\rho_{x_{i}}}(x_{i}))\cap L^{p-1}_{sp}(\mathbb{R}^{n})\) to_ \[(-\Delta_{p})^{s}_{A_{10\rho_{x_{i}},x_{i}}}v=0\quad\text{in }B_{10\rho_{x_{i}}}(x_{i}) \tag{3.26}\] _such that_ \[\frac{1}{\tau}\fint_{\mathcal{B}_{5\rho_{x_{i}}}(x_{i})}|D^{\tau}d_{s}(u-v)|^{p}\,d\mu_{\tau}\leq(\epsilon\lambda)^{p}\quad\text{and}\quad\|D^{\tau}d_{s}v\|_{L^{\infty}(\mathcal{B}_{5\rho_{x_{i}}}(x_{i}))}\leq c_{0}\lambda \tag{3.27}\] _for some constant \(c_{0}=c_{0}(n,s,p,\Lambda,\tau)\)._

Proof.: Let us define the scaled functions, for \(x,y\in\mathbb{R}^{n}\), \[\tilde{u}(x)=\frac{u(5\rho_{x_{i}}x+x_{i})}{(5\rho_{x_{i}})^{\tau+s}\lambda},\quad\tilde{f}(x)=\frac{f(5\rho_{x_{i}}x+x_{i})}{(5\rho_{x_{i}})^{\tau}\lambda}\quad\text{and}\quad\tilde{A}(x,y)=A(5\rho_{x_{i}}x+x_{i},5\rho_{x_{i}}y+x_{i}),\] so that \[(-\Delta_{p})^{s}_{\tilde{A}}\tilde{u}=(-\Delta_{p})^{\frac{s}{p}}\tilde{f}\quad\text{in }B_{4}\] and (3.14) holds with \(u=\tilde{u}\), \(f=\tilde{f}\) and \(A=\tilde{A}\) by (3.25). Therefore, using Lemma 3.3, there is a weak solution \(\tilde{v}\in W^{s,p}(B_{2})\cap L^{p-1}_{sp}(\mathbb{R}^{n})\) to \[(-\Delta_{p})^{s}_{\tilde{A}_{2}}\tilde{v}=0\quad\text{in }B_{2}\] such that (3.15) holds with \(u=\tilde{u}\) and \(v=\tilde{v}\). Taking \(v(x)=\lambda\left(5\rho_{x_{i}}\right)^{\tau+s}\tilde{v}\left(\frac{x-x_{i}}{5\rho_{x_{i}}}\right)\) for \(x\in\mathbb{R}^{n}\) and changing variables, we obtain that \(v\in W^{s,p}(B_{10\rho_{x_{i}}}(x_{i}))\cap L^{p-1}_{sp}(\mathbb{R}^{n})\) is a weak solution to (3.26) satisfying (3.27).

## 4. Coverings of upper level sets

In this section, we construct coverings for upper level sets of \(D^{\tau}d_{\alpha+s}u\) under the following assumption: \[\int_{\mathcal{B}_{2}}|D^{\tau}d_{\alpha+s}u|^{\tilde{p}}+|D^{\tau}d_{\alpha}f|^{\tilde{q}}\,d\mu_{\tau}<\infty \tag{4.1}\] for some \[\tilde{p},\,\tilde{q}\in[p,\infty)\quad\text{and}\quad\alpha\in[0,1)\,. \tag{4.2}\] We note that the given parameter \(\alpha\) will be used to improve the range of \(\sigma\) as in (1.3) (see Section 5.2 below). Let us fix \(\delta\in(0,1]\). To handle the upper level set of \(D^{\tau}d_{\alpha+s}u\), we define for any \(x\in B_{2}\) and \(\rho\in(0,2-|x|]\), \[\Theta\left(x,\rho\right)=\left(\fint_{\mathcal{B}_{\rho}(x)}|D^{\tau}d_{\alpha+s}u|^{\tilde{p}}\,d\mu_{\tau}\right)^{\frac{1}{\tilde{p}}}+\frac{1}{\delta}\left(\fint_{\mathcal{B}_{\rho}(x)}|D^{\tau}d_{\alpha}f|^{\tilde{q}}\,d\mu_{\tau}\right)^{\frac{1}{\tilde{q}}}. \tag{4.3}\]
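As a direct consequence of the doubling property (2.5), the excess functional \(\Theta\) satisfies \[\Theta(x,\rho)\leq\left(\frac{\rho^{\prime}}{\rho}\right)^{\frac{n+p\tau}{\min\{\tilde{p},\tilde{q}\}}}\Theta(x,\rho^{\prime})\quad\text{whenever }0<\rho\leq\rho^{\prime}\leq 2-|x|.\] This elementary observation is the mechanism behind the bounds (4.11) and (4.12) in the proof below.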
In this setting, based on the techniques given in [28], we now prove the following lemma, which is a modified version of [28, Proposition 5.1].

**Lemma 4.1**.: _Let \(u\in L^{p-1}_{sp}(\mathbb{R}^{n})\) and \(f\in L^{p-1}_{s}(\mathbb{R}^{n})\) satisfy (4.1) along with (4.2), and let \(1\leq r_{1}<r_{2}\leq 2\). Then there are families of countable disjoint diagonal balls and off-diagonal cubes, \(\left\{\mathcal{B}_{\rho_{x_{i}}}\left(x_{i}\right)\right\}_{i\in\mathbb{N}}\) and \(\left\{\mathcal{Q}\right\}_{\mathcal{Q}\in\mathcal{A}}\), respectively, such that_ \[E_{\lambda}\coloneqq\left\{(x,y)\in\mathcal{B}_{r_{1}}\,;\,|D^{\tau}d_{\alpha+s}u|(x,y)>\lambda\right\}\subset\left(\bigcup_{i\in\mathbb{N}}\mathcal{B}_{5\rho_{x_{i}}}\left(x_{i}\right)\right)\bigcup\left(\bigcup_{\mathcal{Q}\in\mathcal{A}}\mathcal{Q}\right) \tag{4.4}\] _whenever \(\lambda\geq\lambda_{0}\), where_ \[\begin{split}\lambda_{0}&\coloneqq\frac{c_{d}}{\tau^{\frac{1}{\tilde{p}}}}\left(\frac{20}{r_{2}-r_{1}}\right)^{2n}\left(\left(\fint_{\mathcal{B}_{2}}|D^{\tau}d_{\alpha+s}u|^{\tilde{p}}\,d\mu_{\tau}\right)^{\frac{1}{\tilde{p}}}+\mathrm{Tail}_{\rm s,p}\left(\frac{u-(u)_{B_{2}}}{2^{\alpha+\tau+s}};B_{2}\right)\right)\\&\quad+\frac{c_{d}}{\tau^{\frac{1}{\tilde{p}}}}\left(\frac{20}{r_{2}-r_{1}}\right)^{2n}\frac{1}{\delta}\left(\left(\fint_{\mathcal{B}_{2}}|D^{\tau}d_{\alpha}f|^{\tilde{q}}\,d\mu_{\tau}\right)^{\frac{1}{\tilde{q}}}+\mathrm{Tail}_{\frac{s}{p},p}\left(\frac{f-(f)_{B_{2}}}{2^{\alpha+\tau}};B_{2}\right)\right)\end{split} \tag{4.5}\] _for some constant \(c_{d}=c_{d}(n,s,p)\) independent of \(\tau\) and \(\alpha\). In particular, we have that_ \[\Theta(x_{i},\rho_{x_{i}})\geq\frac{\tau^{\frac{1}{\tilde{p}}}}{c_{d}}\lambda\quad\text{and}\quad\Theta(x_{i},\rho)\leq\frac{\tau^{\frac{1}{\tilde{p}}}}{c_{d}}\lambda\quad\text{ if }\rho_{x_{i}}\leq\rho\leq\frac{r_{2}-r_{1}}{10}\;(i=1,2,\ldots), \tag{4.6}\] \[\left(\fint_{\mathcal{Q}}|D^{\tau}d_{\alpha+s}u|^{\tilde{\gamma}}\,d\mu_{\tau}\right)^{\frac{1}{\tilde{\gamma}}}\leq\frac{c_{oh}}{\tau^{\frac{1}{\tilde{p}}}}\lambda\quad\text{for any }\mathcal{Q}\in\mathcal{A} \tag{4.7}\] _and_ \[\sum_{\mathcal{Q}\in\mathcal{A}}\mu_{\tau}(\mathcal{Q})\leq\frac{c_{od}}{\lambda^{\tilde{p}}}\int_{\mathcal{B}_{r_{2}}\cap\{|D^{\tau}d_{\alpha+s}u|>c_{u}\lambda\}}|D^{\tau}d_{\alpha+s}u|^{\tilde{p}}\,d\mu_{\tau} \tag{4.8}\] _for some constants \(c_{oh}=c_{oh}(n,s,p,\tilde{p},\tilde{\gamma})\) and \(c_{od}=c_{od}(n,s,p,\tilde{p})\) independent of \(\tau\) and \(\alpha\), where \(c_{u}=\frac{\tau^{\frac{1}{\tilde{p}}}}{c_{d}}\) and_ \[\tilde{\gamma}=\begin{cases}\frac{n\tilde{p}}{n-s\tilde{p}}&\text{if }s\tilde{p}<n,\\ 2\tilde{p}&\text{if }s\tilde{p}\geq n.\end{cases} \tag{4.9}\]

Proof.: **Step 1. Diagonal balls and Vitali's covering.** We first find diagonal balls satisfying (4.6) using the exit time argument and Vitali's covering lemma.
Let us consider a constant \(\lambda\) satisfying \[\begin{split}\lambda&\geq\frac{1}{\kappa}\left(\frac{20}{r_{2}-r_{1}}\right)^{2n}\left(\left(\fint_{\mathcal{B}_{2}}|D^{\tau}d_{\alpha+s}u|^{\tilde{p}}\,d\mu_{\tau}\right)^{\frac{1}{\tilde{p}}}+\mathrm{Tail}_{\rm s,p}\left(\frac{u-(u)_{B_{2}}}{2^{\alpha+\tau+s}};B_{2}\right)\right)\\&\quad+\frac{1}{\kappa}\left(\frac{20}{r_{2}-r_{1}}\right)^{2n}\frac{1}{\delta}\left(\left(\fint_{\mathcal{B}_{2}}|D^{\tau}d_{\alpha}f|^{\tilde{q}}\,d\mu_{\tau}\right)^{\frac{1}{\tilde{q}}}+\mathrm{Tail}_{\frac{s}{p},p}\left(\frac{f-(f)_{B_{2}}}{2^{\alpha+\tau}};B_{2}\right)\right),\end{split} \tag{4.10}\] where a free parameter \(\kappa\in(0,1]\) will be selected later. We next define \[D_{\kappa\lambda}=\left\{(x,x)\in\mathcal{B}_{r_{1}}\,;\,\sup_{0<\rho\leq\frac{r_{2}-r_{1}}{10}}\Theta(x,\rho)>\kappa\lambda\right\}.\] We now note from (2.4) that for any \(x\in B_{r_{1}}\), \[\begin{split}\left(\fint_{\mathcal{B}_{\frac{r_{2}-r_{1}}{10}}(x)}|D^{\tau}d_{\alpha+s}u|^{\tilde{p}}\,d\mu_{\tau}\right)^{\frac{1}{\tilde{p}}}&\leq\left(\frac{\mu_{\tau}\left(\mathcal{B}_{2}\right)}{\mu_{\tau}\left(\mathcal{B}_{\frac{r_{2}-r_{1}}{10}}(x)\right)}\fint_{\mathcal{B}_{2}}|D^{\tau}d_{\alpha+s}u|^{\tilde{p}}\,d\mu_{\tau}\right)^{\frac{1}{\tilde{p}}}\\&\leq\left(\left(\frac{20}{r_{2}-r_{1}}\right)^{2n}\fint_{\mathcal{B}_{2}}|D^{\tau}d_{\alpha+s}u|^{\tilde{p}}\,d\mu_{\tau}\right)^{\frac{1}{\tilde{p}}}\end{split} \tag{4.11}\] and \[\frac{1}{\delta}\left(\fint_{\mathcal{B}_{\frac{r_{2}-r_{1}}{10}}(x)}|D^{\tau}d_{\alpha}f|^{\tilde{q}}\,d\mu_{\tau}\right)^{\frac{1}{\tilde{q}}}\leq\frac{1}{\delta}\left(\left(\frac{20}{r_{2}-r_{1}}\right)^{2n}\fint_{\mathcal{B}_{2}}|D^{\tau}d_{\alpha}f|^{\tilde{q}}\,d\mu_{\tau}\right)^{\frac{1}{\tilde{q}}}. \tag{4.12}\] Combine (4.3), (4.10), (4.11) and (4.12) to see that \[\Theta\left(x,\frac{r_{2}-r_{1}}{10}\right)\leq\kappa\lambda\quad\text{for any }x\in B_{r_{1}}.\] Thus, there is an exit radius \(\rho_{x}\in\left(0,\frac{r_{2}-r_{1}}{10}\right]\) such that \[\Theta(x,\rho_{x})\geq\kappa\lambda\quad\text{and}\quad\Theta(x,\rho)\leq\kappa\lambda\quad\text{if }\rho_{x}\leq\rho\leq\frac{r_{2}-r_{1}}{10}, \tag{4.13}\] for each \((x,x)\in D_{\kappa\lambda}\). Using the Vitali covering lemma, we obtain a family of countable disjoint diagonal balls \(\left\{\mathcal{B}_{\rho_{x_{i}}}\left(x_{i}\right)\right\}_{i\in\mathbb{N}}\) satisfying \[\bigcup_{(x,x)\in D_{\kappa\lambda}}\mathcal{B}_{\rho_{x}}\left(x\right)\subset\bigcup_{i\in\mathbb{N}}\mathcal{B}_{5\rho_{x_{i}}}\left(x_{i}\right). \tag{4.14}\]

### Step 2. Dyadic cubes and Calderon-Zygmund decomposition

In this step, we find a collection of dyadic cubes contained in \(\mathcal{B}_{r_{2}}\) which covers \(\mathcal{B}_{r_{1}}\), and apply the Calderon-Zygmund decomposition to each dyadic cube in order to find a covering of \(E_{\lambda}\). We first find finitely many disjoint dyadic cubes \(\{\mathcal{K}_{j}\}_{j}\) such that \[\mathcal{B}_{r_{1}}\subset\bigcup_{j}\mathcal{K}_{j}\subset\mathcal{B}_{r_{2}}\quad\text{and}\quad\mathcal{K}_{j}\cap\mathcal{B}_{r_{1}}\neq\emptyset.\] This follows from [28, Proposition 5-5B]. In particular, each cube is of the form \(\mathcal{K}_{j}=\mathcal{Q}_{2^{-k_{0}}}\left(2^{-k_{0}}X_{j}\right)\) for some \(X_{j}\in\mathbb{Z}^{n}\times\mathbb{Z}^{n}\) and the positive integer \(k_{0}\) such that \[\frac{r_{2}-r_{1}}{20}\leq 2^{-k_{0}}20\sqrt{n}<\frac{r_{2}-r_{1}}{10}. \tag{4.15}\] We recall \(P^{d}\mathcal{K}=K^{d}\times K^{d}\) for a given cube \(\mathcal{K}=K^{1}\times K^{2}\).
In this setting, we note from (2.4), (2.6) and (4.10) that \[\begin{split}&\left(\fint_{\mathcal{K}}|D^{\tau}d_{\alpha+s}u|^{\tilde{p}}\,d\mu_{\tau}\right)^{\frac{1}{\tilde{p}}}+\left[\sum_{d=1}^{2}\left(\frac{1}{\tau}\fint_{P^{d}\mathcal{K}}|D^{\tau}d_{\alpha+s}u|^{\tilde{p}}\,d\mu_{\tau}\right)^{\frac{1}{\tilde{p}}}\right]\\&\leq\left(\frac{\mu_{\tau}(\mathcal{B}_{2})}{\mu_{\tau}(\mathcal{K})}\fint_{\mathcal{B}_{2}}|D^{\tau}d_{\alpha+s}u|^{\tilde{p}}\,d\mu_{\tau}\right)^{\frac{1}{\tilde{p}}}+\left[\sum_{d=1}^{2}\left(\frac{1}{\tau}\frac{\mu_{\tau}(\mathcal{B}_{2})}{\mu_{\tau}(P^{d}\mathcal{K})}\fint_{\mathcal{B}_{2}}|D^{\tau}d_{\alpha+s}u|^{\tilde{p}}\,d\mu_{\tau}\right)^{\frac{1}{\tilde{p}}}\right]\\&\leq 3\left(2^{n}\frac{\nu_{0}+1}{\tau}\left(\frac{800\sqrt{n}}{r_{2}-r_{1}}\right)^{2n}\fint_{\mathcal{B}_{2}}|D^{\tau}d_{\alpha+s}u|^{\tilde{p}}\,d\mu_{\tau}\right)^{\frac{1}{\tilde{p}}}\leq\lambda\end{split} \tag{4.16}\] for any \(\mathcal{K}\in\{\mathcal{K}_{j}\}\), by taking \(\kappa\in(0,\kappa_{0}]\), where \[\kappa_{0}=\left(\frac{\tau}{(\nu_{0}+1)(80n)^{4n}4^{\tilde{p}}}\right)^{\frac{1}{\tilde{p}}}. \tag{4.17}\] We next observe a simple fact for a dyadic cube \(\mathcal{Q}=Q^{1}\times Q^{2}\) and its predecessor \(\tilde{\mathcal{Q}}=\tilde{Q}^{1}\times\tilde{Q}^{2}\). If \(\operatorname{dist}\left(\tilde{Q}^{1},\tilde{Q}^{2}\right)\geq d\left(\tilde{\mathcal{Q}}\right)\), where \(d(\mathcal{Q})\) denotes the side length of \(\mathcal{Q}\), then \[\operatorname{dist}\left(Q^{1},Q^{2}\right)\geq\operatorname{dist}\left(\tilde{Q}^{1},\tilde{Q}^{2}\right)\geq d\left(\tilde{\mathcal{Q}}\right)>d(\mathcal{Q}). \tag{4.18}\] In light of (4.16) and (4.18), we apply the same arguments as in the proof of the classical Calderon-Zygmund decomposition lemma to each cube \(\mathcal{K}\in\{\mathcal{K}_{j}\}\), in order to find a family of countable disjoint dyadic cubes \(\tilde{\mathcal{A}}=\left\{\mathcal{Q}^{i}\right\}\) such that each \(\mathcal{Q}^{i}\subset\mathcal{K}_{j}\) for some \(j\) satisfies \[\begin{split}&\left(\fint_{\mathcal{Q}^{i}}|D^{\tau}d_{\alpha+s}u|^{\tilde{p}}\,d\mu_{\tau}\right)^{\frac{1}{\tilde{p}}}+\mathfrak{A}(\mathcal{Q}^{i})\left(\frac{d(\mathcal{Q}^{i})}{\operatorname{dist}(Q^{i,1},Q^{i,2})}\right)^{s+\alpha+\tau}\left[\sum_{d=1}^{2}\left(\frac{1}{\tau}\fint_{P^{d}\mathcal{Q}^{i}}|D^{\tau}d_{\alpha+s}u|^{\tilde{p}}\,d\mu_{\tau}\right)^{\frac{1}{\tilde{p}}}\right]>\lambda\quad\text{and}\\&\left(\fint_{\tilde{\mathcal{Q}}^{i}}|D^{\tau}d_{\alpha+s}u|^{\tilde{p}}\,d\mu_{\tau}\right)^{\frac{1}{\tilde{p}}}+\mathfrak{A}(\tilde{\mathcal{Q}}^{i})\left(\frac{d(\tilde{\mathcal{Q}}^{i})}{\operatorname{dist}(\tilde{Q}^{i,1},\tilde{Q}^{i,2})}\right)^{s+\alpha+\tau}\left[\sum_{d=1}^{2}\left(\frac{1}{\tau}\fint_{P^{d}\tilde{\mathcal{Q}}^{i}}|D^{\tau}d_{\alpha+s}u|^{\tilde{p}}\,d\mu_{\tau}\right)^{\frac{1}{\tilde{p}}}\right]\leq\lambda,\end{split} \tag{4.19}\] where \(\tilde{\mathcal{Q}}^{i}=\tilde{Q}^{i,1}\times\tilde{Q}^{i,2}\) is the predecessor of \(\mathcal{Q}^{i}=Q^{i,1}\times Q^{i,2}\) and \[\mathfrak{A}\left(\mathcal{Q}^{i}\right)=\begin{cases}1&\text{if }\operatorname{dist}\left(Q^{i,1},Q^{i,2}\right)\geq d(\mathcal{Q}^{i}),\\ 0&\text{if }\operatorname{dist}\left(Q^{i,1},Q^{i,2}\right)<d(\mathcal{Q}^{i}).\end{cases} \tag{4.20}\]
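In other words, the weight \(\mathfrak{A}\) switches the off-diagonal correction in (4.19) on precisely when the two component cubes are separated at their own scale; for nearly diagonal cubes the correction is discarded, and such cubes will be absorbed into the diagonal balls in Step 3 below.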
\end{cases} \tag{4.20}\]
In addition, if \((x,y)\in\mathcal{B}_{r_{1}}\setminus\bigcup\limits_{i}\mathcal{Q}^{i}\), then by using Holder's inequality and (4.19), we find a sequence of dyadic cubes \(\mathcal{K}^{j}\notin\tilde{\mathcal{A}}\) containing \((x,y)\) such that
\[\fint_{\mathcal{K}^{j}}|D^{\tau}d_{\alpha+s}u|\,d\mu_{\tau}\leq\left(\fint_{\mathcal{K}^{j}}|D^{\tau}d_{\alpha+s}u|^{\tilde{p}}\,d\mu_{\tau}\right)^{\frac{1}{\tilde{p}}}\leq\lambda\quad\text{and}\quad\text{diam}\left(\mathcal{K}^{j}\right)\to 0\text{ as }j\to\infty.\]
Therefore, the fact that \(\mu_{\tau}\) has the doubling property (2.5) and is absolutely continuous with respect to the Lebesgue measure \(\,dx\,dy\) yields that
\[|D^{\tau}d_{\alpha+s}u|\leq\lambda\quad\text{a.e. on }\mathcal{B}_{r_{1}}\setminus\bigcup\limits_{i}\mathcal{Q}^{i}\text{ with respect to the measure }\mu_{\tau}. \tag{4.21}\]
**Step 3. First elimination of nearly diagonal cubes.** We first want to reduce the family of off-diagonal cubes as follows: for any \(\mathcal{Q}=Q^{1}\times Q^{2}\in\tilde{\mathcal{A}}\),
\[\mathcal{Q}\subset\bigcup\limits_{i}\mathcal{B}_{5\rho_{x_{i}}}(x_{i}) \tag{4.22}\]
provided that
\[\operatorname{dist}\left(\tilde{Q}^{1},\tilde{Q}^{2}\right)<d\left(\tilde{\mathcal{Q}}\right). \tag{4.23}\]
To this end, we note from (4.19) and (4.20) that one of the following three inequalities must hold:
\[\begin{split}&\left(\fint_{\mathcal{Q}}|D^{\tau}d_{\alpha+s}u|^{\tilde{p}}\,d\mu_{\tau}\right)^{\frac{1}{\tilde{p}}}>\frac{\lambda}{3},\quad\left(\frac{1}{\tau}\fint_{P^{1}\mathcal{Q}}|D^{\tau}d_{\alpha+s}u|^{\tilde{p}}\,d\mu_{\tau}\right)^{\frac{1}{\tilde{p}}}>\frac{\lambda}{3},\\ &\left(\frac{1}{\tau}\fint_{P^{2}\mathcal{Q}}|D^{\tau}d_{\alpha+s}u|^{\tilde{p}}\,d\mu_{\tau}\right)^{\frac{1}{\tilde{p}}}>\frac{\lambda}{3}.\end{split} \tag{4.24}\]
Suppose that the first inequality in (4.24) holds. In light of [28, Proposition 5.3], we observe that there are points \((x_{0},x_{0})\in\mathbb{R}^{2n}\) and \((x_{1},x_{2})\in\tilde{\mathcal{Q}}=\tilde{Q}^{1}\times\tilde{Q}^{2}\) such that
\[\operatorname{dist}\left(\tilde{Q}^{1},\tilde{Q}^{2}\right)=\sqrt{2}\operatorname{dist}(\tilde{\mathcal{Q}},(x_{0},x_{0}))=\sqrt{2}|(x_{1},x_{2})-(x_{0},x_{0})|.\]
Then using (4.23), we find that for any \((a,b)\in\tilde{\mathcal{Q}}\),
\[\begin{split}|(a,b)-(x_{0},x_{0})|&\leq|(a,b)-(x_{1},x_{2})|+|(x_{1},x_{2})-(x_{0},x_{0})|\\ &\leq 2\sqrt{n}d(\tilde{\mathcal{Q}})+\frac{1}{\sqrt{2}}\operatorname{dist}(\tilde{Q}^{1},\tilde{Q}^{2})\\ &\leq 4\sqrt{n}d(\mathcal{Q})+\frac{2}{\sqrt{2}}d\left(\mathcal{Q}\right)\\ &\leq 10\sqrt{n}d(\mathcal{Q}),\end{split}\]
which implies that \(\mathcal{Q}\subset\tilde{\mathcal{Q}}\subset\mathcal{B}_{10\sqrt{n}d(\mathcal{Q})}(x_{0})\). Since \(\mathcal{Q}\cap\mathcal{B}_{r_{1}}\neq\emptyset\), we take \(z_{0}\in Q^{1}\cap B_{r_{1}}\). Then we observe that
\[\mathcal{B}_{10\sqrt{n}d(\mathcal{Q})}(x_{0})\subset\mathcal{B}_{20\sqrt{n}d(\mathcal{Q})}(z_{0}), \tag{4.25}\]
where we have used the fact that \(z_{0}\in Q^{1}\subset B_{10\sqrt{n}d(\mathcal{Q})}(x_{0})\subset B_{20\sqrt{n}d(\mathcal{Q})}(z_{0})\).
Note from (2.6), the first inequality in (4.24) and (4.17) that
\[\begin{split}\fint_{\mathcal{B}_{20\sqrt{n}d(\mathcal{Q})}(z_{0})}|D^{\tau}d_{\alpha+s}u|^{\tilde{p}}\,d\mu_{\tau}&\geq\frac{\mu_{\tau}(\mathcal{Q})}{\mu_{\tau}\left(\mathcal{B}_{20\sqrt{n}d(\mathcal{Q})}(z_{0})\right)}\fint_{\mathcal{Q}}|D^{\tau}d_{\alpha+s}u|^{\tilde{p}}\,d\mu_{\tau}\\ &\geq\frac{\tau}{2^{n}\nu_{0}(20\sqrt{n})^{2n}}\left(\frac{\lambda}{3}\right)^{\tilde{p}}\\ &\geq(\kappa\lambda)^{\tilde{p}}\,.\end{split} \tag{4.26}\]
Using (4.13), (4.26) and the fact that \(z_{0}\in B_{r_{1}}\) and \(20\sqrt{n}d(\mathcal{Q})<\frac{r_{2}-r_{1}}{10}\) by (4.15), we have
\[\mathcal{B}_{20\sqrt{n}d(\mathcal{Q})}(z_{0})\subset\mathcal{B}_{\rho_{z_{0}}}\left(z_{0}\right). \tag{4.27}\]
From (4.14), (4.25) and (4.27), we have (4.22). We next assume that the second inequality in (4.24) holds. We write \(\tilde{\mathcal{Q}}=\mathcal{Q}_{d(\tilde{\mathcal{Q}})}\left(x,y\right)\) to see that \(P^{1}\tilde{\mathcal{Q}}=\mathcal{Q}_{d(\tilde{\mathcal{Q}})}\left(x\right)\). For any \((a,b)\in\tilde{\mathcal{Q}}\), we find that
\[\begin{split}|(a,b)-(x,x)|&\leq|(a,b)-(x,y_{2})|+|(x,y_{2})-(x,y_{1})|+|(x,y_{1})-(x,x)|\\ &\leq\frac{3}{2}\sqrt{n}d(\tilde{\mathcal{Q}})+\operatorname{dist}\left(\tilde{Q}^{1},\tilde{Q}^{2}\right)+\frac{1}{2}\sqrt{n}d\left(\tilde{\mathcal{Q}}\right)\\ &<10\sqrt{n}d(\mathcal{Q}),\end{split}\]
where \(y_{1}\in\tilde{Q}^{1}\) and \(y_{2}\in\tilde{Q}^{2}\) are such that \(|y_{1}-y_{2}|=\operatorname{dist}\left(\tilde{Q}^{1},\tilde{Q}^{2}\right)\). Thus, this yields that \(\mathcal{Q}\subset\tilde{\mathcal{Q}}\subset\mathcal{B}_{10\sqrt{n}d(\mathcal{Q})}(x)\subset\mathcal{B}_{20\sqrt{n}d(\mathcal{Q})}(z_{0})\) for some \(z_{0}\in B_{r_{1}}\). As in the estimate (4.26), we deduce that
\[\fint_{\mathcal{B}_{10\sqrt{n}d(\mathcal{Q})}(x)}|D^{\tau}d_{\alpha+s}u|^{\tilde{p}}\,d\mu_{\tau}\geq\frac{\mu_{\tau}\left(P^{1}\mathcal{Q}\right)}{\mu_{\tau}\left(\mathcal{B}_{10\sqrt{n}d(\mathcal{Q})}(x)\right)}\fint_{P^{1}\mathcal{Q}}|D^{\tau}d_{\alpha+s}u|^{\tilde{p}}\,d\mu_{\tau}\geq(\kappa\lambda)^{\tilde{p}}\,,\]
where we have used (2.4) and the second inequality in (4.24) for the last inequality. This implies (4.22). Similarly, if the third inequality in (4.24) is true, then we have (4.22). Therefore, we now restrict our attention to the subfamily \(\mathcal{A}\subset\widetilde{\mathcal{A}}\) defined by
\[\mathcal{A}=\left\{\mathcal{Q}\in\widetilde{\mathcal{A}}\,;\,\mathcal{Q}\not\subset\bigcup_{i}\mathcal{B}_{5\rho_{x_{i}}}\left(x_{i}\right)\right\}. \tag{4.28}\]
Then we note from (4.22), (4.23) and (4.18) that
\[\operatorname{dist}\left(Q^{1},Q^{2}\right)\geq\operatorname{dist}\left(\tilde{Q}^{1},\tilde{Q}^{2}\right)\geq d\left(\tilde{\mathcal{Q}}\right)>d(\mathcal{Q})\quad\text{for any }\mathcal{Q}=Q^{1}\times Q^{2}\in\mathcal{A}. \tag{4.29}\]
**Step 4. Off-diagonal reverse Holder's inequality.** We now introduce an important lemma to handle the off-diagonal cubes in \(\mathcal{A}\).
**Lemma 4.2** (Reverse Holder inequality on off-diagonal cubes).: _Let \(\mathcal{Q}=Q^{1}\times Q^{2}\) be a cube such that_
\[\operatorname{dist}\left(Q^{1},Q^{2}\right)\geq d(\mathcal{Q}).
\tag{4.30}\] _Then there is a constant \(c=c(n,s,p,\tilde{p},\tilde{\gamma})\) independent of \(\tau\) and \(\alpha\) such that_
\[\begin{split}\left(\fint_{\mathcal{Q}}|D^{\tau}d_{\alpha+s}u|^{\tilde{\gamma}}\,d\mu_{\tau}\right)^{\frac{1}{\tilde{\gamma}}}&\leq c\left(\fint_{\mathcal{Q}}|D^{\tau}d_{\alpha+s}u|^{p}\,d\mu_{\tau}\right)^{\frac{1}{p}}\\ &\quad+c\left(\frac{d(\mathcal{Q})}{\operatorname{dist}(Q^{1},Q^{2})}\right)^{s+\alpha+\tau}\left[\sum_{d=1}^{2}\left(\frac{1}{\tau}\fint_{P^{d}\mathcal{Q}}|D^{\tau}d_{\alpha+s}u|^{\tilde{p}}\,d\mu_{\tau}\right)^{\frac{1}{\tilde{p}}}\right].\end{split}\]
Proof.: We note from (4.30) that for any \(x\in Q^{1}\) and \(y\in Q^{2}\),
\[|x-y|\leq|x-x_{0}|+|x_{0}-y_{0}|+|y_{0}-y|\leq\sqrt{n}d(\mathcal{Q})+\operatorname{dist}(Q^{1},Q^{2})+\sqrt{n}d(\mathcal{Q})\leq 3\sqrt{n}\operatorname{dist}(Q^{1},Q^{2}),\]
where \(x_{0}\in Q^{1}\) and \(y_{0}\in Q^{2}\) are such that \(\operatorname{dist}(Q^{1},Q^{2})=|x_{0}-y_{0}|\). Therefore, we have that
\[\operatorname{dist}\left(Q^{1},Q^{2}\right)\leq|x-y|\leq 3\sqrt{n}\operatorname{dist}\left(Q^{1},Q^{2}\right) \tag{4.31}\]
whenever \(x\in Q^{1}\) and \(y\in Q^{2}\). The above inequality (4.31) and Jensen's inequality yield that
\[\frac{d(\mathcal{Q})^{2n}}{c(n)\operatorname{dist}\left(Q^{1},Q^{2}\right)^{n-p\tau}}\leq\mu_{\tau}(\mathcal{Q})\leq\frac{d(\mathcal{Q})^{2n}}{\operatorname{dist}\left(Q^{1},Q^{2}\right)^{n-p\tau}} \tag{4.32}\]
and
\[\begin{split}&\fint_{\mathcal{Q}}|D^{\tau}d_{\alpha+s}u|^{\tilde{\gamma}}\,d\mu_{\tau}\\ &\leq\frac{c\operatorname{dist}\left(Q^{1},Q^{2}\right)^{n-p\tau}}{d(\mathcal{Q})^{2n}}\frac{1}{\operatorname{dist}\left(Q^{1},Q^{2}\right)^{n-p\tau+\tilde{\gamma}(s+\alpha+\tau)}}\int_{Q^{1}}\int_{Q^{2}}|u(x)-u(y)|^{\tilde{\gamma}}\,dx\,dy\\ &\leq\sum_{i=1}^{2}\frac{c\operatorname{dist}\left(Q^{1},Q^{2}\right)^{-\tilde{\gamma}(s+\alpha+\tau)}}{d(\mathcal{Q})^{2n}}\int_{Q^{1}}\int_{Q^{2}}|u(x)-(u)_{Q^{i}}|^{\tilde{\gamma}}\,dx\,dy\\ &\quad+\frac{c\operatorname{dist}\left(Q^{1},Q^{2}\right)^{-\tilde{\gamma}(s+\alpha+\tau)}}{d(\mathcal{Q})^{2n}}\int_{Q^{1}}\int_{Q^{2}}|(u)_{Q^{1}}-(u)_{Q^{2}}|^{\tilde{\gamma}}\,dx\,dy=:\sum_{i=1}^{2}I_{i}+J\end{split}\]
for some constant \(c=c(n,\tilde{\gamma})\). We now further estimate \(I_{1}\), \(I_{2}\) and \(J\).
**Estimates of \(I_{1}\) and \(I_{2}\).** From Lemma 2.2 together with (4.9), there is a constant \(c=c(n,s,\tilde{p},\tilde{\gamma})\) such that
\[\left(\int_{Q^{1}}|u-(u)_{Q^{1}}|^{\tilde{\gamma}}\,dx\right)^{\frac{1}{\tilde{\gamma}}}\leq cd\left(Q^{1}\right)^{s}\left(\int_{Q^{1}}\int_{Q^{1}}\frac{|u(x)-u(y)|^{\tilde{p}}}{|x-y|^{n+s\tilde{p}}}\,dx\,dy\right)^{\frac{1}{\tilde{p}}}.
\tag{4.33}\] In light of (4.33), the fact that \[1\leq\left(\frac{2d(\mathcal{Q})}{|x-y|}\right)^{\tilde{p}(\alpha+\tau)-p \tau}\quad\text{ any }x,y\in Q^{1}\] and (2.4), we estimate \(I_{1}\) as \[I_{1} \leq\frac{c\operatorname{dist}(Q^{1},Q^{2})^{-\tilde{\gamma}(s+ \alpha+\tau)}}{d(\mathcal{Q})^{2n}}d(\mathcal{Q})^{2n+\tilde{\gamma}s}\left( \int_{Q^{1}}\int_{Q^{1}}\frac{|u(x)-u(y)|^{\tilde{p}}}{|x-y|^{n+s\tilde{p}}} \,dx\,dy\right)^{\frac{\tilde{\gamma}}{p}}\] \[\leq\frac{cd(\mathcal{Q})^{\tilde{\gamma}s}}{\operatorname{dist} (Q^{1},Q^{2})^{\tilde{\gamma}(s+\alpha+\tau)}}\left(\int_{Q^{1}}\int_{Q^{1}} \left(\frac{2d(\mathcal{Q})}{|x-y|}\right)^{\tilde{p}(\alpha+\tau)-p\tau} \frac{|u(x)-u(y)|^{\tilde{p}}}{|x-y|^{n+s\tilde{p}}}\,dx\,dy\right)^{\frac{ \tilde{\gamma}}{p}}\] \[\leq c\left(\frac{d(\mathcal{Q})}{\operatorname{dist}(Q^{1},Q^{2 })}\right)^{\tilde{\gamma}(s+\alpha+\tau)}\left(\frac{1}{\tau}\!\!\int_{Q^{1} \times Q^{1}}|D^{\tau}d_{\alpha+s}u|^{\tilde{p}}\,d\mu_{\tau}\right)^{\frac{ \tilde{\gamma}}{p}}\] for some constant \(c=c(n,s,p,\tilde{p},\tilde{\gamma})\). Likewise, we have \[I_{2}\leq c\left(\frac{d(\mathcal{Q})}{\operatorname{dist}(Q^{1},Q^{2})} \right)^{\tilde{\gamma}(s+\alpha+\tau)}\left(\frac{1}{\tau}\!\!\int_{Q^{2} \times Q^{2}}|D^{\tau}d_{\alpha+s}u|^{\tilde{p}}\,d\mu_{\tau}\right)^{\frac{ \tilde{\gamma}}{p}}.\] **Estimate of \(J\).** In light of Jensen's inequality, (4.31) and (4.32), we estimate \(J\) as \[J \leq c\frac{1}{\operatorname{dist}(Q^{1},Q^{2})^{\tilde{\gamma}(s +\alpha+\tau)}}\left(\int_{Q^{1}}\int_{Q^{2}}|u(x)-u(y)|^{p}\,dx\,dy\right)^{ \frac{\tilde{\gamma}}{p}}\] \[\leq c\left(\int_{Q^{1}}\int_{Q^{2}}\left(\frac{|u(x)-u(y)|}{|x- y|^{s+\alpha+\tau}}\right)^{p}\,dx\,dy\right)^{\frac{\tilde{\gamma}}{p}}\] \[\leq c\left(\int_{\mathcal{Q}}|D^{\tau}d_{\alpha+s}u|^{p}\,d\mu_{ \tau}\right)^{\frac{\tilde{\gamma}}{p}}.\] Combine all the estimates \(I_{1}\), \(I_{2}\) and \(J\) to get the desired result. _Remark 6_.: We now prove that if \(\mathcal{Q}\in\mathcal{A}\), then there is a constant \(c_{oh}=c_{oh}(n,s,p,\tilde{p},\tilde{\gamma})\) such that \[\left(\int_{\mathcal{Q}}|D^{\tau}d_{\alpha+s}u|^{\tilde{\gamma}}\,d\mu_{\tau} \right)^{\frac{1}{\tilde{\gamma}}}\leq c_{oh}\lambda\quad\text{for any }\mathcal{Q}\in\mathcal{A}. \tag{4.34}\] By \(p\leq\tilde{p}\), \(\tilde{\mathcal{Q}}\subset 4\mathcal{Q}\) and \(P^{d}\tilde{\mathcal{Q}}\subset 4P^{d}\mathcal{Q}\), we observe that \[\left(\int_{\mathcal{Q}}|D^{\tau}d_{\alpha+s}u|^{p}\,d\mu_{\tau}\right)^{\frac{ 1}{\tilde{\gamma}}}\leq\left(\int_{\mathcal{Q}}|D^{\tau}d_{\alpha+s}u|^{\tilde {p}}\,d\mu_{\tau}\right)^{\frac{1}{\tilde{\gamma}}}\leq\left(\frac{\mu_{\tau}( \mathcal{Q})}{\mu_{\tau}(\mathcal{Q})}\!\!\int_{\mathcal{Q}}|D^{\tau}d_{\alpha+s }u|^{\tilde{p}}\,d\mu_{\tau}\right)^{\frac{1}{\tilde{p}}}\] and \[\left(\frac{d(\mathcal{Q})}{\mathrm{dist}(Q^{1},Q^{2})}\right)^{s+ \alpha+\tau}\left[\sum_{d=1}^{2}\left(\frac{1}{\tau}\!\!\!\int_{P^{d}\mathcal{Q}} |D^{r}d_{\alpha+s}u|^{\tilde{p}}\,d\mu_{\tau}\right)^{\frac{1}{p}}\right]\] \[\leq\left(\frac{d(\tilde{\mathcal{Q}})}{\mathrm{dist}(\tilde{Q}^{ 1},\tilde{Q}^{2})}\right)^{s+\alpha+\tau}\left[\sum_{d=1}^{2}\left(\frac{\mu_{ \tau}(4P^{d}\mathcal{Q})}{\mu_{\tau}(P^{d}\mathcal{Q})}\frac{1}{\tau}\!\!\!\int _{P^{d}\tilde{\mathcal{Q}}}|D^{r}d_{\alpha+s}u|^{\tilde{p}}\,d\mu_{\tau}\right)^ {\frac{1}{p}}\right].\] Using (4.19), (4.29), Lemma 4.2, (2.5) and the above observations, we have (4.34). **Step 5. 
Decomposition of \(\mathcal{A}\).** We now decompose the family \(\mathcal{A}\) into \(AD_{\lambda}=\bigcap\limits_{d=1}^{2}AD_{\lambda}^{d}\) and \(ND_{\lambda}=\bigcup\limits_{d=1}^{2}ND_{\lambda}^{d}\), where we define \[AD_{\lambda}^{d}=\left\{\mathcal{Q}\in\mathcal{A}\,;\,\frac{1}{\tau}\!\!\!\int _{P^{d}\mathcal{Q}}|D^{r}d_{\alpha+s}u|^{\tilde{p}}\,d\mu_{\tau}\leq\left( \frac{\lambda}{4}\right)^{\tilde{p}}\right\} \tag{4.35}\] and \[ND_{\lambda}^{d}=\left\{\mathcal{Q}\in\mathcal{A}\,;\,\frac{1}{\tau}\!\!\!\int _{P^{d}\mathcal{Q}}|D^{r}d_{\alpha+s}u|^{\tilde{p}}\,d\mu_{\tau}>\left(\frac{ \lambda}{4}\right)^{\tilde{p}}\right\}. \tag{4.36}\] We now estimate \(\mu_{\tau}(\mathcal{Q})\) for each cubes \(\mathcal{Q}\in\mathcal{A}\). **Step 6. Measure estimate of \(\mathcal{Q}\in AD_{\lambda}\).** We first estimate \(\mu_{\tau}(\mathcal{Q})\) for \(\mathcal{Q}\in AD_{\lambda}\). **Lemma 4.3**.: _There is a constant \(c_{ad}=c_{ad}(\tilde{p})\) such that for every \(\mathcal{Q}=Q^{1}\times Q^{2}\in AD_{\lambda}\),_ \[\mu_{\tau}(\mathcal{Q})\leq\frac{c_{ad}}{\lambda^{\tilde{p}}}\int_{\mathcal{Q }\cap\{|D^{r}d_{\alpha+s}u|>\kappa\lambda\}}|D^{r}d_{\alpha+s}u|^{\tilde{p}} \,d\mu_{\tau}. \tag{4.37}\] Proof.: In light of (4.19) and Jensen's inequality, we have that \[\lambda^{\tilde{p}} \leq 3^{\tilde{p}-1}\left(\!\!\!\int_{\mathcal{Q}}|D^{r}d_{\alpha+s} u|^{\tilde{p}}\,d\mu_{\tau}\right) \tag{4.38}\] \[\quad+3^{\tilde{p}-1}\left(\frac{d(\mathcal{Q})}{\mathrm{dist}(Q^ {1},Q^{2})}\right)^{\tilde{p}(s+\alpha+\tau)}\left[\sum_{d=1}^{2}\left(\frac{1 }{\tau}\!\!\!\int_{P^{d}\mathcal{Q}}|D^{r}d_{\alpha+s}u|^{\tilde{p}}\,d\mu_{ \tau}\right)\right].\] Note that \[\left(\!\!\!\int_{\mathcal{Q}}|D^{r}d_{\alpha+s}u|^{\tilde{p}}\,d\mu_{\tau} \right)\leq(\kappa\lambda)^{\tilde{p}}+\frac{1}{\mu_{\tau}(\mathcal{Q})}\int_ {\mathcal{Q}\cap\{|D^{r}d_{\alpha+s}u|>\kappa\lambda\}}|D^{r}d_{\alpha+s}u|^{ \tilde{p}}\,d\mu_{\tau}. \tag{4.39}\] Taking into account (4.17), (4.29), (4.35), (4.38) and (4.39), we find that \[\frac{\lambda^{\tilde{p}}}{2}\leq\frac{3^{\tilde{p}-1}}{\mu_{\tau}(\mathcal{ Q})}\int_{\mathcal{Q}\cap\{|D^{r}d_{\alpha+s}u|>\kappa\lambda\}}|D^{r}d_{\alpha+s}u|^{ \tilde{p}}\,d\mu_{\tau},\] which implies the desired result (4.37). **Step 7. Measure estimate of \(\mathcal{Q}\in ND_{\lambda}\).** We next want to estimate \(\mu_{\tau}(\mathcal{Q})\) for \(\mathcal{Q}\in ND_{\lambda}\). To do this, we introduce families of countable diagonal cubes \[P^{d}ND_{\lambda}=\left\{P^{d}\mathcal{Q}\,;\,\mathcal{Q}=Q^{1}\times Q^{2}\in ND _{\lambda}\right\}\quad(d=1,2).\] Then we observe the following two facts (see [28, Section 5]): 1. \(P^{1}ND_{\lambda}=P^{2}ND_{\lambda}\). 2. There is a subfamily of disjoint cubes \(NH\subset P^{1}ND_{\lambda}\) such that \[\bigcup_{\mathcal{H}\in NH}\mathcal{H}=\bigcup_{\mathcal{Q}\in P^{1}ND_{ \lambda}}\mathcal{Q}=\bigcup_{\mathcal{Q}\in P^{2}ND_{\lambda}}\mathcal{Q}.\] (4.40) We now introduce the following lemma which gives an information of qualitative distance between \(Q^{1}\) and \(Q^{2}\) for \(\mathcal{Q}=Q^{1}\times Q^{2}\in ND_{\lambda}\). **Lemma 4.4**.: _Let \(\mathcal{Q}=Q^{1}\times Q^{2}\in ND_{\lambda}\). Assume that \(Q^{d}\times Q^{d}\subset\mathcal{H}\) for some \(d\in\{1,2\}\) and \(\mathcal{H}\in NH\). Then we have_ \[\mathrm{dist}\left(Q^{1},Q^{2}\right)\geq d(\mathcal{H}).\] Proof.: We first assume \(d=1\). We prove this Lemma by contradiction. Suppose that \[\operatorname{dist}\left(Q^{1},Q^{2}\right)<d(\mathcal{H}). 
\tag{4.41}\] Since \(\mathcal{Q}\) and \(\mathcal{H}\) are dyadic cubes with \(d(\mathcal{Q})\leq\operatorname{dist}\left(Q^{1},Q^{2}\right)<d(\mathcal{H})\), we observe
\[2d(\mathcal{Q})\leq d(\mathcal{H}). \tag{4.42}\]
Let \((x_{c},x_{c})\) be the center of \(\mathcal{H}\). Then we find that for \(\mathcal{B}\equiv\mathcal{B}_{\frac{d(\mathcal{H})}{2}}\left(x_{c}\right)\),
\[\mathcal{B}\subset\mathcal{H}\subset\sqrt{n}\mathcal{B}. \tag{4.43}\]
For any \((x,y)\in\mathcal{Q}\), we first note from (4.41) and (4.42) that
\[\begin{split}|(x,y)-(x_{c},x_{c})|&\leq|(x,y)-(x^{\prime},y^{\prime})|+|(x^{\prime},y^{\prime})-(x^{\prime},x^{\prime})|+|(x^{\prime},x^{\prime})-(x_{c},x_{c})|\\ &\leq 2\sqrt{n}d(\mathcal{Q})+\operatorname{dist}\left(Q^{1},Q^{2}\right)+\sqrt{2n}\,\frac{d(\mathcal{H})}{2}\\ &\leq\sqrt{n}d(\mathcal{H})+d(\mathcal{H})+\sqrt{n}d(\mathcal{H})<5\sqrt{n}d(\mathcal{H}),\end{split} \tag{4.44}\]
where \((x^{\prime},y^{\prime})\in\mathcal{Q}\) is such that \(\operatorname{dist}\left(Q^{1},Q^{2}\right)=|x^{\prime}-y^{\prime}|\), and we have used \(x^{\prime}\in Q^{1}\subset H\) for \(\mathcal{H}=H\times H\). Thus, (4.43) and (4.44) imply that \(\mathcal{Q},\mathcal{H}\subset\mathcal{B}_{5\sqrt{n}d(\mathcal{H})}\left(x_{c}\right)\). In addition, by following the same arguments as in (4.25), there is a point \(z_{0}\in B_{r_{1}}\cap Q^{1}\) such that \(\mathcal{B}_{5\sqrt{n}d(\mathcal{H})}\left(x_{c}\right)\subset\mathcal{B}_{10\sqrt{n}d(\mathcal{H})}\left(z_{0}\right)\), so that
\[\mathcal{Q},\,\mathcal{H}\subset\mathcal{B}_{10\sqrt{n}d(\mathcal{H})}\left(z_{0}\right). \tag{4.45}\]
Then using (4.36), (4.40), (4.45) and (2.4), we observe that
\[\begin{split}\left(\frac{\lambda}{4}\right)^{\tilde{p}}&<\frac{1}{\tau}\fint_{\mathcal{H}}|D^{\tau}d_{\alpha+s}u|^{\tilde{p}}\,d\mu_{\tau}\\ &\leq\frac{1}{\tau}\frac{\mu_{\tau}\left(\mathcal{B}_{10\sqrt{n}d(\mathcal{H})}\left(z_{0}\right)\right)}{\mu_{\tau}\left(\mathcal{B}_{\frac{d(\mathcal{H})}{2}}\left(x_{c}\right)\right)}\fint_{\mathcal{B}_{10\sqrt{n}d(\mathcal{H})}\left(z_{0}\right)}|D^{\tau}d_{\alpha+s}u|^{\tilde{p}}\,d\mu_{\tau}\\ &=\frac{(20\sqrt{n})^{n+p\tau}}{\tau}\fint_{\mathcal{B}_{10\sqrt{n}d(\mathcal{H})}\left(z_{0}\right)}|D^{\tau}d_{\alpha+s}u|^{\tilde{p}}\,d\mu_{\tau},\end{split}\]
which implies that
\[(\kappa\lambda)^{\tilde{p}}<\frac{\tau}{(20n)^{2n}4^{\tilde{p}}}\lambda^{\tilde{p}}<\fint_{\mathcal{B}_{10\sqrt{n}d(\mathcal{H})}\left(z_{0}\right)}|D^{\tau}d_{\alpha+s}u|^{\tilde{p}}\,d\mu_{\tau}. \tag{4.46}\]
Since (4.15) gives that \(10\sqrt{n}d(\mathcal{H})<\frac{r_{2}-r_{1}}{10}\), we find that \(\mathcal{B}_{10\sqrt{n}d(\mathcal{H})}\left(z_{0}\right)\subset\bigcup\limits_{i}\mathcal{B}_{5\rho_{x_{i}}}(x_{i})\), where we have used (4.13), (4.46) and the fact that \(z_{0}\in B_{r_{1}}\). Thus, this contradicts the definition of the family \(\mathcal{A}\) given in (4.28). We similarly prove the case for \(d=2\).
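We note that the step (4.42) uses only the fact that dyadic side lengths are integer powers of \(2\): writing \(d(\mathcal{Q})=2^{-a}\) and \(d(\mathcal{H})=2^{-b}\), the strict inequality \(d(\mathcal{Q})<d(\mathcal{H})\) forces \(a\geq b+1\), whence
\[d(\mathcal{Q})=2^{-a}\leq 2^{-(b+1)}=\frac{d(\mathcal{H})}{2}.\]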
With the above lemma, we now prove the following measure estimates for the cubes in \(ND_{\lambda}\).
**Lemma 4.5**.: _There is a constant \(c_{nd}=c_{nd}(n,s,p,\tilde{p})\) such that_
\[\sum_{\mathcal{Q}\in ND_{\lambda}}\mu_{\tau}(\mathcal{Q})\leq\frac{c_{nd}}{\lambda^{\tilde{p}}}\int_{\mathcal{B}_{r_{2}}\cap\{|D^{\tau}d_{\alpha+s}u|>\kappa\lambda\}}|D^{\tau}d_{\alpha+s}u|^{\tilde{p}}\,d\mu_{\tau}. \tag{4.47}\]
Proof.: Let \(\mathcal{Q}\in ND_{\lambda}\). We then note from (4.29) that
\[\begin{split}&\left(\frac{d(\mathcal{Q})}{\operatorname{dist}(Q^{1},Q^{2})}\right)^{\tilde{p}(s+\alpha+\tau)}\left[\left(\frac{1}{\tau}\fint_{P^{d}\mathcal{Q}}|D^{\tau}d_{\alpha+s}u|^{\tilde{p}}\,d\mu_{\tau}\right)\right]\\ &\leq\frac{(\kappa\lambda)^{\tilde{p}}}{\tau}+\left(\frac{d(\mathcal{Q})}{\operatorname{dist}(Q^{1},Q^{2})}\right)^{\tilde{p}(s+\alpha+\tau)}\left[\frac{1}{\tau}\left(\frac{1}{\mu_{\tau}(P^{d}\mathcal{Q})}\int_{P^{d}\mathcal{Q}\cap\{|D^{\tau}d_{\alpha+s}u|>\kappa\lambda\}}|D^{\tau}d_{\alpha+s}u|^{\tilde{p}}\,d\mu_{\tau}\right)\right]\end{split} \tag{4.48}\]
for \(d=1\) and \(2\). In light of (4.19), (4.17), Jensen's inequality, (4.39) and (4.48), we have that
\[\begin{split}&\frac{\lambda^{\tilde{p}}}{2}\mu_{\tau}(\mathcal{Q})\\ &\leq 3^{\tilde{p}-1}\int_{\mathcal{Q}\cap\{|D^{\tau}d_{\alpha+s}u|>\kappa\lambda\}}|D^{\tau}d_{\alpha+s}u|^{\tilde{p}}\,d\mu_{\tau}\\ &\quad+3^{\tilde{p}-1}\left(\frac{d(\mathcal{Q})}{\operatorname{dist}(Q^{1},Q^{2})}\right)^{\tilde{p}(s+\alpha+\tau)}\left[\sum_{d=1}^{2}\left(\frac{1}{\tau}\frac{\mu_{\tau}(\mathcal{Q})}{\mu_{\tau}(P^{d}\mathcal{Q})}\int_{P^{d}\mathcal{Q}\cap\{|D^{\tau}d_{\alpha+s}u|>\kappa\lambda\}}|D^{\tau}d_{\alpha+s}u|^{\tilde{p}}\,d\mu_{\tau}\right)\right].\end{split} \tag{4.49}\]
We now denote
\[K^{d}=\sum_{\mathcal{Q}\in ND_{\lambda}^{d}}\left(\frac{d(\mathcal{Q})}{\mathrm{dist}(Q^{1},Q^{2})}\right)^{\tilde{p}(s+\alpha+\tau)}\left[\left(\frac{1}{\tau}\frac{\mu_{\tau}(\mathcal{Q})}{\mu_{\tau}(P^{d}\mathcal{Q})}\int_{P^{d}\mathcal{Q}\cap\{|D^{\tau}d_{\alpha+s}u|>\kappa\lambda\}}|D^{\tau}d_{\alpha+s}u|^{\tilde{p}}\,d\mu_{\tau}\right)\right] \tag{4.50}\]
for \(d=1\) and \(2\). To estimate \(K^{d}\) for each \(d=1\) and \(2\), we investigate families of countable disjoint cubes
\[ND_{\lambda}^{d}(\mathcal{H})=\left\{\mathcal{Q}\in ND_{\lambda}^{d}\,;\,P^{d}\mathcal{Q}\subset\mathcal{H}\right\}\quad(d=1,2) \tag{4.51}\]
for \(\mathcal{H}\in NH\). We note from (4.40) that
\[ND_{\lambda}^{d}=\bigcup_{\mathcal{H}\in NH}ND_{\lambda}^{d}(\mathcal{H}),\]
where \(\left\{ND_{\lambda}^{d}(\mathcal{H})\right\}_{\mathcal{H}\in NH}\) are mutually disjoint classes.
Using (4.51), we first find countable mutually disjoint families of cubes \(\left[ND_{\lambda}^{d}(\mathcal{H})\right]_{i}\) such that
\[\left[ND_{\lambda}^{d}(\mathcal{H})\right]_{i}=\left\{\mathcal{Q}\in ND_{\lambda}^{d}(\mathcal{H})\,;\,d(\mathcal{Q})=2^{-i}d(\mathcal{H})\right\}\quad(i\geq 0)\]
and
\[ND_{\lambda}^{d}(\mathcal{H})=\bigcup_{i=0}^{\infty}\left[ND_{\lambda}^{d}(\mathcal{H})\right]_{i}.\]
In light of Lemma 4.4, we next decompose \(\left[ND_{\lambda}^{d}(\mathcal{H})\right]_{i}\) into subfamilies \(\left[ND_{\lambda}^{d}(\mathcal{H})\right]_{i,j}\) which are defined by
\[\left[ND_{\lambda}^{d}(\mathcal{H})\right]_{i,j}=\left\{\mathcal{Q}\in\left[ND_{\lambda}^{d}(\mathcal{H})\right]_{i}\,;\,2^{j}d(\mathcal{H})\leq\mathrm{dist}(Q^{1},Q^{2})<2^{j+1}d(\mathcal{H})\text{ for }\mathcal{Q}=Q^{1}\times Q^{2}\right\}\]
for \(j\geq 0\). To further decompose \(\left[ND_{\lambda}^{d}(\mathcal{H})\right]_{i,j}\) into smaller families, we denote all diagonal dyadic cubes in \(\mathcal{H}\) with side length \(2^{-i}d(\mathcal{H})\) by \(\mathcal{H}_{i}(k)=H_{i}(k)\times H_{i}(k)\) for \(i\geq 0\) and \(k=1,2,\ldots,2^{in}\). From the above notation, we set
\[\left[ND_{\lambda}^{d}(\mathcal{H})\right]_{i,j,k}=\left\{\mathcal{Q}\in\left[ND_{\lambda}^{d}(\mathcal{H})\right]_{i,j}\,;\,P^{d}\mathcal{Q}=\mathcal{H}_{i}(k)\right\}.\]
Then \(\left\{\left[ND_{\lambda}^{d}(\mathcal{H})\right]_{i,j,k}\right\}_{i,j,k\geq 0}\) is a class of mutually disjoint families such that
\[ND_{\lambda}^{d}(\mathcal{H})=\bigcup_{i,j,k\geq 0}\left[ND_{\lambda}^{d}(\mathcal{H})\right]_{i,j,k}.\]
We first note that for any \(\mathcal{Q}\in\left[ND_{\lambda}^{1}(\mathcal{H})\right]_{i,j,k}\), \(\mathcal{Q}\) is of the form \(H_{i}(k)\times Q^{2}\) for some dyadic cube \(Q^{2}\) satisfying
\[d(Q^{2})=2^{-i}d(\mathcal{H})\quad\text{and}\quad 2^{j}d(\mathcal{H})\leq\mathrm{dist}(H_{i}(k),Q^{2})<2^{j+1}d(\mathcal{H}). \tag{4.52}\]
Thus, we find that there exists a constant \(c\) depending only on \(n\) such that
\[\left|\left[ND_{\lambda}^{d}(\mathcal{H})\right]_{i,j,k}\right|\leq c2^{n(i+j)}, \tag{4.53}\]
where \(\left|\left[ND_{\lambda}^{d}(\mathcal{H})\right]_{i,j,k}\right|\) denotes the number of cubes in \(\left[ND_{\lambda}^{d}(\mathcal{H})\right]_{i,j,k}\). From the decomposition of \(ND_{\lambda}^{d}(\mathcal{H})\), we now estimate \(K^{d}\) for each \(d=1\) and \(2\). We first note from (2.4) and (4.32) that there is a constant \(c=c(n)\) such that
\[\frac{\mu_{\tau}(\mathcal{Q})}{\mu_{\tau}(P^{d}\mathcal{Q})}\leq c\frac{\tau}{\nu_{0}}\left(\frac{d(\mathcal{Q})}{\mathrm{dist}(Q^{1},Q^{2})}\right)^{n-p\tau}.
\tag{4.54}\] According to (4.54), the fact that \(p\tau\leq\tilde{p}\tau\) and (4.52), we find
\[\begin{split}K^{d}&\leq\frac{c}{\nu_{0}}\sum_{\mathcal{H}\in NH}\sum_{i,j,k}\sum_{\mathcal{Q}\in\left[ND_{\lambda}^{d}(\mathcal{H})\right]_{i,j,k}}\left(\frac{d(\mathcal{Q})}{\mathrm{dist}(Q^{1},Q^{2})}\right)^{n-p\tau+\tilde{p}(s+\alpha)+\tilde{p}\tau}\\ &\qquad\times\left(\int_{P^{d}\mathcal{Q}\cap\{|D^{\tau}d_{\alpha+s}u|>\kappa\lambda\}}|D^{\tau}d_{\alpha+s}u|^{\tilde{p}}\,d\mu_{\tau}\right)\\ &\leq\frac{c}{\nu_{0}}\sum_{\mathcal{H}\in NH}\sum_{i,j\geq 0}\sum_{k=1}^{2^{in}}\sum_{\mathcal{Q}\in\left[ND_{\lambda}^{d}(\mathcal{H})\right]_{i,j,k}}\left(2^{-(i+j)}\right)^{n+\tilde{p}(s+\alpha)}\\ &\qquad\times\left(\int_{\mathcal{H}_{i}(k)\cap\{|D^{\tau}d_{\alpha+s}u|>\kappa\lambda\}}|D^{\tau}d_{\alpha+s}u|^{\tilde{p}}\,d\mu_{\tau}\right).\end{split}\]
Using (4.53) and the fact that \(\mathcal{H}_{i}(k)\) is disjoint for each \(i\) and \(k\), we further estimate \(K^{d}\) as
\[\begin{split}K^{d}&\leq\frac{1}{\nu_{0}}\sum_{\mathcal{H}\in NH}\sum_{i,j\geq 0}\sum_{k=1}^{2^{in}}c\left(2^{-(i+j)}\right)^{\tilde{p}(s+\alpha)}\left(\int_{\mathcal{H}_{i}(k)\cap\{|D^{\tau}d_{\alpha+s}u|>\kappa\lambda\}}|D^{\tau}d_{\alpha+s}u|^{\tilde{p}}\,d\mu_{\tau}\right)\\ &\leq\frac{1}{\nu_{0}}\sum_{\mathcal{H}\in NH}\sum_{i,j\geq 0}c\left(2^{-(i+j)}\right)^{sp}\left(\int_{\mathcal{H}\cap\{|D^{\tau}d_{\alpha+s}u|>\kappa\lambda\}}|D^{\tau}d_{\alpha+s}u|^{\tilde{p}}\,d\mu_{\tau}\right)\\ &\leq c\sum_{\mathcal{H}\in NH}\left(\int_{\mathcal{H}\cap\{|D^{\tau}d_{\alpha+s}u|>\kappa\lambda\}}|D^{\tau}d_{\alpha+s}u|^{\tilde{p}}\,d\mu_{\tau}\right)\end{split}\]
for some constant \(c=c(n,s,p)\). For the last inequality, we have used the fact that
\[\sum_{i,j\geq 0}\left(2^{-(i+j)}\right)^{a}=\left(\frac{1}{1-2^{-a}}\right)^{2}\leq\left(\frac{2^{a}}{a\ln 2}\right)^{2}\quad(a>0), \tag{4.55}\]
which follows from the elementary inequality \(1-e^{-x}\geq xe^{-x}\) applied with \(x=a\ln 2\). Since \(NH\) is a family of disjoint dyadic cubes and \(\mathcal{H}\subset\mathcal{B}_{r_{2}}\), the above estimate yields that
\[K^{d}\leq c\int_{\mathcal{B}_{r_{2}}\cap\{|D^{\tau}d_{\alpha+s}u|>\kappa\lambda\}}|D^{\tau}d_{\alpha+s}u|^{\tilde{p}}\,d\mu_{\tau}. \tag{4.56}\]
Then we note from (4.35), (4.38), (4.49), (4.50) and (4.56) that
\[\begin{split}\sum_{\mathcal{Q}\in ND_{\lambda}^{1}\cap AD_{\lambda}^{2}}\frac{\lambda^{\tilde{p}}}{4}\mu_{\tau}(\mathcal{Q})&\leq 3^{\tilde{p}-1}\sum_{\mathcal{Q}\in ND_{\lambda}^{1}\cap AD_{\lambda}^{2}}\int_{\mathcal{Q}\cap\{|D^{\tau}d_{\alpha+s}u|>\kappa\lambda\}}|D^{\tau}d_{\alpha+s}u|^{\tilde{p}}\,d\mu_{\tau}+3^{\tilde{p}-1}K^{1}\\ &\leq c\int_{\mathcal{B}_{r_{2}}\cap\{|D^{\tau}d_{\alpha+s}u|>\kappa\lambda\}}|D^{\tau}d_{\alpha+s}u|^{\tilde{p}}\,d\mu_{\tau}\end{split}\]
for some constant \(c=c(n,s,p,\tilde{p})\). Likewise, we deduce that there is a constant \(c=c(n,s,p,\tilde{p})\) such that
\[\sum_{\mathcal{Q}\in ND_{\lambda}^{2}\cap AD_{\lambda}^{1}}\frac{\lambda^{\tilde{p}}}{4}\mu_{\tau}(\mathcal{Q})\leq c\int_{\mathcal{B}_{r_{2}}\cap\{|D^{\tau}d_{\alpha+s}u|>\kappa\lambda\}}|D^{\tau}d_{\alpha+s}u|^{\tilde{p}}\,d\mu_{\tau}\]
and
\[\sum_{\mathcal{Q}\in ND_{\lambda}^{1}\cap ND_{\lambda}^{2}}\frac{\lambda^{\tilde{p}}}{4}\mu_{\tau}(\mathcal{Q})\leq c\int_{\mathcal{B}_{r_{2}}\cap\{|D^{\tau}d_{\alpha+s}u|>\kappa\lambda\}}|D^{\tau}d_{\alpha+s}u|^{\tilde{p}}\,d\mu_{\tau}.\]
By combining the above three estimates, we obtain the desired estimate (4.47).
**Step 8. The choice of constants.** We now take
\[\kappa=\frac{\tau^{\frac{1}{p}}}{(\nu_{0}+1)(80n)^{8n}}.\]
Using the fact that \(\tau^{\frac{1}{p}}\leq n\tau^{\frac{1}{\tilde{p}}}\) by (3.13), we have \(\kappa\leq\kappa_{0}\), which is given in (4.17).
We next choose \[c_{d}=\frac{\tau^{\frac{1}{p}}}{\kappa},\quad c_{u}=\frac{\tau^{\frac{1}{p}}}{ c_{d}}\quad\text{and}\quad c_{od}=c_{ad}+c_{nd},\] where \(c_{d}=c_{d}(n,s,p)\) and \(c_{od}=c_{od}(n,s,p,\tilde{p})\). **Step 9. Conclusion of the proof.** Let \(\lambda\geq\lambda_{0}\) given in (4.5). Then \(\lambda\) satisfies (4.10). Therefore, in light of (4.14), (4.21) and (4.28), we find (4.4). In addition, taking into account (4.34), (4.37) and (4.47), we prove (4.6), (4.7) and (4.8). This completes the proof of Lemma 4.1. We finish this section by providing estimates which will be needed for the above comparison lemma. _Remark 7_.: For each diagonal ball \(\mathcal{B}_{\rho_{x_{i}}}(x_{i})\) obtained by Lemma 4.1, we want to show that if \[\alpha+\tau<\frac{s}{p-1}, \tag{4.57}\] then \[\left(\frac{1}{\tau}\mathchoice{{\vbox{\hbox{$-$}}\kern-13.499794pt}}{{ \vbox{\hbox{$-$}}\kern-13.499794pt}}{{\vbox{\hbox{$-$}} \kern-13.499794pt}}{{\vbox{\hbox{$-$}}\kern-13.499794pt}}\!\int_{ \mathcal{B}_{20\rho_{x_{i}}}(x_{i})}|D^{\tau}d_{\alpha+s}u|^{p}\,d\mu_{\tau} \right)^{\frac{1}{p}}+\mathrm{Tail}_{\mathrm{s,p}}\left(\frac{u-(u)_{B_{20 \rho_{x_{i}}}}}{(20\rho_{x_{i}})^{\alpha+\tau+s}};B_{20\rho_{x_{i}}}(x_{i}) \right)\leq\frac{c}{\frac{s}{p-1}-(\alpha+\tau)}\lambda, \tag{4.58}\] \[\left(\frac{1}{\tau}\mathchoice{{\vbox{\hbox{$-$}} \kern-13.499794pt}}{{\vbox{\hbox{$-$}}\kern-13.499794pt}}{{\vbox{ \hbox{$-$}}\kern-13.499794pt}}{{\vbox{\hbox{$-$}} \kern-13.499794pt}}\!\int_{\mathcal{B}_{20\rho_{x_{i}}}(x_{i})}|D^{\tau}d_{ \alpha}f|^{\tilde{q}}\,d\mu_{\tau}\right)^{\frac{1}{q}}+\mathrm{Tail}_{\mathrm{ \frac{1}{p}},\mathrm{p}}\left(\frac{f-(f)_{B_{20\rho_{x_{i}}}}}{(20\rho_{x_{i}}) ^{\alpha+\tau}};B_{20\rho_{x_{i}}}(x_{i})\right)\leq\frac{c}{\frac{s}{p-1}-( \alpha+\tau)}\delta\lambda,\] where \(c=c(n,s,p)\). To this end, we divide the range of \(20\rho_{x_{i}}\) into \[\frac{r_{2}-r_{1}}{20}<20\rho_{x_{i}}\leq 2(r_{2}-r_{1}) \tag{4.59}\] and \[0<20\rho_{x_{i}}\leq\frac{r_{2}-r_{1}}{20}. \tag{4.60}\] (1). Assume (4.59) is true. Using (2.4), Holder's inequality and (4.5), we get that \[\left(\frac{1}{\tau}\mathchoice{{\vbox{\hbox{$-$}}\kern-13.499794pt}}{{ \vbox{\hbox{$-$}}\kern-13.499794pt}}{{\vbox{\hbox{$-$}} \kern-13.499794pt}}{{\vbox{\hbox{$-$}}\kern-13.499794pt}}\!\int_{ \mathcal{B}_{20\rho_{x_{i}}}(x_{i})}|D^{\tau}d_{\alpha+s}u|^{p}\,d\mu_{\tau} \right)^{\frac{1}{p}}\] \[\leq\frac{1}{\tau^{\frac{1}{p}}}\left(\frac{40}{r_{2}-r_{1}} \right)^{2n}\left(\mathchoice{{\vbox{\hbox{$-$}}\kern-13.499794pt}}{{ \vbox{\hbox{$-$}}\kern-13.499794pt}}{{\vbox{\hbox{$-$}} \kern-13.499794pt}}{{\vbox{\hbox{$-$}}\kern-13.499794pt}}\! \int_{\mathcal{B}_{2}}|D^{\tau}d_{\alpha+s}u|^{p}\,d\mu_{\tau}\right)^{\frac{1} {p}}\] \[\leq c\lambda\] for some constant \(c=c(n)\). Similarly, we find \[\left(\frac{1}{\tau}\mathchoice{{\vbox{\hbox{$-$}}\kern-13.499794pt}}{{ \vbox{\hbox{$-$}}\kern-13.499794pt}}{{\vbox{\hbox{$-$}} \kern-13.499794pt}}{{\vbox{\hbox{$-$}}\kern-13.499794pt}}\! 
\int_{\mathcal{B}_{20\rho_{x_{i}}}(x_{i})}|D^{\tau}d_{\alpha}f|^{\tilde{q}}\,d\mu_{\tau}\right)^{\frac{1}{\tilde{q}}}\leq\frac{1}{\tau^{\frac{1}{\tilde{q}}}}\left(\frac{40}{r_{2}-r_{1}}\right)^{2n}\left(\fint_{\mathcal{B}_{2}}|D^{\tau}d_{\alpha}f|^{\tilde{q}}\,d\mu_{\tau}\right)^{\frac{1}{\tilde{q}}}\leq c\delta\lambda,\]
where we have used (2.4), Holder's inequality, (4.5) and the fact that \(\tau^{\frac{1}{p}}\leq n\tau^{\frac{1}{\tilde{q}}}\) by (3.13). On the other hand, in light of (4.2) and the fact that
\[\frac{1}{(20\rho_{x_{i}})^{\alpha+\tau+s}}\leq\left(\frac{20}{r_{2}-r_{1}}\right)^{\alpha+\tau+s}\leq\left(\frac{20}{r_{2}-r_{1}}\right)^{\frac{sp}{p-1}},\]
we estimate the tail term of \(u\) as
\[\begin{split}\operatorname{Tail}_{\mathrm{s,p}}\left(\frac{u-(u)_{B_{20\rho_{x_{i}}}(x_{i})}}{(20\rho_{x_{i}})^{\alpha+\tau+s}};B_{20\rho_{x_{i}}}(x_{i})\right)&\leq\left(\frac{20}{r_{2}-r_{1}}\right)^{\frac{sp}{p-1}}\operatorname{Tail}_{\mathrm{s,p}}\left(u-(u)_{B_{2}};B_{20\rho_{x_{i}}}(x_{i})\right)\\ &\quad+\left(\frac{20}{r_{2}-r_{1}}\right)^{\frac{sp}{p-1}}\operatorname{Tail}_{\mathrm{s,p}}\left((u)_{B_{2}}-(u)_{B_{20\rho_{x_{i}}}(x_{i})};B_{20\rho_{x_{i}}}(x_{i})\right)\\ &=:\left(\frac{20}{r_{2}-r_{1}}\right)^{\frac{sp}{p-1}}T_{1}+\left(\frac{20}{r_{2}-r_{1}}\right)^{\frac{sp}{p-1}}T_{2}.\end{split}\]
By the fact that \((|a|+|b|)^{\frac{1}{p-1}}\leq 2|a|^{\frac{1}{p-1}}+2|b|^{\frac{1}{p-1}}\) and \(|y-x_{i}|\geq\frac{r_{2}-r_{1}}{2}|y|\) for any \(y\in\mathbb{R}^{n}\setminus B_{2}\), we further estimate \(T_{1}\) as
\[\begin{split}T_{1}&\leq 2\left((20\rho_{x_{i}})^{sp}\int_{\mathbb{R}^{n}\setminus B_{2}}\frac{|u(y)-(u)_{B_{2}}|^{p-1}}{|y-x_{i}|^{n+sp}}\,dy\right)^{\frac{1}{p-1}}\\ &\quad+2\left((20\rho_{x_{i}})^{sp}\int_{B_{2}\setminus B_{20\rho_{x_{i}}}(x_{i})}\frac{|u(y)-(u)_{B_{2}}|^{p-1}}{|y-x_{i}|^{n+sp}}\,dy\right)^{\frac{1}{p-1}}\\ &\leq 2\left(2^{n+2sp}\left(\frac{1}{r_{2}-r_{1}}\right)^{n}\int_{\mathbb{R}^{n}\setminus B_{2}}\frac{|u(y)-(u)_{B_{2}}|^{p-1}}{|y|^{n+sp}}\,dy\right)^{\frac{1}{p-1}}\\ &\quad+2\left(\left(\frac{40}{r_{2}-r_{1}}\right)^{n}|B_{1}|\fint_{B_{2}}|u(y)-(u)_{B_{2}}|^{p-1}\,dy\right)^{\frac{1}{p-1}}.\end{split}\]
Applying Holder's inequality and (2.12) to the second term in the last inequality above, we observe that there is a constant \(c=c(n,s,p)\) such that
\[\begin{split}T_{1}&\leq\frac{c}{(r_{2}-r_{1})^{\frac{n}{p-1}}}\operatorname{Tail}_{\mathrm{s,p}}\left(\frac{u-(u)_{B_{2}}}{2^{\alpha+\tau+s}};B_{2}\right)+\frac{c}{(r_{2}-r_{1})^{\frac{n}{p-1}}}\left(\frac{1}{\tau}\fint_{\mathcal{B}_{2}}|D^{\tau}d_{\alpha+s}u|^{p}\,d\mu_{\tau}\right)^{\frac{1}{p}}\\ &\leq\frac{c}{(r_{2}-r_{1})^{\frac{n}{p-1}}}\operatorname{Tail}_{\mathrm{s,p}}\left(\frac{u-(u)_{B_{2}}}{2^{\alpha+\tau+s}};B_{2}\right)+\frac{c}{(r_{2}-r_{1})^{\frac{n}{p-1}}}\left(\frac{1}{\tau}\fint_{\mathcal{B}_{2}}|D^{\tau}d_{\alpha+s}u|^{\tilde{p}}\,d\mu_{\tau}\right)^{\frac{1}{\tilde{p}}}.\end{split}\]
For the last inequality, we have used Holder's inequality.
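The elementary bound \(|y-x_{i}|\geq\frac{r_{2}-r_{1}}{2}|y|\) used for \(T_{1}\) follows from \(x_{i}\in B_{r_{1}}\) and \(r_{2}\leq 2\): for any \(y\in\mathbb{R}^{n}\setminus B_{2}\) we have \(r_{1}\leq\frac{r_{1}}{2}|y|\), and hence
\[|y-x_{i}|\geq|y|-|x_{i}|\geq|y|-\frac{r_{1}}{2}|y|=\frac{2-r_{1}}{2}|y|\geq\frac{r_{2}-r_{1}}{2}|y|.\]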
We next estimate \(T_{2}\) as
\[T_{2}\leq\frac{c}{(r_{2}-r_{1})^{\frac{n}{p-1}}}\left(\fint_{B_{2}}|u(y)-(u)_{B_{2}}|^{p-1}\,dy\right)^{\frac{1}{p-1}}\leq\frac{c}{(r_{2}-r_{1})^{\frac{n}{p-1}}}\left(\frac{1}{\tau}\fint_{\mathcal{B}_{2}}|D^{\tau}d_{\alpha+s}u|^{\tilde{p}}\,d\mu_{\tau}\right)^{\frac{1}{\tilde{p}}},\]
where we have used (2.2), (2.11), Holder's inequality and (2.12). Combine all the estimates \(T_{1}\) and \(T_{2}\) together with (4.5) and the fact that \(\frac{sp}{p-1}+\frac{n}{p-1}\leq 2n\) to see that
\[\left(\frac{1}{\tau}\fint_{\mathcal{B}_{20\rho_{x_{i}}}(x_{i})}|D^{\tau}d_{\alpha+s}u|^{p}\,d\mu_{\tau}\right)^{\frac{1}{p}}+\operatorname{Tail}_{\mathrm{s,p}}\left(\frac{u-(u)_{B_{20\rho_{x_{i}}}(x_{i})}}{(20\rho_{x_{i}})^{\alpha+\tau+s}};B_{20\rho_{x_{i}}}(x_{i})\right)\leq c\lambda \tag{4.61}\]
for some constant \(c=c(n,s,p)\). Likewise, by following the same lines as for the proof of (4.61) with \(u(x)\), \(s\), \(\tilde{p}\), \(sp\) and \(\lambda\) there, replaced by \(f(x)\), \(0\), \(\tilde{q}\), \(s\) and \(\delta\lambda\), respectively, we have
\[\left(\frac{1}{\tau}\fint_{\mathcal{B}_{20\rho_{x_{i}}}(x_{i})}|D^{\tau}d_{\alpha}f|^{\tilde{q}}\,d\mu_{\tau}\right)^{\frac{1}{\tilde{q}}}+\operatorname{Tail}_{\frac{s}{p},p}\left(\frac{f-(f)_{B_{20\rho_{x_{i}}}(x_{i})}}{(20\rho_{x_{i}})^{\alpha+\tau}};B_{20\rho_{x_{i}}}(x_{i})\right)\leq c\delta\lambda. \tag{4.62}\]
Since \(1\leq\frac{1}{\frac{s}{p-1}-(\alpha+\tau)}\), which follows by (4.57), (4.61) and (4.62) imply (4.58).
(2). We now assume (4.60). Then there is a positive integer \(k\in\mathbb{N}\) such that
\[\frac{r_{2}-r_{1}}{20}<2^{k}20\rho_{x_{i}}\leq\frac{r_{2}-r_{1}}{10}. \tag{4.63}\]
Applying Lemma 2.6 with \(\Omega=B_{2}\), \(t=s\), \(\rho=20\rho_{x_{i}}\), \(x_{0}=x_{i}\) and \(i=k\), we have that
\[\begin{split}\operatorname{Tail}_{\mathrm{s,p}}\left(\frac{u-(u)_{B_{20\rho_{x_{i}}}(x_{i})}}{(20\rho_{x_{i}})^{\alpha+\tau+s}};B_{20\rho_{x_{i}}}(x_{i})\right)&\leq c2^{-k\frac{sp}{p-1}}\operatorname{Tail}_{\mathrm{s,p}}\left(\frac{u-(u)_{B_{2^{k}20\rho_{x_{i}}}(x_{i})}}{(20\rho_{x_{i}})^{\alpha+\tau+s}};B_{2^{k}20\rho_{x_{i}}}(x_{i})\right)\\ &\quad+c\sum_{j=1}^{k}2^{-j\left(\frac{sp}{p-1}-(\alpha+\tau+s)\right)}\left(\frac{1}{\tau}\fint_{\mathcal{B}(x_{0},2^{j}20\rho_{x_{i}})}|D^{\tau}d_{\alpha+s}u|^{p}\,d\mu_{\tau}\right)^{\frac{1}{p}},\end{split}\]
where we denote the two terms on the right-hand side by \(T_{3}\) and \(T_{4}\).
We first estimate \(T_{3}\) as
\[T_{3}\leq\operatorname{Tail}_{\mathrm{s,p}}\left(\frac{u-(u)_{B_{2^{k}20\rho_{x_{i}}}(x_{i})}}{(2^{k}20\rho_{x_{i}})^{\alpha+\tau+s}};B_{2^{k}20\rho_{x_{i}}}(x_{i})\right)\leq c\lambda,\]
where we have used (4.61) along with (4.63) and the fact that
\[2^{-k\frac{sp}{p-1}}\leq 2^{-k(\alpha+\tau+s)}.\]
By Holder's inequality and (4.6), we next estimate \(T_{4}\) as
\[T_{4}\leq\sum_{j=1}^{k}\frac{2^{-j\left(\frac{sp}{p-1}-(\alpha+\tau+s)\right)}}{\tau^{\frac{1}{p}}}\left(\fint_{\mathcal{B}(x_{0},2^{j}20\rho_{x_{i}})}|D^{\tau}d_{\alpha+s}u|^{\tilde{p}}\,d\mu_{\tau}\right)^{\frac{1}{\tilde{p}}}\leq c\sum_{j=1}^{k}2^{-j\left(\frac{sp}{p-1}-(\alpha+\tau+s)\right)}\lambda\]
for some constant \(c=c(n,s,p)\). Combining the estimates of \(T_{3}\), \(T_{4}\) and the fact that
\[\left(\frac{1}{\tau}\fint_{\mathcal{B}_{20\rho_{x_{i}}}(x_{i})}|D^{\tau}d_{\alpha+s}u|^{p}\,d\mu_{\tau}\right)^{\frac{1}{p}}\leq\left(\frac{1}{\tau}\fint_{\mathcal{B}_{20\rho_{x_{i}}}(x_{i})}|D^{\tau}d_{\alpha+s}u|^{\tilde{p}}\,d\mu_{\tau}\right)^{\frac{1}{\tilde{p}}}\leq\lambda,\]
which follows from Holder's inequality and (4.6), we have
\[\left(\frac{1}{\tau}\fint_{\mathcal{B}_{20\rho_{x_{i}}}(x_{i})}|D^{\tau}d_{\alpha+s}u|^{p}\,d\mu_{\tau}\right)^{\frac{1}{p}}+\operatorname{Tail}_{\mathrm{s,p}}\left(\frac{u-(u)_{B_{20\rho_{x_{i}}}(x_{i})}}{(20\rho_{x_{i}})^{\alpha+\tau+s}};B_{20\rho_{x_{i}}}(x_{i})\right)\leq c\sum_{j=0}^{k}2^{-j\left(\frac{sp}{p-1}-(\alpha+\tau+s)\right)}\lambda \tag{4.64}\]
for some constant \(c=c(n,s,p)\). We follow the same lines as for proving (4.64) with \(u(x)\), \(s\), \(\tilde{p}\), \(sp\) and \(\lambda\) there, replaced by \(f(x)\), \(0\), \(\tilde{q}\), \(s\) and \(\delta\lambda\), respectively, in order to obtain that
\[\left(\frac{1}{\tau}\fint_{\mathcal{B}_{20\rho_{x_{i}}}(x_{i})}|D^{\tau}d_{\alpha}f|^{\tilde{q}}\,d\mu_{\tau}\right)^{\frac{1}{\tilde{q}}}+\operatorname{Tail}_{\frac{s}{p},p}\left(\frac{f-(f)_{B_{20\rho_{x_{i}}}(x_{i})}}{(20\rho_{x_{i}})^{\alpha+\tau}};B_{20\rho_{x_{i}}}(x_{i})\right)\leq c\sum_{j=0}^{k}2^{-j\left(\frac{s}{p-1}-(\alpha+\tau)\right)}\delta\lambda.\]
Since \(\frac{sp}{p-1}-(\alpha+\tau+s)=\frac{s}{p-1}-(\alpha+\tau)>0\) by (4.57), both geometric sums are bounded by \(\frac{c}{\frac{s}{p-1}-(\alpha+\tau)}\) for some \(c=c(s,p)\), and the last two displays yield (4.58).
## 5. \(L^{q}\)-estimate of \(d_{s}u\)
In the previous section, we proved Lemma 4.1, which constructs coverings of the upper level sets of \(D^{\tau}d_{\alpha+s}u\). We are now going to prove our main Theorem 1.2 using a bootstrap argument. To handle the iterative process, we need to introduce new parameters. We first note that there is a smallest positive integer \(l=l(n,s,p)\) such that
\[n\leq(l+1)p\frac{s}{2}. \tag{5.1}\]
Let us start with \(p_{0}=p\). If \(l=1\), then define
\[\gamma_{0}=2q. \tag{5.2}\]
Otherwise, for any nonnegative integer \(h<l-1\), we inductively define
\[p_{h+1}=\frac{np_{h}}{n-p_{h}\frac{s}{2}}\quad\text{and}\quad\gamma_{h}=\frac{np_{h}}{n-p_{h}s} \tag{5.3}\]
with \(\gamma_{-1}=2q\).
We note from (5.1) and (5.3) that
\[p_{h+1}=\frac{np}{n-(h+1)\frac{sp}{2}}\quad\text{and}\quad\gamma_{h}=\frac{np}{n-(h+2)\frac{sp}{2}}\quad\text{for }h<l-1.\]
Then, we find the smallest nonnegative integer \(l_{q}=l_{q}(n,s,p,q)\) such that
\[q<\gamma_{l_{q}}. \tag{5.4}\]
Let \(u\in W^{s,p}_{\text{loc}}(B_{4})\cap L^{p-1}_{sp}(\mathbb{R}^{n})\) be a weak solution to the localized problem
\[(-\Delta_{p})^{s}_{A}u=(-\Delta_{p})^{\frac{s}{p}}f\quad\text{in }B_{4}. \tag{5.5}\]
Suppose that \(f\in L^{p-1}_{s}(\mathbb{R}^{n})\) satisfies \(d_{0}f\in L^{q}_{\text{loc}}\left(\mathcal{B}_{4};\frac{dx\,dy}{|x-y|^{n+\sigma q}}\right)\) for any
\[q\in(p,\infty)\quad\text{and}\quad\sigma\in\left(0,\min\left\{\frac{s}{p-1},1-s\right\}\right).\]
We now divide this section into two subsections depending on the range of \(\sigma\) below, (5.6) and (5.35).
### Restricted range of fractional differentiability \(\sigma\)
We first show the following lemma, which will be used to run the bootstrap argument.
**Lemma 5.1**.: _Let \(u\) be a weak solution to (5.5) with_
\[\sigma\in\left(0,\left(1-\frac{p}{q}\right)\min\left\{\frac{s}{p-1},1-s\right\}\right) \tag{5.6}\]
_and \(D^{\tau}d_{s}u\in L^{p_{h}}_{\text{loc}}\left(\mathcal{B}_{4}\,;\mu_{\tau}\right)\), where \(h\in\{0,1,\ldots,l_{q}\}\) and \(\tau=\frac{q}{q-p}\sigma\). Then there are a small positive constant \(\delta=\delta(\texttt{data})\) and a positive constant \(c=c(\texttt{data})\) independent of \(h\) such that if \(A\) is \((\delta,2)\)-vanishing in \(B_{4}\times B_{4}\) only at the diagonal, then_
\[\begin{split}\left(\fint_{\mathcal{B}_{1}}|D^{\tau}d_{s}u|^{\hat{q}}\,d\mu_{\tau}\right)^{\frac{1}{\hat{q}}}&\leq c\left(\left(\fint_{\mathcal{B}_{2}}|D^{\tau}d_{s}u|^{p_{h}}\,d\mu_{\tau}\right)^{\frac{1}{p_{h}}}+\mathrm{Tail}_{\mathrm{s,p}}\left(\frac{u-(u)_{B_{2}}}{2^{\tau+s}};B_{2}\right)\right)\\ &\quad+c\left(\left(\fint_{\mathcal{B}_{2}}|D^{\tau}d_{0}f|^{\hat{q}}\,d\mu_{\tau}\right)^{\frac{1}{\hat{q}}}+\mathrm{Tail}_{\frac{s}{p},p}\left(\frac{f-(f)_{B_{2}}}{2^{\tau}};B_{2}\right)\right),\end{split} \tag{5.7}\]
_where the constant \(\hat{q}\) is defined by_
\[\hat{q}=\begin{cases}p_{h+1}&\text{if }\gamma_{h}\leq q,\\ q&\text{if }\gamma_{h}>q.\end{cases} \tag{5.8}\]
Proof.: By Holder's inequality and the fact that \(\tau=\frac{q}{q-p}\sigma\), we first note
\[\left(\fint_{\mathcal{B}_{2}}|D^{\tau}d_{0}f|^{p}\,d\mu_{\tau}\right)^{\frac{1}{p}}\leq\left(\fint_{\mathcal{B}_{2}}|D^{\tau}d_{0}f|^{q}\,d\mu_{\tau}\right)^{\frac{1}{q}}=\left(\frac{1}{\mu_{\tau}\left(\mathcal{B}_{2}\right)}\int_{\mathcal{B}_{2}}|d_{0}f|^{q}\frac{dx\,dy}{|x-y|^{n+\sigma q}}\right)^{\frac{1}{q}}<\infty.\]
Let \(\epsilon\in(0,1)\) be a constant which will be determined later. We then take \(\delta=\delta(n,s,p,\Lambda,\epsilon)\) given in Lemma 3.4. Let \(1\leq r_{1}<r_{2}\leq 2\).
We now apply Lemma 4.1 with \(\alpha=0\), \(\tilde{p}=p_{h}\), \(\tilde{q}=p\) and \(\tilde{\gamma}=\gamma_{h}\) to find families of countable disjoint diagonal balls and off-diagonal cubes, \(\{\mathcal{B}_{\rho_{x_{i}}}\left(x_{i}\right)\}_{i\in\mathbb{N}}\) and \(\{\mathcal{Q}\}_{\mathcal{Q}\in\mathcal{A}}\), such that \[\{(x,y)\in\mathcal{B}_{r_{1}}\,;\,|D^{\tau}d_{s}u|(x,y)>\lambda\}\subset\left( \bigcup_{i\in\mathbb{N}}\mathcal{B}_{\tilde{\gamma}\rho_{x_{i}}}\left(x_{i} \right)\right)\bigcup\left(\bigcup_{\mathcal{Q}\in\mathcal{A}}\mathcal{Q}\right)\] for any fixed \(\lambda\geq\lambda_{0}\), where \(\lambda_{0}\) is given in (4.5) with \(\alpha=0\), \(\tilde{p}=p_{h}\) and \(\tilde{q}=p\). Furthermore, Lemma 4.1 yields (4.6), (4.7) and (4.8) with \(\alpha=0\), \(\tilde{p}=p_{h}\), \(\tilde{q}=p\) and \(\tilde{\gamma}=\gamma_{h}\). For \(L\geq\lambda_{0}\), we define a function \(\phi_{L}(r):[1,2]\to\mathbb{R}\) by \[\phi_{L}(r)=\left(\fint_{\mathcal{B}_{r}}|D^{r}d_{s}u|_{L}^{\tilde{q}}\ d\mu_{ \tau}\right)^{\frac{1}{\tilde{q}}},\] where \(|D^{\tau}d_{s}u|_{L}=\max\{|D^{\tau}d_{s}u|\,,L\}\). We now want to show that if \(L\geq\lambda_{0}\), then \[\phi_{L}(r_{1}) \leq\frac{1}{2}\phi_{L}(r_{2})+\frac{c}{(r_{2}-r_{1})^{2n}}\left( \left(\fint_{\mathcal{B}_{2}}|D^{r}d_{s}u|^{p_{h}}\,d\mu_{\tau}\right)^{\frac {1}{\tilde{q}}}+\mathrm{Tail}_{\mathrm{s},p}\left(\frac{u-(u)_{B_{2}}}{2^{\tau +s}};B_{2}\right)\right) \tag{5.9}\] \[\quad+\frac{c}{(r_{2}-r_{1})^{2n}}\left(\left(\fint_{\mathcal{B} _{2}}|D^{r}d_{0}f|^{\tilde{q}}\,d\mu_{\tau}\right)^{\frac{1}{\tilde{q}}}+ \mathrm{Tail}_{\frac{p}{p},p}\left(\frac{f-(f)_{B_{2}}}{2^{\tau}};B_{2}\right)\right)\] for some constant \(c=c(\text{data})\) independent of \(L\). By (4.58) with \(\tilde{p}=p_{h},\alpha=0\) and \(\tilde{q}=p\), and the fact that \(\tau<\frac{s}{p-1}\), we deduce that \[\frac{1}{\tau}\fint_{\mathcal{B}_{20\rho_{x_{i}}}(x_{i})}|D^{r}d_ {s}u|^{p}\,d\mu_{\tau}+\mathrm{Tail}_{\mathrm{s},p}\left(\frac{u-(u)_{B_{20\rho _{x_{i}}}}}{(20\rho_{x_{i}})^{\alpha+\tau+s}};B_{20\rho_{x_{i}}}(x_{i})\right) ^{p}\leq(c_{1}\lambda)^{p},\] \[\frac{1}{\tau}\fint_{\mathcal{B}_{20\rho_{x_{i}}}(x_{i})}|D^{r}d _{\alpha}f|^{p}\,d\mu_{\tau}+\mathrm{Tail}_{\frac{s}{p},p}\left(\frac{f-(f)_{B _{20\rho_{x_{i}}}}}{(20\rho_{x_{i}})^{\alpha+\tau+s}};B_{20\rho_{x_{i}}}(x_{i} )\right)^{p}\leq(\delta c_{1}\lambda)^{p}\] for some constant \(c_{1}=c_{1}(n,s,p,q,\sigma)\). By the above choice of \(\delta=\delta(n,s,p,\Lambda,\epsilon)\), we apply Lemma 3.3 with \(\Omega\) and \(\lambda\) there, replaced by \(B_{4}\) and \(c_{1}\lambda\), respectively, to find a weak solution \(v\) of (3.26) satisfying \[\frac{1}{\tau}\fint_{\mathcal{B}_{5\rho_{x_{i}}}(x_{i})}|D^{\tau}d_{s}(v-u)|^{ p}\,d\mu_{\tau}\leq(\epsilon c_{1}\lambda)^{p}\quad\text{and}\quad\|D^{r}d_{s}v\|_{L ^{\infty}(\mathcal{B}_{5\rho_{x_{i}}}(x_{i}))}\leq c_{0}c_{1}\lambda \tag{5.10}\] for some constant \(c_{0}=c_{0}(\text{data})\), as \(\tau\) depends only on \(p,q\) and \(\sigma\). 
From Fubini's theorem, we have that \[\int_{\mathcal{B}_{r_{1}}}|D^{\tau}d_{s}u|_{L}^{\tilde{q}}\,d\mu_{\tau} =\int_{0}^{\infty}\hat{q}\lambda^{\tilde{q}-1}\mu_{\tau}\left\{(x, y)\in\mathcal{B}_{r_{1}}\,;\,|D^{\tau}d_{s}u|_{L}(x,y)>\lambda\right\}\,d\lambda\] \[=\int_{0}^{M\lambda_{0}}\hat{q}\lambda^{\tilde{q}-1}\mu_{\tau} \left\{(x,y)\in\mathcal{B}_{r_{1}}\,;\,|D^{\tau}d_{s}u|(x,y)>\lambda\right\}\,d\lambda\] \[\quad+\int_{M\lambda_{0}}^{L}\hat{q}\lambda^{\tilde{q}-1}\mu_{ \tau}\left\{(x,y)\in\mathcal{B}_{r_{1}}\,;\,|D^{\tau}d_{s}u|(x,y)>\lambda \right\}\,d\lambda\coloneqq I+J,\] where \(M>1\) will be selected later to control the off-diagonal upper level set of \(D^{\tau}d_{s}u\) and \(L>M\lambda_{0}\). We first observe that \[I\leq(M\lambda_{0})^{\tilde{q}}\mu_{\tau}(\mathcal{B}_{r_{1}}).\] We next estimate \(J\) as \[J =\int_{\lambda_{0}}^{LM^{-1}}\hat{q}M^{\tilde{q}}\lambda^{\tilde{ q}-1}\mu_{\tau}\left(\{(x,y)\in\mathcal{B}_{r_{1}}\,;\,|D^{\tau}d_{s}u(x,y)|>M \lambda\}\right)\,d\lambda \tag{5.11}\] \[\leq\int_{\lambda_{0}}^{LM^{-1}}\hat{q}M^{\tilde{q}}\lambda^{ \tilde{q}-1}\mu_{\tau}\left(\left\{(x,y)\in\left(\bigcup_{i}\mathcal{B}_{5\rho _{x_{i}}}(x_{i})\right)\bigcup\left(\bigcup_{\mathcal{Q}\in\tilde{\mathcal{A}}} \mathcal{Q}\right)\,;\,|D^{\tau}d_{s}u(x,y)|>M\lambda\right\}\right)\,d\lambda\] \[\leq\int_{\lambda_{0}}^{LM^{-1}}\hat{q}M^{\tilde{q}}\lambda^{ \tilde{q}-1}\mu_{\tau}\left(\left\{(x,y)\in\left(\bigcup_{i}\mathcal{B}_{5\rho _{x_{i}}}(x_{i})\right)\,;\,|D^{\tau}d_{s}u(x,y)|>M\lambda\right\}\right)\,d\lambda\] \[\quad+\int_{\lambda_{0}}^{LM^{-1}}\hat{q}M^{\tilde{q}}\lambda^{ \tilde{q}-1}\mu_{\tau}\left(\left\{(x,y)\in\left(\bigcup_{\mathcal{Q}\in \tilde{\mathcal{A}}}\mathcal{Q}\right)\,;\,|D^{\tau}d_{s}u(x,y)|>M\lambda \right\}\right)\,d\lambda\eqqcolon J_{1}+J_{2},\] where we have used the change of variables for the first equality and the fact that \[\{(x,y)\in\mathcal{B}_{r_{1}}\,;\,|D^{\tau}d_{s}u(x,y)|>M\lambda\} \subset\{(x,y)\in\mathcal{B}_{r_{1}}\,;\,|D^{\tau}d_{s}u(x,y)|>\lambda\} \tag{5.12}\] \[\subset\left(\bigcup_{i}\mathcal{B}_{5\rho_{x_{i}}}(x_{i})\right) \bigcup\left(\bigcup_{\mathcal{Q}\in\tilde{\mathcal{A}}}\mathcal{Q}\right)\] for the second inequality. 
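The layer-cake formula used in the first equality above is itself an instance of Fubini's theorem: writing \(|D^{\tau}d_{s}u|_{L}^{\hat{q}}=\hat{q}\int_{0}^{|D^{\tau}d_{s}u|_{L}}\lambda^{\hat{q}-1}\,d\lambda\) and exchanging the order of integration, we obtain
\[\int_{\mathcal{B}_{r_{1}}}|D^{\tau}d_{s}u|_{L}^{\hat{q}}\,d\mu_{\tau}=\hat{q}\int_{0}^{\infty}\lambda^{\hat{q}-1}\mu_{\tau}\left\{(x,y)\in\mathcal{B}_{r_{1}}\,;\,|D^{\tau}d_{s}u|_{L}(x,y)>\lambda\right\}\,d\lambda.\]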
By taking \(M>c_{0}c_{1}\), we find that there is a constant \(c=c(n,s,p,q,\sigma)\) such that
\[\begin{split}&\int_{\lambda_{0}}^{LM^{-1}}M^{\hat{q}}\lambda^{\hat{q}-1}\mu_{\tau}\left(\left\{(x,y)\in\mathcal{B}_{5\rho_{x_{i}}}\left(x_{i}\right)\,;\,|D^{\tau}d_{s}u(x,y)|>M\lambda\right\}\right)\,d\lambda\\ &\leq\int_{\lambda_{0}}^{LM^{-1}}M^{\hat{q}}\lambda^{\hat{q}-1}\mu_{\tau}\left(\left\{(x,y)\in\mathcal{B}_{5\rho_{x_{i}}}\left(x_{i}\right)\,;\,|D^{\tau}d_{s}(u-v)(x,y)|>M\lambda\right\}\right)\,d\lambda\\ &\quad+\int_{\lambda_{0}}^{LM^{-1}}M^{\hat{q}}\lambda^{\hat{q}-1}\mu_{\tau}\left(\left\{(x,y)\in\mathcal{B}_{5\rho_{x_{i}}}\left(x_{i}\right)\,;\,|D^{\tau}d_{s}v(x,y)|>M\lambda\right\}\right)\,d\lambda\\ &=\int_{\lambda_{0}}^{LM^{-1}}M^{\hat{q}}\lambda^{\hat{q}-1}\mu_{\tau}\left(\left\{(x,y)\in\mathcal{B}_{5\rho_{x_{i}}}\left(x_{i}\right)\,;\,|D^{\tau}d_{s}(u-v)(x,y)|>M\lambda\right\}\right)\,d\lambda\\ &\leq\int_{\lambda_{0}}^{LM^{-1}}M^{\hat{q}}\lambda^{\hat{q}-1}\int_{\mathcal{B}_{5\rho_{x_{i}}}\left(x_{i}\right)}\frac{|D^{\tau}d_{s}(u-v)(x,y)|^{p}}{(M\lambda)^{p}}\,d\mu_{\tau}\,d\lambda\\ &\leq cM^{\hat{q}-p}\epsilon^{p}\int_{\lambda_{0}}^{LM^{-1}}\lambda^{\hat{q}-1}\mu_{\tau}\left(\mathcal{B}_{5\rho_{x_{i}}}\left(x_{i}\right)\right)\,d\lambda\\ &\leq cM^{\hat{q}-p}\epsilon^{p}\int_{\lambda_{0}}^{LM^{-1}}\lambda^{\hat{q}-1}\mu_{\tau}\left(\mathcal{B}_{\rho_{x_{i}}}\left(x_{i}\right)\right)\,d\lambda,\end{split} \tag{5.12}\]
where we have used the second inequality in (5.10) for the equality, the weak 1-1 estimate for the subsequent inequality, the first inequality in (5.10) for the next one, and the doubling property (2.5) for the last inequality. We next note from (4.6) with \(\alpha=0\), \(\tilde{p}=p_{h}\) and \(\tilde{q}=p\) that either
\[\frac{\lambda}{2c}\leq\left(\fint_{\mathcal{B}_{\rho_{x_{i}}}\left(x_{i}\right)}|D^{\tau}d_{s}u|^{p_{h}}\,d\mu_{\tau}\right)^{\frac{1}{p_{h}}}\]
or
\[\frac{\lambda}{2c}\leq\frac{1}{\delta}\left(\fint_{\mathcal{B}_{\rho_{x_{i}}}\left(x_{i}\right)}|D^{\tau}d_{0}f|^{p}\,d\mu_{\tau}\right)^{\frac{1}{p}}\]
holds for some constant \(c=c(n,s,p,q,\sigma)\). Considering the above two alternatives, we observe that
\[\begin{split}\mu_{\tau}\left(\mathcal{B}_{\rho_{x_{i}}}(x_{i})\right)&\leq\left(\frac{4c}{\lambda}\right)^{p_{h}}\int_{\mathcal{B}_{\rho_{x_{i}}}\left(x_{i}\right)\cap\{|D^{\tau}d_{s}u|>a\lambda\}}|D^{\tau}d_{s}u|^{p_{h}}\,d\mu_{\tau}\\ &\quad+\left(\frac{4c}{\lambda}\right)^{p}\int_{\mathcal{B}_{\rho_{x_{i}}}\left(x_{i}\right)\cap\{|D^{\tau}d_{0}f|>b\lambda\}}\frac{1}{\delta^{p}}|D^{\tau}d_{0}f|^{p}\,d\mu_{\tau},\end{split} \tag{5.13}\]
where we have chosen \(a=\frac{1}{4c}\) and \(b=\frac{\delta}{4c}\).
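Let us indicate how the choice \(a=\frac{1}{4c}\) produces the constant in (5.13). If, say, the first alternative holds, then splitting the integral at height \(a\lambda\) gives
\[\left(\frac{\lambda}{2c}\right)^{p_{h}}\mu_{\tau}\left(\mathcal{B}_{\rho_{x_{i}}}(x_{i})\right)\leq(a\lambda)^{p_{h}}\mu_{\tau}\left(\mathcal{B}_{\rho_{x_{i}}}(x_{i})\right)+\int_{\mathcal{B}_{\rho_{x_{i}}}(x_{i})\cap\{|D^{\tau}d_{s}u|>a\lambda\}}|D^{\tau}d_{s}u|^{p_{h}}\,d\mu_{\tau},\]
and since \((2ca)^{p_{h}}=2^{-p_{h}}\leq\frac{1}{2}\), the first term on the right-hand side can be absorbed into the left-hand side; using \(2\,(2c)^{p_{h}}\leq(4c)^{p_{h}}\) then yields the first term in (5.13). The second alternative is treated in the same way with \(b=\frac{\delta}{4c}\).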
Using (5.12), (5.13) and the fact that \(\left\{\mathcal{B}_{\rho_{x_{i}}}\left(x_{i}\right)\right\}_{i}\) is a family of disjoint sets contained in \(\mathcal{B}_{r_{2}}\), we have
\[\begin{split}J_{1}&\leq\sum_{i}cM^{\hat{q}-p}\epsilon^{p}\int_{\lambda_{0}}^{LM^{-1}}\lambda^{\hat{q}-1-p_{h}}\int_{\mathcal{B}_{\rho_{x_{i}}}\left(x_{i}\right)\cap\{|D^{\tau}d_{s}u|>a\lambda\}}|D^{\tau}d_{s}u|^{p_{h}}\,d\mu_{\tau}\,d\lambda\\ &\quad+\sum_{i}cM^{\hat{q}-p}\epsilon^{p}\int_{\lambda_{0}}^{LM^{-1}}\lambda^{\hat{q}-1-p}\int_{\mathcal{B}_{\rho_{x_{i}}}\left(x_{i}\right)\cap\{|D^{\tau}d_{0}f|>b\lambda\}}\frac{1}{\delta^{p}}|D^{\tau}d_{0}f|^{p}\,d\mu_{\tau}\,d\lambda\\ &\leq cM^{\hat{q}-p}\epsilon^{p}\int_{\lambda_{0}}^{\infty}\lambda^{\hat{q}-p_{h}-1}\int_{\mathcal{B}_{r_{2}}\cap\{|D^{\tau}d_{s}u|_{LM^{-1}}>a\lambda\}}|D^{\tau}d_{s}u|^{p_{h}}\,d\mu_{\tau}\,d\lambda\\ &\quad+cM^{\hat{q}-p}\epsilon^{p}\int_{\lambda_{0}}^{\infty}\lambda^{\hat{q}-p-1}\int_{\mathcal{B}_{r_{2}}\cap\{|D^{\tau}d_{0}f|>b\lambda\}}\frac{1}{\delta^{p}}|D^{\tau}d_{0}f|^{p}\,d\mu_{\tau}\,d\lambda\end{split}\]
for some constant \(c=c(n,s,p,q,\sigma)\). Apply Fubini's theorem to the last inequality in order to discover that
\[J_{1}\leq cM^{\hat{q}-p}\epsilon^{p}\int_{\mathcal{B}_{r_{2}}}|D^{\tau}d_{s}u|_{LM^{-1}}^{\hat{q}-p_{h}}|D^{\tau}d_{s}u|^{p_{h}}\,d\mu_{\tau}+cM^{\hat{q}-p}\epsilon^{p}\int_{\mathcal{B}_{r_{2}}}\frac{1}{\delta^{\hat{q}}}|D^{\tau}d_{0}f|^{\hat{q}}\,d\mu_{\tau}. \tag{5.14}\]
We now estimate the remaining term \(J_{2}\). We first observe that there are constants \(c=c(n,s,p,q,\sigma)\) and \(c_{u}=c_{u}(n,s,p,q,\sigma)\) such that
\[\begin{split}J_{2}&\leq\sum_{\mathcal{Q}\in\mathcal{A}}\int_{\lambda_{0}}^{LM^{-1}}\hat{q}M^{\hat{q}}\lambda^{\hat{q}-1}\mu_{\tau}\left(\{(x,y)\in\mathcal{Q}\,;\,|D^{\tau}d_{s}u(x,y)|>M\lambda\}\right)\,d\lambda\\ &\leq\sum_{\mathcal{Q}\in\mathcal{A}}\int_{\lambda_{0}}^{LM^{-1}}\hat{q}M^{\hat{q}}\lambda^{\hat{q}-1}\left(\int_{\mathcal{Q}\cap\{|D^{\tau}d_{s}u|>M\lambda\}}\left(\frac{|D^{\tau}d_{s}u|}{M\lambda}\right)^{\gamma_{h}}\,d\mu_{\tau}\right)\,d\lambda\\ &\leq\sum_{\mathcal{Q}\in\mathcal{A}}\int_{\lambda_{0}}^{LM^{-1}}cM^{\hat{q}-\gamma_{h}}\lambda^{\hat{q}-1}\mu_{\tau}\left(\mathcal{Q}\right)\,d\lambda\\ &\leq\int_{\lambda_{0}}^{LM^{-1}}cM^{\hat{q}-\gamma_{h}}\lambda^{\hat{q}-1-p_{h}}\int_{\mathcal{B}_{r_{2}}\cap\{|D^{\tau}d_{s}u|>c_{u}\lambda\}}|D^{\tau}d_{s}u|^{p_{h}}\,d\mu_{\tau}\,d\lambda,\end{split} \tag{5.15}\]
where we have used the weak 1-1 estimate, (4.7) and (4.8) with \(\alpha=0\), \(\tilde{p}=p_{h}\) and \(\tilde{\gamma}=\gamma_{h}\). Since the value of \(\min\{\gamma_{h}-\hat{q}\,;\,h=0,1,\ldots,l_{q}\}\) is positive and depends only on \(n,s,p,q\) and \(\sigma\), Fubini's theorem yields that
\[J_{2}\leq cM^{\hat{q}-\gamma_{h}}\int_{\mathcal{B}_{r_{2}}}|D^{\tau}d_{s}u|_{LM^{-1}}^{\hat{q}-p_{h}}\,|D^{\tau}d_{s}u|^{p_{h}}\,d\mu_{\tau}\leq\frac{1}{2^{2n+2q}}\int_{\mathcal{B}_{r_{2}}}|D^{\tau}d_{s}u|_{LM^{-1}}^{\hat{q}}\,d\mu_{\tau} \tag{5.16}\]
by taking \(cM^{\hat{q}-\gamma_{h}}\leq\frac{1}{2^{2n+2q}}\) and \(M>c_{0}c_{1}\). Since \(M\) depends only on data, we now choose \(\epsilon=\epsilon(\texttt{data})\) sufficiently small so that (5.14) becomes
\[J_{1}\leq\frac{1}{2^{2n+2q}}\int_{\mathcal{B}_{r_{2}}}|D^{\tau}d_{s}u|_{LM^{-1}}^{\hat{q}}\,d\mu_{\tau}+c\int_{\mathcal{B}_{r_{2}}}|D^{\tau}d_{0}f|^{\hat{q}}\,d\mu_{\tau}\]
for some constant \(c=c(\texttt{data})\).
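We remark that the absorption in (5.16) is possible because \(\hat{q}<\gamma_{h}\) in both cases of (5.8): if \(\gamma_{h}\leq q\), then \(\hat{q}=p_{h+1}\) and, comparing denominators in (5.3) (where \(n-p_{h}s>0\)),
\[p_{h+1}=\frac{np_{h}}{n-p_{h}\frac{s}{2}}<\frac{np_{h}}{n-p_{h}s}=\gamma_{h},\]
while if \(\gamma_{h}>q\), then \(\hat{q}=q<\gamma_{h}\). Hence \(M\) with \(cM^{\hat{q}-\gamma_{h}}\leq\frac{1}{2^{2n+2q}}\) can indeed be chosen large, depending only on data.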
Combine all the estimates \(I\), \(J_{1}\) and \(J_{2}\) to derive that for any \(L>M\lambda_{0}\),
\[\begin{split}\phi_{LM^{-1}}(r_{1})&\leq\frac{1}{2}\phi_{LM^{-1}}(r_{2})+\frac{c}{(r_{2}-r_{1})^{2n}}\left(\left(\fint_{\mathcal{B}_{2}}|D^{\tau}d_{s}u|^{p_{h}}\,d\mu_{\tau}\right)^{\frac{1}{p_{h}}}+\mathrm{Tail}_{\mathrm{s,p}}\left(\frac{u-(u)_{B_{2}}}{2^{\tau+s}};B_{2}\right)\right)\\ &\quad+\frac{c}{(r_{2}-r_{1})^{2n}}\left(\left(\fint_{\mathcal{B}_{2}}|D^{\tau}d_{0}f|^{\hat{q}}\,d\mu_{\tau}\right)^{\frac{1}{\hat{q}}}+\mathrm{Tail}_{\frac{s}{p},p}\left(\frac{f-(f)_{B_{2}}}{2^{\tau}};B_{2}\right)\right)\end{split}\]
for some constant \(c=c(\texttt{data})\) independent of \(L\); renaming \(LM^{-1}\) as \(L\), this is precisely (5.9). Applying the standard iteration lemma to (5.9) in order to absorb the term \(\frac{1}{2}\phi_{L}(r_{2})\), we deduce that \(\phi_{L}(1)\) is bounded by the right-hand side of (5.9) without the first term, and letting \(L\to\infty\) then yields (5.7). This completes the proof of Lemma 5.1.
We next turn to the local estimate (5.18), whose proof occupies the remainder of this subsection. Fix \(x_{0}\in\Omega\) and \(r\in(0,R]\) with \(B_{2r}(x_{0})\Subset\Omega\), suppose that the quantities appearing in (5.19) are finite, and set
Likewise, (2.10) with \(u(x)=f(x)\), \(\rho=\frac{r}{4}\), \(R=2r\), \(s=\frac{s}{p}\), \(\alpha=0\) and \(t=0\), together with (5.19), yields that
\[\begin{split}\mathrm{Tail}_{\frac{s}{p},p}\left(\frac{f-(f)_{B_{\frac{r}{4}}(y_{0})}}{\left(\frac{r}{4}\right)^{\tau}};B_{\frac{r}{4}}(y_{0})\right)&\leq c\,\mathrm{Tail}_{\frac{s}{p},p}\left(\frac{f-(f)_{B_{2r}(x_{0})}}{(2r)^{\tau}};B_{2r}(x_{0})\right)+c\left(\fint_{\mathcal{B}_{2r}(x_{0})}|D^{\tau}d_{0}f|^{p}\,d\mu_{\tau}\right)^{\frac{1}{p}}\\&\leq c\,\mathrm{Tail}_{\frac{s}{p},p}\left(\frac{f-(f)_{B_{2r}(x_{0})}}{(2r)^{\tau}};B_{2r}(x_{0})\right)+c\left(\fint_{\mathcal{B}_{2r}(x_{0})}|D^{\tau}d_{0}f|^{\hat{q}}\,d\mu_{\tau}\right)^{\frac{1}{\hat{q}}},\end{split}\tag{5.23}\]
where we have used Holder's inequality for the last inequality. By inserting (5.22) and (5.23) into (5.21) and using the fact that \(\mathcal{B}_{\frac{r}{8}}(y_{0})\subset\mathcal{B}_{2r}(x_{0})\), we have
\[\begin{split}\left(\fint_{\mathcal{B}_{\frac{r}{8}}(y_{0})}|D^{\tau}d_{s}u|^{\hat{q}}\,d\mu_{\tau}\right)^{\frac{1}{\hat{q}}}&\leq c\left(\left(\fint_{\mathcal{B}_{2r}(x_{0})}|D^{\tau}d_{s}u|^{p}\,d\mu_{\tau}\right)^{\frac{1}{p}}+\mathrm{Tail}_{s,p}\left(\frac{u-(u)_{B_{2r}(x_{0})}}{(2r)^{\tau+s}};B_{2r}(x_{0})\right)\right)\\&\quad+c\left(\left(\fint_{\mathcal{B}_{2r}(x_{0})}|D^{\tau}d_{0}f|^{\hat{q}}\,d\mu_{\tau}\right)^{\frac{1}{\hat{q}}}+\mathrm{Tail}_{\frac{s}{p},p}\left(\frac{f-(f)_{B_{2r}(x_{0})}}{(2r)^{\tau}};B_{2r}(x_{0})\right)\right).\end{split}\tag{5.24}\]
On the other hand, using Holder's inequality along with the fact that \(\hat{q}\leq\gamma_{0}\) by (5.2), (5.3) and (5.20), and then applying Lemma 4.2 with \(\tilde{\gamma}=\gamma_{0}\), \(\alpha=0\) and \(\tilde{p}=p\), we find that there is a constant \(c=c(n,s,p,q,\sigma)\) such that for any \(\mathcal{Q}\equiv Q_{\frac{r}{8\sqrt{n}}}(z_{1})\times Q_{\frac{r}{8\sqrt{n}}}(z_{2})\Subset\Omega\times\Omega\) satisfying (4.30),
\[\begin{split}\left(\fint_{\mathcal{Q}}|D^{\tau}d_{s}u|^{\hat{q}}\,d\mu_{\tau}\right)^{\frac{1}{\hat{q}}}&\leq\left(\fint_{\mathcal{Q}}|D^{\tau}d_{s}u|^{\gamma_{0}}\,d\mu_{\tau}\right)^{\frac{1}{\gamma_{0}}}\\&\leq c\left(\fint_{\mathcal{Q}}|D^{\tau}d_{s}u|^{p}\,d\mu_{\tau}\right)^{\frac{1}{p}}+c\sum_{d=1}^{2}\left(\frac{1}{\tau}\fint_{P^{d}\mathcal{Q}}|D^{\tau}d_{s}u|^{p}\,d\mu_{\tau}\right)^{\frac{1}{p}}.\end{split}\tag{5.25}\]
Let us assume that \(z_{1},z_{2}\in B_{r}(x_{0})\). From (4.32) together with the observation that
\[\frac{r}{8\sqrt{n}}=d(\mathcal{Q})<|x-y|<r\quad\text{for any }(x,y)\in\mathcal{Q},\]
we have
\[\frac{r^{n+p\tau}}{c}\leq\mu_{\tau}(\mathcal{Q})\leq cr^{n+p\tau}\tag{5.26}\]
for some constant \(c=c(n,s,p,q,\sigma)\). Using (5.19), (5.26) and the fact that \(\mathcal{Q},P^{1}\mathcal{Q},P^{2}\mathcal{Q}\subset\mathcal{B}_{2r}(x_{0})\), we further estimate both sides of (5.25) to see that
\[\left(\frac{1}{r^{n+p\tau}}\int_{\mathcal{Q}}|D^{\tau}d_{s}u|^{\hat{q}}\,d\mu_{\tau}\right)^{\frac{1}{\hat{q}}}\leq c\left(\fint_{\mathcal{B}_{2r}(x_{0})}|D^{\tau}d_{s}u|^{p}\,d\mu_{\tau}\right)^{\frac{1}{p}}\tag{5.27}\]
for some constant \(c=c(\mathsf{data})\). Since we can cover \(\mathcal{B}_{r}(x_{0})\) with finitely many diagonal balls \(\mathcal{B}_{\frac{r}{8\sqrt{n}}}\left(y_{i}\right)\) and off-diagonal cubes \(\mathcal{Q}_{\frac{r}{8\sqrt{n}}}\left(z_{1,i},z_{2,i}\right)\) satisfying (4.30) for some \(y_{i}\), \(z_{1,i}\), \(z_{2,i}\in B_{r}(x_{0})\), the standard covering argument along with (5.19), (5.24) and (5.27) leads to (5.18).
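We briefly indicate this covering step, taking for granted that the number of covering pieces is bounded by a dimensional constant and that \(\mu_{\tau}(\mathcal{B}_{r}(x_{0}))\) is comparable to \(r^{n+p\tau}\):
\[\int_{\mathcal{B}_{r}(x_{0})}|D^{\tau}d_{s}u|^{\hat{q}}\,d\mu_{\tau}\leq\sum_{i}\int_{\mathcal{B}_{\frac{r}{8\sqrt{n}}}(y_{i})}|D^{\tau}d_{s}u|^{\hat{q}}\,d\mu_{\tau}+\sum_{i}\int_{\mathcal{Q}_{\frac{r}{8\sqrt{n}}}(z_{1,i},z_{2,i})}|D^{\tau}d_{s}u|^{\hat{q}}\,d\mu_{\tau},\]
and each summand is controlled by \(r^{n+p\tau}\) times the \(\hat{q}\)-th power of the right-hand side of (5.18), by (5.24) and (5.27); dividing by \(\mu_{\tau}(\mathcal{B}_{r}(x_{0}))\) then yields (5.18).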
Since we have arbitrarily chosen \(x_{0},z_{1},z_{2}\in\Omega\) and \(r\in(0,R]\) satisfying \(B_{2r}(x_{0})\Subset\Omega\) and \(\mathcal{Q}_{\frac{r}{8\sqrt{n}}}\left(z_{1},z_{2}\right)\Subset\Omega\times\Omega\) with (4.30), it follows from (5.18) and (5.25) that \(D^{\tau}d_{s}u\in L^{\hat{q}}_{\mathrm{loc}}\left(\Omega\times\Omega;\mu_{\tau}\right)\). If \(l_{q}=0\), where \(l_{q}\) is given in (5.4), then \(\hat{q}=q\). Let us assume that \(l_{q}>0\). Then the fact that \(\hat{q}=p_{1}\) yields \(D^{\tau}d_{s}\tilde{u}\in L^{p_{1}}_{\mathrm{loc}}\left(\mathcal{B}_{4}\,;\mu_{\tau}\right)\). Thus, we apply Lemma 5.1 with \(u=\tilde{u}\), \(f=\tilde{f}\), \(A=\tilde{A}\) and \(h=1\), and follow the same arguments as in (5.21) to find that
\[\begin{split}\left(\fint_{\mathcal{B}_{\frac{r}{8}}\left(y_{0}\right)}|D^{\tau}d_{s}u|^{\hat{q}_{1}}\,d\mu_{\tau}\right)^{\frac{1}{\hat{q}_{1}}}&\leq c\left(\left(\fint_{\mathcal{B}_{\frac{r}{4}}\left(y_{0}\right)}|D^{\tau}d_{s}u|^{p_{1}}\,d\mu_{\tau}\right)^{\frac{1}{p_{1}}}+\mathrm{Tail}_{s,p}\left(\frac{u-(u)_{B_{\frac{r}{4}}(y_{0})}}{\left(\frac{r}{4}\right)^{\tau+s}};B_{\frac{r}{4}}\left(y_{0}\right)\right)\right)\\&\quad+c\left(\left(\fint_{\mathcal{B}_{\frac{r}{4}}\left(y_{0}\right)}|D^{\tau}d_{0}f|^{\hat{q}_{1}}\,d\mu_{\tau}\right)^{\frac{1}{\hat{q}_{1}}}+\mathrm{Tail}_{\frac{s}{p},p}\left(\frac{f-(f)_{B_{\frac{r}{4}}(y_{0})}}{\left(\frac{r}{4}\right)^{\tau}};B_{\frac{r}{4}}\left(y_{0}\right)\right)\right),\end{split}\tag{5.28}\]
where
\[\hat{q}_{1}=\begin{cases}p_{2}&\text{if }\gamma_{1}\leq q,\\ q&\text{if }\gamma_{1}>q.\end{cases}\tag{5.29}\]
Inserting (2.10) with \(\rho=\frac{r}{4}\), \(R=2r\), \(\alpha=0\) and \(t=s\) into (5.18) with \(x_{0}\) and \(r\) there, replaced by \(y_{0}\) and \(\frac{r}{4}\), respectively, and then using the fact that \(\mathcal{B}_{\frac{r}{2}}(y_{0})\subset\mathcal{B}_{2r}(x_{0})\), we have
\[\begin{split}\left(\fint_{\mathcal{B}_{\frac{r}{4}}\left(y_{0}\right)}|D^{\tau}d_{s}u|^{p_{1}}\,d\mu_{\tau}\right)^{\frac{1}{p_{1}}}&\leq c\left(\left(\fint_{\mathcal{B}_{2r}(x_{0})}|D^{\tau}d_{s}u|^{p}\,d\mu_{\tau}\right)^{\frac{1}{p}}+\mathrm{Tail}_{s,p}\left(\frac{u-(u)_{B_{2r}(x_{0})}}{\left(2r\right)^{\tau+s}};B_{2r}(x_{0})\right)\right)\\&\quad+c\left(\left(\fint_{\mathcal{B}_{2r}(x_{0})}|D^{\tau}d_{0}f|^{\hat{q}_{1}}\,d\mu_{\tau}\right)^{\frac{1}{\hat{q}_{1}}}+\mathrm{Tail}_{\frac{s}{p},p}\left(\frac{f-(f)_{B_{2r}(x_{0})}}{\left(2r\right)^{\tau}};B_{2r}(x_{0})\right)\right),\end{split}\tag{5.30}\]
where we have used Holder's inequality along with the fact that \(p_{1}\leq\hat{q}_{1}\) for the third term in the right-hand side of (5.30).
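Schematically, with \(p_{h}\) as in (5.3), the integrability exponent climbs along the ladder
\[\hat{q}=p_{1}\;\longrightarrow\;\hat{q}_{1}=\begin{cases}p_{2}&\text{if }\gamma_{1}\leq q,\\ q&\text{if }\gamma_{1}>q,\end{cases}\;\longrightarrow\;\cdots\;\longrightarrow\;q,\]
the iteration terminating after \(l_{q}\) steps as in (5.4); this roadmap guides the computations below.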
We now combine (5.28) and (5.30) to obtain that
\[\begin{split}\left(\fint_{\mathcal{B}_{\frac{r}{8}}(y_{0})}|D^{\tau}d_{s}u|^{\hat{q}_{1}}\,d\mu_{\tau}\right)^{\frac{1}{\hat{q}_{1}}}&\leq c\left(\left(\fint_{\mathcal{B}_{2r}(x_{0})}|D^{\tau}d_{s}u|^{p}\,d\mu_{\tau}\right)^{\frac{1}{p}}+\mathrm{Tail}_{s,p}\left(\frac{u-(u)_{B_{2r}(x_{0})}}{(2r)^{\tau+s}};B_{2r}(x_{0})\right)\right)\\&\quad+c\left(\left(\fint_{\mathcal{B}_{2r}(x_{0})}|D^{\tau}d_{0}f|^{\hat{q}_{1}}\,d\mu_{\tau}\right)^{\frac{1}{\hat{q}_{1}}}+\mathrm{Tail}_{\frac{s}{p},p}\left(\frac{f-(f)_{B_{2r}(x_{0})}}{(2r)^{\tau}};B_{2r}(x_{0})\right)\right).\end{split}\tag{5.31}\]
For the fourth terms in the right-hand sides of (5.28) and (5.30), we have used (2.10) with \(u(x)=f(x)\), \(\rho=\frac{r}{2}\) or \(\frac{r}{4}\), \(R=2r\), \(s=\frac{s}{p}\), \(\alpha=0\) and \(t=0\), and Holder's inequality along with the fact that \(p\leq\hat{q}_{1}\). Furthermore, as in (5.25), Lemma 4.2 with \(\tilde{\gamma}=\gamma_{1}\), \(\alpha=0\) and \(\tilde{p}=p_{1}\) yields
\[\left(\fint_{\mathcal{Q}}|D^{\tau}d_{s}u|^{\hat{q}_{1}}\,d\mu_{\tau}\right)^{\frac{1}{\hat{q}_{1}}}\leq c\left(\fint_{\mathcal{Q}}|D^{\tau}d_{s}u|^{p_{1}}\,d\mu_{\tau}\right)^{\frac{1}{p_{1}}}+c\sum_{d=1}^{2}\left(\frac{1}{\tau}\fint_{P^{d}\mathcal{Q}}|D^{\tau}d_{s}u|^{p_{1}}\,d\mu_{\tau}\right)^{\frac{1}{p_{1}}},\tag{5.32}\]
provided that \(\mathcal{Q}=\mathcal{Q}_{\frac{r}{8\sqrt{n}}}\left(z_{1},z_{2}\right)\Subset\Omega\times\Omega\) satisfies (4.30). Let \(z_{1},z_{2}\in B_{r}(x_{0})\).
Since \(P^{d}\mathcal{Q}\subset\mathcal{B}_{\frac{r}{8}}(z_{d})\), Holder's inequality and the same computations as in (5.31) with \(y_{0}=z_{d}\) give that
\[\begin{split}\left(\fint_{P^{d}\mathcal{Q}}|D^{\tau}d_{s}u|^{p_{1}}\,d\mu_{\tau}\right)^{\frac{1}{p_{1}}}&\leq\left(\fint_{\mathcal{B}_{\frac{r}{8}}(z_{d})}|D^{\tau}d_{s}u|^{\hat{q}_{1}}\,d\mu_{\tau}\right)^{\frac{1}{\hat{q}_{1}}}\\&\leq c\left(\left(\fint_{\mathcal{B}_{2r}(x_{0})}|D^{\tau}d_{s}u|^{p}\,d\mu_{\tau}\right)^{\frac{1}{p}}+\mathrm{Tail}_{s,p}\left(\frac{u-(u)_{B_{2r}(x_{0})}}{(2r)^{\tau+s}};B_{2r}(x_{0})\right)\right)\\&\quad+c\left(\left(\fint_{\mathcal{B}_{2r}(x_{0})}|D^{\tau}d_{0}f|^{\hat{q}_{1}}\,d\mu_{\tau}\right)^{\frac{1}{\hat{q}_{1}}}+\mathrm{Tail}_{\frac{s}{p},p}\left(\frac{f-(f)_{B_{2r}(x_{0})}}{(2r)^{\tau}};B_{2r}(x_{0})\right)\right).\end{split}\tag{5.33}\]
In light of (5.26), (5.33) and the fact that \(\mathcal{Q}\subset\mathcal{B}_{2r}(x_{0})\), we further estimate both sides of (5.32) to see
\[\begin{split}\left(\frac{1}{r^{n+p\tau}}\int_{\mathcal{Q}}|D^{\tau}d_{s}u|^{\hat{q}_{1}}\,d\mu_{\tau}\right)^{\frac{1}{\hat{q}_{1}}}&\leq c\left(\left(\fint_{\mathcal{B}_{2r}(x_{0})}|D^{\tau}d_{s}u|^{p}\,d\mu_{\tau}\right)^{\frac{1}{p}}+\mathrm{Tail}_{s,p}\left(\frac{u-(u)_{B_{2r}(x_{0})}}{(2r)^{\tau+s}};B_{2r}(x_{0})\right)\right)\\&\quad+c\left(\left(\fint_{\mathcal{B}_{2r}(x_{0})}|D^{\tau}d_{0}f|^{\hat{q}_{1}}\,d\mu_{\tau}\right)^{\frac{1}{\hat{q}_{1}}}+\mathrm{Tail}_{\frac{s}{p},p}\left(\frac{f-(f)_{B_{2r}(x_{0})}}{(2r)^{\tau}};B_{2r}(x_{0})\right)\right).\end{split}\tag{5.34}\]
As in the case \(l_{q}=0\), the standard covering argument together with (5.19), (5.31), (5.32) and (5.34) yields (5.18) with \(\hat{q}=\hat{q}_{1}\), and hence \(D^{\tau}d_{s}u\in L^{\hat{q}_{1}}_{\mathrm{loc}}\left(\Omega\times\Omega\,;\mu_{\tau}\right)\). Iterating this argument for \(h=2,\ldots,l_{q}\), we finally arrive at (5.18) with \(\hat{q}=q\), which proves Theorem 1.2 in the case (5.6).

We now turn to the case (5.35). We take
\[\tilde{\tau}_{m+1}=\frac{q_{0}+p}{2q_{0}}\tilde{\tau}_{m}\quad\text{and}\quad\tilde{\alpha}_{m}=\tilde{\tau}_{0}-\tilde{\tau}_{m}\]
to see that
\[\tilde{\alpha}_{m}+\left(1-\frac{p}{q}\right)\tilde{\tau}_{m}=\tilde{\tau}_{0}-\frac{p}{q}\tilde{\tau}_{m}=\tilde{\tau}_{0}\left(1-\frac{p}{q}\left(\frac{q_{0}+p}{2q_{0}}\right)^{m}\right).\tag{5.39}\]
From (5.35), (5.36), (5.37) and the fact that
\[\tilde{\tau}_{0}\left(1-\frac{p}{q}\right)<\sigma\quad\text{and}\quad\tilde{\tau}_{0}>\sigma,\]
there is a positive integer \(n_{\sigma}=n_{\sigma}(n,s,p,q,\sigma)\) such that
\[\tilde{\tau}_{0}\left(1-\frac{p}{q}\left(\frac{q_{0}+p}{2q_{0}}\right)^{n_{\sigma}}\right)\geq\sigma\quad\text{and}\quad\tilde{\tau}_{0}\left(1-\frac{p}{q}\left(\frac{q_{0}+p}{2q_{0}}\right)^{n_{\sigma}-1}\right)<\sigma.\]
We now select \(\tau_{0}\in(0,\tilde{\tau}_{0}]\) such that
\[\tau_{0}\left(1-\frac{p}{q}\left(\frac{q_{0}+p}{2q_{0}}\right)^{n_{\sigma}}\right)=\sigma\quad\text{and}\quad\tau_{0}\left(1-\frac{p}{q}\left(\frac{q_{0}+p}{2q_{0}}\right)^{n_{\sigma}-1}\right)<\sigma.\tag{5.40}\]
Similarly, we take
\[\tau_{m+1}=\frac{q_{0}+p}{2q_{0}}\tau_{m}\quad\text{and}\quad\alpha_{m}=\tau_{0}-\tau_{m}\tag{5.41}\]
to observe that
\[\alpha_{n_{\sigma}}+\left(1-\frac{p}{q}\right)\tau_{n_{\sigma}}=\sigma\quad\text{and}\quad\alpha_{i}+\left(1-\frac{p}{q}\right)\tau_{i}<\sigma\quad(i=0,1,\ldots,n_{\sigma}-1),\tag{5.42}\]
where we have used (5.38), (5.39), (5.40) and (5.41). In this setting, we define
\[\tilde{p}_{m}=p+\sum_{i=0}^{m}\frac{1}{2^{i+2}}\frac{q_{0}-p}{4}\quad\text{and}\quad\tilde{q}_{m}=q_{0}\left(\frac{3p+q_{0}}{2(q_{0}+p)}+\sum_{i=0}^{m}\frac{1}{2^{i+2}}\frac{q_{0}-p}{2(q_{0}+p)}\right)\]
to find that
\[p<\tilde{p}_{m}<\frac{3p+q_{0}}{4}<q_{0}\frac{3p+q_{0}}{2(q_{0}+p)}<\tilde{q}_{m}<q_{0}.\tag{5.43}\]
In light of (5.43) and (5.41), we find
\[\alpha_{m}+\left(1-\frac{p}{\tilde{p}_{m}}\right)\tau_{m}<\alpha_{m-1}+\left(1-\frac{p}{\tilde{q}_{m}}\right)\tau_{m-1}.\tag{5.44}\]
Fix a nonnegative integer \(m\in[0,n_{\sigma}]\). Throughout the proof of the next two lemmas, we assume that any weak solution \(u\in W^{s,p}_{\mathrm{loc}}(B_{4})\cap D^{p-1}_{sp}(\mathbb{R}^{n})\) to the localized problem
\[(-\Delta)^{s}_{p,A}u=(-\Delta_{p})^{\frac{s}{p}}f\quad\text{in }B_{4},\tag{5.45}\]
where \(f\in L^{p-1}_{s}(\mathbb{R}^{n})\) with \(d_{0}f\in L^{q}_{\mathrm{loc}}\left(\mathcal{B}_{4};\frac{dx\,dy}{|x-y|^{n+q}}\right)\), satisfies \(D^{\tau_{m}}d_{\alpha_{m}+s}u\in L^{p}_{\mathrm{loc}}\left(\mathcal{B}_{4}\,;\mu_{\tau_{m}}\right)\) with the estimate
\[\begin{split}&\left(\frac{1}{\tau_{m}}\fint_{\mathcal{B}_{r}(x_{0})}|D^{\tau_{m}}d_{\alpha_{m}+s}u|^{\tilde{p}_{m}}\,d\mu_{\tau_{m}}\right)^{\frac{1}{\tilde{p}_{m}}}\\&\leq d_{m}\left[\left(\frac{1}{\tau_{m}}\fint_{\mathcal{B}_{2r}(x_{0})}|D^{\tau_{m}}d_{\alpha_{m}+s}u|^{p}\,d\mu_{\tau_{m}}\right)^{\frac{1}{p}}+\mathrm{Tail}_{s,p}\left(\frac{u-(u)_{B_{2r}(x_{0})}}{(2r)^{\alpha_{m}+\tau_{m}+s}};B_{2r}(x_{0})\right)\right]\\&\quad+d_{m}\left[\left(\frac{1}{\tau_{m}}\fint_{\mathcal{B}_{2r}(x_{0})}|D^{\tau_{m}}d_{\alpha_{m}}f|^{\tilde{q}_{m}}\,d\mu_{\tau_{m}}\right)^{\frac{1}{\tilde{q}_{m}}}+\mathrm{Tail}_{\frac{s}{p},p}\left(\frac{f-(f)_{B_{2r}(x_{0})}}{(2r)^{\alpha_{m}+\tau_{m}}};B_{2r}(x_{0})\right)\right]\end{split}\tag{5.46}\]
for some constant \(d_{m}>0\), provided that \(B_{4r}(x_{0})\Subset B_{4}\). We remark that applying Lemma 2.4 with \(q=\tilde{q}_{m}\), \(s_{1}=\alpha_{m}+\left(1-\frac{p}{\tilde{q}_{m}}\right)\tau_{m}\) and \(s_{2}=\sigma\), we discover that
\[\left(\frac{1}{\tau_{m}}\fint_{\mathcal{B}_{2r}(x_{0})}|D^{\tau_{m}}d_{\alpha_{m}}f|^{\tilde{q}_{m}}\,d\mu_{\tau_{m}}\right)^{\frac{1}{\tilde{q}_{m}}}\leq c\,[f]_{W^{\alpha_{m}+\left(1-\frac{p}{\tilde{q}_{m}}\right)\tau_{m},\tilde{q}_{m}}(B_{2r}(x_{0}))}\leq c\,[f]_{W^{\sigma,q}(B_{2r}(x_{0}))}<\infty,\]
which implies that the right-hand side of (5.46) is well-defined. We now improve the comparison lemma 3.4 using an interpolation argument.

**Lemma 5.2**.: _For any \(\epsilon>0\), there is a constant \(\delta=\delta(\mathsf{data},\epsilon,d_{m})\) such that for any weak solution \(u\) to (5.45) with_
\[B_{20\rho_{x_{i}}}(x_{i})\Subset B_{4},\]
\[\frac{1}{\tau_{m}}\fint_{\mathcal{B}_{20\rho_{x_{i}}}(x_{i})}|D^{\tau_{m}}d_{\alpha_{m}+s}u|^{p}\,d\mu_{\tau_{m}}+\mathrm{Tail}_{s,p}\left(\frac{u-(u)_{B_{20\rho_{x_{i}}}(x_{i})}}{(20\rho_{x_{i}})^{\alpha_{m}+\tau_{m}+s}};B_{20\rho_{x_{i}}}(x_{i})\right)^{p}\leq\lambda^{p}\tag{5.47}\]
_and_
\[\begin{split}&\frac{1}{\tau_{m}}\fint_{\mathcal{B}_{20\rho_{x_{i}}}(x_{i})}|D^{\tau_{m}}d_{\alpha_{m}}f|^{\tilde{q}_{m}}\,d\mu_{\tau_{m}}+\mathrm{Tail}_{\frac{s}{p},p}\left(\frac{f-(f)_{B_{20\rho_{x_{i}}}(x_{i})}}{(20\rho_{x_{i}})^{\alpha_{m}+\tau_{m}}};B_{20\rho_{x_{i}}}(x_{i})\right)^{\tilde{q}_{m}}\\&\quad+\left(\fint_{B_{10\rho_{x_{i}}}(x_{i})}\fint_{B_{10\rho_{x_{i}}}(x_{i})}|A(x,y)-(A)_{B_{10\rho_{x_{i}}}(x_{i})\times B_{10\rho_{x_{i}}}(x_{i})}|\,dx\,dy\right)^{\tilde{q}_{m}}\leq(\delta\lambda)^{\tilde{q}_{m}},\end{split}\]
_there exists a weak solution \(v\in W^{s,p}(B_{10\rho_{x_{i}}}(x_{i}))\cap L^{p-1}_{sp}(\mathbb{R}^{n})\) to_
\[(-\Delta)^{s}_{p,A_{10\rho_{x_{i}},x_{i}}}v=0\quad\text{in }B_{10\rho_{x_{i}}}(x_{i})\tag{5.48}\]
_such that_
\[\frac{1}{\tau_{m}}\fint_{\mathcal{B}_{5\rho_{x_{i}}}(x_{i})}|D^{\tau_{m}}d_{\alpha_{m}+s}(u-v)|^{p}\,d\mu_{\tau_{m}}\leq(\epsilon\lambda)^{p}\quad\text{and}\quad\|D^{\tau_{m}}d_{\alpha_{m}+s}v\|_{L^{\infty}(\mathcal{B}_{5\rho_{x_{i}}}(x_{i}))}\leq c_{2}\lambda\tag{5.49}\]
_for some constant \(c_{2}=c_{2}(\mathsf{data})\)._

Proof.: Let us define for \(x,y\in\mathbb{R}^{n}\),
\[\tilde{u}(x)=\frac{u(5\rho_{x_{i}}x+x_{i})}{\left(5\rho_{x_{i}}\right)^{\alpha_{m}+\tau_{m}+s}\lambda},\quad\tilde{f}(x)=\frac{f(5\rho_{x_{i}}x+x_{i})}{\left(5\rho_{x_{i}}\right)^{\alpha_{m}+\tau_{m}}\lambda}\quad\text{and}\quad\tilde{A}(x,y)=A(5\rho_{x_{i}}x+x_{i},5\rho_{x_{i}}y+x_{i})\]
to see that
\[(-\Delta)^{s}_{p,\tilde{A}}\tilde{u}=(-\Delta_{p})^{\frac{s}{p}}\tilde{f}\quad\text{in }B_{4}\]
and
\[\frac{1}{\tau_{m}}\fint_{\mathcal{B}_{4}}|D^{\tau_{m}}d_{\alpha_{m}+s}\tilde{u}|^{p}\,d\mu_{\tau_{m}}+\mathrm{Tail}_{s,p}\left(\frac{\tilde{u}-(\tilde{u})_{B_{4}}}{4^{\alpha_{m}+\tau_{m}+s}};B_{4}\right)^{p}\leq 1,\]
\[\frac{1}{\tau_{m}}\fint_{\mathcal{B}_{4}}|D^{\tau_{m}}d_{\alpha_{m}}\tilde{f}|^{\tilde{q}_{m}}\,d\mu_{\tau_{m}}+\mathrm{Tail}_{\frac{s}{p},p}\left(\frac{\tilde{f}-(\tilde{f})_{B_{4}}}{4^{\alpha_{m}+\tau_{m}}};B_{4}\right)^{\tilde{q}_{m}}\leq\delta^{\tilde{q}_{m}},\tag{5.50}\]
which follows from (5.47). We first observe that there is a constant \(c=c(n,s,p,q,\sigma)\) such that
\[\frac{1}{\tau_{m}}\fint_{\mathcal{B}_{4}}|D^{\tau_{m}}d_{0}\tilde{f}|^{p}\,d\mu_{\tau_{m}}\leq\frac{1}{\tau_{m}^{1-\frac{p}{\tilde{q}_{m}}}}\left(\frac{1}{\tau_{m}}\fint_{\mathcal{B}_{4}}|D^{\tau_{m}}d_{0}\tilde{f}|^{\tilde{q}_{m}}\,d\mu_{\tau_{m}}\right)^{\frac{p}{\tilde{q}_{m}}}\leq c,\tag{5.51}\]
where we have used Holder's inequality.
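The Holder step in (5.51) is the elementary estimate
\[\frac{1}{\tau_{m}}\fint_{\mathcal{B}_{4}}|g|^{p}\,d\mu_{\tau_{m}}\leq\frac{1}{\tau_{m}}\left(\fint_{\mathcal{B}_{4}}|g|^{\tilde{q}_{m}}\,d\mu_{\tau_{m}}\right)^{\frac{p}{\tilde{q}_{m}}}=\frac{1}{\tau_{m}^{1-\frac{p}{\tilde{q}_{m}}}}\left(\frac{1}{\tau_{m}}\fint_{\mathcal{B}_{4}}|g|^{\tilde{q}_{m}}\,d\mu_{\tau_{m}}\right)^{\frac{p}{\tilde{q}_{m}}}\]
applied with \(g=D^{\tau_{m}}d_{0}\tilde{f}\); the prefactor is harmless since \(0<\tau_{n_{\sigma}}\leq\tau_{m}\leq\tau_{0}\) and \(n_{\sigma}\), \(\tau_{0}\) depend only on \(n,s,p,q\) and \(\sigma\).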
Combining the estimates (5.50) and (5.51), and then using the fact that
\[1\leq\frac{8^{\alpha_{m}}}{|x-y|^{\alpha_{m}}}\leq\frac{c}{|x-y|^{\alpha_{m}}}\quad\text{for }x,y\in B_{4},\]
we deduce that the corresponding quantities with \(d_{s}\tilde{u}\) and \(d_{0}\tilde{f}\) in place of \(d_{\alpha_{m}+s}\tilde{u}\) and \(d_{\alpha_{m}}\tilde{f}\) are bounded by a constant depending only on \(\mathsf{data}\). By following the same lines as in the proof of (3.20) and (3.24), we observe that there is a weak solution \(\tilde{v}\in W^{s,p}(B_{2})\cap L^{p-1}_{sp}(\mathbb{R}^{n})\) to
\[(-\Delta)^{s}_{p,\tilde{A}_{2}}\tilde{v}=0\quad\text{in }B_{2}\tag{5.54}\]
such that
\[\frac{1}{\tau_{m}}\fint_{\mathcal{B}_{2}}|D^{\tau_{m}}d_{s}(\tilde{u}-\tilde{v})|^{p}\,d\mu_{\tau_{m}}\leq c\,(\delta^{\nu}+\delta^{p})\quad\text{and}\quad\mathrm{Tail}_{s,p}(\tilde{v}-(\tilde{v})_{B_{2}};B_{2})\leq c\tag{5.55}\]
for some constants \(c=c(\mathsf{data})\) and \(\nu=\nu(\mathsf{data})>0\). In light of Lemma 3.2 along with the fact that \(s+\alpha_{m}+\tau_{m}=s+\tau_{0}<\min\left\{1,\frac{sp}{p-1}\right\}\) by (5.37) and (5.40), Lemma 2.2, (5.55) and (5.53), we then find that there is a constant \(c_{2}=c_{2}(\mathsf{data})\) such that
\[\begin{split}\|D^{\tau_{m}}d_{\alpha_{m}+s}\tilde{v}\|_{L^{\infty}(\mathcal{B}_{1})}&=\|\tilde{v}\|_{C^{s+\tau_{0}}(B_{1})}\\&\leq c\left(\fint_{B_{2}}|\tilde{v}-(\tilde{v})_{B_{2}}|^{p}\,dx\right)^{\frac{1}{p}}+c\,\mathrm{Tail}_{s,p}(\tilde{v}-(\tilde{v})_{B_{2}};B_{2})\\&\leq c\left(\frac{1}{\tau_{m}}\fint_{\mathcal{B}_{2}}|D^{\tau_{m}}d_{s}\tilde{v}|^{p}\,d\mu_{\tau_{m}}\right)^{\frac{1}{p}}+c\\&\leq c_{2}.\end{split}\tag{5.56}\]
We next note from (2.9) that
\[\mathrm{Tail}_{s,p}\left(\frac{\tilde{u}-(\tilde{u})_{B_{2}}}{2^{\alpha_{m}+\tau_{m}+s}};B_{2}\right)\leq c\left(\frac{1}{\tau_{m}}\fint_{\mathcal{B}_{4}}|D^{\tau_{m}}d_{\alpha_{m}+s}\tilde{u}|^{p}\,d\mu_{\tau_{m}}\right)^{\frac{1}{p}}+c\,\mathrm{Tail}_{s,p}\left(\frac{\tilde{u}-(\tilde{u})_{B_{4}}}{4^{\alpha_{m}+\tau_{m}+s}};B_{4}\right)\]
and
\[\mathrm{Tail}_{\frac{s}{p},p}\left(\frac{\tilde{f}-(\tilde{f})_{B_{2}}}{2^{\alpha_{m}+\tau_{m}}};B_{2}\right)\leq c\left(\frac{1}{\tau_{m}}\fint_{\mathcal{B}_{4}}|D^{\tau_{m}}d_{\alpha_{m}}\tilde{f}|^{\tilde{q}_{m}}\,d\mu_{\tau_{m}}\right)^{\frac{1}{\tilde{q}_{m}}}+c\,\mathrm{Tail}_{\frac{s}{p},p}\left(\frac{\tilde{f}-(\tilde{f})_{B_{4}}}{4^{\alpha_{m}+\tau_{m}}};B_{4}\right),\]
where we have used Holder's inequality along with the fact that \(p\leq\tilde{q}_{m}\) for the first term on the right-hand side of the last inequality.
Combining the above two inequalities, the scaled version of (5.46) with \(u=\tilde{u}\), \(r=1\) and \(x_{0}=0\), and (5.50), we have
\[\begin{split}\left(\frac{1}{\tau_{m}}\fint_{\mathcal{B}_{1}}|D^{\tau_{m}}d_{\alpha_{m}+s}\tilde{u}|^{\tilde{p}_{m}}\,d\mu_{\tau_{m}}\right)^{\frac{1}{\tilde{p}_{m}}}&\leq c\left[\left(\frac{1}{\tau_{m}}\fint_{\mathcal{B}_{4}}|D^{\tau_{m}}d_{\alpha_{m}+s}\tilde{u}|^{p}\,d\mu_{\tau_{m}}\right)^{\frac{1}{p}}+\mathrm{Tail}_{s,p}\left(\frac{\tilde{u}-(\tilde{u})_{B_{4}}}{4^{\alpha_{m}+\tau_{m}+s}};B_{4}\right)\right]\\&\quad+c\left[\left(\frac{1}{\tau_{m}}\fint_{\mathcal{B}_{4}}|D^{\tau_{m}}d_{\alpha_{m}}\tilde{f}|^{\tilde{q}_{m}}\,d\mu_{\tau_{m}}\right)^{\frac{1}{\tilde{q}_{m}}}+\mathrm{Tail}_{\frac{s}{p},p}\left(\frac{\tilde{f}-(\tilde{f})_{B_{4}}}{4^{\alpha_{m}+\tau_{m}}};B_{4}\right)\right]\\&\leq c\end{split}\tag{5.57}\]
for some constant \(c=c(\mathsf{data},d_{m})\). Applying Lemma 2.4 with \(q=\tilde{p}_{m}\), \(s_{1}=s+\alpha_{m}+\frac{1}{2}\left(1-\frac{p}{\tilde{p}_{m}}\right)\tau_{m}\) and \(s_{2}=s+\alpha_{m}+\left(1-\frac{p}{\tilde{p}_{m}}\right)\tau_{m}\), and then using (5.57), (5.56) and (5.52), we get that
\[\begin{split}[\tilde{u}-\tilde{v}]_{W^{s+\alpha_{m}+\frac{1}{2}\left(1-\frac{p}{\tilde{p}_{m}}\right)\tau_{m},p}(B_{1})}&\leq c\,[\tilde{u}-\tilde{v}]_{W^{s+\alpha_{m}+\left(1-\frac{p}{\tilde{p}_{m}}\right)\tau_{m},\tilde{p}_{m}}(B_{1})}\\&\leq c\left(\frac{1}{\tau_{m}}\fint_{\mathcal{B}_{1}}|D^{\tau_{m}}d_{\alpha_{m}+s}(\tilde{u}-\tilde{v})|^{\tilde{p}_{m}}\,d\mu_{\tau_{m}}\right)^{\frac{1}{\tilde{p}_{m}}}\\&\leq c\left(\frac{1}{\tau_{m}}\fint_{\mathcal{B}_{1}}|D^{\tau_{m}}d_{\alpha_{m}+s}\tilde{u}|^{\tilde{p}_{m}}\,d\mu_{\tau_{m}}\right)^{\frac{1}{\tilde{p}_{m}}}+c\left(\frac{1}{\tau_{m}}\fint_{\mathcal{B}_{1}}|D^{\tau_{m}}d_{\alpha_{m}+s}\tilde{v}|^{\tilde{p}_{m}}\,d\mu_{\tau_{m}}\right)^{\frac{1}{\tilde{p}_{m}}}\leq c,\end{split}\]
where the last inequality follows from (5.57) and (5.56). We next apply Lemma 2.5 with \(s_{1}=s\), \(s_{2}=s+\alpha_{m}+\frac{1}{2}\left(1-\frac{p}{\tilde{p}_{m}}\right)\tau_{m}\) and \(t=t_{m}\in(0,1)\) such that
\[t_{m}s+(1-t_{m})\left(s+\alpha_{m}+\frac{1}{2}\left(1-\frac{p}{\tilde{p}_{m}}\right)\tau_{m}\right)=s+\alpha_{m},\]
to see that
\[[\tilde{u}-\tilde{v}]_{W^{s+\alpha_{m},p}(B_{1})}\leq[\tilde{u}-\tilde{v}]_{W^{s,p}(B_{1})}^{t_{m}}\,[\tilde{u}-\tilde{v}]_{W^{s+\alpha_{m}+\frac{1}{2}\left(1-\frac{p}{\tilde{p}_{m}}\right)\tau_{m},p}(B_{1})}^{1-t_{m}}.\tag{5.58}\]
We now take \(t=\min_{m=0,1,\ldots,n_{\sigma}}t_{m}>0\), which depends only on \(n,s,p,q\) and \(\sigma\), since each \(t_{m}\) and \(n_{\sigma}\) depend only on \(n,s,p,q\) and \(\sigma\).
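Solving the defining relation for \(t_{m}\) explicitly gives
\[1-t_{m}=\frac{\alpha_{m}}{\alpha_{m}+\frac{1}{2}\left(1-\frac{p}{\tilde{p}_{m}}\right)\tau_{m}},\qquad\text{that is,}\qquad t_{m}=\frac{\frac{1}{2}\left(1-\frac{p}{\tilde{p}_{m}}\right)\tau_{m}}{\alpha_{m}+\frac{1}{2}\left(1-\frac{p}{\tilde{p}_{m}}\right)\tau_{m}}\in(0,1],\]
which is positive since \(\tau_{m}>0\) and \(p<\tilde{p}_{m}\) by (5.43); note that \(t_{0}=1\) because \(\alpha_{0}=0\) by (5.41).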
In light of (5.55), (5.52) and (5.58), we get that
\[\frac{1}{\tau_{m}}\fint_{\mathcal{B}_{1}}|D^{\tau_{m}}d_{\alpha_{m}+s}(\tilde{u}-\tilde{v})|^{p}\,d\mu_{\tau_{m}}=\frac{1}{\tau_{m}}\frac{1}{\mu_{\tau_{m}}(\mathcal{B}_{1})}[\tilde{u}-\tilde{v}]^{p}_{W^{s+\alpha_{m},p}(B_{1})}\leq c\left(\delta^{\nu}+\delta^{p}\right)^{t}.\]
By taking \(\delta\) sufficiently small depending only on \(\mathsf{data}\), \(\epsilon\) and \(d_{m}\), we have
\[\frac{1}{\tau_{m}}\fint_{\mathcal{B}_{1}}|D^{\tau_{m}}d_{\alpha_{m}+s}(\tilde{u}-\tilde{v})|^{p}\,d\mu_{\tau_{m}}\leq\epsilon^{p}.\tag{5.59}\]
We define \(v(x)=\left(5\rho_{x_{i}}\right)^{\alpha_{m}+\tau_{m}+s}\lambda\,\tilde{v}\left(\frac{x-x_{i}}{5\rho_{x_{i}}}\right)\) for \(x\in\mathbb{R}^{n}\) to conclude that \(v\) is a local weak solution to (5.48) satisfying (5.49), which follows from (5.54), (5.56) and (5.59).

By following the same strategy as for the proof of Lemma 5.1, we now obtain (5.46) with \(m\) replaced by \(m+1\), where the constant \(d_{m+1}\) depends only on \(\mathsf{data}\) and \(d_{m}\).

**Lemma 5.3**.: _Let \(u\) be a weak solution to (5.45), and let \(m\leq n_{\sigma}-1\). Then there is a small \(\delta=\delta(\mathsf{data},d_{m})\) such that if \(A\) is \((\delta,2)\)-vanishing in \(B_{4}\times B_{4}\) only at the diagonal, then \(D^{\tau_{m+1}}d_{\alpha_{m+1}+s}u\in L^{p}_{\mathrm{loc}}\left(\mathcal{B}_{4}\,;\mu_{\tau_{m+1}}\right)\) and (5.46) with \(m\) replaced by \(m+1\) holds, where \(d_{m+1}=d_{m+1}(\mathsf{data},d_{m})\)._

Proof.: Let \(B_{4r}(x_{0})\Subset B_{4}\) for some \(r>0\). We then define for \(x,y\in\mathbb{R}^{n}\),
\[\tilde{u}(x)=\frac{u\left(rx+x_{0}\right)}{r^{\alpha_{m}+\tau_{m}+s}},\quad\tilde{f}(x)=\frac{f\left(rx+x_{0}\right)}{r^{\alpha_{m}+\tau_{m}}}\quad\text{and}\quad\tilde{A}(x,y)=A\left(rx+x_{0},ry+x_{0}\right)\]
to see that
\[(-\Delta)^{s}_{p,\tilde{A}}\tilde{u}=(-\Delta_{p})^{\frac{s}{p}}\tilde{f}\quad\text{in }B_{4},\]
where \(\tilde{A}\) is \((\delta,2)\)-vanishing in \(B_{4}\times B_{4}\) only at the diagonal. We first want to show that
\[\begin{split}\left(\fint_{\mathcal{B}_{1}}|D^{\tau_{m}}d_{\alpha_{m}+s}\tilde{u}|^{\tilde{q}_{m+1}}\,d\mu_{\tau_{m}}\right)^{\frac{1}{\tilde{q}_{m+1}}}&\leq c\left[\left(\fint_{\mathcal{B}_{2}}|D^{\tau_{m}}d_{\alpha_{m}+s}\tilde{u}|^{p}\,d\mu_{\tau_{m}}\right)^{\frac{1}{p}}+\mathrm{Tail}_{s,p}\left(\frac{\tilde{u}-(\tilde{u})_{B_{2}}}{2^{\alpha_{m}+\tau_{m}+s}};B_{2}\right)\right]\\&\quad+c\left[\left(\fint_{\mathcal{B}_{2}}|D^{\tau_{m}}d_{\alpha_{m}}\tilde{f}|^{\tilde{q}_{m+1}}\,d\mu_{\tau_{m}}\right)^{\frac{1}{\tilde{q}_{m+1}}}+\mathrm{Tail}_{\frac{s}{p},p}\left(\frac{\tilde{f}-(\tilde{f})_{B_{2}}}{2^{\alpha_{m}+\tau_{m}}};B_{2}\right)\right]\end{split}\tag{5.60}\]
for some constant \(c=c(\mathsf{data},d_{m})\). We fix \(\epsilon\in(0,1)\), and then take \(\delta=\delta(\mathsf{data},\epsilon,d_{m})\) given in Lemma 5.2. Let \(1\leq r_{1}<r_{2}\leq 2\). We now apply Lemma 4.1 with \(\alpha=\alpha_{m}\), \(\tilde{p}=p\), \(\tilde{q}=\tilde{q}_{m}\), \(\tilde{\gamma}=\gamma_{0}\) and \(\tau=\tau_{m}\) to obtain families of countable disjoint diagonal balls and off-diagonal cubes, \(\{\mathcal{B}_{\rho_{x_{i}}}(x_{i})\}_{i\in\mathbb{N}}\) and \(\{\mathcal{Q}\}_{\mathcal{Q}\in\mathcal{A}}\), respectively, such that
\[\{(x,y)\in\mathcal{B}_{r_{1}}\,;\,|D^{\tau_{m}}d_{\alpha_{m}+s}\tilde{u}|(x,y)>\lambda\}\subset\left(\bigcup_{i\in\mathbb{N}}\mathcal{B}_{5\rho_{x_{i}}}(x_{i})\right)\bigcup\left(\bigcup_{\mathcal{Q}\in\mathcal{A}}\mathcal{Q}\right)\]
provided that \(\lambda\geq\lambda_{0}\), where \(\lambda_{0}\) is given in (4.5) with \(\alpha=\alpha_{m}\), \(\tilde{p}=p\), \(\tilde{q}=\tilde{q}_{m}\) and \(\tau=\tau_{m}\).
Moreover, by Lemma 4.1, we have (4.6), (4.7) and (4.8) with \(\alpha=\alpha_{m}\), \(\tilde{p}=p\), \(\tilde{q}=\tilde{q}_{m}\), \(\tilde{\gamma}=\gamma_{0}\) and \(\tau=\tau_{m}\). We now define for \(L\geq\lambda_{0}\),
\[\tilde{\phi}_{L}(r)=\left(\fint_{\mathcal{B}_{r}}|D^{\tau_{m}}d_{\alpha_{m}+s}\tilde{u}|_{L}^{\tilde{q}_{m+1}}\,d\mu_{\tau_{m}}\right)^{\frac{1}{\tilde{q}_{m+1}}},\]
where \(\left|D^{\tau_{m}}d_{\alpha_{m}+s}\tilde{u}\right|_{L}=\min\{\left|D^{\tau_{m}}d_{\alpha_{m}+s}\tilde{u}\right|,L\}\). We are going to prove that if \(L\geq\lambda_{0}\), then
\[\begin{split}\tilde{\phi}_{L}(r_{1})&\leq\frac{1}{2}\tilde{\phi}_{L}(r_{2})+\frac{c}{(r_{2}-r_{1})^{2n}}\left(\left(\fint_{\mathcal{B}_{2}}|D^{\tau_{m}}d_{\alpha_{m}+s}\tilde{u}|^{p}\,d\mu_{\tau_{m}}\right)^{\frac{1}{p}}+\mathrm{Tail}_{s,p}\left(\frac{\tilde{u}-(\tilde{u})_{B_{2}}}{2^{\alpha_{m}+\tau_{m}+s}};B_{2}\right)\right)\\&\quad+\frac{c}{(r_{2}-r_{1})^{2n}}\left(\left(\fint_{\mathcal{B}_{2}}|D^{\tau_{m}}d_{\alpha_{m}}\tilde{f}|^{\tilde{q}_{m}}\,d\mu_{\tau_{m}}\right)^{\frac{1}{\tilde{q}_{m}}}+\mathrm{Tail}_{\frac{s}{p},p}\left(\frac{\tilde{f}-(\tilde{f})_{B_{2}}}{2^{\alpha_{m}+\tau_{m}}};B_{2}\right)\right)\end{split}\tag{5.61}\]
for some constant \(c=c(\mathsf{data},d_{m})\) independent of \(L\). Using (4.58) with \(\alpha=\alpha_{m}\), \(\tilde{p}=p\), \(\tilde{q}=\tilde{q}_{m}\) and \(\tau=\tau_{m}\), and the fact that \(\frac{s}{p-1}-(\alpha_{m}+\tau_{m})=\frac{s}{p-1}-\tau_{0}>0\), we have
\[\frac{1}{\tau_{m}}\fint_{\mathcal{B}_{20\rho_{x_{i}}}(x_{i})}|D^{\tau_{m}}d_{\alpha_{m}+s}\tilde{u}|^{p}\,d\mu_{\tau_{m}}+\mathrm{Tail}_{s,p}\left(\frac{\tilde{u}-(\tilde{u})_{B_{20\rho_{x_{i}}}}}{(20\rho_{x_{i}})^{\alpha_{m}+\tau_{m}+s}};B_{20\rho_{x_{i}}}(x_{i})\right)^{p}\leq(c_{1}\lambda)^{p},\]
\[\frac{1}{\tau_{m}}\fint_{\mathcal{B}_{20\rho_{x_{i}}}(x_{i})}|D^{\tau_{m}}d_{\alpha_{m}}\tilde{f}|^{\tilde{q}_{m}}\,d\mu_{\tau_{m}}+\mathrm{Tail}_{\frac{s}{p},p}\left(\frac{\tilde{f}-(\tilde{f})_{B_{20\rho_{x_{i}}}}}{(20\rho_{x_{i}})^{\alpha_{m}+\tau_{m}}};B_{20\rho_{x_{i}}}(x_{i})\right)^{\tilde{q}_{m}}\leq(c_{1}\lambda\delta)^{\tilde{q}_{m}}\]
for some constant \(c_{1}=c_{1}(n,s,p,q,\sigma)\). We then apply Lemma 5.2 with \(\lambda\) there, replaced by \(c_{1}\lambda\), in order to obtain a weak solution \(\tilde{v}\) of (5.48) satisfying
\[\frac{1}{\tau_{m}}\fint_{\mathcal{B}_{5\rho_{x_{i}}}(x_{i})}|D^{\tau_{m}}d_{\alpha_{m}+s}(\tilde{u}-\tilde{v})|^{p}\,d\mu_{\tau_{m}}\leq(\epsilon c_{1}\lambda)^{p}\quad\text{and}\quad\|D^{\tau_{m}}d_{\alpha_{m}+s}\tilde{v}\|_{L^{\infty}(\mathcal{B}_{5\rho_{x_{i}}}(x_{i}))}\leq c_{2}c_{1}\lambda,\tag{5.62}\]
where the constant \(c_{2}\) is given in Lemma 5.2.
We first notice from Fubini's theorem that
\[\begin{split}\int_{\mathcal{B}_{r_{1}}}|D^{\tau_{m}}d_{\alpha_{m}+s}\tilde{u}|_{L}^{\tilde{q}_{m+1}}\,d\mu_{\tau_{m}}&=\int_{0}^{\infty}\tilde{q}_{m+1}\lambda^{\tilde{q}_{m+1}-1}\mu_{\tau_{m}}\left(\{(x,y)\in\mathcal{B}_{r_{1}}\,;\,|D^{\tau_{m}}d_{\alpha_{m}+s}\tilde{u}|_{L}(x,y)>\lambda\}\right)d\lambda\\&=\int_{0}^{M\lambda_{0}}\tilde{q}_{m+1}\lambda^{\tilde{q}_{m+1}-1}\mu_{\tau_{m}}\left(\{(x,y)\in\mathcal{B}_{r_{1}}\,;\,|D^{\tau_{m}}d_{\alpha_{m}+s}\tilde{u}|_{L}(x,y)>\lambda\}\right)d\lambda\\&\quad+\int_{M\lambda_{0}}^{L}\tilde{q}_{m+1}\lambda^{\tilde{q}_{m+1}-1}\mu_{\tau_{m}}\left(\{(x,y)\in\mathcal{B}_{r_{1}}\,;\,|D^{\tau_{m}}d_{\alpha_{m}+s}\tilde{u}|(x,y)>\lambda\}\right)d\lambda=:I+J,\end{split}\]
where \(M>1\) will be selected later and \(L>M\lambda_{0}\). We estimate \(I\) as
\[I\leq(M\lambda_{0})^{\tilde{q}_{m+1}}\mu_{\tau_{m}}(\mathcal{B}_{r_{1}}).\]
Proceeding as for (5.11) leads to
\[\begin{split}J&\leq\int_{\lambda_{0}}^{LM^{-1}}\tilde{q}_{m+1}M^{\tilde{q}_{m+1}}\lambda^{\tilde{q}_{m+1}-1}\mu_{\tau_{m}}\left(\left\{(x,y)\in\bigcup_{i}\mathcal{B}_{5\rho_{x_{i}}}(x_{i})\,;\,|D^{\tau_{m}}d_{\alpha_{m}+s}\tilde{u}(x,y)|>M\lambda\right\}\right)d\lambda\\&\quad+\int_{\lambda_{0}}^{LM^{-1}}\tilde{q}_{m+1}M^{\tilde{q}_{m+1}}\lambda^{\tilde{q}_{m+1}-1}\mu_{\tau_{m}}\left(\left\{(x,y)\in\bigcup_{\mathcal{Q}\in\mathcal{A}}\mathcal{Q}\,;\,|D^{\tau_{m}}d_{\alpha_{m}+s}\tilde{u}(x,y)|>M\lambda\right\}\right)d\lambda=:J_{1}+J_{2}.\end{split}\tag{5.63}\]
After a few algebraic manipulations along with (5.52), (5.64) and (5.65), we find that
\[\mu_{\tau_{m}}\left(\mathcal{B}_{\rho_{x_{i}}}(x_{i})\right)\leq\left(\frac{c}{\lambda}\right)^{p}\int_{\mathcal{B}_{\rho_{x_{i}}}(x_{i})}|D^{\tau_{m}}d_{\alpha_{m}+s}\tilde{u}|^{p}\,d\mu_{\tau_{m}}+\left(\frac{c}{\delta\lambda}\right)^{\tilde{q}_{m}}\int_{\mathcal{B}_{\rho_{x_{i}}}(x_{i})}|D^{\tau_{m}}d_{\alpha_{m}}\tilde{f}|^{\tilde{q}_{m}}\,d\mu_{\tau_{m}}\]
for some constant \(c=c(\mathsf{data})\). We then observe that
\[\begin{split}\mu_{\tau_{m}}\left(\mathcal{B}_{\rho_{x_{i}}}(x_{i})\right)&\leq\left(\frac{2c}{\lambda}\right)^{p}\int_{\mathcal{B}_{\rho_{x_{i}}}(x_{i})\cap\{|D^{\tau_{m}}d_{\alpha_{m}+s}\tilde{u}|>a\lambda\}}|D^{\tau_{m}}d_{\alpha_{m}+s}\tilde{u}|^{p}\,d\mu_{\tau_{m}}\\&\quad+\left(\frac{2c}{\delta\lambda}\right)^{\tilde{q}_{m}}\int_{\mathcal{B}_{\rho_{x_{i}}}(x_{i})\cap\{|D^{\tau_{m}}d_{\alpha_{m}}\tilde{f}|>b\lambda\}}|D^{\tau_{m}}d_{\alpha_{m}}\tilde{f}|^{\tilde{q}_{m}}\,d\mu_{\tau_{m}},\end{split}\tag{5.66}\]
where \(a=\frac{1}{4c}\) and \(b=\frac{\delta}{4c}\).
Combine (5.63) and (5.66) together with the fact that \(\{\mathcal{B}_{\rho_{x_{i}}}(x_{i})\}_{i}\) is a collection of disjoint sets contained in \(\mathcal{B}_{r_{2}}\) to see that
\[\begin{split}J_{1}&\leq cM^{\tilde{q}_{m+1}-p}\epsilon^{p}\int_{\lambda_{0}}^{\infty}\lambda^{\tilde{q}_{m+1}-p-1}\int_{\mathcal{B}_{r_{2}}\cap\{|D^{\tau_{m}}d_{\alpha_{m}+s}\tilde{u}|_{LM^{-1}}>a\lambda\}}|D^{\tau_{m}}d_{\alpha_{m}+s}\tilde{u}|^{p}\,d\mu_{\tau_{m}}\,d\lambda\\&\quad+cM^{\tilde{q}_{m+1}-p}\epsilon^{p}\int_{\lambda_{0}}^{\infty}\lambda^{\tilde{q}_{m+1}-\tilde{q}_{m}-1}\int_{\mathcal{B}_{r_{2}}\cap\{|D^{\tau_{m}}d_{\alpha_{m}}\tilde{f}|>b\lambda\}}\frac{1}{\delta^{\tilde{q}_{m}}}|D^{\tau_{m}}d_{\alpha_{m}}\tilde{f}|^{\tilde{q}_{m}}\,d\mu_{\tau_{m}}\,d\lambda\end{split}\]
for some constant \(c=c(\mathsf{data})\). We then have
\[J_{1}\leq cM^{\tilde{q}_{m+1}-p}\epsilon^{p}\int_{\mathcal{B}_{r_{2}}}|D^{\tau_{m}}d_{\alpha_{m}+s}\tilde{u}|_{LM^{-1}}^{\tilde{q}_{m+1}-p}\,|D^{\tau_{m}}d_{\alpha_{m}+s}\tilde{u}|^{p}\,d\mu_{\tau_{m}}+cM^{\tilde{q}_{m+1}-\tilde{q}_{m}}\epsilon^{p}\int_{\mathcal{B}_{r_{2}}}\frac{1}{\delta^{\tilde{q}_{m+1}}}|D^{\tau_{m}}d_{\alpha_{m}}\tilde{f}|^{\tilde{q}_{m+1}}\,d\mu_{\tau_{m}},\]
where we have used Fubini's theorem. Since \(\gamma_{0}>\tilde{q}_{m+1}\), similar arguments performed on the estimates (5.15) and (5.16), along with (4.7) and (4.8) with \(\alpha=\alpha_{m}\), \(\tilde{p}=p\), \(\tilde{\gamma}=\gamma_{0}\) and \(\tau=\tau_{m}\), yield that
\[J_{2}\leq cM^{\tilde{q}_{m+1}-\gamma_{0}}\int_{\mathcal{B}_{r_{2}}}|D^{\tau_{m}}d_{\alpha_{m}+s}\tilde{u}|_{LM^{-1}}^{\tilde{q}_{m+1}-p}\,|D^{\tau_{m}}d_{\alpha_{m}+s}\tilde{u}|^{p}\,d\mu_{\tau_{m}}\leq\frac{1}{2^{2n+2q}}\int_{\mathcal{B}_{r_{2}}}|D^{\tau_{m}}d_{\alpha_{m}+s}\tilde{u}|_{LM^{-1}}^{\tilde{q}_{m+1}}\,d\mu_{\tau_{m}}\]
by taking \(M\) sufficiently large such that \(cM^{\tilde{q}_{m+1}-\gamma_{0}}\leq\frac{1}{2^{2n+2q}}\) and \(M>c_{1}c_{2}\), depending only on \(\mathsf{data}\). We now choose a sufficiently small \(\epsilon=\epsilon(\mathsf{data})\) such that \(cM^{\tilde{q}_{m+1}-p}\epsilon^{p}\leq\frac{1}{2^{2n+2q}}\) to see that
\[J_{1}\leq\frac{1}{2^{2n+2q}}\int_{\mathcal{B}_{r_{2}}}|D^{\tau_{m}}d_{\alpha_{m}+s}\tilde{u}|_{LM^{-1}}^{\tilde{q}_{m+1}}\,d\mu_{\tau_{m}}+c\int_{\mathcal{B}_{r_{2}}}|D^{\tau_{m}}d_{\alpha_{m}}\tilde{f}|^{\tilde{q}_{m+1}}\,d\mu_{\tau_{m}}\]
for some constant \(c=c(\mathsf{data},d_{m})\), as \(\delta\) depends only on \(\mathsf{data}\) and \(d_{m}\). In light of the estimates of \(I\), \(J_{1}\) and \(J_{2}\), the same arguments as for (5.17) yield (5.61). Then (5.60) follows by using the technical lemma 2.8 and the limit procedure as in the last part of the proof of Lemma 5.1.
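Here Lemma 2.8 is applied in the familiar way: assuming it is the standard iteration lemma absorbing the term \(\frac{1}{2}\tilde{\phi}_{L}(r_{2})\), the inequality (5.61), valid for all \(1\leq r_{1}<r_{2}\leq 2\), gives
\[\tilde{\phi}_{L}(1)\leq cA\quad\text{for every }L\geq M\lambda_{0},\qquad\text{hence}\qquad\left(\fint_{\mathcal{B}_{1}}|D^{\tau_{m}}d_{\alpha_{m}+s}\tilde{u}|^{\tilde{q}_{m+1}}\,d\mu_{\tau_{m}}\right)^{\frac{1}{\tilde{q}_{m+1}}}=\lim_{L\to\infty}\tilde{\phi}_{L}(1)\leq cA,\]
where \(A\) denotes the \(L\)-independent data terms on the right-hand side of (5.61) and the last equality follows from the monotone convergence theorem.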
Using the change of variables and (5.60), we find that
\[\begin{split}&\left(\fint_{\mathcal{B}_{r}(x_{0})}|D^{\tau_{m}}d_{\alpha_{m}+s}u|^{\tilde{q}_{m+1}}\,d\mu_{\tau_{m}}\right)^{\frac{1}{\tilde{q}_{m+1}}}\\&\leq c\left[\left(\fint_{\mathcal{B}_{2r}(x_{0})}|D^{\tau_{m}}d_{\alpha_{m}+s}u|^{p}\,d\mu_{\tau_{m}}\right)^{\frac{1}{p}}+\mathrm{Tail}_{s,p}\left(\frac{u-(u)_{B_{2r}(x_{0})}}{(2r)^{\alpha_{m}+\tau_{m}+s}};B_{2r}(x_{0})\right)\right]\\&\quad+c\left[\left(\fint_{\mathcal{B}_{2r}(x_{0})}|D^{\tau_{m}}d_{\alpha_{m}}f|^{\tilde{q}_{m+1}}\,d\mu_{\tau_{m}}\right)^{\frac{1}{\tilde{q}_{m+1}}}+\mathrm{Tail}_{\frac{s}{p},p}\left(\frac{f-(f)_{B_{2r}(x_{0})}}{(2r)^{\alpha_{m}+\tau_{m}}};B_{2r}(x_{0})\right)\right]\end{split}\tag{5.67}\]
for some constant \(c=c(\mathsf{data})\). Using (5.68), the fact that
\[1\leq\left(\frac{4r}{|x-y|}\right)^{\alpha_{m+1}-\alpha_{m}}\quad\text{for }(x,y)\in\mathcal{B}_{2r}(x_{0})\tag{5.69}\]
and (5.41), we further estimate the left-hand side and the third term in the right-hand side of (5.67) to find that
\[\begin{split}&\left(\fint_{\mathcal{B}_{r}(x_{0})}|D^{\tau_{m+1}}d_{\alpha_{m+1}+s}u|^{\tilde{p}_{m+1}}\,d\mu_{\tau_{m+1}}\right)^{\frac{1}{\tilde{p}_{m+1}}}\\&\leq c\left[\left(\fint_{\mathcal{B}_{2r}(x_{0})}|D^{\tau_{m}}d_{\alpha_{m}+s}u|^{p}\,d\mu_{\tau_{m}}\right)^{\frac{1}{p}}+\mathrm{Tail}_{s,p}\left(\frac{u-(u)_{B_{2r}(x_{0})}}{(2r)^{\alpha_{m+1}+\tau_{m+1}+s}};B_{2r}(x_{0})\right)\right]\\&\quad+c\left[\left(\fint_{\mathcal{B}_{2r}(x_{0})}|D^{\tau_{m+1}}d_{\alpha_{m+1}}f|^{\tilde{q}_{m+1}}\,d\mu_{\tau_{m+1}}\right)^{\frac{1}{\tilde{q}_{m+1}}}+\mathrm{Tail}_{\frac{s}{p},p}\left(\frac{f-(f)_{B_{2r}(x_{0})}}{(2r)^{\alpha_{m+1}+\tau_{m+1}}};B_{2r}(x_{0})\right)\right]\end{split}\tag{5.70}\]
for some constant \(c=c(\mathsf{data},d_{m})\), provided that \(B_{4r}(x_{0})\Subset B_{4}\). Using Holder's inequality and (5.70), we have
\[\left(\fint_{\mathcal{B}_{r}(x_{0})}|D^{\tau_{m+1}}d_{\alpha_{m+1}+s}u|^{p}\,d\mu_{\tau_{m+1}}\right)^{\frac{1}{p}}\leq\left(\fint_{\mathcal{B}_{r}(x_{0})}|D^{\tau_{m+1}}d_{\alpha_{m+1}+s}u|^{\tilde{p}_{m+1}}\,d\mu_{\tau_{m+1}}\right)^{\frac{1}{\tilde{p}_{m+1}}}<\infty.\tag{5.71}\]
On the other hand, for any cube \(\mathcal{Q}\equiv\mathcal{Q}_{\frac{r}{8\sqrt{n}}}(z_{1},z_{2})\), where \(z_{1},z_{2}\in B_{4}\), satisfying (4.30), we find that
\[\left(\int_{\mathcal{Q}}|D^{\tau_{m+1}}d_{\alpha_{m+1}+s}u|^{p}\,d\mu_{\tau_{m+1}}\right)^{\frac{1}{p}}\leq c(\mathsf{data},r)\left(\int_{\mathcal{Q}}|D^{\tau_{0}}d_{s}u|^{p}\,d\mu_{\tau_{m}}\right)^{\frac{1}{p}}<\infty,\tag{5.72}\]
where we have used (5.41) and the fact that \(\frac{1}{8}\leq\frac{1}{|x-y|}\leq\frac{8\sqrt{n}}{r}\) for any \((x,y)\in\mathcal{Q}\). Thus, the standard covering argument along with (5.71) and (5.72) yields \(D^{\tau_{m+1}}d_{\alpha_{m+1}+s}u\in L^{p}_{\mathrm{loc}}\left(\mathcal{B}_{4};\mu_{\tau_{m+1}}\right)\). Lastly, using (5.69), we estimate the first term in the right-hand side of (5.70) to get (5.46) with \(m\) there, replaced by \(m+1\), where \(d_{m+1}=d_{m+1}(\mathsf{data},d_{m})\).

**Corollary 5.4**.: _Let \(u\) be a weak solution to (5.5) with (5.35). Then there is a small constant \(\delta=\delta(\mathsf{data})\) such that if \(A\) is \((\delta,2)\)-vanishing in \(B_{4}\times B_{4}\) only at the diagonal, then (5.46) holds with \(m=n_{\sigma}\), where \(d_{n_{\sigma}}=d_{n_{\sigma}}(\mathsf{data})\), and \(D^{\tau_{n_{\sigma}}}d_{\alpha_{n_{\sigma}}+s}u\in L^{p}_{\mathrm{loc}}\left(\mathcal{B}_{4};\mu_{\tau_{n_{\sigma}}}\right)\) with the estimate_
\[\begin{split}&\left(\fint_{\mathcal{B}_{\frac{r}{2^{n_{\sigma}-1}}}(x_{0})}|D^{\tau_{n_{\sigma}}}d_{\alpha_{n_{\sigma}}+s}u|^{p}\,d\mu_{\tau_{n_{\sigma}}}\right)^{\frac{1}{p}}\\&\leq c\left[\left(\fint_{\mathcal{B}_{2r}(x_{0})}|D^{\tau_{0}}d_{s}u|^{p}\,d\mu_{\tau_{0}}\right)^{\frac{1}{p}}+\mathrm{Tail}_{s,p}\left(\frac{u-(u)_{B_{2r}(x_{0})}}{(2r)^{\tau_{0}+s}};B_{2r}(x_{0})\right)\right]\\&\quad+c\left[\left(\fint_{\mathcal{B}_{2r}(x_{0})}|D^{\tau_{n_{\sigma}}}d_{\alpha_{n_{\sigma}}}f|^{q}\,d\mu_{\tau_{n_{\sigma}}}\right)^{\frac{1}{q}}+\mathrm{Tail}_{\frac{s}{p},p}\left(\frac{f-(f)_{B_{2r}(x_{0})}}{(2r)^{\tau_{0}}};B_{2r}(x_{0})\right)\right]\end{split}\tag{5.73}\]
_for some constant \(c=c(\mathsf{data})\) whenever \(B_{2r}(x_{0})\Subset B_{4}\)._

Proof.: From Lemma 2.4, \(f\in W^{\sigma,q}_{\mathrm{loc}}(B_{4})\) implies \(f\in W^{\left(1-\frac{p}{\tilde{q}_{0}}\right)\tau_{0},\tilde{q}_{0}}_{\mathrm{loc}}(B_{4})\), which is equivalent to
\[d_{0}f\in L^{\tilde{q}_{0}}_{\mathrm{loc}}\left(\mathcal{B}_{4};\frac{dx\,dy}{|x-y|^{n+(\tilde{q}_{0}-p)\tau_{0}}}\right).\]
We now apply Theorem 1.2 with \(\sigma=\left(1-\frac{p}{\tilde{q}_{0}}\right)\tau_{0}\) to find a small constant \(\delta_{0}=\delta_{0}(\mathsf{data})\in(0,1)\).
Thus if \(\delta<\delta_{0}\), then we have
\[\begin{split}&\left(\fint_{\mathcal{B}_{r}(x_{0})}|D^{\tau_{0}}d_{s}u|^{\tilde{q}_{0}}\,d\mu_{\tau_{0}}\right)^{\frac{1}{\tilde{q}_{0}}}\\&\leq d_{0}\left[\left(\fint_{\mathcal{B}_{2r}(x_{0})}|D^{\tau_{0}}d_{s}u|^{p}\,d\mu_{\tau_{0}}\right)^{\frac{1}{p}}+\mathrm{Tail}_{s,p}\left(\frac{u-(u)_{B_{2r}(x_{0})}}{(2r)^{\tau_{0}+s}};B_{2r}(x_{0})\right)\right]\\&\quad+d_{0}\left[\left(\fint_{\mathcal{B}_{2r}(x_{0})}|D^{\tau_{0}}d_{0}f|^{\tilde{q}_{0}}\,d\mu_{\tau_{0}}\right)^{\frac{1}{\tilde{q}_{0}}}+\mathrm{Tail}_{\frac{s}{p},p}\left(\frac{f-(f)_{B_{2r}(x_{0})}}{(2r)^{\tau_{0}}};B_{2r}(x_{0})\right)\right]\end{split}\]
for some constant \(d_{0}=d_{0}(\mathsf{data})\), provided that \(B_{2r}(x_{0})\Subset B_{4}\). Since Holder's inequality yields that
\[\left(\frac{1}{\tau_{0}}\fint_{\mathcal{B}_{r}(x_{0})}|D^{\tau_{0}}d_{s}u|^{\tilde{p}_{0}}\,d\mu_{\tau_{0}}\right)^{\frac{1}{\tilde{p}_{0}}}\leq c\left(\fint_{\mathcal{B}_{r}(x_{0})}|D^{\tau_{0}}d_{s}u|^{\tilde{q}_{0}}\,d\mu_{\tau_{0}}\right)^{\frac{1}{\tilde{q}_{0}}},\]
the above two inequalities give (5.46) with \(m=0\), where \(d_{0}=d_{0}(\mathsf{data})\). By applying Lemma 5.3 with \(m=0\), we have (5.46) with \(m=1\), where \(d_{1}=d_{1}(\mathsf{data})\), whenever \(\delta<\min\{\delta_{0},\delta_{1}\}\), where the constant \(\delta_{1}\) is determined by Lemma 5.3 with \(m=0\). By iterating this procedure \(n_{\sigma}-1\) times, we conclude that if \(\delta<\min_{h=0,1,\ldots,n_{\sigma}}\delta_{h}\), where \(\delta_{h}\) is the constant given in Lemma 5.3 with \(m=h-1\) for \(h\geq 1\), then \(D^{\tau_{n_{\sigma}}}d_{\alpha_{n_{\sigma}}+s}u\in L^{p}_{\mathrm{loc}}(\mathcal{B}_{4};\mu_{\tau_{n_{\sigma}}})\) and (5.46) with \(m=n_{\sigma}\) holds, where \(d_{n_{\sigma}}=d_{n_{\sigma}}(\mathsf{data})\). We are now in a position to prove (5.73).
Combining the first inequality in (5.71) and (5.70) with \(r\) there, replaced by \(\frac{r}{2^{m}}\), respectively, and then applying Holder's inequality along with the fact that \(\tilde{q}_{m+1}<q\) to the third term in the right-hand side of (5.70), we get that
\[\begin{split}&\left(\fint_{\mathcal{B}_{\frac{r}{2^{m}}}(x_{0})}|D^{\tau_{m+1}}d_{\alpha_{m+1}+s}u|^{p}\,d\mu_{\tau_{m+1}}\right)^{\frac{1}{p}}\\&\leq c\left[\left(\fint_{\mathcal{B}_{\frac{r}{2^{m-1}}}(x_{0})}|D^{\tau_{m}}d_{\alpha_{m}+s}u|^{p}\,d\mu_{\tau_{m}}\right)^{\frac{1}{p}}+\mathrm{Tail}_{s,p}\left(\frac{u-(u)_{B_{\frac{r}{2^{m-1}}}(x_{0})}}{\left(\frac{r}{2^{m-1}}\right)^{\tau_{0}+s}};B_{\frac{r}{2^{m-1}}}(x_{0})\right)\right]\\&\quad+c\left[\left(\fint_{\mathcal{B}_{\frac{r}{2^{m-1}}}(x_{0})}|D^{\tau_{m+1}}d_{\alpha_{m+1}}f|^{q}\,d\mu_{\tau_{m+1}}\right)^{\frac{1}{q}}+\mathrm{Tail}_{\frac{s}{p},p}\left(\frac{f-(f)_{B_{\frac{r}{2^{m-1}}}(x_{0})}}{\left(\frac{r}{2^{m-1}}\right)^{\tau_{0}}};B_{\frac{r}{2^{m-1}}}(x_{0})\right)\right].\end{split}\tag{5.74}\]
Combine the above estimates (5.74) for \(m=0,1,\ldots,n_{\sigma}-1\) to see that
\[\begin{split}&\left(\fint_{\mathcal{B}_{\frac{r}{2^{n_{\sigma}-1}}}(x_{0})}|D^{\tau_{n_{\sigma}}}d_{\alpha_{n_{\sigma}}+s}u|^{p}\,d\mu_{\tau_{n_{\sigma}}}\right)^{\frac{1}{p}}\\&\leq c\left[\left(\fint_{\mathcal{B}_{2r}(x_{0})}|D^{\tau_{0}}d_{s}u|^{p}\,d\mu_{\tau_{0}}\right)^{\frac{1}{p}}+\sum_{m=0}^{n_{\sigma}-1}\mathrm{Tail}_{s,p}\left(\frac{u-(u)_{B_{\frac{r}{2^{m-1}}}(x_{0})}}{\left(\frac{r}{2^{m-1}}\right)^{\tau_{0}+s}};B_{\frac{r}{2^{m-1}}}(x_{0})\right)\right]\\&\quad+c\sum_{m=0}^{n_{\sigma}-1}\left[\left(\fint_{\mathcal{B}_{\frac{r}{2^{m-1}}}(x_{0})}|D^{\tau_{m+1}}d_{\alpha_{m+1}}f|^{q}\,d\mu_{\tau_{m+1}}\right)^{\frac{1}{q}}+\mathrm{Tail}_{\frac{s}{p},p}\left(\frac{f-(f)_{B_{\frac{r}{2^{m-1}}}(x_{0})}}{\left(\frac{r}{2^{m-1}}\right)^{\tau_{0}}};B_{\frac{r}{2^{m-1}}}(x_{0})\right)\right],\end{split}\tag{5.75}\]
where \(c=c(\mathsf{data})\). We next note from (5.41) and (5.52) that for any \(\mathcal{B}\Subset B_{4}\),
\[\left(\fint_{\mathcal{B}}|D^{\tau_{m-1}}d_{\alpha_{m-1}}f|^{q}\,d\mu_{\tau_{m-1}}\right)^{\frac{1}{q}}\leq c\left(\fint_{\mathcal{B}}|D^{\tau_{m}}d_{\alpha_{m}}f|^{q}\,d\mu_{\tau_{m}}\right)^{\frac{1}{q}},\tag{5.76}\]
where \(c=c(\mathsf{data})\). In addition, applying (2.9) with \(\tau=\tau_{0}\), \(t=s\), \(\alpha=0\), \(\rho=\frac{r}{2^{m-1}}\) and \(i=m\) leads to
\[\begin{split}\sum_{m=0}^{n_{\sigma}-1}\mathrm{Tail}_{s,p}\left(\frac{u-(u)_{B_{\frac{r}{2^{m-1}}}(x_{0})}}{\left(\frac{r}{2^{m-1}}\right)^{\tau_{0}+s}};B_{\frac{r}{2^{m-1}}}(x_{0})\right)&\leq c\left(\fint_{\mathcal{B}_{2r}(x_{0})}|D^{\tau_{0}}d_{s}u|^{p}\,d\mu_{\tau_{0}}\right)^{\frac{1}{p}}\\&\quad+c\,\mathrm{Tail}_{s,p}\left(\frac{u-(u)_{B_{2r}(x_{0})}}{(2r)^{\tau_{0}+s}};B_{2r}(x_{0})\right)\end{split}\tag{5.77}\]
for some constant \(c=c(\mathsf{data})\), as \(n_{\sigma}\) and \(\tau_{0}\) depend only on \(\mathsf{data}\).
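Before continuing, we note that the chaining behind (5.75) is the usual telescoping. Writing \(F_{m}\) for the left-hand side of (5.74) at step \(m-1\) (so that \(F_{0}\) lives on \(\mathcal{B}_{2r}(x_{0})\)) and \(G_{m}\) for the remaining data terms at scale \(\frac{r}{2^{m-1}}\) (notation introduced only for this remark), the estimate (5.74) reads
\[F_{m+1}\leq c\,(F_{m}+G_{m}),\qquad\text{whence}\qquad F_{n_{\sigma}}\leq c^{\,n_{\sigma}}F_{0}+\sum_{m=0}^{n_{\sigma}-1}c^{\,n_{\sigma}-m}G_{m},\]
and since \(n_{\sigma}\) depends only on the data, this yields (5.75).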
Similarly, applying (2.9) with \(u(x)\), \(s\), \(\tau\), \(t\), \(\alpha\), \(\rho\) and \(i\) there, replaced by \(f(x)\), \(\frac{s}{p}\), \(\tau_{0}\), \(0\), \(0\), \(\frac{r}{2^{m-1}}\) and \(m\), respectively, and then using Holder's inequality along with the fact that \(p<q\), we find
\[\begin{split}\sum_{m=0}^{n_{\sigma}-1}\mathrm{Tail}_{\frac{s}{p},p}\left(\frac{f-(f)_{B_{\frac{r}{2^{m-1}}}(x_{0})}}{\left(\frac{r}{2^{m-1}}\right)^{\tau_{0}}};B_{\frac{r}{2^{m-1}}}(x_{0})\right)&\leq c\left(\fint_{\mathcal{B}_{2r}(x_{0})}|D^{\tau_{0}}d_{0}f|^{q}\,d\mu_{\tau_{0}}\right)^{\frac{1}{q}}\\&\quad+c\,\mathrm{Tail}_{\frac{s}{p},p}\left(\frac{f-(f)_{B_{2r}(x_{0})}}{(2r)^{\tau_{0}}};B_{2r}(x_{0})\right).\end{split}\tag{5.78}\]
We combine the estimates (5.75), (5.76), (5.77) and (5.78) to obtain (5.73).

We now give the result corresponding to Lemma 5.1.

**Lemma 5.5**.: _Let \(u\) be a weak solution to (5.5) with (5.35) and \(D^{\tau_{n_{\sigma}}}d_{\alpha_{n_{\sigma}}+s}u\in L^{p_{h}}_{\mathrm{loc}}\left(\mathcal{B}_{4}\,;\mu_{\tau_{n_{\sigma}}}\right)\) for some nonnegative integer \(h\leq l_{q}\). Then there are a small positive constant \(\delta=\delta(\mathsf{data})\) and a positive constant \(c=c(\mathsf{data})\) such that if \(A\) is \((\delta,2)\)-vanishing in \(B_{4}\times B_{4}\) only at the diagonal, then we have_
\[\begin{split}\left(\fint_{\mathcal{B}_{1}}|D^{\tau_{n_{\sigma}}}d_{\alpha_{n_{\sigma}}+s}u|^{\hat{q}}\,d\mu_{\tau_{n_{\sigma}}}\right)^{\frac{1}{\hat{q}}}&\leq c\left(\left(\fint_{\mathcal{B}_{2}}|D^{\tau_{n_{\sigma}}}d_{\alpha_{n_{\sigma}}+s}u|^{p_{h}}\,d\mu_{\tau_{n_{\sigma}}}\right)^{\frac{1}{p_{h}}}+\mathrm{Tail}_{s,p}\left(\frac{u-(u)_{B_{2}}}{2^{\alpha_{n_{\sigma}}+\tau_{n_{\sigma}}+s}};B_{2}\right)\right)\\&\quad+c\left(\left(\fint_{\mathcal{B}_{2}}|D^{\tau_{n_{\sigma}}}d_{\alpha_{n_{\sigma}}}f|^{\hat{q}}\,d\mu_{\tau_{n_{\sigma}}}\right)^{\frac{1}{\hat{q}}}+\mathrm{Tail}_{\frac{s}{p},p}\left(\frac{f-(f)_{B_{2}}}{2^{\alpha_{n_{\sigma}}+\tau_{n_{\sigma}}}};B_{2}\right)\right),\end{split}\tag{5.79}\]
_where the constant \(\hat{q}\) is given in (5.8)._

Proof.: Let us first assume \(\delta\leq\hat{\delta}\), where the constant \(\hat{\delta}\) is given in Corollary 5.4 and depends only on \(\mathsf{data}\). Therefore, applying Corollary 5.4, we have (5.46) with \(m=n_{\sigma}\), where \(d_{n_{\sigma}}=d_{n_{\sigma}}(\mathsf{data})\). For a parameter \(\epsilon\in(0,1)\) to be determined later, we take \(\delta=\min\{\hat{\delta},\tilde{\delta}\}\), where the constant \(\tilde{\delta}=\tilde{\delta}(\mathsf{data},\epsilon)\) is given in Lemma 5.2 with \(m=n_{\sigma}\). Let \(1\leq r_{1}<r_{2}\leq 2\).
We now apply Lemma 4.1 with \(\alpha=\alpha_{n_{\sigma}}\), \(\tilde{p}=p_{h}\), \(\tilde{q}=\tilde{q}_{n_{\sigma}}\), \(\tilde{\gamma}=\gamma_{h}\) and \(\tau=\tau_{n_{\sigma}}\) to find families of countable disjoint diagonal balls and off-diagonal cubes, \(\{\mathcal{B}_{\rho_{x_{i}}}(x_{i})\}_{i\in\mathbb{N}}\) and \(\{\mathcal{Q}\}_{\mathcal{Q}\in\mathcal{A}}\), such that
\[\{(x,y)\in\mathcal{B}_{r_{1}}\,;\,|D^{\tau_{n_{\sigma}}}d_{\alpha_{n_{\sigma}}+s}u|(x,y)>\lambda\}\subset\left(\bigcup_{i\in\mathbb{N}}\mathcal{B}_{5\rho_{x_{i}}}(x_{i})\right)\bigcup\left(\bigcup_{\mathcal{Q}\in\mathcal{A}}\mathcal{Q}\right)\]
provided that \(\lambda\geq\lambda_{0}\), where \(\lambda_{0}\) is given in (4.5) with \(\alpha=\alpha_{n_{\sigma}}\), \(\tilde{p}=p_{h}\), \(\tilde{q}=\tilde{q}_{n_{\sigma}}\) and \(\tau=\tau_{n_{\sigma}}\). Furthermore, Lemma 4.1 yields (4.6), (4.7) and (4.8) with \(\alpha=\alpha_{n_{\sigma}}\), \(\tilde{p}=p_{h}\), \(\tilde{q}=\tilde{q}_{n_{\sigma}}\), \(\tilde{\gamma}=\gamma_{h}\) and \(\tau=\tau_{n_{\sigma}}\). In addition, it follows from (4.58) with \(\tilde{p}=p_{h}\), \(\tilde{q}=\tilde{q}_{n_{\sigma}}\), \(\tau=\tau_{n_{\sigma}}\) and \(\alpha=\alpha_{n_{\sigma}}\) that
\[\frac{1}{\tau_{n_{\sigma}}}\fint_{\mathcal{B}_{20\rho_{x_{i}}}(x_{i})}|D^{\tau_{n_{\sigma}}}d_{\alpha_{n_{\sigma}}+s}u|^{p}\,d\mu_{\tau_{n_{\sigma}}}+\mathrm{Tail}_{s,p}\left(\frac{u-(u)_{B_{20\rho_{x_{i}}}}}{(20\rho_{x_{i}})^{\alpha_{n_{\sigma}}+\tau_{n_{\sigma}}+s}};B_{20\rho_{x_{i}}}(x_{i})\right)^{p}\leq(c_{1}\lambda)^{p},\]
\[\frac{1}{\tau_{n_{\sigma}}}\fint_{\mathcal{B}_{20\rho_{x_{i}}}(x_{i})}|D^{\tau_{n_{\sigma}}}d_{\alpha_{n_{\sigma}}}f|^{\tilde{q}_{n_{\sigma}}}\,d\mu_{\tau_{n_{\sigma}}}+\mathrm{Tail}_{\frac{s}{p},p}\left(\frac{f-(f)_{B_{20\rho_{x_{i}}}}}{(20\rho_{x_{i}})^{\alpha_{n_{\sigma}}+\tau_{n_{\sigma}}}};B_{20\rho_{x_{i}}}(x_{i})\right)^{\tilde{q}_{n_{\sigma}}}\leq(c_{1}\lambda\delta)^{\tilde{q}_{n_{\sigma}}}\]
for some constant \(c_{1}=c_{1}(n,s,p,q,\sigma)\), by the fact that \(\frac{s}{p-1}-(\alpha_{n_{\sigma}}+\tau_{n_{\sigma}})=\frac{s}{p-1}-\tau_{0}>0\).
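We record that the identity used here is immediate from (5.41): for every \(m\),
\[\alpha_{m}+\tau_{m}=(\tau_{0}-\tau_{m})+\tau_{m}=\tau_{0},\qquad\text{so that}\qquad\frac{s}{p-1}-(\alpha_{m}+\tau_{m})=\frac{s}{p-1}-\tau_{0}>0,\]
the last inequality holding since \(s+\tau_{0}<\min\left\{1,\frac{sp}{p-1}\right\}\) by (5.37) and (5.40).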
Applying Lemma 5.2 with \(\lambda\) there, replaced by \(c_{1}\lambda\), we find a weak solution \(v\) to (5.48) such that
\[\frac{1}{\tau_{n_{\sigma}}}\fint_{\mathcal{B}_{5\rho_{x_{i}}}(x_{i})}|D^{\tau_{n_{\sigma}}}d_{\alpha_{n_{\sigma}}+s}(u-v)|^{p}\,d\mu_{\tau_{n_{\sigma}}}\leq(\epsilon c_{1}\lambda)^{p}\quad\text{and}\quad\|D^{\tau_{n_{\sigma}}}d_{\alpha_{n_{\sigma}}+s}v\|_{L^{\infty}(\mathcal{B}_{5\rho_{x_{i}}}(x_{i}))}\leq c_{2}c_{1}\lambda.\]
Following the same arguments as for the estimates of \(I\) and \(J\) in Lemma 5.1 with \(\mu_{\tau}\), \(D^{\tau}d_{s}u\), \(D^{\tau}d_{0}f\) and \(|D^{\tau}d_{0}f|^{p}\) there, replaced by \(\mu_{\tau_{n_{\sigma}}}\), \(D^{\tau_{n_{\sigma}}}d_{\alpha_{n_{\sigma}}+s}u\), \(D^{\tau_{n_{\sigma}}}d_{\alpha_{n_{\sigma}}}f\) and \(|D^{\tau_{n_{\sigma}}}d_{\alpha_{n_{\sigma}}}f|^{\tilde{q}_{n_{\sigma}}}\), respectively, along with the fact that \(\gamma_{h}>\hat{q}>\tilde{q}_{n_{\sigma}}\), we have that if \(L>\lambda_{0}\), then
\[\begin{split}\phi_{L}(r_{1})&\leq\frac{1}{2}\phi_{L}(r_{2})+\frac{c}{(r_{2}-r_{1})^{2n}}\left(\left(\fint_{\mathcal{B}_{2}}|D^{\tau_{n_{\sigma}}}d_{\alpha_{n_{\sigma}}+s}u|^{p_{h}}\,d\mu_{\tau_{n_{\sigma}}}\right)^{\frac{1}{p_{h}}}+\mathrm{Tail}_{s,p}\left(\frac{u-(u)_{B_{2}}}{2^{\alpha_{n_{\sigma}}+\tau_{n_{\sigma}}+s}};B_{2}\right)\right)\\&\quad+\frac{c}{(r_{2}-r_{1})^{2n}}\left(\left(\fint_{\mathcal{B}_{2}}|D^{\tau_{n_{\sigma}}}d_{\alpha_{n_{\sigma}}}f|^{\hat{q}}\,d\mu_{\tau_{n_{\sigma}}}\right)^{\frac{1}{\hat{q}}}+\mathrm{Tail}_{\frac{s}{p},p}\left(\frac{f-(f)_{B_{2}}}{2^{\alpha_{n_{\sigma}}+\tau_{n_{\sigma}}}};B_{2}\right)\right)\end{split}\]
for some constant \(c=c(\mathsf{data})\), by taking \(\epsilon\in(0,1)\) sufficiently small depending only on \(\mathsf{data}\). Using the technical lemma 2.8 and then passing to the limit \(L\to\infty\) as in the last part of the proof of (5.7), we get the desired result (5.79).

We now give the complete proof of our main theorem 1.2.

**Proof of Theorem 1.2.** Since we have proved Theorem 1.2 when (5.6) holds in Section 5.1, we may assume (5.35). Take \(\delta=\delta(\mathsf{data})\in(0,1)\) determined in Lemma 5.5. We note from the first line of the proof of Lemma 5.5 that \(\delta\leq\hat{\delta}\), where the constant \(\hat{\delta}\) is given in Corollary 5.4. We first want to show that \(D^{\tau_{n_{\sigma}}}d_{\alpha_{n_{\sigma}}+s}u\in L^{p}_{\mathrm{loc}}\left(\Omega\times\Omega\,;\mu_{\tau_{n_{\sigma}}}\right)\). Let \(B_{2r}(z_{0})\Subset\Omega\) with \(r\in(0,R]\). Then we define for \(x,y\in\mathbb{R}^{n}\),
\[\tilde{u}(x)=\left(\frac{2}{r}\right)^{s}u\left(\frac{r}{2}x+z_{0}\right),\quad\tilde{f}(x)=f\left(\frac{r}{2}x+z_{0}\right),\quad\tilde{A}(x,y)=A\left(\frac{r}{2}x+z_{0},\frac{r}{2}y+z_{0}\right)\]
to see that \(\tilde{u}\) is a weak solution to (5.5) with \(f\) and \(A\) there, replaced by \(\tilde{f}\) and \(\tilde{A}\), respectively. By Corollary 5.4, we have that \(D^{\tau_{n_{\sigma}}}d_{\alpha_{n_{\sigma}}+s}\tilde{u}\in L^{p}_{\mathrm{loc}}\left(\mathcal{B}_{4};\mu_{\tau_{n_{\sigma}}}\right)\), which is equivalent to \(D^{\tau_{n_{\sigma}}}d_{\alpha_{n_{\sigma}}+s}u\in L^{p}_{\mathrm{loc}}\left(\mathcal{B}_{2r}(z_{0})\,;\mu_{\tau_{n_{\sigma}}}\right)\).
On the other hand, for any cube \(\mathcal{Q}\equiv\mathcal{Q}_{\frac{r}{2\sqrt{n}}}(z_{1},z_{2})\Subset\Omega\times\Omega\) satisfying (4.30), we have \[\left(\int_{\mathcal{Q}}|D^{\tau_{n_{\sigma}}}d_{\alpha_{n_{\sigma}}+s}u|^{p}\,d\mu_{\tau_{n_{\sigma}}}\right)^{\frac{1}{p}}\leq c(\mathsf{data},\operatorname{diam}(\Omega))\left(\int_{\mathcal{Q}}|D^{\tau_{0}}d_{s}u|^{p}\,d\mu_{\tau_{n_{\sigma}}}\right)^{\frac{1}{p}}<\infty,\] where we have used (5.41) and the fact that \[\frac{r}{2\sqrt{n}}\leq|x-y|\leq 2\operatorname{diam}(\Omega)\quad\text{for any }(x,y)\in\mathcal{Q}.\] Since any compact subset of \(\Omega\times\Omega\) is covered by finitely many balls \(\mathcal{B}_{r}(z_{i})\) with \(r\in(0,R]\) and cubes \(\mathcal{Q}_{\frac{r}{2\sqrt{n}}}(z_{1,i},z_{2,i})\) with (4.30) for \(z_{i},z_{1,i},z_{2,i}\in\Omega\), this proves that \(D^{\tau_{n_{\sigma}}}d_{\alpha_{n_{\sigma}}+s}u\in L^{p}_{\text{loc}}\left(\Omega\times\Omega\,;\,\mu_{\tau_{n_{\sigma}}}\right)\). We next want to show that \(D^{\tau_{n_{\sigma}}}d_{\alpha_{n_{\sigma}}+s}u\in L^{q}_{\text{loc}}\left(\Omega\times\Omega\,;\,\mu_{\tau_{n_{\sigma}}}\right)\) and (1.8). Let \(\mathcal{B}_{2r}(x_{0})\Subset\Omega\) and \(r\in(0,R]\). Since Lemma 5.5 plays the role of Lemma 5.1 here, by following the same arguments as in the proof of (1.8) under (5.6), with \(D^{\tau}d_{s}u\), \(D^{\tau}d_{0}f\) and \(\mu_{\tau}\) there replaced by \(D^{\tau_{n_{\sigma}}}d_{\alpha_{n_{\sigma}}+s}u\), \(D^{\tau_{n_{\sigma}}}d_{\alpha_{n_{\sigma}}}f\) and \(\mu_{\tau_{n_{\sigma}}}\), respectively, we have that for \(y_{0}\in B_{r}(x_{0})\) and \(\hat{q}\) defined in (5.20), \[\begin{split}&\left(\fint_{\mathcal{B}_{\frac{r}{2^{n_{\sigma}+2}}}(y_{0})}|D^{\tau_{n_{\sigma}}}d_{\alpha_{n_{\sigma}}+s}u|^{\hat{q}}\,d\mu_{\tau_{n_{\sigma}}}\right)^{\frac{1}{\hat{q}}}\\ &\leq c\left(\left(\fint_{\mathcal{B}_{\frac{r}{2^{n_{\sigma}+2}}}(y_{0})}|D^{\tau_{n_{\sigma}}}d_{\alpha_{n_{\sigma}}+s}u|^{p}\,d\mu_{\tau_{n_{\sigma}}}\right)^{\frac{1}{p}}+\operatorname{Tail}_{s,p}\left(\frac{u-(u)_{B_{\frac{r}{2^{n_{\sigma}+2}}}(y_{0})}}{\left(\frac{r}{2^{n_{\sigma}+2}}\right)^{\alpha_{n_{\sigma}}+\tau_{n_{\sigma}}+s}};B_{\frac{r}{2^{n_{\sigma}+2}}}(y_{0})\right)\right)\\ &\quad+c\left(\left(\fint_{\mathcal{B}_{\frac{r}{2^{n_{\sigma}+2}}}(y_{0})}|D^{\tau_{n_{\sigma}}}d_{\alpha_{n_{\sigma}}}f|^{\tilde{q}}\,d\mu_{\tau_{n_{\sigma}}}\right)^{\frac{1}{\tilde{q}}}+\operatorname{Tail}_{\frac{s}{p},p}\left(\frac{f-(f)_{B_{\frac{r}{2^{n_{\sigma}+2}}}(y_{0})}}{\left(\frac{r}{2^{n_{\sigma}+2}}\right)^{\alpha_{n_{\sigma}}+\tau_{n_{\sigma}}}};B_{\frac{r}{2^{n_{\sigma}+2}}}(y_{0})\right)\right).\end{split} \tag{5.80}\] We now insert (5.73) with \(\frac{r}{2^{n_{\sigma}-1}}\) and \(x_{0}\) there replaced by \(\frac{r}{2^{n_{\sigma}+3}}\) and \(y_{0}\), respectively, into (5.80), and then use (5.41) and (2.9) for the two tail terms in (5.80), in order to get that
\[\begin{split}&\left(\fint_{\mathcal{B}_{\frac{r}{2^{n_{\sigma}+3}}}(y_{0})}|D^{\tau_{n_{\sigma}}}d_{\alpha_{n_{\sigma}}+s}u|^{\hat{q}}\,d\mu_{\tau_{n_{\sigma}}}\right)^{\frac{1}{\hat{q}}}\\ &\leq c\left(\left(\fint_{\mathcal{B}_{\frac{r}{4}}(y_{0})}|D^{\tau_{0}}d_{s}u|^{p}\,d\mu_{\tau_{0}}\right)^{\frac{1}{p}}+\operatorname{Tail}_{s,p}\left(\frac{u-(u)_{B_{\frac{r}{4}}(y_{0})}}{\left(\frac{r}{4}\right)^{\tau_{0}+s}};B_{\frac{r}{4}}(y_{0})\right)\right)\\ &\quad+c\left(\left(\fint_{\mathcal{B}_{\frac{r}{4}}(y_{0})}|D^{\tau_{n_{\sigma}}}d_{\alpha_{n_{\sigma}}}f|^{q}\,d\mu_{\tau_{n_{\sigma}}}\right)^{\frac{1}{q}}+\operatorname{Tail}_{\frac{s}{p},p}\left(\frac{f-(f)_{B_{\frac{r}{4}}(y_{0})}}{\left(\frac{r}{4}\right)^{\tau_{0}}};B_{\frac{r}{4}}(y_{0})\right)\right)\end{split} \tag{5.81}\] for some constant \(c=c(\mathsf{data})\). For the third term in the right-hand side of (5.81), we have used Hölder's inequality along with the fact that \(p,\hat{q}\leq q\). Furthermore, (2.10) with \(\rho=\frac{r}{4}\), \(R=2r\), \(\tau=\tau_{0}\), \(t=s\) and \(\alpha=0\) gives that \[\operatorname{Tail}_{s,p}\left(\frac{u-(u)_{B_{\frac{r}{4}}(y_{0})}}{\left(\frac{r}{4}\right)^{\tau_{0}+s}};B_{\frac{r}{4}}(y_{0})\right)\leq c\left(\fint_{\mathcal{B}_{2r}(x_{0})}|D^{\tau_{0}}d_{s}u|^{p}\,d\mu_{\tau_{0}}\right)^{\frac{1}{p}}+c\,\operatorname{Tail}_{s,p}\left(\frac{u-(u)_{B_{2r}(x_{0})}}{(2r)^{\tau_{0}+s}};B_{2r}(x_{0})\right). \tag{5.82}\] Similarly, (2.10) with \(u=f\), \(s=\frac{s}{p}\), \(\rho=\frac{r}{4}\), \(R=2r\), \(\tau=\tau_{n_{\sigma}}\), \(t=0\) and \(\alpha=\alpha_{n_{\sigma}}\), and Hölder's inequality along with the fact that \(p < q\) imply that \[\begin{split}\operatorname{Tail}_{\frac{s}{p},p}\left(\frac{f-(f)_{B_{\frac{r}{4}}(y_{0})}}{\left(\frac{r}{4}\right)^{\tau_{0}}};B_{\frac{r}{4}}(y_{0})\right)&\leq c\left(\fint_{\mathcal{B}_{2r}(x_{0})}|D^{\tau_{n_{\sigma}}}d_{\alpha_{n_{\sigma}}}f|^{q}\,d\mu_{\tau_{n_{\sigma}}}\right)^{\frac{1}{q}}\\ &\quad+c\,\operatorname{Tail}_{\frac{s}{p},p}\left(\frac{f-(f)_{B_{2r}(x_{0})}}{(2r)^{\tau_{0}}};B_{2r}(x_{0})\right).\end{split} \tag{5.83}\] Combine (5.81), (5.82) and (5.83) together with the fact that \(\mathcal{B}_{\frac{r}{4}}(y_{0})\subset\mathcal{B}_{2r}(x_{0})\) to show that \[\begin{split}&\left(\fint_{\mathcal{B}_{\frac{r}{2^{n_{\sigma}+3}}}(y_{0})}|D^{\tau_{n_{\sigma}}}d_{\alpha_{n_{\sigma}}+s}u|^{\hat{q}}\,d\mu_{\tau_{n_{\sigma}}}\right)^{\frac{1}{\hat{q}}}\\ &\leq c\left(\left(\fint_{\mathcal{B}_{2r}(x_{0})}|D^{\tau_{0}}d_{s}u|^{p}\,d\mu_{\tau_{0}}\right)^{\frac{1}{p}}+\operatorname{Tail}_{s,p}\left(\frac{u-(u)_{B_{2r}(x_{0})}}{(2r)^{\tau_{0}+s}};B_{2r}(x_{0})\right)\right)\\ &\quad+c\left(\left(\fint_{\mathcal{B}_{2r}(x_{0})}|D^{\tau_{n_{\sigma}}}d_{\alpha_{n_{\sigma}}}f|^{q}\,d\mu_{\tau_{n_{\sigma}}}\right)^{\frac{1}{q}}+\operatorname{Tail}_{\frac{s}{p},p}\left(\frac{f-(f)_{B_{2r}(x_{0})}}{(2r)^{\tau_{0}}};B_{2r}(x_{0})\right)\right).\end{split} \tag{5.84}\] On the other hand, as in (5.25), Lemma 4.2 with \(\tilde{\gamma}=\gamma_{0}\), \(\tau=\tau_{n_{\sigma}}\) and \(\alpha=\alpha_{n_{\sigma}}\) yields that \[\begin{split}&\left(\fint_{\mathcal{Q}}|D^{\tau_{n_{\sigma}}}d_{\alpha_{n_{\sigma}}+s}u|^{\hat{q}}\,d\mu_{\tau_{n_{\sigma}}}\right)^{\frac{1}{\hat{q}}}\\ &\leq c\left(\fint_{\mathcal{Q}}|D^{\tau_{n_{\sigma}}}d_{\alpha_{n_{\sigma}}+s}u|^{p}\,d\mu_{\tau_{n_{\sigma}}}\right)^{\frac{1}{p}}+c\left[\sum_{d=1}^{2}\left(\frac{1}{\tau_{n_{\sigma}}}\fint_{P^{d}\mathcal{Q}}|D^{\tau_{n_{\sigma}}}d_{\alpha_{n_{\sigma}}+s}u|^{p}\,d\mu_{\tau_{n_{\sigma}}}\right)^{\frac{1}{p}}\right],\end{split} \tag{5.85}\] whenever \(\mathcal{Q}\equiv\mathcal{Q}_{\frac{r}{\sqrt{n}\,2^{n_{\sigma}+3}}}(z_{1},z_{2})\Subset\Omega\times\Omega\) satisfies (4.30). Let us assume that \(z_{1},z_{2}\in B_{r}(x_{0})\). We note that there is a constant \(c=c(n,s,p,q,\sigma)\) such that \[\frac{r^{n+p\tau_{m}}}{c}\leq\mu_{\tau_{m}}\left(\mathcal{Q}\right)\leq c\,r^{n+p\tau_{m}}\quad(m=0,1,\ldots,n_{\sigma}) \tag{5.86}\] by following the same lines as in the proof of (4.32), along with the fact that \[\frac{r}{\sqrt{n}\,2^{n_{\sigma}+3}} < |x-y| < 2r\quad\text{for any }(x,y)\in\mathcal{Q}. \tag{5.87}\]
Therefore, using (5.41), (5.86), (5.87) and (5.52), we have \[\fint_{\mathcal{Q}}|D^{\tau_{n_{\sigma}}}d_{\alpha_{n_{\sigma}}+s}u|^{p}\,d\mu_{\tau_{n_{\sigma}}}\leq c\fint_{\mathcal{Q}}|D^{\tau_{0}}d_{s}u|^{p}\,d\mu_{\tau_{0}}. \tag{5.88}\] Since \(P^{d}\mathcal{Q}\Subset\mathcal{B}_{\frac{r}{2^{n_{\sigma}+3}}}(z_{d})\), we estimate the second term in the right-hand side of (5.85) as \[c\left[\sum_{d=1}^{2}\left(\frac{1}{\tau_{n_{\sigma}}}\fint_{P^{d}\mathcal{Q}}|D^{\tau_{n_{\sigma}}}d_{\alpha_{n_{\sigma}}+s}u|^{p}\,d\mu_{\tau_{n_{\sigma}}}\right)^{\frac{1}{p}}\right]\leq c\left[\sum_{d=1}^{2}\left(\frac{1}{\tau_{n_{\sigma}}}\fint_{\mathcal{B}_{\frac{r}{2^{n_{\sigma}+3}}}(z_{d})}|D^{\tau_{n_{\sigma}}}d_{\alpha_{n_{\sigma}}+s}u|^{\hat{q}}\,d\mu_{\tau_{n_{\sigma}}}\right)^{\frac{1}{\hat{q}}}\right],\] where we have applied Hölder's inequality together with the fact that \(p\leq\hat{q}\). In light of (5.88), the above inequality and (5.84) with \(y_{0}=z_{d}\), we estimate the right-hand side of (5.85) as \[\begin{split}&\left(\fint_{\mathcal{Q}}|D^{\tau_{n_{\sigma}}}d_{\alpha_{n_{\sigma}}+s}u|^{\hat{q}}\,d\mu_{\tau_{n_{\sigma}}}\right)^{\frac{1}{\hat{q}}}\\ &\leq c\left[\left(\fint_{\mathcal{B}_{2r}(x_{0})}|D^{\tau_{0}}d_{s}u|^{p}\,d\mu_{\tau_{0}}\right)^{\frac{1}{p}}+\operatorname{Tail}_{s,p}\left(\frac{u-(u)_{B_{2r}(x_{0})}}{(2r)^{\tau_{0}+s}};B_{2r}(x_{0})\right)\right]\\ &\quad+c\left[\left(\fint_{\mathcal{B}_{2r}(x_{0})}|D^{\tau_{0}}d_{0}f|^{q}\,d\mu_{\tau_{0}}\right)^{\frac{1}{q}}+\operatorname{Tail}_{\frac{s}{p},p}\left(\frac{f-(f)_{B_{2r}(x_{0})}}{(2r)^{\tau_{0}}};B_{2r}(x_{0})\right)\right].\end{split} \tag{5.89}\] Since \(\mathcal{B}_{r}(x_{0})\) is covered by finitely many balls \(\mathcal{B}_{\frac{r}{2^{n_{\sigma}+3}}}(y_{i})\) and cubes \(\mathcal{Q}_{\frac{r}{\sqrt{n}\,2^{n_{\sigma}+3}}}\left(z_{1,i},z_{2,i}\right)\) satisfying (4.30) for \(y_{i},z_{1,i},z_{2,i}\in B_{r}(x_{0})\), (1.8) with \(q=\hat{q}\) follows by using the standard covering argument along with (5.84) and (5.89), and then a few algebraic manipulations together with (5.41). Moreover, we have \(D^{\tau_{n_{\sigma}}}d_{\alpha_{n_{\sigma}}+s}u\in L^{\hat{q}}_{\text{loc}}\left(\Omega\times\Omega\,;\,\mu_{\tau_{n_{\sigma}}}\right)\), which follows from the combination of (5.81) and (5.85) along with the fact that \(D^{\tau_{n_{\sigma}}}d_{\alpha_{n_{\sigma}}+s}u\in L^{p}_{\text{loc}}\left(\Omega\times\Omega\,;\,\mu_{\tau_{n_{\sigma}}}\right)\). If \(l_{q}=0\), then \(\hat{q}=q\). Let \(l_{q}>0\). Then \(\hat{q}=p_{1}\).
As in (5.28) and (5.80), we find that (5.90) holds, where \(y_{0}\in B_{r}(x_{0})\) and \(\hat{q}_{1}\) is defined in (5.29). Plugging (5.81) with \(\frac{r}{2^{n_{\sigma}+3}}\) there replaced by \(\frac{r}{2^{n_{\sigma}+4}}\) into (5.90), and using (2.10) as in (5.82) and (5.83), we have \[\begin{split}&\left(\fint_{\mathcal{B}_{\frac{r}{2^{n_{\sigma}+4}}}(y_{0})}|D^{\tau_{n_{\sigma}}}d_{\alpha_{n_{\sigma}}+s}u|^{\hat{q}_{1}}\,d\mu_{\tau_{n_{\sigma}}}\right)^{\frac{1}{\hat{q}_{1}}}\\ &\leq c\left(\left(\fint_{\mathcal{B}_{2r}(x_{0})}|D^{\tau_{0}}d_{s}u|^{p}\,d\mu_{\tau_{0}}\right)^{\frac{1}{p}}+\operatorname{Tail}_{s,p}\left(\frac{u-(u)_{B_{2r}(x_{0})}}{(2r)^{\tau_{0}+s}};B_{2r}(x_{0})\right)\right)\\ &\quad+c\left(\left(\fint_{\mathcal{B}_{2r}(x_{0})}|D^{\tau_{n_{\sigma}}}d_{\alpha_{n_{\sigma}}}f|^{q}\,d\mu_{\tau_{n_{\sigma}}}\right)^{\frac{1}{q}}+\operatorname{Tail}_{\frac{s}{p},p}\left(\frac{f-(f)_{B_{2r}(x_{0})}}{(2r)^{\tau_{0}}};B_{2r}(x_{0})\right)\right)\end{split} \tag{5.91}\] for some constant \(c=c(\mathsf{data})\). On the other hand, as in (5.25), Lemma 4.2 with \(\tilde{\gamma}=\gamma_{1}\), \(p=p_{1}\), \(\tau=\tau_{n_{\sigma}}\) and \(\alpha=\alpha_{n_{\sigma}}\) gives that (5.92) holds for \(\mathcal{Q}\equiv\mathcal{Q}_{\frac{r}{\sqrt{n}\,2^{n_{\sigma}+4}}}(z_{1},z_{2})\) satisfying (4.30). Let us assume \(z_{1},z_{2}\in B_{r}(x_{0})\) to see that \[\begin{split}&\left(\fint_{\mathcal{Q}}|D^{\tau_{n_{\sigma}}}d_{\alpha_{n_{\sigma}}+s}u|^{\hat{q}_{1}}\,d\mu_{\tau_{n_{\sigma}}}\right)^{\frac{1}{\hat{q}_{1}}}\\ &\leq c\left(\fint_{\mathcal{B}_{2r}(x_{0})}|D^{\tau_{0}}d_{s}u|^{p}\,d\mu_{\tau_{0}}\right)^{\frac{1}{p}}+c\left[\sum_{d=1}^{2}\left(\fint_{\mathcal{B}_{\frac{r}{2^{n_{\sigma}+3}}}(z_{d})}|D^{\tau_{n_{\sigma}}}d_{\alpha_{n_{\sigma}}+s}u|^{p_{1}}\,d\mu_{\tau_{n_{\sigma}}}\right)^{\frac{1}{p_{1}}}\right],\end{split}\] where we have used (5.88), (5.52) and the fact that \(P^{d}\mathcal{Q}\subset\mathcal{B}_{\frac{r}{2^{n_{\sigma}+3}}}(z_{d})\).
We now plug (5.84) with \(\hat{q}=p_{1}\) and \(y_{0}=z_{d}\) into the second term in the right-hand side of the above inequality, in order to find that \[\begin{split}&\left(\fint_{\mathcal{Q}}|D^{\tau_{n_{\sigma}}}d_{\alpha_{n_{\sigma}}+s}u|^{\hat{q}_{1}}\,d\mu_{\tau_{n_{\sigma}}}\right)^{\frac{1}{\hat{q}_{1}}}\\ &\leq c\left[\left(\fint_{\mathcal{B}_{2r}(x_{0})}|D^{\tau_{0}}d_{s}u|^{p}\,d\mu_{\tau_{0}}\right)^{\frac{1}{p}}+\operatorname{Tail}_{s,p}\left(\frac{u-(u)_{B_{2r}(x_{0})}}{(2r)^{\tau_{0}+s}};B_{2r}(x_{0})\right)\right]\\ &\quad+c\left[\left(\fint_{\mathcal{B}_{2r}(x_{0})}|D^{\tau_{0}}d_{0}f|^{q}\,d\mu_{\tau_{0}}\right)^{\frac{1}{q}}+\operatorname{Tail}_{\frac{s}{p},p}\left(\frac{f-(f)_{B_{2r}(x_{0})}}{(2r)^{\tau_{0}}};B_{2r}(x_{0})\right)\right].\end{split} \tag{5.93}\] Taking into account (5.91) and (5.93), the standard covering argument and a few algebraic manipulations along with (5.41) give (1.8) with \(q=\hat{q}_{1}\). Furthermore, (5.91) and (5.92), along with the fact that \(D^{\tau_{n_{\sigma}}}d_{\alpha_{n_{\sigma}}+s}u\in L^{p_{1}}_{\text{loc}}\left(\Omega\times\Omega\,;\,\mu_{\tau_{n_{\sigma}}}\right)\), imply that \(D^{\tau_{n_{\sigma}}}d_{\alpha_{n_{\sigma}}+s}u\in L^{\hat{q}_{1}}_{\text{loc}}\left(\Omega\times\Omega\,;\,\mu_{\tau_{n_{\sigma}}}\right)\). By iterating this procedure \(l_{q}-1\) times, we obtain (1.8) and \(D^{\tau_{n_{\sigma}}}d_{\alpha_{n_{\sigma}}+s}u\in L^{q}_{\text{loc}}\left(\Omega\times\Omega\,;\,\mu_{\tau_{n_{\sigma}}}\right)\). Using (5.42), we conclude that \(u\in L^{q}_{\text{loc}}\left(\Omega\times\Omega\,;\,\frac{dx\,dy}{|x-y|^{n+sq}}\right)\), which completes the proof.
2303.10358
Neural Frailty Machine: Beyond proportional hazard assumption in neural survival regressions
We present neural frailty machine (NFM), a powerful and flexible neural modeling framework for survival regressions. The NFM framework utilizes the classical idea of multiplicative frailty in survival analysis to capture unobserved heterogeneity among individuals, at the same time being able to leverage the strong approximation power of neural architectures for handling nonlinear covariate dependence. Two concrete models are derived under the framework that extend neural proportional hazard models and nonparametric hazard regression models. Both models allow efficient training under the likelihood objective. Theoretically, for both proposed models, we establish statistical guarantees of neural function approximation with respect to nonparametric components via characterizing their rates of convergence. Empirically, we provide synthetic experiments that verify our theoretical statements. We also conduct experimental evaluations over $6$ benchmark datasets of different scales, showing that the proposed NFM models outperform state-of-the-art survival models in terms of predictive performance. Our code is publicly available at https://github.com/Rorschach1989/nfm
Ruofan Wu, Jiawei Qiao, Mingzhe Wu, Wen Yu, Ming Zheng, Tengfei Liu, Tianyi Zhang, Weiqiang Wang
2023-03-18T08:15:15Z
http://arxiv.org/abs/2303.10358v2
# Neural Frailty Machine: Beyond proportional hazard assumption in neural survival regressions

###### Abstract

We present neural frailty machine (NFM), a powerful and flexible neural modeling framework for survival regressions. The NFM framework utilizes the classical idea of multiplicative frailty in survival analysis to capture unobserved heterogeneity among individuals, at the same time being able to leverage the strong approximation power of neural architectures for handling nonlinear covariate dependence. Two concrete models are derived under the framework that extend neural proportional hazard models and nonparametric hazard regression models. Both models allow efficient training under the likelihood objective. Theoretically, for both proposed models, we establish statistical guarantees of neural function approximation with respect to nonparametric components via characterizing their rates of convergence. Empirically, we provide synthetic experiments that verify our theoretical statements. We also conduct experimental evaluations over 6 benchmark datasets of different scales, showing that the proposed NFM models outperform state-of-the-art survival models in terms of predictive performance. Our code is publicly available at [https://github.com/Rorschach1989/nfm](https://github.com/Rorschach1989/nfm)

## 1 Introduction

Regression analysis of time-to-event data Kalbfleisch and Prentice (2002) has been among the most important modeling tools for clinical studies and has witnessed a growing interest in areas like corporate finance Duffie et al. (2009), recommendation systems Jing and Smola (2017), and computational advertising Wu et al. (2015). The key feature that differentiates time-to-event data from other types of data is that they are often _incompletely observed_, with the most prevailing form of incompleteness being the _right censoring_ mechanism Kalbfleisch and Prentice (2002). In the right censoring mechanism, the duration time of a sampled subject is (sometimes) only known to be larger than the observation time instead of being recorded precisely. It is well known in the community of survival analysis that, even in the case of linear regression, naively discarding the censored observations produces estimation results that are statistically biased Buckley and James (1979), and at the same time loses sample efficiency if the censoring proportion is high. Cox's proportional hazard (CoxPH) model Cox (1972), trained with the convex objective of negative partial likelihood Cox (1975), is the _de facto_ choice for modeling right censored time-to-event data (hereafter abbreviated as censored data when no confusion arises). The model is _semiparametric_ Bickel et al. (1993) in the sense that the baseline hazard function needs no parametric assumptions. The original formulation of the CoxPH model assumes a linear form and therefore has limited flexibility, since the truth is not necessarily linear. Subsequent studies extended the CoxPH model to nonlinear variants using ideas from nonparametric regression Huang (1999), Cai et al. (2007, 2008), ensemble learning Ishwaran et al. (2008), and neural networks Faraggi and Simon (1995), Katzman et al. (2018). While such extensions allowed a more flexible nonlinear dependence structure on the covariates, the learning objectives were still derived under the proportional hazards (PH) assumption, which was shown to be inadequate in many real-world scenarios Gray (2000).
The most notable case is the failure to model the phenomenon of crossing hazards Stablein and Koutrouvelis (1985). It is thus of significant interest to explore extensions of CoxPH that allow both nonlinear dependence on the covariates and relaxation of the PH assumption. Frailty models Wienke (2010), Duchateau and Janssen (2007) are among the most important research topics in modern survival analysis, in that they provide a principled way of extending the CoxPH model via incorporating a multiplicative random effect to capture unobserved heterogeneity. The resulting parameterization contains many useful variants of CoxPH, such as the proportional odds model Bennett (1983), under specific choices of frailty families. While the theory of frailty models has been well established Murphy (1994, 1995), Parner (1998), Kosorok et al. (2004), most of these works focus on the linear case. Recent developments in applying neural approaches to survival analysis Katzman et al. (2018), Kvamme et al. (2019), Tang et al. (2022), Rindt et al. (2022) have shown promising results in terms of empirical predictive performance, though most of them lack theoretical discussions. It is therefore of significant interest to build more powerful frailty models that adopt techniques from modern deep learning Goodfellow et al. (2016) while retaining provable statistical guarantees. In this paper, we present a general framework for neural extensions of frailty models called the **neural frailty machine (NFM)**. Two concrete neural architectures are derived under the framework: the first adopts the proportional frailty assumption, allowing an intuitive interpretation as a neural CoxPH model with a multiplicative random effect; the second further relaxes the proportional frailty assumption and could be viewed as an extension of nonparametric hazard regression (NHR) Cox and O'Sullivan (1990), Kooperberg et al. (1995), sometimes referred to as "fully neural" models in the context of neural survival analysis Omi et al. (2019). We summarize our contributions as follows.

* We propose the neural frailty machine (NFM) framework as a principled way of incorporating unobserved heterogeneity into neural survival regression models. The framework includes many commonly used survival regression models as special cases.
* We derive two model architectures based on the NFM framework that extend neural CoxPH models and neural NHR models. Both models allow stochastic training and scale to large datasets.
* We show theoretical guarantees for the two proposed models via characterizing the rates of convergence of the proposed nonparametric function estimators. The proof technique is different from previous theoretical studies on neural survival analysis and is applicable to many other types of neural survival models.
* We conduct extensive studies on various benchmark datasets at different scales. Under standard performance metrics, both models are empirically shown to perform competitively, matching or outperforming state-of-the-art neural survival models.

## 2 Related works

### 2.1 Nonlinear extensions of CoxPH

Most nonlinear extensions of the CoxPH model stem from the equivalence of the partial likelihood and the semiparametric profile likelihood Murphy and Van der Vaart (2000) of the CoxPH model, resulting in nonlinear variants that essentially replace the linear term in the partial likelihood with nonlinear alternatives: Huang (1999) used smoothing splines, while Cai et al. (2007, 2008) used local polynomial regression Fan and Gijbels (1996).
The empirical success of tree-based models inspired subsequent developments like Ishwaran et al. (2008) that equip tree-based models such as gradient boosting trees and random forests with losses in the form of the negative log partial likelihood. Early developments in neural survival analysis Faraggi and Simon (1995) adopted similar extension strategies and obtained neural versions of the partial likelihood. Later attempts Katzman et al. (2018) suggested using the successful practice of stochastic training, which is believed to be at the heart of the empirical success of modern neural methods Hardt et al. (2016). However, stochastic training under the partial likelihood objective is highly non-trivial, as mini-batch versions of the log partial likelihood Katzman et al. (2018) are no longer valid stochastic gradients of the full-sample log partial likelihood Tang et al. (2022).

### 2.2 Beyond CoxPH in survival analysis

In linear survival modeling, there are standard alternatives to CoxPH such as the accelerated failure time (AFT) model Buckley and James (1979), Ying (1993), the extended hazard regression model Etezadi-Amoli and Ciampi (1987), and the family of linear transformation models Zeng and Lin (2006). While these models allow certain types of nonlinear extensions, the resulting (conditional) hazard function is still restricted to a specific form. The idea of nonparametric hazard regression (NHR) Cox and O'Sullivan (1990); Kooperberg et al. (1995); Strawderman and Tsiatis (1996) further improves the flexibility of nonparametric survival analysis via directly modeling the conditional hazard function by nonparametric regression techniques such as spline approximation. Neural versions of NHR have been developed lately, such as the CoxTime model Kvamme et al. (2019). Rindt et al. (2022) used a neural network to approximate the conditional survival function and could thus be viewed as another trivial extension of NHR. Aside from developments in NHR, Lee et al. (2018) proposed a discrete-time model with its objective being a mix of the discrete likelihood and a rank-based score; Zhong et al. (2021a) proposed a neural version of the extended hazard model, unifying both neural CoxPH and neural AFT models; Tang et al. (2022) used an ODE approach to model the hazard and cumulative hazard functions.

### 2.3 Theoretical justification of neural survival models

Despite the abundance of neural survival models, assessment of their theoretical properties remains nascent. Zhong et al. (2021b) developed minimax theories of the partially linear Cox model using neural networks as the function approximator. Zhong et al. (2021a) provided convergence guarantees of neural estimates under the extended hazard model. The theoretical developments therein rely on specific forms of the objective function (partial likelihood and kernel pseudo-likelihood) and are not directly applicable to the standard likelihood-based objective which is frequently used in survival analysis.

## 3 Methodology

### 3.1 The neural frailty machine framework

Let \(\tilde{T}\geq 0\) be the event time of interest with survival function denoted by \(S(t)=\mathbb{P}(\tilde{T}>t)\), associated with a feature (covariate) vector \(Z\in\mathbb{R}^{d}\). Suppose that \(\tilde{T}\) is a continuous random variable and let \(f(t)\) be its density function. Then \(\lambda(t)=f(t)/S(t)\) is the hazard function and \(\Lambda(t)=\int_{0}^{t}\lambda(s)ds\) is the cumulative hazard function.
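For later reference, these quantities are linked by the standard identities (used implicitly when deriving (3), (5) and (7) below): \[S(t)=\exp\left(-\Lambda(t)\right),\qquad\lambda(t)=-\frac{d}{dt}\log S(t),\qquad f(t)=\lambda(t)S(t).\]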
Aside from the covariate \(Z\), we use a positive scalar random variable \(\omega\in\mathbb{R}^{+}\) to express the unobserved heterogeneity corresponding to individuals, or _frailty_. 1. In this paper we will assume the following generating scheme of \(\tilde{T}\) via specifying its conditional hazard function: Footnote 1: For example, in medical biology it was observed that genetically identical animals kept in as similar an environment as possible will typically not behave the same upon exposure to environmental carcinogens Brennan (2002) \[\lambda(t|Z,\omega)=\omega\widetilde{\nu}(t,Z). \tag{1}\] Here \(\widetilde{\nu}\) is an unspecified non-negative function, and we let the distribution of \(\omega\) be parameterized by a one-dimensional parameter \(\theta\in\mathbb{R}\). 2 The formulation (1) is quite general and contains several important models in both traditional and neural survival analysis: Footnote 2: The choice of a one-dimensional frailty family is mostly for simplicity and clearness of theoretical derivations. Note that there exist multi-dimensional frailty families like the PVF family Wienke (2010). Generalizing our theoretical results to such families would require additional sets of regularity conditions, and will be left to future explorations. 1. When \(\omega\) follows parametric distributional assumptions and \(\widetilde{\nu}(t,Z)=\lambda(t)e^{\beta^{\top}Z}\), (1) reduces to the standard proportional frailty model Kosorok et al. (2004). A special case is when \(\omega\) is degenerate, i.e., it has no randomness; then the model corresponds to the classic CoxPH model. 2. When \(\omega\) is degenerate and \(\widetilde{\nu}\) is arbitrary, the model becomes equivalent to nonparametric hazard regression (NHR) Cox and O'Sullivan (1990); Kooperberg et al. (1995). In NHR, the function parameter of interest is usually the logarithm of the (conditional) hazard function. In this paper we construct neural approximations to the logarithm of \(\widetilde{\nu}\), i.e., \(\nu(t,Z)=\log\widetilde{\nu}(t,Z)\). The resulting models are called **Neural Frailty Machines (NFM)**. Depending on the prior knowledge of the function \(\nu\), we propose two function approximation schemes: **The proportional frailty (PF) scheme** assumes the dependence of \(\nu\) on the event time and the covariates to be completely _decoupled_, i.e., \[\nu(t,Z)=h(t)+m(Z). \tag{2}\] Proportional-style assumptions over hazard functions have been shown to be a useful inductive bias in survival analysis. We will treat both \(h\) and \(m\) in (2) as function parameters, and devise two multi-layer perceptrons (MLPs) to approximate them separately. **The fully neural (FN) scheme** imposes no a priori assumptions over \(\nu\) and is the most general version of NFM. It is straightforward to see that the most commonly used survival models, such as CoxPH, AFT Ying (1993), EH Zhong et al. (2021a), or PF models, are included in the proposed model space as special cases. We treat \(\nu=\nu(t,Z)\) as the function parameter with input dimension \(d+1\) and use a multi-layer perceptron (MLP) as the function approximator to \(\nu\). Similar approximation schemes with respect to the hazard function have been proposed in some recent works Omi et al. (2019); Rindt et al. (2022), referred to as "fully neural" approaches, without theoretical characterizations. **The choice of frailty family** There are many commonly used families of frailty distributions Kosorok et al.
(2004); Duchateau and Janssen (2007); Wienke (2010), among which the most popular one is the _gamma frailty_, where \(\omega\) follows a gamma distribution with mean \(1\) and variance \(\theta\). We briefly introduce some other types of frailty families in appendix A.

### 3.2 Parameter learning under censored observations

In time-to-event modeling scenarios, the event times are typically observed under right censoring. Let \(C\) be the right censoring time, which is assumed to be conditionally independent of the event time \(\tilde{T}\) given \(Z\), i.e., \(\tilde{T}\perp C|Z\). In data collection, one can observe the minimum of the survival time and the censoring time, that is, one observes \(T=\tilde{T}\wedge C\) as well as the censoring indicator \(\delta=I(\tilde{T}\leqslant C)\), where \(a\wedge b=\min(a,b)\) for constants \(a\) and \(b\) and \(I(\cdot)\) stands for the indicator function. We assume \(n\) independent and identically distributed (i.i.d.) copies of \((T,\delta,Z)\) are used as the training sample \((T_{i},\delta_{i},Z_{i}),i\in[n]\), where we use \([n]\) to denote the set \(\{1,2,\ldots,n\}\). Additionally, we assume the unobserved frailties are independent and identically distributed, i.e., \(\omega_{i}\overset{\text{i.i.d.}}{\sim}f_{\theta}(\omega),i\in[n]\). Next, we derive the learning procedure based on the **observed log-likelihood (OLL)** objective under both the PF and FN schemes. To obtain the observed likelihood, we first integrate out the frailty in the conditional survival function: \[\begin{split} S(t|Z)&=\mathbb{E}_{\omega\sim f_{\theta}}\left[e^{-\omega\int_{0}^{t}e^{\nu(s,Z)}ds}\right]\\ &=:e^{-G_{\theta}\left(\int_{0}^{t}e^{\nu(s,Z)}ds\right)}.\end{split} \tag{3}\] Here the _frailty transform_ \(G_{\theta}(x)=-\log\left(\mathbb{E}_{\omega\sim f_{\theta}}\left[e^{-\omega x}\right]\right)\) is defined as the negative of the logarithm of the Laplace transform of the frailty distribution. The conditional cumulative hazard function is thus \(\Lambda(t|Z)=G_{\theta}(\int_{0}^{t}e^{\nu(s,Z)}ds)\). For the PF scheme of NFM, we use two MLPs \(\widehat{h}=\widehat{h}(t;\mathbf{W}^{h},\mathbf{b}^{h})\) and \(\widehat{m}=\widehat{m}(Z;\mathbf{W}^{m},\mathbf{b}^{m})\) as function approximators to \(h\) and \(m\), parameterized by \((\mathbf{W}^{h},\mathbf{b}^{h})\) and \((\mathbf{W}^{m},\mathbf{b}^{m})\), respectively. 3 According to standard results on censored data likelihood Kalbfleisch and Prentice (2002), we write the learning objective under the PF scheme as: Footnote 3: Here we adopt the conventional notation that \(\mathbf{W}\) is the collection of the weight matrices of the MLP in all layers, and \(\mathbf{b}\) corresponds to the collection of the bias vectors in all layers. \[\begin{split}&\mathcal{L}(\mathbf{W}^{h},\mathbf{b}^{h},\mathbf{W}^{m},\mathbf{b}^{m},\theta)\\ =&\frac{1}{n}\left[\sum_{i\in[n]}\delta_{i}\log g_{\theta}\left(e^{\widehat{m}(Z_{i})}\int_{0}^{T_{i}}e^{\widehat{h}(s)}ds\right)+\delta_{i}\widehat{h}(T_{i})+\delta_{i}\widehat{m}(Z_{i})-G_{\theta}\left(e^{\widehat{m}(Z_{i})}\int_{0}^{T_{i}}e^{\widehat{h}(s)}ds\right)\right].\end{split} \tag{4}\] Here we define \(g_{\theta}(x)=\frac{\partial}{\partial x}G_{\theta}(x)\).
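To make the frailty transform concrete, the following is a minimal PyTorch-style sketch for the gamma frailty case. With mean-\(1\), variance-\(\theta\) gamma frailty, the Laplace transform is \(\mathbb{E}[e^{-\omega x}]=(1+\theta x)^{-1/\theta}\), so that \(G_{\theta}(x)=\theta^{-1}\log(1+\theta x)\) and \(g_{\theta}(x)=(1+\theta x)^{-1}\); the function names below are illustrative and not those of the released implementation.

```python
import torch

def gamma_G(x: torch.Tensor, theta: torch.Tensor) -> torch.Tensor:
    # Frailty transform G_theta(x) = -log E[exp(-omega * x)] for gamma
    # frailty with mean 1 and variance theta. As theta -> 0 this tends
    # to x, recovering the degenerate (CoxPH) case.
    return torch.log1p(theta * x) / theta

def gamma_log_g(x: torch.Tensor, theta: torch.Tensor) -> torch.Tensor:
    # log g_theta(x), where g_theta(x) = dG_theta/dx = 1 / (1 + theta * x).
    return -torch.log1p(theta * x)
```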
Let \((\widehat{\mathbf{W}}_{n}^{h},\widehat{\mathbf{b}}_{n}^{h},\widehat{\mathbf{W}}_{n}^{m},\widehat{\mathbf{b}}_{n}^{m},\widehat{\theta}_{n})\) be the maximizer of (4) and further denote \(\widehat{h}_{n}(t)=\widehat{h}(t;\widehat{\mathbf{W}}_{n}^{h},\widehat{\mathbf{b}}_{n}^{h})\) and \(\widehat{m}_{n}(Z)=\widehat{m}(Z;\widehat{\mathbf{W}}_{n}^{m},\widehat{\mathbf{b}}_{n}^{m})\). The resulting estimators for the conditional cumulative hazard and survival functions are: \[\begin{split}\widehat{\Lambda}_{\mathsf{PF}}(t|Z)&=G_{\widehat{\theta}_{n}}\left(\int_{0}^{t}e^{\widehat{h}_{n}(s)+\widehat{m}_{n}(Z)}ds\right),\\ \widehat{S}_{\mathsf{PF}}(t|Z)&=e^{-\widehat{\Lambda}_{\mathsf{PF}}(t|Z)}.\end{split} \tag{5}\] For the FN scheme, we use \(\widehat{\nu}=\widehat{\nu}(t,Z;\mathbf{W}^{\nu},\mathbf{b}^{\nu})\) to approximate \(\nu(t,Z)\), parameterized by \((\mathbf{W}^{\nu},\mathbf{b}^{\nu})\). The OLL objective is written as: \[\mathcal{L}(\mathbf{W}^{\nu},\mathbf{b}^{\nu},\theta) \tag{6}\] \[=\frac{1}{n}\left[\sum_{i\in[n]}\delta_{i}\log g_{\theta}\left(\int_{0}^{T_{i}}e^{\widehat{\nu}(s,Z_{i};\mathbf{W}^{\nu},\mathbf{b}^{\nu})}ds\right)+\delta_{i}\widehat{\nu}(T_{i},Z_{i};\mathbf{W}^{\nu},\mathbf{b}^{\nu})-G_{\theta}\left(\int_{0}^{T_{i}}e^{\widehat{\nu}(s,Z_{i};\mathbf{W}^{\nu},\mathbf{b}^{\nu})}ds\right)\right].\] Let \((\widehat{\mathbf{W}}_{n}^{\nu},\widehat{\mathbf{b}}_{n}^{\nu},\widehat{\theta}_{n})\) be the maximizer of (6), and further denote \(\widehat{\nu}_{n}(t,Z)=\widehat{\nu}(t,Z;\widehat{\mathbf{W}}_{n}^{\nu},\widehat{\mathbf{b}}_{n}^{\nu})\). The conditional cumulative hazard and survival functions are therefore estimated as: \[\begin{split}\widehat{\Lambda}_{\text{FN}}(t|Z)&=G_{\widehat{\theta}_{n}}\left(\int_{0}^{t}e^{\widehat{\nu}_{n}(s,Z)}ds\right),\\ \widehat{S}_{\text{FN}}(t|Z)&=e^{-\widehat{\Lambda}_{\text{FN}}(t|Z)}.\end{split} \tag{7}\] The evaluation of objectives like (6) and their gradients requires computing a definite integral of an exponentially transformed MLP function. Instead of using exact computations, which are available only for restricted types of activation functions and network structures, we use numerical integration for such evaluations, adopting the method of Clenshaw-Curtis quadrature Boyd (2001), which has shown competitive performance and efficiency in recent applications to monotonic neural networks Wehenkel and Louppe (2019). _Remark 3.1_.: The interpretation of the frailty term differs in the two schemes. In the PF scheme, introducing the frailty effect strictly increases the modeling capability (i.e., the capability of modeling crossing hazards) in comparison to CoxPH or neural variants of CoxPH Kosorok et al. (2004). In the FN scheme, it is arguable that in the i.i.d. case, the marginal hazard function is a reparameterization of the hazard function in the context of NHR. Therefore, we view the incorporation of the frailty effect as injecting a domain-specific inductive bias that has proven to be useful in survival analysis and time-to-event regression modeling, and verify this claim empirically in section 5.2. Moreover, frailty becomes especially helpful when handling correlated or clustered data, where the frailty term is assumed to be shared among certain groups of individuals Parner (1998). Extending NFM to such scenarios is valuable and we leave it to future explorations.
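To illustrate how (6) is evaluated in practice, here is a minimal sketch of the negative OLL under gamma frailty. The paper computes the inner integral with Clenshaw-Curtis quadrature; this sketch substitutes fixed Gauss-Legendre nodes from NumPy for brevity, and assumes a hypothetical `nu_net(t, z)` that concatenates its arguments along the last axis and applies an MLP — none of the names below come from the released implementation.

```python
import numpy as np
import torch

def fn_oll_loss(nu_net, T, Z, delta, theta, n_quad=32):
    """Negative observed log-likelihood (6), FN scheme, gamma frailty.

    T: (n,) observed times, Z: (n, d) covariates, delta: (n,) event
    indicators, theta (positive scalar tensor): frailty parameter.
    """
    nodes, weights = np.polynomial.legendre.leggauss(n_quad)  # on [-1, 1]
    nodes = torch.as_tensor(nodes, dtype=T.dtype)
    weights = torch.as_tensor(weights, dtype=T.dtype)
    # Rescale the nodes from [-1, 1] to [0, T_i] for each subject i.
    t_grid = 0.5 * T[:, None] * (nodes[None, :] + 1.0)        # (n, n_quad)
    z_rep = Z[:, None, :].expand(-1, n_quad, -1)              # (n, n_quad, d)
    nu_grid = nu_net(t_grid[..., None], z_rep).squeeze(-1)    # (n, n_quad)
    # I_i = int_0^{T_i} exp(nu(s, Z_i)) ds, approximated by quadrature.
    integral = 0.5 * T * (weights[None, :] * nu_grid.exp()).sum(dim=1)
    nu_at_T = nu_net(T[:, None, None], Z[:, None, :]).squeeze()
    log_g = -torch.log1p(theta * integral)     # gamma frailty: log g_theta
    G = torch.log1p(theta * integral) / theta  # gamma frailty: G_theta
    return -(delta * (log_g + nu_at_T) - G).mean()
```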
## 4 Theoretical results

In this section, we present theoretical properties of both NFM estimates by characterizing their rates of convergence when the underlying event data follow the corresponding model assumptions. The proof technique is based on the method of sieves Shen and Wong (1994); Shen (1997); Chen (2007), which views neural networks as a special kind of nonlinear sieve Chen (2007) satisfying desirable approximation properties Yarotsky (2017). Since both models produce estimates of function parameters, we need to specify a suitable function space to work with. Here we choose the following Hölder ball, as was also used in previous works on nonparametric estimation using neural networks Schmidt-Hieber (2020); Farrell et al. (2021); Zhong et al. (2021b): \[\mathcal{W}_{M}^{\beta}(\mathcal{X})=\left\{f:\max_{\alpha:|\alpha|\leq\beta}\operatorname*{esssup}_{x\in\mathcal{X}}|D^{\alpha}(f(x))|\leq M\right\}, \tag{8}\] where the domain \(\mathcal{X}\) is assumed to be a subset of \(d\)-dimensional euclidean space, \(\alpha=(\alpha_{1},\ldots,\alpha_{d})\) is a \(d\)-dimensional tuple of nonnegative integers satisfying \(|\alpha|=\alpha_{1}+\cdots+\alpha_{d}\), and \(D^{\alpha}f=\frac{\partial^{|\alpha|}f}{\partial x_{1}^{\alpha_{1}}\cdots\partial x_{d}^{\alpha_{d}}}\) is the weak derivative of \(f\). Now assume that \(M\) is a reasonably large constant, and let \(\Theta\) be a closed interval over the real line. We make the following assumptions on the _true parameters_ under both schemes: **Condition 4.1** (True parameter, PF scheme).: _The euclidean parameter \(\theta_{0}\in\Theta\subset\mathbb{R}\), and the two function parameters \(m_{0}\in\mathcal{W}_{M}^{\beta}([-1,1]^{d})\), \(h_{0}\in\mathcal{W}_{M}^{\beta}([0,\tau])\), where \(\tau>0\) is the ending time of the study duration, a convention usually adopted in theoretical studies in survival analysis Van der Vaart (2000)._ **Condition 4.2** (True parameter, FN scheme).: _The euclidean parameter \(\theta_{0}\in\Theta\subset\mathbb{R}\), and the function parameter \(\nu_{0}\in\mathcal{W}_{M}^{\beta}([0,\tau]\times[-1,1]^{d})\)._ Next, we construct sieve spaces for function parameter approximation via restricting the complexity of the MLPs to "scale" with the sample size \(n\). **Condition 4.3** (Sieve space, PF scheme).: _The sieve space \(\mathcal{H}_{n}\) is constructed as a set of MLPs satisfying \(\widehat{h}\in\mathcal{W}^{\beta}_{M_{h}}([0,\tau])\), with depth of order \(O(\log n)\) and total number of parameters of order \(O(n^{\frac{1}{\beta+4}}\log n)\). The sieve space \(\mathcal{M}_{n}\) is constructed as a set of MLPs satisfying \(\widehat{m}\in\mathcal{W}^{\beta}_{M_{m}}([-1,1]^{d})\), with depth of order \(O(\log n)\) and total number of parameters of order \(O(n^{\frac{d}{\beta+2}}\log n)\). Here \(M_{h}\) and \(M_{m}\) are sufficiently large constants such that every function in \(\mathcal{W}^{\beta}_{M}([0,\tau])\) and \(\mathcal{W}^{\beta}_{M}([-1,1]^{d})\) can be accurately approximated by functions inside \(\mathcal{H}_{n}\) and \(\mathcal{M}_{n}\), respectively, according to [17, Theorem 1]._ **Condition 4.4** (Sieve space, FN scheme).: _The sieve space \(\mathcal{V}_{n}\) is constructed as a set of MLPs satisfying \(\widehat{\nu}\in\mathcal{W}^{\beta}_{M_{\nu}}([0,\tau]\times[-1,1]^{d})\), with depth of order \(O(\log n)\) and total number of parameters of order \(O(n^{\frac{d+1}{\beta+d+1}}\log n)\).
Here \(M_{\nu}\) is a sufficiently large constant such that \(\mathcal{V}_{n}\) satisfies approximation properties analogous to those in Condition 4.3._ For technical reasons, we will assume the nonparametric function estimators are constrained to fall inside the corresponding sieve spaces, i.e., \(\widehat{h}_{n}\in\mathcal{H}_{n}\), \(\widehat{m}_{n}\in\mathcal{M}_{n}\) and \(\widehat{\nu}_{n}\in\mathcal{V}_{n}\). This will not affect the implementation of optimization routines, as was discussed in Farrell et al. (2021). Furthermore, we restrict the estimate \(\widehat{\theta}_{n}\in\Theta\) in both the PF and FN schemes. Additionally, we need the following regularity condition on the function \(G_{\theta}(x)\): **Condition 4.5**.: \(G_{\theta}(x)\) _is viewed as a bivariate function \(G:\Theta\times\mathcal{B}\mapsto\mathbb{R}\), where \(\mathcal{B}\) is a compact set on \(\mathbb{R}\). The functions \(G_{\theta}(x)\), \(\frac{\partial}{\partial\theta}G_{\theta}(x)\), \(\frac{\partial}{\partial x}G_{\theta}(x)\), \(\log g_{\theta}(x)\), \(\frac{\partial}{\partial\theta}\log g_{\theta}(x)\) and \(\frac{\partial}{\partial x}\log g_{\theta}(x)\) are bounded on \(\Theta\times\mathcal{B}\)._ We define two metrics that measure the convergence of parameter estimates. For the PF scheme, let \(\phi_{0}=(h_{0},m_{0},\theta_{0})\) be the true parameters and \(\widehat{\phi}_{n}=(\widehat{h}_{n},\widehat{m}_{n},\widehat{\theta}_{n})\) be the estimates. We abbreviate \(\mathbb{P}_{\phi_{0},Z=z}\) as the conditional probability distribution of \((T,\delta)\) given \(Z=z\) under the true parameters, and \(\mathbb{P}_{\widehat{\phi}_{n},Z=z}\) as the conditional probability distribution of \((T,\delta)\) given \(Z=z\) under the estimates. Define the following metric: \[d_{\mathsf{PF}}\left(\widehat{\phi}_{n},\phi_{0}\right)=\sqrt{\mathbb{E}_{z\sim\mathbb{P}_{Z}}\left[H^{2}(\mathbb{P}_{\widehat{\phi}_{n},Z=z}\parallel\mathbb{P}_{\phi_{0},Z=z})\right]}, \tag{9}\] where \(H^{2}(\mathbb{P}\parallel\mathbb{Q})=\int\left(\sqrt{d\mathbb{P}}-\sqrt{d\mathbb{Q}}\right)^{2}\) is the squared Hellinger distance between probability distributions \(\mathbb{P}\) and \(\mathbb{Q}\). The case for the FN scheme is similar: let \(\psi_{0}=(\nu_{0},\theta_{0})\) be the true parameters and \(\widehat{\psi}_{n}=(\widehat{\nu}_{n},\widehat{\theta}_{n})\) be the estimates. Analogous to the definitions above, we define \(\mathbb{P}_{\psi_{0},Z=z}\) as the true conditional distribution given \(Z=z\) and \(\mathbb{P}_{\widehat{\psi}_{n},Z=z}\) as the estimated conditional distribution. We will use the following metric in the FN scheme: \[d_{\mathsf{FN}}\left(\widehat{\psi}_{n},\psi_{0}\right)=\sqrt{\mathbb{E}_{z\sim\mathbb{P}_{Z}}\left[H^{2}(\mathbb{P}_{\widehat{\psi}_{n},Z=z}\parallel\mathbb{P}_{\psi_{0},Z=z})\right]}. \tag{10}\] Now we state our main theorems. We denote \(\mathbb{P}\) as the data generating distribution and use \(\widetilde{O}\) to hide poly-logarithmic factors in the big-O notation.
**Theorem 4.6** (Rate of convergence, PF scheme).: _In the PF scheme, under conditions 4.1, 4.3 and 4.5, we have \(d_{\mathsf{PF}}\left(\widehat{\phi}_{n},\phi_{0}\right)=\widetilde{O}_{\mathbb{P}}\left(n^{-\frac{\beta}{2\beta+2d}}\right)\)._ **Theorem 4.7** (Rate of convergence, FN scheme).: _In the FN scheme, under conditions 4.2, 4.4 and 4.5, we have \(d_{\mathsf{FN}}\left(\widehat{\psi}_{n},\psi_{0}\right)=\widetilde{O}_{\mathbb{P}}\left(n^{-\frac{\beta}{2\beta+2d+2}}\right)\)._ _Remark 4.8_.: The idea of using the Hellinger distance to measure the convergence rate of sieve MLEs was proposed in Wong and Shen (1995). Obtaining rates under a stronger topology such as \(L_{2}\) is possible if the likelihood function satisfies certain conditions such as the curvature condition Farrell et al. (2021). However, such conditions are in general too stringent for likelihood-based objectives; instead, we use Hellinger convergence, which has minimal requirements. Consequently, our proof strategy is applicable to many other survival models that rely on neural function approximation, such as Rindt et al. (2022), with some modifications to the regularity conditions. For proper choices of metrics in sieve theory, see also the discussion in (Chen, 2007, Chapter 2).

## 5 Experiments

In this section, we assess the empirical performance of NFM. We first conduct synthetic experiments to verify the theoretical convergence guarantees developed in section 4. To further illustrate the empirical efficacy of NFM, we evaluate the predictive performance of NFM over 6 benchmark datasets ranging from small scale to large scale, against state-of-the-art baselines.

### 5.1 Synthetic experiments

We conduct synthetic experiments to validate our proposed theory. The underlying data generating scheme is as follows: first, we generate a 5-dimensional feature \(Z\) whose coordinates are independently sampled from the uniform distribution over the interval \([0,1]\). The (true) conditional hazard function of the event time takes the form of the proportional frailty model (2), with \(h(t)=t\) and \(m(Z)=\sin(\langle Z,\beta\rangle)+\langle\sin(Z),\beta\rangle\), where \(\beta=(0.1,0.2,0.3,0.4,0.5)\). The frailty \(\omega\) is generated according to a gamma distribution with mean and variance equal to 1. We use this generating model to assess the recovery guarantee of both NFM modeling schemes via inspecting the empirical recovery of \(\nu(t,Z)\). For the PF scheme, we have more underlying information about the generating model, and we present an additional assessment regarding the recovery of \(m(Z)\) in appendix D.1. We generate three training datasets of different scales, with \(n\in\{1000,5000,10000\}\). A censoring mechanism is applied such that the censoring ratio is around \(40\%\) for each dataset. The assessment is made on a fixed test sample of 100 hold-out points that are independently drawn from the generating scheme of the event time. We report a more detailed description of the implementation of the data generating scheme and model architectures in appendix C.2. We present the results of our synthetic data experiments in figure 1; the evaluation results suggest that both NFM schemes are capable of approximating complicated nonlinear functions using a moderate amount of data, i.e., \(n\geq 1000\).

Figure 1: Visualizations of synthetic data results under the NFM framework. The plots in the first row compare the empirical estimates of the nonparametric component \(\nu(t,Z)\) against its true value evaluated on 100 hold-out points, under the PF scheme. The plots in the second row are obtained using the FN scheme, with analogous semantics to the first row.
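For concreteness, the generating scheme above admits a simple inverse-transform sampler: since \(h(t)=t\), the conditional cumulative hazard is \(\Lambda(t|Z,\omega)=\omega e^{m(Z)}(e^{t}-1)\), which can be inverted in closed form. The sketch below follows this recipe; the exponential censoring scale is our own guess tuned to roughly \(40\%\) censoring, since the paper's exact mechanism is only described in its appendix C.2.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_pf_gamma(n, beta=np.array([0.1, 0.2, 0.3, 0.4, 0.5])):
    """Draw (T, delta, Z) from the synthetic PF model with h(t) = t,
    m(Z) = sin(Z . beta) + sin(Z) . beta, gamma frailty (mean = var = 1)."""
    Z = rng.uniform(0.0, 1.0, size=(n, 5))
    m = np.sin(Z @ beta) + np.sin(Z) @ beta
    omega = rng.gamma(shape=1.0, scale=1.0, size=n)   # mean 1, variance 1
    # Solve Lambda(T | Z, omega) = omega * exp(m) * (exp(T) - 1) = -log(U).
    U = rng.uniform(size=n)
    event_time = np.log1p(-np.log(U) / (omega * np.exp(m)))
    censor_time = rng.exponential(scale=1.0, size=n)  # assumed mechanism
    T = np.minimum(event_time, censor_time)
    delta = (event_time <= censor_time).astype(int)
    return T, delta, Z
```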
### 5.2 Real-world data experiments

**Datasets** We use five survival datasets and one non-survival dataset for evaluation. The survival datasets include the Molecular Taxonomy of Breast Cancer International Consortium (METABRIC) Curtis et al. [2012], the Rotterdam tumor bank and German Breast Cancer Study Group (RotGBSG) Knaus et al. [1995], the Assay Of Serum Free Light Chain (FLCHAIN) Dispenzieri et al. [2012], the Study to Understand Prognoses Preferences Outcomes and Risks of Treatment (SUPPORT) Knaus et al. [1995], and the Medical Information Mart for Intensive Care (MIMIC-III) Johnson et al. [2016]. For all the survival datasets, the event of interest is defined as mortality after admission. In our experiments, we view METABRIC, RotGBSG, FLCHAIN, and SUPPORT as small-scale datasets and MIMIC-III as a moderate-scale dataset. We additionally use the KKBOX dataset Kvamme et al. [2019] as a large-scale evaluation. In this dataset, an event time is observed if a customer churns from the KKBOX platform. We summarize the basic statistics of all the datasets in table 3. **Baselines** We compare NFM with 12 baselines. The first one is the linear CoxPH model Cox [1972]. Gradient Boosting Machine (GBM) Friedman [2001], Chen and Guestrin [2016] and Random Survival Forests (RSF) Ishwaran et al. [2008] are two tree-based nonparametric survival regression methods. DeepSurv Katzman et al. [2018] and CoxTime Kvamme et al. [2019] are two models that adopt neural variants of the partial likelihood as objectives. SuMo-net Rindt et al. [2022] is a neural variant of NHR. We additionally choose six recent state-of-the-art neural survival models: DeepHit Lee et al. [2018], SurvNode Groha et al. [2020], DeepEH Zhong et al. [2021a], DCM Nagpal et al. [2021], DeSurv Danks and Yau [2022] and SODEN Tang et al. [2022]. Among the chosen baselines, DeepSurv and SuMo-net are viewed as implementations of neural CoxPH and neural NHR, and are therefore of particular interest for the empirical verification of the efficacy of frailty. **Evaluation strategy** We use two standard metrics in survival prediction for evaluating model performance: the integrated Brier score (IBS) and the integrated negative binomial log-likelihood (INBLL). Both metrics are derived from the following quantity: \[\mathcal{S}(\ell,t_{1},t_{2})=\int_{t_{1}}^{t_{2}}\frac{1}{n}\sum_{i=1}^{n}\left[\frac{\ell(0,\widehat{S}(t|Z_{i}))I(T_{i}\leq t,\delta_{i}=1)}{\widehat{S}_{C}(T_{i})}+\frac{\ell(1,\widehat{S}(t|Z_{i}))I(T_{i}>t)}{\widehat{S}_{C}(t)}\right]dt, \tag{11}\] where \(\widehat{S}_{C}(t)\) is an estimate of the survival function \(S_{C}(t)\) of the censoring variable, obtained by the Kaplan-Meier estimate Kaplan and Meier [1958] of the censored observations on the test data, and \(\ell:\{0,1\}\times[0,1]\mapsto\mathbb{R}^{+}\) is some proper loss function for binary classification Gneiting and Raftery [2007]. The IBS metric corresponds to \(\ell\) being the square loss, and the INBLL metric corresponds to \(\ell\) being the negative binomial (Bernoulli) log-likelihood Graf et al. [1999]. Both IBS and INBLL are proper scoring rules if the censoring times and survival times are independent. We additionally report the result of another widely used metric, the concordance index (C-index), in appendix D.
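As a reference point for (11), below is a schematic NumPy computation of the IBS (square-loss) variant using inverse-probability-of-censoring weights. Here `S_hat(t, i)` is a user-supplied estimate of \(\widehat{S}(t|Z_{i})\), tie handling is simplified, and the grid normalization follows the common convention; established implementations such as those in the pycox package should be preferred in practice.

```python
import numpy as np

def km_censoring(T, delta):
    """Kaplan-Meier estimate of the censoring survival S_C (censoring
    'events' are delta == 0). Returns a step-function evaluator."""
    order = np.argsort(T)
    T, cens = T[order], (delta[order] == 0)
    at_risk = len(T) - np.arange(len(T))
    S = np.cumprod(1.0 - cens / at_risk)
    return lambda t: np.concatenate([[1.0], S])[np.searchsorted(T, t, side="right")]

def integrated_brier_score(S_hat, T, delta, grid):
    """IBS version of (11), averaged over the increasing evaluation grid."""
    S_C = km_censoring(T, delta)
    scores = []
    for t in grid:
        pred = np.array([S_hat(t, i) for i in range(len(T))])
        died = (T <= t) & (delta == 1)   # I(T_i <= t, delta_i = 1)
        alive = T > t                    # I(T_i > t)
        w1 = np.where(died, 1.0 / np.maximum(S_C(T), 1e-12), 0.0)
        w2 = np.where(alive, 1.0 / max(S_C(t), 1e-12), 0.0)
        scores.append(np.mean(w1 * pred**2 + w2 * (1.0 - pred)**2))
    return np.trapz(scores, grid) / (grid[-1] - grid[0])
```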
Since all the survival datasets do not have standard train/test splits, we follow previous practice Zhong et al. [2021a] and use 5-fold cross-validation (CV): 1 fold is for testing, and 20% of the rest is held out for validation. In our experiments, we observed that a single random split into 5 folds does not produce stable results for most survival datasets. Therefore we perform 10 different CV runs for each survival dataset and report average metrics as well as their standard deviations. For the KKBOX dataset, we use the standard train/valid/test splits that are available via the pycox package Kvamme et al. (2019) and report results based on 10 trial runs.

\begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline Model & \multicolumn{2}{c}{METABRIC} & \multicolumn{2}{c}{RotGBSG} & \multicolumn{2}{c}{FLCHAIN} & \multicolumn{2}{c}{SUPPORT} \\ \cline{2-9} & IBS & INBLL & IBS & INBLL & IBS & INBLL & IBS & INBLL \\ \hline CoxPH & \(16.46_{\pm 0.90}\) & \(49.57_{\pm 2.66}\) & \(18.25_{\pm 0.44}\) & \(53.76_{\pm 1.11}\) & \(10.05_{\pm 0.38}\) & \(33.18_{\pm 1.16}\) & \(20.54_{\pm 0.38}\) & \(59.58_{\pm 0.86}\) \\ GBM & \(16.61_{\pm 0.82}\) & \(49.87_{\pm 2.44}\) & \(17.83_{\pm 0.44}\) & \(52.78_{\pm 1.11}\) & \(9.98_{\pm 0.37}\) & \(32.88_{\pm 1.05}\) & \(19.18_{\pm 0.30}\) & \(56.46_{\pm 0.10}\) \\ RSF & \(16.62_{\pm 0.64}\) & \(49.61_{\pm 1.54}\) & \(17.89_{\pm 0.42}\) & \(52.77_{\pm 1.01}\) & \(9.96_{\pm 0.37}\) & \(32.92_{\pm 1.05}\) & \(19.11_{\pm 0.40}\) & \(56.28_{\pm 1.00}\) \\ DeepSurv & \(16.55_{\pm 0.93}\) & \(49.85_{\pm 3.02}\) & \(17.80_{\pm 0.49}\) & \(52.62_{\pm 1.25}\) & \(10.09_{\pm 0.38}\) & \(33.28_{\pm 1.15}\) & \(19.20_{\pm 0.41}\) & \(56.48_{\pm 1.08}\) \\ CoxTime & \(16.54_{\pm 0.83}\) & \(49.67_{\pm 2.67}\) & \(17.80_{\pm 0.58}\) & \(52.56_{\pm 1.47}\) & \(10.28_{\pm 0.45}\) & \(34.18_{\pm 1.53}\) & \(19.17_{\pm 0.40}\) & \(56.45_{\pm 1.10}\) \\ DeepHit & \(17.50_{\pm 0.83}\) & \(52.10_{\pm 2.16}\) & \(19.61_{\pm 0.38}\) & \(56.67_{\pm 1.10}\) & \(11.83_{\pm 0.39}\) & \(37.72_{\pm 1.02}\) & \(20.66_{\pm 0.32}\) & \(60.06_{\pm 0.72}\) \\ DeepEH & \(16.56_{\pm 0.65}\) & \(49.42_{\pm 1.53}\) & \(17.62_{\pm 0.52}\) & \(52.08_{\pm 1.27}\) & \(10.11_{\pm 0.47}\) & \(33.30_{\pm 1.10}\) & \(19.30_{\pm 0.39}\) & \(56.67_{\pm 0.94}\) \\ SuMo-net & \(16.49_{\pm 0.83}\) & \(49.74_{\pm 2.11}\) & \(17.77_{\pm 0.47}\) & \(52.62_{\pm 1.11}\) & \(10.07_{\pm 0.40}\) & \(33.20_{\pm 1.10}\) & \(19.40_{\pm 0.38}\) & \(56.87_{\pm 0.96}\) \\ SODEN & \(16.52_{\pm 0.63}\) & \(49.39_{\pm 1.97}\) & \(17.05_{\pm 0.63}\) & \(50.45_{\pm 1.97}\) & \(10.13_{\pm 0.24}\) & \(33.37_{\pm 0.57}\) & \(19.07_{\pm 0.50}\) & \(56.15_{\pm 1.35}\) \\ SurvNode & \(16.67_{\pm 1.32}\) & \(49.73_{\pm 3.89}\) & \(17.42_{\pm 0.53}\) & \(51.70_{\pm 1.16}\) & \(10.40_{\pm 0.29}\) & \(34.37_{\pm 1.03}\) & \(19.58_{\pm 0.34}\) & \(57.49_{\pm 0.84}\) \\ DCM & \(16.58_{\pm 0.87}\) & \(49.48_{\pm 2.23}\) & \(17.66_{\pm 0.54}\) & \(52.26_{\pm 1.23}\) & \(10.13_{\pm 0.50}\) & \(33.04_{\pm 1.38}\) & \(19.29_{\pm 0.42}\) & \(56.68_{\pm 1.09}\) \\ DeSurv & \(16.71_{\pm 0.75}\) & \(49.61_{\pm 1.25}\) & \(17.98_{\pm 0.46}\) & \(53.23_{\pm 1.15}\) & \(10.06_{\pm 0.62}\) & \(33.18_{\pm 1.93}\) & \(19.50_{\pm 0.40}\) & \(57.28_{\pm 0.89}\) \\ \hline **NFM-PF** & \(16.33_{\pm 0.75}\) & \(49.07_{\pm 1.96}\) & \(17.60_{\pm 0.55}\) & \(52.12_{\pm 1.34}\) & \(9.96_{\pm 0.39}\) & \(32.84_{\pm 1.15}\) & \(19.14_{\pm 0\) \\ \hline \hline \end{tabular} \end{table} Table 1: Survival prediction results measured in IBS and INBLL metric (%) on the four small-scale datasets.

**Experimental setup** We follow standard preprocessing strategies Katzman et al. (2018); Kvamme et al.
(2019); Zhong et al. (2021a) that standardize continuous features to zero mean and unit variance and one-hot encode all categorical features. We adopt MLPs with ReLU activations for all function approximators, including \(\widehat{h}\) and \(\widehat{m}\) in the PF scheme and \(\widehat{\nu}\) in the FN scheme, across all datasets, with the number of layers (depth) and the number of hidden units (width) within each layer being tunable. We tune the frailty transform over several standard choices detailed in appendix C.3. We find that the gamma frailty configuration performs reasonably well across all tasks and recommend it as the default choice. A more detailed description of the tuning procedure, as well as training configurations for baseline models, is reported in appendix C.3. **Results** We report experimental results on the small-scale datasets in table 1, and results on the two larger datasets in table 2. The proposed NFM framework achieves the best performance on 5 of the 6 datasets. The improvement over baselines is particularly evident on the METABRIC, SUPPORT, and MIMIC-III datasets. **Benefits of frailty** To better understand the additional benefits of introducing the frailty formulation, we compute the (relative) performance gain of NFM-PF and NFM-FN against their non-frailty counterparts, namely DeepSurv Katzman et al. (2018) and SuMo-net Rindt et al. (2022). The evaluation is conducted for all three metrics mentioned in this paper, and the results are shown in table 5. The results suggest a solid improvement from incorporating frailty, as the relative increase in performance can be over 10% for both NFM models. A more detailed discussion is presented in section D.4.

## 6 Conclusion

In this paper, we make principled explorations of applying the idea of frailty models from modern survival analysis to neural survival regressions. A flexible and scalable framework called NFM is proposed that includes many useful survival models as special cases. Under the framework, we study two derived model architectures both theoretically and empirically. Theoretically, we obtain the rates of convergence of the nonparametric function estimators based on neural function approximation. Empirically, we demonstrate the superior predictive performance of the proposed models through evaluations on several benchmark datasets.
\begin{table} \begin{tabular}{l c c c c} \hline \hline Model & \multicolumn{2}{c}{MIMIC-III} & \multicolumn{2}{c}{KKBOX} \\ \cline{2-5} & IBS & INBLL & IBS & INBLL \\ \hline CoxPH & 20.40\({}_{\pm 0.00}\) & 60.02\({}_{\pm 0.00}\) & 12.60\({}_{\pm 0.00}\) & 39.40\({}_{\pm 0.00}\) \\ GBM & 17.70\({}_{\pm 0.00}\) & 52.30\({}_{\pm 0.00}\) & 11.81\({}_{\pm 0.00}\) & 38.15\({}_{\pm 0.00}\) \\ RSF & 17.79\({}_{\pm 0.19}\) & 53.34\({}_{\pm 0.41}\) & 14.46\({}_{\pm 0.00}\) & 44.39\({}_{\pm 0.00}\) \\ DeepSurv & 15.85\({}_{\pm 0.92}\) & 55.98\({}_{\pm 2.43}\) & 11.31\({}_{\pm 0.05}\) & 35.28\({}_{\pm 0.15}\) \\ CoxTime & 17.68\({}_{\pm 1.36}\) & 52.08\({}_{\pm 0.06}\) & 10.70\({}_{\pm 0.06}\) & 38.10\({}_{\pm 0.21}\) \\ DeepHit & 19.80\({}_{\pm 1.31}\) & 59.03\({}_{\pm 0.20}\) & 16.00\({}_{\pm 0.34}\) & 48.64\({}_{\pm 1.04}\) \\ SuMo-net & 18.62\({}_{\pm 1.23}\) & 54.51\({}_{\pm 2.97}\) & 11.58\({}_{\pm 0.11}\) & 36.61\({}_{\pm 0.28}\) \\ DCM & 18.02\({}_{\pm 0.49}\) & 52.83\({}_{\pm 0.94}\) & 10.71\({}_{\pm 0.11}\) & 33.24\({}_{\pm 0.06}\) \\ DeSurv & 18.19\({}_{\pm 0.65}\) & 54.69\({}_{\pm 2.83}\) & 10.77\({}_{\pm 0.21}\) & 33.22\({}_{\pm 0.10}\) \\ \hline **NFM-PF** & **16.28\({}_{\pm 0.36}\)** & **49.18\({}_{\pm 0.92}\)** & 11.02\({}_{\pm 0.11}\) & 35.10\({}_{\pm 0.22}\) \\ **NFM-FN** & 17.47\({}_{\pm 0.45}\) & 51.48\({}_{\pm 1.23}\) & **10.63\({}_{\pm 0.08}\)** & **32.81\({}_{\pm 0.14}\)** \\ \hline \hline \end{tabular} \end{table} Table 2: Survival prediction results measured in IBS and INBLL metric (%) on the two larger datasets. In each column, the **boldfaced** score denotes the best result and the underlined score represents the second-best result. Two models, SODEN and DeepEH, are not reported, as we found empirically that their computational/memory cost is significantly worse than the rest and we failed to obtain reasonable performance for them on these two datasets.